Science.gov

Sample records for earthquake loss estimation

  1. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment performed by global systems in emergency mode following strong earthquakes. Timely and correct action just after an event can yield significant benefits in saving lives; in this situation, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of the disaster. Uncertainties in the parameters used in the estimation process are numerous and large: knowledge of the physical phenomena and uncertainties in the parameters used to describe them; the overall adequacy of the modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of shaking, etc. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws that are poorly known (if not beyond the reach of engineers for a large portion of the building stock); while a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. How the population inside the buildings at the time of shaking is affected by the physical damage to those buildings is, by far, not precisely known. The paper analyzes the influence of uncertainties in strong event parameters determined by alert seismological surveys and in the simulation models used at all stages, from estimating shaking intensity

  2. Loss estimation of the Mamberamo earthquake

    NASA Astrophysics Data System (ADS)

    Damanik, R.; Sedayo, H.

    2016-05-01

    The tectonics of Papua are dominated by the oblique collision of the Pacific plate along the north side of the island. The very high relative plate motion (about 120 mm/yr) between the Pacific and Papua-Australian plates gives this region a very high earthquake production rate, about twice that of Sumatra, the western margin of Indonesia. Most of the seismicity beneath the island of New Guinea is clustered near the Huon Peninsula, the Mamberamo region, and the Bird's Neck. At 04:41 local time (GMT+9) on July 28th, 2015, a large earthquake of Mw = 7.0 occurred on the West Mamberamo Fault System. The focal mechanisms are dominated by northwest-trending thrusting. A GMPE and ATC vulnerability curves were used to estimate the distribution of damage. The mean estimated loss caused by this earthquake is IDR 78.6 billion. We estimate that the insured loss will be only a small portion of the total, owing to deductibles.
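
    As a rough illustration of the loss chain described above (GMPE, then vulnerability curve, then ground-up and insured loss after a deductible), the following Python sketch uses a generic functional form with placeholder coefficients; neither the GMPE, the curve parameters, nor the exposure values are from the paper.

        import numpy as np
        from scipy.stats import norm

        def pga_gmpe(mw, r_km, c=(-1.0, 0.55, -1.3)):
            """Generic GMPE of the form ln(PGA) = c0 + c1*M + c2*ln(R + 10).
            Placeholder coefficients for illustration, not those of the paper."""
            c0, c1, c2 = c
            return np.exp(c0 + c1 * mw + c2 * np.log(r_km + 10.0))  # PGA in g

        def mean_damage_ratio(pga_g, theta=0.4, beta=0.6):
            """Lognormal vulnerability curve: fraction of value lost at a given PGA."""
            return norm.cdf(np.log(pga_g / theta) / beta)

        # Hypothetical exposure: site distances (km) and replacement values (IDR billions)
        dist = np.array([20.0, 50.0, 120.0])
        value = np.array([40.0, 90.0, 150.0])

        site_loss = mean_damage_ratio(pga_gmpe(7.0, dist)) * value
        ground_up = site_loss.sum()

        # Insured loss after a per-risk deductible of 2% of the insured value
        insured = np.clip(site_loss - 0.02 * value, 0.0, None).sum()
        print(f"ground-up {ground_up:.1f}, insured {insured:.1f} (IDR billions)")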

  3. Building Loss Estimation for Earthquake Insurance Pricing

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Erdik, M.; Sesetyan, K.; Demircioglu, M. B.; Fahjan, Y.; Siyahi, B.

    2005-12-01

    After the 1999 earthquakes in Turkey, several changes took place in the insurance sector. A compulsory earthquake insurance scheme was introduced by the government. The reinsurance companies increased their rates; some even suspended operations in the market. Most importantly, the insurance companies realized the importance of portfolio analysis in shaping their future market strategies. The paper describes an earthquake loss assessment methodology that can be used for insurance pricing and portfolio loss estimation, based on our working experience in the insurance market. The basic ingredients are probabilistic and deterministic regional site-dependent earthquake hazard, a regional building inventory (and/or portfolio), building vulnerabilities associated with typical construction systems in Turkey, and estimates of building replacement costs for different damage levels. Probable maximum losses and average annualized losses are estimated as the result of the analysis. There is a two-level earthquake insurance system in Turkey, the effect of which is incorporated in the algorithm: the national compulsory earthquake insurance scheme and the private earthquake insurance system. To buy private insurance one has to be covered by the national system, which has limited coverage. As a demonstration of the methodology we look at the case of Istanbul and use its building inventory data instead of a portfolio. A state-of-the-art time-dependent earthquake hazard model that portrays the increased earthquake expectancies in Istanbul is used. Intensity- and spectral-displacement-based vulnerability relationships are incorporated in the analysis. In particular we look at the uncertainty in the loss estimates that arises from the vulnerability relationships, and at the effect of the implemented repair cost ratios.
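
    The two quantities named in the abstract, the probable maximum loss (PML) and the average annualized loss (AAL), can both be read off an event-loss table. A minimal sketch, with an entirely hypothetical event set:

        import numpy as np

        # Hypothetical event-loss table: annual occurrence rate and loss per event
        rates = np.array([0.02, 0.01, 0.004, 0.001])   # events per year
        losses = np.array([5e7, 2e8, 1e9, 5e9])        # loss per event

        # Average annualized loss: expected loss per year over the event set
        aal = float(np.sum(rates * losses))

        # Loss exceedance curve: annual rate of exceeding each loss level,
        # accumulated from the largest loss downwards
        order = np.argsort(losses)[::-1]
        exceed_rate = np.cumsum(rates[order])

        # PML at a 475-year return period: smallest loss whose exceedance
        # rate is still at least 1/475 per year
        i = np.searchsorted(exceed_rate, 1.0 / 475.0)
        print(f"AAL = {aal:.3e}, PML(475 yr) = {losses[order][i]:.3e}")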

  4. Regional Earthquake Shaking and Loss Estimation

    NASA Astrophysics Data System (ADS)

    Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR, and ETH Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: (1) finding the most likely location of the source of the earthquake using a regional seismotectonic database and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations; (2) estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear-wave velocity distributions (shake mapping); (4) incorporating strong ground motion and other empirical macroseismic data to improve the shake map; (5) estimating losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (loss mapping). Both the Level 0 (similar to the PAGER system of the USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory databases (Level 1). For given
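
    A minimal sketch of a Level 0-style computation, multiplying the population exposed in each intensity bin by an intensity-casualty rate; the rates below are illustrative placeholders, not the regionally adjusted correlations used in ELER:

        # Hypothetical Level 0-style calculation: casualties from intensity exposure.
        # Rates are illustrative placeholders for a regionally adjusted
        # intensity-casualty correlation, not values from ELER or PAGER.
        population_by_intensity = {"VI": 800_000, "VII": 250_000, "VIII": 60_000, "IX": 8_000}
        fatality_rate = {"VI": 0.00001, "VII": 0.0002, "VIII": 0.003, "IX": 0.02}

        estimated = sum(population_by_intensity[i] * fatality_rate[i]
                        for i in population_by_intensity)
        print(f"Level 0 fatality estimate: {estimated:,.0f}")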

  5. Estimating economic losses from earthquakes using an empirical approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2013-01-01

    We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquake catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although the loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
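
    The structure described above pairs a loss-ratio function of shaking intensity with GDP-based economic exposure scaled by a country-specific multiplicative factor. A sketch of that structure, with placeholder values for the loss-ratio parameters, the GDP figure, and the country factor:

        import numpy as np
        from scipy.stats import norm

        def loss_ratio(mmi, theta=10.0, beta=0.3):
            """Fraction of economic exposure lost at a given shaking intensity;
            lognormal in form, with placeholder (not calibrated) parameters."""
            return norm.cdf(np.log(mmi / theta) / beta)

        # Economic exposure per intensity bin: population x per-capita GDP x alpha,
        # where alpha is the country-specific factor accommodating the disparity
        # between the wealth actually exposed and annual per-capita GDP.
        gdp_per_capita = 4_000.0   # USD, hypothetical
        alpha = 3.0                # hypothetical country factor
        pop_by_mmi = {6: 800_000, 7: 250_000, 8: 60_000, 9: 8_000}

        loss = sum(p * gdp_per_capita * alpha * loss_ratio(m)
                   for m, p in pop_by_mmi.items())
        print(f"estimated economic loss: {loss / 1e6:.0f} million USD")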

  6. Status of developing Earthquake Loss Estimation in Korea Using HAZUS

    NASA Astrophysics Data System (ADS)

    Kang, S. Y.; Kim, K. H.

    2015-12-01

    HAZUS, a tool for estimating losses due to natural hazards, has been used in Korea. In the early development of the earthquake loss estimation system in Korea, a ShakeMap prepared by the USGS for a magnitude 6.7 scenario earthquake in southeastern Korea was used. The attenuation relation proposed by Boore et al. (1997) was assumed to simulate the decay of strong ground motion with distance. During this initial stage, local site characteristics and attenuation relations were not properly accounted for. Later, the attenuation relations proposed by Sadigh et al. (1997) for site classes B, C, and D were reviewed and applied to the Korean Peninsula, and the loss estimates were improved using these relations and the deterministic methods available in HAZUS. Most recently, a site classification map has been derived using geologic and geomorphologic data, which are readily available from the geologic and topographic maps of Korea. Loss estimates using the site classification map differ from earlier ones; for example, the earthquake loss based on the ShakeMap overestimates damage to houses: 43% of houses are estimated to experience moderate or severe damage using the ShakeMap, compared with 23% using the site classification map. The number of people seeking emergency shelter also differs from previous estimates. The revised estimates are considered more realistic because the ground motions ensuing from earthquakes are better represented. Next, landslide, liquefaction, and fault information are planned to be implemented in HAZUS. The result is expected to better represent losses in an emergency and thus help in planning disaster response and hazard mitigation.
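
    The Boore et al. (1997) relation mentioned above has the functional form ln(Y) = b1 + b2(M - 6) + b3(M - 6)^2 + b5 ln(r) + bV ln(Vs30/VA), with r = sqrt(Rjb^2 + h^2). The sketch below uses that form with rough, illustrative PGA-like coefficients (consult the published tables for real values) and shows how a softer site, i.e. a lower Vs30, amplifies the predicted motion:

        import numpy as np

        def ln_y_bjf97_form(mw, rjb_km, vs30,
                            b=(-0.31, 0.53, 0.0, -0.78, -0.37), h=5.6, va=1400.0):
            """Functional form of Boore et al. (1997):
            ln(Y) = b1 + b2(M-6) + b3(M-6)^2 + b5 ln(r) + bV ln(Vs30/VA),
            r = sqrt(Rjb^2 + h^2). Coefficients here are rough, illustrative
            values; take real ones from the published tables."""
            b1, b2, b3, b5, bv = b
            r = np.sqrt(rjb_km ** 2 + h ** 2)
            return (b1 + b2 * (mw - 6) + b3 * (mw - 6) ** 2
                    + b5 * np.log(r) + bv * np.log(vs30 / va))

        # A softer site (lower Vs30) yields stronger predicted motion
        for vs30 in (1000.0, 250.0):  # roughly site class B vs. D
            print(vs30, float(np.exp(ln_y_bjf97_form(6.7, 20.0, vs30))))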

  7. Estimating annualized earthquake losses for the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Bausch, Douglas; Chen, Rui; Bouabid, Jawhar; Seligson, Hope

    2015-01-01

    We make use of the most recent National Seismic Hazard Maps (the 2008 and 2014 cycles), updated census data on population, and economic exposure estimates of the general building stock to quantify annualized earthquake loss (AEL) for the conterminous United States. The AEL analyses were performed using the Federal Emergency Management Agency's (FEMA) Hazus software, which facilitated a systematic comparison of the influence of the 2014 National Seismic Hazard Maps on annualized loss estimates in different parts of the country. The losses from an individual earthquake could easily exceed many tens of billions of dollars, and the long-term averaged value of losses from all earthquakes within the conterminous U.S. has been estimated to be a few billion dollars per year. This study estimated nationwide losses to be approximately $4.5 billion per year (in 2012 dollars), roughly 80% of which can be attributed to the states of California, Oregon, and Washington. We document the change in estimated AELs arising solely from the change in the assumed hazard map. The change from the 2008 map to the 2014 map results in a 10 to 20% reduction in AELs for the highly seismic states of the western United States, whereas the reduction is even more significant for the central and eastern United States.
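
    Conceptually, an AEL is the expected loss per year obtained by integrating a damage function against the annual rate density of ground motion. A minimal numerical sketch with a hypothetical hazard curve and damage function (no relation to the Hazus internals):

        import numpy as np
        from scipy.stats import norm

        # Hypothetical site hazard curve: annual rate of exceeding each PGA
        pga = np.linspace(0.05, 2.0, 200)              # g
        annual_exceed = 0.02 * (pga / 0.1) ** -2.0

        # Hypothetical damage function and exposed value
        mdr = norm.cdf(np.log(pga / 0.6) / 0.5)        # mean damage ratio
        value = 1.0e9                                  # building stock value, USD

        # AEL = integral of MDR x value against the annual rate density of PGA
        rate_density = -np.gradient(annual_exceed, pga)
        ael = np.trapz(mdr * rate_density, pga) * value
        print(f"AEL ~ {ael / 1e6:.1f} million USD per year")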

  8. Property loss estimation for wind and earthquake perils.

    PubMed

    Chandler, A M; Jones, E J; Patel, M H

    2001-04-01

    This article describes the development of a generic loss assessment methodology, applicable to earthquake and windstorm perils worldwide. The latest information regarding hazard estimation is first integrated with the parameters that best describe the intensity of the action of both windstorms and earthquakes on building structures, for events with defined average return periods or recurrence intervals. The subsequent evaluation of building vulnerability (damageability) under earthquake and windstorm loadings uses information on damage and loss from past events, along with an assessment of the key building properties (including age and quality of design and construction), to assess the ability of buildings to withstand such loadings and hence to assign a building type to the particular risk or portfolio of risks. This predicted damage information is then translated into risk-specific mathematical vulnerability functions, which enable numerical evaluation of the probability of building damage arising at various defined levels. By assigning cost factors to the defined damage levels, the total loss at a given level of hazard may be computed. The methodology is universal in the sense that it may be applied successfully to buildings situated in a variety of earthquake and windstorm environments, ranging from very low to extreme levels of hazard. As a loss prediction tool, it enables accurate estimation of losses from potential scenario events linked to defined return periods and, hence, can greatly assist risk assessment and planning.
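
    The final step described above, combining damage-level probabilities with cost factors, reduces to a short expected-value calculation. All numbers below are hypothetical:

        # Expected loss from discrete damage levels: P(level | hazard) x cost factor.
        # Probabilities and cost factors below are hypothetical.
        p_level = [0.45, 0.30, 0.15, 0.07, 0.03]       # none .. complete damage
        cost_factor = [0.00, 0.02, 0.10, 0.50, 1.00]   # repair cost / replacement cost

        replacement_value = 250_000.0                  # per building
        expected_loss = replacement_value * sum(
            p * c for p, c in zip(p_level, cost_factor))
        print(f"expected loss per building: {expected_loss:,.0f}")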

  9. Global Building Inventory for Earthquake Loss Estimation and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David; Porter, Keith

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature.

  10. Earthquake Loss Estimates in Near Real-Time

    NASA Astrophysics Data System (ADS)

    Wyss, Max; Wang, Rongjiang; Zschau, Jochen; Xia, Ye

    2006-10-01

    The usefulness to rescue teams of near-real-time loss estimates after major earthquakes is advancing rapidly. The difference in the quality of data available in highly developed compared with developing countries dictates that different approaches be used to maximize mitigation efforts. In developed countries, extensive information from tax and insurance records, together with accurate census figures, furnishes detailed data on the fragility of buildings and on the number of people at risk. For example, these data are exploited by the loss estimation method used in the Hazards U.S. Multi-Hazard (HAZUS-MH) software program (http://www.fema.gov/plan/prevent/hazus/). However, in developing countries, the population at risk is estimated from inferior data sources and the fragility of the building stock often is derived empirically, using past disastrous earthquakes for calibration [Wyss, 2004].

  11. Earthquake loss estimates in real time begin to assist rescue teams worldwide

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    Recent advances are improving the speed and accuracy of loss estimates immediately after earthquakes, so that injured people may be rescued more efficiently. After major and large earthquakes, rescue agencies and civil defense managers rapidly need quantitative estimates of the extent of the potential disaster, at a time when data from the affected area may not yet have reached the outside world. Loss estimates for hypothetical future earthquakes are also reaching a level where they are useful for motivating and planning earthquake disaster mitigation. In many developing countries, urbanization and population are increasing at an unprecedented pace, so the extent of future earthquake disasters cannot easily be estimated from historical experience that typically dates from a hundred years ago. Even for order-of-magnitude estimates of future losses, it is necessary to include information on the current quality of buildings, the soil properties, and the present population.

  12. Spatial modeling for estimation of earthquakes economic loss in West Java

    NASA Astrophysics Data System (ADS)

    Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma

    2017-07-01

    Indonesia is highly vulnerable to earthquakes, and its low adaptive capacity can turn an earthquake into a disaster. Risk management should therefore be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active faults: the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and also the Megathrust subduction zone. This research estimates earthquake economic losses from several sources in West Java. The economic loss is calculated using the HAZUS method, whose components are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier to grasp through distribution maps rather than tabular data alone. As a result, West Java could suffer an economic loss of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703 IDR, estimated from six earthquake sources at their maximum possible magnitudes. However, this estimate corresponds to worst-case earthquake occurrences and is probably an overestimate.

  13. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WG02). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined by the WG02. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions.

  14. A global building inventory for earthquake loss estimation and risk management

    USGS Publications Warehouse

    Jaiswal, K.; Wald, D.; Porter, K.

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature. © 2010, Earthquake Engineering Research Institute.

  15. Comparing population exposure to multiple Washington earthquake scenarios for prioritizing loss estimation studies

    USGS Publications Warehouse

    Wood, Nathan J.; Ratliff, Jamie L.; Schelling, John; Weaver, Craig S.

    2014-01-01

    Scenario-based loss-estimation studies are useful for gauging potential societal impacts from earthquakes but can be challenging to undertake in areas with multiple scenarios and jurisdictions. We present a geospatial approach using various population data for comparing earthquake scenarios and jurisdictions to help emergency managers prioritize where to focus limited resources on data development and loss-estimation studies. Using 20 earthquake scenarios developed for the State of Washington (USA), we demonstrate how a population-exposure analysis across multiple jurisdictions based on Modified Mercalli Intensity (MMI) classes helps emergency managers understand and communicate where potential loss of life may be concentrated and where impacts may relate more to quality of life. Results indicate that certain well-known scenarios may directly impact the greatest number of people, whereas other, potentially lesser-known scenarios impact fewer people but with potentially more severe consequences. The use of economic data to profile each jurisdiction's workforce in earthquake hazard zones also provides additional insight into at-risk populations. This approach can serve as a first step in understanding the societal impacts of earthquakes and helping practitioners use their limited risk-reduction resources efficiently.
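
    A population-exposure comparison of this kind is essentially a grouped aggregation over (scenario, jurisdiction, MMI) records. A small sketch with invented numbers, flagging MMI VIII+ exposure separately as a life-safety proxy:

        import pandas as pd

        # Hypothetical exposure rows: one per (scenario, jurisdiction, MMI class)
        df = pd.DataFrame({
            "scenario":     ["Seattle M7.2", "Seattle M7.2", "Cascadia M9.0", "Cascadia M9.0"],
            "jurisdiction": ["King Co.", "Pierce Co.", "King Co.", "Grays Harbor Co."],
            "mmi":          ["VIII", "VII", "VI", "IX"],
            "population":   [350_000, 120_000, 900_000, 25_000],
        })

        # Population in MMI VIII+ as a rough life-safety proxy; lower MMI
        # exposure relates more to quality-of-life impacts
        df["severe_pop"] = df["population"].where(df["mmi"].isin({"VIII", "IX", "X"}), 0)

        print(df.groupby("scenario")[["population", "severe_pop"]].sum())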

  16. A new Tool for Estimating Losses due to Earthquakes: QUAKELOSS2

    NASA Astrophysics Data System (ADS)

    Kaestli, P.; Wyss, M.; Bonjour, C.; Wiemer, S.; Wyss, B. M.

    2007-12-01

    WAPMERR and the Swiss Seismological Service are developing new software for estimating mean damage to buildings and the numbers of injured and fatalities due to earthquakes worldwide. The focus for applications is real-time estimates of losses after earthquakes in countries without dense seismograph networks, with results that are easy to digest by relief agencies. Therefore, the standard version of the software addresses losses by settlement, subdivisions of settlements, and important pieces of infrastructure. However, a generic design, an open-source policy, and well-defined interfaces will allow the software to work on any gridded or discrete building stock data, to run Monte Carlo simulations for error assessment, and to plug in source models more elaborate than simple point and line sources, and thus to compute realistic loss scenarios as well as probabilistic risk maps. It will provide interfaces to ShakeMap and PAGER, such that innovations developed for the latter programs may be used in QUAKELOSS2, and vice versa. A client-server design will provide a front-end web interface through which the user may directly manage servers as well as run the software in one's own laboratory. The input-output features and mapping will be designed to allow the user to run QUAKELOSS2 remotely with basic functions, as well as in a laboratory setting with a full-featured GIS setup for additional analysis. In many cases the input data (earthquake parameters as well as population and building stock data) are poorly known for developing countries. Calibration of loss estimates, using past damaging earthquakes and WAPMERR's four years' experience of estimating losses, will help to produce approximately correct results in countries with strong earthquake activity. A worldwide standard dataset on population and building stock will be provided as open source together with the software. The dataset will be improved successively, based on input from satellite images

  17. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    USGS Publications Warehouse

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the ongoing development of PAGER's loss estimation models and discuss value-added web content that can be generated related to exposure, damage, and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in the underlying building stock and vulnerability data at a global scale. Existing efforts with GEM's GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER's empirical model is less well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  18. Hazus® estimated annualized earthquake losses for the United States

    USGS Publications Warehouse

    Jaiswal, Kishor; Bausch, Doug; Rozelle, Jesse; Holub, John; McGowan, Sean

    2017-01-01

    Large earthquakes can cause social and economic disruption that is unprecedented for a given community, and full recovery from these impacts may not always be achievable. In the United States (U.S.), the 1994 M6.7 Northridge earthquake in California remains the third costliest disaster in U.S. history, and it was one of the most expensive disasters for the federal government. Internationally, earthquakes in the last decade alone have claimed tens of thousands of lives and caused hundreds of billions of dollars of economic impact throughout the globe (~90 billion U.S. dollars (USD) from the 2008 M7.9 Wenchuan, China, earthquake; ~20 billion USD from the 2010 M8.8 Maule, Chile, earthquake; ~220 billion USD from the 2011 M9.0 Tohoku, Japan, earthquake; ~25 billion USD from the 2011 M6.3 Christchurch, New Zealand, earthquake; and ~22 billion USD from the 2016 M7.0 Kumamoto, Japan, earthquake). Recent earthquakes show a pattern of steadily increasing damages and losses, due primarily to three key factors: (1) significant growth in earthquake-prone urban areas, (2) the vulnerability of the older building stock, including poorly engineered non-ductile concrete buildings, and (3) increased interdependency of supply and demand for businesses that operate among different parts of the world. In the United States, earthquake risk continues to grow with increased exposure of population and development, even though the earthquake hazard has remained relatively stable except in regions of induced seismic activity. Understanding the seismic hazard requires studying earthquake characteristics and the locales in which they occur, while understanding the risk requires an assessment of the potential damage from earthquake shaking to the built environment and to the welfare of people, especially in high-risk areas. Estimating the varying degree of earthquake risk throughout the United States is critical for informed decision-making on mitigation policies, priorities, strategies, and funding levels in the

  19. Estimating earthquake potential

    USGS Publications Warehouse

    Page, R.A.

    1980-01-01

    The hazards to life and property from earthquakes can be minimized in three ways. First, structures can be designed and built to resist the effects of earthquakes. Second, the location of structures and human activities can be chosen to avoid or to limit the use of areas known to be subject to serious earthquake hazards. Third, preparations for an earthquake in response to a prediction or warning can reduce the loss of life and damage to property as well as promote a rapid recovery from the disaster. The success of the first two strategies, earthquake engineering and land use planning, depends on being able to reliably estimate the earthquake potential. The key considerations in defining the potential of a region are the location, size, and character of future earthquakes and frequency of their occurrence. Both historic seismicity of the region and the geologic record are considered in evaluating earthquake potential. 

  20. Loss estimates for a Puente Hills blind-thrust earthquake in Los Angeles, California

    USGS Publications Warehouse

    Field, E.H.; Seligson, H.A.; Gupta, N.; Gupta, V.; Jordan, T.H.; Campbell, K.W.

    2005-01-01

    Based on OpenSHA and HAZUS-MH, we present loss estimates for an earthquake rupture on the recently identified Puente Hills blind-thrust fault beneath Los Angeles. Given a range of possible magnitudes and ground motion models, and presuming a full fault rupture, we estimate the total economic loss to be between $82 and $252 billion. This range is not only considerably higher than a previous estimate of $69 billion, but also implies the event would be the costliest disaster in U.S. history. The analysis has also provided the following predictions: 3,000-18,000 fatalities, 142,000-735,000 displaced households, 42,000-211,000 in need of short-term public shelter, and 30,000-99,000 tons of debris generated. Finally, we show that the choice of ground motion model can be more influential than the earthquake magnitude, and that reducing this epistemic uncertainty (e.g., via model improvement and/or rejection) could reduce the uncertainty of the loss estimates by up to a factor of two. We note that a full Puente Hills fault rupture is a rare event (once every ~3,000 years), and that other seismic sources pose significant risk as well. © 2005, Earthquake Engineering Research Institute.

  1. Regional earthquake loss estimation in the Autonomous Province of Bolzano - South Tyrol (Italy)

    NASA Astrophysics Data System (ADS)

    Huttenlau, Matthias; Winter, Benjamin

    2013-04-01

    Besides storm events, geophysical events cause the majority of natural hazard losses on a global scale. However, in alpine regions with moderate earthquake risk, such as the study area, where earthquakes are correspondingly absent from the collective memory, this source of risk is often neglected in contrast to gravitational and hydrological hazard processes. In this context, the comparative analysis of potential disasters and emergencies at the national level in Switzerland (the Katarisk study) has shown that earthquakes are in general the most serious source of risk. In order to estimate the potential losses of earthquake events for different return periods and the loss dimensions of extreme events, the following study was conducted in the Autonomous Province of Bolzano - South Tyrol (Italy). The applied methodology follows the generally accepted risk concept based on the components hazard, elements at risk, and vulnerability, whereby risk is not defined holistically (direct, indirect, tangible and intangible) but through the risk category of losses to buildings and inventory as a general risk proxy. The hazard analysis is based on a regional macroseismic scenario approach, in which the settlement centre of each of the 116 communities is defined as a potential epicentre. For each epicentre, four epicentral scenarios (return periods of 98, 475, 975 and 2475 years) are calculated based on the simple but approved and generally accepted attenuation law of Sponheuer (1960). The relevant input parameters for the epicentral scenarios are (i) the macroseismic intensity and (ii) the focal depth. The macroseismic intensities are based on a probabilistic seismic hazard analysis (PSHA) of the Italian earthquake catalogue at the community level (Dipartimento della Protezione Civile). The relevant focal depths are taken as a mean within a defined buffer from the focal depths of the harmonized earthquake catalogues of Italy and Switzerland as well as
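
    The Sponheuer (1960) attenuation law referred to above is usually written, following Koevesligethy, as I = I0 - 3 log10(r/h) - 3 alpha (r - h) log10(e), with hypocentral distance r = sqrt(D^2 + h^2), focal depth h, and absorption coefficient alpha. A sketch with illustrative h and alpha values (not those used in the study):

        import numpy as np

        def intensity_sponheuer(i0, epi_dist_km, h_km=10.0, alpha=0.002):
            """I = I0 - 3 log10(r/h) - 3 alpha (r - h) log10(e), r = sqrt(D^2 + h^2).
            h (focal depth, km) and alpha (absorption, 1/km) are illustrative."""
            r = np.sqrt(epi_dist_km ** 2 + h_km ** 2)
            return i0 - 3.0 * np.log10(r / h_km) - 3.0 * alpha * (r - h_km) * np.log10(np.e)

        for d in (0, 10, 25, 50, 100):   # epicentral distances, km
            print(d, round(float(intensity_sponheuer(8.0, d)), 1))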

  2. Ways to increase the reliability of earthquake loss estimations in emergency mode

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander

    2016-04-01

    The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many other countries show that the authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can yield significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of the rough and rapid information provided in emergency mode by "global systems" (i.e., systems operated without consideration of where the earthquake has occurred) depends strongly on many factors related to the input data and the simulation models used in such systems. The paper analyses the contributions of the different factors to the total "error" of fatality estimation in emergency mode. Examples from four strong events, in Nepal, Italy, China, and again Italy, lead to the conclusion that the reliability of loss estimates is influenced first of all by the uncertainties in the determination of event parameters (coordinates, magnitude, source depth); this factor group ranks highest, with a degree of influence on the reliability of loss estimates of about 50%. Second comes the factor group responsible for simulation of the macroseismic field, whose errors contribute about 30%. Last comes the factor group describing the built environment distribution and the regional vulnerability functions, which contributes about 20% of the error in the loss estimates. Ways to minimize the influence of the different factors on the reliability of loss assessment in near real time are proposed. The first is to rate seismological surveys for different zones in an attempt to decrease uncertainties in the determination of earthquake input parameters in emergency mode. The second is to "calibrate" the "global systems" drawing advantage

  3. Annualized earthquake loss estimates for California and their sensitivity to site amplification

    USGS Publications Warehouse

    Chen, Rui; Jaiswal, Kishor; Bausch, D; Seligson, H; Wills, C.J.

    2016-01-01

    Input datasets for annualized earthquake loss (AEL) estimation for California were updated recently by the scientific community, and include the National Seismic Hazard Model (NSHM), site-response model, and estimates of shear-wave velocity. Additionally, the Federal Emergency Management Agency's loss estimation tool, Hazus, was updated to include the most recent census and economic exposure data. These enhancements necessitated a revisit to our previous AEL estimates and a study of the sensitivity of AEL estimates subject to alternate inputs for site amplification. The NSHM ground motions for a uniform site condition are modified to account for the effect of local near-surface geology. The site conditions are approximated in three ways: (1) by VS30 (time-averaged shear-wave velocity in the upper 30 m) values obtained from a geology- and topography-based map consisting of 15 VS30 groups, (2) by site classes categorized according to the National Earthquake Hazards Reduction Program (NEHRP) site classification, and (3) by a uniform NEHRP site class D. In case 1, ground motions are amplified using the Seyhan and Stewart (2014) semiempirical nonlinear amplification model. In cases 2 and 3, ground motions are amplified using the 2014 version of the NEHRP site amplification factors, which are also based on the Seyhan and Stewart model but are approximated to facilitate their use in building code applications. Estimated AELs are presented at multiple resolutions, starting with a state-level assessment and followed by detailed assessments for counties, metropolitan statistical areas (MSAs), and cities. The state-level AEL estimate is ~$3.7 billion, 70% of which is contributed by the Los Angeles–Long Beach–Santa Ana, San Francisco–Oakland–Fremont, and Riverside–San Bernardino–Ontario MSAs. The statewide AEL estimate is insensitive to alternate assumptions of site amplification. However, we note significant differences in AEL estimates

  4. Seismic Site Characterizations and Earthquake Loss Estimation Analyses for K-12 Schools in Washington State

    NASA Astrophysics Data System (ADS)

    Cakir, R.; Walsh, T. J.; Hayashi, K.; Norman, D. K.; Lau, T.; Scott, S.

    2016-12-01

    Washington State has the second-highest earthquake risk in the U.S. after California; major earthquakes in western Washington in 1946, 1949, 1965, and 2001 killed 15 people and caused billions of dollars' worth of property damage, and school buildings have not been exempt. The mission of the Washington Department of Natural Resources, Division of Geology and Earth Resources, is to "reduce or eliminate risks to life and property from natural hazards." We conducted active and passive seismic surveys, estimated shear-wave velocity (Vs) profiles, and determined NEHRP soil classifications from calculated Vs30 values at public schools in Thurston, Grays Harbor, Walla Walla, Chelan, and Okanogan counties, Washington. The surveys comprised 1D and 2D MASW and MAM, P- and S-wave refraction, horizontal-to-vertical spectral ratio (H/V), and two-station SPAC (2ST-SPAC) measurements of Vs and Vp at shallow depths (0-70 m) and of Vs at greater depths (10-500 m or 10-3,000 m). We then ran Ground Penetrating Radar (GPR) surveys along each seismic line to check for horizontal subsurface variations between the survey line and the actual location of the school buildings. The survey results were used to calculate Vs30 and determine the NEHRP site classifications at the school sites; the site classes were also used to determine soil amplification effects on the ground motions used in structural damage estimation for the school buildings. The seismic site characterization results, combined with structural engineering evaluations, were then used as inputs to FEMA's Hazus Advanced Engineering Building Module (AEBM) analysis to estimate casualties and nonstructural and structural losses. The final AEBM loss estimates, along with the more detailed structural evaluations, will help school districts assess the earthquake performance of school buildings in order to
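
    Vs30 is the time-averaged shear-wave velocity over the top 30 m, Vs30 = 30 / sum(d_i / v_i), and the NEHRP class follows from standard Vs30 bounds. A sketch using a made-up layered profile of the kind an MASW/MAM inversion yields:

        def vs30_from_profile(thicknesses_m, velocities_ms):
            """Time-averaged Vs over the top 30 m: Vs30 = 30 / sum(d_i / v_i),
            truncating the profile at 30 m depth."""
            depth = travel_time = 0.0
            for d, v in zip(thicknesses_m, velocities_ms):
                use = min(d, 30.0 - depth)
                if use <= 0.0:
                    break
                travel_time += use / v
                depth += use
            return 30.0 / travel_time

        def nehrp_class(vs30):
            """NEHRP site class from Vs30 (m/s): E < 180, D 180-360, C 360-760,
            B 760-1500, A > 1500."""
            for limit, label in [(180.0, "E"), (360.0, "D"), (760.0, "C"), (1500.0, "B")]:
                if vs30 < limit:
                    return label
            return "A"

        # Hypothetical layered profile (thicknesses in m, Vs in m/s)
        vs30 = vs30_from_profile([5.0, 10.0, 20.0], [150.0, 300.0, 600.0])
        print(round(vs30), nehrp_class(vs30))   # ~327 m/s -> class D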

  5. Seismic microzonation of the city of Elche (Spain) for earthquake loss estimation

    NASA Astrophysics Data System (ADS)

    Agea-Medina, Noelia; Galiana-Merino, Juan Jose; Navarro, Manuel; Molina-Palacios, Sergio; Rosa-Herranz, Julio; Soler-Llorens, Juan Luis

    2017-04-01

    The town of Elche is located in the SE of Alicante province (southeastern Spain), one of the most seismically hazardous parts of the country. The current seismic code assigns the city a PGA value of 0.20g (return period of 475 years); the provincial maximum is 0.23g in Jacarilla (Alicante). The urban area comprises more than 20,000 buildings, many of them constructed without seismic considerations. A correct seismic microzonation therefore lets us establish the shear-wave velocities, predominant periods, and dispersion curves needed to accurately compute ground motion scenarios in the city for earthquake loss estimation (ELE). We have tested several techniques, multichannel analysis of surface waves (MASW) and spatial autocorrelation (SPAC), and calibrated the results with geotechnical information. The dispersion curves were obtained in different wavelength ranges, and a 1-D Vs model was computed from each final dispersion curve using an iterative process. Additionally, a map of predominant periods has been obtained for the city. The sensitivity of the results to the techniques and recording instruments used has been analysed, and their influence on computed earthquake damage has been addressed.

  6. Impact of Uncertainty on Loss Estimates for a Repeat of the 1908 Messina-Reggio Calabria Earthquake in Southern Italy

    SciTech Connect

    Franco, Guillermo; Shen-Tu, Bing Ming; Bazzurro, Paolo; Goretti, Agostino; Valensise, Gianluca

    2008-07-08

    Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive usage of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. Here the potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature that encompass 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates that allows the model user to estimate the impact of the uncertainty on input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million Euros. The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, reaching approximately 45%. The choice of ground motion prediction equations and vulnerability functions of the building stock contribute the most to the uncertainty in loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.
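
    The 384 loss scenarios arise as the Cartesian product 12 x 2 x 16 of the modeling alternatives. A sketch of the sensitivity loop, with a random stand-in for the catastrophe-model run (tuned so the mean and coefficient of variation land near the values quoted above):

        import itertools
        import random
        import statistics

        random.seed(1)

        # Alternatives mirroring the study design: 12 event scenarios,
        # 2 ground motion prediction equations, 16 damage-function sets
        events = [f"event_{i}" for i in range(12)]
        gmpes = ["gmpe_A", "gmpe_B"]
        damage_sets = [f"dmg_{i}" for i in range(16)]

        def loss_model(event, gmpe, dmg):
            """Random stand-in for a full catastrophe-model run (MEuro)."""
            return random.lognormvariate(9.1, 0.45)

        losses = [loss_model(*combo)
                  for combo in itertools.product(events, gmpes, damage_sets)]
        assert len(losses) == 384

        mean = statistics.fmean(losses)
        cov = statistics.stdev(losses) / mean
        print(f"mean {mean:.0f} MEuro, coefficient of variation {cov:.2f}")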

  7. Impact of Uncertainty on Loss Estimates for a Repeat of the 1908 Messina-Reggio Calabria Earthquake in Southern Italy

    NASA Astrophysics Data System (ADS)

    Franco, Guillermo; Shen-Tu, BingMing; Goretti, Agostino; Bazzurro, Paolo; Valensise, Gianluca

    2008-07-01

    Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive usage of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. Here the potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature that encompass 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates that allows the model user to estimate the impact of the uncertainty on input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million Euros. The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, reaching approximately 45%. The choice of ground motion prediction equations and vulnerability functions of the building stock contribute the most to the uncertainty in loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.

  8. Observed and estimated economic losses in Guadeloupe (French Antilles) after Les Saintes Earthquake (2004). Application to risk comparison

    NASA Astrophysics Data System (ADS)

    Monfort, Daniel; Reveillère, Arnaud; Lecacheux, Sophie; Muller, Héloise; Grisanti, Ludovic; Baills, Audrey; Bertil, Didier; Sedan, Olivier; Tinard, Pierre

    2013-04-01

    The main objective of this work is to compare the potential direct economic losses from two different hazards in Guadeloupe (French Antilles), earthquakes and storm surges, for different return periods. To validate some of the hypotheses made concerning building typologies and their insured values, estimated economic losses are compared with real loss data for an actual event. In 2004, an Mw 6.3 earthquake struck Les Saintes, a small archipelago in the south of Guadeloupe. The heaviest intensity was VIII in the municipalities of Les Saintes, decreasing from VII to IV across the other municipalities of Guadeloupe. The CCR, the French reinsurance organism, provided the total insured economic losses estimated per municipality (as of 2011) and the insurance penetration ratio, that is, the ratio of insured exposed elements per municipality. Other information about observed damaged structures is quite irregular over the archipelago, the only reliable dataset being the observed macroseismic intensity per municipality (from the field survey done by BCSF). These data at Guadeloupe's scale were compared with the results of a retrospective damage scenario for this earthquake, computed from the vulnerability of current buildings and the mean economic value of each building type, and taking into account local amplification effects on earthquake ground motion. In general the results are quite similar, but with some significant differences. The scenario results correlate strongly with the spatial attenuation of earthquake intensity: the heaviest economic losses are concentrated in the municipalities exposed to considerable, damaging intensities (VII to VIII). On the other side, the CCR data show that heavy economic damages are located not only in the most impacted cities but also in the municipalities of the archipelago that are most important in terms of economic activity

  9. Integrating landslide and liquefaction hazard and loss estimates with existing USGS real-time earthquake information products

    USGS Publications Warehouse

    Tanyas, Hakan

    2017-01-01

    The U.S. Geological Survey (USGS) has made significant progress toward the rapid estimation of shaking and shaking-related losses through their Did You Feel It? (DYFI), ShakeMap, ShakeCast, and PAGER products. However, quantitative estimates of the extent and severity of secondary hazards (e.g., landsliding, liquefaction) are not currently included in scenarios and real-time post-earthquake products despite their significant contributions to hazard and losses for many events worldwide. We are currently running parallel global statistical models for landslides and liquefaction, developed with our collaborators, in testing mode, but much work remains in order to operationalize these systems. We are expanding our efforts in this area not only by improving the existing statistical models, but also by (1) exploring more sophisticated, physics-based models where feasible; (2) incorporating uncertainties; and (3) identifying and undertaking research and product development to provide useful landslide and liquefaction estimates and their uncertainties. Although our existing models use standard predictor variables that are accessible globally or regionally, including peak ground motions, topographic slope, and distance to water bodies, we continue to explore readily available proxies for rock and soil strength as well as other susceptibility terms. This work is based on the foundation of an expanding, openly available case-history database we are compiling, along with historical ShakeMaps for each event. The expected outcome of our efforts is a robust set of real-time secondary-hazard products that meet the needs of a wide variety of earthquake information users. We describe the available datasets and models, developments currently underway, and anticipated products.

  10. Planning a Preliminary program for Earthquake Loss Estimation and Emergency Operation by Three-dimensional Structural Model of Active Faults

    NASA Astrophysics Data System (ADS)

    Ke, M. C.

    2015-12-01

    Large earthquakes often cause serious economic losses and many deaths, and the magnitude, time, and location of earthquakes still cannot be predicted. Pre-disaster risk modeling and post-disaster operations are therefore important for reducing earthquake damage. To understand earthquake disaster risk, earthquake scenarios are usually built using simulation techniques, with point sources, fault-line sources, or fault-plane sources as the scenario seismic source models. The assessments made from these different models for risk analysis and emergency operations perform adequately, but their accuracy can still be improved. This program brings together experts and scholars from Taiwan University, National Central University, and National Cheng Kung University, and uses historical earthquake records, geological data, and geophysical data to build three-dimensional structural models of the subsurface planes of active faults. The purpose is to replace projected fault planes with subsurface fault planes that are closer to reality; this database can improve the accuracy of earthquake prevention analyses. The three-dimensional data will then be applied at different stages of disaster prevention: before a disaster, earthquake risk analyses based on the three-dimensional fault-plane data are closer to the real damage; during a disaster, the fault-plane data can help infer the aftershock distribution and the areas of serious damage. In 2015, the program used 14 geological profiles to build three-dimensional models of the Hsinchu and Hsincheng faults. Models of other active faults will be completed in 2018 and applied in earthquake disaster prevention.

  11. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR, and ETH Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: (1) finding the most likely location of the source of the earthquake using a regional seismotectonic database and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations; (2) estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear-wave velocity distributions (shake mapping); (4) incorporating strong ground motion and other empirical macroseismic data to improve the shake map; (5) estimating losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (loss mapping). The Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequent human casualties, and macroeconomic loss quantifiers) in urban areas. The basic shake mapping is similar to the Level 0 and Level 1 analyses; however, options are available for more sophisticated treatment of site response through externally entered data and improvement of the shake map through incorporation

  12. Too generous to a fault? Is reliable earthquake safety a lost art? Errors in expected human losses due to incorrect seismic hazard estimates

    NASA Astrophysics Data System (ADS)

    Bela, James

    2014-11-01

    "One is well advised, when traveling to a new territory, to take a good map and then to check the map with the actual territory during the journey." In just such a reality check, Global Seismic Hazard Assessment Program (GSHAP) maps (prepared using PSHA) portrayed a "low seismic hazard," which was then also assumed to be the "risk to which the populations were exposed." But time-after-time-after-time the actual earthquakes that occurred were not only "surprises" (many times larger than those implied on the maps), but they were often near the maximum potential size (Maximum Credible Earthquake or MCE) that geologically could occur. Given these "errors in expected human losses due to incorrect seismic hazard estimates" revealed globally in these past performances of the GSHAP maps (> 700,000 deaths 2001-2011), we need to ask not only: "Is reliable earthquake safety a lost art?" but also: "Who and what were the `Raiders of the Lost Art?' "

  13. A new method for the production of social fragility functions and the result of its use in worldwide fatality loss estimation for earthquakes

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    A review of over 200 fatality models for earthquake loss estimation produced by various authors over the past 50 years has identified the key parameters that influence fatality estimation in each of these models. These are often very specific and cannot readily be adapted globally. In the doctoral dissertation of the author, a new method is used to regress fatalities against intensity using loss functions based not only on fatalities but also on population models and other socioeconomic parameters constructed through time for every country worldwide for the period 1900-2013. The functions were calibrated over 1900-2008, and each individual quake from 2009-2013 was analysed in real time, in conjunction with www.earthquake-report.com. Using the CATDAT Damaging Earthquakes Database, which contains socioeconomic loss information for 7208 damaging earthquake events from 1900-2013, including the disaggregation of secondary effects, fatality estimates for over 2035 events from 1900-2013 have been re-examined. In addition, 99 of these events have detailed data for individual cities and towns or have been reconstructed to derive a death rate as a percentage of population. Many historical isoseismal maps and macroseismic intensity datapoint surveys collected globally have been digitised and modelled, covering around 1353 of these 2035 fatal events, to include an estimate of the population, occupancy, and socioeconomic climate at the time of the event within each intensity bracket. In addition, 1651 events that caused damage but no fatalities have been examined in the same way. Socioeconomic and engineering indices such as HDI and building vulnerability have been produced at country and state/province level, leading to a dataset that allows regressions based not only on a static view of risk but also on the change in the socioeconomic climate between earthquake events. This means that a year-1920 event in a country will not simply be

  14. Rapid exposure and loss estimates for the May 12, 2008 Mw 7.9 Wenchuan earthquake provided by the U.S. Geological Survey's PAGER system

    USGS Publications Warehouse

    Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.

    2008-01-01

    One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by government, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manually revised alerts issued in the following hours included the dimensions of the fault rupture. Within a half-day, PAGER's estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER's capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available, while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.
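    The exposure numbers quoted above come from overlaying a ShakeMap intensity grid with a gridded population database. A minimal sketch of that tally, with synthetic grids standing in for the real ShakeMap and population data:

```python
# Tally population per intensity bin from co-registered grids (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
mmi = rng.uniform(4.0, 9.5, size=(200, 200))        # ShakeMap intensity grid
population = rng.poisson(30, size=(200, 200))       # people per grid cell

for lo in range(4, 10):                             # MMI bins IV-V ... IX-X
    mask = (mmi >= lo) & (mmi < lo + 1)
    print(f"MMI {lo}-{lo + 1}: {population[mask].sum():,} people exposed")

# Severe-to-extreme exposure (MMI VIII or greater), as quoted in PAGER alerts:
print(f"MMI VIII+: {population[mmi >= 8.0].sum():,} people")
```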

  15. Trends in global earthquake loss

    NASA Astrophysics Data System (ADS)

    Arnst, Isabel; Wenzel, Friedemann; Daniell, James

    2016-04-01

    Based on the CATDAT damage and loss database, we analyse global trends of earthquake losses (in current values) and fatalities for the period between 1900 and 2015 from a statistical perspective. For this time period the data are complete for magnitudes above 6. First, we study the basic statistics of losses and find that losses below 10 billion US$ approximately satisfy a power law with an exponent of 1.7 for the cumulative distribution. Higher loss values are modelled with the Generalized Pareto Distribution (GPD). The 'transition' between power law and GPD is determined with the mean excess function. We split the data set into pre-1955 and post-1955 loss data, as exposure differs significantly between these periods due to population growth. The Annual Average Loss (AAL) for direct damage from events below 10 billion US$ differs by a factor of 6 between the periods, whereas incorporating the extreme loss events increases the AAL from 25 to 30 billion US$/yr. Annual Average Deaths (AAD) show little (30%) difference for events below 6,000 fatalities, and AAD values of 19,000 and 26,000 deaths per year if extreme values are incorporated. With data on the global Gross Domestic Product (GDP), which reflects annual expenditures (consumption, investment, government spending), and on capital stock, we relate losses to the economic capacity of societies and find that GDP (in real terms) grows much faster than losses, so that losses play a decreasing role given the growing prosperity of mankind. This reasoning does not necessarily apply on a regional scale. The main conclusions of the analysis are that (a) a correct projection of historic loss values to present-day US$ values is critical; (b) extreme value analysis is mandatory; (c) growing exposure is reflected in the AAL and AAD results for the pre- and post-1955 periods; (d) scaling loss values with global GDP data indicates that the relative size of losses, from a global perspective, decreases rapidly over time.
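    The tail workflow described above can be sketched as follows: compute an empirical mean excess function to locate the transition threshold, fit a Generalized Pareto to the exceedances above it, and estimate a Hill-type power-law exponent for the body below it. The loss sample, threshold, and cutoffs here are synthetic and illustrative.

```python
# Mean excess diagnostics, GPD tail fit, and a Hill-type body exponent.
# The loss sample is synthetic (billion US$); threshold choice is illustrative.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
losses = rng.pareto(1.7, 2000) * 2.0          # synthetic losses, billion US$

def mean_excess(sample, threshold):
    """Average exceedance above a threshold; roughly linear where a GPD holds."""
    exceed = sample[sample > threshold] - threshold
    return exceed.mean() if exceed.size else float("nan")

for u in (1.0, 5.0, 10.0, 20.0):
    print(f"u = {u:5.1f}   mean excess = {mean_excess(losses, u):7.2f}")

u = 10.0                                      # 'transition' threshold
tail = losses[losses > u] - u
shape, loc, scale = genpareto.fit(tail, floc=0.0)
print(f"GPD tail above {u}: shape = {shape:.2f}, scale = {scale:.2f}")

xmin = 0.5                                    # lower cutoff for the body fit
body = losses[(losses > xmin) & (losses <= u)]
alpha = 1.0 + body.size / np.log(body / xmin).sum()
print(f"power-law exponent (body) ~ {alpha:.2f}")
```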

  16. Origin of Human Losses due to the Emilia Romagna, Italy, M5.9 Earthquake of 20 May 2012 and their Estimate in Real Time

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2012-12-01

    Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues concerning the influence of error sources. If final observations and real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake in the Emilia Romagna region on May 20th, the real-time epicenter estimates of the GFZ and the USGS differed from the ultimate location by the INGV by 6 and 9 km, respectively. Fatalities estimated within an hour of the earthquake by the loss estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the ultimate epicenter released by INGV had been used. These four numbers, being small, do not differ statistically. Thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of INGV has reported intensities of I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that, in the 17 instances with significant differences, the calculated values were on average too high by one unit. By assuming higher than average attenuation within the standard bounds used for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations, the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher than average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to 7 reported. Thus, adjusting the attenuation in this case decreased the discrepancy between estimated and reported deaths by approximately a factor of two. The source of the fatalities is

  17. Earthquake Loss Scenarios: Warnings about the Extent of Disasters

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Tolis, S.; Rosset, P.

    2016-12-01

    It is imperative that losses expected from future earthquakes be estimated. Officials and the public need to be aware of what disaster is likely in store for them in order to reduce fatalities and efficiently help the injured. Scenarios for earthquake parameters can be constructed to a reasonable accuracy in highly active earthquake belts, based on knowledge of seismotectonics and history. Because of the inherent uncertainties of loss estimates, however, it would be desirable for more than one group to calculate an estimate for the same area. By discussing these estimates, one may find a consensus on the range of potential disasters and persuade officials and residents of the reality of the earthquake threat. Modeling a scenario and estimating earthquake losses requires sufficiently accurate data sets on the number of people present, the built environment, and, if possible, the transmission of seismic waves. As examples, we use loss estimates for possible repeats of historic earthquakes in Greece that occurred between -464 and 700. We model future large Greek earthquakes as having M6.8 and rupture lengths of 60 km. In four locations where historic earthquakes with serious losses have occurred, we estimate that 1,000 to 1,500 people might perish, with roughly four times as many injured. Defining the area of influence of these earthquakes as that with shaking intensities greater than or equal to V, we estimate that 1.0 to 2.2 million people in about 2,000 settlements may be affected. We calibrate the QLARM tool for calculating intensities and losses in Greece using the M6 1999 Athens earthquake and by matching the isoseismal information for six earthquakes that occurred in Greece during the last 140 years. Comparing fatality numbers that would occur theoretically today with the numbers reported, and correcting for the increase in population, we estimate that the improvement of the building stock has reduced the mortality and injury rate in Greek

  18. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on grouping countries that are expected to have similar susceptibility to future earthquake losses, given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for the generation of automated earthquake alerts. These alerts could potentially help rapid-earthquake-response agencies and governments respond better and reduce earthquake fatalities. Fatality estimates are also useful for stimulating earthquake preparedness planning and disaster mitigation. The proposed model has several advantages compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
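    The published form of this empirical model expresses the fatality rate at shaking intensity S as a two-parameter lognormal CDF, nu(S) = Phi(ln(S/theta)/beta) (Jaiswal and Wald, 2010). The sketch below evaluates that form with illustrative theta and beta values (the real coefficients are country-specific) and folds it with a hypothetical exposure table.

```python
# PAGER-style empirical fatality rate: nu(S) = Phi(ln(S/theta)/beta)
# (form per Jaiswal and Wald, 2010); theta/beta below are illustrative,
# not the published country-specific coefficients.
from math import log
from statistics import NormalDist

def fatality_rate(mmi, theta=11.0, beta=0.2):
    return NormalDist().cdf(log(mmi / theta) / beta)

# Expected deaths = sum over intensity bins of exposed population x rate.
exposure = {6: 2_000_000, 7: 800_000, 8: 150_000, 9: 20_000}  # hypothetical
deaths = sum(pop * fatality_rate(mmi) for mmi, pop in exposure.items())
print(f"expected fatalities ~ {deaths:,.0f}")
```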

  19. Pan-European Seismic Risk Assessment: A proof of concept using the Earthquake Loss Estimation Routine (ELER)

    NASA Astrophysics Data System (ADS)

    Corbane, Christina; Hancilar, Ufuk; Silva, Vitor; Ehrlich, Daniele; De Groeve, Tom

    2016-04-01

    One of the key objectives of the new EU civil protection mechanism is an enhanced understanding of the risks the EU is facing. Developing a European perspective may create significant opportunities for successfully combining resources toward the common objective of preventing and mitigating shared risks. Risk assessment and mapping represent the first step in these preventive efforts. The EU is facing an increasing number of natural disasters; among these, earthquakes are the second deadliest after extreme temperatures. A better shared understanding of where seismic risk lies in the EU helps identify which regions are most at risk and where more detailed seismic risk assessments are needed. In that scope, seismic risk assessment models at a pan-European level have great potential for providing an overview of the expected economic and human losses using a homogeneous quantitative approach and harmonized datasets. This study strives to demonstrate the feasibility of performing a probabilistic seismic risk assessment at a pan-European level with an open-access methodology and open datasets available across the EU. It also aims to highlight the challenges, data needs, and information gaps that stand in the way of a consistent seismic risk assessment at the pan-European level. The study constitutes a "proof of concept" that can complement the information provided by Member States in their National Risk Assessments. Its main contribution lies in pooling open-access data from different sources in a homogeneous format, which could serve as baseline data for performing more in-depth risk assessments in Europe.

  20. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving the speed and accuracy of earthquake disaster loss estimation is one of the key factors in effective earthquake response and rescue. The presentation of exposure data through a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. The method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake losses associated with different seismic intensities and store them in a 30'' × 30'' grid format; this phase has several stages: determining the earthquake loss calculation factors, gridding damage probability matrices, calculating building damage and calculating human losses. In the co-earthquake phase, loss estimation proceeds in two stages: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field, and then using this intensity field to extract loss statistics from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated against four actual earthquakes that occurred in China. It not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, the pre-calculated earthquake loss estimation data for China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
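    A toy version of the two-phase scheme: the pre-earthquake phase pre-computes losses per grid cell for each candidate intensity, and the co-earthquake phase builds an isoseismal intensity field and simply looks up and sums the pre-computed values. Grids, loss rates, and the circular intensity field are synthetic placeholders.

```python
# Two-phase estimation: pre-computed per-cell losses keyed by intensity,
# then a lookup against a co-earthquake intensity field. All data synthetic.
import numpy as np

n = 120                                        # cells per side of the grid
rng = np.random.default_rng(1)
population = rng.poisson(25, (n, n)).astype(float)

# Pre-earthquake phase: losses per cell for each candidate intensity VI..X.
loss_rate = {6: 0.0001, 7: 0.001, 8: 0.01, 9: 0.05, 10: 0.15}  # assumed rates
precalc = {i: population * r for i, r in loss_rate.items()}

# Co-earthquake phase: a theoretical isoseismal field (circular, for brevity).
yy, xx = np.mgrid[0:n, 0:n]
dist = np.hypot(xx - 60, yy - 60)
field = np.clip(10 - (dist / 12).astype(int), 0, 10)

# Lookup: sum pre-computed losses over cells at each intensity.
total = sum(precalc[i][field == i].sum() for i in loss_rate)
print(f"estimated human losses: {total:,.0f}")
```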

  1. Inferring Peak Ground Acceleration (PGA) from observed building damage and EO-derived exposure development to develop rapid loss estimates following the April 2015 Nepal earthquake.

    NASA Astrophysics Data System (ADS)

    Huyck, C. K.

    2016-12-01

    The April 25th Mw 7.8 Gorkha earthquake in Nepal occurred in an area with very few seismic stations. Ground motions were estimated primarily by Ground Motion Prediction Equations (GMPEs) over a very large region, with a very high degree of uncertainty. Accordingly, initial fatality estimates and their distribution were highly uncertain, with a 65% chance of fatalities ranging from 1,000 to 100,000. With the aim of developing estimates of 1) the number of buildings damaged by category (slight, moderate, extensive, complete), 2) fatalities and their distribution, and 3) rebuilding costs, researchers at ImageCat have developed a preliminary inferred Peak Ground Acceleration (PGA, in %g) product. The inferred PGA is determined by using observations of building collapse from the National Geospatial Agency and building exposure estimates derived from EO data to determine the percentage of buildings collapsed in key locations. The percentage of building collapse is adjusted for accuracy and cross-referenced with composite building damage functions for 4 development patterns in Nepal: 1) sparsely populated, 2) rural, 3) dense development, and 4) urban development, to yield an inferred PGA. The composite damage functions are derived from USGS PAGER collapse fragility functions (Jaiswal et al., 2011) and are weighted by building type frequencies developed by ImageCat. The PGA is interpolated to yield a surface. An initial estimate based on ATC-13 (Rojahn and Sharpe, 1985) using these PGA values yields 225,000 to 450,000 extensively damaged or destroyed buildings and 8,700 to 22,000 fatalities, with a mean estimate of 15,700. The total number of displaced persons is estimated at between 1 and 2 million. Rebuilding costs for building damage alone are estimated to be between 2 and 3 billion USD. The inferred PGA product is recommended for use solely in loss estimation processes.
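    The central inference step, recovering PGA from an observed collapse fraction, amounts to inverting a collapse-fragility curve for the local building mix. Below is a sketch under a lognormal fragility assumption; the median and dispersion are illustrative composites, not ImageCat's weighted functions.

```python
# Invert a lognormal collapse-fragility curve for PGA.
# Median/beta values are illustrative, not the study's composites.
from math import exp, log
from statistics import NormalDist

def collapse_fraction(pga_g, median_g, beta):
    """Lognormal fragility: P(collapse | PGA)."""
    return NormalDist().cdf(log(pga_g / median_g) / beta)

def infer_pga(observed_fraction, median_g, beta):
    """Invert the fragility curve for PGA (in g)."""
    z = NormalDist().inv_cdf(observed_fraction)
    return median_g * exp(beta * z)

# e.g. 12% of buildings observed collapsed in a 'rural' development pattern:
print(f"inferred PGA ~ {infer_pga(0.12, median_g=0.9, beta=0.6):.2f} g")
```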

  2. Spatial correlation of probabilistic earthquake ground motion and loss

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.

    2001-01-01

    Spatial correlation of annual earthquake ground motions and losses can be used to estimate the variance of annual losses to a portfolio of properties exposed to earthquakes. A direct method is described for the calculation of the spatial correlation of earthquake ground motions and losses. Calculations for the direct method can be carried out using either numerical quadrature or a discrete, matrix-based approach. Numerical results for this method are compared with those calculated from a simple Monte Carlo simulation. Spatial correlation of ground motion and loss is induced by the systematic attenuation of ground motion with distance from the source, by common site conditions, and by the finite length of fault ruptures. Spatial correlation is also strongly dependent on the partitioning of the variability, given an event, into interevent and intraevent components. Intraevent variability reduces the spatial correlation of losses; interevent variability increases it. The higher the spatial correlation, the larger the variance in losses to a portfolio, and the more likely extreme values become. This result underscores the importance of accurately determining the relative magnitudes of intraevent and interevent variability in ground-motion studies, because of their strong impact on estimates of earthquake losses to a portfolio. The direct method offers an alternative to simulation for calculating the variance of losses to a portfolio, which may reduce the amount of calculation required.
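    A Monte Carlo sketch of the partitioning effect described above: the inter-event term is shared by all sites in an event (raising spatial correlation and portfolio variance), while the intra-event term is site-specific. All numbers are illustrative, and a simple exponential of log motion stands in for a loss function.

```python
# Portfolio-loss variance under inter-/intra-event variability (illustrative).
import numpy as np

rng = np.random.default_rng(7)
n_sites, n_events = 50, 20000
tau, phi = 0.3, 0.5              # inter- and intra-event std devs (ln units)

eta = rng.normal(0.0, tau, (n_events, 1))        # shared per event
eps = rng.normal(0.0, phi, (n_events, n_sites))  # independent per site
site_loss = np.exp(eta + eps)                    # toy monotone loss proxy
print(f"portfolio variance (inter + intra): {site_loss.sum(axis=1).var():.1f}")

# With the inter-event term removed, losses decorrelate across sites and the
# portfolio variance drops:
site_loss0 = np.exp(rng.normal(0.0, phi, (n_events, n_sites)))
print(f"portfolio variance (intra only):    {site_loss0.sum(axis=1).var():.1f}")
```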

  3. Earthquake interdependence and insurance loss modeling

    NASA Astrophysics Data System (ADS)

    Muir Wood, R.

    2005-12-01

    Probabilistic catastrophe loss modeling generally assumes that earthquakes are independent events and occur far enough apart in time that damage from one event is fully restored before another earthquake occurs. While time dependence and cascade fault rupturing are today standard elements of the earthquake hazard engine, in the next generation of catastrophe loss models one can expect to find a more comprehensive range of earthquake interdependence represented in a full simulation modeling environment. Such behavior includes the ways in which earthquakes relate to one another in both space and time (including foreshock, aftershock and triggered mainshock distinctions), as well as the damage that can be predicted from overlapping damage fields, depending on how much reconstruction time has elapsed between events. For insurance purposes, losses are framed by the 168-hour clause for classifying losses as falling within the same `event' for reinsurance recoveries, as well as by the annual insurance contract. The understanding of the ways in which stress changes associated with fault rupture affect the probabilities of earthquakes on surrounding faults has also expanded the predictability of potential earthquake sequences, as well as highlighted the potential to identify locations where, for some time window, risk can be discounted. While it can be illuminating to explore the loss and insurance implications of the patterns of earthquake occurrence seen historically along the Nankaido subduction zone of southern Japan, in New Madrid in 1811-1812, or in Nevada in 1954, the sequences to be expected in the future are unlikely to have historical precedent in the region in which they form.

  4. Rapid estimation of the economic consequences of global earthquakes

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2011-01-01

    The U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, operational since mid-2007, rapidly estimates the most affected locations and the population exposure at different levels of shaking intensity. The PAGER system has significantly improved the way aid agencies determine the scale of response needed in the aftermath of an earthquake. For example, the PAGER exposure estimates provided reasonably accurate assessments of the scale and spatial extent of the damage and losses following the 2008 Wenchuan earthquake (Mw 7.9) in China, the 2009 L'Aquila earthquake (Mw 6.3) in Italy, the 2010 Haiti earthquake (Mw 7.0), and the 2010 Chile earthquake (Mw 8.8). Nevertheless, some engineering and seismological expertise is often required to digest PAGER's exposure estimate and turn it into estimated fatalities and economic losses. This has been the focus of PAGER's most recent development. With the new loss-estimation component of the PAGER system, it is now possible to produce rapid estimates of expected fatalities for global earthquakes (Jaiswal and others, 2009). While an estimate of earthquake fatalities is a fundamental indicator of potential human consequences in developing countries (for example, Iran, Pakistan, Haiti, Peru, and many others), economic consequences often drive the responses in much of the developed world (for example, New Zealand, the United States, and Chile), where the improved structural behavior of seismically resistant buildings significantly reduces earthquake casualties. Rapid availability of estimates of both fatalities and economic losses can be a valuable resource. The total time needed to determine the actual scope of an earthquake disaster and to respond effectively varies from country to country. It can take days or sometimes weeks before the damage and consequences of a disaster can be understood both socially and economically. The objective of the U.S. Geological Survey's PAGER system is

  5. Strategies for rapid global earthquake impact estimation: the Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, D.J.

    2013-01-01

    This chapter summarizes the state-of-the-art for rapid earthquake impact estimation. It details the needs and challenges associated with quick estimation of earthquake losses following global earthquakes, and provides a brief literature review of various approaches that have been used in the past. With this background, the chapter introduces the operational earthquake loss estimation system developed by the U.S. Geological Survey (USGS) known as PAGER (for Prompt Assessment of Global Earthquakes for Response). It also details some of the ongoing developments of PAGER’s loss estimation models to better supplement the operational empirical models, and to produce value-added web content for a variety of PAGER users.

  6. Ten Years of Real-Time Earthquake Loss Alerts

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2013-12-01

    In priority order, the most important parameters of an earthquake disaster are: the number of fatalities, the number of injured, the mean damage as a function of settlement, and the expected intensity of shaking at critical facilities. The requirements for calculating these parameters in real time are: 1) Availability of reliable earthquake source parameters within minutes. 2) Capability of calculating expected intensities of strong ground shaking. 3) Data sets on population distribution and on the condition of the building stock as a function of settlement. 4) Data on locations of critical facilities. 5) Verified methods of calculating damage and losses. 6) Personnel available on a 24/7 basis to perform and review these calculations. There are three services available that distribute information about the likely consequences of earthquakes within about half an hour of the event. Two of these calculate losses; one gives only general information. Although much progress has been made during the last ten years in improving the data sets and the calculation methods, much remains to be done. The data sets are only first-order approximations and the methods bear refinement. Nevertheless, the quantitative loss estimates after damaging earthquakes in real time are generally correct in the sense that they allow distinguishing disastrous from inconsequential events.

  7. Hypocentre estimation of induced earthquakes in Groningen

    NASA Astrophysics Data System (ADS)

    Spetzler, Jesper; Dost, Bernard

    2017-04-01

    Induced earthquakes due to gas production have taken place in the province of Groningen in the northeast of the Netherlands since 1986. In the first years of seismicity, a sparse seismological network with large station distances from the seismogenic area in Groningen was used, and the location of induced earthquakes was limited by the few, widely spaced stations. Recently, the station network has been extended significantly, and locating induced earthquakes in Groningen has become routine work, except for the depth estimation of the events. In the hypocentre method used for source location by the Royal Netherlands Meteorological Institute (KNMI), the depth of the induced earthquakes is by default set to 3 km, the average depth of the gas reservoir. Alternatively, a differential P-wave traveltime approach for source location is applied to data recorded by the extended network. The epicentre and depth of 87 induced earthquakes from 2014 to July 2016 have been estimated. The newly estimated epicentres are close to the induced earthquake locations from the current method applied by the KNMI. It is observed that most induced earthquakes take place at reservoir level. Several events of similar magnitude are found near a brittle anhydrite layer in the overburden, which consists mainly of rock-salt evaporites.

  8. Hypocenter Estimation of Induced Earthquakes in Groningen

    NASA Astrophysics Data System (ADS)

    Spetzler, Jesper; Dost, Bernard

    2017-01-01

    Induced earthquakes due to gas production have taken place in the province of Groningen in the northeast of the Netherlands since 1986. In the first years of seismicity, a sparse seismological network with large station distances from the seismogenic area in Groningen was used, and the location of induced earthquakes was limited by the few, widely spaced stations. Recently, the station network has been extended significantly, and locating induced earthquakes in Groningen has become routine work, except for the depth estimation of the events. In the hypocenter method used for source location by the Royal Netherlands Meteorological Institute (KNMI), the depth of the induced earthquakes is by default set to 3 km, the average depth of the gas reservoir. Alternatively, a differential P-wave travel-time approach for source location is applied to data recorded by the extended network. The epicenter and depth of 87 induced earthquakes from 2014 to July 2016 have been estimated. The newly estimated epicenters are close to the induced earthquake locations from the current method applied by the KNMI. It is observed that most induced earthquakes take place at reservoir level. Several events of similar magnitude are found near a brittle anhydrite layer in the overburden, which consists mainly of rock-salt evaporites.
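    A sketch of depth estimation by differential P travel times as described in these records: grid-search the trial depth that minimizes the misfit of station-pair arrival-time differences. For simplicity this uses a uniform velocity and holds the epicentre fixed; the actual method uses a layered Groningen velocity model and solves for the epicentre as well.

```python
# Grid-search the source depth that best fits differential P travel times.
# Uniform velocity and fixed epicentre for simplicity (illustrative only).
import numpy as np

vp = 4.0                                        # km/s, simplified velocity
stations = np.array([[0, 0], [5, 2], [-3, 6], [8, -4], [-6, -5]], float)
true_src = np.array([1.0, 1.0, 3.0])            # x, y, depth (km)

def p_times(src):
    d = np.sqrt(((stations - src[:2]) ** 2).sum(axis=1) + src[2] ** 2)
    return d / vp

obs_diff = np.subtract.outer(p_times(true_src), p_times(true_src))

best_depth, best_misfit = None, np.inf
for z in np.arange(0.5, 6.01, 0.1):             # trial depths
    t = p_times(np.array([1.0, 1.0, z]))        # epicentre held fixed here
    misfit = ((np.subtract.outer(t, t) - obs_diff) ** 2).sum()
    if misfit < best_misfit:
        best_depth, best_misfit = z, misfit
print(f"best-fit depth: {best_depth:.1f} km")
```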

  9. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J.; Petersen, M.D.

    2004-01-01

    The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes of similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
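    A sketch of the statistical machinery described above: fit a gamma distribution to per-ZIP-code loss ratios and read off a conditional exceedance probability. The synthetic data stand in for the insured-loss records; a full reproduction would go on to regress the gamma parameters on the ground-motion estimates.

```python
# Fit a gamma distribution to per-ZIP-code loss ratios (synthetic stand-ins)
# and evaluate a conditional exceedance probability.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)
pga = rng.uniform(0.1, 0.8, 234)                   # per-ZIP-code ground motion
losses = rng.gamma(shape=2.0, scale=0.02 + 0.1 * pga)  # mean grows with PGA

shape, loc, scale = gamma.fit(losses, floc=0.0)
print(f"fitted gamma: shape = {shape:.2f}, scale = {scale:.4f}")

# Conditional probability that a ZIP code's loss ratio exceeds 5%:
print(f"P(loss ratio > 0.05) = {gamma.sf(0.05, shape, loc=loc, scale=scale):.2f}")
```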

  10. Using Socioeconomic Data to Calibrate Loss Estimates

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2013-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" workflow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons, and these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  11. Development of fragility functions to estimate homelessness after an earthquake

    NASA Astrophysics Data System (ADS)

    Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    used to estimate homelessness as a function of information that is readily available immediately after an earthquake. These fragility functions could be used by relief agencies and governments to provide an initial assessment of the need for allocation of emergency shelter immediately after an earthquake. Daniell JE (2014) The development of socio-economic fragility functions for use in worldwide rapid earthquake loss estimation procedures, Ph.D. Thesis (in publishing), Karlsruhe, Germany. Daniell, J. E., Khazai, B., Wenzel, F., & Vervaeck, A. (2011). The CATDAT damaging earthquakes database. Natural Hazards and Earth System Science, 11(8), 2235-2251. doi:10.5194/nhess-11-2235-2011 Daniell, J.E., Wenzel, F. and Vervaeck, A. (2012). "The Normalisation of socio-economic losses from historic worldwide earthquakes from 1900 to 2012", 15th WCEE, Lisbon, Portugal, Paper No. 2027. Jaiswal, K., & Wald, D. (2010). An Empirical Model for Global Earthquake Fatality Estimation. Earthquake Spectra, 26(4), 1017-1037. doi:10.1193/1.3480331

  12. Creating a Global Building Inventory for Earthquake Loss Assessment and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2008-01-01

    Earthquakes have claimed approximately 8 million lives over the last 2,000 years (Dunbar, Lockridge and others, 1992), and fatality rates are likely to continue to rise with increased population and urbanization of global settlements, especially in developing countries. More than 75% of earthquake-related human casualties are caused by the collapse of buildings or structures (Coburn and Spence, 2002). It is disheartening to note that large fractions of the world's population still reside in informal, poorly constructed and non-engineered dwellings, which are highly susceptible to collapse during earthquakes. Moreover, with increasing urbanization, half of the world's population now lives in urban areas (United Nations, 2001), and half of these urban centers are located in earthquake-prone regions (Bilham, 2004). The poor performance of most building stocks during earthquakes remains a primary societal concern. However, despite this dark history and bleaker future trends, there are no comprehensive global building inventories of sufficient quality and coverage to adequately address and characterize future earthquake losses. Such an inventory is vital both for earthquake loss mitigation and for earthquake disaster response purposes. While the latter purpose is the motivation for this work, we hope that the global building inventory database described herein will find widespread use in other mitigation efforts as well. For a real-time earthquake impact alert system, such as the U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) (Wald, Earle and others, 2006), we seek to rapidly evaluate potential casualties associated with earthquake ground shaking for any region of the world. The casualty estimation is based primarily on (1) rapid estimation of the ground shaking hazard, (2) aggregating the population exposure within different building types, and (3) estimating the casualties from the collapse of vulnerable buildings. Thus, the

  13. Rapid Earthquake Magnitude Estimation for Early Warning Applications

    NASA Astrophysics Data System (ADS)

    Goldberg, Dara; Bock, Yehuda; Melgar, Diego

    2017-04-01

    Earthquake magnitude is a concise metric that provides invaluable information about the destructive potential of a seismic event. Rapid estimation of magnitude for earthquake and tsunami early warning purposes requires reliance on near-field instrumentation. For large magnitude events, ground motions can exceed the dynamic range of near-field broadband seismic instrumentation (clipping). Strong-motion accelerometers are designed with low gains to better capture strong shaking. Estimating earthquake magnitude rapidly from near-source strong-motion data requires integration of acceleration waveforms to displacement. However, integration amplifies small errors, creating unphysical drift that must be eliminated with a high-pass filter. The loss of long-period information due to filtering is an impediment to magnitude estimation in real time: the relation between magnitude and ground motion measured with strong-motion instrumentation saturates, leading to underestimation of earthquake magnitude. Using station displacements from Global Navigation Satellite System (GNSS) observations, we can supplement the high-frequency information recorded by traditional seismic systems with long-period observations to better inform rapid response. Unlike seismic-only instrumentation, ground motions measured with GNSS scale with magnitude without saturation [Crowell et al., 2013; Melgar et al., 2015]. We refine the current magnitude scaling relations using peak ground displacement (PGD) by adding a large GNSS dataset of earthquakes in Japan. Because it does not suffer from saturation, GNSS alone has significant advantages over seismic-only instrumentation for rapid magnitude estimation of large events. The earthquake's magnitude can be estimated within 2-3 minutes of earthquake onset time [Melgar et al., 2013]. We demonstrate that seismogeodesy, the optimal combination of GNSS and seismic data at collocated stations, provides the added benefit of improving the sensitivity of
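    The PGD scaling relation referenced above has the form log10(PGD) = A + B*Mw + C*Mw*log10(R) (Crowell et al., 2013), which can be inverted for magnitude once PGD and hypocentral distance are known. The coefficients below are illustrative placeholders standing in for the published regression values.

```python
# Invert a PGD scaling law of the form log10(PGD) = A + B*Mw + C*Mw*log10(R)
# for magnitude. Coefficients are illustrative placeholders (assumed), not
# the published regression values.
import numpy as np

A, B, C = -4.434, 1.047, -0.138

def magnitude_from_pgd(pgd_cm, hypocentral_distance_km):
    """Solve the scaling law for Mw given observed PGD and distance."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(hypocentral_distance_km))

# A GNSS station 100 km from the source observing 20 cm peak displacement:
print(f"Mw ~ {magnitude_from_pgd(20.0, 100.0):.1f}")
```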

  14. Estimating System Impact of Earthquakes on a Major Metropolitan Roadway Network

    NASA Astrophysics Data System (ADS)

    Perkins, D. M.; Taylor, C. E.; Werner, S. D.

    2003-12-01

    The impact of an earthquake on the Memphis, TN, roadway system has been estimated using a prototype computer program, REDARS (Risks from Earthquake DAmage to Roadway Systems). For scenario earthquakes, the program computes ground motions and ground deformations at bridges and other components throughout the system. Costs and times to repair this damage, together with each component's ability to accommodate traffic flow during repairs, are then estimated and used to establish modified post-earthquake system states at various times after the earthquake. Transportation network analysis procedures are then applied to each system state in order to estimate how post-earthquake traffic flows and travel times are affected by the various roadway closures. Consequences of this damage to the roadway system, in terms of economic losses and reduced access to key locations in the region (e.g., hospitals, airports, etc.), are then estimated. Uncertainties are incorporated throughout. Using Monte Carlo simulation of earthquake occurrence and of the modeled uncertainties, the program was used to produce a 50,000-year history of annual estimated drive-time loss in dollars, with nearly 800 non-zero loss years. The ordered list of annual losses makes up an empirical annual-rate loss distribution function. From this list, likelihood functions for the 500-yr and 2500-yr losses can be obtained, yielding most likely values of about 300 million and 600 million dollars, respectively, with ranges of uncertainty (10th to 90th percentile) of about 10 and 20 percent. The ordered list of non-zero losses is nearly exponentially distributed. This simple structure permits, through bootstrapping with variance-reduction techniques, an estimate of the average conditional loss that is several tens of times more precise than the average of the simulated losses. The model for earthquake magnitude, locations, and occurrence frequency has been taken from the US Geological

  15. An approximate estimate of the earthquake risk in the United Arab Emirates

    NASA Astrophysics Data System (ADS)

    Al-Homoud, A.; Wyss, M.

    2003-04-01

    The UAE is not as safe from earthquake disasters as often assumed. The magnitude 5.1 earthquake of 11 March 2002 in Fujairah Masafi demonstrated that earthquakes can occur in the UAE. The threat of large earthquakes in southern Iran is well known to seismologists, but people generally do not realize that the international expert team that assessed the earthquake hazard for the entire world placed the UAE in the same class as many parts of Iran and Turkey, as well as California. There is no question that large earthquakes will occur again in southern Iran and that moderate earthquakes will happen again in the UAE. The only question is: when will they happen? From the history of earthquakes, we have an understanding, although limited to the last few decades, of what size earthquakes may be expected. For this reason, it is timely to estimate the probable consequences in the UAE of a large to great earthquake in southern Iran and of a moderate earthquake in the UAE itself. We propose to estimate the number of possible injuries and fatalities, and the financial loss in building value, that might occur in the UAE in several likely future earthquakes. This estimate will be based on scenario earthquakes with positions and magnitudes determined by us from seismic hazard maps. Scenario earthquakes are events that are very likely to occur in the future, because similar ones have happened in the past. The time when they may happen will not be estimated in this work. The input for calculating the earthquake risk in the UAE, as we propose, will be the census figures for the population and the estimated properties of the building stock. WAPPMERR is the only research group capable of making these estimates for the UAE. The deliverables will be a scientific manuscript to be submitted to a reviewed journal, which will contain tables and figures showing the estimated numbers of (a) people killed and (b) people injured (slightly and seriously counted separately), (c) buildings

  16. Earthquakes trigger the loss of groundwater biodiversity

    PubMed Central

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-01-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and “ecosystem engineers”, we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems. PMID:25182013

  17. Earthquakes trigger the loss of groundwater biodiversity

    NASA Astrophysics Data System (ADS)

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; di Cioccio, Alessia; di Lorenzo, Tiziana; Petitta, Marco; di Carlo, Piero

    2014-09-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and ``ecosystem engineers'', we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.

  18. Earthquakes trigger the loss of groundwater biodiversity.

    PubMed

    Galassi, Diana M P; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-09-03

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and "ecosystem engineers", we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.

  19. Development of Rapid Earthquake Loss Assessment Methodologies for Euro-Med Region

    NASA Astrophysics Data System (ADS)

    Erdik, M.

    2009-04-01

    For almost-real-time estimation of the ground shaking and losses after a major earthquake in the Euro-Mediterranean region, the JRA-3 component of the EU Project entitled "Network of Research Infrastructures for European Seismology, NERIES" foresees: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base, supported, if and when possible, by the estimation of fault rupture parameters from rapid inversion of data from on-line regional broadband stations. 2. Estimation of the spatial distribution of selected ground motion parameters at engineering bedrock through region-specific ground motion attenuation relationships and/or actual physical simulation of ground motion. 3. Estimation of the spatial distribution of selected site-specific ground motion parameters using a regional geology (or urban geotechnical information) database and appropriate amplification models. 4. Estimation of the losses and uncertainties at various orders of sophistication (buildings, casualties). The main objective of the JRA-3 work package is to develop a methodology for real-time estimation of losses after a major earthquake in the Euro-Mediterranean region. The multi-level methodology being developed together with researchers from Imperial College, NORSAR and ETH-Zurich is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. A comprehensive methodology has been developed, and the related software ELER is under preparation. The applications of the ELER software are presented in the following two accompanying papers: 1. Regional Earthquake Shaking and Loss Estimation; 2. Urban Earthquake Shaking and Loss Assessment.

  20. Real-time earthquake shake, damage, and loss mapping for Istanbul metropolitan area

    NASA Astrophysics Data System (ADS)

    Zülfikar, A. Can; Fercan, N. Özge Zülfikar; Tunç, Süleyman; Erdik, Mustafa

    2017-01-01

    The past devastating earthquakes in densely populated urban centers, such as the 1994 Northridge; 1995 Kobe; 1999 series of Kocaeli, Düzce, and Athens; and 2011 Van-Erciş events, showed that substantial social and economic losses can be expected. Previous studies indicate that inadequate emergency response can increase the number of casualties by up to a factor of 10, which suggests the need for research on rapid estimation of earthquake shaking, damage, and losses. The reduction of casualties in urban areas immediately following an earthquake can be improved if the location and severity of damage can be rapidly assessed using information from rapid response systems. In this context, a research project (TUBITAK-109M734) titled "Real-time Information of Earthquake Shaking, Damage, and Losses for Target Cities of Thessaloniki and Istanbul" was conducted during 2011-2014 to establish rapid estimation of ground motion shaking and the related earthquake damage and casualties for the target cities. In the present study, the application to the Istanbul metropolitan area is presented. To fulfill this objective, the earthquake hazard and risk assessment methodology known as the Earthquake Loss Estimation Routine, developed for the Euro-Mediterranean region within the Network of Research Infrastructures for European Seismology EC-FP6 project, was used. The current application to the Istanbul metropolitan area draws on real-time ground motion information obtained from strong motion stations distributed throughout the densely populated areas of the city. Based on this ground motion information, building damage is estimated using a grid-based building inventory, and the related loss is then estimated. Through this application, the rapidly estimated information enables public and private emergency management authorities to take action and to allocate and prioritize resources to minimize casualties in urban areas during the immediate post-earthquake period. Moreover, it

  1. A Method for Estimation of Death Tolls in Disastrous Earthquake

    NASA Astrophysics Data System (ADS)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls are among the most important items of earthquake damage and loss. If we can precisely estimate the potential toll and distribution of fatalities in individual districts as soon as an earthquake occurs, it not only makes emergency programs and disaster management more effective, but also supplies critical information for planning and managing the disaster and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps, and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, types and usage patterns of buildings, distribution of population, and socio-economic situations, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is at present the greatest in the world. In the meantime, complete seismic data are easily obtained from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake happens. Therefore, it becomes possible to estimate death tolls caused by an earthquake in Taiwan based on this preliminary information. First, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give the PGA Index for each individual seismic station, according to the mainshock data of the Chi-Chi earthquake. To supply the distribution of iso-seismic intensity contours in all districts, and to resolve the problem of districts containing no seismic station, we apply the Kriging interpolation method and GIS software to the PGA Index values and the geographic coordinates of the individual seismic stations. The population density depends on
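    A sketch of the station-level PGA Index and its spatial interpolation: the index is the arithmetic mean of the three PGA components at each station, and a smooth surface is then interpolated so that districts without stations receive a value. The study uses kriging; a radial-basis-function interpolant stands in for it here, and all station data are synthetic.

```python
# PGA Index = mean of three components; interpolate a surface for districts
# without stations. RBF stands in for kriging; station data are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)
coords = rng.uniform(0.0, 100.0, (30, 2))      # station x, y (km)
pga_3c = rng.uniform(50.0, 400.0, (30, 3))     # N-S, E-W, vertical PGA (gal)
pga_index = pga_3c.mean(axis=1)                # arithmetic mean of components

surface = RBFInterpolator(coords, pga_index, kernel="thin_plate_spline")
district_centres = np.array([[20.0, 30.0], [70.0, 80.0]])  # hypothetical
for (x, y), v in zip(district_centres, surface(district_centres)):
    print(f"district at ({x:.0f}, {y:.0f}): PGA Index ~ {v:.0f} gal")
```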

  2. The Enormous Challenge faced by China to Reduce Earthquake Losses

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Mooney, W. D.; Wang, B.

    2014-12-01

    In the past six years, several big earthquakes have occurred in continental China and caused enormous economic losses and casualties. These earthquakes include the following: the 2008 Mw=7.9 Wenchuan, 2010 Mw=6.9 Yushu, 2013 Mw=6.6 Lushan, and 2013 Mw=5.9 Minxian events. On August 4, 2014, the Mw=6.1 earthquake struck Ludian in Yunnan province. Although it was a moderate-size earthquake, the death toll reached at least 589 people. In fact, more than 50% of Chinese cities, and more than 70% of large to medium-size cities, are located in areas where the seismic intensity may reach VII or higher. Collapsing buildings are the main cause of Chinese earthquake casualties; the secondary causes are induced geological disasters such as landslides and barrier lakes. Several enormous challenges must be overcome to reduce hazards from earthquakes and secondary disasters. (1) Much of the infrastructure in China cannot meet the engineering standard for adequate seismic protection. In particular, some buildings are not strong enough to survive the potential strong ground shaking, and some are not set back a safe distance from active faults. It will be very costly to reinforce or rebuild such buildings. (2) There is a lack of rigorous legislation on earthquake disaster protection. (3) It appears that both the government and citizens rely too much on earthquake prediction to avoid earthquake casualties. (4) Geologic conditions are very complicated and in need of additional study, especially in southwest China, where detailed surveys of potential geologic disasters, such as landslides, are still lacking. Although we still cannot predict earthquakes, it is possible to greatly reduce earthquake hazards. For example, some Chinese scientists have begun studies with the aim of identifying active faults under large cities and proposing higher building standards. It will be very difficult to improve the quality and scope of earthquake disaster protection dramatically in

  3. Estimation of earthquake risk curves of physical building damage

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias; Janouschkowetz, Silke; Fischer, Thomas; Simon, Christian

    2014-05-01

    In this study, a new approach to quantifying seismic risk is presented. The earthquake risk curves for the number of buildings with a defined physical damage state are estimated for South Africa, with the physical damage states defined according to the current European macroseismic intensity scale (EMS-98). The advantage of this kind of risk curve is that its plausibility can be checked more easily than that of other types. The earthquake risk curve for physical building damage can be compared with historical damage and the corresponding empirical return periods. The number of damaged buildings from historical events is generally explored and documented in more detail than the corresponding monetary losses; the latter are also influenced by changing economic conditions, such as inflation and price hikes. Furthermore, the monetary risk curve can be derived from the developed risk curve of physical building damage. The earthquake risk curve can also be used for the validation of underlying sub-models, such as the hazard and vulnerability modules.

  4. Uncertainties in Earthquake Loss Analysis: A Case Study From Southern California

    NASA Astrophysics Data System (ADS)

    Mahdyiar, M.; Guin, J.

    2005-12-01

    Probabilistic earthquake hazard and loss analyses play important roles in many areas of risk management, including earthquake-related public policy and insurance ratemaking. Rigorous loss estimation for portfolios of properties is difficult since there are various types of uncertainty in all aspects of modeling and analysis. The objective of this study is to investigate the sensitivity of earthquake loss estimation to uncertainties in regional seismicity, earthquake source parameters, ground motions, and the spatial correlation of sites, for typical property portfolios in Southern California. Southern California is an attractive region for such a study because it has a large population concentration exposed to significant levels of seismic hazard. During the last decade, there have been several comprehensive studies of most regional faults and seismogenic sources, as well as detailed studies of regional ground motion attenuation and of regional and local site responses to ground motions. This information has been used by engineering seismologists to conduct regional seismic hazard and risk analysis on a routine basis. However, one of the more difficult tasks in such studies is the proper incorporation of uncertainties in the analysis. On the hazard side, there are uncertainties in the magnitudes, rates, and mechanisms of the seismic sources, in local site conditions, and in ground-motion site amplifications. On the vulnerability side, there are considerable uncertainties in estimating the state of damage of buildings under different earthquake ground motions. On the analytical side, there are challenges in capturing the spatial correlation of ground motions and building damage, and in integrating thousands of loss distribution curves with different degrees of correlation. In this paper we propose to address some of these issues by conducting loss analyses of a typical small portfolio in southern California, taking into consideration various source and ground

  5. Future Earth: Reducing Loss By Automating Response to Earthquake Shaking

    NASA Astrophysics Data System (ADS)

    Allen, R. M.

    2014-12-01

    Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet, the same interconnected infrastructure also allows us to rapidly detect earthquakes as they begin, and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Angelenos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts, but also to collect data and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake detectors. As our development of the technology continues, we can anticipate ever-more automated responses to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US. We will illustrate how smartphones can contribute to the system. Finally, we will review applications of the information to reduce future losses.

  6. Earthquake detection by new motion estimation algorithm in video processing

    NASA Astrophysics Data System (ADS)

    Hong, Chien-Shiang; Wang, Chuen-Ching; Tai, Shen-Chuan; Chen, Ji-Feng; Wang, Chung-Yao

    2011-01-01

    As increasing urbanization takes place worldwide, earthquake hazards pose serious threats to lives and property in urban areas. A practical earthquake prediction method appears to be far from realization. Generally, traditional instruments for earthquake detection have the disadvantages of high cost and large size. To address these problems, this paper presents a new method that can detect earthquake intensity using a video capture device. The method is based on a newly proposed motion vector algorithm that uses simple but effective techniques to immediately calculate the acceleration of a predefined target object. By estimating the motion vector variation, the movement distance of the predefined target object can be computed, and therefore the earthquake amplitude can be determined. The effectiveness of the proposed scheme is demonstrated in a series of experimental simulations. It is shown that the scheme successfully detects earthquake occurrence and identifies the earthquake amplitude from video streams.
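
    The abstract does not give the algorithm itself, so as a generic illustration of the underlying idea, the sketch below uses classic block matching to track a bright target patch between two synthetic frames; the per-frame displacement, divided by the frame interval, could then be differenced to approximate velocity and acceleration. This is our own toy example, not the paper's method.

```python
# Illustrative sketch only: the paper's specific motion-vector algorithm is
# not reproduced here. This shows generic block matching between consecutive
# video frames to track a target patch and derive its displacement.
import numpy as np

def block_match(prev_frame, cur_frame, top, left, size=16, search=8):
    """Return (dy, dx) minimizing the sum of absolute differences."""
    block = prev_frame[top:top+size, left:left+size].astype(np.int32)
    best, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y+size > cur_frame.shape[0] or x+size > cur_frame.shape[1]:
                continue
            cand = cur_frame[y:y+size, x:x+size].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec

# Toy frames: a bright square shifted by (2, 1) pixels between frames.
f0 = np.zeros((64, 64), dtype=np.uint8); f0[20:36, 20:36] = 255
f1 = np.zeros((64, 64), dtype=np.uint8); f1[22:38, 21:37] = 255
print(block_match(f0, f1, top=20, left=20))   # expect (2, 1)
```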

  7. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes, and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
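
    The exposure calculation described, correlating a ShakeMap intensity grid with a co-registered population grid, reduces to a per-intensity tally. The sketch below illustrates the idea with invented toy grids; it is not PAGER code.

```python
# Hedged sketch of the exposure idea (not PAGER's actual implementation):
# overlay a ShakeMap intensity grid on a co-registered population grid and
# tally the population exposed at each MMI level. Grids are toy data.
import numpy as np

mmi = np.array([[4.2, 5.1, 6.8],
                [5.5, 7.3, 8.1],
                [4.9, 6.2, 7.7]])          # ShakeMap macroseismic intensity
pop = np.array([[1000,  500, 2000],
                [ 300, 8000, 1200],
                [ 700,  400, 2500]])       # population per grid cell

for level in range(4, 10):                 # MMI IV ... IX
    mask = (np.round(mmi).astype(int) == level)
    print(f"MMI {level}: {pop[mask].sum():6d} people exposed")
```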

  8. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  9. Estimating the macroseismic parameters of earthquakes in eastern Iran

    NASA Astrophysics Data System (ADS)

    Amini, H.; Gasperini, P.; Zare, M.; Vannucci, G.

    2017-10-01

    Macroseismic intensity values allow assessing the macroseismic parameters of earthquakes such as location, magnitude, and fault orientation. This information is particularly useful for historical earthquakes whose parameters were estimated with low accuracy. Eastern Iran (56°-62°E, 29.5°-35.5°N), which is characterized by several active faults, was selected for this study. Among all earthquakes that occurred in this region, only 29 have some macroseismic information. Their intensity values were reported on various intensity scales. After collecting the descriptions, their intensity values were re-estimated on a uniform intensity scale. Thereafter, the Boxer method was applied to estimate the corresponding macroseismic parameters. Boxer estimates of macroseismic parameters for instrumental earthquakes (after 1964) were found to be consistent with those published in the Global Centroid Moment Tensor catalog (GCMT). Therefore, this method was applied to estimate the location, magnitude, source dimension, and orientation of the earthquakes with macroseismic descriptions in the period 1066-2012. Macroseismic parameters seem to be more reliable than instrumental ones not only for historical earthquakes but also for instrumental earthquakes, especially those that occurred before 1960. Therefore, as the final result of this study, we propose using the macroseismically determined parameters in preparing a catalog of earthquakes before 1960.

  10. Application of the loss estimation tool QLARM in Algeria

    NASA Astrophysics Data System (ADS)

    Rosset, P.; Trendafiloski, G.; Yelles, K.; Semmane, F.; Wyss, M.

    2009-04-01

    During the last six years, WAPMERR has used Quakeloss for real-time loss estimation for more than 440 earthquakes worldwide. Loss reports, posted with an average delay of 30 minutes, include a map showing the average degree of damage in settlements near the epicenter, the total number of fatalities, the total number of injured, and a detailed list of casualties and damage rates in these settlements. After the M6.7 Boumerdes earthquake in 2003, we reported 1690-3660 fatalities. The official death toll was around 2270. Since the El Asnam earthquake, seismic events in Algeria have killed about 6,000 people, injured more than 20,000 and left more than 300,000 homeless. On average, one earthquake with the potential to kill people (M>5.4) happens every three years in Algeria. In the frame of a collaborative project between WAPMERR and CRAAG, we propose to calibrate our new loss estimation tool QLARM (qlarm.ethz.ch) and estimate human losses for likely future earthquakes in Algeria. The parameters needed for this calculation are the following: (1) a ground motion relation and soil amplification factors; (2) the distribution of the building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98), as given in the PAGER database; and (3) population by settlement. Considering the resolution of the available data, we construct (1) point city models for cases where only summary data for the city are available and (2) discrete city models when data regarding city districts are available. Damage and losses are calculated using: (a) vulnerability models pertinent to EMS-98 vulnerability classes, previously validated against the existing ones in Algeria (Tipaza and Chlef); (b) building collapse models pertinent to Algeria as given in the World Housing Encyclopedia; and (c) casualty matrices pertinent to EMS-98 vulnerability classes assembled from HAZUS casualty rates. As a first trial, we simulated the 2003 Boumerdes earthquake to check the validity of the proposed

  11. Global assessment of human losses due to earthquakes

    USGS Publications Warehouse

    Silva, Vitor; Jaiswal, Kishor; Weatherill, Graeme; Crowley, Helen

    2014-01-01

    Current studies have demonstrated a sharp increase in human losses due to earthquakes. These alarming levels of casualties suggest the need for large-scale investment in seismic risk mitigation, which, in turn, requires an adequate understanding of the extent of the losses, and location of the most affected regions. Recent developments in global and uniform datasets such as instrumental and historical earthquake catalogues, population spatial distribution and country-based vulnerability functions, have opened an unprecedented possibility for a reliable assessment of earthquake consequences at a global scale. In this study, a uniform probabilistic seismic hazard assessment (PSHA) model was employed to derive a set of global seismic hazard curves, using the open-source software OpenQuake for seismic hazard and risk analysis. These results were combined with a collection of empirical fatality vulnerability functions and a population dataset to calculate average annual human losses at the country level. The results from this study highlight the regions/countries in the world with a higher seismic risk, and thus where risk reduction measures should be prioritized.
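
    The loss aggregation described here can be outlined in a few lines: convert a hazard curve into occurrence frequencies per intensity bin, multiply by an empirical fatality rate and the exposed population, and sum. The sketch below is a simplified illustration with invented numbers, not output from the OpenQuake-based model chain.

```python
# Minimal sketch of combining a hazard curve with a fatality vulnerability
# function and population to get average annual human losses; all numbers
# are illustrative, not from the paper.
import numpy as np

# Hazard curve: annual frequency of exceeding each intensity level.
intensities = np.array([6.0, 7.0, 8.0, 9.0])           # macroseismic intensity
exceed_freq = np.array([0.02, 0.006, 0.0015, 0.0003])  # per year

# Convert exceedance frequencies to occurrence frequencies per bin.
occ_freq = exceed_freq - np.append(exceed_freq[1:], 0.0)

# Empirical fatality vulnerability: fraction of the exposed population
# killed given shaking at each intensity level (invented values).
fatality_rate = np.array([1e-6, 5e-5, 1e-3, 1e-2])
population = 2_000_000

aal = float(np.sum(occ_freq * fatality_rate * population))
print(f"Average annual fatalities: {aal:.2f}")
```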

  12. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  13. Precise estimation of repeating earthquake moment: Example from parkfield, california

    USGS Publications Warehouse

    Rubinstein, J.L.; Ellsworth, W.L.

    2010-01-01

    We offer a new method for estimating the relative size of repeating earthquakes using the singular value decomposition (SVD). This method takes advantage of the highly coherent waveforms of repeating earthquakes and arrives at far more precise and accurate descriptions of earthquake size than standard catalog techniques allow. We demonstrate that uncertainty in relative moment estimates is reduced from ±75% for standard coda-duration techniques employed by the network to an uncertainty of ±6.6% when the SVD method is used. This implies that a single-station estimate of moment using the SVD method has far less uncertainty than the whole-network estimates of moment based on coda duration. The SVD method offers a significant improvement in our ability to describe the size of repeating earthquakes and thus an opportunity to better understand how they accommodate slip as a function of time.
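
    The essence of the SVD approach, as we read it from this abstract, can be illustrated on synthetic data: nearly identical waveforms that differ mainly in amplitude form a nearly rank-one data matrix, so the first singular vector recovers each event's relative amplitude. The toy sketch below is our own illustration, not the authors' code.

```python
# Toy illustration of relative-amplitude recovery via SVD: stack nearly
# identical waveforms in a matrix; the first left singular vector (scaled
# by the first singular value) gives each event's relative amplitude.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
wavelet = np.exp(-60 * (t - 0.3) ** 2) * np.sin(2 * np.pi * 15 * t)

true_amps = np.array([1.0, 1.8, 0.6, 1.2])             # relative "moments"
X = np.outer(true_amps, wavelet) + 0.01 * rng.standard_normal((4, t.size))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
rel = np.abs(U[:, 0] * s[0])                           # event amplitudes
rel /= rel[0]                                          # relative to event 1
print(np.round(rel, 3))                                # ~ [1.0, 1.8, 0.6, 1.2]
```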

  14. A Model For Rapid Estimation of Economic Loss

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2012-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
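
    A model of the kind described, mapping ground motion and a socioeconomic measure to economic loss, might be calibrated by ordinary least squares on a log-linear form. The sketch below is a hedged illustration on synthetic data; the authors' actual functional form and covariates may differ.

```python
# Hedged illustration only: regress log loss on peak ground acceleration and
# a socioeconomic covariate (here a toy "exposed GDP" proxy). Coefficients
# and data are synthetic, not the authors' calibration.
import numpy as np

rng = np.random.default_rng(1)
n = 200
pga = rng.uniform(0.05, 1.0, n)                 # peak ground acceleration, g
log_gdp = rng.uniform(8, 12, n)                 # log exposed GDP (toy proxy)
log_loss = 1.5 * pga + 0.9 * log_gdp + 0.2 * rng.standard_normal(n)

# Ordinary least squares: log_loss ~ a*pga + b*log_gdp + c
A = np.column_stack([pga, log_gdp, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, log_loss, rcond=None)
print(np.round(coef, 3))                        # ~ [1.5, 0.9, 0.0]
```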

  15. Tsunami source estimate of the 1906 Ecuador-Colombia earthquake

    NASA Astrophysics Data System (ADS)

    Yoshimoto, M.; Kumagai, H.

    2016-12-01

    A great earthquake occurred in 1906 along the Ecuador-Colombia subduction zone. The 1906 earthquake has been interpreted as a megathrust earthquake (Mw 8.8) that ruptured the source regions of smaller earthquakes in 1942, 1958, and 1979 [Kanamori and McNally, BSSA, 1982]. However, the slip distribution of the 1906 earthquake has not been estimated. Recent advances in tsunami simulation methods have opened a way to perform quantitative analysis of the tsunami waveforms of the 1906 earthquake, which were recorded only at trans-Pacific distances. In this study, we inverted the tsunami source of the 1906 earthquake using far-field tsunami data. We used 3 tide gauge records at Honolulu (Hawaii), San Francisco (California), and Ayukawa (Japan). We assumed 11 × 3 sub-faults along the strike and dip directions, respectively, each with dimensions of 50 km × 50 km. We used the phase-corrected tsunami waveforms computed by the method proposed by Watada et al. [JGR, 2014]. Our analysis of the tsunami waveforms of the 1906 event indicated Mw 8.4. The large-slip area was estimated to be in the shallow region, where the resolution was better, as shown by our checkerboard test. We compared the observed tsunami waveform at the Honolulu station with the simulated tsunami waveforms generated from each sub-fault. This comparison indicated that the arrival times of the simulated tsunami waveforms from the large-slip area were consistent with the observed waveform, supporting our inversion results. Our results show that the source region of the 1906 earthquake did not overlap those of the 1942, 1958, and 1979 earthquakes.

  16. Estimation of vulnerability functions based on a global earthquake damage database

    NASA Astrophysics Data System (ADS)

    Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.

    2009-04-01

    Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses for insurance or for mitigation. For most areas of the world there is currently insufficient knowledge of the building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is the performance of the building stock in past earthquakes, using damage databases and comparison with consistent estimates of ground motion. This paper presents a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more recently published and some unpublished data. It assembles, in a single, organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys carried out since the 1960s. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels and the location of each survey. It is mounted on a GIS mapping system and links to the USGS ShakeMaps of each earthquake, which enables the macroseismic intensity and other ground motion parameters to be defined for each survey and location. Fields of data for each building damage survey include: basic earthquake data and its sources; details of the survey location, intensity and other ground motion observations or assignments at that location; building and damage level classification, and tabulated damage survey results; and photos showing typical examples of damage. In future planned extensions of the database information on human

  17. Building losses assessment for Lushan earthquake utilization multisource remote sensing data and GIS

    NASA Astrophysics Data System (ADS)

    Nie, Juan; Yang, Siquan; Fan, Yida; Wen, Qi; Xu, Feng; Li, Lingling

    2015-12-01

    On 20 April 2013, a catastrophic earthquake of magnitude 7.0 struck Lushan County, northwestern Sichuan Province, China. This event is named the Lushan earthquake in China. The Lushan earthquake damaged many buildings, and the extent of building loss is one basis for emergency relief and reconstruction. Thus, the building losses of the Lushan earthquake must be assessed. Remote sensing data and geographic information systems (GIS) can be employed to assess the building losses of the Lushan earthquake. The building loss assessment results for the Lushan earthquake disaster, obtained using multisource remote sensing data and GIS, are reported in this paper. The assessment results indicated that 3.2% of buildings in the affected areas completely collapsed, while 12% and 12.5% of buildings were heavily damaged and slightly damaged, respectively. The completely collapsed, heavily damaged, and slightly damaged buildings were mainly located in Danling County, Hongya County, Lushan County, Mingshan County, Qionglai County, Tianquan County, and Yingjing County.

  18. Benefits of multidisciplinary collaboration for earthquake casualty estimation models: recent case studies

    NASA Astrophysics Data System (ADS)

    So, E.

    2010-12-01

    Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Increasing our understanding of what contributes to casualties in earthquakes involves coordinated data-gathering efforts among disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering complex physical pathways, loss models based purely on historic casualty data, or even worse, on rates derived from other countries, will be of very limited value. What is more, as the world's population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must also integrate different areas of expertise, including engineering, public health and medicine. Research is needed to find methods to achieve consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities in the cities which most need them. Coupling the theories and findings from

  19. Seismic Risk Assessment and Loss Estimation for Tbilisi City

    NASA Astrophysics Data System (ADS)

    Tsereteli, Nino; Alania, Victor; Varazanashvili, Otar; Gugeshashvili, Tengiz; Arabidze, Vakhtang; Arevadze, Nika; Tsereteli, Emili; Gaphrindashvili, Giorgi; Gventcadze, Alexander; Goguadze, Nino; Vephkhvadze, Sophio

    2013-04-01

    The proper assessment of seismic risk is of crucial importance for the protection of society and sustainable economic development of cities, as it is an essential part of seismic hazard reduction. Estimating seismic risk and losses is a complicated task. There is always a deficiency of knowledge on the real seismic hazard, local site effects, the inventory of elements at risk, and infrastructure vulnerability, especially in developing countries. Lately, great efforts have been made in the frame of the EMME (Earthquake Model for the Middle East Region) project, where work packages WP1, WP2, WP3 and WP4 addressed gaps related to seismic hazard assessment and vulnerability analysis. Finally, in the frame of work package WP5, "City Scenario", additional work in this direction and detailed investigation of local site conditions and the active fault (3D) beneath Tbilisi were carried out. For estimating economic losses, an algorithm was prepared taking into account the obtained inventory. The long-term usage of buildings is very complex. It relates to the reliability and durability of buildings. The long-term usage and durability of a building are characterized by the concept of depreciation. Depreciation of an entire building is calculated by summing the products of the individual construction units' depreciation rates and the corresponding value of these units within the building. This method of calculation is based on the assumption that depreciation is proportional to the building's (construction's) useful life. We used this methodology to create a matrix which provides a way to evaluate the depreciation rates of buildings of different types and construction periods and to determine their corresponding value (see the sketch below). Finally, losses were estimated for shaking with 10%, 5% and 2% exceedance probability in 50 years. Losses resulting from a scenario earthquake (an earthquake with the possible maximum magnitude) were also estimated.
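
    The depreciation calculation described above is a value-weighted sum over construction units. A minimal sketch, with illustrative (not Tbilisi-specific) rates and value shares:

```python
# Minimal sketch of the depreciation calculation described above: the
# depreciation of a building is the value-weighted sum of the depreciation
# rates of its construction units. Rates and value shares are illustrative.
import numpy as np

units = ["foundation", "walls", "roof", "finishes", "services"]
value_share = np.array([0.20, 0.35, 0.15, 0.18, 0.12])  # fractions of value
depr_rate = np.array([0.30, 0.25, 0.45, 0.60, 0.50])    # per-unit depreciation

building_depreciation = float(np.dot(value_share, depr_rate))
print(f"Building depreciation: {building_depreciation:.1%}")
```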

  20. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U. S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits. [DOI: 10.1193/1.3480331]
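
    The model as described admits a compact implementation: the fatality rate at intensity S is the lognormal CDF Phi(ln(S/theta)/beta), and total deaths are the exposure-weighted sum over intensity levels. In the sketch below, theta and beta are placeholder values, not PAGER's fitted country parameters.

```python
# Sketch of the two-parameter lognormal fatality-rate model described above.
# theta and beta below are hypothetical placeholders, not fitted parameters.
from math import erf, log, sqrt

def fatality_rate(s, theta, beta):
    """Lognormal CDF of shaking intensity s: Phi(ln(s/theta)/beta)."""
    return 0.5 * (1.0 + erf(log(s / theta) / (beta * sqrt(2.0))))

theta, beta = 12.0, 0.3                                    # placeholders
exposure = {6: 500_000, 7: 200_000, 8: 50_000, 9: 8_000}   # people per MMI

# Total fatalities: exposure at each intensity times the rate at that level.
total = sum(pop * fatality_rate(mmi, theta, beta)
            for mmi, pop in exposure.items())
print(f"Estimated fatalities: {total:,.0f}")
```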

  1. Earthquake Loss Assessment for Post-2000 Buildings in Istanbul

    NASA Astrophysics Data System (ADS)

    Hancilar, Ufuk; Cakti, Eser; Sesetyan, Karin

    2016-04-01

    Current building inventory of Istanbul city, which was compiled by street surveys in 2008, consists of more than 1.2 million buildings. The inventory provides information on lateral-load carrying system, number of floors and construction year, where almost 200,000 buildings are reinforced concrete frame type structures built after 2000. These buildings are assumed to be designed based on the provisions of Turkish Earthquake Resistant Design Code (1998) and are tagged as high-code buildings. However, there are no empirical or analytical fragility functions associated with these types of buildings. In this study we perform a damage and economic loss assessment exercise focusing on the post-2000 building stock of Istanbul. Three M7.4 scenario earthquakes near the city represent the input ground motion. As for the fragility functions, those provided by Hancilar and Cakti (2015) for code complying reinforced concrete frames are used. The results are compared with the number of damaged buildings given in the loss assessment studies available in the literature wherein expert judgment based fragilities for post-2000 buildings were used.

  2. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, it is difficult to accurately estimate earthquake magnitude in the early nucleation stage of an earthquake, because only a few stations have been triggered and the recorded seismic waveforms are short. One feasible method to measure the size of an earthquake is to extract amplitude parameters within the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the whole rupture of the causative fault. Instead of adopting amplitude contents in a fixed-length time window, which may underestimate magnitude for large-magnitude events, we propose a fast, robust and unsaturated approach to estimate earthquake magnitudes. In this new method, the EEW system can initially give a lower-bound magnitude in a time window of a few seconds and then update the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for adopting amplitudes. One is a pure P-wave time window (PTW); the other is a whole-wave time window after the P-wave arrival (WTW). The peak displacement amplitudes on the vertical component were measured in 1- to 10-s-length PTW and WTW, respectively. Linear regression analyses were implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude (see the sketch below), using the earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The result shows that using the WTW to estimate magnitudes is accompanied by a smaller standard deviation. In addition, large uncertainties exist in the 1-second time window. Therefore, for magnitude estimation we suggest the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTW.
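
    The regression step can be sketched as follows on synthetic data (not the actual Taiwan records): fit log Pd against magnitude and log hypocentral distance, then invert the relation to estimate magnitude from an observed peak displacement. The functional form log Pd = aM + b log R + c is an assumption consistent with the abstract's description.

```python
# Illustrative sketch of the regression described above (synthetic data):
# relate log peak displacement Pd to magnitude M and hypocentral distance R,
# then invert the relation to estimate magnitude.
import numpy as np

rng = np.random.default_rng(2)
n = 300
M = rng.uniform(5.5, 7.5, n)
logR = np.log10(rng.uniform(10, 200, n))        # hypocentral distance, km
log_pd = 0.8 * M - 1.4 * logR - 3.0 + 0.15 * rng.standard_normal(n)

# Fit log_pd ~ a*M + b*log10(R) + c by least squares.
A = np.column_stack([M, logR, np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(A, log_pd, rcond=None)

# Magnitude estimate for an observed Pd at a known distance (hypothetical):
pd_obs, R_obs = 0.05, 60.0                      # cm, km
M_est = (np.log10(pd_obs) - b * np.log10(R_obs) - c) / a
print(f"a={a:.2f} b={b:.2f} c={c:.2f}  M_est={M_est:.2f}")
```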

  3. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.

  4. A cluster-based decision support system for estimating earthquake damage and casualties.

    PubMed

    Aleskerov, Fuad; Say, Arzu Iseri; Toker, Aysegül; Akin, H Levent; Altay, Gülay

    2005-09-01

    This paper describes a Decision Support System for Disaster Management (DSS-DM) to aid operational and strategic planning and policy-making for disaster mitigation and preparedness in a less-developed infrastructural context. Such contexts require a more flexible and robust system for fast prediction of damage and losses. The proposed system is specifically designed for earthquake scenarios, estimating the extent of human losses and injuries, as well as the need for temporary shelters. The DSS-DM uses a scenario approach to calculate the aforementioned parameters at the district and sub-district level at different earthquake intensities. The following system modules have been created: clusters (buildings) with respect to use; buildings with respect to construction typology; and estimations of damage to clusters, human losses and injuries, and the need for shelters. The paper not only examines the components of the DSS-DM, but also looks at its application in Besiktas municipality in the city of Istanbul, Turkey.

  5. Earthquake Early Warning with Seismogeodesy: Detection, Location, and Magnitude Estimation

    NASA Astrophysics Data System (ADS)

    Goldberg, D.; Bock, Y.; Melgar, D.

    2016-12-01

    Earthquake early warning is critical to reducing injuries and casualties in case of a large magnitude earthquake. The system must rely on near-source data to minimize the time between event onset and issuance of a warning. Early warning systems typically use seismic instruments (seismometers and accelerometers), but these instruments experience difficulty maintaining reliable data in the near-source region and undergo magnitude saturation for large events. Global Navigation Satellite System (GNSS) instruments capture the long period motions and have been shown to produce robust estimates of the true size of the earthquake source. However, GNSS is often overlooked in this context in part because it is not precise enough to record the first seismic wave arrivals (P-wave detection), an important consideration for issuing an early warning. GNSS instruments are becoming integrated into early warning, but are not yet fully exploited. Our approach involves the combination of direct measurements from collocated GNSS and accelerometer stations to estimate broadband coseismic displacement and velocity waveforms [Bock et al., 2011], a method known as seismogeodesy. We present the prototype seismogeodetic early warning system developed at Scripps and demonstrate that the seismogeodetic dataset can be used for P-wave detection, hypocenter location, and shaking onset determination. We discuss uncertainties in each of these estimates and include discussion of the sensitivity of our estimates as a function of the azimuthal distribution of monitoring stations. The seismogeodetic combination has previously been shown to be immune to magnitude saturation [Crowell et al., 2013; Melgar et al., 2015]. Rapid magnitude estimation is an important product in earthquake early warning, and is the critical metric in current tsunami hazard warnings. Using the seismogeodetic approach, we refine earthquake magnitude scaling using P-wave amplitudes (Pd) and peak ground displacements (PGD) for a

  6. Rapid Ice Mass Loss: Does It Have an Influence on Earthquake Occurrence in Southern Alaska?

    NASA Technical Reports Server (NTRS)

    Sauber, Jeanne M.

    2008-01-01

    The glaciers of southern Alaska are extensive, and many of them have undergone gigatons of ice wastage on time scales on the order of the seismic cycle. Since the ice loss occurs directly above a shallow main thrust zone associated with subduction of the Pacific-Yakutat plate beneath continental Alaska, the region between the Malaspina and Bering Glaciers is an excellent test site for evaluating the importance of recent ice wastage on earthquake faulting potential. We demonstrate the influence of cumulative glacial mass loss following the 1899 Yakataga earthquake (M=8.1) by using a two-dimensional finite element model with a simple representation of ice fluctuations to calculate the incremental stresses and the change in the fault stability margin (FSM) along the main thrust zone (MTZ) and on the surface. Along the MTZ, our results indicate a decrease in FSM between 1899 and the 1979 St. Elias earthquake (M=7.4) of 0.2-1.2 MPa over an 80 km region between the coast and the 1979 aftershock zone; at the surface, the estimated FSM change was larger but more localized to the lower reaches of glacial ablation zones. The ice-induced stresses were large enough, in theory, to promote the occurrence of shallow thrust earthquakes. To empirically test the influence of short-term ice fluctuations on fault stability, we compared the seismic rate from a reference background time period (1988-1992) against other time periods (1993-2006) with variable ice or tectonic change characteristics. We found that the frequency of small tectonic events in the Icy Bay region increased in 2002-2006 relative to the background seismic rate. We hypothesize that this was due to a significant increase in the rate of ice wastage in 2002-2006, rather than to the M=7.9 2002 Denali earthquake, located more than 100 km away.

  7. A Spectral Estimate of Average Slip in Earthquakes

    NASA Astrophysics Data System (ADS)

    Boatwright, J.; Hanks, T. C.

    2014-12-01

    We demonstrate that the high-frequency acceleration spectral level a0 of an ω-square source spectrum is directly proportional to the average slip of the earthquake Δu divided by the travel time to the station r/β and multiplied by the radiation pattern Fs: a0 = 1.37 Fs (β/r) Δu. This simple relation is robust but depends implicitly on the assumed relation between the corner frequency and source radius, which we take from the Brune (1970, JGR) model. We use this relation to estimate average slip by fitting spectral ratios with smaller earthquakes as empirical Green's functions. For a pair of Mw = 1.8 and 1.2 earthquakes in Parkfield, we fit the spectral ratios published by Nadeau et al. (1994, BSSA) to obtain 0.39 and 0.10 cm. For the Mw = 3.9 earthquake that occurred on Oct 29, 2012, at the Pinnacles, we fit spectral ratios formed with respect to an Md = 2.4 aftershock to obtain 4.4 cm. Using the Sato and Hirasawa (1973, JPE) model instead of the Brune model increases the estimates of average slip by 75%. These estimates of average slip are factors of 5-40 (or 3-23) times less than the average slips of 3.89 cm and 23.3 cm estimated by Nadeau and Johnson (1998, BSSA) from the slip rates, average seismic moments and recurrence intervals for the two sequences to which they associate these earthquakes. The most reasonable explanation for this discrepancy is that the stress release and rupture processes of these earthquakes are strongly heterogeneous. However, the fits to the spectral ratios do not indicate that the spectral shapes are distorted in the first two octaves above the corner frequency.
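
    Rearranging the quoted relation a0 = 1.37 Fs (β/r) Δu gives a direct estimate of average slip from the high-frequency spectral level. A minimal sketch with hypothetical input values:

```python
# Direct application of the relation quoted above, a0 = 1.37*Fs*(beta/r)*du,
# rearranged to estimate average slip du from the high-frequency acceleration
# spectral level a0. All input values below are hypothetical.
def average_slip(a0, r, beta=3500.0, Fs=0.6):
    """du = a0 * r / (1.37 * Fs * beta); SI units (a0 in m/s, r and beta in m, m/s)."""
    return a0 * r / (1.37 * Fs * beta)

# e.g. a spectral level of 2e-3 m/s recorded 10 km from the source:
print(f"{average_slip(a0=2e-3, r=10_000.0) * 100:.2f} cm")
```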

  8. Development of a Global Slope Dataset for Estimation of Landslide Occurrence Resulting from Earthquakes

    USGS Publications Warehouse

    Verdin, Kristine L.; Godt, Jonathan W.; Funk, Christopher C.; Pedreros, Diego; Worstell, Bruce; Verdin, James

    2007-01-01

    Landslides resulting from earthquakes can cause widespread loss of life and damage to critical infrastructure. The U.S. Geological Survey (USGS) has developed an alarm system, PAGER (Prompt Assessment of Global Earthquakes for Response), that aims to provide timely information to emergency relief organizations on the impact of earthquakes. Landslides are responsible for many of the damaging effects following large earthquakes in mountainous regions, and thus data defining the topographic relief and slope are critical to the PAGER system. A new global topographic dataset was developed to aid in rapidly estimating landslide potential following large earthquakes. We used the remotely-sensed elevation data collected as part of the Shuttle Radar Topography Mission (SRTM) to generate a slope dataset with nearly global coverage. Slopes from the SRTM data, computed at 3-arc-second resolution, were summarized at 30-arc-second resolution, along with statistics developed to describe the distribution of slope within each 30-arc-second pixel. Because there are many small areas lacking SRTM data and the northern limit of the SRTM mission was lat 60°N., statistical methods referencing other elevation data were used to fill the voids within the dataset and to extrapolate the data north of 60°. The dataset will be used in the PAGER system to rapidly assess the susceptibility of areas to landsliding following large earthquakes.
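
    The summarization step described, aggregating 3-arc-second slopes to 30-arc-second cells with distribution statistics, can be sketched with simple block aggregation. The toy example below uses random data in place of SRTM slopes.

```python
# Sketch of the aggregation step described above (toy arrays, not SRTM):
# fine-resolution slope values are summarized onto a coarser grid with
# statistics describing the within-cell slope distribution.
import numpy as np

fine = np.abs(np.random.default_rng(3).standard_normal((60, 60))) * 10  # slope, %
block = 10                                    # e.g. 3-arcsec -> 30-arcsec

h, w = fine.shape
tiles = fine.reshape(h // block, block, w // block, block).swapaxes(1, 2)
mean_slope = tiles.mean(axis=(2, 3))          # per-cell mean slope
max_slope = tiles.max(axis=(2, 3))            # per-cell maximum slope
steep_frac = (tiles > 15).mean(axis=(2, 3))   # fraction steeper than 15%
print(mean_slope.shape, float(max_slope.max()), float(steep_frac.mean()))
```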

  10. Large Earthquakes in Developing Countries: Estimating and Reducing their Consequences

    NASA Astrophysics Data System (ADS)

    Tucker, B. E.

    2003-12-01

    Recent efforts to reduce the risk of earthquakes in developing countries have been diverse, earnest, and inadequate. The earthquake risk in developing countries is large and growing rapidly. It is largely ignored. Unless something is done - quickly - to reduce it, both developing and developed countries will suffer human and economic losses far greater than have been experienced in the past. GeoHazards International (GHI) is a nonprofit organization that has attempted to reduce the death and suffering caused by earthquakes in the world's most vulnerable communities, through preparedness, mitigation and prevention. Its approach has included raising awareness, strengthening local institutions and launching mitigation activities, particularly for schools. GHI and its partners around the world have achieved some success: thousands of school children are safer, hundreds of cities are aware of their risk, tens of cities have been assessed and advised, and some local organizations have been strengthened. But there is disturbing evidence that what is being done is insufficient. The problem outpaces the cure. A new program is now being considered that would attempt to improve earthquake-resistant construction of schools, internationally, by publicizing well-managed programs around the world that design, construct and maintain earthquake-resistant schools. While focused on schools, this program might have broader applications in the future.

  11. An Account of Preliminary Landslide Damage and Losses Resulting from the February 28, 2001, Nisqually, Washington, Earthquake

    USGS Publications Warehouse

    Highland, Lynn M.

    2003-01-01

    The February 28, 2001, Nisqually, Washington, earthquake (Mw = 6.8) damaged an area of the northwestern United States that had previously experienced two major historical earthquakes, in 1949 and in 1965. Preliminary estimates of direct monetary losses from damage due to earthquake-induced landslides are approximately $34.3 million. However, this figure does not include costs from damage to the elevated portion of the Alaskan Way Viaduct, a major highway through downtown Seattle, Washington, which will be repaired or rebuilt depending on the future decision of local and state authorities. There is much debate as to the cause of the damage to this viaduct, with evaluations ranging from earthquake shaking and liquefaction to lateral spreading, or a combination of these effects. If the viaduct is included in the costs, the losses increase to $500+ million (if it is repaired) or to more than $1+ billion (if it is replaced). The preliminary estimate of losses due to all causes of earthquake damage is approximately $2 billion, which includes temporary repairs to the Alaskan Way Viaduct. These preliminary dollar figures will no doubt increase when plans and decisions regarding the Viaduct are completed.

  12. Estimation of the magnitudes and epicenters of Philippine historical earthquakes

    NASA Astrophysics Data System (ADS)

    Bautista, Maria Leonila P.; Oike, Kazuo

    2000-02-01

    The magnitudes and epicenters of Philippine earthquakes from 1589 to 1895 are estimated based on the review, evaluation and interpretation of historical accounts and descriptions. The first step involves the determination of magnitude-felt area relations for the Philippines for use in the magnitude estimation. The data used were the earthquake reports of 86 recent, shallow events with well-described effects and known magnitude values. Intensities are assigned according to the modified Mercalli intensity scale of I to XII. The areas enclosed by intensities III to IX [A(III) to A(IX)] are measured and related to magnitude values. The most robust relations are found for magnitudes relating to A(VI), A(VII), A(VIII) and A(IX). Historical earthquake data are obtained from primary sources in libraries in the Philippines and Spain. Most of these accounts were made by Spanish priests and officials stationed in the Philippines during the 15th to 19th centuries. More than 3000 events are catalogued and interpreted, and their intensities determined by considering the possible effects of local site conditions, the type of construction, and the number and locations of existing towns, so as to assess completeness of reporting. Of these events, 485 earthquakes with the largest number of accounts, or with at least a minimum report of damage, are selected. The historical epicenters are estimated based on the resulting generalized isoseismal maps, augmented by information on recent seismicity and the location of known tectonic structures. Their magnitudes are estimated by using the previously determined magnitude-felt area equations for recent events. Although historical epicenters are mostly found to lie on known tectonic structures, a few are found to lie along structures that show little activity during the instrumental period. A comparison of the magnitude distributions of historical and recent events showed that only the period 1850 to 1900 may be considered well-reported in terms of

  13. An empirical evolutionary magnitude estimation for early warning of earthquakes

    NASA Astrophysics Data System (ADS)

    Chen, Da-Yi; Wu, Yih-Min; Chin, Tai-Lin

    2017-03-01

    It is difficult for an earthquake early warning (EEW) system to provide consistent magnitude estimates in the early stage of an earthquake occurrence, because only a few stations have been triggered and few seismic signals have been recorded. One feasible method to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the recorded waveforms after the P-wave arrival. However, for a large-magnitude earthquake (Mw > 7.0), the time to complete the whole rupture of the corresponding fault may be very long, and the magnitude may not be correctly predicted from the initial portion of the seismograms. To estimate the magnitude of a large earthquake in real time, the amplitude parameters should be updated with the ongoing waveforms instead of adopting amplitude contents in a predefined fixed-length time window, since the latter may underestimate the magnitude of large-magnitude events. In this paper, we propose a fast, robust and less-saturated approach to estimate earthquake magnitudes. The EEW system initially gives a lower bound of the magnitude in a time window of a few seconds and then updates the magnitude, with less saturation, by extending the time window. Here we compared two kinds of time windows for measuring amplitudes. One is the P-wave time window (PTW) after the P-wave arrival; the other is the whole-wave time window after the P-wave arrival (WTW), which may include both P and S waves. One- to ten-second time windows for both PTW and WTW are considered to measure the peak ground displacement on the vertical component of the waveforms. Linear regression analyses are run at each time step (1- to 10-s time intervals) to find the empirical relationships among peak ground displacement, hypocentral distance, and magnitude, using earthquake records from 1993 to 2012 in Taiwan with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes gives a smaller standard deviation than the PTW. The

  14. Monitoring road losses for Lushan 7.0 earthquake disaster utilization multisource remote sensing images

    NASA Astrophysics Data System (ADS)

    Huang, He; Yang, Siquan; Li, Suju; He, Haixia; Liu, Ming; Xu, Feng; Lin, Yueguan

    2015-12-01

    Earthquakes are among the major natural disasters in the world. At 8:02 on 20 April 2013, a catastrophic earthquake of surface-wave magnitude Ms 7.0 occurred in Sichuan province, China. The epicenter of this earthquake was located in the administrative region of Lushan County, and the event was named the Lushan earthquake. The Lushan earthquake caused heavy casualties and property losses in Sichuan province. After the earthquake, various emergency relief supplies had to be transported to the affected areas. The transportation network is the basis for the transportation and allocation of emergency relief supplies; thus, the road losses of the Lushan earthquake had to be monitored. The road loss monitoring results for the Lushan earthquake disaster, obtained using multisource remote sensing images, are reported in this paper. The monitoring results indicated that 166 meters of national roads, 3707 meters of provincial roads, 3396 meters of county roads, 7254 meters of township roads, and 3943 meters of village roads were damaged during the Lushan earthquake disaster. The damaged roads were mainly located in Lushan County, Baoxing County, Tianquan County, Yucheng County, Mingshan County, and Qionglai County. The results can also be used as a decision-making information source by the disaster management government in China.

  15. Rapid estimate of earthquake source duration: application to tsunami warning.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier

    2016-04-01

    We present a method for estimating the source duration of a fault rupture, based on the high-frequency envelope of teleseismic P-waves, inspired by the original work of Ni et al. (2005). The main interest of this seismic parameter is to detect abnormally low-velocity ruptures that are characteristic of the so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated by comparison with two other independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process that determines the source time function (M. Vallée et al., 2011). The estimated source duration is also confronted with the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (named PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the point of view of operational tsunami warning, the numerical simulations of tsunamis are deeply dependent on the source estimation: the better the source estimate, the better the tsunami forecast. The source duration is not directly injected into the numerical simulations of the tsunami, because the kinematics of the source are presently totally ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake that occurs in the shallower part of a subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions will be decreased while the slip distribution is increased, like a 'compact' source (Okal and Hébert, 2007). Inversely, a rapid 'snappy' earthquake that has poor tsunami excitation power will be characterized by a higher rigidity modulus, and will produce weaker displacement and smaller source dimensions than a 'normal' earthquake. References: Clément, J
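
    As a generic illustration of duration measurement from a high-frequency envelope (not the cited method's actual filter band or thresholds), the sketch below builds a moving-RMS envelope of a synthetic record and measures the time it spends above a noise-based threshold.

```python
# Generic sketch of estimating rupture duration from a high-frequency
# P-wave envelope (synthetic trace; threshold and window are invented).
import numpy as np

fs = 20.0                                       # samples/s
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(4)

# Toy record: ~40 s of high-frequency energy on top of background noise.
trace = 0.05 * rng.standard_normal(t.size)
sig = (t > 20) & (t < 60)
trace[sig] += np.sin(2 * np.pi * 2.0 * t[sig])

# Envelope via moving RMS, then duration above a noise-based threshold.
win = int(2 * fs)
env = np.sqrt(np.convolve(trace ** 2, np.ones(win) / win, mode="same"))
threshold = 3 * np.median(env)
duration = (env > threshold).sum() / fs
print(f"Estimated source duration: {duration:.1f} s")   # ~40 s expected
```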

  16. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  17. Emergency Physician Estimation of Blood Loss

    DTIC Science & Technology

    2011-01-01

  18. Global earthquake casualties due to secondary effects: A quantitative analysis for improving rapid loss analyses

    USGS Publications Warehouse

    Marano, K.D.; Wald, D.J.; Allen, T.I.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER's overall assessment of earthquakes losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra-Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address potential for each hazard (Earle et al., Proceedings of the 14th World Conference of the Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability. © Springer Science+Business Media B.V. 2009.

  19. Global Earthquake Casualties due to Secondary Effects: A Quantitative Analysis for Improving PAGER Losses

    USGS Publications Warehouse

    Wald, David J.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey’s (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER’s overall assessment of earthquake losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra–Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address potential for each hazard (Earle et al., Proceedings of the 14th World Conference on Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability.

  20. Estimating The Magnitude Ms of Historical Earthquakes From Macroseismic Observations

    NASA Astrophysics Data System (ADS)

    Kaiser, D.; Gutdeutsch, R.; Jentzsch, G.

    Magnitudes of earthquakes earlier than 1900 are derived from macroseismic observations, i.e. the maximum intensity I0, or isoseismal radii RI of different intensities I, and the focal depth h. The purpose of our study is to compare the importance of I0 and RI as input parameters for the estimation of the surface wave magnitude MS of historical earthquakes and to derive appropriate empirical relationships. We use carefully selected instrumental parts (since 1900) of two earthquake catalogues: Kárník 1996 (Europe and the Mediterranean) and Shebalin et al. 1998 (Central and Eastern Europe). In order to establish relationships we use orthogonal regression, because we presume that all parameters are in error and because it has the advantage of providing reversible regression equations. Estimation of MS from I0 and h: as correlation analysis of Kárník's catalogue shows no significant influence of h on the relation between MS and I0, we obtain MS = 0.55 I0 + 1.26, with equivalent standard errors of ±0.44 in MS and ±0.86 in I0. The practical use of this relationship is limited due to rather large errors. In addition we observe systematic regional variations which need further investigation. We were able to apply much more stringent selection criteria to the Shebalin catalogue and found a substantial improvement of the correlation when considering the influence of h [km], in contrast to Kárník's catalogue. We obtain MS = 0.65 I0 + 1.90 log(h) - 1.62, with a standard error of ±0.21 in MS. We recommend this equation for application. Estimation of MS from average isoseismal radii RI: in order to establish a relationship between MS and RI we apply a theoretically based model which takes into account both exponential decay and a geometrical spreading factor. We find MS = 0.695 I + 2.14 log(RI) + 0.00329 RI - 1.93, with a standard error of ±0.32 in MS. Here I is the macroseismic intensity (I = 3 ... 9) of the isoseismal RI [km]. With this equation it is possible to reliably estimate MS and we recommend
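
    The two relations quoted in the abstract translate directly into code; the minimal sketch below uses the regression coefficients given above, while the example input values are hypothetical.

```python
import math

def ms_from_epicentral_intensity(i0: float, depth_km: float) -> float:
    """Relation recommended above (Shebalin catalogue):
    MS = 0.65*I0 + 1.90*log10(h) - 1.62, standard error ~ +/-0.21."""
    return 0.65 * i0 + 1.90 * math.log10(depth_km) - 1.62

def ms_from_isoseismal_radius(intensity: float, radius_km: float) -> float:
    """Relation from average isoseismal radii:
    MS = 0.695*I + 2.14*log10(RI) + 0.00329*RI - 1.93, +/-0.32."""
    return (0.695 * intensity + 2.14 * math.log10(radius_km)
            + 0.00329 * radius_km - 1.93)

# Hypothetical historical event: I0 = 8 at an assumed 10 km focal depth
print(round(ms_from_epicentral_intensity(8, 10), 2))  # -> 5.48
```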

  1. Estimating earthquake location and magnitude from seismic intensity data

    USGS Publications Warehouse

    Bakun, W.H.; Wentworth, C.M.

    1997-01-01

    Analysis of Modified Mercalli intensity (MMI) observations for a training set of 22 California earthquakes suggests a strategy for bounding the epicentral region and moment magnitude M from MMI observations only. We define an intensity magnitude MI that is calibrated to be equal in the mean to M: MI = mean(Mi), where Mi = (MMIi + 3.29 + 0.0206 Δi)/1.68 and Δi is the epicentral distance (km) of observation MMIi. The epicentral region is bounded by contours of rms[MI] = rms(MI - Mi) - rms0(MI - Mi), where rms is the root mean square and rms0(MI - Mi) is the minimum rms over a grid of assumed epicenters; empirical site corrections and a distance weighting function are used. Empirical contour values for bounding the epicenter location, and empirical bounds for M estimated from MI, appropriate for different levels of confidence and different quantities of intensity observations, are tabulated. The epicentral region bounds and MI obtained for an independent test set of western California earthquakes are consistent with the instrumental epicenters and moment magnitudes of these earthquakes. The analysis strategy is particularly appropriate for the evaluation of pre-1900 earthquakes for which the only available data are a sparse set of intensity observations.
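
    The calibration above lends itself to a simple grid search; the sketch below is a minimal illustration of that idea, omitting the site corrections and distance weighting mentioned in the abstract, and using a plain haversine distance helper.

```python
import math
import numpy as np

def mi_single(mmi: float, delta_km: float) -> float:
    """Bakun and Wentworth (1997): Mi = (MMI + 3.29 + 0.0206*Delta)/1.68."""
    return (mmi + 3.29 + 0.0206 * delta_km) / 1.68

def dist_km(p, q):
    """Great-circle (haversine) distance between (lat, lon) pairs, in km."""
    (la1, lo1), (la2, lo2) = map(lambda t: tuple(map(math.radians, t)), (p, q))
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def grid_search_epicenter(obs, trials):
    """obs: iterable of (lat, lon, MMI); trials: candidate epicenters.
    Returns (rms, best_point, MI), minimizing rms(MI - Mi) over the grid."""
    best = None
    for p in trials:
        mi = np.array([mi_single(m, dist_km(p, (la, lo))) for la, lo, m in obs])
        MI = mi.mean()
        rms = math.sqrt(((MI - mi) ** 2).mean())
        if best is None or rms < best[0]:
            best = (rms, p, MI)
    return best
```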

  2. Post-Earthquake People Loss Evaluation Based on Seismic Multi-Level Hybrid Grid: A Case Study on Yushu Ms 7.1 Earthquake in China

    NASA Astrophysics Data System (ADS)

    Yang, Xiaohong; Xie, Zhong; Ling, Feng; Luo, Xiangang; Zhong, Ming

    2016-01-01

    People loss is one of the most important pieces of information for the government after an earthquake, because it determines the appropriate rescue level. However, existing evaluation methods often treat the entire stricken region as a single assessment area and disregard the spatial disparity of influencing factors. As a consequence, results are inaccurate. In order to address this problem, this paper proposes a post-earthquake evaluation approach to people loss based on the seismic multi-level hybrid grid (SMHG). In SMHG, the whole area is divided into grids at different levels with various sizes. In this manner, the efficiency of data management is improved. With SMHG, disaster statistics can easily be counted both per administrative unit and per unit area. The proposed approach was then applied to investigate the Yushu Ms 7.1 earthquake in China. Results revealed that the estimated number of deaths varied with the choice of exposure grid. Among the different grids, the 50×50 exposure grid gave the most satisfactory results, with an estimated 2,203 deaths, an 18.3% deviation from the actual loss. People loss results obtained through the proposed approach were more accurate than those obtained through traditional GIS-based methods.
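
    The core idea of gridding the exposure rather than treating the region as a whole can be sketched in a few lines; the fatality-rate curve and all numbers below are hypothetical stand-ins, not the SMHG model itself.

```python
import numpy as np

# Toy gridded exposure: people and shaking intensity per grid cell.
population = np.array([[1200, 300], [4500, 80]])   # people per cell
intensity  = np.array([[9.0, 8.0], [9.5, 7.0]])    # intensity per cell

def fatality_rate(mmi: np.ndarray) -> np.ndarray:
    """Illustrative logistic fatality-rate curve: negligible below
    MMI 7, rising steeply around MMI 9.5 (assumed parameters)."""
    return 1.0 / (1.0 + np.exp(-(mmi - 9.5) * 2.0)) * 0.05

# Per-cell deaths, then the regional total as their sum.
deaths_per_cell = population * fatality_rate(intensity)
print(deaths_per_cell.round(1), round(float(deaths_per_cell.sum()), 1))
```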

  3. Clinical Image: Visual Estimation of Blood Loss.

    PubMed

    Donham, Benjamin; Frondozo, Robby; Petro, Michael; Reynolds, Andrew; Swisher, Jonathan; Knight, Ryan M

    Military prehospital providers frequently have to make important clinical decisions with only limited objective information and vital signs. Because of this, accurate estimation of blood loss, at the point of injury, can augment any available objective information. Prior studies have shown that individuals significantly overestimate the amount of blood loss when the amount of hemorrhage is small, and they tend to underestimate the amount of blood loss with larger amounts of hemorrhage. Furthermore, the type of surface on which the blood is deposited can impact the visual estimation of the amount of hemorrhage. To aid providers with the ability to accurately estimate blood loss, we took several units of expired packed red blood cells and deposited them in different ways on varying surfaces to mimic the visual impression of combat casualties.

  4. Public Release of Estimated Impact-Based Earthquake Alerts - An Update to the U.S. Geological Survey PAGER System

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Jaiswal, K. S.; Marano, K.; Hearne, M.; Earle, P. S.; So, E.; Garcia, D.; Hayes, G. P.; Mathias, S.; Applegate, D.; Bausch, D.

    2010-12-01

    The U.S. Geological Survey (USGS) has begun publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses. These estimates should significantly enhance the utility of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, which has been providing estimated ShakeMaps and computing population exposures to specific shaking intensities since 2007. Quantifying earthquake impacts and communicating loss estimates (and their uncertainties) to the public has been the culmination of several important new and evolving components of the system. First, the operational PAGER system now relies on empirically based loss models that account for estimated shaking hazard and population exposure, and employ country-specific fatality and economic loss functions derived from analyses of losses in recent and past earthquakes. In some countries, our empirical loss models are informed in part by PAGER's semi-empirical and analytical loss models, and by building exposure and vulnerability data sets, all of which are being developed in parallel to the empirical approach. Second, human and economic loss information is now portrayed as a supplement to existing intensity/exposure content on both PAGER summary alert messages (available via cell phone/email) and web pages. Loss calculations also include estimates of the economic impact with respect to the country's gross domestic product. Third, in order to facilitate rapid and appropriate earthquake responses based on our probable loss estimates, in early 2010 we proposed a four-level Earthquake Impact Scale (EIS). Instead of simply issuing median estimates for losses, which can easily be misunderstood and misused, this scale provides ranges of losses from which potential responders can gauge expected overall impact from strong shaking. EIS is based on two complementary criteria: the estimated cost of damage, which is most suitable for U
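
    A compact way to see how a two-criterion, four-level scale of this kind operates is to map fatality and economic-loss estimates to the higher of two alert levels. The thresholds below follow the commonly cited USGS PAGER alert boundaries, but should be read as illustrative assumptions rather than the definitive EIS specification.

```python
def eis_alert_level(est_fatalities: float, est_loss_usd: float) -> str:
    """Four-level impact scale in the spirit of PAGER's EIS: the alert
    is driven by whichever criterion (fatalities or dollars) is worse.
    Threshold values are assumed, following published PAGER levels."""
    def level(x, cuts):  # cuts = lower bounds for yellow, orange, red
        return sum(x >= c for c in cuts)
    fat = level(est_fatalities, (1, 100, 1000))
    eco = level(est_loss_usd, (1e6, 1e8, 1e9))
    return ["green", "yellow", "orange", "red"][max(fat, eco)]

print(eis_alert_level(250, 5e7))  # -> "orange" (driven by fatalities)
```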

  5. Centralized web-based loss estimation tool: INLET for disaster response

    NASA Astrophysics Data System (ADS)

    Huyck, C. K.; Chung, H.-C.; Cho, S.; Mio, M. Z.; Ghosh, S.; Eguchi, R. T.; Mehrotra, S.

    2006-03-01

    In the years following the 1994 Northridge earthquake, many researchers in the earthquake community focused on the development of GIS-based loss estimation tools such as HAZUS. These highly customizable programs have many users, and divergent results after an event can be problematic. Online IMS (Internet Map Servers) offer a centralized system where data, model updates and results cascade to all users. INLET (Internet-based Loss Estimation Tool) is the first online real-time loss estimation system available to the emergency management and response community within Southern California. In the event of a significant earthquake, Perl scripts written to respond to USGS ShakeCast notifications call INLET routines that use USGS ShakeMaps to estimate losses within minutes of an event. INLET incorporates extensive publicly available GIS databases and uses damage functions simplified from FEMA's HAZUS® software. INLET currently estimates building damage, transportation impacts, and casualties. The online model simulates the effects of earthquakes, in the context of the larger RESCUE project, in order to test the integration of IT in evacuation routing. The simulation tool provides a "testbed" environment for researchers to model the effect that disaster awareness and route familiarity can have on traffic congestion and evacuation time.

  6. Time-varying loss forecast for an earthquake scenario in Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus; Zechar, Jeremy D.; Wiemer, Stefan

    2014-05-01

    When an unexpected earthquake occurs, people suddenly want advice on how to cope with the situation. The 2009 L'Aquila earthquake highlighted the significance of public communication and pushed the use of scientific methods to drive alternative risk mitigation strategies. For instance, van Stiphout et al. (2010) suggested a new approach for objective short-term evacuation decisions: probabilistic risk forecasting combined with cost-benefit analysis. In the present work, we apply this approach to an earthquake sequence that simulates a repeat of the 1356 Basel earthquake, one of the most damaging events in Central Europe. A recent development to benefit society in case of an earthquake is probabilistic forecasting of aftershock occurrence, but seismic risk delivers a more direct expression of the socio-economic impact. To forecast the seismic risk in the short term, we translate aftershock probabilities into time-varying seismic hazard and combine this with time-invariant loss estimation. Compared with van Stiphout et al. (2010), we use an advanced aftershock forecasting model and detailed settlement data that allow spatial forecasts and settlement-specific decision-making. We quantify the risk forecast probabilistically in terms of human loss. For instance, one minute after the M6.6 mainshock, the probability for an individual to die within the next 24 hours is 41,000 times higher than the long-term average; but the absolute value remains small, at 0.04%. The final cost-benefit analysis adds value beyond a pure statistical approach: it provides objective statements that may justify evacuations. To deliver supportive information in a simple form, we propose a warning approach in terms of alarm levels. Our results do not justify evacuations prior to the M6.6 mainshock, but do in certain districts afterwards. The ability to forecast the short-term seismic risk at any time (and, with sufficient data, anywhere) is the first step of personal decision-making and raising risk
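
    The cost-benefit step described above can be sketched as a single comparison: evacuate when the expected monetised life loss exceeds the cost of evacuating. The sketch below follows the spirit of van Stiphout et al. (2010); the value-of-statistical-life and cost figures are illustrative assumptions, not values from the study.

```python
def should_evacuate(p_death_24h: float, vsl_eur: float,
                    evac_cost_eur: float) -> bool:
    """Evacuate a district when the expected life-loss cost over the
    forecast window exceeds the per-person evacuation cost.
    All monetary figures are assumed, illustrative inputs."""
    return p_death_24h * vsl_eur > evac_cost_eur

# One minute after the M6.6 mainshock in the scenario above (p = 0.04%):
print(should_evacuate(4e-4, 5e6, 200.0))  # 0.0004 * 5M = 2000 > 200 -> True
```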

  7. Estimating Source Duration for Moderate and Large Earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei

    2017-04-01

    Constructing a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard in Taiwan, where earthquakes are quite active. In this study, we used a proposed inversion process using teleseismic P-waves to derive the M0-t relationship in the Taiwan region for the first time. Fifteen earthquakes with Mw 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process could simultaneously determine source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, from which M0 and fault plane solutions were estimated. Results showed that the estimated t, ranging from 2.7 to 24.9 sec, varied with the one-third power of M0. That is, M0 is proportional to t**3, and the relationship between them is M0 = 0.76*10**23*(t)**3, where M0 is in dyne-cm and t in seconds. The M0-t relationship derived from this study is very close to those determined from global moderate to large earthquakes. To further check the validity of the derived relationship, we used it to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake, with M0 = 2-5*10**27 dyne-cm (corresponding to Mw = 7.5-7.7), to be approximately 29-40 sec, in agreement with many previous estimates of its source duration (28-42 sec).
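
    Inverting the scaling law quoted above gives the duration directly from the moment; the short sketch below reproduces the Chi-Chi check described in the abstract.

```python
def duration_from_moment(m0_dyne_cm: float) -> float:
    """Invert the derived scaling M0 = 0.76e23 * t**3 (M0 in dyne-cm)
    to obtain the source duration t in seconds."""
    return (m0_dyne_cm / 0.76e23) ** (1.0 / 3.0)

# 1999 Chi-Chi earthquake, M0 = 2e27 to 5e27 dyne-cm:
print(round(duration_from_moment(2e27), 1))  # ~29.7 s
print(round(duration_from_moment(5e27), 1))  # ~40.4 s
```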

  8. Estimating the confidence of earthquake damage scenarios: examples from a logic tree approach

    NASA Astrophysics Data System (ADS)

    Molina, S.; Lindholm, C. D.

    2007-07-01

    Earthquake loss estimation is now becoming an important tool in mitigation planning, where the loss modeling usually is based on a parameterized mathematical representation of the damage problem. In parallel with the development and improvement of such models, the question of sensitivity to parameters that carry uncertainties becomes increasingly important. We have to this end applied the capacity spectrum method (CSM) as described in FEMA HAZUS-MH. Multi-hazard Loss Estimation Methodology, Earthquake Model, Advanced Engineering Building Module. Federal Emergency Management Agency, United States (2003), and investigated the effects of selected parameters. The results demonstrate that loss scenarios may easily vary by as much as a factor of two because of simple parameter variations. Of particular importance for the uncertainty is the construction quality of the structure. These results represent a warning against simple acceptance of unbounded damage scenarios and strongly support the development of computational methods in which parameter uncertainties are propagated through the computations to facilitate confidence bounds for the damage scenarios.

  9. Locating earthquakes with surface waves and centroid moment tensor estimation

    NASA Astrophysics Data System (ADS)

    Wei, Shengji; Zhan, Zhongwen; Tan, Ying; Ni, Sidao; Helmberger, Don

    2012-04-01

    Traditionally, P wave arrival times have been used to locate regional earthquakes. In contrast, the travel times of surface waves depend on source excitation, and the source parameters and depth must be determined independently. Thus surface wave path delays need to be known before such data can be used for location. These delays can be estimated from previous earthquakes using the cut-and-paste technique, Ambient Seismic Noise tomography, and 3D models. Taking the Chino Hills event as an example, we show consistency of path corrections for (>10 s) Love and Rayleigh waves to within about 1 s obtained from these methods. We then use these empirically derived delay maps to determine centroid locations of 138 Southern California moderate-sized (3.5 < Mw < 5.7) earthquakes using surface waves alone. It appears that these methods are capable of locating the main zone of rupture within a few (~3) km accuracy relative to Southern California Seismic Network locations with 5 stations that are well distributed in azimuth. We also address the timing accuracy required to resolve non-double-couple source parameters, which trades off with location; less than a kilometre of error is required for a 10% Compensated Linear Vector Dipole resolution.

  10. Soil amplification maps for estimating earthquake ground motions in the Central US

    USGS Publications Warehouse

    Bauer, R.A.; Kiefer, J.; Hester, N.

    2001-01-01

    The State Geologists of the Central United States Earthquake Consortium (CUSEC) are developing maps to assist State and local emergency managers and community officials in evaluating the earthquake hazards for the CUSEC region. The state geological surveys have worked together to produce a series of maps that show seismic shaking potential for eleven 1° × 2° (scale 1:250,000, or 1 in. ≈ 3.9 miles) quadrangles that cover the high-risk area of the New Madrid Seismic Zone in eight states. Shear wave velocity values for the surficial materials were gathered and used to classify the soils according to their potential to amplify earthquake ground motions. Geologic base maps of surficial materials or 3-D material maps, either existing or produced for this project, were used in conjunction with shear wave velocities to classify the soils for the upper 15-30 m. These maps are available in an electronic form suitable for inclusion in the Federal Emergency Management Agency's earthquake loss estimation program (HAZUS). © 2001 Elsevier Science B.V. All rights reserved.
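
    Classification of soils by their shear-wave velocity, as described above, is commonly done with the NEHRP site-class boundaries on Vs30 (average shear-wave velocity of the upper 30 m). The sketch below uses the widely cited NEHRP boundary values as an assumed stand-in for the CUSEC classification scheme.

```python
def nehrp_site_class(vs30_mps: float) -> str:
    """Map Vs30 (m/s) to a NEHRP site class using the commonly cited
    boundaries; softer classes amplify ground motion more strongly."""
    if vs30_mps > 1500: return "A"   # hard rock
    if vs30_mps > 760:  return "B"   # rock
    if vs30_mps > 360:  return "C"   # very dense soil / soft rock
    if vs30_mps > 180:  return "D"   # stiff soil
    return "E"                       # soft soil, strongest amplification

print(nehrp_site_class(250))  # -> "D"
```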

  11. Real-Time Earthquake Intensity Estimation Using Streaming Data Analysis of Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.

    2016-10-01

    Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce total loss and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is that derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches using additional data sources or that combine sources from both data types tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data is acquired from the U.S. Geological Survey (USGS) seismic network in California and the social sensor data is based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real-time to produce one intensity map. The second implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data. To handle large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from different data sources over 10-min time intervals immediately following the earthquake. Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher

  12. Real-Time Earthquake Intensity Estimation Using Streaming Data Analysis of Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.

    2017-06-01

    Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce total loss and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is that derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches using additional data sources or that combine sources from both data types tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data is acquired from the U.S. Geological Survey (USGS) seismic network in California and the social sensor data is based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real-time to produce one intensity map. The second implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data. To handle large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from different data sources over 10-min time intervals immediately following the earthquake. Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher

  13. Blood Loss Estimation Using Gauze Visual Analogue

    PubMed Central

    Ali Algadiem, Emran; Aleisa, Abdulmohsen Ali; Alsubaie, Huda Ibrahim; Buhlaiqah, Noora Radhi; Algadeeb, Jihad Bagir; Alsneini, Hussain Ali

    2016-01-01

    Background Estimating intraoperative blood loss can be a difficult task, especially when blood is mostly absorbed by gauze. In this study, we have provided an improved method for estimating blood absorbed by gauze. Objectives To develop a guide to estimate blood absorbed by surgical gauze. Materials and Methods A clinical experiment was conducted using aspirated blood and common surgical gauze to create a realistic amount of absorbed blood in the gauze. Different percentages of staining were photographed to create an analogue for the amount of blood absorbed by the gauze. Results A visual analogue scale was created to aid the estimation of blood absorbed by the gauze. The absorptive capacity of different gauze sizes was determined when the gauze was dripping with blood. The amount of reduction in absorption was also determined when the gauze was wetted with normal saline before use. Conclusions The use of a visual analogue may increase the accuracy of blood loss estimation and decrease the consequences related to over- or underestimation of blood loss. PMID:27626017

  14. Real Time Seismic Loss Estimation in Italy

    NASA Astrophysics Data System (ADS)

    Goretti, A.; Sabetta, F.

    2009-04-01

    For more than 15 years the Seismic Risk Office has been able to perform a real-time evaluation of the potential earthquake loss in any part of Italy. Once the epicentre and the magnitude of the earthquake are made available by the National Institute for Geophysics and Volcanology, the model, based on the Italian Geographic Information Systems, is able to evaluate the extent of the damaged area and the consequences for the built environment. In recent years the model has been significantly improved with new methodologies able to condition the uncertainties using observations coming from the field during the first days after the event. However, the main challenges in loss analysis are considered to be related to the input data more than to the methodologies. Unlike the urban scenario, where missing data can be collected with sufficient accuracy, the country-wide analysis requires the use of existing databases, often collected for purposes other than seismic scenario evaluation, and hence in some ways lacking completeness and homogeneity. Soil properties, building inventory and population distribution are the main input data that must be known for any site in the whole Italian territory. To this end the National Census on Population and Dwellings has provided information on the residential building types and the population that lives in those building types. Critical buildings, such as hospitals, fire brigade stations and schools, are not included in the inventory, since the national plan for seismic risk assessment of critical buildings is still under way. The choice of a proper ground motion parameter, its attenuation with distance and the building type fragility are important ingredients of the model as well. The presentation will focus on the above mentioned issues, highlighting the different data sets used and their accuracy, and comparing the model, input data and results when geographical areas of different extent are considered: from the urban scenarios

  15. Atmospheric Baseline Monitoring Data Losses Due to the Samoa Earthquake

    NASA Astrophysics Data System (ADS)

    Schnell, R. C.; Cunningham, M. C.; Vasel, B. A.; Butler, J. H.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA) operates an Atmospheric Baseline Observatory at Cape Matatula on the north-eastern point of American Samoa, opened in 1973. The manned observatory conducts continuous measurements of a wide range of climate forcing and atmospheric composition data, including greenhouse gas concentrations, solar radiation, CFC and HFC concentrations, aerosols and ozone, as well as less frequent measurements of many other parameters. The onset of the September 29, 2009 earthquake is clearly visible in the continuous data streams in a variety of ways. The station electrical generator came online when the Samoa power grid failed, so instruments were powered during and after the earthquake. Some instruments ceased operation in a spurt of spurious data followed by silence. Other instruments simply stopped sending data abruptly when the shaking from the earthquake broke data or power links, or when an integral part of the instrument was damaged. Others survived the shaking but were put out of calibration. Still others suffered damage after the earthquake as heaters ran uncontrolled or rotating shafts continued operating in a damaged environment, grinding away until they seized up or chewed out a new operating space. Some instruments operated as if there had been no earthquake; others were brought back online within a few days. Many of the more complex (and in most cases most expensive) instruments will be out of service, some for 6 months or more. This presentation will show these results and discuss the impact of the earthquake on long-term measurements of climate forcing agents and other critical climate measurements.

  16. Estimating the Threat of Tsunamigenic Earthquakes and Earthquake Induced-Landslide Tsunami in the Caribbean

    NASA Astrophysics Data System (ADS)

    McCann, W. R.

    2007-05-01

    more likely to produce slow earthquakes. Subduction of rough seafloor may activate thrust faults within the accretionary prism above the main decollement, causing indentation of the prism toe. Later reactivation of a dormant decollement would enhance the possibility of slow earthquakes. Subduction of significant seafloor relief and corresponding indentation of the accretionary prism toe would then be another parameter to estimate the likelihood of slow earthquakes. Using these criteria, several regions of the Northeastern Caribbean stand out as more likely sources for slow earthquakes.

  17. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  18. Estimation of earthquake effects associated with a great earthquake in the New Madrid seismic zone

    USGS Publications Warehouse

    Hopper, Margaret G.; Algermissen, Sylvester Theodore; Dobrovolny, Ernest E.

    1983-01-01

    Estimates have been made of the effects of a large Ms = 8.6, Io = XI earthquake hypothesized to occur anywhere in the New Madrid seismic zone. The estimates are based on the distributions of intensities associated with the earthquakes of 1811-12, 1843 and 1895, although the effects of other historical shocks are also considered. The resulting composite-type intensity map for a maximum intensity XI is believed to represent the upper level of shaking likely to occur. Specific intensity maps have been developed for six cities near the epicentral region, taking into account the most likely distribution of site response in each city. Intensities found are: IX for Carbondale, IL; VIII and IX for Evansville, IN; VI and VIII for Little Rock, AR; IX and X for Memphis, TN; VIII, IX, and X for Paducah, KY; and VIII and X for Poplar Bluff, MO. On a regional scale, intensities are found to attenuate from the New Madrid seismic zone most rapidly to the west and southwest sides of the zone, and most slowly to the northwest along the Mississippi River, to the northeast along the Ohio River, and to the southeast toward Georgia and South Carolina. Intensities attenuate toward the north, east, and south in a more normal fashion. Known liquefaction effects are documented, but much more research is needed to define the liquefaction potential.

  19. Earthquake Loss Assessment for the Evaluation of the Sovereign Risk and Financial Sustainability of Countries and Cities

    NASA Astrophysics Data System (ADS)

    Cardona, O. D.

    2013-05-01

    Recently, earthquakes have struck cities in both developing and developed countries, revealing significant knowledge gaps and the need to improve the quality of input data and of the assumptions of the risk models. The earthquake and tsunami in Japan (2011) and the disasters due to earthquakes in Haiti (2010), Chile (2010), New Zealand (2011) and Spain (2011), to mention only some unexpected impacts in different regions, have left several concerns regarding hazard assessment as well as regarding the uncertainties associated with the estimation of future losses. Understanding probable losses and reconstruction costs due to earthquakes creates powerful incentives for countries to develop planning options and tools to cope with sovereign risk, including allocating the sustained budgetary resources necessary to reduce those potential damages and safeguard development. Therefore robust risk models are needed to assess future economic impacts, the country's fiscal responsibilities and the contingent liabilities for governments, and to formulate, justify and implement risk reduction measures and optimal financial strategies of risk retention and transfer. Special attention should be paid to the understanding of risk metrics such as the Loss Exceedance Curve (empirical and analytical) and the Expected Annual Loss in the context of conjoint and cascading hazards.
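
    The two risk metrics named above have simple discrete definitions over an event set; the sketch below illustrates both with a hypothetical table of scenario losses and annual occurrence rates.

```python
import numpy as np

# Hypothetical event set: loss per scenario event (USD) and annual rates.
losses = np.array([5e6, 2e7, 1.5e8, 9e8])
rates  = np.array([0.20, 0.05, 0.01, 0.001])

# Expected Annual Loss: rate-weighted sum of event losses.
eal = float(np.sum(losses * rates))

def exceedance_rate(threshold: float) -> float:
    """One point on the Loss Exceedance Curve: the annual rate of
    events whose loss is at least `threshold`."""
    return float(rates[losses >= threshold].sum())

print(f"EAL = {eal:,.0f} USD/yr; rate(loss >= 1e8) = {exceedance_rate(1e8)}")
```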

  20. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  1. The global historical and future economic loss and cost of earthquakes during the production of adaptive worldwide economic fragility functions

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    Over the past decade, the production of economic indices behind the CATDAT Damaging Earthquakes Database has allowed for the conversion of historical earthquake economic loss and cost events into today's terms using long-term spatio-temporal series of consumer price index (CPI), construction costs, wage indices, and GDP from 1900-2013. As part of the doctoral thesis of Daniell (2014), databases and GIS layers at country and sub-country level have been produced for population, GDP per capita, and net and gross capital stock (depreciated and non-depreciated) using studies, census information and the perpetual inventory method. In addition, a detailed study has been undertaken to collect and reproduce as many historical isoseismal maps, macroseismic intensity results and reproductions of earthquakes as possible out of the 7208 damaging events in the CATDAT database from 1900 onwards. a) The isoseismal database and population bounds from 3000+ collected damaging events were compared with the output parameters of GDP and net and gross capital stock per intensity bound and administrative unit, creating a spatial join for analysis. b) The historical costs were divided into shaking/direct ground motion effects and secondary effects costs. The shaking costs were further divided into gross capital stock related and GDP related costs for each administrative unit and intensity bound couplet. c) Costs were then estimated by optimising the functions of costs vs. gross capital stock and costs vs. GDP via regression. Losses were estimated based on net capital stock, looking at the infrastructure age and value at the time of the event. This dataset was then used to develop an economic exposure for each historical earthquake in comparison with the loss recorded in the CATDAT Damaging Earthquakes Database. The production of economic fragility functions for each country was possible using a temporal regression based on the parameters of
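
    The conversion of a historical loss into today's terms reduces, for any one index, to a simple ratio; the sketch below shows the CPI variant, with all figures hypothetical (CATDAT also uses construction cost, wage and GDP series for the same purpose).

```python
def to_todays_terms(loss_at_event: float, cpi_event_year: float,
                    cpi_today: float) -> float:
    """Convert a historical loss into today's terms via a consumer
    price index ratio; one of several index options used for
    CATDAT-style normalisation of historical events."""
    return loss_at_event * cpi_today / cpi_event_year

# Hypothetical figures: a 1906 loss of 80 M USD, CPI 1906 = 8.5, today = 310
print(f"{to_todays_terms(80e6, 8.5, 310.0):,.0f} USD")  # ~2.9 billion
```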

  2. Microseismic Network Performance Estimation: Comparing Predictions to an Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Greig, Wesley; Ackerley, Nick

    2014-05-01

    The design of networks for monitoring induced seismicity is of critical importance, as specific standards of performance are necessary. One of the difficulties in designing such networks is that it is hard to determine whether the network meets these standards without first developing an earthquake catalog. We develop a tool that can assess two key measures of network performance without an earthquake catalog: location accuracy and magnitude of completeness. Site noise is measured either at existing seismic stations or as part of a noise survey. We then interpolate measured values to determine a noise map for the entire region. This information is combined with instrument noise for each station to accurately assess total ambient noise at each station. Location accuracy is evaluated according to the approach of Peters and Crosson (1972). Magnitude of completeness is computed by assuming isotropic radiation and mandating a threshold signal-to-noise ratio (similar to Stabile et al. 2013). We apply this tool to a seismic network in the central United States. We predict the magnitude of completeness and the location accuracy and compare predicted values with observed values generated from the existing earthquake catalog for the network. We investigate the effects of hypothetical station additions and removals to simulate network expansions and station failures. We find that the addition of stations in areas of low noise results in significantly larger improvements in network performance than station additions in areas of elevated noise, particularly with respect to magnitude of completeness. Our results highlight the importance of site noise considerations in the design of a seismic network. The ability to predict hypothetical station performance allows for the optimization of seismic network design and enables the prediction of performance for a purely hypothetical seismic network. If near real
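
    The completeness estimate described above (threshold signal-to-noise ratio at a minimum number of stations) can be sketched in a few lines. The Hutton and Boore (1987) Southern California ML relation is used below as an assumed stand-in for a region-specific attenuation model, and all station values are hypothetical.

```python
import numpy as np

def min_detectable_ml(noise_nm, dist_km, snr_min=3.0, min_stations=4):
    """Toy magnitude-of-completeness estimate: the smallest local
    magnitude whose predicted Wood-Anderson amplitude exceeds
    snr_min * ambient noise at >= min_stations stations."""
    noise_nm, dist_km = map(np.asarray, (noise_nm, dist_km))
    amp_mm = snr_min * noise_nm * 1e-6  # required signal amplitude, mm
    # Hutton & Boore (1987): ML = log10(A) + 1.11*log10(r/100)
    #                             + 0.00189*(r - 100) + 3.0
    m_sta = (np.log10(amp_mm) + 1.11 * np.log10(dist_km / 100.0)
             + 0.00189 * (dist_km - 100.0) + 3.0)
    return float(np.sort(m_sta)[min_stations - 1])

# Five stations: measured noise (nm) and epicentral distances (km).
print(round(min_detectable_ml([5, 8, 4, 10, 6], [10, 25, 40, 15, 30]), 2))
```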

  3. Resource loss, self-efficacy, and family support predict posttraumatic stress symptoms: a 3-year study of earthquake survivors.

    PubMed

    Warner, Lisa Marie; Gutiérrez-Doña, Benicio; Villegas Angulo, Maricela; Schwarzer, Ralf

    2015-01-01

    Social support and self-efficacy are regarded as coping resources that may facilitate readjustment after traumatic events. The 2009 Cinchona earthquake in Costa Rica serves as an example of such an event for studying resources that prevent subsequent severity of posttraumatic stress symptoms. At Time 1 (1-6 months after the earthquake in 2009), N = 200 survivors were interviewed, assessing resource loss, received family support, and posttraumatic stress response. At Time 2 in 2012, severity of posttraumatic stress symptoms and general self-efficacy beliefs were assessed. Regression analyses estimated the variance in severity of posttraumatic stress symptoms accounted for by all variables. Moderator and mediator models were examined to understand the interplay of received family support and self-efficacy with posttraumatic stress symptoms. Baseline posttraumatic stress symptoms and resource loss (T1) accounted for significant but small amounts of the variance in the severity of posttraumatic stress symptoms (T2). The main effects of self-efficacy (T2) and social support (T1) were negligible, but social support buffered resource loss, indicating that only less supported survivors were affected by resource loss. Self-efficacy at T2 moderated the support-stress relationship, indicating that low levels of self-efficacy could be compensated by higher levels of family support. Receiving family support at T1 enabled survivors to feel self-efficacious, underlining the enabling hypothesis. Receiving social support from relatives shortly after an earthquake was found to be an important coping resource, as it alleviated the association between resource loss and the severity of posttraumatic stress response, compensated for deficits of self-efficacy, and enabled self-efficacy, which was in turn associated with more adaptive adjustment 3 years after the earthquake.

  4. Estimation of Europa's exosphere loss rates

    NASA Astrophysics Data System (ADS)

    Lucchetti, Alice; Plainaki, Christina; Cremonese, Gabriele; Milillo, Anna; Shematovich, Valery; Jia, Xianzhe; Cassidy, Timothy

    2015-04-01

    Reactions in Europa's exosphere are dominated by plasma interactions with neutrals. The cross-sections for these processes are energy dependent, and therefore the respective loss rates of the exospheric species depend on the speed distribution of the charged particles relative to the neutrals, as well as on the densities of each reactant. In this work we review the average H2O, O2, and H2 loss rates due to plasma-neutral interactions in order to estimate Europa's total exospheric loss. Since the electron density at Europa's orbit varies significantly with the magnetic latitude of the moon in Jupiter's magnetosphere, the dissociation and ionization rates for electron-impact processes are subject to spatial and temporal variations. Therefore, the resulting neutral loss rates, which determine the actual spatial distribution of the neutral density, are not homogeneous. In addition, ion-neutral interactions contribute to the loss of exospheric species as well as to the modification of the energy distribution of the existing species (for example, the O2 energy distribution is modified through charge exchange between O2 and O2+). In our calculations, the photoreactions were considered for conditions of quiet and active Sun.

  5. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  7. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  8. Rupture Process of the 1969 and 1975 Kurile Earthquakes Estimated from Tsunami Waveform Analyses

    NASA Astrophysics Data System (ADS)

    Ioki, Kei; Tanioka, Yuichiro

    2016-12-01

    The 1969 and 1975 great Kurile earthquakes occurred along the Kurile trench. Tsunamis generated by these earthquakes were observed at tide gauge stations around the coasts of the Okhotsk Sea and the Pacific Ocean. To understand the rupture process of the 1969 and 1975 earthquakes, slip distributions of the two events were estimated using a tsunami waveform inversion technique. Seismic moments estimated from the slip distributions of the 1969 and 1975 earthquakes were 1.1 × 10^21 Nm (Mw 8.0) and 0.6 × 10^21 Nm (Mw 7.8), respectively. The 1973 Nemuro-Oki earthquake occurred at the plate interface adjacent to the area ruptured by the 1969 Kurile earthquake. The 1975 Shikotan earthquake occurred in a shallow region of the plate interface that was not ruptured by the 1969 Kurile earthquake. Further, as in the sequence of the 1969 and 1975 earthquakes, it is possible that a great earthquake may occur in a shallow part of the plate interface a few years after a great earthquake occurs in a deeper part of the same region along the trench.
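
    The moment magnitudes quoted above follow from the seismic moments via the standard Hanks and Kanamori (1979) definition, as the short sketch below verifies.

```python
import math

def moment_magnitude(m0_newton_m: float) -> float:
    """Standard moment magnitude: Mw = (log10(M0) - 9.1) / 1.5,
    with M0 in N*m (Hanks and Kanamori, 1979)."""
    return (math.log10(m0_newton_m) - 9.1) / 1.5

print(round(moment_magnitude(1.1e21), 1))  # 1969 event -> ~8.0
print(round(moment_magnitude(0.6e21), 1))  # 1975 event -> ~7.8
```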

  9. Near-Real-Time Loss-Estimation for Instrumented Buildings

    NASA Astrophysics Data System (ADS)

    Porter, K. A.; Beck, J. L.; Ching, J.; Mitrani, J.

    2003-12-01

    Building owners make several important decisions in the hours after an earthquake occurs: whether to engage a structural engineer to inspect the building; what to tell investors, rating agencies, or other financial stakeholders; and how to assess the safety of tenants. A current research project seeks to develop the means to perform an automated, building-specific, probabilistic evaluation of detailed physical damage, safety, and loss for instrumented buildings. The project relies on three recent developments: real-time monitoring, an unscented particle filter, and the assembly-based vulnerability (ABV) technique. Real-time monitoring systems such as COMET and R-Shape continuously record and analyze accelerometer and other building data for several instrumented buildings. Potentially sparse response information can be input to a new unscented particle filter to estimate highly nonlinear structural response at all of the building's degrees of freedom. The complete structural response is then input to the ABV framework, which applies a set of empirical component fragility functions to estimate the probabilistic damage state of every damageable component in the building. Damage data are then input within ABV to standard safety-evaluation criteria to estimate the likely action of safety inspectors. The probabilistic damage state is also input in ABV to a construction-cost-estimation algorithm to evaluate probabilistic repair cost. The project will combine these three elements in a software implementation so that damage, safety, and loss can be calculated and transmitted to a decision maker within minutes of the cessation of strong motion. The research is illustrated using two buildings: one in California, the other in Japan.
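
    Component fragility functions of the kind applied in the ABV framework are usually lognormal in the structural demand; the sketch below shows that standard form, with median capacity and dispersion values that are illustrative assumptions, not values from the project.

```python
from math import log, sqrt, erf

def p_damage_exceed(demand: float, median: float, beta: float) -> float:
    """Lognormal component fragility, the usual form in ABV-style loss
    models: P(DS >= ds | demand) = Phi(ln(demand/median) / beta)."""
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return phi(log(demand / median) / beta)

# Drift demand of 0.8% vs. a component with median capacity 1.0% drift
# and dispersion beta = 0.4 (assumed values):
print(round(p_damage_exceed(0.008, 0.010, 0.4), 2))  # ~0.29
```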

  10. Methodology for estimating costs of burn injuries and property losses.

    PubMed

    Stacey, G S; Smith, K S

    1979-08-01

    This paper deals with the broad subject of the estimation of losses that result from fires and the comparison of the costs of actions to prevent losses with the resultant benefits in the form of loss reduction. It discusses how to estimate fire losses and suggests a framework for comparing prevention costs and loss reduction benefits.

  11. Global Earthquake and Volcanic Eruption Economic losses and costs from 1900-2014: 115 years of the CATDAT database - Trends, Normalisation and Visualisation

    NASA Astrophysics Data System (ADS)

    Daniell, James; Skapski, Jens-Udo; Vervaeck, Armand; Wenzel, Friedemann; Schaefer, Andreas

    2015-04-01

    Over the past 12 years, an in-depth database has been constructed for socio-economic losses from earthquakes and volcanoes. The effects of earthquakes and volcanic eruptions have been documented in many databases; however, many errors and incorrect details are often encountered. To combat this, the database was formed with socioeconomic checks of GDP, capital stock, population and other elements, as well as providing upper and lower bounds to each available event loss. The definition of economic losses within the CATDAT Damaging Earthquakes Database (Daniell et al., 2011a), as of v6.1, has been redefined to provide three options of natural disaster loss pricing, including reconstruction cost, replacement cost and actual loss, in order to better define the impact of historical disasters. For volcanoes, as for earthquakes, a reassessment has been undertaken looking at the historical net and gross capital stock and GDP at the time of the event, including the depreciated stock, in order to calculate the actual loss. A normalisation has then been undertaken using updated population, GDP and capital stock. The difference between depreciated and gross capital can be removed from the historical loss estimates, which have all been calculated without taking depreciation of the building stock into account. The culmination of time series from 1900-2014 of net and gross capital stock, GDP and direct economic loss data, together with detailed studies of infrastructure age and existing damage surveys, has allowed the first estimate of this nature. The death tolls from earthquakes from 1900-2014 are presented in various forms, showing around 2.32 million deaths due to earthquakes (with a range of 2.18 to 2.63 million), around 59% of them due to masonry buildings and 28% due to secondary effects. For the volcanic eruption database, a death toll of around 98,000 (with a range of roughly 83,000 to 107,000) is seen from 1900-2014. The application of VSL life costing from death and injury

  12. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  14. Estimation of strong ground motions from hypothetical earthquakes on the Cascadia subduction zone, Pacific Northwest

    USGS Publications Warehouse

    Heaton, T.H.; Hartzell, S.H.

    1989-01-01

    Strong ground motions are estimated for the Pacific Northwest assuming that large shallow earthquakes, similar to those experienced in southern Chile, southwestern Japan, and Colombia, may also occur on the Cascadia subduction zone. Fifty-six strong motion recordings for twenty-five subduction earthquakes of Ms ≥ 7.0 are used to estimate the response spectra that may result from earthquakes of Mw < 8 1/4. Large variations in observed ground motion levels are noted for a given site distance and earthquake magnitude. When compared with motions that have been observed in the western United States, large subduction zone earthquakes produce relatively large ground motions at surprisingly large distances. An earthquake similar to the 22 May 1960 Chilean earthquake (Mw 9.5) is the largest event that is considered to be plausible for the Cascadia subduction zone. This event has a moment which is two orders of magnitude larger than the largest earthquake for which we have strong motion records. The empirical Green's function technique is used to synthesize strong ground motions for such giant earthquakes. Observed teleseismic P-waveforms from giant earthquakes are also modeled using the empirical Green's function technique in order to constrain model parameters. The teleseismic modeling in the period range of 1.0 to 50 sec strongly suggests that fewer Green's functions should be randomly summed than is required to match the long-period moments of giant earthquakes. It appears that a large portion of the moment associated with giant earthquakes occurs at very long periods that are outside the frequency band of interest for strong ground motions. Nevertheless, the occurrence of a giant earthquake in the Pacific Northwest may produce quite strong shaking over a very large region. © 1989 Birkhäuser Verlag.

  15. A comparison of socio-economic loss analysis from the 2013 Haiyan Typhoon and Bohol Earthquake events in the Philippines in near real-time

    NASA Astrophysics Data System (ADS)

    Daniell, James; Mühr, Bernhard; Kunz-Plapp, Tina; Brink, Susan A.; Kunz, Michael; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    In the aftermath of a disaster, the extent of the socioeconomic loss (fatalities, homelessness and economic losses) is often not known, and it may take days before a reasonable estimate is available. Using the technique of socio-economic fragility functions (Daniell, 2014), developed via a regression of socio-economic indicators through time against historical empirical loss vs. intensity data, a first estimate can be established. With more information from the region as the disaster unfolds, a more detailed estimate can be provided via a calibration of the initial loss estimate parameters. In 2013, two main disasters hit the Philippines: the Bohol earthquake in October and the Haiyan typhoon in November. Although the two disasters were contrasting and hit different regions, the same generalised methodology was used for the initial rapid estimates and for updating the disaster loss estimate through time. The CEDIM Forensic Disaster Analysis Group of KIT and GFZ produced 6 reports for Bohol and 2 reports for Haiyan detailing various aspects of the disasters, from the losses to building damage, the socioeconomic profile and also the social networking and disaster response. This study focusses on the loss analysis undertaken. The following technique was used: 1. A regression of historical earthquake and typhoon losses for the Philippines was examined using the CATDAT Damaging Earthquakes Database and various Philippines databases, respectively. 2. The historical intensity impact of the examined events was placed in a GIS environment in order to allow correlation with the population and capital stock database from 1900-2013 to create a loss function. The modified human development index from 1900-2013 was also used to calibrate events through time. 3. The earthquake intensity and the wind speed intensity of the 2013 events were used, together with the 2013 capital stock and population, to calculate the number of fatalities (except in Haiyan), homeless and

  16. Improving Estimates of Coseismic Subsidence from southern Cascadia Subduction Zone Earthquakes at northern Humboldt Bay, California

    NASA Astrophysics Data System (ADS)

    Padgett, J. S.; Engelhart, S. E.; Hemphill-Haley, E.; Kelsey, H. M.; Witter, R. C.

    2015-12-01

    Geological estimates of subsidence from past earthquakes help to constrain Cascadia subduction zone (CSZ) earthquake rupture models. To improve subsidence estimates for past earthquakes along the southern CSZ, we apply transfer function analysis to microfossils from three intertidal marshes in northern Humboldt Bay, California, ~60 km north of the Mendocino Triple Junction. The transfer function method uses elevation-dependent intertidal foraminiferal and diatom assemblages to reconstruct relative sea-level (RSL) change indicated by shifts in microfossil assemblages. We interpret stratigraphic evidence associated with sudden shifts in microfossils to reflect sudden RSL rise due to subsidence during past CSZ earthquakes. Laterally extensive (>5 km) and sharp mud-over-peat contacts beneath marshes at Jacoby Creek, Mad River Slough, and McDaniel Slough demonstrate widespread earthquake subsidence in northern Humboldt Bay. C-14 ages of plant macrofossils taken from above and below three contacts that correlate across all three sites provide estimates of the times of subsidence at ~250 yr BP, ~1300 yr BP and ~1700 yr BP. Two further contacts observed at only two sites provide evidence for subsidence during possible CSZ earthquakes at ~900 yr BP and ~1100 yr BP. Our study contributes 20 AMS radiocarbon ages of identifiable plant macrofossils that improve estimates of the timing of past earthquakes along the southern CSZ. We anticipate that our results will provide more accurate and precise reconstructions of RSL change induced by southern CSZ earthquakes. Prior to our work, studies in northern Humboldt Bay provided subsidence estimates with vertical uncertainties >±0.5 m, too imprecise to adequately constrain earthquake rupture models. Our method, applied recently in coastal Oregon, has shown that subsidence during past CSZ earthquakes can be reconstructed with a precision of ±0.3 m and substantially improves constraints on rupture models used for seismic hazard
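
    Transfer functions come in several statistical flavors; the sketch below shows the simplest, a weighted-averaging approach, with made-up training data. Taxon optima are abundance-weighted mean elevations in a modern training set, a fossil sample's paleo-elevation is the abundance-weighted mean of those optima, and subsidence is the drop in reconstructed elevation across a mud-over-peat contact.

        import numpy as np

        # Modern training set: rows = samples of known elevation,
        # columns = relative abundances of three taxa (made-up data).
        elev = np.array([0.2, 0.5, 0.8, 1.1, 1.4])   # m above MSL
        abund = np.array([[0.7, 0.2, 0.1],
                          [0.5, 0.3, 0.2],
                          [0.2, 0.5, 0.3],
                          [0.1, 0.4, 0.5],
                          [0.0, 0.2, 0.8]])

        # WA step 1: taxon optimum = abundance-weighted mean elevation.
        optima = (abund * elev[:, None]).sum(axis=0) / abund.sum(axis=0)

        # WA step 2: fossil elevation = abundance-weighted mean optimum.
        def reconstruct(fossil_abund):
            return (fossil_abund * optima).sum() / fossil_abund.sum()

        pre = reconstruct(np.array([0.1, 0.4, 0.5]))   # peat, below contact
        post = reconstruct(np.array([0.6, 0.3, 0.1]))  # mud, above contact
        print("coseismic subsidence ~", round(pre - post, 2), "m")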

  17. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake on 14 April 2010 have highlighted the importance of rapid estimation of casualties after the event for humanitarian response. Both of these events resulted in surprisingly high death tolls, numbers of injuries, and numbers of survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, a further 11,000 people with serious or moderate injuries, and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect of the very different types of buildings (by climatic zone, urban or rural location, culture, income level etc.) on casualties. The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
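
    The semi-empirical chain (building stock mix, damage rate per class, fatality rate given damage) reduces to a few array products. The sketch below uses entirely hypothetical shares and rates, not the CEQID-calibrated parameters.

        import numpy as np

        # Hypothetical inputs for one affected area: share of exposed
        # population per building class, probability of collapse-grade
        # damage at the estimated intensity, lethality given damage.
        classes = ["adobe", "unreinf. masonry", "RC frame", "timber"]
        pop_share = np.array([0.30, 0.40, 0.20, 0.10])
        p_damage = np.array([0.35, 0.25, 0.08, 0.03])
        lethality = np.array([0.12, 0.08, 0.05, 0.01])

        exposed = 200_000
        deaths = exposed * pop_share * p_damage * lethality
        print(dict(zip(classes, np.round(deaths).astype(int))))
        print("total expected fatalities:", int(deaths.sum()))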

  18. Estimating surface faulting impacts from the ShakeOut scenario earthquake

    USGS Publications Warehouse

    Treiman, J.A.; Ponti, D.J.

    2011-01-01

    An earthquake scenario, based on a kinematic rupture model, has been prepared for a Mw 7.8 earthquake on the southern San Andreas Fault. The rupture distribution, in the context of other historic large earthquakes, is judged reasonable for the purposes of this scenario. This model is used as the basis for generating a surface rupture map and for assessing potential direct impacts on lifelines and other infrastructure. Modeling the surface rupture involves identifying fault traces on which to place the rupture, assigning slip values to the fault traces, and characterizing the specific displacements that would occur to each lifeline impacted by the rupture. Different approaches were required to address variable slip distribution in response to a variety of fault patterns. Our results, involving judgment and experience, represent one plausible outcome and are not predictive because of the variable nature of surface rupture. © 2011, Earthquake Engineering Research Institute.

  19. Estimates of loss rates of jaw tags on walleyes

    USGS Publications Warehouse

    Newman, Steven P.; Hoff, Michael H.

    1998-01-01

    The rate of jaw tag loss was evaluated for walleye Stizostedion vitreum in Escanaba Lake, Wisconsin. We estimated tag loss using two recapture methods, a creel census and fyke netting. Average annual tag loss estimates were 17.5% for fish recaptured by anglers and 27.8% for fish recaptured in fyke nets. However, fyke-net data were biased by tag loss during netting. The loss rate of jaw tags increased with time and walleye length.

  20. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes that have magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan—the western, eastern, and northeastern Taiwan regions—using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually produced by clustered events, such as sequences with foreshocks or multiple events occurring within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but they also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake and the probability obtained from the activation model increases as the large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  1. Damage and Loss Estimation for Natural Gas Networks: The Case of Istanbul

    NASA Astrophysics Data System (ADS)

    Çaktı, Eser; Hancılar, Ufuk; Şeşetyan, Karin; Bıyıkoǧlu, Hikmet; Şafak, Erdal

    2017-04-01

    Natural gas networks are one of the major lifeline systems that support human, urban and industrial activities. The continuity of gas supply is critical for almost all functions of modern life. Under natural hazards such as earthquakes and landslides, damage to system elements may lead to explosions and fires, compromising human life and damaging the physical environment. Furthermore, disruption of the gas supply puts human activities at risk and also results in economic losses. This study is concerned with the performance of one of the largest natural gas distribution systems in the world. Physical damages to Istanbul's natural gas network are estimated under the most recent probabilistic earthquake hazard models available, as well as under simulated ground motions from physics-based models. Several vulnerability functions are used in modelling damages to system elements. A first-order assessment of monetary losses to Istanbul's natural gas distribution network is also attempted.
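
    Vulnerability functions for buried pipe are often expressed as an empirical repair rate per unit length as a function of peak ground velocity, with repairs along a segment treated as a Poisson process. The sketch below uses that generic form with a purely hypothetical coefficient, not the functions used in this study.

        import numpy as np

        def repair_rate_per_km(pgv_cm_s, k=1.0):
            """Empirical linear repair-rate form RR = k * c * PGV
            (repairs per km); the coefficient c = 0.002 is a hypothetical
            placeholder, and k adjusts for pipe material/joint type."""
            return k * 0.002 * pgv_cm_s

        def prob_at_least_one_failure(pgv_cm_s, length_km, k=1.0):
            """Repairs along a segment treated as a Poisson process."""
            lam = repair_rate_per_km(pgv_cm_s, k) * length_km
            return 1.0 - np.exp(-lam)

        print(round(prob_at_least_one_failure(pgv_cm_s=40.0, length_km=5.0), 2))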

  2. Water, soil and nutrient losses caused by Wenchuan Earthquake: a case study in Pengzhou.

    PubMed

    Guo, Haixia; Sun, Geng; Shi, Fusun; Lu, Tao; Wang, Qian; Wu, Yan; Wu, Ning

    2013-01-01

    The Wenchuan Earthquake triggered a large number of geological hazards, dramatically stimulating soil erosion. This study was carried out in Pengzhou County, Sichuan Province. By comparison of sediment, runoff and nutrient losses in earthquake-damaged forests (EF) and unaffected forests (UF), the actual status of soil erosion after the Wenchuan Earthquake was investigated using runoff plots. Results showed that water and soil losses increased dramatically after the earthquake. During the study period (from August to November 2010), UF runoffs were 19.26, 36.76, 10.68 and 7.51 L m(-2), while total runoffs in EF sites were 30.41, 25.79, 5.03 and 2.67 L m(-2), respectively, which were 15, 15, 18 and 19 times more than those in UF. Total sediment losses in UF sites were 28.94, 25.16, 4.11 and 1.98 t km(-2), while in EF they were 707.69, 610.05, 113.43 and 58.95 t km(-2) over the same study period, i.e. 23, 23, 32 and 29 times more than those in UF. Path analysis showed that both vegetation and rainfall exerted an indirect influence on sediment loss by significantly influencing runoff, which correlated very significantly with sediment loss. Although no obvious differences between EF and UF sites were observed in the concentrations of nutrients in runoff water (soluble organic carbon (SOC), total nitrogen (TN), total phosphorus (TP) and total potassium (TK)), total losses of the four nutrients were significantly higher in EF than in UF sites (for example, in EF sites, SOC, TN, TP and TK losses were 970.52, 114.46, 2.26 and 307.00 g m(-2) respectively, while in UF they were 38.13, 4.22, 0.10 and 13.28 g m(-2)) due to significantly higher runoff in EF sites. In conclusion, soil erosion was significantly more serious after the loss of forested lands resulting from the Wenchuan Earthquake, delaying the restoration of forest cover and weakening the ecological linkage between upstream and downstream areas.

  3. Quasi real-time estimation of the moment magnitude of large earthquake from static strain changes

    NASA Astrophysics Data System (ADS)

    Itaba, S.

    2016-12-01

    The 2011 Tohoku-Oki (off the Pacific coast of Tohoku) earthquake, of moment magnitude 9.0, was accompanied by large static strain changes (~10^-7), as measured by borehole strainmeters operated by the Geological Survey of Japan in the Tokai, Kii Peninsula, and Shikoku regions. A fault model for the earthquake on the boundary between the Pacific and North American plates, based on these borehole strainmeter data, yielded a moment magnitude of 8.7. By contrast, the prompt magnitude that the Japan Meteorological Agency (JMA) announced just after the earthquake, based on seismic waves, was 7.9. Such geodetic moment magnitudes, derived from static strain changes, can be estimated almost as rapidly as determinations using seismic waves. To verify the validity of this method, I examined the earthquake's largest aftershock, which occurred 29 minutes after the mainshock: the prompt report issued by JMA assigned this aftershock a magnitude of 7.3, whereas the moment magnitude derived from borehole strain data is 7.6, much closer to the actual moment magnitude of 7.7. Several methods are now being proposed to grasp the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunami. Our simple method using static strain changes is a robust means of rapidly estimating the magnitude of large earthquakes, and is useful for improving the accuracy of Earthquake Early Warning.
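
    Once a fault model has been fit to the strain offsets, the geodetic magnitude follows from the standard moment definition M0 = mu * A * D and the Hanks-Kanamori relation. A minimal sketch with hypothetical fault dimensions (not the paper's fault model) lands near the geodetic estimate quoted above:

        import math

        def moment_magnitude(m0_newton_m):
            """Hanks-Kanamori moment magnitude:
            Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
            return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)

        # Hypothetical fault model fitted to the strain offsets:
        # M0 = rigidity * rupture area * average slip.
        mu = 3.0e10              # Pa, typical crustal rigidity
        area = 400e3 * 150e3     # m^2 (assumed rupture dimensions)
        slip = 10.0              # m (assumed average slip)
        print(round(moment_magnitude(mu * area * slip), 1))   # ~8.8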

  4. Evaluating simplified methods for liquefaction assessment for loss estimation

    NASA Astrophysics Data System (ADS)

    Kongar, Indranil; Rossetto, Tiziana; Giovinazzi, Sonia

    2017-06-01

    Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at
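
    The binary performance comparison reduces to counting hits and misses at a chosen LPI threshold. A minimal sketch with hypothetical site data (not the Christchurch dataset):

        import numpy as np

        def forecast_scores(lpi, observed, threshold):
            """True-positive and true-negative rates for a binary
            liquefaction forecast: predict where LPI >= threshold."""
            predicted = lpi >= threshold
            observed = observed.astype(bool)
            tpr = (predicted & observed).sum() / observed.sum()
            tnr = (~predicted & ~observed).sum() / (~observed).sum()
            return tpr, tnr

        # Hypothetical sites: LPI values and 0/1 liquefaction observations.
        lpi = np.array([2.0, 9.5, 7.2, 0.5, 12.1, 4.8, 6.9, 1.1])
        obs = np.array([0, 1, 1, 0, 1, 0, 1, 0])
        print(forecast_scores(lpi, obs, threshold=7))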

  5. Estimating business and residential water supply interruption losses from catastrophic events

    NASA Astrophysics Data System (ADS)

    Brozović, Nicholas; Sunding, David L.; Zilberman, David

    2007-08-01

    Following man-made or natural catastrophes, widespread and long-lasting disruption of lifelines can lead to economic impacts for both business and residential lifeline users. As a result, the total economic losses caused by infrastructure damage may be much higher than the value of damage to the infrastructure itself. In this paper, we consider the estimation of economic impacts on businesses and residential consumers resulting from water supply disruption. The methodology we present for estimating business interruption losses assumes that marginal losses are increasing in the severity of disruption and that there may be a critical water availability cutoff below which business activity ceases. To estimate residential losses from water supply interruption, we integrate consumers' demand curves, calibrated to water agency price and quantity data. Our methodologies are spatially disaggregated and explicitly account for the time profile of infrastructure repair and restoration. As an illustration, we estimate the economic losses to business and residential water users of one of the major water supply systems of the San Francisco Bay Area of California resulting from two potential earthquake scenarios, a magnitude 7.9 event on the San Andreas Fault and a magnitude 7.1 event on the Hayward Fault. For the business loss estimation, our modeling framework is general enough to calculate and compare losses using loss functions from several previous studies. Estimated business and residential losses for the San Andreas event are $14.4 billion and $279 million, respectively. For the Hayward event, estimated business and residential losses are $9.3 billion and $37 million, respectively.
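
    The residential piece of the method, integrating a demand curve calibrated to an agency's price/quantity point, fits in a few lines. The sketch below assumes a constant-elasticity demand form and hypothetical calibration numbers; it computes the consumer surplus lost when supply is rationed.

        import numpy as np

        # Constant-elasticity residential demand calibrated to an
        # agency's observed price/quantity point (numbers hypothetical).
        p0, q0 = 0.5, 100.0   # $/m3, m3 per household per month
        eta = -0.2            # price elasticity of demand

        def inverse_demand(q):
            return p0 * (q / q0) ** (1.0 / eta)

        def welfare_loss(q_restricted):
            """Lost consumer surplus when supply is rationed: area under
            the inverse demand curve between q_restricted and q0, net of
            the expenditure no longer made at the baseline price."""
            q = np.linspace(q_restricted, q0, 1001)
            p = inverse_demand(q)
            area = (0.5 * (p[1:] + p[:-1]) * np.diff(q)).sum()
            return area - p0 * (q0 - q_restricted)

        print(round(welfare_loss(60.0), 2), "$ per household-month at 40% rationing")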

  6. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.
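
    PAGER-style empirical loss models pair shaking exposure with country-specific fatality rates; a common functional choice is a lognormal CDF of intensity. The sketch below illustrates that structure with hypothetical parameters and exposure counts, not PAGER's calibrated values.

        import numpy as np
        from scipy.stats import norm

        def fatality_rate(mmi, theta=12.5, beta=0.25):
            """Fatality rate as a lognormal CDF of shaking intensity, a
            functional form used in PAGER-like empirical models; theta
            and beta stand in for country-specific fitted parameters."""
            return norm.cdf(np.log(mmi / theta) / beta)

        # Hypothetical exposure: people per intensity bin from a ShakeMap.
        mmi_bins = np.array([5.0, 6.0, 7.0, 8.0, 9.0])
        exposure = np.array([2e6, 8e5, 3e5, 6e4, 5e3])
        expected = (exposure * fatality_rate(mmi_bins)).sum()
        print("expected fatalities:", int(expected))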

  7. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; considering such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude at which the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
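
    The product form of the model is easy to write down: a Gutenberg-Richter density multiplied by a normal-CDF detection rate whose mean is the 50%-detection magnitude. The sketch below evaluates that detected-magnitude distribution for two hypothetical detection levels (the Bayesian spline smoothing of the paper is omitted).

        import numpy as np
        from scipy.stats import norm

        def detected_magnitude_pdf(m, beta, mu, sigma):
            """Ogata & Katsura (1993)-style model: a Gutenberg-Richter
            density exp(-beta*m) multiplied by a normal-CDF detection
            rate whose mean mu is the 50%-detection magnitude."""
            raw = np.exp(-beta * m) * norm.cdf(m, loc=mu, scale=sigma)
            return raw / (raw.sum() * (m[1] - m[0]))  # numeric normalization

        m = np.linspace(-1.0, 6.0, 1401)
        # Hypothetical weekday vs weekend 50%-detection magnitudes
        # (noisier weekdays -> higher mu).
        pdf_weekday = detected_magnitude_pdf(m, beta=2.0, mu=1.2, sigma=0.3)
        pdf_weekend = detected_magnitude_pdf(m, beta=2.0, mu=0.9, sigma=0.3)
        print("weekday mode M =", round(m[np.argmax(pdf_weekday)], 2))
        print("weekend mode M =", round(m[np.argmax(pdf_weekend)], 2))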

  8. Uncertainty of earthquake losses due to model uncertainty of input ground motions in the Los Angeles area

    USGS Publications Warehouse

    Cao, T.; Petersen, M.D.

    2006-01-01

    In a recent study we used the Monte Carlo simulation method to evaluate the ground-motion uncertainty of the 2002 update of the California probabilistic seismic hazard model. The resulting ground-motion distribution is used in this article to evaluate the contribution of the hazard model to the uncertainty in earthquake loss ratio, the ratio of the expected loss to the total value of a structure. We use the Hazards U.S. (HAZUS) methodology for loss estimation because it is a widely used, publicly available risk model intended for regional studies by public agencies and for use by governmental decision makers. We found that the loss ratio uncertainty depends not only on the ground-motion uncertainty but also on the mean ground-motion level. The ground-motion uncertainty, as measured by the coefficient of variation (COV), is amplified when converted to the loss ratio uncertainty because loss increases nonlinearly (concave upward) with ground motion. By comparing the ground-motion uncertainty with the corresponding loss ratio uncertainty for the structural damage of light wood-frame buildings in the Los Angeles area, we show that the COV of loss ratio is almost twice the COV of ground motion with a return period of 475 years around the San Andreas fault and other major faults in the area. The loss ratio for the 2475-year ground-motion maps is about a factor of three higher than for the 475-year maps. However, the uncertainties in ground motion and loss ratio for the longer return periods are lower than for the shorter return periods because the uncertainty parameters in the hazard logic tree are independent of the return period, but the mean ground motion increases with return period.
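
    The roughly factor-of-two COV amplification can be reproduced with a toy Monte Carlo experiment: sample lognormal ground motions, push them through a convex (concave-upward) vulnerability curve, and compare COVs. All numbers and the curve shape below are hypothetical, not the HAZUS functions.

        import numpy as np

        rng = np.random.default_rng(1)

        # Lognormal ground-motion uncertainty around a mean hazard level.
        mean_pga, cov_gm = 0.4, 0.3          # g, coefficient of variation
        sigma_ln = np.sqrt(np.log(1.0 + cov_gm**2))
        pga = mean_pga * rng.lognormal(-0.5 * sigma_ln**2, sigma_ln, 100_000)

        # Toy vulnerability curve, concave upward: loss ratio rises
        # steeply with PGA and saturates at 1 (shape is hypothetical).
        loss_ratio = np.clip((pga / 1.2) ** 2.0, 0.0, 1.0)

        cov_loss = loss_ratio.std() / loss_ratio.mean()
        print("COV(ground motion) =", cov_gm,
              "-> COV(loss ratio) =", round(cov_loss, 2))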

  9. Estimation of economic losses caused by disruption of lifeline service: An analysis of the Memphis Light, Gas and Water system

    SciTech Connect

    Chang, S.E.; Seligson, H.A.; Eguchi, R.T.

    1995-12-31

    The assessment of economic impact remains an important missing link in earthquake loss estimation procedures. This paper presents a general methodology for evaluating the economic losses caused by seismically-induced disruption of lifeline service in an urban area. The methodology consists of three steps: (1) development of a lifeline usage model on an industry basis; (2) estimation of the spatial distribution of economic activity throughout the urban area; and (3) assessment of direct losses through evaluation of the spatial coincidence of economic activity with lifeline service disruption. To demonstrate this methodology, a pilot analysis was conducted on the Memphis Light, Gas and Water electric power system for a magnitude 7.5 earthquake in the New Madrid seismic zone. Using newly-available empirical data, business interruption in Shelby County, Tennessee, was estimated for major industries in the local economy. Extensions of the methodology are also discussed.

  10. Building vulnerability and human loss assessment in different earthquake intensity and time: a case study of the University of the Philippines, Los Baños (UPLB) Campus

    NASA Astrophysics Data System (ADS)

    Rusydy, I.; Faustino-Eslava, D. V.; Muksin, U.; Gallardo-Zafra, R.; Aguirre, J. J. C.; Bantayan, N. C.; Alam, L.; Dakey, S.

    2017-02-01

    Studies of seismic hazard, building vulnerability and human loss become substantial for educational institutions, since their buildings are used by many students, lecturers, researchers, and guests. The University of the Philippines, Los Baños (UPLB) is located in an earthquake-prone area. An earthquake could cause structural damage and injuries to the UPLB community. We conducted earthquake assessments for different magnitudes and occupancy times to predict ground shaking, building vulnerability, and the expected number of casualties in the UPLB community. Data preparation in this study included earthquake scenario modeling using Intensity Prediction Equations (IPEs) for shallow crustal shaking attenuation to produce intensity maps for bedrock and the surface. Earthquake models were generated for segment IV and segment X of the Valley Fault System (VFS). The vulnerability of different building types was calculated using fragility curves for Philippine buildings. Population data for each building at various occupancy times, together with damage ratios and injury ratios, were used to compute the number of casualties. The results reveal that earthquakes on segment IV and segment X of the VFS could generate intensities between 7.6 and 8.1 MMI on the UPLB campus. A 7.7 Mw earthquake on segment IV (scenario I) could damage 32%-51% of buildings, and a 6.5 Mw earthquake on segment X (scenario II) could cause structural damage to 18%-39% of UPLB buildings. If the earthquake occurs at 2 PM (daytime), it could injure 10.2%-18.8% of the UPLB population in scenario I and 7.2%-15.6% in scenario II. A 5 PM event is predicted to injure 5.1%-9.4% in scenario I and 3.6%-7.8% in scenario II. A nighttime event (2 AM) would cause injuries to students and guests staying in dormitories; the earthquake is predicted to injure 13-66 students and guests in scenario I and 9-47 people in

  11. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to a failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
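
    The BPT distribution is the inverse Gaussian, so time-dependent probabilities can be computed directly with scipy; the mapping of (μ, α) onto scipy's parameterization is shown below, while the elapsed-time numbers are a Parkfield-like hypothetical, not the paper's calculation.

        from scipy.stats import invgauss

        def bpt_conditional_prob(mu, alpha, t_elapsed, dt):
            """P(event in (t, t+dt] | no event by t) for the Brownian
            passage time distribution with mean mu and aperiodicity
            alpha. BPT is the inverse Gaussian; in scipy's form the
            shape parameter is alpha**2 and the scale is mu/alpha**2."""
            dist = invgauss(alpha**2, scale=mu / alpha**2)
            return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

        # Hypothetical numbers: 25-yr mean recurrence, alpha = 0.5,
        # 33 years elapsed; probability of failure within one year.
        print(round(bpt_conditional_prob(25.0, 0.5, 33.0, 1.0), 3))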

  12. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquakes are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.
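
    The Genetic Algorithm Slip Estimator itself is not reproduced here, but the general shape of a GA slip inversion fits in a few lines: candidate slip vectors evolve under selection on data misfit through a linear forward model. Everything below (the Green's function matrix, GA settings) is hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic forward problem d = G @ slip; G is a made-up kernel
        # standing in for elastic dislocation Green's functions.
        n_obs, n_patch = 12, 8
        x_obs = np.linspace(0.0, 1.0, n_obs)
        x_patch = np.linspace(0.0, 1.0, n_patch)
        G = np.exp(-np.abs(np.subtract.outer(x_obs, x_patch)) / 0.2)
        true_slip = np.array([0, 1, 4, 6, 3, 1, 0, 0], dtype=float)
        d_obs = G @ true_slip + rng.normal(0.0, 0.05, n_obs)

        def fitness(pop):            # negative misfit: higher is better
            return -np.linalg.norm(d_obs - pop @ G.T, axis=1)

        pop = rng.uniform(0.0, 8.0, (200, n_patch))
        for _ in range(300):
            f = fitness(pop)
            i, j = rng.integers(0, len(pop), (2, len(pop)))        # tournaments
            parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
            mask = rng.random(parents.shape) < 0.5                 # crossover
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            children += rng.normal(0.0, 0.2, children.shape)       # mutation
            pop = np.clip(children, 0.0, None)                     # slip >= 0

        best = pop[np.argmax(fitness(pop))]
        print(np.round(best, 1))   # compare with true_slip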

  13. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  15. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes best-estimate values of the probability distribution of different magnitude earthquakes recurring on a fault, drawn from literature sources. Our study aims to apply this model to the Taipei metropolitan area, which has a population of 7 million, lies in the Taipei Basin, and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring within the next 20 years is estimated at 3.61%, within 79 years at 13.54% and within 300 years at 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probability of Taiwan residents experiencing heart disease or malignant neoplasm is 11.5% and 29%, respectively. The inference of this study is that the risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is just as great as the risk of suffering from a heart attack or other health ailments.
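
    The basic Poisson layer of such a model is P = 1 - exp(-t/tau) for return period tau; for example, with tau = 543 years this form reproduces the 3.61%, 13.54% and 42.45% figures quoted above. The revised model in the abstract layers a magnitude-recurrence distribution on top of this, so the sketch below is illustrative only.

        import math

        def poisson_prob(t_years, return_period_years):
            """Probability of at least one event in t years for a
            Poisson process with the given mean return period."""
            return 1.0 - math.exp(-t_years / return_period_years)

        for t in (20, 79, 300):
            print(t, "yr:", round(100 * poisson_prob(t, 543.0), 2), "%")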

  16. Inter-plate aseismic slip on the subducting plate boundaries estimated from repeating earthquakes

    NASA Astrophysics Data System (ADS)

    Igarashi, T.

    2015-12-01

    Sequences of repeating earthquakes are caused by repeated slip of small patches surrounded by aseismic slip areas in plate boundary zones. Recently, they have been detected in many regions. In this study, I detected repeating earthquakes that occurred in Japan and around the world using seismograms observed by the Japanese seismic network, and investigated the space-time characteristics of inter-plate aseismic slip on the subducting plate boundaries. To extract repeating earthquakes, I calculate cross-correlation coefficients of band-pass filtered seismograms at each station following Igarashi [2010]. I used two datasets, based on the USGS catalog for about 25 years from May 1990 and the JMA catalog for about 13 years from January 2002. As a result, I found many sequences of repeating earthquakes on the subducting plate boundaries of the Andaman-Sumatra-Java and Japan-Kuril-Kamchatka-Aleutian subduction zones. Through the scaling relations among seismic moment, recurrence interval and slip proposed by Nadeau and Johnson [1998], these sequences indicate the space-time changes of inter-plate aseismic slip. The pair of repeating earthquakes with the longest time interval occurred in the Solomon Islands area, with a recurrence interval of about 18.5 years. The estimated slip rate is about 46 mm/year, which corresponds to about half of the relative plate motion in this area. Several sequences with fast slip rates correspond to the post-seismic slip after the 2004 Sumatra-Andaman earthquake (M9.0), the 2006 Kuril earthquake (M8.3), the 2007 southern Sumatra earthquake (M8.5), and the 2011 Tohoku-oki earthquake (M9.0). A database of global repeating earthquakes enables the comparison of inter-plate aseismic slip across the various plate boundary zones of the world. More sequences are likely to be detected by extending the analysis periods in areas where none were found in this analysis.
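
    The slip-rate conversion is a one-liner once the scaling relation is fixed. The sketch below uses the commonly quoted Nadeau and Johnson (1998) form, with a hypothetical moment chosen so that an 18.5-year recurrence gives roughly the 46 mm/yr mentioned above.

        def nadeau_johnson_slip_cm(m0_dyne_cm):
            """Slip per repeating event from the commonly quoted
            Nadeau & Johnson (1998) scaling d = 10**(-2.36) * M0**0.17
            (d in cm, M0 in dyne-cm)."""
            return 10.0 ** (-2.36) * m0_dyne_cm ** 0.17

        def aseismic_slip_rate_mm_yr(m0_dyne_cm, recurrence_yr):
            return 10.0 * nadeau_johnson_slip_cm(m0_dyne_cm) / recurrence_yr

        # Hypothetical M ~ 6 repeater (M0 ~ 1.6e25 dyne-cm) recurring
        # every 18.5 yr, echoing the Solomon Islands sequence above.
        print(round(aseismic_slip_rate_mm_yr(1.6e25, 18.5), 1), "mm/yr")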

  17. Estimating blood transfusion requirements in preparation for a major earthquake: the Tehran, Iran study.

    PubMed

    Tabatabaie, Morteza; Ardalan, Ali; Abolghasemi, Hassan; Holakouie Naieni, Kourosh; Pourmalek, Farshad; Ahmadi, Batool; Shokouhi, Mostafa

    2010-01-01

    Tehran, Iran, with a population of approximately seven million people, is at very high risk for a devastating earthquake. This study aims to estimate the number of units of blood required at the time of such an earthquake. To model the damage from an earthquake in Tehran, the researchers applied the Centre for Earthquake and Environmental Studies of Tehran/Japan International Cooperation Agency (CEST/JICA) fault-activation scenarios, and accordingly estimated the injury-to-death ratio (IDR), hospital admission rate (HAR), and blood transfusion rate (BTR). The data were based on Iran's major earthquakes during the last two decades. The following values were considered for the analysis: (1) IDR = 1, 2, and 3; (2) HAR = 0.25 and 0.35; and (3) BTR = 0.05, 0.07, and 0.10. The American Association of Blood Banks' formula was adapted to calculate the total required number of Type-O red blood cell (RBC) units. Calculations relied on the following assumptions: (1) no change in Tehran's vulnerability since the CEST/JICA study; (2) no functional damage to the Tehran Blood Transfusion Post; and (3) standards of blood safety are maintained during the disaster response. Surge capacity was estimated based on the Bam earthquake experience. The maximum, optimum, and minimum blood deficits were calculated accordingly. No deficit was estimated in the case of Mosha fault activation or the optimum scenario of the North Tehran fault. The maximum blood deficit was estimated for activation of the Ray fault, requiring up to 107,293 and 95,127 units for the 0-24 hour and the 24-72 hour periods after the earthquake, respectively. The optimum deficit was estimated at up to 46,824 and 16,528 units for the 0-24 hour and 24-72 hour periods after the earthquake, respectively. In most Tehran earthquake scenarios, the estimated blood shortage would exceed the surge capacity of all blood transfusion posts around the country within the first three days, as it might require 2-8 times more than what the system had produced

  18. Estimation of blood loss is inaccurate and unreliable.

    PubMed

    Rothermel, Luke D; Lipman, Jeremy M

    2016-10-01

    To determine the characteristics associated with improved accuracy or reliability of estimating operative blood loss, operating room personnel at a tertiary care hospital evaluated 3 operative simulations and provided estimations of blood loss. The simulations utilized precise, known volumes of porcine blood and saline on tapes, sponges, and in suction containers. Low volume (50 mL), mid volume (300 mL), and high volume (900 mL) blood loss scenarios were used in this simulation. Information collected included the blood loss estimation, the participant's occupation, years of experience in the operating room, confidence level in estimating blood loss, and their opinion as to which group would provide the most accurate estimation. Sixty practitioners participated: 17 anesthesia providers, 22 surgeons, and 21 nurses and technicians. Overall, estimations were significantly inaccurate: scenario 1, mean error 52%; scenario 2, mean error 61%; scenario 3, mean error 85%. Ninety-five percent of participants provided estimations that had >25% error in at least 1 scenario. Only 27% demonstrated consistency in over- or under-estimating the blood loss. There was no association of specialty, years of experience, or confidence in ability with the consistency or accuracy of estimating blood loss. This study demonstrates that visual estimation of operative blood loss is unreliable and inaccurate. No provider specialty, level of experience, or self-assessment of ability was associated with improved estimation. Blood loss estimations are not a reliable metric by which to judge physician performance or patient outcomes. Consideration should be given to alternative reporting of operative blood loss to better direct perioperative care. Copyright © 2016. Published by Elsevier Inc.

  19. Problems of seismic hazard estimation in regions with few large earthquakes: Examples from eastern Canada

    NASA Astrophysics Data System (ADS)

    Basham, P. W.; Adams, John

    1989-10-01

    Seismic hazard estimates and seismic zoning maps are based on an assessment of historical and recent seismicity and any correlations with geologic and tectonic features that might define the earthquake potential. Evidence is accumulating that the large earthquakes in eastern Canada (M ~ 7) may be associated with the rift systems that surround or break the integrity of the North American craton. The problem for seismic hazard estimation is that the larger historical earthquakes are not uniformly distributed along the Paleozoic St. Lawrence-Ottawa rift system and are too rare on the Mesozoic eastern margin rift to assess the overall seismogenic potential. Multiple source zone models for hazard estimation could include hypotheses of future M = 7 earthquakes at any location along these rift systems, but at a moderate probability (such as that used in the Canadian zoning maps) the resultant hazard will be so diluted that it will not result in adequate design against the near-source effects of such earthquakes. The near-source effects of large, rare earthquakes can, however, be accommodated in conservative codes and standards for critical facilities, if society is willing to pay the price.

  20. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. in the Bulletin of the Seismological Society of America 103 (2013), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates from PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using
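
    Inverting a GMPE for magnitude is straightforward once its coefficients are fixed: solve the attenuation equation for M at each station and average. The sketch below uses a generic functional form with placeholder coefficients and invented observations, not the GMPEs developed in the study.

        import numpy as np

        # Generic GMPE form log10(PGV) = a + b*M + c*log10(R), inverted
        # for magnitude station by station and averaged. Coefficients
        # are placeholders, not those of Eshaghi et al.
        a, b, c = -4.0, 1.0, -1.5

        def station_magnitude(pgv_cm_s, r_km):
            return (np.log10(pgv_cm_s) - a - c * np.log10(r_km)) / b

        pgv = np.array([30.0, 12.0, 5.0, 2.0])       # observed PGV, cm/s
        r = np.array([80.0, 150.0, 300.0, 600.0])    # source distance, km
        print("Mpgv estimate:", round(station_magnitude(pgv, r).mean(), 1))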

  1. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    SciTech Connect

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  2. Anomalous ULF signals and their possibility to estimate the earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Armansyah, Ahadi, Suadi

    2017-07-01

    Ultra Low Frequency (ULF) geomagnetic data were observed for several days prior to the occurrence of an earthquake. The earthquakes investigated were located within Indonesian territory, Jayapura Regency, Papua Province, with epicentral distance and depth each less than 50 km, and had magnitudes of 4 and above. The geomagnetic data were processed using the polarization power ratio Z/H method to detect ULF anomalies as earthquake precursors. The processing and analysis yielded interesting results, showing a strong correlation (0.852) between earthquake magnitude and the ULF amplitude anomalies. This correlation suggests that it may be possible to estimate the magnitude of an earthquake that is going to occur based on the power ratio Z/H amplitude anomalies detected.
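
    The polarization quantity itself is a band-limited spectral power ratio of the vertical to horizontal field components. The sketch below computes it with a Welch spectral estimate over a hypothetical ULF band and synthetic noise records; the band limits, sampling rate, and data are all assumptions.

        import numpy as np
        from scipy.signal import welch

        def zh_power_ratio(z, h, fs, band=(0.01, 0.1)):
            """Ratio of vertical (Z) to horizontal (H) geomagnetic power
            in a ULF band, the polarization quantity tracked above."""
            f, pz = welch(z, fs=fs, nperseg=1024)
            _, ph = welch(h, fs=fs, nperseg=1024)
            sel = (f >= band[0]) & (f <= band[1])
            return pz[sel].sum() / ph[sel].sum()

        # Synthetic one-hour records at 1 Hz sampling (hypothetical noise).
        rng = np.random.default_rng(2)
        z = rng.normal(0.0, 1.0, 3600)
        h = rng.normal(0.0, 2.0, 3600)
        print(round(zh_power_ratio(z, h, fs=1.0), 3))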

  3. A Probabilistic Estimate of the Most Perceptible Earthquake Magnitudes in the NW Himalaya and Adjoining Regions

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Koravos, G. Ch.; Tsapanos, T. M.; Vougiouka, G. E.

    2015-02-01

    NW Himalaya and its neighboring region (25°-40°N and 65°-85°E) is one of the most seismically hazardous regions in the Indian subcontinent, a region that has historically experienced large to great damaging earthquakes. In the present study, the most perceptible earthquake magnitudes, Mp, are estimated for intensity I = VII, horizontal peak ground acceleration a = 300 cm/s2 and horizontal peak ground velocity v = 10 cm/s in 28 seismogenic zones using the two earthquake recurrence models of Kijko and Sellevoll (Bulletin of the Seismological Society of America 82(1):120-134, 1992) and Gumbel's third asymptotic distribution of extremes (GIII). Both methods deal with maximum magnitudes. The earthquake perceptibility is calculated by combining earthquake recurrence models with ground motion attenuation relations at a particular level of intensity, acceleration and velocity. The estimated results reveal that the values of Mp for velocity v = 10 cm/s are higher than the corresponding values for intensity I = VII and acceleration a = 300 cm/s2. It is also observed that differences between perceptible magnitudes calculated by the Kijko-Sellevoll method and by GIII statistics reach significantly high values, up to 0.7, 0.6 and 1.7 for intensity, acceleration and velocity, respectively, revealing the importance of earthquake recurrence model selection. The estimated most perceptible earthquake magnitudes, Mp, in the present study vary from Mw 5.1 to 7.7 across the entire study area. Results for perceptible magnitudes are also represented as spatial maps over the 28 seismogenic zones for the aforementioned threshold levels of intensity, acceleration and velocity, estimated from the two recurrence models. The spatial maps show that the Quetta region of Pakistan, the Hindukush-Pamir Himalaya, the Caucasus mountain belt and the Himalayan frontal thrust belt (Kashmir-Kangra-Uttarkashi-Chamoli regions) exhibit higher values of the most perceptible earthquake magnitudes (M

  4. USGS approach to real-time estimation of earthquake-triggered ground failure - Results of 2015 workshop

    USGS Publications Warehouse

    Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.

    2016-03-30

    The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.

  5. Estimates of disturbances of space vehicle orbits in the upper ionosphere prior to strong earthquakes

    NASA Astrophysics Data System (ADS)

    Tertyshnikov, A. V.; Skripachev, V. O.

    2009-10-01

    Estimates of the drag characteristics of space vehicles with orbit heights of 450-540 and 700-900 km before and after strong (magnitude M ≥ 6.5) crustal earthquakes of 2000-2006 are presented. The method of estimating seismic orbital effects is illustrated using the small Mozhaets-4 spacecraft as an example. Two weeks prior to earthquakes, variations in the drag of low-orbit spacecraft increase. Three to six days prior to strong crustal earthquakes with epicenters on land, the drag of low-orbit spacecraft in the upper atmosphere increases. The effect of increased viscosity of the neutral component of the atmosphere at spacecraft heights 3-6 days prior to strong crustal earthquakes is consistent with the results of studies of disturbances in ionization density variations in the ionospheric F region prior to earthquakes. No anomalies are found on the day of the earthquake. In the future, it is proposed to use elements of space debris for diagnostics of seismic orbital effects and disturbances of the upper atmosphere.

  6. Are Lowered Socioeconomic Circumstances Causally Related to Tooth Loss? A Natural Experiment Involving the 2011 Great East Japan Earthquake.

    PubMed

    Matsuyama, Yusuke; Aida, Jun; Tsuboya, Toru; Hikichi, Hiroyuki; Kondo, Katsunori; Kawachi, Ichiro; Osaka, Ken

    2017-07-01

    Oral health status is correlated with socioeconomic status. However, the causal nature of the relationship is not established. Here we describe a natural experiment involving deteriorating socioeconomic circumstances following exposure to the 2011 Great East Japan Earthquake and Tsunami. We investigated the relationship between subjective economic deterioration and housing damage due to the disaster and tooth loss in a cohort of community-dwelling residents (n = 3,039), from whom we obtained information about socioeconomic status and health status in 2010 (i.e., predating the disaster). A follow-up survey was performed in 2013 (postdisaster), and 82.1% of the 4,380 eligible survivors responded. We estimated the impact of subjective economic deterioration and housing damage due to the disaster on tooth loss by fitting an instrumental variable probit model. Subjective economic deterioration and housing damage due to the disaster were significantly associated with 8.1% and 1.7% increases in the probability of tooth loss (probit coefficients were 0.469 (95% confidence interval: 0.065, 0.872) and 0.103 (95% confidence interval: 0.011, 0.196), respectively). In this natural experiment, we confirmed the causal relationship between deteriorating socioeconomic circumstances and tooth loss. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Fault model of the 1771 Yaeyama earthquake along the Ryukyu Trench estimated from the devastating tsunami

    NASA Astrophysics Data System (ADS)

    Nakamura, Mamoru

    2009-10-01

    The 24 April 1771 Yaeyama earthquake generated a large tsunami with a maximum runup of 30 m, causing significant damage in south Ryukyu, Japan, despite the weak ground shaking. Previously proposed mechanisms of the tsunami include intraplate faulting or submarine landslide in the forearc slope. In this study, I estimate the fault parameters of the 1771 earthquake by numerically computing the tsunami heights and comparing them with the recorded heights. The result indicates that the source fault of the tsunami is very close to the Ryukyu Trench. The results are consistent with a thrust-faulting earthquake that had a fault-width of less than 50 km. The 1771 Yaeyama tsunami was caused by a tsunami earthquake (Mw = 8.0) that occurred in the subducted sediments beneath the accretionary wedge.

  8. Using Modified Mercalli Intensities to estimate acceleration response spectra for the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Boatwright, J.; Bundock, H.; Seekins, L.C.

    2006-01-01

    We derive and test relations between the Modified Mercalli Intensity (MMI) and the pseudo-acceleration response spectra at 1.0 and 0.3 s - SA(1.0 s) and SA(0.3 s) - in order to map response spectral ordinates for the 1906 San Francisco earthquake. Recent analyses of intensity have shown that MMI ≥ 6 correlates both with peak ground velocity and with response spectra for periods from 0.5 to 3.0 s. We use these recent results to derive a linear relation between MMI and log SA(1.0 s), and we refine this relation by comparing the SA(1.0 s) estimated from Boatwright and Bundock's (2005) MMI map for the 1906 earthquake to the SA(1.0 s) calculated from recordings of the 1989 Loma Prieta earthquake. South of San Jose, the intensity distributions for the 1906 and 1989 earthquakes are remarkably similar, despite the difference in magnitude and rupture extent between the two events. We use recent strong motion regressions to derive a relation between SA(1.0 s) and SA(0.3 s) for a M7.8 strike-slip earthquake that depends on soil type, acceleration level, and source distance. We test this relation by comparing SA(0.3 s) estimated for the 1906 earthquake to SA(0.3 s) calculated from recordings of both the 1989 Loma Prieta and 1994 Northridge earthquakes, as functions of distance from the fault. © 2006, Earthquake Engineering Research Institute.

  9. Housing type after the Great East Japan Earthquake and loss of motor function in elderly victims: a prospective observational study

    PubMed Central

    Tomata, Yasutake; Kogure, Mana; Sugawara, Yumi; Watanabe, Takashi; Asaka, Tadayoshi; Tsuji, Ichiro

    2016-01-01

    Objective: Previous studies have reported that elderly victims of natural disasters might be prone to a subsequent decline in motor function. Victims of the Great East Japan Earthquake (GEJE) relocated to a wide range of different types of housing. As the evacuee lifestyle varies according to the type of housing available to them, their degree of motor function loss might also vary accordingly. However, the association between postdisaster housing type and loss of motor function has never been investigated. The present study was conducted to investigate the association between housing type after the GEJE and loss of motor function in elderly victims. Methods: We conducted a prospective observational study of 478 Japanese individuals aged ≥65 years living in Miyagi Prefecture, one of the areas most significantly affected by the GEJE. Information on housing type after the GEJE, motor function as assessed by the Kihon checklist, and other lifestyle factors was collected by interview and questionnaire in 2012. Information on motor function was then collected 1 year later. A multiple logistic regression model was used to estimate the multivariate-adjusted ORs of motor function loss. Results: We classified 53 (11.1%) of the respondents as having loss of motor function. The multivariate-adjusted OR (with 95% CI) for loss of motor function among participants who were living in privately rented temporary housing/rental housing was 2.62 (1.10 to 6.24) compared to those who had remained in the same housing as before the GEJE, and this increase was statistically significant. Conclusions: The proportion of individuals with loss of motor function was higher among persons who had relocated to privately rented temporary housing/rental housing after the GEJE. This result may reflect the influence of a move to a living environment where few acquaintances are located (lack of social capital). PMID:27810976

  10. Coastal land loss and gain as potential earthquake trigger mechanism in SCRs

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2007-12-01

    In stable continental regions (SCRs), historic data show that earthquakes can be triggered by natural tectonic sources in the interior of the crust and also by sources stemming from the Earth's sub/surface. Building on this framework, the following abstract discusses both as potential sources that might have triggered the 2007 ML4.2 Folkestone earthquake in Kent, England. Folkestone, located along the southeast coast of Kent in England, is a mature aseismic region. However, a shallow earthquake with a local magnitude of ML = 4.2 occurred on 28 April 2007 at 07:18 UTC about 1 km east of Folkestone (51.008° N, 1.206° E) between Dover and New Romney. The epicentral error is about ±5 km. While coastal land loss has major effects towards the southwest and the northeast of Folkestone, research observations suggest that erosion and landsliding do not exist in the immediate Folkestone city area (<1 km). Furthermore, erosion removes rock material from the surface. This mass reduction decreases the gravitational stress component and would bring a fault away from failure, given a tectonic normal and strike-slip fault regime. In contrast, land gain by geoengineering (e.g., shingle accumulation) in the harbor of Folkestone dates back to 1806. The accumulated mass of sand and gravel amounted to 2.8·10^9 kg (2.8 Mt) in 2007. This concentrated mass change, less than 1 km away from the epicenter of the mainshock, was able to change the tectonic stress in the strike-slip/normal stress regime. Since 1806, shear and normal stresses increased most on oblique faults dipping 60±10°. The stresses reached values ranging between 1.0 kPa and 30.0 kPa at depths of up to 2 km, which are critical for triggering earthquakes. Furthermore, the ratio between holding and driving forces continuously decreased for 200 years. In conclusion, coastal engineering at the surface most likely dominates as the potential trigger mechanism for the 2007 ML4.2 Folkestone earthquake. It can be anticipated that

  11. Communicating Earthquake Preparedness: The Influence of Induced Mood, Perceived Risk, and Gain or Loss Frames on Homeowners' Attitudes Toward General Precautionary Measures for Earthquakes.

    PubMed

    Marti, Michèle; Stauffacher, Michael; Matthes, Jörg; Wiemer, Stefan

    2017-08-11

    Despite global efforts to reduce seismic risk, actual preparedness levels remain universally low. Although earthquake-resistant building design is the most efficient way to decrease potential losses, it is not a legal requirement in all earthquake-prone countries, and even where it is, it is often not strictly enforced. Risk communication encouraging homeowners to take precautionary measures is therefore an important means of enhancing a country's earthquake resilience. Our study illustrates that specific interactions of mood, perceived risk, and frame type significantly affect homeowners' attitudes toward general precautionary measures for earthquakes. The interdependencies of the variables mood, risk information, and frame type were tested in an experimental 2 × 2 × 2 design (N = 156). Only in combination, and not on their own, do these variables effectively influence attitudes toward general precautionary measures for earthquakes. The control variables gender, "trait anxiety" index, and alteration of perceived risk adjust the effect. Overall, the group with the strongest attitudes toward general precautionary actions for earthquakes comprises homeowners with induced negative mood who process high-risk information and gain-framed messages. However, the combinations of induced negative mood, low-risk information, and loss-framed messages, and of induced positive mood, low-risk information, and gain-framed messages, also significantly influence homeowners' attitudes toward general precautionary measures for earthquakes. These results mostly confirm previous findings in the field of health communication. For practitioners, our study emphasizes that carefully compiled communication measures are a powerful means to encourage precautionary attitudes among homeowners, especially those with an elevated perceived risk. © 2017 Society for Risk Analysis.

  12. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite-fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  13. Earthquakes

    EPA Pesticide Factsheets

    Information on this page will help you understand environmental dangers related to earthquakes and what you can do to prepare and recover. It will also help you recognize possible environmental hazards and learn what you can do to protect yourself and your family.

  14. A discussion of the socio-economic losses and shelter impacts from the Van, Turkey Earthquakes of October and November 2011

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Kunz-Plapp, T.; Vervaeck, A.; Muehr, B.; Markus, M.

    2012-04-01

    The Van earthquake in 2011 hit at 10:41 GMT (13:41 local) on Sunday, October 23rd, 2011. It was a Mw 7.1-7.3 event located at a depth of around 10 km, with the epicentre located directly between Ercis (pop. 75,000) and Van (pop. 370,000). Since then, the CEDIM Forensic Analysis Group (a team of seismologists, engineers, sociologists and meteorologists) and www.earthquake-report.com have reported on and analysed the Van event. In addition, many damaging aftershocks occurring after the main event were analysed, including a major aftershock centered in Van-Edremit on November 9th, 2011, which caused substantial additional losses. The province of Van has around 1.035 million people as of the last census. The Van province is one of the poorest in Turkey and has much inequality between the rural and urban centers, with an average HDI (Human Development Index) around that of Bhutan or Congo. The earthquakes are estimated to have caused 604 deaths (23 October) and 40 deaths (9 November), mostly due to falling debris and house collapse. In addition, between 1 billion TRY and 4 billion TRY (approx. 555 million USD to 2.2 billion USD) is estimated as total economic losses. This represents around 17 to 66% of the provincial GDP of the Van Province (approx. 3.3 billion USD) as of 2011. The CATDAT Damaging Earthquakes Database records a history of comparable events: in the year 1111, a major earthquake of magnitude around 6.5-7 caused major damage; in 1646 or 1648, Van was again struck by a M6.7 quake, killing around 2000 people; in 1881, a M6.3 earthquake near Van killed 95 people; in 1941, a M5.9 earthquake affected Ercis and Van, killing between 190 and 430 people; 1945-1946 and 1972 again brought damaging, casualty-bearing earthquakes to the Van province; and in 1976, the Van-Muradiye earthquake struck the border region with a M7, killing around 3840 people and leaving around 51,000 homeless. Key immediate lessons from similar historic

  15. Estimation of flood losses to agricultural crops using remote sensing

    NASA Astrophysics Data System (ADS)

    Tapia-Silva, Felipe-Omar; Itzerott, Sibylle; Foerster, Saskia; Kuhlmann, Bernd; Kreibich, Heidi

    2011-01-01

    The estimation of flood damage is an important component of risk-oriented flood design, risk mapping, financial appraisals and comparative risk analyses. However, research on flood loss modelling, especially in the agricultural sector, has not yet gained much attention. Agricultural losses strongly depend on the crops affected, which need to be predicted accurately. Therefore, three different methods to predict flood-affected crops using remote sensing and ancillary data were developed, applied and validated. These methods are: (a) a hierarchical classification based on standard curves of spectral response using satellite images, (b) disaggregation of crop statistics using a Monte Carlo simulation and probabilities of crops being cultivated on specific soils, and (c) analysis of crop rotation with data-mining Net Bayesian Classifiers (NBC) using soil data and crop data derived from a multi-year satellite image analysis. A flood loss estimation model for crops was applied and validated in flood detention areas (polders) at the Havel River (Untere Havelniederung) in Germany, which were used for temporary storage of flood water during the extreme flood event in August 2002. The flood loss to crops during this event was estimated based on the results of the three crop prediction methods, and the loss estimates were then compared with official loss data for validation purposes. The analysis of crop rotation with NBC obtained the best result, with 66% of crops correctly classified. The accuracy of the other methods reached 34% for identification using Normalized Difference Vegetation Index (NDVI) standard curves and 19% using disaggregation of crop statistics. The results were confirmed by evaluating the loss estimation procedure, in which the damage model using affected crops estimated by NBC showed the smallest overall deviation (1%) when compared to the official losses. Remote sensing offers various possibilities for the improvement of

  16. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional Mo or regional Mw. Conceptually, this method combines an earthquake source model with an attenuation model and geometrical spreading that account for propagation, allowing regional amplitudes of any phase and frequency to be used. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined from sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional Mo to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller ones. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find the magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields the more meaningful quantity of seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
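
    The correction chain the abstract describes (strip geometrical spreading and attenuation from an observed phase amplitude, recover a source term, convert to moment, then average over phases and frequencies) can be sketched as follows. The spreading exponent, Q model, excitation constant, and sample observations are illustrative stand-ins, not the paper's calibrated values.

```python
# Sketch of the amplitude-correction chain: propagation effects are
# removed from each regional phase amplitude to recover a source term,
# which is converted to seismic moment and averaged. All constants are
# illustrative placeholders for a calibrated regional model.
import math

def moment_from_amplitude(A_obs, r_km, f_hz, Q=400.0, beta_kms=3.5,
                          gamma=0.5, excitation=1.0e-20):
    """Return one seismic moment estimate (N*m) from a phase amplitude."""
    spreading = r_km ** (-gamma)                      # geometric spreading
    attenuation = math.exp(-math.pi * f_hz * r_km / (Q * beta_kms))
    source_term = A_obs / (spreading * attenuation)   # corrected amplitude
    return source_term / excitation                   # moment via excitation

def mw_from_moment(M0_Nm):
    """Hanks & Kanamori moment magnitude, M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(M0_Nm) - 9.1)

# Average the moment over several phase/frequency observations
# (amplitude, distance in km, frequency in Hz); values are invented:
obs = [(2.1e-4, 300.0, 1.0), (1.6e-4, 300.0, 2.0), (2.5e-4, 450.0, 1.0)]
moments = [moment_from_amplitude(A, r, f) for A, r, f in obs]
M0 = sum(moments) / len(moments)
print(f"Mw ~ {mw_from_moment(M0):.2f}")
```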

  17. The effect of band loss on estimates of annual survival

    USGS Publications Warehouse

    Nelson, Louis J.; Anderson, David R.; Burnham, Kenneth P.

    1980-01-01

    Banding has proven to be a useful technique in the study of population dynamics of avian species. However, band loss has long been recognized as a potential problem (Hickey, 1952; Ludwig, 1967). Recently, Brownie et al. (1978) presented 14 models based on an array of explicit assumptions for the analysis of band recovery data. Various estimation models (assumption sets) allowed survival and/or recovery rates to be (a) constant, (b) time-specific, or (c) time- and age-specific. Optimal inference methods were employed, and statistical tests of critical assumptions were developed and emphasized. The methods of Brownie et al. (1978), as with all previously published methods of which we are aware, assume no loss of bands during the study. However, some band loss is certain to occur, and this potentially biases the estimates of annual survival rates whatever the analysis method. Few empirical studies have estimated band loss rates (a notable exception is Ludwig, 1967); consequently, for almost all band recovery data, the exact rate of band loss is unknown. In this paper we investigate the bias in estimates of annual survival rates due to varying degrees of hypothesized band loss. Our main results are based on perhaps the most useful model, originally developed by Seber (1970), for estimation of annual survival rate. Inferences are made concerning the bias of estimated survival rates in other models because the structure of these estimators is similar.
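
    The mechanism of the bias is that an animal must both survive and retain its band to remain in the marked sample, so the apparent annual survival confounds true survival S with band retention theta (S_apparent = S * theta). A minimal Monte Carlo check with made-up rates:

```python
# Minimal sketch: unmodeled band loss biases apparent survival low,
# since S_apparent = S * theta (survive AND keep the band).
# All rates are invented for illustration.
import random

random.seed(1)

def apparent_survival(S=0.60, theta=0.97, n=200_000, years=8):
    alive, rates = n, []
    for _ in range(years):
        if alive == 0:
            break
        nxt = sum(1 for _ in range(alive)
                  if random.random() < S and random.random() < theta)
        rates.append(nxt / alive)
        alive = nxt
    return sum(rates) / len(rates)

print(f"true S = 0.600, apparent S ~ {apparent_survival():.3f}")
# ~0.582: a 3% annual band-loss rate biases S low by about 0.018.
```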

  18. Estimating blood loss after birth: using simulated clinical examples.

    PubMed

    Buckland, Sara S; Homer, Caroline S E

    2007-06-01

    To determine the accuracy of the estimation of blood loss using simulated clinical examples. Over 100 attendees came together at a seminar about postpartum haemorrhage in June 2006. Five blood loss assessment stations were constructed, each containing a simulated clinical example. Each station was numbered and was made up of a variety of equipment used in birthing suites. Over 5 L of 'artificial' blood, similar in colour and consistency to real blood, was made. A convenience sample of 88 participants was given a response sheet and asked to estimate blood loss at each station. Participants included midwives, student midwives and an obstetrician. Blood in a container (bedpan, kidney dish) was estimated more accurately than blood on sanitary pads, sheets or clothing. Lower volumes of blood were also estimated correctly by more participants than the higher volumes. Improvements are still needed in the visual estimation of blood loss following childbirth. Education programs may increase the level of accuracy. We encourage other clinicians and educators to embark upon a similar exercise to assist midwives and others to improve their visual estimation of blood loss after birth. Accurate estimations can ensure that women who experience significant blood loss receive appropriate care and that the published rates of postpartum haemorrhage are correct.

  19. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  20. Modified Mercalli Intensity for scenario earthquakes in Evansville, Indiana

    USGS Publications Warehouse

    Cramer, Chris; Haase, Jennifer; Boyd, Oliver

    2012-01-01

    Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the fact that Evansville is close to the Wabash Valley and New Madrid seismic zones, there is concern about the hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake. Earthquake-hazard maps provide one way of conveying such estimates of strong ground shaking and will help the region prepare for future earthquakes and reduce earthquake-caused losses.

  1. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  2. Towards Estimating the Magnitude of Earthquakes from EM Data Collected from the Subduction Zone

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.

    2016-12-01

    During the past three years, magnetometers deployed on the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone. Such evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. The process has been extended in time, only pulses associated with the occurrence of earthquakes have been used, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated with those parameters. The results shown, including an animated data video, are a first approximation towards the estimation of the magnitude of an earthquake about to occur, based on electromagnetic pulses that originated at the subduction zone.

  3. Estimating the national wage loss from cancer in Canada

    PubMed Central

    Hopkins, R.B.; Goeree, R.; Longo, C.J.

    2010-01-01

    Objectives: Using primary and secondary data sources, we set out to estimate the Canadian wage loss from cancer for patients, caregivers, and parents from a patient and a societal perspective. Methods: First, a multiple-database literature search was conducted to find Canadian-specific direct surveys of wage loss from cancer. Second, estimates for wage loss were generated from the nationally representative Canadian Community Health Survey (CCHS) Cycle 3.1. In addition, both estimates were standardized to derive a friction-period estimate and were extrapolated to produce national annual estimates. Results: The literature search identified six direct surveys that included a total of 1632 patients with cancer. The CCHS Cycle 3.1 included 2287 patients with cancer. Overall, based on the direct surveys, newly diagnosed cancer patients reduced their labour participation in the friction period by 36% ($4,518), and caregivers lost 23% of their workable hours ($2,887). The CCHS estimated that annual household income was 26.5% lower ($4,978) for respondents with cancer as compared with the general population. For the year 2009, results from direct surveys indicated that new cancers in Canada generated a wage loss of $3.18 billion; the CCHS Cycle 3.1 estimate was $2.95 billion. Conclusions: Wage loss from cancer is a significant economic burden on patients, their families, and society in Canada, with direct surveys and the CCHS providing similar estimates. PMID:20404977

  4. Hidden Markov model for dependent mark loss and survival estimation

    USGS Publications Warehouse

    Laake, Jeffrey L.; Johnson, Devin S.; Diefenbach, Duane R.; Ternent, Mark A.

    2014-01-01

    Mark-recapture estimators assume no loss of marks to provide unbiased estimates of population parameters. We describe a hidden Markov model (HMM) framework that integrates a mark loss model with a Cormack–Jolly–Seber model for survival estimation. Mark loss can be estimated with single-marked animals as long as a sub-sample of animals has a permanent mark. Double-marking provides an estimate of mark loss assuming independence, but dependence can be modeled with a permanently marked sub-sample. We use a log-linear approach to include covariates for mark loss and dependence, which is more flexible than existing published methods for integrated models. The HMM approach is demonstrated with a dataset of black bears (Ursus americanus) with two ear tags, a subset of which were permanently marked with tattoos. The data were analyzed with and without the tattoo. Dropping the tattoos resulted in estimates of survival that were reduced by 0.005–0.035 due to tag loss dependence that could not be modeled. We also analyzed the data with and without the tattoo using a single tag. By not using.
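
    Under the independence assumption that the abstract relaxes, retention can be estimated in closed form from double-marked animals: if each tag is retained with probability r, the proportion of recaptured animals carrying both tags among those carrying at least one is q = r/(2 - r), so r = 2q/(1 + q). A minimal sketch with invented counts:

```python
# Sketch: tag retention from double-marked animals, assuming the two
# tags are lost independently (the assumption the HMM above relaxes).
# P(both | >= 1 tag) = r^2 / (r^2 + 2r(1-r)) = r / (2 - r),
# so observing q = n_both / (n_both + n_one) gives r_hat = 2q / (1 + q).
def retention_from_double_tags(n_both: int, n_one: int) -> float:
    q = n_both / (n_both + n_one)
    return 2.0 * q / (1.0 + q)

# e.g. 180 bears recaptured with both ear tags, 40 with one remaining
# (invented counts):
print(f"r_hat = {retention_from_double_tags(180, 40):.3f}")  # ~0.900
```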

  5. Balancing Score Adjusted Targeted Minimum Loss-based Estimation

    PubMed Central

    Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.

    2015-01-01

    Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539

  6. Estimating soil erosion changes in the Wenchuan earthquake disaster area using geo-spatial information technology

    NASA Astrophysics Data System (ADS)

    Zhang, Bing; Jiao, Quanjun; Wu, Yanhong; Zhang, Wenjuan

    2009-05-01

    The secondary disasters induced by the Wenchuan earthquake of May 12, 2008, such as landslides, collapsing rocks, debris flows, floods, etc., have changed the local natural landscape tremendously and caused heavy soil erosion in the earthquake-hit areas. Using Thematic Mapper images taken before the earthquake and airborne images taken after the earthquake, we extracted information about the destroyed landscape by utilizing remote sensing and geographical information system techniques. Then, taking into account multi-year precipitation, vegetation cover, soil type, land use, and elevation data, we evaluated the soil erosion area and intensity using the revised universal soil loss equation. Results indicate that the soil erosion in earthquake-hit areas was exacerbated, with the severe erosion area increasing by 279.2 km2, or 1.9% of the total statistical area. Large amounts of soil and debris blocked streams and formed many barrier lakes over an area of more than 3.9 km2. It was evident from the spatial distribution of soil erosion areas that the intensity of soil erosion accelerated in the stream valley areas, especially in the valleys of the Min River and the Jian River.
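
    For reference, the revised universal soil loss equation multiplies five factors per raster cell, A = R * K * LS * C * P. The toy grids below are arbitrary illustrations, not calibrated Wenchuan values; they show how the loss of vegetation cover (higher C) drives the erosion estimate up.

```python
# Sketch: RUSLE evaluated per grid cell, A = R * K * LS * C * P.
# Factor values are arbitrary illustrations, not calibrated inputs.
import numpy as np

def rusle(R, K, LS, C, P):
    """Mean annual soil loss A (t ha^-1 yr^-1) per cell."""
    return R * K * LS * C * P

# Toy 2x2 rasters: rainfall erosivity R, soil erodibility K,
# slope length/steepness LS, cover C, support practice P.
R  = np.array([[4000.0, 4200.0], [3900.0, 4100.0]])  # MJ mm ha^-1 h^-1 yr^-1
K  = np.array([[0.030, 0.028], [0.035, 0.032]])      # t h MJ^-1 mm^-1
LS = np.array([[8.5, 12.0], [6.2, 15.0]])
C  = np.array([[0.05, 0.35], [0.04, 0.60]])          # higher where cover was destroyed
P  = np.ones((2, 2))                                 # no support practices

print(rusle(R, K, LS, C, P))
# Cells whose vegetation was destroyed (large C) dominate the estimate.
```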

  7. Estimating phosphorus loss in runoff from manure and fertilizer for a phosphorus loss quantification tool.

    PubMed

    Vadas, P A; Good, L W; Moore, P A; Widman, N

    2009-01-01

    Nonpoint-source pollution of fresh waters by P is a concern because it contributes to accelerated eutrophication. Given the state of the science concerning agricultural P transport, a simple tool to quantify annual, field-scale P loss is a realistic goal. We developed new methods to predict annual dissolved P loss in runoff from surface-applied manures and fertilizers and validated the methods with data from 21 published field studies. We incorporated these manure and fertilizer P runoff loss methods into an annual, field-scale P loss quantification tool that estimates dissolved and particulate P loss in runoff from soil, manure, fertilizer, and eroded sediment. We validated the P loss tool using independent data from 28 studies that monitored P loss in runoff from a variety of agricultural land uses for at least 1 yr. Results demonstrated (i) that our new methods to estimate P loss from surface manure and fertilizer are an improvement over methods used in existing Indexes, and (ii) that it was possible to reliably quantify annual dissolved, sediment, and total P loss in runoff using relatively simple methods and readily available inputs. Thus, a P loss quantification tool that does not require greater degrees of complexity or input data than existing P Indexes could accurately predict P loss across a variety of management and fertilization practices, soil types, climates, and geographic locations. However, estimates of runoff and erosion are still needed that are accurate to a level appropriate for the intended use of the quantification tool.

  8. The importance of in-situ observations for rapid loss estimates in the Euro-Med region

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Mazet Roux, G.; Gilles, S.

    2009-04-01

    A major (M>7) earthquake occurring in a densely populated area will inevitably cause significant damage, and generally speaking, the poorer the country the higher the number of fatalities. It was clear to any earthquake monitoring agency that the M7.8 Wenchuan earthquake in May 2008 was a disaster as soon as its magnitude and location had been estimated. However, the loss estimate for a moderate to strong earthquake (M5 to M6) occurring close to an urban area is much trickier, because the losses are the result of the convolution of many parameters (location, magnitude, depth, directivity, seismic attenuation, site effects, building vulnerability, distribution of the population at the time of the event…) which are either affected by non-negligible uncertainties or poorly constrained, at least at a global scale. Consider just one of these parameters, the epicentral location: in this range of magnitude, the characteristic size of the potentially damaged area is comparable to the typical epicentral location uncertainty obtained in real time, i.e. 10 to 15 km. It is then not possible to discriminate in real time between an earthquake located right below a town, which could cause significant damage, and a location 15 km away, whose impact would be much lower. Clearly, if the uncertainties affecting each of the parameters are properly taken into account, the resulting loss scenarios for such earthquakes will range from no impact to very significant impact, and the results will then not be of much use. The way to reduce the uncertainties on the loss estimates in such cases is to collect in-situ information on the local shaking level and/or on the actual damage at a number of localities. In areas of low seismic hazard, the cost of installing dense accelerometric networks is, in practice, too high, and the only remaining solution is to rapidly collect observations of the damage. That is what the EMSC has been developing for the last few years by involving the Citizen in

  9. Rapid Estimation of Macroseismic Intensity for On-site Earthquake Early Warning in Italy from Early Radiated Energy

    NASA Astrophysics Data System (ADS)

    Emolo, A.; Zollo, A.; Brondi, P.; Picozzi, M.; Mucciarelli, M.

    2015-12-01

    Earthquake Early Warning Systems (EEWS) are effective tools for risk mitigation in active seismic regions. Recently, a feasibility study of a nation-wide earthquake early warning system was conducted for Italy, considering the RAN network and the EEW software platform PRESTo. This work showed that reliable estimates of magnitude and epicentral location would be available within 3-4 seconds after the first P-wave arrival. On the other hand, given the RAN's density, a regional EEWS approach would result in a blind zone (BZ) of 25-30 km on average. Such a BZ would provide lead-times greater than zero only for events of magnitude larger than 6.5. Considering that in Italy smaller events are also capable of generating great losses in both human and economic terms, as dramatically experienced during the recent 2009 L'Aquila (ML 5.9) and 2012 Emilia (ML 5.9) earthquakes, it has become urgent to develop and test on-site approaches. The present study is focused on the development of a new on-site EEW methodology for estimating the macroseismic intensity at a target site or area. In this analysis we used a few thousand accelerometric traces recorded by the RAN for the largest earthquakes (ML>4) that occurred in Italy in the period 1997-2013. The work focuses on the integral EW parameter Squared Velocity Integral (IV2) and on its capability to predict the peak ground velocity (PGV) and the Housner Intensity (IH); from these we parameterized a new relation between IV2 and macroseismic intensity. To assess the performance of the developed on-site EEW relation, we used data from the largest events that occurred in Italy in the last 6 years, recorded by the Osservatorio Sismico delle Strutture, as well as recordings of moderate earthquakes reported by INGV strong motion data. The results show that the macroseismic intensity values predicted by IV2 and those estimated from PGV and IH are in good agreement.
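
    The early-warning parameter itself is straightforward to compute: IV2 is the time integral of squared ground velocity over a short window after the P onset. A minimal sketch, with an assumed window length and a synthetic accelerogram standing in for real RAN records:

```python
# Sketch: the Squared Velocity Integral, IV2 = integral of v(t)^2 dt,
# computed from an accelerogram by time-domain integration. Window
# length and the toy signal are illustrative assumptions; the study's
# processing details may differ.
import numpy as np

def iv2_from_accelerogram(acc, dt, window_s=3.0):
    """IV2 (m^2/s) over the first window_s seconds of the record."""
    n = int(window_s / dt)
    vel = np.cumsum(acc[:n]) * dt          # acceleration -> velocity
    vel -= vel.mean()                      # crude baseline correction
    return np.sum(vel**2) * dt             # integral of v^2 dt

dt = 0.01                                  # 100 samples per second
t = np.arange(0.0, 3.0, dt)
acc = 0.2 * np.sin(2 * np.pi * 2.0 * t)    # toy 2 Hz accelerogram (m/s^2)
print(f"IV2 = {iv2_from_accelerogram(acc, dt):.3e} m^2/s")
```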

  10. Strong earthquake motion estimates for three sites on the U.C. San Diego campus

    SciTech Connect

    Day, S; Doroudian, M; Elgamal, A; Gonzales, S; Heuze, F; Lai, T; Minster, B; Oglesby, D; Riemer, M; Vernon, F; Vucetic, M; Wagoner, J; Yang, Z

    2002-05-07

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill, sample, and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling

  11. Strong Earthquake Motion Estimates for Three Sites on the U.C. Riverside Campus

    SciTech Connect

    Archuleta, R.; Elgamal, A.; Heuze, F.; Lai, T.; Lavalle, D.; Lawrence, B.; Liu, P.C.; Matesic, L.; Park, S.; Riemar, M.; Steidl, J.; Vucetic, M.; Wagoner, J.; Yang, Z.

    2000-11-01

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling, geophysical

  12. Efficient Location Uncertainty Treatment for Probabilistic Modelling of Portfolio Loss from Earthquake Events

    NASA Astrophysics Data System (ADS)

    Scheingraber, Christoph; Käser, Martin; Allmann, Alexander

    2017-04-01

    Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention; however, epistemic location uncertainty has so far not been the focus of much research. We propose a new framework for efficient treatment of location uncertainty. To demonstrate the usefulness of this novel method, a large number of synthetic portfolios resembling real-world portfolios are systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte-Carlo variance reduction techniques. The performance of each of the proposed criteria is analyzed.
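
    The framework can be read as a Monte Carlo loop over admissible locations: each risk item with unknown coordinates is resampled within its region, and the portfolio loss distribution is accumulated across samples. The sketch below uses a toy distance-decay damage proxy in place of a real hazard and vulnerability model; every number is invented.

```python
# Sketch: Monte-Carlo treatment of unknown exposure locations.
# Risks without coordinates are resampled within an admissible region
# (a bounding box standing in for e.g. a postal zone) and the portfolio
# loss distribution is accumulated over samples. The damage proxy is a
# toy stand-in for a real hazard/vulnerability chain.
import numpy as np

rng = np.random.default_rng(42)

def toy_loss(x, y, value, epi=(0.0, 0.0)):
    """Toy damage model: loss decays with distance from the epicentre."""
    d = np.hypot(x - epi[0], y - epi[1])
    return value * np.exp(-d / 20.0)        # e-folding distance 20 km

values = np.array([1.0e6, 2.5e6, 4.0e5])    # three risks, coords unknown
box = (-30.0, 30.0)                         # admissible region (km)

losses = []
for _ in range(10_000):                     # location-uncertainty samples
    x = rng.uniform(*box, size=values.size)
    y = rng.uniform(*box, size=values.size)
    losses.append(toy_loss(x, y, values).sum())

losses = np.array(losses)
print(f"mean loss {losses.mean():,.0f}, "
      f"95th percentile {np.percentile(losses, 95):,.0f}")
```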

  13. Estimating convective energy losses from solar central receivers

    SciTech Connect

    Siebers, D L; Kraabel, J S

    1984-04-01

    This report outlines a method for estimating the total convective energy loss from a receiver of a solar central receiver power plant. Two types of receivers are considered in detail: a cylindrical, external-type receiver and a cavity-type receiver. The method is intended to provide the designer with a tool for estimating the total convective energy loss that is based on current knowledge of convective heat transfer from receivers to the environment and that is adaptable to new information as it becomes available. The current knowledge consists of information from two recent large-scale experiments, as well as information already in the literature. Also outlined is a method for estimating the uncertainty in the convective loss estimates. Sample estimations of the total convective energy loss and the uncertainties in those convective energy loss estimates for the external receiver of the 10 MWe Solar Thermal Central Receiver Plant (Barstow, California) and the cavity receiver of the International Energy Agency Small Solar Power Systems Project (Almeria, Spain) are included in the appendices.

  14. Towards reliable automated estimates of earthquake source properties from body wave spectra

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.

    2016-12-01

    We develop a two-stage methodology for automated estimation of earthquake source properties from body wave spectra. An automated picking algorithm is used to window and calculate spectra for both P and S phases. Empirical Green's functions are stacked to minimize non-generic source effects such as directivity, and are used to deconvolve the spectra of target earthquakes for analysis. In the first stage, window lengths and frequency ranges are defined automatically from the event magnitude and used to get preliminary estimates of the P and S corner frequencies of the target event. In the second stage, the preliminary corner frequencies are used to update various parameters to increase the amount of data and overall quality of the deconvolved spectral ratios (target event over stacked Empirical Green's function). The obtained spectral ratios are used to estimate the corner frequencies, strain/stress drops, radiated seismic energy, apparent stress, and the extent of directivity for both P- and S-waves. The technique is applied to data generated by five small to moderate earthquakes in southern California at hundreds of stations. Four of the five earthquakes are found to have significant directivity. The developed automated procedure is suitable for systematic processing of large seismic waveform data sets with no user involvement.

  15. Toward reliable automated estimates of earthquake source properties from body wave spectra

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda

    2016-06-01

    We develop a two-stage methodology for automated estimation of earthquake source properties from body wave spectra. An automated picking algorithm is used to window and calculate spectra for both P and S phases. Empirical Green's functions are stacked to minimize nongeneric source effects such as directivity and are used to deconvolve the spectra of target earthquakes for analysis. In the first stage, window lengths and frequency ranges are defined automatically from the event magnitude and used to get preliminary estimates of the P and S corner frequencies of the target event. In the second stage, the preliminary corner frequencies are used to update various parameters to increase the amount of data and overall quality of the deconvolved spectral ratios (target event over stacked Empirical Green's function). The obtained spectral ratios are used to estimate the corner frequencies, strain/stress drops, radiated seismic energy, apparent stress, and the extent of directivity for both P and S waves. The technique is applied to data generated by five small to moderate earthquakes in southern California at hundreds of stations. Four of the five earthquakes are found to have significant directivity. The developed automated procedure is suitable for systematic processing of large seismic waveform data sets with no user involvement.
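
    In workflows like the one above, the deconvolved spectral ratio is commonly fit with a ratio of two omega-square (Brune-type) source spectra to recover the corner frequencies of the target event and of the empirical Green's function. A minimal sketch that fits synthetic data generated from the same model:

```python
# Sketch: corner-frequency estimation by fitting a ratio of two
# omega-square (Brune) spectra to a deconvolved spectral ratio
# (target event over stacked EGF). The "observed" ratio is synthetic,
# generated from the model itself just to exercise the fit.
import numpy as np
from scipy.optimize import curve_fit

def brune_ratio(f, moment_ratio, fc_target, fc_egf):
    """Ratio of two omega-square source spectra."""
    return moment_ratio * (1 + (f / fc_egf) ** 2) / (1 + (f / fc_target) ** 2)

f = np.logspace(-1, 1.5, 200)                       # 0.1 to ~31.6 Hz
rng = np.random.default_rng(0)
true = brune_ratio(f, 300.0, 1.2, 12.0)
obs = true * np.exp(0.1 * rng.standard_normal(f.size))  # multiplicative noise

popt, _ = curve_fit(brune_ratio, f, obs, p0=[100.0, 1.0, 10.0])
print(f"moment ratio ~ {popt[0]:.0f}, "
      f"fc(target) ~ {popt[1]:.2f} Hz, fc(EGF) ~ {popt[2]:.1f} Hz")
```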

  16. Source scaling relationships of small earthquakes estimated from the inversion method using stopping phases

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Takeo, M.; Ito, H.; Ellsworth, W.; Matsuzawa, T.; Kuwahara, Y.; Iio, Y.; Horiuchi, S.; Ohmi, S.

    2002-12-01

    We estimate source parameters of small earthquakes from stopping phases and investigate the scaling relationships between source parameters. The method we employed [Imanishi and Takeo, 2002] assumes the elliptical fault model proposed by Savage [1966]. In this model, two high-frequency stopping phases, Hilbert transforms of each other, are radiated, and the difference in arrival times between the two stopping phases depends on the average rupture velocity, the source dimension, the aspect ratio of the elliptical fault, the direction of rupture propagation, and the orientation of the fault plane. These parameters can be estimated by a nonlinear least squares inversion method. The earthquakes studied occurred between May and August 1999 in western Nagano Prefecture, Japan, an area characterized by high levels of shallow seismicity. The data consist of seismograms recorded by an 800 m deep borehole instrument and a 46-station surface seismic array with a spacing of a few km. In particular, the 800 m borehole data provide a wide frequency bandwidth and greatly reduce ground noise and coda wave amplitude compared to surface recordings. High-frequency stopping phases are readily detected on accelerograms recorded in the borehole. After correcting both borehole and surface data for attenuation, we also measure the rise time, defined as the time lag from the arrival of the direct wave to the first slope change in the displacement pulse. Using these durations, we estimate source parameters of 25 earthquakes ranging in size from M1.2 to M2.6. The rupture aspect ratio is estimated to be about 0.8 on average. This suggests that the assumption of a circular crack model is valid as a first-order approximation for the earthquakes analyzed in this study. Static stress drops range from approximately 0.1 to 5 MPa and do not vary with seismic moment. It seems that the breakdown seen in the previous studies by other authors using surface data is simply an artifact of
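
    The static stress drops quoted above follow from the circular-crack relation of Eshelby, delta_sigma = 7 M0 / (16 a^3). A minimal sketch with illustrative magnitudes and radii, showing how strongly the estimate depends on the inferred source radius:

```python
# Sketch: static stress drop for a circular crack (Eshelby, 1957),
#   delta_sigma = 7 * M0 / (16 * a^3),
# with M0 from magnitude via Mw ~ ML (a rough assumption) and
# illustrative source radii.
import math

def moment_from_ml(ml):
    """Approximate M0 (N*m), assuming Mw ~ ML (illustrative only)."""
    return 10 ** (1.5 * ml + 9.1)

def stress_drop(M0_Nm, radius_m):
    return 7.0 * M0_Nm / (16.0 * radius_m ** 3)

M0 = moment_from_ml(2.0)                    # ~M2 microearthquake
for a in (30.0, 60.0, 120.0):               # candidate source radii (m)
    print(f"a = {a:5.0f} m  ->  stress drop ~ {stress_drop(M0, a)/1e6:.2f} MPa")
# The a^-3 dependence is why radius errors dominate stress-drop scatter.
```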

  17. Towards Practical, Real-Time Estimation of Spatial Aftershock Probabilities: A Feasibility Study in Earthquake Hazard

    NASA Astrophysics Data System (ADS)

    Morrow, P.; McCloskey, J.; Steacy, S.

    2001-12-01

    It is now widely accepted that the goal of deterministic earthquake prediction is unattainable in the short term and may even be forbidden by nonlinearity in the generating dynamics. This nonlinearity does not, however, preclude the estimation of earthquake probability and, in particular, how this probability might change in space and time; earthquake hazard estimation might be possible in the absence of earthquake prediction. Recently, there has been a major development in the understanding of stress triggering of earthquakes which allows accurate calculation of the spatial variation of aftershock probability following any large earthquake. Over the past few years this Coulomb stress technique (CST) has been the subject of intensive study in the geophysics literature and has been extremely successful in explaining the spatial distribution of aftershocks following several major earthquakes. The power of current micro-computers, the great number of local, telemetered seismic networks, the rapid acquisition of data from satellites coupled with the speed of modern telecommunications and data transfer all mean that it may be possible that these new techniques could be applied in a forward sense. In other words, it is theoretically possible today to make predictions of the likely spatial distribution of aftershocks in near-real-time following a large earthquake. Approximate versions of such predictions could be available within, say, 0.1 days after the mainshock and might be continually refined and updated over the next 100 days. The European Commission has recently provided funding for a project to assess the extent to which it is currently possible to move CST predictions into a practically useful time frame so that low-confidence estimates of aftershock probability might be made within a few hours of an event and improved in near-real-time, as data of better quality become available over the following days to tens of days. Specifically, the project aims to assess the
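
    The quantity at the heart of the CST is the Coulomb failure stress change resolved on receiver faults, dCFS = d_tau + mu' * d_sigma_n; where it is positive, aftershock probability is elevated. The sketch below shows only this final combination step with illustrative inputs; resolving d_tau and d_sigma_n from a mainshock slip model is the computationally heavy part and is omitted.

```python
# Sketch: the Coulomb failure stress change used by the CST,
#   dCFS = d_tau + mu_eff * d_sigma_n,
# with d_tau the shear stress change in the slip direction, d_sigma_n
# the normal stress change (unclamping positive), and mu_eff the
# effective friction coefficient. Inputs are illustrative.
def coulomb_stress_change(d_tau_mpa: float, d_sigma_n_mpa: float,
                          mu_eff: float = 0.4) -> float:
    """dCFS (MPa); positive values bring receiver faults toward failure."""
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# A receiver fault gaining 0.05 MPa shear load and 0.02 MPa unclamping:
print(f"dCFS = {coulomb_stress_change(0.05, 0.02):.3f} MPa")  # +0.058 MPa
# Changes of order +0.01 MPa (0.1 bar) are commonly cited as enough
# to modulate aftershock rates.
```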

  18. Strong Earthquake Motion Estimates for the UCSB Campus, and Related Response of the Engineering 1 Building

    SciTech Connect

    Archuleta, R.; Bonilla, F.; Doroudian, M.; Elgamal, A.; Hueze, F.

    2000-06-06

    This is the second report on the UC/CLC Campus Earthquake Program (CEP), concerning the estimation of exposure of the U.C. Santa Barbara campus to strong earthquake motions (Phase 2 study). The main results of Phase 1 are summarized in the current report. This document describes the studies which resulted in site-specific strong motion estimates for the Engineering I site, and discusses the potential impact of these motions on the building. The main elements of Phase 2 are: (1) determining that a M 6.8 earthquake on the North Channel-Pitas Point (NCPP) fault is the largest threat to the campus, with a recurrence interval estimated at 350 to 525 years; (2) recording earthquakes from that fault on March 23, 1998 (M 3.2) and May 14, 1999 (M 3.2) at the new UCSB seismic station; (3) using these recordings as empirical Green's functions (EGF) in scenario earthquake simulations which provided strong motion estimates (seismic syntheses) at a depth of 74 m under the Engineering I site; 240 such simulations were performed, each with the same seismic moment, but giving a broad range of motions that were analyzed for their mean and standard deviation; (4) laboratory testing, at U.C. Berkeley and U.C. Los Angeles, of soil samples obtained from drilling at the UCSB station site, to determine their response to earthquake-type loading; (5) performing nonlinear soil dynamic calculations, using the soil properties determined in-situ and in the laboratory, to calculate the surface strong motions resulting from the seismic syntheses at depth; (6) comparing these CEP-generated strong motion estimates to acceleration spectra based on the application of state-of-practice methods (the IBC 2000 code, the UBC 97 code, and Probabilistic Seismic Hazard Analysis (PSHA)); this comparison will be used to formulate design-basis spectra for future buildings and retrofits at UCSB; and (7) comparing the response of the Engineering I building to the CEP ground motion estimates and to the design

  19. LOSS ESTIMATE FOR ITER ECH TRANSMISSION LINE INCLUDING MULTIMODE PROPAGATION

    SciTech Connect

    Shapiro, Michael; Bigelow, Tim S; Caughman, John B; Rasmussen, David A

    2010-01-01

    The ITER electron cyclotron heating (ECH) transmission lines (TLs) are 63.5-mm-diam corrugated waveguides that will each carry 1 MW of power at 170 GHz. The TL is defined here as the corrugated waveguide system connecting the gyrotron mirror optics unit (MOU) to the entrance of the ECH launcher, and includes miter bends and other corrugated waveguide components. The losses on the ITER TL have been calculated for four possible cases corresponding to HE11 mode purity at the input of the TL of 100, 97, 90, and 80%. The losses due to coupling, ohmic, and mode conversion loss are evaluated in detail using a numerical code and analytical approaches. Estimates of the calorimetric loss on the line show that the output power is reduced by about 5 ± 1% because of ohmic loss in each of the four cases. Estimates of the mode conversion loss show that the fraction of output power in the HE11 mode is ~3% smaller than the fraction of input power in the HE11 mode. High output mode purity therefore can be achieved only with significantly higher input mode purity. Combining both ohmic and mode conversion loss, the efficiency of the TL from the gyrotron MOU to the ECH launcher can be roughly estimated in theory as 92% times the fraction of input power in the HE11 mode.
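
    Combining the two loss mechanisms quoted above gives a one-line efficiency estimate: delivered HE11 power is roughly 92% (ohmic loss) of the input power multiplied by the input HE11 fraction (mode conversion). A minimal sketch over the four studied purity cases:

```python
# Sketch: the abstract's rule of thumb, HE11 delivery ~ 0.92 * f_HE11_in,
# evaluated for the four input-purity cases studied.
def tl_he11_efficiency(f_he11_in: float) -> float:
    """Approximate fraction of input power delivered in the HE11 mode."""
    return 0.92 * f_he11_in

for f in (1.00, 0.97, 0.90, 0.80):
    print(f"input HE11 purity {f:.0%} -> HE11 delivery ~ "
          f"{tl_he11_efficiency(f):.0%}")
```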

  20. Loss Estimation Modeling Of Scenario Lahars From Mount Rainier, Washington State, Using HAZUS-MH

    NASA Astrophysics Data System (ADS)

    Walsh, T. J.; Cakir, R.

    2011-12-01

    We have adapted lahar hazard zones developed by Hoblitt and others (1998), converted to digital data by Schilling and others (2008), into the appropriate format for HAZUS-MH, FEMA's loss estimation model. We assume that structures engulfed by cohesive lahars will suffer complete loss and that structures affected by post-lahar flooding will be appropriately modeled by the HAZUS-MH flood model. Another approach investigated is to estimate the momentum of lahars, calculate a lateral force, and apply the earthquake model, substituting the lahar lateral force for PGA. Our initial model used the HAZUS default data, which include estimates of building type and value from census data. This model estimated a loss of about $12 billion for a repeat lahar similar to the Electron Mudflow down the Puyallup River. Because HAZUS data are based on census tracts, this estimated damage includes everything in the census tract, even buildings outside of the lahar hazard zone. To correct this, we acquired assessors' data from all of the affected counties and converted them into HAZUS format. We then clipped the data to the boundaries of the lahar hazard zone to more precisely delineate those properties actually at risk in each scenario. This refined our initial loss estimate to about $6 billion, excluding building content values. We are also investigating rebuilding the lahar hazard zones by applying Lahar-Z to a more accurate topographic grid derived from recent lidar data acquired from the Puget Sound Lidar Consortium and Mount Rainier National Park. Final results of these models for the major drainages of Mount Rainier will be posted to the Washington Interactive Geologic Map (http://www.dnr.wa.gov/ResearchScience/Topics/GeosciencesData/Pages/geology_portal.aspx).

  1. Earthquake slip vectors and estimates of present-day plate motions

    NASA Technical Reports Server (NTRS)

    Demets, Charles

    1993-01-01

    Two alternative models for present-day global plate motions are derived from subsets of the NUVEL-1 data in order to investigate the degree to which earthquake slip vectors affect the NUVEL-1 model and to provide estimates of present-day plate velocities that are independent of earthquake slip vectors. The data set used to derive the first model excludes subduction zone slip vectors. The primary purpose of this model is to demonstrate that the 240 subduction zone slip vectors in the NUVEL-1 data set do not greatly affect the plate velocities predicted by NUVEL-1. A data set that excludes all of the 724 earthquake slip vectors used to derive NUVEL-1 is used to derive the second model. This model is suitable as a reference model for kinematic studies that require plate velocity estimates unaffected by earthquake slip vectors. The slip-dependent slip vector bias along transform faults is investigated using the second model, and evidence is sought for biases in slip directions along spreading centers.

  2. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5 ≤ Mw ≤ 7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
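
    The estimator reduces to a one-line formula, MTop = 2 log10(Top) + c, where c must be calibrated against events of known Mw; the constant below is a made-up placeholder chosen only so the toy outputs land in the magnitude range the paper studies.

```python
# Sketch: the magnitude proxy M_Top = 2 * log10(Top) + c.
# The intercept c is a made-up placeholder, not the paper's
# calibrated value.
import math

def m_top(top_seconds: float, c: float = 5.0) -> float:
    """Magnitude estimate from the onset-to-peak time Top."""
    return 2.0 * math.log10(top_seconds) + c

for top in (1.0, 3.0, 10.0, 100.0):
    print(f"Top = {top:6.1f} s  ->  M_Top = {m_top(top):.1f}")
# Doubling Top adds 2*log10(2) ~ 0.6 magnitude units; with this
# placeholder c, Top ~ 100 s maps to M ~ 9, echoing the Tohoku example.
```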

  3. Estimating the Rate of Retinal Ganglion Cell Loss in Glaucoma

    PubMed Central

    Medeiros, Felipe A.; Zangwill, Linda M.; Anderson, Douglas R.; Liebmann, Jeffrey M.; Girkin, Christopher A; Harwerth, Ronald S.; Fredette, Marie-Josée; Weinreb, Robert N.

    2013-01-01

    Purpose: To present and evaluate a new method of estimating rates of retinal ganglion cell (RGC) loss in glaucoma by combining structural and functional measurements. Design: Observational cohort study. Methods: The study included 213 eyes of 213 glaucoma patients followed for an average of 4.5±0.8 years with standard automated perimetry (SAP) visual fields and optical coherence tomography (OCT). A control group of 33 eyes of 33 glaucoma patients had repeated tests over a short period of time to test the specificity of the method. An additional group of 52 eyes from 52 healthy subjects followed for an average of 4.0±0.7 years was used to estimate age-related losses of RGCs. Estimates of RGC counts were obtained from SAP and OCT, and a weighted average was used to obtain a final estimate of the number of RGCs for each eye. The rate of RGC loss was calculated for each eye using linear regression. Progression was defined by a statistically significant slope faster than the age-expected loss of RGCs. Results: Of the 213 eyes, 47 (22.1%) showed rates of RGC loss that were faster than the age-expected decline. A larger proportion of glaucomatous eyes showed progression based on rates of RGC loss than based on isolated parameters from SAP (8.5%) or OCT (14.6%; P<0.01), while maintaining similar specificities in the stable group. Conclusion: The rate of RGC loss estimated by combining structure and function performed better than either isolated structural or functional measures for detecting progressive glaucomatous damage. PMID:22840484

  4. Routine estimation of earthquake source complexity: The 18 October 1992 Colombian earthquake

    USGS Publications Warehouse

    Ammon, Charles J.; Lay, Thorne; Velasco, Aaron A.; Vidale, John E.

    1994-01-01

    We describe two methods, suitable for routine application to teleseismic recordings, that characterize the time history of seismic events. Stacking short-period signals from large regional arrays provides stable estimates of high-frequency radiation from the source, and an empirical Green's function deconvolution procedure extracts reliable, broadband time functions suitable for analysis of faulting complexity and the spatio-temporal extent of rupture. Combined, these procedures characterize the source radiation of large events (Ms > 7) between 200- and 0.5-sec periods.

  5. Southern California regional earthquake probability estimated from continuous GPS geodetic data

    NASA Astrophysics Data System (ADS)

    Anderson, G.

    2002-12-01

    Current seismic hazard estimates are primarily based on seismic and geologic data, but geodetic measurements from large, dense arrays such as the Southern California Integrated GPS Network (SCIGN) can also be used to estimate earthquake probabilities and seismic hazard. Geodetically-derived earthquake probability estimates are particularly important in regions with poorly-constrained fault slip rates. In addition, they are useful because such estimates come with well-determined error bounds. Long-term planning is underway to incorporate geodetic data in the next generation of United States national seismic hazard maps, and techniques for doing so need further development. I present a new method for estimating the expected rates of earthquakes using strain rates derived from geodetic station velocities. I compute the strain rates using a new technique devised by Y. Hsu and M. Simons [Y. Hsu and M. Simons, pers. comm.], which computes the horizontal strain rate tensor (ε̇) at each node of a pre-defined regular grid, using all geodetic velocities in the data set weighted by distance and estimated uncertainty. In addition, they use a novel weighting to handle the effects of station distribution: they divide the region covered by the geodetic network into Voronoi cells using the station locations and weight each station's contribution to ε̇ by the area of the Voronoi cell centered at that station. I convert ε̇ into the equivalent seismic moment rate density (Ṁ) using the method of Savage and Simpson [1997] and maximum seismogenic depths estimated from regional seismicity; Ṁ gives the expected rate of seismic moment release in a region, based on the geodetic strain rates. Assuming the seismicity in the given region follows a Gutenberg-Richter relationship, I convert Ṁ to an expected rate of earthquakes of a given magnitude. I will present results of a study applying this method to data from the SCIGN array to estimate
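
    A sketch of the moment-rate step under stated assumptions: the scalar form of the Savage and Simpson [1997] relation, a generic crustal shear modulus, and, in place of the full Gutenberg-Richter partition, a crude single-magnitude equivalent rate:

        import numpy as np

        MU = 3.0e10  # shear modulus, Pa (typical crustal value)

        def moment_rate(eps1_dot, eps2_dot, depth_m, area_m2, mu=MU):
            # Scalar moment rate from principal horizontal strain rates
            # (per year), after Savage and Simpson (1997):
            # Mdot = 2*mu*H*A*max(|e1|, |e2|, |e1 + e2|).
            rate = max(abs(eps1_dot), abs(eps2_dot), abs(eps1_dot + eps2_dot))
            return 2.0 * mu * depth_m * area_m2 * rate  # N*m per year

        def equivalent_event_rate(mdot, mw):
            # Rate of magnitude-mw events that would release mdot if all
            # moment went into events of that one size; a crude stand-in
            # for the Gutenberg-Richter partition used in the abstract.
            m0 = 10.0 ** (1.5 * mw + 9.05)  # Hanks-Kanamori, N*m
            return mdot / m0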

  6. Housing type after the Great East Japan Earthquake and loss of motor function in elderly victims: a prospective observational study.

    PubMed

    Ito, Kumiko; Tomata, Yasutake; Kogure, Mana; Sugawara, Yumi; Watanabe, Takashi; Asaka, Tadayoshi; Tsuji, Ichiro

    2016-11-03

    Previous studies have reported that elderly victims of natural disasters might be prone to a subsequent decline in motor function. Victims of the Great East Japan Earthquake (GEJE) relocated to a wide range of different types of housing. As the evacuee lifestyle varies according to the type of housing available to them, their degree of motor function loss might also vary accordingly. However, the association between postdisaster housing type and loss of motor function has never been investigated. The present study was conducted to investigate the association between housing type after the GEJE and loss of motor function in elderly victims. We conducted a prospective observational study of 478 Japanese individuals aged ≥65 years living in Miyagi Prefecture, one of the areas most significantly affected by the GEJE. Information on housing type after the GEJE, motor function as assessed by the Kihon checklist and other lifestyle factors was collected by interview and questionnaire in 2012. Information on motor function was then collected 1 year later. The multiple logistic regression model was used to estimate the multivariate adjusted ORs of motor function loss. We classified 53 (11.1%) of the respondents as having loss of motor function. The multivariate adjusted OR (with 95% CI) for loss of motor function among participants who were living in privately rented temporary housing/rental housing was 2.62 (1.10 to 6.24) compared to those who had remained in the same housing as that before the GEJE, and this increase was statistically significant. The proportion of individuals with loss of motor function was higher among persons who had relocated to privately rented temporary housing/rental housing after the GEJE. This result may reflect the influence of a move to a living environment where few acquaintances are located (lack of social capital).

  7. A phase coherence approach to estimating the spatial extent of earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, Jessica C.; Ampuero, Jean-Paul

    2016-04-01

    We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources, i.e., if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can, to some extent, be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur at wavelengths similar to those of the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations from multiple components of a single station, which see the same apparent source time functions.

  8. Probabilistic estimates of surface coseismic slip and afterslip for Hayward fault earthquakes

    USGS Publications Warehouse

    Aagaard, Brad T.; Lienkaemper, James J.; Schwartz, David P.

    2012-01-01

    We examine the partition of long‐term geologic slip on the Hayward fault into interseismic creep, coseismic slip, and afterslip. Using Monte Carlo simulations, we compute expected coseismic slip and afterslip at three alinement array sites for Hayward fault earthquakes with nominal moment magnitudes ranging from about 6.5 to 7.1. We consider how interseismic creep might affect the coseismic slip distribution as well as the variability in locations of large and small slip patches and the magnitude of an earthquake for a given rupture area. We calibrate the estimates to be consistent with the ratio of interseismic creep rate at the alinement array sites to the geologic slip rate for the Hayward fault. We find that the coseismic slip at the surface is expected to comprise only a small fraction of the long‐term geologic slip. The median values of coseismic slip are less than 0.2 m in nearly all cases as a result of the influence of interseismic creep and afterslip. However, afterslip makes a substantial contribution to the long‐term geologic slip and may be responsible for up to 0.5–1.5 m (median plus one standard deviation [S.D.]) of additional slip following an earthquake rupture. Thus, utility and transportation infrastructure could be severely impacted by afterslip in the hours and days following a large earthquake on the Hayward fault that generated little coseismic slip. Inherent spatial variability in earthquake slip combined with the uncertainty in how interseismic creep affects coseismic slip results in large uncertainties in these slip estimates.

  9. Source model estimation of the 2005 Kyushu Earthquake, Japan using Modified Semi Empirical Technique

    NASA Astrophysics Data System (ADS)

    Sandeep; Joshi, A.; Sah, S. K.; Kumar, Parveen; Lal, Sohan; Vandana; Kamal; Singh, R. S.

    2017-10-01

    The 2005 Kyushu earthquake (MW 6.6, MJMA 7.0) occurred northwest of Fukuoka, Japan, causing substantial damage and injuries. Here, we model the earthquake's source using data recorded at surrounding field stations. Two isolated strong motion generation areas (SMGAs) are identified on the rupture plane. The parameters of each SMGA were estimated from source displacement spectra, and the spatiotemporal distribution of aftershocks was then used to identify possible locations of the SMGAs on the rupture plane. A modified semi-empirical technique (MSET) was used to simulate records for the estimated rupture model. We then compared the observed and simulated acceleration records at eight regional stations. The close match between observed and simulated records confirms the robustness of the two-SMGA rupture model and the ability of MSET to simulate strong ground motion.

  10. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with epicentral distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore.

  11. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainty can be crucial for meaningful estimation, it is often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
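
    A toy version of the regularization choice described above, assuming a linear(ized) forward operator G, a data vector d, and an estimated noise norm; the actual inversion adds multiple time windows and the iterative covariance update:

        import numpy as np

        def tikhonov_solve(G, d, alpha):
            # Damped least squares: min ||G m - d||^2 + alpha^2 ||m||^2.
            n = G.shape[1]
            A = np.vstack([G, alpha * np.eye(n)])
            b = np.concatenate([d, np.zeros(n)])
            m, *_ = np.linalg.lstsq(A, b, rcond=None)
            return m

        def discrepancy_search(G, d, noise_norm, alphas):
            # Pick the weight whose residual norm best matches the noise
            # level (the discrepancy principle), by grid search.
            best = None
            for a in alphas:
                m = tikhonov_solve(G, d, a)
                gap = abs(np.linalg.norm(G @ m - d) - noise_norm)
                if best is None or gap < best[0]:
                    best = (gap, a, m)
            return best[1], best[2]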

  12. A spatially explicit estimate of avoided forest loss.

    PubMed

    Honey-Rosés, Jordi; Baylis, Kathy; Ramírez, M Isabel

    2011-10-01

    With the potential expansion of forest conservation programs spurred by climate-change agreements, there is a need to measure the extent to which such programs achieve their intended results. Conventional methods for evaluating conservation impact tend to be biased because they do not compare like areas or account for spatial relations. We assessed the effect of a conservation initiative that combined designation of protected areas with payments for environmental services to conserve overwintering habitat for the monarch butterfly (Danaus plexippus) in Mexico. To do so, we used a spatial-matching estimator that matches covariates among polygons and their neighbors. We measured avoided forest loss (avoided disturbance and deforestation) by comparing forest cover on protected and unprotected lands that were similar in terms of accessibility, governance, and forest type. Whereas conventional estimates of avoided forest loss suggest that conservation initiatives did not protect forest cover, we found evidence that the conservation measures are preserving forest cover. We found that the conservation measures protected between 200 ha and 710 ha (3-16%) of forest that is high-quality habitat for monarch butterflies, but had a smaller effect on total forest cover, preserving between 0 ha and 200 ha (0-2.5%) of forest with canopy cover >70%. We suggest that future estimates of avoided forest loss be analyzed spatially to account for how forest loss occurs across the landscape. Given the forthcoming demand from donors and carbon financiers for estimates of avoided forest loss, we anticipate our methods and results will contribute to future studies that estimate the outcome of conservation efforts.

  13. Stress drop estimates and hypocenter relocations of induced earthquakes near Fox Creek, Alberta

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Harrington, R. M.; Liu, Y.; Gu, Y. J.

    2016-12-01

    This study investigates the physical differences between induced and naturally occurring earthquakes using a sequence of events potentially induced by hydraulic fracturing near Fox Creek, Alberta. We perform precise estimations of static stress drop to determine whether the range of values is low compared to values estimated for naturally occurring events, as has been suggested by previous studies. Starting with the Natural Resources Canada earthquake catalog and using waveform data from regional networks, we use a spectral ratio method to calculate the static stress drop values of a group of relocated earthquakes occurring in close proximity to hydraulic fracturing wells from December 2013 to June 2015. The spectral ratio method allows us to precisely constrain the corner frequencies of the amplitude spectra by eliminating the path and site effects of co-located event pairs. Our estimated stress drop values range from 0.1 to 149 MPa over the full range of observed magnitudes, Mw 1.5-4, which is on the high side of the typically reported range for tectonic events, but consistent with other regional studies [Zhang et al., 2016; Wang et al., 2016]. Stress drop values range from 11 to 93 MPa and appear to be scale invariant over the magnitude range Mw 3-4, and are less well constrained at lower magnitudes due to noise and bandwidth limitations. We observe no correlation between event stress drop and hypocenter depth or distance from the wells. Relocated hypocenters cluster around corresponding injection wells and form fine-scale lineations, suggesting the presence and orientation of fault planes. We conclude that neither the range of stress drops nor their scaling with respect to magnitude can be used to conclusively discriminate induced and tectonic earthquakes, as stress drop values may be greatly affected by the regional setting. Instead, the double-difference relocations may be a more reliable indicator of induced seismicity.
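
    For orientation, the standard Brune-type arithmetic that turns a moment and a corner frequency into a static stress drop (a generic textbook calculation, not necessarily the exact model constants used in this study):

        import numpy as np

        def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
            # Brune (1970): source radius r = 2.34*beta/(2*pi*fc), and
            # static stress drop dsigma = 7*M0/(16*r^3).
            r = 2.34 * beta_ms / (2.0 * np.pi * fc_hz)
            return 7.0 * m0_nm / (16.0 * r ** 3)

        def mw_to_m0(mw):
            # Hanks-Kanamori moment (N*m) from moment magnitude.
            return 10.0 ** (1.5 * mw + 9.05)

        # e.g. an Mw 3.5 event with a 10 Hz corner frequency: ~40 MPa
        print(brune_stress_drop(mw_to_m0(3.5), 10.0) / 1e6, "MPa")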

  14. The design and implementation of urban earthquake disaster loss evaluation and emergency response decision support systems based on GIS

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Xu, Quan-li; Peng, Shuang-yun; Cao, Yan-bo

    2008-10-01

    Based on a necessity analysis of GIS applications in earthquake disaster prevention, this paper discusses in depth the spatial integration scheme of urban earthquake disaster loss evaluation models and visualization technologies, using network development methods such as COM/DCOM, ActiveX and ASP, as well as spatial database development methods such as OO4O and ArcSDE based on ArcGIS software packages. Meanwhile, following Software Engineering principles, a solution for Urban Earthquake Emergency Response Decision Support Systems based on GIS technologies is also proposed, covering the system's logical structure, technical routes, realization methods, and function structure. Finally, the test system's user interfaces are also presented in the paper.

  15. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely (with a 31 percent probability) fault in the Bay Area to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault conducted by Bay Area County Offices of Emergency Services. Sixty other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U. S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  16. Estimating refractivity from propagation loss in turbulent media

    NASA Astrophysics Data System (ADS)

    Wagner, Mark; Gerstoft, Peter; Rogers, Ted

    2016-12-01

    This paper estimates lower atmospheric refractivity (M-profile) given an electromagnetic (EM) propagation loss (PL) measurement. Specifically, height-independent PL measurements over a range of 10-80 km are used to infer information about the existence and potential parameters of atmospheric ducts in the lowest 1 km of the atmosphere. The main improvement over previous refractivity estimations is the inclusion of range-dependent fluctuations due to turbulence in the forward propagation model. Using this framework, the maximum likelihood (ML) estimate of atmospheric refractivity has good accuracy, and with prior information about ducting the maximum a posteriori (MAP) refractivity estimate can be found. Monte Carlo methods are used to estimate the mean and covariance of PL, which are fed into a Gaussian likelihood function for evaluating the probability of an estimated refractivity. Comparisons were made between inversions performed on propagation loss data simulated by a wide-angle parabolic equation (PE) propagation model with added homogeneous and inhomogeneous turbulence. It was found that the turbulence models produce significantly different results, suggesting that accurate modeling of turbulence is key.
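
    A minimal sketch of the Gaussian likelihood evaluation, assuming Monte Carlo PL realizations (one array of samples per candidate M-profile) are available; all names are illustrative:

        import numpy as np
        from scipy.stats import multivariate_normal

        def log_likelihood(pl_obs, pl_samples):
            # Gaussian likelihood of the observed PL vector, with mean and
            # covariance estimated from Monte Carlo realizations (rows).
            mu = pl_samples.mean(axis=0)
            cov = np.cov(pl_samples, rowvar=False)
            cov += 1e-6 * np.eye(cov.shape[0])  # small ridge for stability
            return multivariate_normal.logpdf(pl_obs, mean=mu, cov=cov)

        def ml_profile(pl_obs, candidates):
            # candidates: {profile_params: sample_array}; return the ML one.
            return max(candidates,
                       key=lambda k: log_likelihood(pl_obs, candidates[k]))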

  17. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from systematic analysis of past earthquake impacts and associated response levels, are quite effective in communicating predicted impact and the response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries where local building practices typically lend themselves to high collapse and casualty rates, and these impacts lend themselves to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
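
    The thresholds quoted above translate directly into a small decision rule; the sketch below simply encodes the numbers given in the abstract and reports the more severe of the two alerts:

        def eis_alert(fatalities=None, loss_usd=None):
            # Map estimates onto EIS alert colors using the abstract's
            # thresholds: 1/100/1,000 fatalities; $1M/$100M/$1B losses.
            def level(value, thresholds):
                for color, t in zip(("red", "orange", "yellow"), thresholds):
                    if value >= t:
                        return color
                return "green"

            alerts = []
            if fatalities is not None:
                alerts.append(level(fatalities, (1000, 100, 1)))
            if loss_usd is not None:
                alerts.append(level(loss_usd, (1e9, 1e8, 1e6)))
            order = ("green", "yellow", "orange", "red")
            return max(alerts, key=order.index) if alerts else "green"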

  18. Estimating Earthquake Source Parameters from P-wave Spectra: Lessons from Theory and Observations

    NASA Astrophysics Data System (ADS)

    Shearer, P. M.; Denolle, M.; Kaneko, Y.

    2015-12-01

    Observations make clear that some earthquakes radiate relatively more high-frequency energy than others of the same moment. But translating these differences into traditional source parameter measures, such as stress drop and radiated energy, can be problematic. Some of the issues include: (1) Because of directivity and other rupture propagation details, theoretical results show that recorded spectra will vary in shape among stations. Observational studies often neglect this effect or assume it will average out when multiple stations are used, but this averaging is rarely perfect, particularly considering the narrow range of takeoff angles used in teleseismic studies. (2) Depth phases for shallow events create interference in the spectra that can severely bias spectral estimates, unless the depth phases are taken into account. (3) Corner frequency is not a well-defined parameter, and different methods for its computation will yield different results. In addition, stress drop estimates inferred from corner frequencies rely on specific theoretical rupture models, and different assumed crack geometries and rupture velocities will yield different stress drop values. (4) Attenuation corrections may be inaccurate or may not fully reflect local 3D near-source attenuation structure. The use of empirical Green's function (EGF) events can help, but these often have signal-to-noise issues or are not very close to the target earthquake. (5) Energy estimates typically rely on some degree of extrapolation of spectra beyond their observational band, introducing model assumptions into what is intended to be a direct measure of an earthquake property. (6) P-wave spectra are analyzed much more than S-wave spectra because of their greater frequency content, but they carry only a small fraction of the total radiated seismic energy, and thus total energy estimates may rely on poorly known Es/Ep scaling relations. We will discuss strategies to address these problems and to compute improved source

  19. Estimation of postfire nutrient loss in the Florida everglades.

    PubMed

    Qian, Y; Miao, S L; Gu, B; Li, Y C

    2009-01-01

    Postfire nutrient release into the ecosystem via plant ash is critical to the understanding of fire impacts on the environment. Factors determining a postfire nutrient budget are the prefire nutrient content of the combustible biomass, the burn temperature, and the amount of combustible biomass. Our objective was to quantitatively describe the relationships between nutrient losses (or concentrations in ash) and burning temperature in laboratory-controlled combustion and to further predict nutrient losses in field fires by applying predictive models established from the laboratory data. The percentage losses of total nitrogen (TN), total carbon (TC), and material mass showed a significant linear correlation with a slope close to 1, indicating that TN or TC loss occurred predominantly through volatilization during combustion. Data obtained in laboratory experiments suggest that the losses of TN and TC, as well as the ratio of ash total phosphorus (TP) concentration to leaf TP concentration, have strong relationships with burning temperature, and these relationships can be quantitatively described by nonlinear equations. The potential use of these nonlinear models relating nutrient loss (or concentration) to temperature in predicting nutrient concentrations in field ash appears promising. During a prescribed fire in the northern Everglades, 73.1% of TP was estimated to be retained in ash while 26.9% was lost to the atmosphere, agreeing well with the distribution of TP during previously reported wildfires. The use of predictive models would greatly reduce the cost associated with measuring field ash nutrient concentrations.

  20. Update earthquake risk assessment in Cairo, Egypt

    NASA Astrophysics Data System (ADS)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan

    2017-07-01

    The Cairo earthquake (12 October 1992; mb = 5.8) remains, 25 years on, one of the most painful events etched in Egyptians' memory. This is not due to the strength of the earthquake but to the accompanying losses and damage (561 dead, 10,000 injured, and 3,000 families left homeless). Nowadays, the most frequent and important question is: what if this earthquake were repeated today? In this study, we simulate the ground motion shaking of an earthquake of the same size (12 October 1992) and the consequent socioeconomic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Overall, the earthquake risk assessment clearly indicates that the losses and damage could be two or three times greater in Cairo today than in the 1992 earthquake. The earthquake risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr sharq) lie at high seismic risk, and three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at a low seismic risk level. Moreover, the building damage estimates indicate that Gharb is the most vulnerable district. The analysis shows that the Cairo urban area faces high risk. Deteriorating buildings and infrastructure make the city particularly vulnerable to earthquake risks. For instance, more than 90% of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb). Moreover, about 75% of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management. Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety

  2. Stable stress drop ratio estimation for potentially induced earthquakes in Oklahoma

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, N.; Ellsworth, W. L.; Beroza, G. C.; Huang, Y.; Shaw, B. E.

    2016-12-01

    Stress drop is a key source parameter that influences the earthquake hazard in Oklahoma, where potentially injection-induced earthquakes show high activity. Determining whether the stress drops of induced earthquakes are similar to those of natural events would provide an important constraint on the hazard. Making stable and accurate stress drop measurements has been a challenging issue because of the strong dependence of stress drop on the corner frequency. Shaw et al. (in this meeting) introduced a new approach for stable stress drop ratio estimation, using the high- and low-frequency asymptotic levels of the spectral ratio, which avoids the need to measure corner frequencies. We compared the stress drop ratios of earthquakes in Oklahoma measured by this new approach with stress drops measured by the traditional approach based on spectral fitting. We formed spectral ratios for pairs of co-located events in two clusters to remove path effects; both clusters contain 9 events (distance < 5 km, 2.7 < Mw < 4.2). The spectral ratios between the smaller events and the largest event in each cluster were calculated using the Multi-Window Spectral Ratio method. We analyzed 5.12 s of coda waves starting at twice the S-wave arrival time, and stacked the spectral ratios of all components of 6 USGS stations. The corner frequencies and moment ratio of each event pair were fit with the Boatwright model. Using the derived corner frequencies and the asymptotic levels of the spectral ratio, we measured the stress drop ratios of each event with the two methods. The stress drop ratios obtained from the asymptotic spectral values ranged from 1 to 2, while the values obtained from the corner frequencies ranged from 1 to 5. This suggests that the stress drop ratio based on the spectral asymptotes is an effective way to reduce estimation error, which should greatly improve our ability to characterize true stress drop variability.
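
    For intuition, the asymptote trick reduces to closed-form algebra for a plain omega-square source pair (the Boatwright model used here has a sharper corner, so the exact constants differ):

        def stress_drop_ratio(r_low, r_high):
            # Omega-square pair: r_low = M01/M02 (low-f asymptote) and
            # r_high = r_low * (fc1/fc2)**2 (high-f asymptote), so
            # dsigma1/dsigma2 = (M01/M02) * (fc1/fc2)**3
            #                 = r_high**1.5 / r_low**0.5,
            # with no corner frequency ever fit explicitly.
            return r_high ** 1.5 / r_low ** 0.5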

  3. Estimating Phosphorus Loss in Runoff from Manure and Fertilizer for a Phosphorus Loss Quantification Tool

    USDA-ARS?s Scientific Manuscript database

    Non-point source pollution of fresh waters by phosphorus (P) is a concern because it contributes to accelerated eutrophication. Qualitative P Indexes that estimate the risk of field-scale P loss have been developed in the USA and Europe. However, given the state of the science concerning agricultura...

  4. Dose estimates in a loss of lead shielding truck accident.

    SciTech Connect

    Dennis, Matthew L.; Osborn, Douglas M.; Weiner, Ruth F.; Heames, Terence John

    2009-08-01

    The radiological transportation risk & consequence program, RADTRAN, has recently added an updated loss of lead shielding (LOS) model to its most recent version, RADTRAN 6.0. The LOS model was used to determine dose estimates to first responders during a spent nuclear fuel transportation accident. Results varied according to the following: type of accident scenario, percent of lead slump, distance to the shipment, and time spent in the area. This document presents a method of creating dose estimates for first responders using RADTRAN with potential accident scenarios. This may be of particular interest in the event of high-speed accidents or fires involving cask punctures.

  5. Probabilistic seismic loss estimation via endurance time method

    NASA Astrophysics Data System (ADS)

    Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.

    2017-01-01

    Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA driven response predictions of 34 code conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of damage and loss prediction functions provided by ATC 58.

  6. Microzonation of Seismic Hazards and Estimation of Human Fatality for Scenario Earthquakes in Chianan Area, Taiwan

    NASA Astrophysics Data System (ADS)

    Liu, K. S.; Chiang, C. L.; Ho, T. T.; Tsai, Y. B.

    2015-12-01

    In this study, we assess seismic hazards in the 57 administrative districts of the Chianan area, Taiwan, in the form of ShakeMaps, and estimate potential human fatalities from scenario earthquakes on the three Type I active faults in this area. Two regions show high MMI intensity, greater than IX, in the map of maximum ground motion. One is in the Chiayi area around Minsyong, Dalin and Meishan, due to the presence of the Meishan fault and large site amplification factors, which reach as high as 2.38 and 2.09 for PGA and PGV, respectively, in Minsyong. The other is in the Tainan area around Jiali, Madou, Siaying, Syuejia, Jiangjyun and Yanshuei, due to a disastrous earthquake of magnitude Mw 6.83 that occurred in 1862 near the border between Jiali and Madou, and large site amplification factors, which reach as high as 2.89 and 2.97 for PGA and PGV, respectively, in Madou. In addition, the probabilities over 10-, 30-, and 50-year periods of seismic intensity exceeding MMI VIII in the above areas are greater than 45%, 80% and 95%, respectively. Moreover, the probability distributions show high values of greater than 95% over a 10-year period for seismic intensity corresponding to CWBI V and MMI VI in central and northern Chiayi and northern Tainan. Finally, from the estimation of human fatalities for scenario earthquakes on the three active faults in the Chianan area, it is noted that the numbers of fatalities increase rapidly for people above age 45. Compared to the 1946 Hsinhua earthquake, the number of fatalities estimated for the scenario earthquake on the Hsinhua active fault is significantly higher; however, this higher number is reasonable considering the probable causes. Hence, we urge local and central governments to pay special attention to seismic hazard mitigation in this highly urbanized area with a large number of old buildings.

  7. Maximum magnitude estimations of induced earthquakes at Paradox Valley, Colorado, from cumulative injection volume and geometry of seismicity clusters

    NASA Astrophysics Data System (ADS)

    Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.

    2015-01-01

    The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods
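
    A minimal version of the geometrical bound, assuming an Eshelby circular (penny-shaped) crack and a generic 3 MPa stress drop, which is an assumption here rather than the study's value:

        import numpy as np

        def max_mw(radius_m, stress_drop_pa=3.0e6):
            # Full rupture of a circular crack: M0 = (16/7)*dsigma*r^3
            # (Eshelby); Mw via Hanks-Kanamori. The 3 MPa stress drop is a
            # generic assumption, not the PVU value.
            m0 = (16.0 / 7.0) * stress_drop_pa * radius_m ** 3
            return (2.0 / 3.0) * (np.log10(m0) - 9.05)

    For a 2 km fault radius, roughly the scale of a seismicity cluster around the well, this gives about Mw 5.1, consistent with the "roughly MW 5" potential quoted above.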

  8. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

    The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values were assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results that Lg attenuation in the Basin and Range province is comparable to that in California.

  9. Earthquake shaking hazard estimates and exposure changes in the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Petersen, Mark D.; Rukstales, Kenneth S.; Leith, William S.

    2015-01-01

    A large portion of the population of the United States lives in areas vulnerable to earthquake hazards. This investigation aims to quantify population and infrastructure exposure within the conterminous U.S. that are subjected to varying levels of earthquake ground motions by systematically analyzing the last four cycles of the U.S. Geological Survey's (USGS) National Seismic Hazard Models (published in 1996, 2002, 2008 and 2014). Using the 2013 LandScan data, we estimate the numbers of people who are exposed to potentially damaging ground motions (peak ground accelerations at or above 0.1g). At least 28 million (~9% of the total population) may experience 0.1g level of shaking at relatively frequent intervals (annual rate of 1 in 72 years or 50% probability of exceedance (PE) in 50 years), 57 million (~18% of the total population) may experience this level of shaking at moderately frequent intervals (annual rate of 1 in 475 years or 10% PE in 50 years), and 143 million (~46% of the total population) may experience such shaking at relatively infrequent intervals (annual rate of 1 in 2,475 years or 2% PE in 50 years). We also show that there is a significant number of critical infrastructure facilities located in high earthquake-hazard areas (Modified Mercalli Intensity ≥ VII with moderately frequent recurrence interval).

  10. Sufficient dimension reduction via squared-loss mutual information estimation.

    PubMed

    Suzuki, Taiji; Sugiyama, Masashi

    2013-03-01

    The goal of sufficient dimension reduction in supervised learning is to find the low-dimensional subspace of input features that contains all of the information about the output values that the input features possess. In this letter, we propose a novel sufficient dimension-reduction method using a squared-loss variant of mutual information as a dependency measure. We apply a density-ratio estimator for approximating squared-loss mutual information that is formulated as a minimum contrast estimator on parametric or nonparametric models. Since cross-validation is available for choosing an appropriate model, our method does not require any prespecified structure on the underlying distributions. We elucidate the asymptotic bias of our estimator on parametric models and the asymptotic convergence rate on nonparametric models. The convergence analysis utilizes the uniform tail-bound of a U-process, and the convergence rate is characterized by the bracketing entropy of the model. We then develop a natural gradient algorithm on the Grassmann manifold for sufficient subspace search. The analytic formula of our estimator allows us to compute the gradient efficiently. Numerical experiments show that the proposed method compares favorably with existing dimension-reduction approaches on artificial and benchmark data sets.

  11. Testing Earthquake Source Inversion Methodologies

    NASA Astrophysics Data System (ADS)

    Page, Morgan; Mai, P. Martin; Schorlemmer, Danijel

    2011-03-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  12. Earthquake source scaling and self-similarity estimation from stacking P and S spectra

    NASA Astrophysics Data System (ADS)

    Prieto, Germán A.; Shearer, Peter M.; Vernon, Frank L.; Kilb, Debi

    2004-08-01

    We study the scaling relationships of source parameters and the self-similarity of earthquake spectra by analyzing a cluster of over 400 small earthquakes (ML = 0.5 to 3.4) recorded by the Anza seismic network in southern California. We compute P, S, and preevent noise spectra from each seismogram using a multitaper technique and approximate source and receiver terms by iteratively stacking the spectra. To estimate scaling relationships, we average the spectra in size bins based on their relative moment. We correct for attenuation by using the smallest moment bin as an empirical Green's function (EGF) for the stacked spectra in the larger moment bins. The shapes of the log spectra agree within their estimated uncertainties after shifting along the ω^-3 line expected for self-similarity of the source spectra. We also estimate corner frequencies and radiated energy from the relative source spectra using a simple source model. The ratio between radiated seismic energy and seismic moment (proportional to apparent stress) is nearly constant with increasing moment over the magnitude range of our EGF-corrected data (ML = 1.8 to 3.4). Corner frequencies vary inversely as the cube root of moment, as expected from the observed self-similarity in the spectra. The ratio between P and S corner frequencies is observed to be 1.6 ± 0.2. We obtain values for absolute moment and energy by calibrating our results to local magnitudes for these earthquakes. This yields a S to P energy ratio of 9 ± 1.5 and a value of apparent stress of about 1 MPa.
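
    Two of the derived quantities reduce to one-line formulas; a sketch assuming the usual definitions (apparent stress σa = μEr/M0, and fc ∝ M0^(-1/3) under self-similarity):

        import numpy as np

        def apparent_stress(radiated_energy_j, m0_nm, mu=3.0e10):
            # Apparent stress (Pa) = mu * Er / M0; near-constancy with
            # moment is the self-similar behavior reported above.
            return mu * radiated_energy_j / m0_nm

        def self_similar_fc(fc_ref_hz, m0_ref_nm, m0_nm):
            # Corner frequency predicted by fc ~ M0^(-1/3) scaling.
            return fc_ref_hz * (m0_nm / m0_ref_nm) ** (-1.0 / 3.0)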

  13. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and expand greatly upon existing global databases, and to better understand the trends in vulnerability, exposure, and possible future impacts of historic earthquakes. In the authors' view, the lack of consistency and the errors in other frequently cited earthquake loss databases were major shortcomings that needed to be addressed. Over 17,000 sources of information have been utilised, primarily in the last few years, to present data from over 12,200 damaging historical earthquakes, with over 7,000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. Comparison of the 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars) with the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes shows the increasing concern for economic loss in urban areas, a trend that should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  14. Deep Structure and Earthquake Generating Properties in the Yamasaki Fault Zone Estimated from Dense Seismic Observation

    NASA Astrophysics Data System (ADS)

    Nishigami, K.; Shibutani, T.; Katao, H.; Yamaguchi, S.; Mamada, Y.

    2010-12-01

    We have been estimating heterogeneous crustal structure and earthquake-generating properties in and around the Yamasaki fault zone, a left-lateral strike-slip active fault with a total length of about 80 km in southwest Japan. We deployed a dense seismic observation network composed of 32 stations with an average spacing of 5-10 km around the Yamasaki fault zone. We estimate detailed fault structure, such as fault dip and shape, segmentation, and the possible locations of asperities and rupture initiation points, as well as the generating properties of earthquakes in the fault zone, through analyses of accurate hypocenter distributions, focal mechanisms, 3-D velocity tomography, coda wave inversion, and other waveform analyses. We also deployed a linear seismic array across the fault, composed of 20 stations with about 20 m spacing, in order to delineate the fault-zone structure in more detail using the seismic waves trapped inside the low-velocity zone. We also estimate the detailed resistivity structure at shallow depths of the fault zone by AMT (audio-frequency magnetotelluric) and MT surveys. In the scattering analysis of coda waves, we used 2,391 wave traces from 121 earthquakes that occurred in 2002, 2003, 2008 and 2009, recorded at 60 stations, including dense temporary and routine stations. We estimated the 3-D distribution of relative scattering coefficients along the Yamasaki fault zone. Microseismicity is high and the scattering coefficient is relatively large in the upper crust along the entire fault zone. The distribution of strong scatterers suggests that the Ohara and Hijima faults, which are the segments in the northwestern part of the Yamasaki fault zone, have almost vertical fault planes from the surface to a depth of about 15 km. We used seismic network data operated by universities, NIED, AIST, and JMA. This study has been carried out as a part of the project "Study on evaluation of earthquake source faults based on surveys of inland active faults" by Japan Nuclear

  15. Model parameter estimation bias induced by earthquake magnitude cut-off

    NASA Astrophysics Data System (ADS)

    Harte, D. S.

    2016-02-01

    We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models. First, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude. Secondly, the assumption in the Gutenberg-Richter relationship that numbers of events increase exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.

  16. Twitter as Information Source for Rapid Damage Estimation after Major Earthquakes

    NASA Astrophysics Data System (ADS)

    Eggert, Silke; Fohringer, Joachim

    2014-05-01

    Natural disasters like earthquakes require a fast response from local authorities. Well-trained rescue teams have to be available, equipment and technology have to be set up and ready, and information has to be directed to the right places so that headquarters can manage the operation precisely. The main goal is to reach the most affected areas in a minimum of time. But even with the best preparation for these cases, there will always be uncertainty about what really happened in the affected area. Modern geophysical sensor networks provide high-quality data. These measurements, however, only map disjoint values at their respective locations for a limited set of parameters. Using observations from witnesses represents one approach to enhance the values measured by sensors ("humans as sensors"). These observations are increasingly disseminated via social media platforms. These "social sensors" offer several advantages over common sensors, e.g., high mobility, high versatility of captured parameters, and rapid distribution of information. Moreover, the amount of data offered by social media platforms is quite extensive. We analyze messages distributed via Twitter after major earthquakes to get rapid information on what eyewitnesses report from the epicentral area. We use this information to (a) quickly learn about damage and losses to support fast disaster response and to (b) densify geophysical networks in areas with sparse information to gain more detailed insight into felt intensities. We present a case study from the Mw 7.1 Philippines (Bohol) earthquake that happened on Oct. 15, 2013. We extract Twitter messages, so-called tweets, containing one or more specified keywords from the semantic field of "earthquake" and use them for further analysis. For the time frame of Oct. 15 to Oct. 18, we obtain a database of 50,000 tweets in total, of which 2,900 tweets are geo-localized and 470 have a photo attached. Analyses for both national level and locally for

  17. Estimation of regression laws for ground motion parameters using as case of study the Amatrice earthquake

    NASA Astrophysics Data System (ADS)

    Tiberi, Lara; Costa, Giovanni

    2017-04-01

    Directly associating damage with ground motion parameters is always a great challenge, in particular for civil protection authorities. Indeed, a ground motion parameter that is estimated in near real time and that expresses the damage occurring after an earthquake is fundamental for organizing first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity immediately after an event. This can be done by calculating, for each ground motion parameter estimated in near real time, a regression law that correlates the parameter with the observed macroseismic intensity. The estimation is performed by collecting high-quality near-field accelerometric data and filtering them at different frequency steps. The regression laws are calculated using two different techniques: the nonlinear least-squares (NLLS) Levenberg-Marquardt algorithm and orthogonal distance regression (ODR). The limitations of the first method are the need for initial values of the parameters a and b (set to 1.0 in this study) and the requirement that the independent variable be known with greater accuracy than the dependent variable. The second algorithm, in contrast, is based on errors measured perpendicular to the fitted line rather than vertically: vertical errors affect only the dependent variable (the 'y' direction), whereas perpendicular errors account for errors in both the dependent and independent variables. This also makes it possible to invert the relation directly, so that the a and b values can be used to express the ground motion parameters as a function of I. For each law, the standard deviation and R² value are estimated to test the quality and reliability of the relation. The Amatrice earthquake of 24 August 2016 is used as a case study to test the goodness of the calculated regression laws.
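
    The two fitting techniques can be contrasted on synthetic data as below; the linear law I = a + b·log10(PGA), the noise levels, and the coefficients are illustrative only, not the paper's calibration.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy import odr

        rng = np.random.default_rng(0)

        # Synthetic data for a law I = a + b*log10(PGA), with errors in both
        # the intensity and the (log) ground motion parameter.
        a_true, b_true = 2.0, 1.5
        log_pga = rng.uniform(-1, 1, 200)
        intensity = a_true + b_true * log_pga + rng.normal(0, 0.3, 200)
        log_pga_obs = log_pga + rng.normal(0, 0.1, 200)

        # 1) NLLS (Levenberg-Marquardt), initial values set to 1.0 as in the study.
        def law(x, a, b):
            return a + b * x
        p_nlls, _ = curve_fit(law, log_pga_obs, intensity, p0=[1.0, 1.0], method="lm")

        # 2) ODR: perpendicular errors, accounting for uncertainty in both variables.
        model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
        data = odr.RealData(log_pga_obs, intensity, sx=0.1, sy=0.3)
        p_odr = odr.ODR(data, model, beta0=[1.0, 1.0]).run().beta

        print("NLLS:", p_nlls, " ODR:", p_odr)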

  18. Rapid magnitude estimation using τC method for earthquake early warning system (Case study in Sumatra)

    NASA Astrophysics Data System (ADS)

    Rahman, Aditya; Marsono, Agus; Rudyanto, Ariska

    2017-07-01

    Sumatra has three earthquake sources: the subduction zone, the Sumatran fault system, and the outer-arc faults, which lie very close to settlements and therefore pose a serious threat to human lives and property. An earthquake early warning system should be developed for mitigation. This study aims to develop an earthquake early warning system by estimating the magnitude before the arrival of the S waves. The magnitude is estimated from the relationship between the τc parameter and magnitude. Strong ground motion records were integrated twice to obtain displacement records, with a high-pass Butterworth filter applied. τc was determined from the ratio of displacement to velocity on the vertical-component record; it reflects the size of an earthquake from the initial portion of the P waves. The τc method generated magnitude estimates with a deviation of 0.71 magnitude units from the actual size of the impending earthquake, using a filter corner frequency of 0.5 Hz, and required 21.1 seconds before the arrival of the S waves. The method was validated against the existing earthquake catalog.
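
    A minimal sketch of the standard τc computation used in early warning, τc = 2π sqrt(∫u² dt / ∫u̇² dt) over the initial P-wave window; the window length and the test signal below are illustrative.

        import numpy as np

        def tau_c(disp, vel, dt, window_s=3.0):
            """tau_c from the first seconds of the P wave. disp, vel: vertical
            displacement and velocity traces starting at the P arrival (already
            high-pass filtered and integrated)."""
            n = int(window_s / dt)
            num = np.sum(disp[:n] ** 2) * dt   # integral of u^2
            den = np.sum(vel[:n] ** 2) * dt    # integral of u_dot^2
            return 2.0 * np.pi * np.sqrt(num / den)

        # Sanity check with a monochromatic signal of period T: tau_c ~ T.
        dt, T = 0.01, 1.0
        t = np.arange(0, 3, dt)
        u = np.sin(2 * np.pi * t / T)
        v = np.gradient(u, dt)
        print(tau_c(u, v, dt))   # ~1.0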

  19. tau_p^{max} magnitude estimation, the case of the April 6, 2009 L'Aquila earthquake

    NASA Astrophysics Data System (ADS)

    Olivieri, Marco

    2013-04-01

    Rapid magnitude estimate procedures represent a crucial part of proposed earthquake early warning systems. Most of these estimates are focused on the first part of the P-wave train, the earlier and less destructive part of the ground motion that follows an earthquake. Allen and Kanamori (Science 300:786-789, 2003) proposed to use the predominant period of the P-wave to determine the magnitude of a large earthquake at local distance, and Olivieri et al. (Bull Seismol Soc Am 185:74-81, 2008) calibrated a specific relation for the Italian region. The Mw 6.3 earthquake that hit central Italy on April 6, 2009, and its largest aftershocks provide a useful dataset to validate the proposed relation and discuss the risks connected to extrapolating magnitude relations from a poor dataset of large-earthquake waveforms. A large discrepancy between local magnitude (ML) estimated by means of tau_p^{max} evaluation and standard ML (6.8 ± 1.5 vs. 5.9 ± 0.4) suggests using caution when ML vs. tau_p^{max} calibrations do not include a relevant dataset of large earthquakes. Effects from large residuals could be mitigated or removed by introducing selection rules on the τp function, by regionalizing the ML vs. tau_p^{max} relation in the presence of significant tectonic or geological heterogeneity, and by using probabilistic and evolutionary methods.
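
    For reference, the recursive predominant-period estimator underlying tau_p^{max} (after Nakamura 1988 and Allen and Kanamori 2003) can be sketched as follows; the smoothing constant and window length are illustrative choices, not the calibrated values.

        import numpy as np

        def tau_p_max(vel, dt, alpha=0.99, window_s=4.0):
            """Maximum recursive predominant period over the initial P window.
            vel: vertical velocity trace starting at the P arrival."""
            acc = np.gradient(vel, dt)       # time derivative of velocity
            n = int(window_s / dt)
            X = D = 0.0
            tp = np.zeros(n)
            for i in range(n):
                X = alpha * X + vel[i] ** 2          # smoothed signal power
                D = alpha * D + acc[i] ** 2          # smoothed derivative power
                tp[i] = 2.0 * np.pi * np.sqrt(X / D)
            return tp.max()

        # Sanity check: a pure sinusoid of period 0.5 s gives tau_p ~ 0.5 s.
        dt = 0.01
        t = np.arange(0, 4, dt)
        print(tau_p_max(np.sin(2 * np.pi * t / 0.5), dt))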

  20. Real-time damage estimations of 2016 Kumamoto earthquakes extrapolated by the Japan Real-time Information System for earthquake (J-RISQ)

    NASA Astrophysics Data System (ADS)

    Shohei, N.; Nakamura, H.; Takahashi, I.; Fujiwara, H.

    2016-12-01

    It is crucial to develop methods for grasping the situation soon after an earthquake, both to support initial responses and to make social systems more resilient. For those reasons, we have been developing J-RISQ. Promptly after an earthquake, it estimates damage by combining methods for predicting ground motion using subsurface data, information about population and buildings, damage assessment methods for buildings using different fragility functions, and real-time observation data obtained by NIED, municipalities and JMA. In this study, we describe the estimates for the 2016 Kumamoto earthquakes produced by J-RISQ. In 2016, Kumamoto experienced two large shocks: the foreshock (M6.5) on April 14 and the main shock (M7.3) on April 16. J-RISQ published a first report 29 seconds after the foreshock and generated a total of seven reports within 10 minutes; finally, it estimated that the number of completely collapsed buildings was between 5,000 and 14,000. For the main shock, a first report was issued in 29 seconds, followed by eight reports within 11 minutes; the final estimate of completely collapsed buildings was between 15,000 and 38,000. The count of completely collapsed residences is approximately 8,300 according to the announcement by FDMA on July 19. In this regard, J-RISQ appears to have overestimated the damage; however, the spatial distribution of the estimates indicates a belt of destruction adjacent to Mashiki town, which corresponds approximately to the actually damaged area. For verification, we have performed field investigations of building damage in Kumamoto. On the other hand, the damage after the main shock includes the effect of the foreshock, so we plan to develop estimation methods that account for the weakening of buildings caused by successive earthquakes. *This work was supported by the CSTI through the Cross-ministerial Strategic Innovation Promotion Program (SIP), titled "Enhancement of societal resiliency against natural
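
    The building-damage step described above can be illustrated with a single lognormal fragility function; the median capacity and dispersion below are placeholders, whereas a system like J-RISQ uses families of fragility functions per building class and construction age.

        import numpy as np
        from scipy.stats import norm

        def p_collapse(pga_g, median_g=0.8, beta=0.5):
            """Lognormal fragility curve P(collapse | PGA). median_g and beta
            are illustrative placeholder values."""
            return norm.cdf(np.log(pga_g / median_g) / beta)

        # Expected collapses in a cell with 1,000 buildings shaken at 0.6 g:
        print(1000 * p_collapse(0.6))   # ~283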

  1. Estimating the 2008 Quetame (Colombia) earthquake source parameters from seismic data and InSAR measurements

    NASA Astrophysics Data System (ADS)

    Dicelis, Gabriel; Assumpção, Marcelo; Kellogg, James; Pedraza, Patricia; Dias, Fábio

    2016-12-01

    Seismic waveforms and geodetic measurements (InSAR) were used to determine the location, focal mechanism and coseismic surface displacements of the Mw 5.9 earthquake which struck the center of Colombia on May 24, 2008. We determined the focal mechanism of the main event using teleseismic P wave arrivals and regional waveform inversion for the moment tensor. We relocated the best set of aftershocks (30 events) with magnitudes larger than 2.0, recorded from May to June 2008 by a temporary local network as well as by stations of the Colombian national network. We successfully estimated coseismic deformation using SAR interferometry, despite distortion of some areas of the interferogram by atmospheric noise. The deformation was compared to synthetic data for rectangular dislocations in an elastic half-space. Nine source parameters (strike, dip, length, width, strike-slip deformation, dip-slip deformation, latitude shift, longitude shift, and minimum depth) were inverted to fit the observed changes in line-of-sight (LOS) range toward the satellite; four derived parameters were also estimated (rake, average slip, maximum depth and seismic moment). The aftershock relocation, the focal mechanism and the coseismic dislocation model agree with a right-lateral strike-slip fault with nodal planes oriented NE-SW and NW-SE. We use the results of the waveform inversion, radar interferometry and aftershock relocations to identify the high-angle NE-SW nodal plane as the primary fault. The inferred subsurface rupture length is roughly 11 km, which is consistent with the 12 km long distribution of aftershocks. This coseismic model can provide insights on earthquake mechanisms and seismic hazard assessments for the area, including the 8 million residents of Colombia's nearby capital city Bogota. The 2008 Quetame earthquake appears to be associated with the northeastward "escape" of the North Andean block, and it may help to illuminate how margin-parallel shear slip is partitioned in the
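
    The LOS quantity being fit can be sketched as a projection of the three-component surface displacement onto the satellite look vector; the look-vector convention (ground to satellite) and the geometry below are assumptions for illustration, not the actual acquisition parameters.

        import numpy as np

        def los_displacement(u_enu, look_enu):
            """Project an (east, north, up) displacement onto the line of
            sight. Signs depend on the processing convention used."""
            look = np.asarray(look_enu, dtype=float)
            return np.dot(u_enu, look / np.linalg.norm(look))

        # Illustrative geometry: ~34 degree incidence, simplistic east-looking sensor.
        inc = np.radians(34.0)
        look = np.array([-np.sin(inc), 0.0, np.cos(inc)])
        print(los_displacement(np.array([0.10, 0.05, 0.02]), look))   # metres in LOS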

  2. Estimation of Coda Wave Attenuation for the National Capital Region, Delhi, India Using Local Earthquakes

    NASA Astrophysics Data System (ADS)

    Mohanty, William K.; Prakash, Rajesh; Suresh, G.; Shukla, A. K.; Yanger Walling, M.; Srivastava, J. P.

    2009-03-01

    Attenuation of seismic waves is essential for the study of earthquake source parameters and for ground-motion simulations, both of which matter for the seismic hazard estimation of a region. The digital data acquired by 16 short-period seismic stations of the Delhi Telemetric Network for 55 earthquakes of magnitude 1.5 to 4.2, which occurred within an epicentral distance of 100 km in an area around Delhi, have been used to estimate the coda attenuation Qc. Using the single backscattering model, the seismograms have been analyzed at 10 central frequencies. The frequency-dependent average attenuation relationship Qc = 142 f^1.04 has been obtained. Four lapse-time windows, from 20 to 50 seconds duration in steps of 10 seconds, have been analyzed to study the lapse-time dependence of Qc. The Qc values show that the frequency dependence (exponent n) remains similar at all lapse-time window lengths, while the change in Q0 values is significant; the change in Q0 with larger lapse time reflects the degree of homogeneity at depth. The variation of Qc indicates a definite trend from west to east, in accordance with the geology of the region.
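
    Under the single backscattering model the coda envelope decays as A(f,t) = S(f) t⁻¹ exp(−π f t / Qc), so ln(A·t) is linear in lapse time with slope −π f / Qc. A minimal sketch of that fit, on a synthetic envelope:

        import numpy as np

        def coda_qc(t, amp, f):
            """Estimate coda Q at centre frequency f (Hz) by linear regression
            of ln(A*t) against lapse time t under the single backscattering
            model. amp: smoothed coda envelope amplitudes."""
            slope, _ = np.polyfit(t, np.log(amp * t), 1)
            return -np.pi * f / slope

        # Synthetic check: envelope generated with Qc = 300 at 6 Hz is recovered.
        t = np.linspace(20, 50, 200)
        amp = (1.0 / t) * np.exp(-np.pi * 6.0 * t / 300.0)
        print(coda_qc(t, amp, 6.0))   # ~300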

  3. Estimation of seismic source parameters for earthquakes in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Rhee, H.; Sheen, D.

    2013-12-01

    Recent seismicity in the Korean Peninsula is low, but there is potential for more severe seismic activity. Historical records show that there were many damaging earthquakes around the Peninsula. The absence of instrumental records of damaging earthquakes hinders our efforts to understand the seismotectonic characteristics of the Peninsula and to predict seismic hazards. It is therefore important to analyze instrumental records precisely to help improve our knowledge of seismicity in this region. Several studies on seismic source parameters in the Korean Peninsula have been performed to find source parameters for a single event (Kim, 2001; Jo and Baag, 2007; Choi, 2009; Choi and Shim, 2009; Choi, 2010; Choi and Noh, 2010; Kim et al., 2010), to find relationships between source parameters (Kim and Kim, 2008; Shin and Kang, 2008), or to determine the input parameters for stochastic strong ground motion simulation (Jo and Baag, 2001; Junn et al., 2002). In all previous studies, however, the source parameters were estimated only from small numbers of large earthquakes in this region. To understand the seismotectonic environment of a low-seismicity region, it is preferable to estimate source parameters from as much data as possible. In this study, therefore, we estimated seismic source parameters, such as the corner frequency, Brune stress drop and moment magnitude, from 503 events with ML≥1.6 that occurred in the southern part of the Korean Peninsula from 2001 to 2012. The data set consists of 2,834 S-wave trains on three-component seismograms recorded at broadband seismograph stations operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. To calculate the seismic source parameters, we used the iterative method of Jo and Baag (2001), based on the methods of Snoke (1987) and Andrews (1986). In this method, the source parameters are estimated by using the integration of
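
    For reference, a sketch of the standard Brune-model relations linking spectral level and corner frequency to seismic moment, source radius, stress drop and Mw; the density, shear velocity, distance and correction factors below are illustrative placeholders, not the study's values.

        import numpy as np

        def brune_parameters(omega0, fc, beta=3500.0, rho=2700.0, R=50_000.0,
                             rad_pattern=0.55, free_surface=2.0):
            """Brune-model source parameters from the long-period spectral
            level omega0 (m*s, distance-corrected) and corner frequency fc (Hz)."""
            m0 = 4.0 * np.pi * rho * beta**3 * R * omega0 / (rad_pattern * free_surface)
            r = 2.34 * beta / (2.0 * np.pi * fc)      # source radius (m)
            stress_drop = 7.0 * m0 / (16.0 * r**3)    # static stress drop (Pa)
            mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)   # Hanks & Kanamori (1979)
            return m0, r, stress_drop, mw

        print(brune_parameters(omega0=1e-7, fc=5.0))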

  4. Seismic moment of the 1891 Nobi, Japan, earthquake estimated from historical seismograms

    NASA Astrophysics Data System (ADS)

    Fukuyama, E.; Muramatu, I.; Mikumo, T.

    2007-06-01

    The seismic moment of the 1891 Nobi, Japan, earthquake has been evaluated from the historical seismogram recorded at the Central Meteorological Observatory in Tokyo. For this purpose, synthetic seismograms from point and finite source models with various fault parameters have been calculated by a discrete wave-number method, incorporating the instrumental response of the Gray-Milne-Ewing seismograph, and then compared with the original records. Our estimate of the seismic moment (M0) is 1.8 × 10^20 N m, corresponding to a moment magnitude (Mw) of 7.5. This is significantly smaller than previous estimates based on the distribution of damage, but is consistent with that inferred from the geological field survey of the surface faults (Matsuda, 1974).
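
    As a quick consistency check, using the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1) with M0 in N m:

        import math
        m0 = 1.8e20                                   # N*m, as estimated above
        mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
        print(round(mw, 2))                           # ~7.44, consistent with Mw 7.5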

  5. Estimation of Future Changes in Flood Disaster Losses

    NASA Astrophysics Data System (ADS)

    Konoshima, L.; Hirabayashi, Y.; Roobavannan, M.

    2012-12-01

    Disaster losses can be estimated from hazard intensity, exposure, and vulnerability. Many studies have addressed future economic losses from river floods, most of which are focused on Europe (Bouwer et al., 2010). Here, flood disaster losses are calculated using the output of multi-model ensembles of CMIP5 GCMs in order to estimate the changes in damage loss due to climate change. For the global distribution of the expected future population and GDP, the ALPS scenario of RITE is used. A flood event is defined as a river discharge exceeding the 100-year return period. The time series of annual maximum daily discharge at each grid cell was fitted to a GEV distribution, with the L-moment method (Hosking and Wallis, 1997) used to estimate the distribution parameters. Both the Gumbel and generalized extreme value (GEV) distributions were tested to examine future changes in the 100-year value. Using the present-condition 100-year flood and the annual maximum discharge for present and future climate conditions, the area exceeding the 100-year flood is calculated for each 30-year period. To estimate the economic impact of future changes in the occurrence of the 100-year flood, the affected total GDP is calculated by multiplying the affected population by the country's GDP in areas exceeding the present-climate 100-year flood value, for both present and future conditions. The 100-year flood value is fixed at its present-condition value when calculating the affected value under future conditions. To consider the effects of climatic conditions and changes in economic growth, the regions are classified by continent. Southeast Asia is divided into Japan and South Korea (No. 1) and other countries (No. 2), since the GDP and GDP growth rates within the two areas differ considerably compared to other regions. Figure 1 shows the average and standard deviation (1-sigma) of future changing ratio
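
    The 100-year value computation can be sketched as below. Note that scipy fits the GEV by maximum likelihood, a stand-in here for the L-moment method (Hosking and Wallis, 1997) used in the study; the discharge series is synthetic.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(1)

        # Synthetic annual-maximum daily discharge series (illustrative values).
        annmax = genextreme.rvs(c=-0.1, loc=1000.0, scale=250.0, size=60,
                                random_state=rng)

        # Fit a GEV and read off the 100-year return level (exceeded with
        # probability 1/100 in any given year).
        c, loc, scale = genextreme.fit(annmax)
        q100 = genextreme.isf(1.0 / 100.0, c, loc=loc, scale=scale)
        print(q100)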

  6. Quasi-static Slips Around the Source Areas of the 2003 Tokachi-oki (M8.0) and 2005 Miyagi-oki (M7.2) Earthquakes, Japan Estimated From Small Repeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Uchida, N.; Matsuzawa, T.; Hirahara, S.; Igarashi, T.; Hasegawa, A.; Kasahara, M.

    2005-12-01

    We have estimated the spatio-temporal distribution of interplate quasi-static slip around the source areas of the 2003 Tokachi-oki (M8.0) and 2005 Miyagi-oki (M7.2) earthquakes by using small repeating earthquakes. The small repeating earthquakes are thought to be caused by repeated rupture of small asperities surrounded by stable sliding areas on the fault. Here we estimated cumulative slip for the small repeating earthquakes, assuming it equals the quasi-static slip history of the surrounding areas on the plate boundary (Igarashi et al., 2003; Uchida et al., 2003). The 2003 Tokachi-oki earthquake occurred on September 26, 2003, off the southeast of Hokkaido, Japan. The present analyses show that slip in the areas around and to the east of the asperity of the earthquake was slow before the earthquake but accelerated significantly afterwards. The slip-rate acceleration to the east of the asperity probably triggered a M7.1 event which occurred on November 29, 2004, at the eastern edge of the accelerated area (about 100 km east of the hypocenter of the Tokachi-oki earthquake). It seems that the quasi-static slip released the slip deficit in the locked area between the asperities of the 2003 Tokachi-oki and 1973 Nemuro-oki (M7.4) earthquakes. The 2005 Miyagi-oki earthquake occurred on August 16, 2005, in the anticipated source area of the recurrent 'Miyagi-oki earthquake'. However, it was estimated that the earthquake did not rupture the whole area of the asperity which caused the previous Miyagi-oki earthquake in 1978 (The Headquarters for Earthquake Research Promotion, 2005). Our results show that the quasi-static slip during the 20 years before the earthquake was almost constant to the west of the source area of the 2005 event. Slip after the earthquake was not significant during the following 15 days, which suggests that the plate boundary around the asperity is still locked.
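
    The cumulative-slip step can be sketched with an empirical moment-slip scaling; the Nadeau and Johnson (1998) coefficients below are one published choice, used here as an assumption rather than the calibration of Igarashi et al. (2003) or Uchida et al. (2003).

        import numpy as np

        def slip_from_m0(m0_dyne_cm):
            """Slip (cm) per repeating-earthquake rupture from seismic moment,
            using d = 10**(-2.36) * M0**0.17 in cgs units (Nadeau & Johnson,
            1998); treat the coefficients as an assumption here."""
            return 10.0 ** (-2.36) * m0_dyne_cm ** 0.17

        # Cumulative quasi-static slip tracked by one repeating sequence:
        moments = np.array([3e21, 2.5e21, 4e21])     # dyne-cm, illustrative
        print(np.cumsum(slip_from_m0(moments)))      # cm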

  7. A plate boundary earthquake record from a wetland adjacent to the Alpine fault in New Zealand refines hazard estimates

    NASA Astrophysics Data System (ADS)

    Cochran, U. A.; Clark, K. J.; Howarth, J. D.; Biasi, G. P.; Langridge, R. M.; Villamor, P.; Berryman, K. R.; Vandergoes, M. J.

    2017-04-01

    Discovery and investigation of millennial-scale geological records of past large earthquakes improve understanding of earthquake frequency, recurrence behaviour, and the likelihood of future rupture of major active faults. Here we present a ∼2000-year-long, seven-event earthquake record from John O'Groats wetland, adjacent to the Alpine fault in New Zealand, one of the most active strike-slip faults in the world. We linked this record with the 7000-year-long, 22-event earthquake record from Hokuri Creek (20 km along strike to the north) to refine estimates of earthquake frequency and recurrence behaviour for the South Westland section of the plate boundary fault. Eight cores from John O'Groats wetland revealed a sequence that alternated between organic-dominated and clastic-dominated sediment packages. Transitions from a thick organic unit to a thick clastic unit that were sharp, involved a significant change in depositional environment, and were basin-wide were interpreted as evidence of past surface-rupturing earthquakes. Radiocarbon dates of short-lived organic fractions on either side of these transitions were modelled to provide estimates of earthquake ages. Of the seven events recognised at the John O'Groats site, three post-date the most recent event at Hokuri Creek, two match events at Hokuri Creek, and two occurred in a long interval during which the Hokuri Creek site may not have been recording earthquakes clearly. The preferred John O'Groats-Hokuri Creek earthquake record consists of 27 events since ∼6000 BC, for which we calculate a mean recurrence interval of 291 ± 23 years, shorter than previously estimated for the South Westland section of the fault and shorter than the current interseismic period. The revised 50-year conditional probability of a surface-rupturing earthquake on this fault section is 29%. The coefficient of variation is estimated at 0.41. We suggest the low recurrence variability is likely to be a feature of
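
    A conditional probability of this kind follows from a renewal model as P(T ≤ t+h | T > t) = (F(t+h) − F(t)) / (1 − F(t)). The sketch below assumes a lognormal renewal distribution and an elapsed time of roughly 300 years since the last (1717) rupture; both are illustrative choices, not necessarily the study's preferred model.

        import numpy as np
        from scipy.stats import lognorm

        def conditional_probability(mean_ri, cov, elapsed, horizon=50.0):
            """50-year conditional rupture probability from a lognormal
            renewal model with the given mean recurrence interval and CoV."""
            sigma = np.sqrt(np.log(1.0 + cov**2))     # lognormal shape
            mu = np.log(mean_ri) - 0.5 * sigma**2     # so the mean is mean_ri
            F = lambda t: lognorm.cdf(t, s=sigma, scale=np.exp(mu))
            return (F(elapsed + horizon) - F(elapsed)) / (1.0 - F(elapsed))

        # Mean recurrence 291 yr, CoV 0.41, ~300 yr elapsed:
        print(conditional_probability(291.0, 0.41, elapsed=300.0))   # ~0.35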

  8. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window

    PubMed Central

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-01-01

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method previously applied to the MW9.0 Tohoku-Oki earthquake is explored using strong-motion records of the MW7.9 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slighter magnitude saturation compared with the τc magnitude. PMID:25346344
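
    A sketch of Pd-based magnitude estimation of the form log10(Pd) = A + B·M + C·log10(R), inverted for M and averaged over stations as the PTW expands; the coefficients and observations below are illustrative stand-ins of the same form as published relations, not the paper's values.

        import numpy as np

        # Placeholder regression coefficients (Pd in metres, R in km).
        A, B, C = -6.31, 0.70, -1.05

        def magnitude_from_pd(pd_m, dist_km):
            """Invert the Pd attenuation relation for magnitude."""
            return (np.log10(pd_m) - A - C * np.log10(dist_km)) / B

        # Average the station estimates as more stations become available:
        pd_obs = np.array([2.0e-3, 1.2e-3, 8.0e-4])   # m, illustrative
        r_obs = np.array([80.0, 110.0, 140.0])        # km
        print(magnitude_from_pd(pd_obs, r_obs).mean())   # ~7.9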

  9. Estimation of Pd y τc parameters for earthquakes of the SW Iberia (S. Vicente Cape)

    NASA Astrophysics Data System (ADS)

    Buforn, E.; Pro, C.; Carranza, M.; Zollo, A.; Pazos, A.; Lozano, L.; Carrilho, F.

    2012-04-01

    The S. Vicente Cape (SW Iberia) is a region where potentially large and damaging earthquakes may occur, such as the 1755 Lisbon (Imax=X) or 1969 S. Vicente Cape (Ms=8.1) events. In order to study the feasibility of an Earthquake Early Warning System (EEWS) for earthquakes in this region (ALERT-ES project), we have estimated the Pd and τc parameters for a rapid estimation of the magnitude from the first seconds of the P-wave. We have selected earthquakes that occurred in the period 2006-2011 with magnitudes larger than 3.8, recorded at regional distances (less than 500 km) by real-time broadband seismic stations of the Instituto Geográfico Nacional, Western Mediterranean and Portuguese national networks. We have studied time windows from 2 to 4 s and applied different filters. Because the foci occur offshore with very poor azimuthal coverage, we corrected the Pd parameter for the radiation pattern obtained from focal mechanisms of the largest earthquakes of this region. We normalized the Pd value to a reference distance (100 km) and then obtained empirical correlation laws relating Pd and τc to magnitude, in order to obtain a rapid magnitude estimate.

  10. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window.

    PubMed

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-10-27

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method previously applied to the MW9.0 Tohoku-Oki earthquake is explored using strong-motion records of the MW7.9 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the successive 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The reason for the magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slighter magnitude saturation compared with the τc magnitude.

  11. Some methods of assessing and estimating point processes models for earthquake occurrences

    NASA Astrophysics Data System (ADS)

    Veen, Alejandro

    This dissertation presents methods of assessing and estimating point process models and applies them to Southern California earthquake occurrence data. The first part provides an alternative derivation of the asymptotic distribution of Ripley's K-function for a homogeneous Poisson process and shows how it can be combined with point process residual analysis in order to test for different classes of point process models. This is done with the mean K-function of thinned residuals (KM) or a weighted analogue called the weighted or inhomogeneous K-function (KW). This work derives the asymptotic distributions of KM and KW for an inhomogeneous Poisson process. Both statistics can be used as measures of goodness-of-fit for a variety of classes of point process models. The second part deals with the estimation of branching process models. The traditional maximum likelihood approach can be very unstable and computationally difficult. Viewing branching processes as incomplete data problems suggests using the Expectation-Maximization (EM) algorithm as a practical alternative. A particularly efficient procedure based on maximizing the partial log-likelihood function is proposed for the Epidemic-Type Aftershock Sequence (ETAS) model, one of the most widely used seismological branching process models. The third part of this work applies the weighted K-function to assess the goodness-of-fit of a class of point process models for the spatial distribution of earthquakes in Southern California. Then, the proposed EM-type algorithm is used to estimate declustered background seismicity rates of geologically distinct regions in Southern California.
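
    The E-step of such an EM scheme has a compact form for a temporal ETAS model: each event receives probabilities of having been triggered by each earlier event or of being background. A minimal sketch, with illustrative parameter values:

        import numpy as np

        def triggering_probabilities(times, mags, mu, K, alpha, c, p, m0=2.0):
            """E-step for a temporal ETAS model: the probability that event j
            was triggered by earlier event i is g_ij / (mu + sum_k g_kj), with
            the Omori-Utsu/productivity kernel
            g_ij = K * exp(alpha*(m_i - m0)) * (t_j - t_i + c)**(-p)."""
            n = len(times)
            P = np.zeros((n + 1, n))      # row n holds the background probability
            for j in range(n):
                g = np.zeros(n)
                for i in range(j):
                    g[i] = (K * np.exp(alpha * (mags[i] - m0))
                            * (times[j] - times[i] + c) ** (-p))
                lam = mu + g.sum()
                P[:n, j] = g / lam
                P[n, j] = mu / lam
            return P

        times = np.array([0.0, 0.5, 0.8, 10.0])
        mags = np.array([4.5, 2.5, 2.7, 2.4])
        print(triggering_probabilities(times, mags, mu=0.1, K=0.05,
                                       alpha=1.0, c=0.01, p=1.1))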

  12. Uncertainty estimations for moment tensor inversions: the issue of the 2012 May 20 Emilia earthquake

    NASA Astrophysics Data System (ADS)

    Scognamiglio, Laura; Magnoni, Federica; Tinti, Elisa; Casarotti, Emanuele

    2016-08-01

    Seismic moment tensor is one of the most important source parameters defining the earthquake dimension and the style of the activated fault. Geoscientists ordinarily use moment tensor catalogues; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia main shock is a representative event since it is defined in the literature with a moment magnitude value (Mw) spanning between 5.63 and 6.12. A variability of ˜0.5 units in magnitude leads to a controversial knowledge of the real size of the event and reveals how poorly constrained the solutions can be. In this work, we investigate the stability of the moment tensor solution for this earthquake, studying the effects of five different 1-D velocity models and of the number and distribution of the stations used in the inversion procedure. We also introduce a 3-D velocity model to account for structural heterogeneity. We finally estimate the uncertainties associated with the computed focal planes and the obtained Mw. We conclude that our reliable source solutions provide a moment magnitude that ranges from 5.87 (1-D model) to 5.96 (3-D model), reducing the variability in the literature to ˜0.1. We stress that estimating seismic moment from moment tensor solutions, like estimating other kinematic source parameters, requires disclosed assumptions and explicit processing workflows. Finally, and probably more importantly, when a moment tensor solution is used for secondary analyses it has to be combined with the same main boundary conditions (e.g., wave-velocity propagation model) to avoid conflicting results.

  13. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
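
    A minimal sketch of fault-tree evaluation under the common assumption of independent basic events: an AND gate multiplies probabilities, and an OR gate combines them as 1 − Π(1 − p). The small tree below (a facility fails if grid power and the backup generator both fail, or if the server room is damaged) is hypothetical, not one of the paper's data-centre models.

        from math import prod

        def p_and(children):
            """Top event requires all children to occur."""
            return prod(children)

        def p_or(children):
            """Top event occurs if any child occurs."""
            return 1.0 - prod(1.0 - p for p in children)

        # Per-earthquake failure probabilities of basic events (illustrative):
        p_grid, p_generator, p_server_room = 0.20, 0.10, 0.05
        p_power = p_and([p_grid, p_generator])       # both power sources fail
        p_top = p_or([p_power, p_server_room])       # facility inoperative
        print(p_top)                                 # 0.069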

  14. Comparison between scaling law and nonparametric Bayesian estimate for the recurrence time of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2009-04-01

    According to the unified scaling theory, the probability distribution function of the recurrence time T is a scaled version of a base function, and the average value of T can be used as a scale parameter for the distribution. The base function must belong to the scale family of distributions: tested on different catalogues and at different scale levels, for Corral (2005) the (truncated) generalized gamma distribution is the best model, for German (2006) the Weibull distribution. The scaling approach should overcome the difficulty of estimating distribution functions over small areas, but theoretical limitations and partial instability of the estimated distributions have been pointed out in the literature. Our aim is to analyze the recurrence time of strong earthquakes that occurred in the Italian territory. To satisfy the hypotheses of independence and identical distribution, we evaluated the times between events that occurred in each area of the Database of Individual Seismogenic Sources and then grouped them into eight tectonically coherent regions, each dominated by a well-characterized geodynamic process. To address problems such as paucity of data, the presence of outliers, and uncertainty in the choice of the functional form for the distribution of T, we followed a nonparametric approach (Rotondi, 2009) in which: (a) maximum flexibility is obtained by assuming that the probability distribution is a random function belonging to a large function space, distributed as a stochastic process; (b) the nonparametric estimation method is robust when the data contain outliers; and (c) the Bayesian methodology allows different information sources to be exploited, so that the model may fit well even with scarce samples. We have compared the hazard rates evaluated through the parametric and nonparametric approaches. References: Corral, A. (2005). Mixing of rescaled data and Bayesian inference for earthquake recurrence times, Nonlin. Proces. Geophys., 12, 89

  15. The range split-spectrum method for ionosphere estimation applied to the 2008 Kyrgyzstan earthquake

    NASA Astrophysics Data System (ADS)

    Gomba, Giorgio; Eineder, Michael

    2015-04-01

    L-band remote sensing systems, like the future Tandem-L mission, are disrupted by the ionized upper part of the atmosphere, the ionosphere. The ionosphere is a region of the upper atmosphere composed of gases that are ionized by solar radiation. The extent of the effects induced on a SAR measurement depends on the electron density integrated along the radio-wave path and on its spatial variations. The main effect of the ionosphere on microwaves is to cause an additional delay, which introduces a phase difference between SAR measurements, modifying the interferometric phase. The objectives of the Tandem-L mission are the systematic monitoring of dynamic Earth processes like Earth surface deformation, vegetation structure, ice and glacier changes, and ocean surface currents. The scientific requirements regarding the mapping of surface deformation due to tectonic processes, earthquakes, volcanic cycles and anthropogenic factors demand deformation measurements, namely one-, two- or three-dimensional displacement maps with resolutions of a few hundred meters and accuracies at the centimeter to millimeter level. Ionospheric effects can make it impossible to produce deformation maps with such accuracy and must therefore be estimated and compensated. As an example of this process, the implementation of the range split-spectrum method proposed in [1,2] is presented and applied to an example dataset. The 2008 Kyrgyzstan earthquake of October 5 is imaged by an ALOS PALSAR interferogram; apart from the earthquake signal, many fringes due to strong ionospheric variations can also be seen. The compensated interferogram shows how the ionosphere-related fringes were successfully estimated and removed. [1] Rosen, P.A.; Hensley, S.; Chen, C., "Measurement and mitigation of the ionosphere in L-band interferometric SAR data," IEEE Radar Conference 2010, pp. 1459-1463, 10-14 May 2010. [2] Brcic, R.; Parizzi, A.; Eineder, M.; Bamler, R.; Meyer, F., "Estimation and
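
    The range split-spectrum method rests on the phase model φ(f) = a·f + b/f, where the first term is the non-dispersive (geometry and deformation) part and the second the dispersive ionospheric part; processing two range sub-bands gives φ_L and φ_H, and solving the two equations separates the contributions at the carrier f0. A minimal sketch, assuming unwrapped input phases and illustrative sub-band frequencies:

        import numpy as np

        def split_spectrum_iono(phi_l, phi_h, f0, f_l, f_h):
            """Separate the dispersive (ionospheric) and non-dispersive phase
            at carrier f0 from two sub-band phases, solving
            phi(f) = a*f + b/f for a and b (cf. Rosen et al., 2010)."""
            phi_iono = (f_l * f_h) / (f0 * (f_h**2 - f_l**2)) * (f_h * phi_l - f_l * phi_h)
            phi_nondisp = f0 * (f_h * phi_h - f_l * phi_l) / (f_h**2 - f_l**2)
            return phi_nondisp, phi_iono

        # L-band example with sub-band centres around a 1270 MHz carrier:
        f0, fl, fh = 1.270e9, 1.262e9, 1.278e9
        print(split_spectrum_iono(phi_l=2.10, phi_h=2.05, f0=f0, f_l=fl, f_h=fh))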

  16. Nowcasting Earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Donnellan, A.; Grant Ludwig, L.; Turcotte, D. L.; Luginbuhl, M.; Gail, G.

    2016-12-01

    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system, and its current level of progress through the earthquake cycle. In our implementation of this idea, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 or more large earthquake cycles in the region. We then compute the earthquake potential score (EPS), which is defined as the cumulative probability distribution P(n < n(t)) for the current count n(t) of small earthquakes in the region. From the count of small earthquakes since the last large earthquake, we determine the value of EPS = P(n < n(t)), an estimate of the level of progress through the earthquake cycle in the defined region at the current time.

  17. Nowcasting earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.; Donnellan, A.; Grant Ludwig, L.; Luginbuhl, M.; Gong, G.

    2016-11-01

    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system and its current level of progress through the earthquake cycle. In our implementation of this idea, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 or more large earthquake cycles in the region. We then compute the earthquake potential score (EPS) which is defined as the cumulative probability distribution P(n < n(t)) for the current count n(t) for the small earthquakes in the region. From the count of small earthquakes since the last large earthquake, we determine the value of EPS = P(n < n(t)). EPS is therefore the current level of hazard and assigns a number between 0% and 100% to every region so defined, thus providing a unique measure. Physically, the EPS corresponds to an estimate of the level of progress through the earthquake cycle in the defined region at the current time.
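
    A minimal sketch of the EPS computation described above: the empirical probability that past inter-event counts of small earthquakes fall below the current count. The counts below are synthetic, not drawn from a real catalog.

        import numpy as np

        def earthquake_potential_score(counts_past_cycles, current_count):
            """EPS = P(n < n(t)): fraction of past large-earthquake cycles whose
            small-earthquake counts were below the current count n(t)."""
            counts = np.asarray(counts_past_cycles)
            return np.mean(counts < current_count)

        # Counts of small quakes between successive large quakes in 25 past cycles:
        rng = np.random.default_rng(7)
        past = rng.poisson(400, size=25)
        print(earthquake_potential_score(past, current_count=430))   # value in [0, 1]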

  18. Estimating conditional quantiles with the help of the pinball loss

    SciTech Connect

    Steinwart, Ingo

    2008-01-01

    Using the so-called pinball loss for estimating conditional quantiles is a well-known tool in both statistics and machine learning. So far, however, little work has been done to quantify the efficiency of this tool for non-parametric (modified) empirical risk minimization approaches. The goal of this work is to fill this gap by establishing inequalities that describe how close approximate pinball risk minimizers are to the corresponding conditional quantile. These inequalities, which hold under mild assumptions on the data-generating distribution, are then used to establish so-called variance bounds, which recently turned out to play an important role in the statistical analysis of (modified) empirical risk minimization approaches. To illustrate the use of the established inequalities, we then use them to establish an oracle inequality for support vector machines that use the pinball loss. Here, it turns out that we obtain learning rates which are optimal in a minimax sense under some standard assumptions on the regularity of the conditional quantile function.
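
    For reference, the pinball loss itself is L_τ(y, q) = τ(y − q) for y ≥ q and (1 − τ)(q − y) otherwise; minimizing its expectation over q yields the τ-quantile of the conditional distribution. A minimal implementation:

        import numpy as np

        def pinball_loss(y, q, tau):
            """Mean pinball (quantile) loss of predictions q for targets y."""
            r = y - q
            return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

        # Quick check: for the median (tau = 0.5) the loss is half the MAE.
        y = np.array([1.0, 2.0, 4.0, 8.0])
        print(pinball_loss(y, q=3.0, tau=0.5), np.abs(y - 3.0).mean() / 2.0)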

  19. Estimation of earthquake source parameters by the inversion of waveform data: synthetic waveforms

    USGS Publications Warehouse

    Sipkin, S.A.

    1982-01-01

    Two methods are presented for the recovery of a time-dependent moment-tensor source from waveform data. One procedure utilizes multichannel signal-enhancement theory; in the other a multichannel vector-deconvolution approach, developed by Oldenburg (1982) and based on Backus-Gilbert inverse theory, is used. These methods have the advantage of being extremely flexible; both may be used either routinely or as research tools for studying particular earthquakes in detail. Both methods are also robust with respect to small errors in the Green's functions and may be used to refine estimates of source depth by minimizing the misfits to the data. The multichannel vector-deconvolution approach, although it requires more interaction, also allows a trade-off between resolution and accuracy, and complete statistics for the solution are obtained. The procedures have been tested using a number of synthetic body-wave data sets, including point and complex sources, with satisfactory results.

  20. Simultaneous estimation of earthquake source parameters and crustal Q value from broadband data of selected aftershocks of the 2001 M w 7.7 Bhuj earthquake

    NASA Astrophysics Data System (ADS)

    Saha, A.; Lijesh, S.; Mandal, P.

    2012-12-01

    This paper presents the simultaneous estimation of source parameters and crustal Q values for small to moderate-size aftershocks (Mw 2.1-5.1) of the Mw 7.7 2001 Bhuj earthquake. The horizontal-component S-waves of 144 well-located earthquakes (2001-2010) recorded at 3-10 broadband seismograph sites in the Kachchh Seismic Zone, Gujarat, India, are analyzed, and their seismic corner frequencies, long-period spectral levels and crustal Q values are simultaneously estimated by inverting the horizontal-component S-wave displacement spectra using the Levenberg-Marquardt nonlinear inversion technique, wherein the inversion scheme is formulated based on the ω-square source spectral model. The static stress drops (Δσ) are then calculated from the corner frequency and seismic moment. The estimated source parameters suggest that the seismic moment (M0) and source radius (r) of the aftershocks vary from 1.12 × 10^12 to 4.00 × 10^16 N-m and from 132.57 to 513.20 m, respectively, while the estimated stress drops (Δσ) and multiplicative factor (Emo) values range from 0.01 to 20.0 MPa and from 1.05 to 3.39, respectively. The corner frequencies range from 2.36 to 8.76 Hz. The crustal S-wave quality factor varies from 256 to 1882, with an average of 840 for the Kachchh region, which agrees well with the crustal Q value of the seismically active New Madrid region, USA. Our estimated stress drop values are quite large compared with other similar-size Indian intraplate earthquakes, which can be attributed to the presence of crustal mafic intrusives and aqueous fluids in the lower crust, as revealed by an earlier tomographic study of the region.

  1. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  2. Unreliability of intraoperative estimated blood loss in extended sagittal synostectomies.

    PubMed

    Seruya, Mitchel; Oh, Albert K; Boyajian, Michael J; Myseros, John S; Yaun, Amanda L; Keating, Robert F

    2011-11-01

    Intraoperative blood loss represents a significant concern during open repair of craniosynostosis, and its reliable measurement remains a serious challenge. In this study of extended sagittal synostectomies, the authors analyzed the relationship between estimated blood loss (EBL) and calculated blood loss (CBL), and investigated predictors of hemodynamic outcomes. The authors reviewed outcomes in infants with sagittal synostosis who underwent primary extended synostectomies (the so-called Pi procedure) between 1997 and 2009. Patient demographic data, operating time, and mean arterial pressures (MAPs) were recorded. Serial MAPs were averaged for a MAP(mean). The EBL was based on anesthesia records, and the CBL on pre- and postoperative hemoglobin values in concert with transfusion volumes. Factors associated with EBL, CBL, red blood cell transfusion (RBCT), and hospital length of stay (LOS) were investigated. Hemodynamic outcomes were reported as percent estimated blood volume (% EBV), and relationships were analyzed using simple and multiple linear and logistic regression models. A p value < 0.05 was considered significant. Seventy-one infants with sagittal synostosis underwent primary extended synostectomies at a mean age and weight of 4.9 months and 7.3 kg, respectively. The average operating time was 1.4 hours, and intraoperative MAP was 54.6 mm Hg (21.3% lower than preoperative baseline). There was no association between mean EBL (12.7% EBV) and mean CBL (23.6% EBV) (r = 0.059, p = 0.63). The EBL inversely correlated with the patient's age (r = -0.07) and weight (r = -0.11) at surgery (p < 0.05 in both instances). With regard to intraoperative factors, EBL positively trended with operating time (r = 0.26, p = 0.09) and CBL inversely trended with MAP(mean) (r = -0.04, p = 0.10), although these relationships were only borderline significant. Intraoperative RBCT, which was required in 59.1% of patients, positively correlated with EBL (r = 1.55, p < 0.001), yet
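
    How a calculated blood loss (CBL) figure can be obtained from pre- and postoperative hemoglobin values together with transfusion volumes can be sketched with a simple mass-balance formula; this particular formula and the infant blood-volume constant are common textbook assumptions stated here for illustration, not necessarily the authors' exact method.

        def calculated_blood_loss(weight_kg, hb_pre, hb_post, transfused_ml,
                                  ebv_ml_per_kg=80.0):
            """Returns CBL in ml. hb_pre/hb_post in g/dl; ~80 ml/kg estimated
            blood volume is a typical assumption for infants."""
            ebv = weight_kg * ebv_ml_per_kg               # estimated blood volume
            lost = ebv * (hb_pre - hb_post) / hb_pre      # volume equivalent of Hb drop
            return lost + transfused_ml                   # add back transfused volume

        # Example: 7.3 kg infant, Hb 12 -> 9 g/dl, 60 ml transfused:
        print(calculated_blood_loss(7.3, 12.0, 9.0, 60.0))   # ~206 ml (~35% EBV)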

  3. Estimation of co-seismic stress change of the 2008 Wenchuan Ms8.0 earthquake

    SciTech Connect

    Sun Dongsheng; Wang Hongcai; Ma Yinsheng; Zhou Chunjing

    2012-09-26

    The in-situ stress change near a fault before and after a great earthquake is a key issue in the geosciences. In this work, based on a fault-slip dislocation model of the great 2008 Wenchuan earthquake, the co-seismic stress tensor change due to the earthquake and its distribution around the Longmen Shan fault are given. Our calculated results are broadly consistent with in-situ stress measurements made before and after the great Wenchuan earthquake. The quantitative assessment results provide a reference for the study of the mechanism of earthquakes.
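
    Resolving a computed stress tensor change onto a given fault plane is a generic continuum-mechanics step: the traction change is t = Δσ·n, with normal component n·t and shear component |t − (n·t)n|. A sketch with an illustrative tensor and plane normal, not the paper's specific workflow:

        import numpy as np

        def resolve_on_plane(dsigma, normal):
            """Normal and shear stress change (Pa) on a plane with unit
            normal `normal`, from a 3x3 co-seismic stress tensor change."""
            n = np.asarray(normal, dtype=float)
            n = n / np.linalg.norm(n)
            t = dsigma @ n                            # traction change
            sn = float(n @ t)                         # normal component
            tau = float(np.linalg.norm(t - sn * n))   # shear component
            return sn, tau

        dsig = np.array([[ 1.0e5,  2.0e4, 0.0],
                         [ 2.0e4, -5.0e4, 1.0e4],
                         [ 0.0,    1.0e4, -2.0e4]])   # Pa, illustrative
        print(resolve_on_plane(dsig, normal=[0.5, 0.0, 0.866]))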

  4. Flood Damage and Loss Estimation for Iowa on Web-based Systems using HAZUS

    NASA Astrophysics Data System (ADS)

    Yildirim, E.; Sermet, M. Y.; Demir, I.

    2016-12-01

    The importance of decision support systems for flood emergency response and loss estimation grows with the social and economic impacts of flooding. Several software systems are available to researchers and decision makers for estimating flood damage. HAZUS-MH, developed by FEMA (the Federal Emergency Management Agency), is one of the most widely used desktop programs for estimating the economic losses and social impacts of disasters such as earthquakes, hurricanes and flooding (riverine and coastal). HAZUS applies a loss estimation methodology implemented through a geographic information system (GIS). HAZUS contains structural, demographic, and vehicle information across the United States. Thus, it allows decision makers to understand and predict possible casualties and flood damage by running flood simulations through the GIS application. However, it does not represent real-time conditions because it uses static data. To close this gap, this research presents an overview of a web-based infrastructure coupling HAZUS with real-time data provided by IFIS (the Iowa Flood Information System). IFIS, developed by the Iowa Flood Center, is a one-stop web platform for accessing community-based flood conditions, forecasts, visualizations, inundation maps and flood-related data, information, and applications. Large volumes of real-time observational data from a variety of sensors and remote sensing resources (radars, rain gauges, stream sensors, etc.) and flood inundation models are staged in a user-friendly map environment accessible to the general public. By providing cross-sectional analyses between HAZUS-MH and IFIS datasets, emergency managers can evaluate flood damage during flood events more easily under real-time conditions. By matching data from the HAZUS-MH census tract layer with IFC gauges, decision makers can observe and evaluate the economic effects of flooding. The system will also provide visualization of the data by using augmented reality for

  5. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    USGS Publications Warehouse

    Aagaard, B.T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake, making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  6. Estimation of earthquake source parameters in the Kachchh seismic zone, Gujarat, India, using three component S-wave spectra

    NASA Astrophysics Data System (ADS)

    Nagamani, Durgada; Mandal, Prantik

    2017-07-01

    Earthquake source parameters and crustal Q0 values for 138 selected local events (Mw 2.5-4.4) of the 2001 Bhuj earthquake sequence have been computed through inversion modelling of S-waves from three-component broadband seismometer data. SEISAN software has been used to locate the identified local earthquakes, which were recorded at three or more stations of the Kachchh seismological network. Three-component S-wave spectra are inverted using the Levenberg-Marquardt nonlinear inversion technique, wherein the inversion scheme is formulated based on the ω² source model. SAC software (Seismic Analysis Code) is used to calculate three-component displacement and velocity spectra of the S-wave. The displacement spectra are used for estimating the corner frequency (in Hz) and the long-period spectral level (in nm-s); these two parameters play a key role in estimating earthquake source parameters. The crustal Q0 values have been computed simultaneously for each component of the three-component broadband seismograms. The estimated seismic moment (M0) and source radius (r) from the S-wave spectra range from 7.03E+12 to 5.36E+15 N-m and from 178.56 to 565.21 m, respectively. The corner frequencies for the S-wave vary from 3.025 to 7.425 Hz. We also estimated the radiated energy (ES) using the velocity spectra, which varies from 2.76E+06 to 4.07E+11 joules. The estimated apparent stress drop and static stress drop values range from 0.01 to 2.56 MPa and from 0.53 to 36.79 MPa, respectively. Our study also reveals that the estimated Q0 values vary from 119.0 to 7229.5, with an average Q0 value of 701. Another important parameter, by which the earthquake rupture process can be characterized, is the Zuniga parameter; it suggests that most of the Kachchh events follow the frictional overshoot model. Our estimated static stress drop values are higher than the apparent stress drop values, and the stress drop values are considerably larger for intraplate earthquakes than for interplate earthquakes.

  7. The source model and recurrence interval of Genroku-type Kanto earthquakes estimated from paleo-shoreline data

    NASA Astrophysics Data System (ADS)

    Sato, Toshinori; Higuchi, Harutaka; Miyauchi, Takahiro; Endo, Kaori; Tsumura, Noriko; Ito, Tanio; Noda, Akemi; Matsu'ura, Mitsuhiro

    2016-02-01

    In the southern Kanto region of Japan, where the Philippine Sea plate is descending at the Sagami trough, two different types of large interplate earthquakes have occurred repeatedly. The 1923 (Taisho) and 1703 (Genroku) Kanto earthquakes characterize the first and second types, respectively. A reliable source model has been obtained for the 1923 event from seismological and geodetic data, but not for the 1703 event, for which only historical records and paleo-shoreline data are available. We developed an inversion method to estimate the fault slip distribution of interplate repeating earthquakes from paleo-shoreline data, based on the idea of crustal deformation cycles associated with subduction-zone earthquakes. By applying the inversion method to the present heights of the Genroku and Holocene marine terraces developed along the coasts of the southern Boso and Miura peninsulas, we estimated the fault slip distribution of the 1703 Genroku earthquake as follows. The source region extends along the Sagami trough from the Miura peninsula to the offing of the southern Boso peninsula, covering the southern two thirds of the source region of the 1923 Kanto earthquake. The coseismic slip reaches a maximum of 20 m at the southern tip of the Boso peninsula, and the moment magnitude (Mw) is calculated as 8.2. From the interseismic slip-deficit rates at the plate interface obtained by GPS data inversion, and assuming that the total slip deficit is compensated by coseismic slip, we can roughly estimate the average recurrence interval as 350 years for large interplate events of any type and 1400 years for Genroku-type events.
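
    The recurrence estimate follows from the stated assumption that coseismic slip balances the interseismic slip deficit, T = D / v, with D the average coseismic slip and v the slip-deficit rate. The values below are illustrative, chosen only to reproduce the quoted orders of magnitude:

        def recurrence_years(coseismic_slip_m, deficit_rate_mm_per_yr):
            """Average recurrence interval from slip balance."""
            return coseismic_slip_m * 1000.0 / deficit_rate_mm_per_yr

        print(recurrence_years(7.0, 20.0))    # ~350 yr: any-type event, modest slip
        print(recurrence_years(28.0, 20.0))   # ~1400 yr: larger Genroku-type slip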

  8. The energy radiated by the 26 December 2004 Sumatra-Andaman earthquake estimated from 10-minute P-wave windows

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2007-01-01

    The rupture process of the Mw 9.1 Sumatra-Andaman earthquake lasted for approximately 500 sec, nearly twice as long as the teleseismic time windows between the P and PP arrival times generally used to compute radiated energy. In order to measure the P waves radiated by the entire earthquake, we analyze records that extend from the P-wave to the S-wave arrival times from stations at distances Δ > 60°. These 8- to 10-min windows contain the PP, PPP, and ScP arrivals, along with other multiply reflected phases. To gauge the effect of including these additional phases, we form the spectral ratio of the source spectrum estimated from extended windows (between TP and TS) to the source spectrum estimated from normal windows (between TP and TPP). The extended windows are analyzed as though they contained only the P-pP-sP wave group. We analyze four smaller earthquakes that occurred in the vicinity of the Mw 9.1 mainshock, with similar depths and focal mechanisms. These smaller events range in magnitude from an Mw 6.0 aftershock of 9 January 2005 to the Mw 8.6 Nias earthquake that occurred to the south of the Sumatra-Andaman earthquake on 28 March 2005. We average the spectral ratios for these four events to obtain a frequency-dependent operator for the extended windows. We then correct the source spectrum estimated from the extended records of the 26 December 2004 mainshock to obtain a complete or corrected source spectrum for the entire rupture process (~600 sec) of the great Sumatra-Andaman earthquake. Our estimate of the total seismic energy radiated by this earthquake is 1.4 × 10^17 J. When we compare the corrected source spectrum for the entire earthquake to the source spectrum from the first ~250 sec of the rupture process (obtained from normal teleseismic windows), we find that the mainshock radiated much more seismic energy in the first half of the rupture process than in the second half, especially over the period range from 3 sec to 40 sec.

  9. Estimating coseismic coastal uplift with an intertidal mussel: calibration for the 2010 Maule Chile earthquake (Mw = 8.8)

    NASA Astrophysics Data System (ADS)

    Melnick, Daniel; Cisternas, Marco; Moreno, Marcos; Norambuena, Ricardo

    2012-05-01

    Coseismic coastal uplift has been quantified using sessile intertidal organisms after several great earthquakes, following FitzRoy's pioneering measurements in 1835. A dense survey of such markers may complement space geodetic data to obtain an accurate distribution of fault slip and earthquake segmentation. However, uplift estimates based on diverse intertidal organisms tend to differ, partly because few methodological and comparative studies exist. Here, we calibrate and estimate coastal uplift in the southern segment of the 2010 Maule, Chile earthquake (Mw = 8.8) using >1100 post-earthquake elevation measurements of the sessile mussel Perumytilus purpuratus. This mussel is the predominant competitor for space on rocky shores all along the Pacific coast of South America, where it forms distinctive fringes or belts in the middle intertidal zone. These belts are centered at mean sea level, and their width should equal one third of the tidal range. We measured belt widths close to this value at 40% of the sites, but overall widths are highly variable due to unevenness in belt tops; belt bases, in turn, are rather regular. Belt top unevenness apparently results from locally enhanced wave splash, whereas belt base evenness is controlled by predation. According to our measurements made outside the earthquake rupture, the belt base lies at the bottom of the middle intertidal zone, and thus we propose to estimate coastal uplift using the belt base mean elevation plus one sixth of the tidal range to reach mean sea level. Within errors, our estimates agree with GPS displacements but differ from those of other methods. Comparisons of joint inversions for megathrust slip suggest that combining space geodetic data with estimates from intertidal organisms may locally increase the detail of slip distributions.
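    The proposed rule converts a surveyed belt-base elevation into uplift with a fixed tidal offset; a small sketch with illustrative numbers (the inputs below are invented, not survey values):

```python
def coastal_uplift(stranded_base_elev_m, tidal_range_m):
    """Coseismic uplift from the elevation of the now-emerged mussel-belt
    base, measured relative to present mean sea level. The living belt base
    sits about tidal_range/6 below mean sea level (bottom of the middle
    intertidal zone), so that offset is added back."""
    return stranded_base_elev_m + tidal_range_m / 6.0

# Illustrative numbers only:
print(coastal_uplift(stranded_base_elev_m=1.2, tidal_range_m=1.8))  # 1.5 m
```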

  10. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.

  11. Fault Slip Distribution of the 2016 Fukushima Earthquake Estimated from Tsunami Waveforms

    NASA Astrophysics Data System (ADS)

    Gusman, Aditya Riadi; Satake, Kenji; Shinohara, Masanao; Sakai, Shin'ichi; Tanioka, Yuichiro

    2017-08-01

    The 2016 Fukushima normal-faulting earthquake (Mjma 7.4) occurred 40 km off the coast of Fukushima within the upper crust. The earthquake generated a moderate tsunami which was recorded by coastal tide gauges and offshore pressure gauges. First, the sensitivity of tsunami waveforms to fault dimensions and depths was examined and the best size and depth were determined. Tsunami waveforms computed based on four available focal mechanisms showed that a simple fault striking northeast-southwest and dipping southeast (strike = 45°, dip = 41°, rake = -95°) yielded the best fit to the observed waveforms. This fault geometry was then used in a tsunami waveform inversion to estimate the fault slip distribution. A large slip of 3.5 m was located near the surface and the major slip region covered an area of 20 km × 20 km. The seismic moment, calculated assuming a rigidity of 2.7 × 10^10 N/m², was 3.70 × 10^19 Nm, equivalent to Mw = 7.0. This is slightly larger than the moments from the moment tensor solutions (Mw 6.9). Large secondary tsunami peaks arrived approximately an hour after clear initial peaks were recorded by the offshore pressure gauges and the Sendai and Ofunato tide gauges. Our tsunami propagation model suggests that the large secondary tsunami signals were from tsunami waves reflected off the Fukushima coast. A rather large tsunami amplitude of 75 cm at Kuji, about 300 km north of the source, was comparable to those recorded at stations located much closer to the epicenter, such as Soma and Onahama. Tsunami simulations and ray tracing for both real and artificial bathymetry indicate that a significant portion of the tsunami wave was refracted to the coast located around Kuji and Miyako due to bathymetry effects.
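    The inversion step amounts to linear least squares over subfault slips with a non-negativity constraint, followed by moment bookkeeping; a toy sketch with synthetic Green's functions (shapes, the synthetic data, and the per-subfault area are assumptions; the rigidity and the Mw conversion follow standard practice):

```python
import numpy as np
from scipy.optimize import nnls

# Toy tsunami waveform inversion: column j of G stacks the waveforms computed
# at all gauges for unit slip on subfault j; d stacks the observations.
rng = np.random.default_rng(0)
n_samples, n_subfaults = 2000, 16
G = rng.random((n_samples, n_subfaults))
d = G @ np.full(n_subfaults, 2.0) + 0.01 * rng.standard_normal(n_samples)

slip, _ = nnls(G, d)                        # non-negative least squares

rigidity = 2.7e10                           # N/m^2, as assumed in the study
subfault_area = 5e3 * 5e3                   # m^2, hypothetical discretization
M0 = rigidity * subfault_area * slip.sum()  # seismic moment, N m
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)     # standard moment-magnitude form
print(f"Mw ~ {Mw:.1f}")
```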

  12. Exploration of deep sedimentary layers in Tacna city, southern Peru, using microtremors and earthquake data for estimation of local amplification

    NASA Astrophysics Data System (ADS)

    Yamanaka, Hiroaki; Gamero, Mileyvi Selene Quispe; Chimoto, Kosuke; Saguchi, Kouichiro; Calderon, Diana; La Rosa, Fernándo Lázares; Bardales, Zenón Aguilar

    2016-01-01

    S-wave velocity profiles of sedimentary layers in Tacna, southern Peru, have been determined from microtremor array data and earthquake records for the estimation of site amplification. We analyzed the vertical component of microtremors recorded by temporary arrays at two sites in the city to obtain Rayleigh-wave phase velocities. A receiver function was also estimated from existing earthquake data at a strong-motion station near one of the microtremor exploration sites. The phase velocity and the receiver function were jointly inverted for S-wave velocity profiles. The depths to the basement, which has an S-wave velocity of 2.8 km/s, are similar at the two sites, about 1 km. The top soil at the site in the area severely damaged during the 2001 southern Peru earthquake has a lower S-wave velocity than that in a slightly damaged area. We subsequently estimated site amplifications from the velocity profiles and found large amplification at periods from 0.2 to 0.8 s in the damaged area, indicating a possible reason for the differences in damage observed during the 2001 southern Peru earthquake.
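    One generic way to turn a layered S-wave profile into a site amplification estimate is the quarter-wavelength impedance ratio; this sketch is a simplified stand-in for the study's method, and all layer values below are invented:

```python
import numpy as np

# Quarter-wavelength amplification for a layered S-wave profile (sketch).
thick = np.array([50.0, 200.0, 750.0])      # m, layer thicknesses
vs    = np.array([300.0, 800.0, 1500.0])    # m/s
rho   = np.array([1800.0, 2000.0, 2300.0])  # kg/m^3
vs_b, rho_b = 2800.0, 2600.0                # basement (2.8 km/s, as in Tacna)

def qwl_amplification(f_hz):
    """Impedance-ratio amplification down to the depth reached by a
    quarter-period of vertical S-wave travel time."""
    target_t = 1.0 / (4.0 * f_hz)
    t_layers = thick / vs
    used, acc = [], 0.0
    for tl in t_layers:
        used.append(min(tl, target_t - acc))
        acc += used[-1]
        if acc >= target_t:
            break
    w = np.array(used)
    n = w.size
    v_avg = np.average(vs[:n], weights=w)    # travel-time-weighted averages
    r_avg = np.average(rho[:n], weights=w)
    return np.sqrt((rho_b * vs_b) / (r_avg * v_avg))

for f in (1.25, 2.5, 5.0):                  # periods 0.8, 0.4, 0.2 s
    print(f"{f:4.2f} Hz: amplification ~ {qwl_amplification(f):.1f}")
```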

  13. Source study of two small earthquakes of Delhi, India, and estimation of ground motion from future moderate, local events

    NASA Astrophysics Data System (ADS)

    Bansal, B. K.; Singh, S. K.; Dharmaraju, R.; Pacheco, J. F.; Ordaz, M.; Dattatrayam, R. S.; Suresh, G.

    2009-01-01

    We study the source characteristics of two small, local earthquakes that occurred in Delhi on 28 April 2001 (Mw 3.4) and 18 March 2004 (Mw 2.6). Both earthquakes were located in the heart of New Delhi and were recorded in the epicentral region by digital accelerographs. The depths of the events are 15 km and 8 km, respectively. First motions and waveform modeling yield a normal-faulting mechanism with a large strike-slip component. The strike of one of the nodal planes roughly agrees with the NE-SW orientation of faults and lineaments mapped in the region. We use the recordings of the 2004 event as empirical Green's functions to synthesize expected ground motions in the epicentral region of a Mw 5.0 earthquake in Delhi. It is possible that such a local event may control the hazard in Delhi. Our computations show that a Mw 5.0 earthquake would give rise to PGA of ~200 to 450 gal, with the smaller values occurring at hard sites. The corresponding PGV estimate is ~6 to 15 cm/s. The recommended response spectra (Sa, 5% damping) for Delhi, which falls in zone IV of the Indian seismic zoning map, may not be conservative enough at soft sites for a postulated Mw 5.0 local earthquake.
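    Empirical-Green's-function synthesis rests on scaling the small event's spectrum up to the target moment. A sketch of the omega-square source spectral ratio, a simplified stand-in for full EGF summation (moment and corner-frequency values are assumptions):

```python
import numpy as np

# Omega-square spectral ratio between a target event and a small EGF event.
M0_small, M0_target = 1.0e13, 4.0e16   # N m (roughly Mw 2.6 and Mw 5.0)
fc_small, fc_target = 8.0, 1.0         # Hz, assumed corner frequencies

def source_ratio(f):
    """Target-to-EGF spectral ratio for omega-square source models."""
    num = M0_target / (1.0 + (f / fc_target) ** 2)
    den = M0_small / (1.0 + (f / fc_small) ** 2)
    return num / den

f = np.logspace(-1, 1, 5)
print(source_ratio(f))   # multiply the EGF spectrum by this ratio
```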

  14. Spatial and temporal variations of radiated seismic energy estimated for repeating earthquakes in northeastern Japan; implication for healing process

    NASA Astrophysics Data System (ADS)

    Ara, M.; Ide, S.; Uchida, N.

    2015-12-01

    Repeating earthquakes are repeated shear slip on the same patch of the plate interface and are helpful for monitoring long-term deformation in subduction zones. Previous studies have measured the size of repeating earthquakes mainly using seismic moment, to calculate the slip in each event. As another measure of event size, radiated seismic energy may provide information related to the frictional properties of the plate interface. We estimated radiated seismic energy for 620 repeating earthquakes with MJMA from 2.5 to 5.9, detected by the method of Uchida and Matsuzawa [2013], in the Tohoku-Oki region. The study period is from 2001 to 2013, extending before and after the 2011 Mw 9 Tohoku-Oki earthquake, which was also accompanied by large afterslip [e.g., Ozawa et al., 2012]. The seismograms recorded by NIED Hi-net were used. We measured coda-wave amplitudes by the method of Mayeda et al. [2003] and estimated source spectra and radiated seismic energy by the method of Baltay et al. [2010] after slight modifications. The estimated scaled energy, the ratio of radiated seismic energy to seismic moment, shows a slight increase with seismic moment. The scaled energy increases with depth, while its temporal change before and after the Tohoku-Oki earthquake is not systematic. The scaled energy also increases with the inter-event time of repeating earthquakes. This might be explained by differences in fault strength, which grows in proportion to the logarithm of time. In addition to this healing relation, the scaling relationship between seismic moment and the inter-event time of repeating earthquakes is well known [Nadeau and Johnson, 1998]. From these healing and scaling relationships, scaled energy is expected to be proportional to the logarithm of seismic moment. This prediction is generally consistent with our observation, though the moment dependency is too small to be recognized as power or log. This healing-related scaling may be applicable to general earthquakes, and might be associated with the
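    Scaled energy is simply the ratio of radiated energy to seismic moment, and the healing hypothesis predicts logarithmic growth with inter-event time; a minimal illustration with made-up values (the coefficients a, b are hypothetical fit parameters):

```python
import numpy as np

# Scaled energy for a few events (all values illustrative).
E_R = np.array([1.5e6, 8.0e7, 5.0e9])      # radiated energy, J
M0  = np.array([1.0e12, 3.2e13, 1.0e15])   # seismic moment, N m

scaled_energy = E_R / M0                   # dimensionless, here ~1.5e-6..5e-6
print(scaled_energy)

# Healing hypothesis: scaled energy grows with log inter-event time.
t_years = np.array([0.5, 2.0, 8.0])
a, b = 2.0e-5, 5.0e-6                      # hypothetical coefficients
predicted = a + b * np.log10(t_years)
```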

  15. Using safety inspection data to estimate shaking intensity for the 1994 Northridge earthquake

    USGS Publications Warehouse

    Thywissen, K.; Boatwright, J.

    1998-01-01

    We map the shaking intensity suffered in Los Angeles County during the 17 January 1994, Northridge earthquake using municipal safety inspection data. The intensity is estimated from the number of buildings given red, yellow, or green tags, aggregated by census tract. Census tracts contain from 200 to 4000 residential buildings and have an average area of 6 km², but are as small as 2 and 1 km² in the most densely populated areas of the San Fernando Valley and downtown Los Angeles, respectively. In comparison, the zip code areas on which standard MMI intensity estimates are based are six times larger, on average, than the census tracts. We group the buildings by age (before and after 1940 and 1976), by number of housing units (one, two to four, and five or more), and by construction type, and we normalize the tags by the total number of similar buildings in each census tract. We analyze the seven most abundant building categories. The fragilities (the fraction of buildings in each category tagged within each intensity level) for these seven building categories are adjusted so that the intensity estimates agree. We calibrate the shaking intensity to correspond with the modified Mercalli intensities (MMI) estimated and compiled by Dewey et al. (1995); the shapes of the resulting isoseismals are similar, although we underestimate the extent of the MMI = 6 and 7 areas. The fragility varies significantly between different building categories (by factors of 10 to 20) and building ages (by factors of 2 to 6). The post-1940 wood-frame multi-family (≥5 units) dwellings make up the most fragile building category, and the post-1940 wood-frame single-family dwellings make up the most resistant building category.
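    Assigning a tract intensity from tag counts amounts to matching observed tagged fractions against per-category fragilities; a toy sketch with invented fragility values (the study adjusts these so categories agree):

```python
import numpy as np

# Pick the intensity whose predicted tag fractions best match observations.
intensities = np.array([6, 7, 8, 9])
# Fraction of buildings tagged at each intensity, per category (invented):
fragility = {
    "wood_multi_post1940":  np.array([0.02, 0.08, 0.30, 0.60]),
    "wood_single_post1940": np.array([0.001, 0.005, 0.03, 0.10]),
}

def tract_intensity(observed_fraction):
    """Least-squares match over categories of observed tagged fractions."""
    misfit = np.zeros(intensities.size)
    for cat, frac in observed_fraction.items():
        misfit += (fragility[cat] - frac) ** 2
    return intensities[np.argmin(misfit)]

print(tract_intensity({"wood_multi_post1940": 0.25,
                       "wood_single_post1940": 0.02}))  # -> 8
```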

  16. Mass Loss and Surface Displacement Estimates in Greenland from GRACE

    NASA Astrophysics Data System (ADS)

    Jensen, Tim; Forsberg, Rene

    2015-04-01

    The estimation of ice sheet mass changes from GRACE is basically an inverse problem: the solution is non-unique, and several procedures for determining the mass distribution exist. We present Greenland mass loss results from two such procedures, namely a direct spherical harmonic inversion procedure made possible through a thin-layer assumption, and a generalized inverse mascon procedure. These results are updated to the end of 2014, include the unusual 2013 mass gain anomaly, and show good agreement when leakage from the Canadian ice caps is taken into account. The GRACE mass changes are further compared to GPS uplift data on the bedrock along the edge of the ice sheet. The solid Earth deformation is assumed to consist of an elastic deformation of the crust and an anelastic deformation of the underlying mantle (GIA). The crustal deformation is due to current surface loading effects and therefore contains a strong seasonal component of variation superimposed on a secular trend. The majority of the anelastic GIA deformation of the mantle is believed to be approximately constant in time. An accelerating secular trend and seasonal changes, as seen in Greenland, are therefore assumed to be due to elastic deformation from changes in surface mass loading from the ice sheet. The GRACE and GPS comparison is only valid if the signal content of the two observables is consistent. The GPS receivers measure movement at a single point on the bedrock surface and are therefore sensitive to a limited loading footprint, while the GRACE satellites measure a filtered, attenuated gravitational field at an altitude of approximately 500 km, making them sensitive to a much larger area. Despite this, the seasonal loading signals in the two observables show reasonably good agreement.

  17. Risk and the neoliberal state: why post-Mitch lessons didn't reduce El Salvador's earthquake losses.

    PubMed

    Wisner, B

    2001-09-01

    Although El Salvador suffered light losses from Hurricane Mitch in 1998, it benefited from the increased international aid and encouragement for advance planning, especially mitigation and prevention interventions. Thus, one would have supposed, El Salvador would have been in a very advantageous position, able more easily than its economically crippled neighbours, Honduras and Nicaragua, to implement the 'lessons of Mitch'. A review of the recovery plan tabled by the El Salvador government following the earthquakes of early 2001 shows that, despite the rhetoric in favour of 'learning the lessons of Mitch', very little mitigation and prevention had actually been put in place between the hurricane (1998) and the earthquakes (2001). The recovery plan is analysed in terms of the degree to which it deals with the root causes of disaster vulnerability, namely the economic and political marginality of much of the population and environmental degradation. An explanation for the failure to implement mitigation and preventive actions is traced to the adherence of the government of El Salvador to an extreme form of neoliberal, free-market ideology, and to the deep fissures and mistrust that persist in a country following a long and bloody civil war.

  18. Estimates of aseismic slip associated with small earthquakes near San Juan Bautista, CA

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Simons, M.; Ampuero, J.-P.

    2016-11-01

    Postseismic slip observed after large (M > 6) earthquakes typically has an equivalent moment of a few tens of percent of the coseismic moment. Some observations of the recurrence intervals of repeating earthquakes suggest that postseismic slip following small (M ≲ 4) earthquakes could be much larger, up to 10 or 100 times the coseismic moment. We use borehole strain data from U.S. Geological Survey strainmeter SJT to analyze deformation in the days before and after 1000 earthquakes (1.9 < M < 5) near San Juan Bautista, CA. We find that, on average, postseismic strain is roughly equal in magnitude to coseismic strain over the magnitude range considered, suggesting that the postseismic moment following these small earthquakes is roughly equal to the coseismic moment. This postseismic-to-coseismic moment ratio is larger than typically observed for earthquakes that rupture through the seismogenic zone but much smaller than hypothesized from modeling of repeating earthquakes. Our results are consistent with a simple, self-similar model of earthquakes.
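    The core measurement is the averaged ratio of postseismic to coseismic strain over many small events; a synthetic stacking sketch (decay constants, noise levels, and window choices are arbitrary assumptions):

```python
import numpy as np

# Compare average postseismic strain to the coseismic step across events.
rng = np.random.default_rng(1)
n_events, n_samples = 1000, 480            # e.g. ~2 days of 6-min samples
coseismic_step = rng.lognormal(0.0, 1.0, n_events)        # arbitrary units
post = coseismic_step[:, None] * (1 - np.exp(-np.arange(n_samples) / 80.0))
post += 0.5 * rng.standard_normal((n_events, n_samples))  # measurement noise

# Averaged postseismic strain (end of window) relative to coseismic step:
ratio = post[:, -1].mean() / coseismic_step.mean()
print(f"postseismic/coseismic ~ {ratio:.2f}")   # ~1, as the study reports
```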

  19. BEAM LOSS ESTIMATES AND CONTROL FOR THE BNL NEUTRINO FACILITY.

    SciTech Connect

    WENG, W.-T.; LEE, Y.Y.; RAPARIA, D.; TSOUPAS, N.; BEEBE-WANG, J.; WEI, J.; ZHANG, S.Y.

    2005-05-16

    The requirement for low beam loss is very important, both to protect the machine components and to make hands-on maintenance possible. In this report, the design considerations for achieving high intensity and low loss are presented. We start by specifying the beam loss limit for every relevant physical process, followed by the design choices and parameters for realizing the required goals. The processes considered in this paper include emittance growth in the linac, H⁻ injection, transition crossing, coherent instabilities, and extraction losses.

  1. The tsunami source area of the 2003 Tokachi-oki earthquake estimated from tsunami travel times and its relationship to the 1952 Tokachi-oki earthquake

    USGS Publications Warehouse

    Hirata, K.; Tanioka, Y.; Satake, K.; Yamaki, S.; Geist, E.L.

    2004-01-01

    We estimate the tsunami source area of the 2003 Tokachi-oki earthquake (Mw 8.0) from observed tsunami travel times at 17 Japanese tide gauge stations. The estimated tsunami source area (~1.4 × 10^4 km²) coincides with the western half of the ocean-bottom deformation area (~2.52 × 10^4 km²) of the 1952 Tokachi-oki earthquake (Mw 8.1), previously inferred from tsunami waveform inversion. This suggests that the 2003 event ruptured only the western half of the 1952 rupture extent. The geographical distribution of the maximum tsunami heights in 2003 differs significantly from that of the 1952 tsunami, supporting this hypothesis. Analysis of first-peak tsunami travel times indicates that a major uplift of the ocean bottom occurred approximately 30 km to the NNW of the mainshock epicenter, just above a major asperity inferred from seismic waveform inversion. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.
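    Back-propagating tide-gauge arrival times to delimit a source relies on the long-wave speed sqrt(g·h); the arithmetic, with an illustrative depth and path (not values from the study):

```python
import math

# Long-wave travel-time arithmetic for an illustrative path.
g = 9.81                    # m/s^2
depth_m = 3000.0            # representative ocean depth along the path
speed = math.sqrt(g * depth_m)          # shallow-water wave speed, ~172 m/s

distance_km = 250.0
travel_time_min = distance_km * 1e3 / speed / 60.0
print(f"~{travel_time_min:.0f} min")    # ~24 min for this path
```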

  2. Estimates of stress drop and crustal tectonic stress from the 27 February 2010 Maule, Chile, earthquake: Implications for fault strength

    USGS Publications Warehouse

    Luttrell, K.M.; Tong, X.; Sandwell, D.T.; Brooks, B.A.; Bevis, M.G.

    2011-01-01

    The great 27 February 2010 Mw 8.8 earthquake off the coast of southern Chile ruptured a ~600 km length of subduction zone. In this paper, we make two independent estimates of shear stress in the crust in the region of the Chile earthquake. First, we use a coseismic slip model constrained by geodetic observations from interferometric synthetic aperture radar (InSAR) and GPS to derive a spatially variable estimate of the change in static shear stress along the ruptured fault. Second, we use a static force balance model to constrain the crustal shear stress required to simultaneously support observed fore-arc topography and the stress orientation indicated by the earthquake focal mechanism. This includes the derivation of a semianalytic solution for the stress field exerted by surface and Moho topography loading the crust. We find that the deviatoric stress exerted by topography is minimized in the limit when the crust is considered an incompressible elastic solid, with a Poisson ratio of 0.5, and is independent of Young's modulus. This places a strict lower bound on the critical stress state maintained by the crust supporting plastically deformed accretionary wedge topography. We estimate that the coseismic shear stress change from the Maule event ranged from −6 MPa (stress increase) to 17 MPa (stress drop), with a maximum depth-averaged crustal shear-stress drop of 4 MPa. We separately estimate that the plate-driving forces acting in the region, regardless of their exact mechanism, must contribute at least 27 MPa trench-perpendicular compression and 15 MPa trench-parallel compression. This corresponds to a depth-averaged shear stress of at least 7 MPa. The comparable magnitude of these two independent shear stress estimates is consistent with the interpretation that the section of the megathrust fault ruptured in the Maule earthquake is weak, with the seismic cycle relieving much of the total sustained shear stress in the crust.

  3. Estimation of the Demand for Hospital Care After a Possible High-Magnitude Earthquake in the City of Lima, Peru.

    PubMed

    Bambarén, Celso; Uyen, Angela; Rodriguez, Miguel

    2017-02-01

    A model prepared by the National Civil Defense (INDECI; Lima, Peru) estimated that an earthquake of magnitude Mw 8.0 off the central coast of Peru would result in 51,019 deaths and 686,105 injured in districts of Metropolitan Lima and Callao. Using this information as a base, a study was designed to determine the characteristics of the demand for treatment in public hospitals and to estimate gaps in care in the hours immediately after such an event. A probabilistic model was designed that included the following variables: demand for hospital care; time of arrival at the hospitals; type of medical treatment; reason for hospital admission; and the need for specialized care such as hemodialysis, blood transfusions, and surgical procedures. The values for these variables were obtained through a literature search of the MEDLINE database, the Cochrane and SciELO libraries, and Google Scholar for information on earthquakes of magnitude greater than 6.0 (Mw) over the last 30 years. If a high-magnitude earthquake were to occur in Lima, it was estimated that between 23,328 and 178,387 injured would go to hospitals, of whom between 4,666 and 121,303 would require inpatient care, while between 18,662 and 57,084 could be treated as outpatients. It was estimated that there would be an average of 8,768 cases of crush syndrome and 54,217 cases of other health problems, and that enough blood for 8,761 wounded would be required in the first 24 hours. Furthermore, a deficit of hospital beds and operating theaters was expected due to the high demand. Sudden and violent disasters, such as earthquakes, represent significant challenges for health systems and services. This study shows the deficit of preparation and capacity to respond to a possible high-magnitude earthquake; it also shows that there are not enough resources to face mega-disasters, especially in large cities.
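    The probabilistic model described here can be sketched as a Monte Carlo over the stated ranges; the distributions and the capacity figure below are assumptions for illustration, not the study's values:

```python
import numpy as np

# Monte Carlo sketch of hospital demand after a scenario earthquake.
rng = np.random.default_rng(42)
n_runs = 10_000

injured = rng.uniform(23_328, 178_387, n_runs)     # total seeking care
frac_inpatient = rng.beta(2, 5, n_runs)            # share needing admission
inpatients = injured * frac_inpatient
outpatients = injured - inpatients

beds_available = 20_000                             # hypothetical capacity
bed_deficit = np.maximum(inpatients - beds_available, 0)
print(f"median inpatients: {np.median(inpatients):,.0f}")
print(f"P(bed deficit > 0) = {(bed_deficit > 0).mean():.2f}")
```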

  4. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso; Silwal, Vipul; Krischer, Lion; Tape, Carl

    2017-04-01

    A seismic moment tensor is a 3 × 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V ), where P(V ) is the probability that the true moment tensor for the event lies within the neighborhood of M that has fractional volume V . The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw < 2.5) events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. Moment tensor uncertainties allow us to better discriminate among moment tensor source types and to assign physical processes to the events.
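    The confidence curve P(V) can be built by sorting grid cells by probability and accumulating; a sketch with synthetic misfits (the misfit-to-probability mapping below is one simple choice, not necessarily the authors', and a uniform grid is assumed):

```python
import numpy as np

# Build a confidence curve P(V) from grid-search misfits (sketch).
rng = np.random.default_rng(3)
misfit = rng.gamma(2.0, 1.0, 100_000)        # one value per grid point

prob = np.exp(-misfit)                       # misfit -> unnormalized probability
prob /= prob.sum()

# Order cells from most to least probable; the neighborhood of the best
# solution with fractional volume V is the top V-fraction of cells.
order = np.argsort(prob)[::-1]
P_of_V = np.cumsum(prob[order])              # P(V) at V = (k+1)/N
V = np.arange(1, prob.size + 1) / prob.size

confidence_parameter = np.trapz(P_of_V, V)   # area under the curve
print(f"confidence parameter ~ {confidence_parameter:.2f}")
```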

  5. Magnitude estimates of two large aftershocks of the 16 December 1811 New Madrid earthquake

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.

    2002-01-01

    The three principal New Madrid mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for either the mainshocks or the aftershocks, available historical accounts do provide information that can be used to estimate magnitudes and locations for the large events. In this article we investigate two of the largest aftershocks: one near dawn following the first mainshock on 16 December 1811, and one near midday on 17 December 1811. We reinterpret original felt reports to obtain a set of 48 and 20 modified Mercalli intensity values for the two aftershocks, respectively. For the dawn aftershock, we infer an Mw of approximately 7.0 based on a comparison of its intensities with those of the smallest New Madrid mainshock. Based on a detailed account that appears to describe near-field ground motions, we further propose a new fault rupture scenario for the dawn aftershock. We suggest that the aftershock had a thrust mechanism and occurred on a southeastern limb of the Reelfoot fault. For the 17 December 1811 aftershock, we infer an Mw of approximately 6.1 ± 0.2. This value is determined using the method of Bakun et al. (2002), which is based on a new calibration of intensity versus distance for earthquakes in central and eastern North America. The location of this event is not well constrained, but the available accounts suggest an epicenter beyond the southern end of the New Madrid Seismic Zone.

  6. Experimental study of permanent displacement estimate method based on strong-motion earthquake accelerograms

    NASA Astrophysics Data System (ADS)

    Lu, Tao; Hu, Guorui

    2016-04-01

    In engineering seismology, the permanent displacement of a near-fault site is often obtained by processing the ground-motion accelerogram recorded at a station. Because estimation methods and algorithm parameters differ among studies, strongly different permanent-displacement results are often obtained; the reliability of these methods has not been demonstrated in practice, and the algorithm parameters must be selected with care. To address this problem, an experimental study of permanent-displacement recovery from accelerograms was carried out using a large shaking table and a sliding mechanism in an earthquake engineering laboratory. In the experiments, the shaking table generated dynamic excitation without permanent displacement, while the sliding mechanism fixed on the table generated the permanent displacement; the accelerogram containing the permanent-displacement information was recorded by an instrument on the sliding mechanism. The permanent displacement was then derived from the accelerogram and compared with the values obtained from a displacement meter and from digital close-range photogrammetry. The experiments showed that reliable permanent displacements can be obtained with existing processing methods under simple laboratory conditions, provided the algorithm parameters are selected carefully.
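    A generic version of the processing under test is baseline-corrected double integration; the pre/post-event windows and fit choices below are exactly the kind of parameters whose selection the experiment probes (synthetic pulse for illustration):

```python
import numpy as np

# Baseline-corrected double integration of a synthetic accelerogram.
dt = 0.01                                 # s, sample interval
t = np.arange(0.0, 60.0, dt)
acc = np.zeros_like(t)                    # m/s^2
i = lambda s: int(s / dt)
acc[i(10):i(11)] = 0.5                    # push...
acc[i(12):i(14)] = -0.25                  # ...and bring velocity back to zero

vel = np.cumsum(acc) * dt
post = t > 20.0                           # assumed post-event fit window
slope, intercept = np.polyfit(t[post], vel[post], 1)
vel = vel - (slope * t + intercept)       # remove linear velocity baseline
vel[t < 10.0] = 0.0                       # assume no motion before the event

disp = np.cumsum(vel) * dt
print(f"permanent displacement ~ {disp[-1]:.2f} m")   # ~1.25 m here
```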

  7. Estimation of full moment tensors, including uncertainties,for earthquakes, volcanic events, and nuclear tests

    NASA Astrophysics Data System (ADS)

    Alvizuri, C. R.; Silwal, V.; Krischer, L.; Tape, C.

    2016-12-01

    A seismic moment tensor is a 3 × 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V), where P(V) is the probability that the true moment tensor for the event lies within the neighborhood of M that has fractional volume V. The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw < 2.5) events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. Moment tensor uncertainties allow us to better discriminate among moment tensor source types and to assign physical processes to the events.

  8. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso R.

    We present a catalog of full seismic moment tensors for 63 events from Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 in the PLUTONS seismic array of 24 broadband stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. For each event we computed the misfit between observed and synthetic waveforms, and we used first-motion polarity measurements to reduce the number of possible solutions. Each moment tensor solution was obtained using a grid search over the six-dimensional space of moment tensors. For each event we show the misfit function in eigenvalue space, represented by a lune. We identify three subsets of the catalog: (1) 6 isotropic events, (2) 5 tensional crack events, and (3) a swarm of 14 events southeast of the volcanic center that appear to be double couples. The occurrence of positively isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our results, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model. Proper characterization of uncertainties for full moment tensors is critical for distinguishing among physical models of source processes. A seismic moment tensor is a 3x3 symmetric matrix that provides a compact representation of a seismic source. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms for each moment tensor and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M0 for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M0, we first convert the misfit function to a probability function. The uncertainty, or

  9. Earthquake Analysis.

    ERIC Educational Resources Information Center

    Espinoza, Fernando

    2000-01-01

    Indicates the importance of the development of students' measurement and estimation skills. Analyzes earthquake data recorded at seismograph stations and explains how to read and modify the graphs. Presents an activity for student evaluation. (YDS)

  10. Source rupture processes of the 2016 Kumamoto, Japan, earthquakes estimated from strong-motion waveforms

    NASA Astrophysics Data System (ADS)

    Kubo, Hisahiko; Suzuki, Wataru; Aoi, Shin; Sekiguchi, Haruko

    2016-10-01

    The detailed source rupture process of the M 7.3 event (April 16, 2016, 01:25 JST) of the 2016 Kumamoto, Japan, earthquakes was derived from strong-motion waveforms using multiple-time-window linear waveform inversion. Based on the observations of surface ruptures, the spatial distribution of aftershocks, and the geodetic data, a realistic curved fault model was developed for source-process analysis of this event. The seismic moment and maximum slip were estimated as 5.5 × 10^19 Nm (Mw 7.1) and 3.8 m, respectively. The source model of the M 7.3 event had two significant ruptures. One rupture propagated toward the northeastern shallow region at 4 s after rupture initiation and continued with large slips to approximately 16 s. This rupture caused a large slip region 10-30 km northeast of the hypocenter that reached the caldera of Mt. Aso. Another rupture propagated toward the surface from the hypocenter at 2-6 s and then propagated toward the northeast along the near surface at 6-10 s. A comparison with the result of using a single fault plane model demonstrated that the use of the curved fault model led to improved waveform fit at the stations south of the fault. The source process of the M 6.5 event (April 14, 2016, 21:26 JST) was also estimated. In the source model obtained for the M 6.5 event, the seismic moment was 1.7 × 10^18 Nm (Mw 6.1), and the rupture with large slips propagated from the hypocenter to the surface along the north-northeast direction at 1-6 s. The results in this study are consistent with observations of the surface ruptures.

  11. Improved phase arrival estimate and location for local earthquakes in South Korea

    NASA Astrophysics Data System (ADS)

    Morton, E. A.; Rowe, C. A.; Begnaud, M. L.

    2012-12-01

    The Korean Institute of Geoscience and Mineral Resources (KIGAM) and the Korean Meteorological Agency (KMA) regularly report local (distance < ~1200 km) seismicity recorded with their networks; we obtain preliminary event location estimates as well as waveform data, but no phase arrivals are reported, so the data are not immediately useful for earthquake location. Our goal is to identify seismic events that are sufficiently well located to provide accurate seismic travel-time information for events within the KIGAM and KMA networks that are also recorded by some regional stations. Toward that end, we use a combination of manual phase identification and arrival-time picking with waveform cross-correlation to cluster events that have occurred in close proximity to one another, which allows for improved phase identification by comparing the highly correlating waveforms. We cross-correlate the known events with one another on 5 seismic stations and cluster events that correlate above a correlation coefficient threshold of 0.7, which reveals only a few clusters, each containing a few events. The small number of repeating events suggests that the online catalogs have had mining and quarry blasts removed before publication, as these can contribute significantly to repeating seismic sources in relatively aseismic regions such as South Korea. The dispersed source locations in our catalog, however, are ideal for seismic velocity modeling, providing superior sampling through the dense seismic station arrangement and favorable event-to-station ray path coverage. Following careful manual phase picking on 104 events chosen to provide adequate ray coverage, we relocate the events to obtain improved source coordinates. The relocated events are used with Thurber's Simul2000 pseudo-bending local tomography code to estimate the crustal structure of the Korean Peninsula, an important contribution to ongoing calibration for events of interest in the region.
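    The clustering step pairs events whose maximum normalized cross-correlation exceeds 0.7; a compact sketch on synthetic single-station traces (no phase windowing, which a real workflow would add):

```python
import numpy as np

# Pairwise waveform correlation with a 0.7 clustering threshold (sketch).
rng = np.random.default_rng(7)
n_events, n_samples = 30, 500
waves = rng.standard_normal((n_events, n_samples))
waves[5] = waves[2] + 0.1 * rng.standard_normal(n_samples)  # a repeat pair

def max_cc(a, b):
    """Maximum normalized cross-correlation over all lags."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full")) / a.size

pairs = [(i, j)
         for i in range(n_events)
         for j in range(i + 1, n_events)
         if max_cc(waves[i], waves[j]) > 0.7]    # threshold from the study
print(pairs)                                      # -> [(2, 5)]
```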

  12. Source parameters of the 2008 Bukavu-Cyangugu earthquake estimated from InSAR and teleseismic data

    NASA Astrophysics Data System (ADS)

    D'Oreye, Nicolas; González, Pablo J.; Shuler, Ashley; Oth, Adrien; Bagalwa, Louis; Ekström, Göran; Kavotha, Déogratias; Kervyn, François; Lucas, Celia; Lukaya, François; Osodundu, Etoy; Wauthier, Christelle; Fernández, José

    2011-02-01

    Earthquake source parameter determination is of great importance for hazard assessment, as well as for a variety of scientific studies concerning regional stress and strain release and volcano-tectonic interaction. This is especially true for poorly instrumented, densely populated regions such as encountered in Africa, where even the distribution of seismicity remains poorly documented. In this paper, we combine data from satellite radar interferometry (InSAR) and teleseismic waveforms to determine the source parameters of the Mw 5.9 earthquake that occurred on 2008 February 3 near the cities of Bukavu (DR Congo) and Cyangugu (Rwanda). This was the second largest earthquake ever to be recorded in the Kivu basin, a section of the western branch of the East African Rift (EAR). This earthquake is of particular interest due to its shallow depth and proximity to active volcanoes and Lake Kivu, which contains high concentrations of dissolved carbon dioxide and methane. The shallow depth and possible similarity with dyking events recognized in other parts of EAR suggested the potential association of the earthquake with a magmatic intrusion, emphasizing the necessity of accurate source parameter determination. In general, we find that estimates of fault plane geometry, depth and scalar moment are highly consistent between teleseismic and InSAR studies. Centroid-moment-tensor (CMT) solutions locate the earthquake near the southern part of Lake Kivu, while InSAR studies place it under the lake itself. CMT solutions characterize the event as a nearly pure double-couple, normal faulting earthquake occurring on a fault plane striking 350° and dipping 52° east, with a rake of -101°. This is consistent with locally mapped faults, as well as InSAR data, which place the earthquake on a fault striking 355° and dipping 55° east, with a rake of -98°. The depth of the earthquake was constrained by a joint analysis of teleseismic P and SH waves and the CMT data set, showing that

  13. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
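    The empirical model's structure is a sum over shaking levels of exposed population times a fatality rate; PAGER's empirical rate is a two-parameter lognormal in intensity, though the parameter values below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

# Empirical fatality sketch: exposure times an intensity-dependent rate.
intensities = np.array([5.0, 6.0, 7.0, 8.0, 9.0])     # MMI bins
exposure = np.array([2e6, 8e5, 3e5, 6e4, 5e3])        # people per bin

theta, beta = 12.0, 0.25                               # hypothetical params
rate = norm.cdf(np.log(intensities / theta) / beta)   # fatality rate per bin

fatalities = (exposure * rate).sum()
print(f"estimated fatalities ~ {fatalities:,.0f}")
```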

  14. Source Process of the 2010 Great Chile Earthquake (Mw8.8) Estimated Using Observed Tsunami Waveforms

    NASA Astrophysics Data System (ADS)

    Tanioka, Y.; Gusman, A. R.

    2010-12-01

    The great earthquake, Mw 8.8, occurred in Chile on 27 February 2010 at 06:34:14 UTC. The number of casualties from this earthquake reached 800, more than 500 of whom were killed by tsunamis. The earthquake generated a large tsunami that propagated across the Pacific and reached coasts including Hawaii, Japan, and Alaska. The maximum run-up height of the tsunami was 28 m in Chile. The tsunami was observed at the DART real-time tsunami monitoring systems installed in the Pacific by NOAA-PMEL and also at tide gauges around the Pacific. In this paper, the tsunami waveforms observed at 9 DART stations, 32412, 51406, 51426, 54401, 43412, 46412, 46409, 46403, and 21413, are used to estimate the slip distribution of the 2010 Chile earthquake. The source area of 500 km × 150 km is divided into 30 subfaults of 50 km × 50 km. The Global CMT solution gives the focal mechanism of the earthquake: strike = 18°, dip = 18°, rake = 112°. These fault parameters are assumed for all subfaults. The tsunami is numerically computed on actual bathymetry. The finite-difference computation for the linear long-wave equations is carried out in the whole Pacific. The grid size is 5 minutes, about 9 km. Tsunami waveforms at the 9 DART stations are computed from each subfault with a unit amount of slip and used as the Green's functions for the inversion. The result of the tsunami inversion indicates that a large slip of more than 10 m occurred in the source area from about 150 km northeast of the epicenter to about 200 km southwest of the epicenter. The maximum slip is estimated to be 19 m at a subfault located southwest of the epicenter. The total rupture length is found to be about 350-400 km. The result also indicates a bilateral rupture process for the great Chile earthquake. The total seismic moment calculated from the slip distribution is 2.6 × 10^22 Nm (Mw 8.9), assuming a rigidity of 4 × 10^10 N/m^2.
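    The moment bookkeeping from a subfault slip model is a one-liner; a check with placeholder slips chosen only to reproduce the reported total (the actual slip distribution is not uniform):

```python
import numpy as np

# Seismic moment from a subfault slip model (geometry and rigidity as in
# the abstract; the uniform slip vector is a placeholder).
rigidity = 4.0e10                       # N/m^2
subfault_area = 50e3 * 50e3             # m^2, 50 km x 50 km subfaults
slip = np.full(30, 8.67)                # m, placeholder average slip

M0 = rigidity * subfault_area * slip.sum()          # ~2.6e22 N m
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
print(f"M0 = {M0:.2e} N m, Mw = {Mw:.1f}")          # -> Mw ~8.9
```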

  15. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among pre- and post-earthquake estimates of the recurrence interval based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably hosts repeated events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for the accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region.
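    The recurrence interval here is the coseismic moment divided by the accumulation rate; the arithmetic, using an illustrative Mw ~7.9 moment value:

```python
# Recurrence arithmetic from the moment accumulation/release model.
moment_rate = 2.7e17                    # N m / yr, from the study
M0_wenchuan = 1.05e21                   # N m (Mw ~7.9; illustrative value)

recurrence_yr = M0_wenchuan / moment_rate
print(f"recurrence ~ {recurrence_yr:.0f} yr")   # ~3900 yr, as reported
```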

  16. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    PubMed Central

    Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among pre- and post-earthquake estimates of the recurrence interval based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably hosts repeated events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for the accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524

  17. Real-Time Estimation of Earthquake Location, Magnitude and Rapid Shake map Computation for the Campania Region, Southern Italy

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Convertito, V.; de Matteis, R.; Iannaccone, G.; Lancieri, M.; Lomax, A.; Satriano, C.

    2005-12-01

    introducing an evolutionary strategy aimed at obtaining a progressively refined estimate of the maximum probability volume as time goes on. The real-time magnitude estimate will take advantage of the high spatial density of the network in the source region and the wide dynamic range of the installed instruments. Based on offline analysis of high-quality strong-motion databases recorded in Italy and worldwide, several methods will be checked and validated, using different observed quantities (peak amplitude, dominant frequency, squared velocity integral, ...) measured on seismograms as a function of time. Following the ElarmS methodology (Allen, 2004), peak ground attenuation relations can be used to predict the distribution of maximum ground shaking, as updated estimates of earthquake location and magnitude become progressively available from the Early Warning system starting from the time of the first P-wave detection. As measurements of peak ground quantities for the current earthquake become available from the network, these values are progressively used to adjust an "ad hoc" attenuation relation for the Campania region using the stochastic approach proposed by Boore (1993).

  18. A teleseismic study of the 2002 Denali fault, Alaska, earthquake and implications for rapid strong-motion estimation

    USGS Publications Warehouse

    Ji, C.; Helmberger, D.V.; Wald, D.J.

    2004-01-01

    Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, developed in successive phases, progressively improve the fit to the waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography, and further (Phase III) by using calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.

  19. Effects of tag loss on direct estimates of population growth rate

    USGS Publications Warehouse

    Rotella, J.J.; Hines, J.E.

    2005-01-01

    The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).

  20. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
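    IRDM extracts the loss factor from the decay rate of a band-filtered impulse response; a self-contained sketch using Schroeder backward integration and eta = 2.2/(f·T60), on a synthetic single-mode response (band, fit window, and damping value are assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Loss factor from impulse response decay (IRDM sketch).
fs = 8192.0
t = np.arange(0.0, 2.0, 1.0 / fs)
fc, eta_true = 1000.0, 0.01
h = np.sin(2 * np.pi * fc * t) * np.exp(-np.pi * fc * eta_true * t)

# Octave band around fc:
sos = butter(4, [fc / 2**0.5 / (fs / 2), fc * 2**0.5 / (fs / 2)],
             btype="bandpass", output="sos")
hb = sosfilt(sos, h)

edc = np.cumsum(hb[::-1] ** 2)[::-1]              # Schroeder energy decay
edc_db = 10 * np.log10(edc / edc[0])

# Fit the -5 to -25 dB portion and extrapolate to a 60 dB decay time:
mask = (edc_db < -5) & (edc_db > -25)
slope, _ = np.polyfit(t[mask], edc_db[mask], 1)   # dB per second
T60 = -60.0 / slope
eta = 2.2 / (fc * T60)
print(f"estimated loss factor ~ {eta:.4f}")       # ~0.01
```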

  1. Estimation of human heat loss in five Mediterranean regions.

    PubMed

    Bilgili, M; Simsek, E; Sahin, B; Yasar, A; Ozbek, A

    2015-10-01

    This study investigates the effects of seasonal weather differences on the human body's heat losses in the Mediterranean region of Turkey. The provinces of Adana, Antakya, Osmaniye, Mersin and Antalya were chosen for the research, and monthly atmospheric temperature, relative humidity, wind speed and atmospheric pressure data from 2007 were used. In all these provinces, radiative, convective and evaporative heat losses from the human body, through the skin surface and respiration, were analyzed from meteorological data by using the heat balance equation. According to the results, the rate of radiative, convective and evaporative heat loss from the human body varies considerably from season to season. In all the provinces, 90% of heat loss occurred through the skin, with the remaining 10% taking place through respiration. Furthermore, radiative and convective heat loss through the skin reached the highest values in the winter months, at approximately 110-140 W/m², and the lowest values in the summer months, at roughly 30-50 W/m².
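    The heat-balance terms have standard closed forms; a sketch with textbook-style correlations (the convection coefficient, Lewis-ratio evaporation term, and skin wettedness below are assumptions, not the paper's exact formulation):

```python
import math

# Skin heat-loss components from a simple heat balance (sketch).
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2 K^4
EMISSIVITY = 0.95        # assumed skin emissivity

def skin_heat_losses(t_skin_c, t_air_c, wind_m_s, vap_press_kpa):
    t_s, t_a = t_skin_c + 273.15, t_air_c + 273.15
    radiative = EMISSIVITY * SIGMA * (t_s ** 4 - t_a ** 4)     # W/m^2
    h_c = 8.3 * wind_m_s ** 0.6                                # W/m^2 K
    convective = h_c * (t_skin_c - t_air_c)
    # Tetens saturation vapor pressure at skin temperature (kPa):
    p_sk = 0.6105 * math.exp(17.27 * t_skin_c / (t_skin_c + 237.3))
    evaporative = 0.3 * 16.5 * h_c * (p_sk - vap_press_kpa)    # 30% wetted skin
    return radiative, convective, evaporative

print(skin_heat_losses(33.0, 10.0, 2.0, 0.8))   # winter-like conditions
```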

  2. Long-period earthquake simulations in the Wasatch Front, UT: misfit characterization and ground motion estimates

    USGS Publications Warehouse

    Moschetti, Morgan P.; Ramírez-Guzmán, Leonardo

    2011-01-01

    In this research we characterize the goodness-of-fit between observed and synthetic seismograms from three small magnitude (M3.6-4.5) earthquakes in the region using the Wasatch Front community velocity model (WCVM) in order to determine the ability of the WCVM to predict earthquake ground motions for scenario earthquake modeling efforts. We employ the goodness-of-fit algorithms and criteria of Olsen and Mayhew (2010). In focusing comparisons on the ground motion parameters that are of greatest importance in engineering seismology, we find that the synthetic seismograms calculated using the WCVM produce a fair fit to the observed ground motion records up to a frequency of 0.5 Hz for two of the modeled earthquakes and up to 0.1 Hz for one of the earthquakes. In addition to the reference seismic material model (WCVM), we carry out earthquake simulations using material models with perturbations to the regional seismic model and with perturbations to the deep sedimentary basins. Simple perturbations to the regional seismic velocity model and to the seismic velocities of the sedimentary basin result in small improvements in the observed misfit but do not indicate a significantly improved material model. Unresolved differences between the observed and synthetic seismograms are likely due to un-modeled heterogeneities and incorrect basin geometries in the WCVM. These differences suggest that ground motion prediction accuracy from deterministic modeling varies across the region and further efforts to improve the WCVM are needed.

  3. Real-time magnitude estimation and rapid fault characterization with GPS data for Earthquake Early Warning applications

    NASA Astrophysics Data System (ADS)

    Colombelli, S.; Allen, R. M.; Zollo, A.

    2012-12-01

    The combined use of seismic and geodetic observations is now a common practice for finite-fault modeling and seismic source parametrization. With the advent of high-rate (1 Hz) GPS stations, the seismological community has begun to look at ways to include GPS data in Earthquake Early Warning (EEW) algorithms. GPS stations record ground displacement without any risk of saturation and without the need for baseline or other corrections. Thus, geodetic displacement timeseries complement the high-frequency information provided by seismic data. In the standard approaches to early warning, the initial portion of the P-wave signal is used to rapidly characterize the earthquake magnitude and to predict the expected ground shaking at a target site, before damaging waves arrive. Whether the final magnitude of an earthquake can be predicted while the rupture process is underway still represents a controversial issue; the limitations of the standard approaches when applied to giant earthquakes became evident after the experience of the Mw 9.0, 2011 Tohoku-Oki earthquake. Here we explore the application of GPS data to EEW and investigate whether the co-seismic ground deformation can be used to provide fast and reliable magnitude estimations. We implemented an algorithm to extract the permanent static offset from GPS displacement timeseries; the static displacement is then used to invert for the slip distribution on the fault plane, using a constant-slip, rectangular source embedded in a homogeneous half-space. We developed an efficient real-time static slip inversion scheme for both the rapid determination of the event size and the near real-time estimation of the rupture area. This would allow for a correct evaluation of the expected ground shaking at the target sites, which represents, without doubt, the most important aspect of the practical implementation of an early warning system and the most relevant information to be provided to non-expert end-users.
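
    A minimal version of the static-offset extraction step might look like the sketch below; the window lengths and the settling delay are illustrative choices, not the authors' algorithm.

    ```python
    import numpy as np

    def static_offset(ts, fs, t_eq, pre=60.0, post=60.0, settle=120.0):
        """Permanent offset = post-event mean minus pre-event mean.

        `settle` seconds after the origin time are skipped so that dynamic
        shaking does not leak into the static estimate (window lengths are
        illustrative choices).
        """
        i_eq = int(t_eq * fs)
        pre_win = ts[max(0, i_eq - int(pre * fs)):i_eq]
        j0 = i_eq + int(settle * fs)
        post_win = ts[j0:j0 + int(post * fs)]
        return post_win.mean() - pre_win.mean()

    # synthetic 1 Hz GPS displacement with a 0.25 m coseismic step at t = 300 s
    fs, n = 1.0, 900
    t = np.arange(n) / fs
    ts = 0.005 * np.random.default_rng(1).standard_normal(n) + 0.25 * (t >= 300)
    print(static_offset(ts, fs, t_eq=300.0))   # ~0.25 m
    ```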

  4. Combined UAVSAR and GPS Estimates of Fault Slip for the M 6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Donnellan, A.; Parker, J. W.; Hawkins, B.; Hensley, S.; Jones, C. E.; Owen, S. E.; Moore, A. W.; Wang, J.; Pierce, M. E.; Rundle, J. B.

    2014-12-01

    The South Napa to Santa Rosa area has been observed with NASA's UAVSAR since late 2009 as part of an experiment to monitor areas identified as having a high probability of an earthquake. The M 6.0 South Napa earthquake occurred on 24 August 2014. The area was flown on 29 May 2014, preceding the earthquake, and again on 29 August 2014, five days after the earthquake. The UAVSAR results show slip on a single fault at the south end of the rupture near the epicenter of the event. The rupture branches out into multiple faults further north near the Napa area. A combined inversion of rapid GPS results and the unwrapped UAVSAR interferogram indicates nearly pure strike-slip motion. Using this assumption, the UAVSAR data show horizontal right-lateral slip across the fault of 19 cm at the south end of the rupture, increasing to 70 cm northward over a distance of 6.5 km. The joint inversion indicates slip of ~30 cm on a network of sub-parallel faults concentrated in a zone about 17 km long. The lower depths of the faults are 5-8.5 km. The eastern two sub-parallel faults break the surface, while three faults to the west are buried at depths ranging from 2-6 km, with deeper depths to the north and west. The geodetic moment release is equivalent to a M 6.1 event. Additional ruptures are observed in the interferogram, but the inversions suggest that they represent superficial slip that does not contribute to the overall moment release.
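
    The flavor of such a joint inversion can be conveyed with a toy weighted least-squares problem; the Green's functions, noise levels, and two-patch fault below are hypothetical stand-ins, not the study's geometry.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    slip_true = np.array([0.30, 0.19])          # m, two hypothetical patches
    G_insar = rng.standard_normal((200, 2))     # toy LOS Green's functions
    G_gps = rng.standard_normal((30, 2))        # toy GPS Green's functions
    d_insar = G_insar @ slip_true + 0.010 * rng.standard_normal(200)
    d_gps = G_gps @ slip_true + 0.003 * rng.standard_normal(30)

    w_i, w_g = 1.0 / 0.010, 1.0 / 0.003         # inverse-sigma data weights
    G = np.vstack([w_i * G_insar, w_g * G_gps])
    d = np.concatenate([w_i * d_insar, w_g * d_gps])
    slip_est, *_ = np.linalg.lstsq(G, d, rcond=None)
    print(slip_est)                             # ~ [0.30, 0.19]
    ```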

  5. Defeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth, compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model (GEM), launched in 2009. At the very least, everyone should be able to learn what his or her risk is; at the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question: how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened.

  6. Equations for estimating horizontal response spectra and peak acceleration from western North American earthquakes: A summary of recent work

    USGS Publications Warehouse

    Boore, D.M.; Joyner, W.B.; Fumal, T.E.

    1997-01-01

    In this paper we summarize our recently-published work on estimating horizontal response spectra and peak acceleration for shallow earthquakes in western North America. Although none of the sets of coefficients given here for the equations are new, for the convenience of the reader and in keeping with the style of this special issue, we provide tables for estimating random horizontal-component peak acceleration and 5 percent damped pseudo-acceleration response spectra in terms of the natural, rather than common, logarithm of the ground-motion parameter. The equations give ground motion in terms of moment magnitude, distance, and site conditions for strike-slip, reverse-slip, or unspecified faulting mechanisms. Site conditions are represented by the shear velocity averaged over the upper 30 m, and recommended values of average shear velocity are given for typical rock and soil sites and for site categories used in the National Earthquake Hazards Reduction Program's recommended seismic code provisions. In addition, we stipulate more restrictive ranges of magnitude and distance for the use of our equations than in our previous publications. Finally, we provide tables of input parameters that include a few corrections to site classifications and earthquake magnitude (the corrections made a small enough difference in the ground-motion predictions that we chose not to change the coefficients of the prediction equations).
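
    The functional form of these equations can be evaluated as in the sketch below. The coefficient values shown are placeholders patterned on the published PGA table and should be verified against the paper before any real use; the demo magnitude, distance, and Vs30 are arbitrary.

    ```python
    import math

    # Functional form: ln(Y) = b1 + b2(M-6) + b3(M-6)^2 + b5 ln(r)
    #                  + bv ln(Vs30/Va), with r = sqrt(rjb^2 + h^2).
    # Placeholder coefficients; verify against the published PGA table.
    B1, B2, B3, B5, BV, VA, H = -0.313, 0.527, 0.0, -0.778, -0.371, 1396.0, 5.57

    def ln_pga_g(mag, r_jb_km, vs30_ms):
        r = math.sqrt(r_jb_km**2 + H**2)
        return (B1 + B2 * (mag - 6.0) + B3 * (mag - 6.0) ** 2
                + B5 * math.log(r) + BV * math.log(vs30_ms / VA))

    print(math.exp(ln_pga_g(6.5, 10.0, 310.0)), "g")   # M6.5 at 10 km, soil site
    ```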

  7. Estimate of Seismological Parameters for the 1908 Messina Earthquake Through a new Data set Within the SISMOS Project.

    NASA Astrophysics Data System (ADS)

    Palombo, B.; Ferrari, G.; Bernardi, F.; Hunstad, I.; Perniola, B.

    2008-12-01

    The 1908 earthquake is one of the most catastrophic events in Italian history, recorded by most of the historical seismic stations existing at that time. Some of the seismograms recorded by these stations have already been used by many authors for the purpose of studying source characteristics, although only copies of the original recordings were available. Thanks to the Euroseismos project (2002-2007) and to the Sismos project, most of the original data (seismogram recordings and instrument parameter calibrations) for this event are now available in digital formats. Sismos technical facilities now allow us to apply modern methods of digital-data analysis to earthquakes recorded by mechanical and electromagnetic seismographs. The Sismos database has recently acquired many original seismograms and related instrumental parameters for the 1908 Messina earthquake, recorded by 14 stations distributed worldwide and never used in previous works. We have estimated the main event parameters (i.e. location, Ms, Mw and focal mechanism) with the new data set. The aim of our work is to provide the scientific community with a reliable size and source estimation for accurate and consistent seismic hazard evaluation in Sicily, a region characterized by high long-term seismicity.

  8. Estimating extreme losses for the Florida Public Hurricane Model—part II

    NASA Astrophysics Data System (ADS)

    Gulati, Sneh; George, Florence; Hamid, Shahid

    2017-01-01

    Rising global temperatures are leading to an increase in the number of extreme events and losses (http://www.epa.gov/climatechange/science/indicators/). Accurate estimation of these extreme losses is critical to insurance companies seeking to protect themselves against them. In a previous paper, Gulati et al. (2014) discussed probable maximum loss (PML) estimation for the Florida Public Hurricane Loss Model (FPHLM) using parametric and nonparametric methods. In this paper, we investigate the use of semi-parametric methods to do the same. Detailed analysis of the data shows that the annual losses from the FPHLM do not tend to be very heavy tailed, and therefore neither the popular Hill estimator nor the moment estimator works well. However, the Pickands estimator with a threshold around the 84th percentile provides a good fit for the extreme quantiles of the losses.
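
    For reference, the Pickands (1975) estimator itself is simple to implement from order statistics; the synthetic lognormal losses and the mapping from the 84th-percentile threshold to the order-statistic index k below are illustrative assumptions.

    ```python
    import numpy as np

    def pickands_xi(sample, k):
        """Pickands (1975) estimator of the extreme-value index xi,
        built from the k-th, 2k-th, and 4k-th largest order statistics."""
        x = np.sort(sample)[::-1]   # descending order statistics
        return (np.log((x[k - 1] - x[2 * k - 1]) /
                       (x[2 * k - 1] - x[4 * k - 1])) / np.log(2.0))

    rng = np.random.default_rng(3)
    losses = rng.lognormal(mean=15.0, sigma=1.0, size=5000)  # synthetic losses
    k = int((1.0 - 0.84) * len(losses) / 4)  # 4k-th order stat ~ 84th percentile
    print(pickands_xi(losses, k))            # near 0 for a light-tailed sample
    ```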

  9. Tsunami Waveform Inversion Technique to Estimate the Initial Sea Surface Displacement - Application to the 2007 Niigataken Chuetsu-oki Earthquake Tsunami

    NASA Astrophysics Data System (ADS)

    Tanioka, Y.; Namegaya, Y.; Satake, K.

    2008-12-01

    Recent earthquake source studies using the tsunami waveform inversion technique generally estimate slip distributions of large earthquakes by assuming the fault geometries. However, if an earthquake source is complex or not obvious, it is better to first estimate the initial sea surface displacement of the tsunami using the tsunami waveform inversion; that result can then be used to estimate or discuss the source process of the large earthquake. In this study, in order to estimate the initial sea surface displacement due to an earthquake, a new inversion technique using observed tsunami waveforms is developed. The sea surface in the possible tsunami source region is divided into small cells. Tsunami waveforms, or Green's functions for the inversion, at tide gauge stations are numerically computed for each cell with a unit amount of uplift. The sea surface displacements for each cell are estimated by inversion of the observed tsunami waveforms at those tide gauges. We apply the above technique to estimate the initial sea surface displacement due to the 2007 Niigataken Chuetsu-oki earthquake (MJMA 6.8). The earthquake occurred off the coast of Niigata prefecture, on the Japan Sea coast of central Japan, at 10:13 a.m. (JST) on 16 July 2007. Various source models of the earthquake were suggested using aftershock distribution data, seismological waveform data or geodetic data, but the fault plane of the earthquake is still controversial. The earthquake was accompanied by a tsunami, which was recorded at tide gauge stations along the Japan Sea coast. A maximum height of about 1 m was observed at a tide gauge station at Banjin, Kashiwazaki city, near the source region. Observed tsunami waveforms at ten tide gauge stations located around the source region are used for the inversion. The sea surface above the source region, or the aftershock area, is divided into 26 cells (4 km x 4 km) to estimate the initial sea surface displacement.
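
    At its core this is a linear least-squares problem, sketched below with synthetic stand-ins for the gauge records and the unit-uplift Green's functions; the cell and gauge counts follow the abstract, everything else is invented for the demo.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_cells, n_gauges, n_t = 26, 10, 400      # cell/gauge counts as in the study
    G = rng.standard_normal((n_gauges * n_t, n_cells))  # stand-in Green's fns
    uplift_true = np.clip(rng.normal(0.2, 0.3, n_cells), -0.5, 1.0)
    d = G @ uplift_true + 0.05 * rng.standard_normal(n_gauges * n_t)

    uplift_est, *_ = np.linalg.lstsq(G, d, rcond=None)  # per-cell displacement
    print(np.abs(uplift_est - uplift_true).max())       # small recovery error
    ```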

  10. Estimating Intensities and/or Strong Motion Parameters Using Civilian Monitoring Videos: The May 12, 2008, Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolin; Wu, Zhongliang; Jiang, Changsheng; Xia, Min

    2011-05-01

    One of the important issues in macroseismology and engineering seismology is how to get as much intensity and/or strong motion data as possible. We collected and studied several cases from the May 12, 2008, Wenchuan earthquake, exploring the possibility of estimating intensities and/or strong ground motion parameters using civilian monitoring videos originally deployed for security purposes. We used 53 video recordings from different places to determine the intensity distribution of the earthquake, which is shown to be consistent with the intensity distribution mapped by field investigation, and even better than that given by the Community Internet Intensity Map. In some of the videos, the seismic wave propagation is clearly visible and can be measured with the reference of artificial objects such as cars and/or trucks. By measuring the propagating wave, strong motion parameters can be roughly but quantitatively estimated. As a demonstration of this `propagating-wave method', we used a series of civilian videos recorded in different parts of Sichuan and Shaanxi and estimated the local PGAs. The estimates are compared with the measurements reported by strong motion instruments. The result shows that civilian monitoring videos provide a practical way of collecting and estimating intensity and/or strong motion parameters; they have the advantage of being dynamic and of allowing playback for further analysis, reflecting a new trend for macroseismology in our digital era.

  11. Cascadia's Staggering Losses

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Vogt, B.

    2001-05-01

    Recent worldwide earthquakes have resulted in staggering losses. The Northridge, California; Kobe, Japan; Loma Prieta, California; Izmit, Turkey; Chi-Chi, Taiwan; and Bhuj, India earthquakes, which range from magnitudes 6.7 to 7.7, have all occurred near populated areas. These earthquakes have resulted in estimated losses between $3 billion and $300 billion, with tens to tens of thousands of fatalities. Subduction zones are capable of producing the largest earthquakes. The 1939 M7.8 Chilean, the 1960 M9.5 Chilean, the 1964 M9.2 Alaskan, the 1970 M7.8 Peruvian, the 1985 M7.9 Mexico City and the 2001 M7.7 Bhuj earthquakes are damaging subduction zone quakes. The Cascadia fault zone poses a tremendous hazard in the Pacific Northwest due to the ground shaking and tsunami inundation hazards combined with the population. To address the Cascadia subduction zone threat, the Oregon Department of Geology and Mineral Industries conducted a preliminary statewide loss study. The 1998 Oregon study incorporated a M8.5 quake, the influence of near surface soil effects and the default building, social and economic data available in FEMA's HAZUS97 software. Direct financial losses are projected at over $12 billion. Casualties are estimated at about 13,000, over 5,000 of which are estimated to be fatalities from hazards relating to tsunamis and unreinforced masonry buildings.

  12. Efficient Acoustic Uncertainty Estimation for Transmission Loss Calculations

    DTIC Science & Technology

    2011-09-01

    [No abstract available. The record retains only report front matter identifying the investigators: Kevin R. James and David R. Dowling, Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, and a related publication: Kundu, P.K., Cohen, I.M., and Dowling, D.R., Fluid Mechanics, 5th Ed. (Academic Press, Oxford, 2012).]

  13. Estimates of Crustal Transmission Losses Using MLM Array Processing.

    DTIC Science & Technology

    1982-07-01

    Keywords: array processing; acoustic transmission loss; MLM beamforming. From the abstract: the MLM technique conceptually designs a beamformer based on the input data (hence data adaptive; Baggeroer and Falconer, 1981). This beamformer minimizes output power with the constraint that energy from a specific direction is passed undistorted.
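
    That constraint ("minimize output power, pass the look direction undistorted") is the classic MLM/MVDR solution, w = R⁻¹a / (aᴴR⁻¹a). A minimal sketch with a synthetic sample covariance follows; the array size, look direction, and diagonal loading are assumptions for the demo.

    ```python
    import numpy as np

    def mlm_weights(R, a):
        """MLM/MVDR weights: w = R^-1 a / (a^H R^-1 a), the minimum-power
        distortionless-response beamformer described in the abstract."""
        Ri_a = np.linalg.solve(R, a)
        return Ri_a / (a.conj() @ Ri_a)

    n = 8                                  # hypothetical 8-element array
    a = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(20.0)))  # steering
    rng = np.random.default_rng(5)
    snap = rng.standard_normal((n, 500)) + 1j * rng.standard_normal((n, 500))
    R = snap @ snap.conj().T / 500 + 1e-3 * np.eye(n)  # sample covariance + load
    w = mlm_weights(R, a)
    print(abs(w.conj() @ a))               # distortionless constraint: ~1.0
    ```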

  14. Revisiting borehole strain, typhoons, and slow earthquakes using quantitative estimates of precipitation-induced strain changes

    NASA Astrophysics Data System (ADS)

    Hsu, Ya-Ju; Chang, Yuan-Shu; Liu, Chi-Ching; Lee, Hsin-Ming; Linde, Alan T.; Sacks, Selwyn I.; Kitagawa, Genshio; Chen, Yue-Gau

    2015-06-01

    Taiwan experiences high deformation rates, particularly along its eastern margin, where a shortening rate of about 30 mm/yr is observed in the Longitudinal Valley and the Coastal Range. Four Sacks-Evertson borehole strainmeters have been installed in this area since 2003. Liu et al. (2009) proposed that a number of strain transient events, primarily coincident with low barometric pressure during passages of typhoons, were due to deep-triggered slow slip. Here we extend that investigation with a quantitative analysis of the strain responses to precipitation as well as barometric pressure and the Earth tides, in order to isolate tectonic source effects. Estimates of the strain responses to barometric pressure and groundwater level changes for the different stations vary over the ranges -1 to -3 nanostrain/millibar (hPa) and -0.3 to -1.0 nanostrain/hPa, respectively, consistent with theoretical values derived using Hooke's law. Liu et al. (2009) noted that during some typhoons, including at least one with very heavy rainfall, the observed strain changes were consistent with only barometric forcing. By considering a more extensive data set, we now find that the strain response to rainfall is about -5.1 nanostrain/hPa. A larger strain response to rainfall compared to that to air pressure and water level may be associated with an additional strain from fluid pressure changes that take place due to infiltration of precipitation. Using a state-space model, we remove the strain response to rainfall, in addition to those due to air pressure changes and the Earth tides, and investigate whether the corrected strain changes are related to environmental disturbances or to motions of tectonic origin. The majority of strain changes attributed to slow earthquakes seem rather to be associated with environmental factors. However, some events show remaining strain changes after all corrections; these include strain polarity changes during passages of typhoons.

  15. Passive Bottom Loss Estimation Using Compact Arrays and Autonomous Underwater Vehicles

    DTIC Science & Technology

    2015-09-30

    …the source of error being the low accuracy in bottom-loss estimation [Ferla, 2002]. While the classical approach to acquiring the bottom reflection… In the frequency range indicated above, the poor angular resolution of the short arrays required in AUV deployment causes an underestimation of the loss…

  16. Introduction of a Novel Loss Data Normalization Method for Improved Estimation of Extreme Losses from Natural Catastrophes

    NASA Astrophysics Data System (ADS)

    Eichner, J. F.; Steuer, M.; Loew, P.

    2016-12-01

    Past natural catastrophes offer valuable information for present-day risk assessment. To make use of historic loss data, one has to find a setting that enables comparison (over place and time) of historic events happening under today's conditions. By means of loss data normalization the influence of socio-economic development, as the fundamental driver in this context, can be eliminated, and the data gives way to the deduction of risk-relevant information and allows the study of other driving factors such as influences from climate variability and climate change or changes of vulnerability. Munich Re's NatCatSERVICE database includes for each historic loss event the geographic coordinates of all locations and regions that were affected in a relevant way. These locations form the basis for what is known as the loss footprint of an event. Here we introduce a state-of-the-art and robust method for global loss data normalization. The presented peril-specific loss footprint normalization method adjusts direct economic loss data to the influence of economic growth within each loss footprint (by using gross cell product data as a proxy for local economic growth) and makes loss data comparable over time. To achieve a comparative setting for supra-regional economic differences, we categorize the normalized loss values (together with information on fatalities) based on the World Bank income groups into five catastrophe classes, from minor to catastrophic. The data treated in such a way allow (a) for studying the influence of improved reporting of small-scale loss events over time and (b) for application of standard (stationary) extreme value statistics (here: the peaks-over-threshold method) to compile estimates of extreme and extrapolated loss magnitudes, such as a "100-year event", on a global scale. Examples of such results will be shown.
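
    A peaks-over-threshold fit of the kind described might be sketched as follows, using scipy's generalized Pareto distribution on synthetic normalized losses; the threshold choice and the Poisson-rate return-level formula are standard, but the data and parameters here are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    years, per_year = 35, 40
    losses = rng.pareto(2.5, years * per_year) * 1e8   # synthetic normalized losses

    u = np.quantile(losses, 0.95)                      # illustrative threshold
    exc = losses[losses > u] - u
    xi, _, beta = stats.genpareto.fit(exc, floc=0.0)   # GPD shape and scale
    lam = len(exc) / years                             # exceedance rate per year

    # return level for T years (xi != 0): x_T = u + (beta/xi) * ((lam*T)**xi - 1)
    T = 100.0
    x_T = u + beta / xi * ((lam * T) ** xi - 1.0)
    print(f"100-year loss ~ {x_T:.3e}")
    ```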

  17. Loss of Information in Estimating Item Parameters in Incomplete Designs

    ERIC Educational Resources Information Center

    Eggen, Theo J. H. M.; Verelst, Norman D.

    2006-01-01

    In this paper, the efficiency of conditional maximum likelihood (CML) and marginal maximum likelihood (MML) estimation of the item parameters of the Rasch model in incomplete designs is investigated. The use of the concept of F-information (Eggen, 2000) is generalized to incomplete testing designs. The scaled determinant of the F-information…

  18. Exploring the uncertainty range of coseismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2017-01-01

    A new finite fault inversion strategy is developed to explore the uncertainty range for the energy-based average coseismic stress drop, Δτ_E, of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a Δτ_E matching a specified value. We do the inversions for a wide range of stress drops. These results produce a trade-off curve between the misfit to the observations and Δτ_E, which allows one to define the range of Δτ_E that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of Δτ_E (5-10 MPa) for this earthquake is successfully constrained. However, the same data set exhibits no sensitivity to the upper bound of Δτ_E because there is limited resolution of the fine-scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of Δτ_E cannot be constrained with them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of Δτ_E leads to the conclusions that (1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated and (2) the upper bound of the average fracture energy EG cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of Δτ_E can be used as an energy-based smoothing scheme during finite fault inversions.

  19. How do "ghost transients" from past earthquakes affect GPS slip rate estimates on southern California faults?

    NASA Astrophysics Data System (ADS)

    Hearn, E. H.; Pollitz, F. F.; Thatcher, W. R.; Onishi, C. T.

    2013-04-01

    In this study, we investigate the extent to which viscoelastic velocity perturbations (or "ghost transients") from individual fault segments can affect elastic block model-based inferences of fault slip rates from GPS velocity fields. We focus on the southern California GPS velocity field, exploring the effects of known, large earthquakes for two end-member rheological structures. Our approach is to compute, at each GPS site, the velocity perturbation relative to a cycle average for earthquake cycles on particular fault segments. We then correct the SCEC CMM4.0 velocity field for this perturbation and invert the corrected field for fault slip rates. We find that if asthenosphere viscosities are low (3 × 10^18 Pa s), the current GPS velocity field is significantly perturbed by viscoelastic earthquake cycle effects associated with the San Andreas Fault segment that last ruptured in 1857 (Mw = 7.9). Correcting the GPS velocity field for this perturbation (or "ghost transient") adds about 5 mm/a to the SAF slip rate along the Mojave and San Bernardino segments. The GPS velocity perturbations due to large earthquakes on the Garlock Fault (most recently, events in the early 1600s) and the White Wolf Fault (most recently, the Mw = 7.3 1952 Kern County earthquake) are smaller and do not influence block-model inverted fault slip rates. This suggests that either the large discrepancy between geodetic and geologic slip rates for the Garlock Fault is not due to a ghost transient or that un-modeled transients from recent Mojave earthquakes may influence the GPS velocity field.

  1. Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning

    DTIC Science & Technology

    2008-01-01

    …ranging from the income level to age and her preference order over a set of products (e.g. movies in Netflix). The ranking task is to learn a mapping… learners in RankBoost. However, in both cases, the proposed strategy selects the samples which are estimated to produce a faster convergence… A number of strategies have been proposed for active learning in the classification framework.

  2. Pictorial estimation of blood loss in a birthing pool--an aide memoire.

    PubMed

    Goodman, Anushia

    2015-04-01

    The aim of this article is to share some photographic images to help midwives visually estimate blood loss at water births. PubMed, CINAHL and MEDLINE databases were searched for relevant research. There is little evidence to inform the practice of visually estimating blood loss in water, as discussed further on in the article. This article outlines a simulation in which varying amounts of blood were poured into a birthing pool and captured in photo images. Photo images of key amounts such as 150 ml, 300 ml and 450 ml can be useful visual markers when estimating blood loss at water births. The speed of spread across the pool may be a significant factor in assessing blood loss. The author recommends that midwives and educators embark on similar simulations to inform their skill in estimating blood loss at water births.

  3. Q Estimates using the Coda of Local Earthquakes in Western Turkey

    NASA Astrophysics Data System (ADS)

    Akyol, Nihal

    2015-04-01

    The regional extension in central west Turkey has been associated with different deformation processes, such as spreading and thinning of over-thickened crust following the latest collision across the Neotethys, Arabia-Eurasia convergence resulting in westward extrusion of the Anatolian Plate, and Africa-Eurasia convergence forming the regional tectonics in the back-arc extensional area. Utilizing a single isotropic scattering model, the coda quality factor (Qc) at five frequency bands (1.5, 3, 5, 7, 10 Hz) and for eight window lengths (25-60 s, in steps of 5 s) was estimated in the region. The data come from 228 earthquakes with local magnitudes of 2.9-4.9 and depths of 2.2-27.0 km. The source-to-receiver distance of the records ranges between 11 and 72 km. Spatial differences in attenuation characteristics were examined by dividing the region into four subregions. The frequency dependence of Qc values between 1.5 and 10 Hz has been inferred utilizing the Qc = Q0 f^n relationship. Q0 values range between 32.7 and 82.1, while n values range between 0.91 and 0.79 for the main region and the four subregions, respectively. The obtained frequency dependence of Qc values for a lapse time of 40 s in the main region is Qc(f) = (49.6 ± 1.0) f^(0.85 ± 0.02). The obtained low Q0 values show that the central west Turkey region is, in general, characterized by high seismic attenuation. Strong frequency and lapse time dependencies of Qc values for the main region and the four subregions imply tectonic complexity in the region. The attenuation and its frequency dependence versus lapse time for the easternmost subregion confirm the slab tear inferred from previous studies. The highest frequency dependence values, at all lapse times, in the westernmost subregion imply a high degree of heterogeneity, supported by severe anti-clockwise rotation in this area. Lapse time dependencies of attenuation and its frequency dependence were examined for two different ranges of event depth.
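
    Relations of the form Qc = Q0·f^n come from straight-line fits in log-log space, as in this sketch; the synthetic Qc values are scattered around the paper's main-region relation purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    f = np.array([1.5, 3.0, 5.0, 7.0, 10.0])      # center frequencies, Hz
    qc = 49.6 * f**0.85 * np.exp(0.03 * rng.standard_normal(f.size))

    # log(Qc) = log(Q0) + n*log(f): a linear fit recovers Q0 and n
    n_exp, ln_q0 = np.polyfit(np.log(f), np.log(qc), 1)
    print("Q0 ~", np.exp(ln_q0), " n ~", n_exp)   # ~49.6 and ~0.85
    ```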

  4. Estimate of cusp loss width in multicusp negative ion source

    NASA Astrophysics Data System (ADS)

    Morishita, T.; Ogasawara, M.; Hatayama, A.

    1998-02-01

    The expression for the cusp loss width derived by Bosch and Merlino is applied to JAERI's Kamaboko source. The width is related to the ambipolar diffusion coefficient across the cusp magnetic field. The electron-ion collision frequency is found to be 1.2-7.4 times larger than the electron-neutral collision frequency. The averaged cusp magnetic field in the diffusion coefficient is taken as a parameter in the simulation code for the Kamaboko source. When the averaged magnetic field is 48 G, simulation results agree well with JAERI's experiment over a wide range of pressure and arc power variation. The value of 48 G is reasonable from considerations of confinement of the ion source plasma. The obtained width is about 10 times the value given by twice the ion Larmor radius at the surface of the cusp magnet.

  5. Impact-based earthquake alerts with the U.S. Geological Survey's PAGER system: what's next?

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Garcia, D.; So, E.; Hearne, M.

    2012-01-01

    In September 2010, the USGS began publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses with its Prompt Assessment of Global Earthquakes for Response (PAGER) system. These estimates significantly enhanced the utility of the USGS PAGER system which had been, since 2006, providing estimated population exposures to specific shaking intensities. Quantifying earthquake impacts and communicating estimated losses (and their uncertainties) to the public, the media, humanitarian, and response communities required a new protocol—necessitating the development of an Earthquake Impact Scale—described herein and now deployed with the PAGER system. After two years of PAGER-based impact alerting, we now review operations, hazard calculations, loss models, alerting protocols, and our success rate for recent (2010-2011) events. This review prompts analyses of the strengths, limitations, opportunities, and pressures, allowing clearer definition of future research and development priorities for the PAGER system.

  6. Coseismic Fault Slip of the September 16, 2015 Mw 8.3 Illapel, Chile Earthquake Estimated from InSAR Data

    NASA Astrophysics Data System (ADS)

    Zhang, Yingfeng; Zhang, Guohong; Hetland, Eric A.; Shan, Xinjian; Wen, Shaoyan; Zuo, Ronghu

    2016-04-01

    The complete surface deformation of the 2015 Mw 8.3 Illapel, Chile earthquake is obtained using SAR interferograms from descending and ascending Sentinel-1 orbits. We find that the Illapel event is predominantly thrust, as expected for an earthquake on the interface between the Nazca and South America plates, with a slight right-lateral strike-slip component. The maximum thrust slip and right-lateral strike slip reach 8.3 and 1.5 m, respectively, both located at a depth of 8 km, northwest of the epicenter. The total estimated seismic moment is 3.28 × 10^21 N·m, corresponding to a moment magnitude Mw 8.27. In our model, the rupture breaks all the way up to the sea floor at the trench, which is consistent with the destructive tsunami following the earthquake. We also find the slip distribution correlates closely with previous estimates of the interseismic locking distribution. We argue that positive Coulomb stress changes caused by the Illapel earthquake may favor earthquakes on the extensional faults in this area. Finally, based on our inferred coseismic slip model and Coulomb stress calculation, we envision that the subduction interface that last slipped in the 1922 Mw 8.4 Vallenar earthquake might be near the end of its period of seismic quiescence, making the earthquake potential in this region a pressing concern.
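
    The quoted magnitude follows from the standard Hanks-Kanamori moment-magnitude relation, which can be checked directly:

    ```python
    import math

    def moment_magnitude(m0_nm):
        """Hanks-Kanamori: Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
        return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

    print(moment_magnitude(3.28e21))   # ~8.27, as quoted in the abstract
    ```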

  7. Napa Earthquake impact on water systems

    NASA Astrophysics Data System (ADS)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California, on August 24 at 3 a.m. local time, with a magnitude of 6.0. The earthquake was the largest in the SF Bay Area since the 1989 Loma Prieta earthquake. Economic loss topped $1 billion. Wine makers cleaned up and estimated the damage to tourism; around 15,000 cases of cabernet poured into the garden at the Hess Collection. Earthquakes can raise water pollution risks and could cause a water crisis. California has suffered water shortages in recent years, so understanding how to prevent groundwater and surface-water pollution from earthquakes could be helpful. This research gives a clear view of the drinking water system in California, pollution of river systems, and an estimation of earthquake impacts on water supply. The Sacramento-San Joaquin River delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water export, and salt water intrusion has reduced fresh water outflows. Strong shaking from a nearby earthquake can cause liquefaction of saturated, loose, sandy soils and could potentially damage major delta levee systems near Napa. The Napa earthquake is a wake-up call for Southern California: it could potentially damage the freshwater supply system.

  8. Estimating High Frequency Energy Radiation of Large Earthquakes by Image Deconvolution Back-Projection

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim

    2017-04-01

    With the recent establishment of regional dense seismic arrays (e.g., Hi-net in Japan, USArray in North America), advanced digital data processing has enabled improvement of back-projection methods, which have become popular and are widely used to track the rupture process of moderate to large earthquakes. Back-projection methods can be classified into two groups, one using time-domain analyses and the other frequency-domain analyses; there are minor technical differences within both groups. Here we focus on back-projection performed in the time domain using seismic waveforms recorded at teleseismic distances (30-90 degrees). For the standard back-projection (Ishii et al., 2005), teleseismic P waves recorded on the vertical components of a dense seismic array are analyzed. Since seismic arrays have limited resolution and we make several assumptions (e.g., only direct P waves in the observed waveforms, and every trace having a completely identical waveform), the final images from back-projection show stacked amplitudes (or correlation coefficients) that are often smeared in both the time and space domains. Although it might not be difficult to reveal the overall source process for a giant seismic source such as the 2004 Mw 9.0 Sumatra earthquake, where the source extent is about 1400 km (Ishii et al., 2005; Krüger and Ohrnberger, 2005), there are more problems in imaging the detailed processes of earthquakes with smaller source dimensions, such as a M 7.5 earthquake with a source extent of 100-150 km. For smaller earthquakes, it is more difficult to resolve the space distribution of the radiated energy. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP), to determine the sources of high-frequency energy radiation by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response.
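
    The core of the time-domain approach is a delay-and-stack over candidate source nodes, as in this toy sketch; the travel-time delays and waveforms are synthetic, and real implementations add trace alignment, weighting, and smoothing.

    ```python
    import numpy as np

    def back_project(traces, delays, fs, window):
        """Shift-and-stack back-projection for one candidate source node.

        `delays` are predicted P travel times (s) from the node to each
        station; the stacked energy inside `window` measures how much
        high-frequency radiation is attributed to that node.
        """
        n_sta, n_t = traces.shape
        stack = np.zeros(n_t)
        for tr, dt in zip(traces, delays):
            shift = int(round(dt * fs))
            stack[: n_t - shift] += tr[shift:]   # align each trace on the node
        i0, i1 = (int(w * fs) for w in window)
        return np.sum(stack[i0:i1] ** 2) / n_sta**2

    # synthetic demo: an impulsive P arrival at station-dependent delays
    fs, n_sta, n_t = 20.0, 30, 2000
    rng = np.random.default_rng(8)
    delays = rng.uniform(40.0, 60.0, n_sta)
    traces = 0.1 * rng.standard_normal((n_sta, n_t))
    for k, dt in enumerate(delays):
        traces[k, int(dt * fs)] += 1.0
    print(back_project(traces, delays, fs, window=(0.0, 5.0)))
    ```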

  9. GPS estimates of microplate motions, northern Caribbean: evidence for a Hispaniola microplate and implications for earthquake hazard

    NASA Astrophysics Data System (ADS)

    Benford, B.; DeMets, C.; Calais, E.

    2012-09-01

    We use elastic block modelling of 126 GPS site velocities from Jamaica, Hispaniola, Puerto Rico and other islands in the northern Caribbean to test for the existence of a Hispaniola microplate and estimate angular velocities for the Gônave, Hispaniola, Puerto Rico-Virgin Islands and two smaller microplates relative to each other and the Caribbean and North America plates. A model in which the Gônave microplate spans the whole plate boundary between the Cayman spreading centre and Mona Passage west of Puerto Rico is rejected at a high confidence level. The data instead require an independently moving Hispaniola microplate between the Mona Passage and a likely diffuse boundary within or offshore from western Hispaniola. Our updated angular velocities predict 6.8 ± 1.0 mm yr-1 of left-lateral slip along the seismically hazardous Enriquillo-Plantain Garden fault zone of southwest Hispaniola, 9.8 ± 2.0 mm yr-1 of slip along the Septentrional fault of northern Hispaniola and ~14-15 mm yr-1 of left-lateral slip along the Oriente fault south of Cuba. They also predict 5.7 ± 1 mm yr-1 of fault-normal motion in the vicinity of the Enriquillo-Plantain Garden fault zone, faster than previously estimated and possibly accommodated by folds and faults in the Enriquillo-Plantain Garden fault zone borderlands. Our new and a previous estimate of Gônave-Caribbean plate motion suggest that enough elastic strain accumulates to generate one to two Mw ~7 earthquakes per century along the Enriquillo-Plantain Garden and nearby faults of southwest Hispaniola. That the 2010 M = 7.0 Haiti earthquake ended a 240-yr-long period of seismic quiescence in this region raises concerns that it could mark the onset of a new earthquake sequence that will relieve elastic strain that has accumulated since the late 18th century.

  10. Slip distribution of the 2014 Mw = 8.1 Pisagua, northern Chile, earthquake sequence estimated from coseismic fore-arc surface cracks

    NASA Astrophysics Data System (ADS)

    Loveless, John P.; Scott, Chelsea P.; Allmendinger, Richard W.; González, Gabriel

    2016-10-01

    The 2014 Mw = 8.1 Iquique (Pisagua), Chile, earthquake sequence ruptured a segment of the Nazca-South America subduction zone that last hosted a great earthquake in 1877. The sequence opened >3700 surface cracks of decameter-scale length and millimeter- to centimeter-scale aperture in the fore arc. We use the strikes of the measured cracks, inferred to be perpendicular to the coseismically applied tension, to estimate the slip distribution of the main shock and largest aftershock. The slip estimates are compatible with those based on seismic, geodetic, and tsunami data, indicating that geologic observations can also place quantitative constraints on rupture properties. The earthquake sequence ruptured between two asperities inferred from a regional-scale distribution of surface cracks, interpreted to represent a modal or most common rupture scenario for the northern Chile subduction zone. We suggest that past events, including the 1877 earthquake, broke the 2014 Pisagua source area together with adjacent sections in a throughgoing rupture.

  11. Reassessment of liquefaction potential and estimation of earthquake- induced settlements at Paducah Gaseous Diffusion Plant, Paducah, Kentucky. Final report

    SciTech Connect

    Sykora, D.W.; Yule, D.E.

    1996-04-01

    This report documents a reassessment of liquefaction potential and estimation of earthquake-induced settlements for the U.S. Department of Energy (DOE), Paducah Gaseous Diffusion Plant (PGDP), located southwest of Paducah, KY. The U.S. Army Engineer Waterways Experiment Station (WES) was authorized to conduct this study from FY91 to FY94 by the DOE, Oak Ridge Operations (ORO), Oak Ridge, TN, through Inter- Agency Agreement (IAG) No. DE-AI05-91OR21971. The study was conducted under the Gaseous Diffusion Plant Safety Analysis Report (GDP SAR) Program.

  12. Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions

    NASA Astrophysics Data System (ADS)

    White, Randall; McCausland, Wendy

    2016-01-01

    We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly, we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity, including: its swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and a large non-double-couple component to focal mechanisms. Most importantly, we show that the intruded magma volume can be estimated simply from the cumulative seismic moment of the VT seismicity as log10 V = 0.77 log10 ΣM0 − 5.32, with volume V in cubic meters and seismic moment ΣM0 in newton meters. The cumulative seismic moment can be approximated from the sizes of just the few largest events and is quite insensitive to precise locations.
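
    The quoted regression is straightforward to apply; the example cumulative moment below is hypothetical.

    ```python
    import math

    def intruded_volume_m3(cum_moment_nm):
        """Intruded magma volume from cumulative VT seismic moment, using the
        regression quoted in the abstract:
        log10(V) = 0.77 * log10(sum(M0)) - 5.32  (V in m^3, M0 in N*m)."""
        return 10 ** (0.77 * math.log10(cum_moment_nm) - 5.32)

    # e.g. a VT swarm with a cumulative moment of 1e16 N*m (hypothetical)
    print(f"{intruded_volume_m3(1e16):.2e} m^3")   # ~1e7 m^3, i.e. ~0.01 km^3
    ```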

  13. Power Scaling of the Size Distribution of Economic Loss and Fatalities due to Hurricanes, Earthquakes, Tornadoes, and Floods in the USA

    NASA Astrophysics Data System (ADS)

    Tebbens, S. F.; Barton, C. C.; Scott, B. E.

    2016-12-01

    Traditionally, the size of natural disaster events such as hurricanes, earthquakes, tornadoes, and floods is measured in terms of wind speed (m/sec), energy released (ergs), or discharge (m3/sec) rather than by economic loss or fatalities. Economic loss and fatalities from natural disasters result from the intersection of the human infrastructure and population with the size of the natural event. This study investigates the size versus cumulative number distribution of individual natural disaster events for several disaster types in the United States. Economic losses are adjusted for inflation to 2014 USD. The cumulative number divided by the time span of the data for each disaster type is the basis for making probabilistic forecasts in terms of the number of events greater than a given size per year and, its inverse, return time. Such forecasts are of interest to insurers/re-insurers, meteorologists, seismologists, government planners, and response agencies. Plots of size versus cumulative number per year for economic loss and fatalities are well fit by power scaling functions of the form p(x) = C x^(−β), where p(x) is the cumulative number of events per year with size equal to or greater than x, C is a constant (the activity level), x is the event size, and β is the scaling exponent. Economic loss and fatalities due to hurricanes, earthquakes, tornadoes, and floods are well fit by power functions over one to five orders of magnitude in size. Economic losses for hurricanes and tornadoes have greater scaling exponents, β = 1.1 and 0.9 respectively, whereas earthquakes and floods have smaller scaling exponents, β = 0.4 and 0.6 respectively. Fatalities for tornadoes and floods have greater scaling exponents, β = 1.5 and 1.7 respectively, whereas hurricanes and earthquakes have smaller scaling exponents, β = 0.4 and 0.7 respectively. The scaling exponents can be used to make probabilistic forecasts for time windows ranging from 1 to 1000 years.
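
    Forecasts of this kind follow directly from the fitted exceedance function. In the sketch below, β = 0.4 is the abstract's exponent for earthquake economic losses, while the activity level C is a made-up value for illustration.

    ```python
    # p(x) = C * x**(-beta): cumulative number of events per year >= size x
    def events_per_year(x, C, beta):
        return C * x**(-beta)

    C, beta = 50.0, 0.4          # C is hypothetical; beta from the abstract
    loss = 1e10                  # event size: 10 billion USD (2014)
    rate = events_per_year(loss, C, beta)
    print(f"rate = {rate:.4f}/yr, return time = {1.0 / rate:.0f} yr")
    ```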

  14. Estimation of ground motion for Bhuj (26 January 2001; Mw 7.6) and for future earthquakes in India

    USGS Publications Warehouse

    Singh, S.K.; Bansal, B.K.; Bhattacharya, S.N.; Pacheco, J.F.; Dattatrayam, R.S.; Ordaz, M.; Suresh, G.; ,; Hough, S.E.

    2003-01-01

    Only five moderate and large earthquakes (Mw ≥ 5.7) in India (three in the Indian shield region and two in the Himalayan arc region) have given rise to multiple strong ground-motion recordings. Near-source data are available for only two of these events. The Bhuj earthquake (Mw 7.6), which occurred in the shield region, gave rise to useful recordings at distances exceeding 550 km. Because of the scarcity of the data, we use the stochastic method to estimate ground motions. We assume that (1) S waves dominate at R < 100 km and Lg waves at R ≥ 100 km, (2) Q = 508 f^0.48 is valid for the Indian shield as well as the Himalayan arc region, (3) the effective duration is given by f_c^(-1) + 0.05R, where f_c is the corner frequency and R is the hypocentral distance in kilometers, and (4) the acceleration spectra are sharply cut off beyond 35 Hz. We use two finite-source stochastic models. One is an approximate model that reduces to the ω²-source model at distances greater than about twice the source dimension. This model has the advantage that the ground motion is controlled by the familiar stress parameter, Δσ. In the other finite-source model, which is more reliable for near-source ground-motion estimation, the high-frequency radiation is controlled by the strength factor, sfact, a quantity that is physically related to the maximum slip rate on the fault. We estimate the Δσ needed to fit the observed Amax and Vmax data of each earthquake (which are mostly in the far field). The corresponding sfact is obtained by requiring that the predicted curves from the two models match each other in the far field up to a distance of about 500 km. The results show: (1) The Δσ that explains the Amax data for shield events may be a function of depth, increasing from ~50 bars at 10 km to ~400 bars at 36 km. The corresponding sfact values range from 1.0-2.0. The Δσ values for the two Himalayan arc events are 75 and 150 bars (sfact = 1.0 and 1.4). (2) The Δσ required to explain the Vmax data…
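
    A bare-bones, point-source version of the assumed far-field acceleration spectrum, combining the ω² source with the quoted Q(f) and the 35 Hz cutoff, is sketched below. It is not the authors' finite-source models: the constant C (radiation pattern, free-surface, density-velocity terms) is set to 1, 1/R spreading is assumed, and all numbers in the demo call are illustrative.

    ```python
    import numpy as np

    def accel_spectrum(f, m0, fc, r_km, beta_kms=3.5, q0=508.0, eta=0.48, c=1.0):
        """Omega-squared far-field acceleration spectrum with the quoted
        Q(f) = 508 f^0.48 attenuation and a sharp 35 Hz cutoff (sketch)."""
        source = c * m0 * (2 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
        q = q0 * f**eta
        path = np.exp(-np.pi * f * r_km / (q * beta_kms)) / r_km  # 1/R spreading
        return np.where(f <= 35.0, source * path, 0.0)

    f = np.linspace(0.1, 40.0, 400)
    spec = accel_spectrum(f, m0=1e19, fc=0.2, r_km=100.0)  # illustrative inputs
    print(spec.max())
    ```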

  15. Probabilistic estimation of earthquake-induced tsunami occurrences in the Adriatic and northern Ionian seas

    NASA Astrophysics Data System (ADS)

    Armigliato, Alberto; Tinti, Stefano

    2010-05-01

    In the framework of the EU-funded project TRANSFER (Tsunami Risk ANd Strategies For the European Region), we faced the problem of quantitatively assessing the tsunami hazard in the Adriatic and north Ionian Seas. Tsunami catalogues indicate that the Ionian Sea coasts have been hit by several large historical tsunamis, some of which were of local nature (especially along eastern Sicily, eastern Calabria and the Greek Ionian Islands), while others had trans-basin relevance, like those generated along the western Hellenic Trench. In the Adriatic Sea the historical tsunami activity is indeed lower, but not negligible: the most exposed regions on the western side of the basin are Romagna-Marche, Gargano and southern Apulia, while on the eastern side the Dalmatian and Albanian coastlines show the largest tsunami exposure. To quantitatively assess the exposure of the selected coastlines to tsunamis we used a hybrid statistical-deterministic approach, already applied in the recent past to the southern Tyrrhenian and Ionian coasts of Italy. The general idea is to base the tsunami hazard analyses on the computation of the probability of occurrence of tsunamigenic earthquakes, which is appropriate in basins where the number of known historical tsunamis is too scarce to be used in reliable statistical analyses, and where the largest part of the tsunamis had tectonic origin. The approach is based on the combination of two steps of different nature. The first step consists in the creation of a single homogeneous earthquake catalogue starting from suitably selected catalogues pertaining to each of the main regions facing the Adriatic and north Ionian basins (Italy, Croatia, Montenegro, Greece). The final catalogue contains 6619 earthquakes with moment magnitude ranging from 4.5 to 8.3 and focal depth shallower than 50 km. The limitations in magnitude and depth are based on the assumption that earthquakes of magnitude lower than 4.5 and depth greater than 50 km have no significant tsunamigenic potential.

  16. Variable anelastic attenuation and site effect in estimating source parameters of various major earthquakes including the Mw 7.8 Nepal and Mw 7.5 Hindu Kush earthquakes by using far-field strong-motion data

    NASA Astrophysics Data System (ADS)

    Kumar, Naresh; Kumar, Parveen; Chauhan, Vishal; Hazarika, Devajit

    2016-12-01

    Strong-motion records of the recent Gorkha, Nepal earthquake (Mw 7.8), its strong aftershocks and seismic events in the Hindu Kush region have been analysed for estimation of source parameters. The Mw 7.8 Gorkha, Nepal earthquake of 25 April 2015 and six of its aftershocks in the magnitude range 5.3-7.3 were recorded at the Multi-Parametric Geophysical Observatory, Ghuttu, Garhwal Himalaya (India), >600 km west of the epicentre of the main shock. Acceleration data from eight earthquakes in the Hindu Kush region were also recorded at this observatory, which is located >1000 km east of the epicentre of the Mw 7.5 Hindu Kush earthquake of 26 October 2015. The shear-wave spectra of the acceleration records are corrected for the possible effects of anelastic attenuation at both the source and the recording site, as well as for site amplification. The strong-motion data of six local earthquakes are used to estimate the site amplification and the shear-wave quality factor (Q_β) at the recording site. The frequency-dependent relation Q_β(f) = 124 f^0.98 is computed at the Ghuttu station by using an inversion technique. The corrected spectrum is compared with the theoretical spectrum obtained from Brune's circular model for the horizontal components using a grid-search algorithm. The computed seismic moments, stress drops and source radii of the earthquakes used in this work range over 8.20 × 10^16 to 5.72 × 10^20 N·m, 7.1-50.6 bars and 3.55-36.70 km, respectively. The results match the available values obtained by other agencies.
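
    The quoted stress drops and source radii follow from the standard Brune circular-model relations; a sketch is given below, with an illustrative corner frequency and shear-wave speed chosen near the lower end of the quoted ranges.

    ```python
    import math

    def brune_radius_km(fc_hz, beta_kms=3.5):
        """Brune source radius: r = 2.34 * beta / (2 * pi * fc)."""
        return 2.34 * beta_kms / (2.0 * math.pi * fc_hz)

    def stress_drop_bars(m0_nm, radius_km):
        """Circular-crack stress drop = 7*M0/(16*r^3), converted Pa -> bars."""
        r_m = radius_km * 1e3
        return 7.0 * m0_nm / (16.0 * r_m**3) / 1e5

    r = brune_radius_km(0.35)                 # corner frequency assumed for demo
    print(r, stress_drop_bars(8.20e16, r))    # ~3.7 km and ~7 bars
    ```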

  17. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated, using field methods and a database of coseismic effects created as a GIS MapInfo application, with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effect from the epicenter and from the causative fault (Lunina et al., 2014). The estimated limit distance to the fault for the largest event (Ms = 8.1) is 130 km, which is 3.5 times shorter than that to the epicenter, 450 km. In addition, the wider the fault zone, the fewer the liquefaction cases: 93% of them occur within 40 km of the causative fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows the distances to be within 8 km, with 69% of all cases within 1 km. As a result, predictive models have been created for the locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested to relate the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height and intensity index of clastic dikes) to Ms and to local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The obtained results provide a basis for modeling the distribution of this geohazard for the purposes of prediction and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author would like to express gratitude to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing the laboratory to carry out this research, and to the Russian Scientific Foundation for financial support (Grant 14-17-00007).

  18. Characteristics of radiation and propagation of seismic waves in the Baikal Rift Zone estimated by simulations of acceleration time histories of the recorded earthquakes

    NASA Astrophysics Data System (ADS)

    Pavlenko, O. V.; Tubanov, Ts. A.

    2017-01-01

    The regularities in the radiation and propagation of seismic waves within the Baikal Rift Zone in Buryatia are studied to estimate the ground motion parameters of probable future strong earthquakes. The regional parameters of seismic radiation and propagation are estimated by stochastic simulation (which provides the closest agreement between the calculations and observations) of the acceleration time histories of the earthquakes recorded by the Ulan-Ude seismic station. The acceleration time histories of the strongest earthquakes (Mw 3.4-4.8) that occurred in 2006-2011 at epicentral distances of 96-125 km and source depths of 8-12 km have been modeled. The calculations are conducted with estimates of the Q-factor which were previously obtained for the region. The frequency-dependent attenuation and geometrical spreading are estimated from data on the deep structure of the crust and upper mantle (velocity sections) in the Ulan-Ude region, and the parameters determining the wave forms and duration of the acceleration time histories are found by fitting. These parameters describe all the considered earthquakes fairly well. The Ulan-Ude station can be considered as a reference bedrock station with minimal local effects. The obtained estimates of the parameters of seismic radiation and propagation can be used for forecasting the ground motion of future strong earthquakes and for constructing seismic zoning maps for Buryatia.

  19. Estimating locations and magnitudes of earthquakes in southern California from modified Mercalli intensities

    USGS Publications Warehouse

    Bakun, W.H.

    2006-01-01

    Modified Mercalli intensity (MMI) assignments, instrumental moment magnitudes M, and epicenter locations of thirteen 5.6 ≤ M ≤ 7.1 "training-set" events in southern California were used to obtain the attenuation relation MMI = 1.64 + 1.41M - 0.00526Δh - 2.63 log Δh, where Δh is the hypocentral distance in kilometers and M is moment magnitude. Intensity magnitudes MI and locations for five 5.9 ≤ M ≤ 7.3 independent test events were consistent with the instrumental source parameters. Fourteen "historical" earthquakes between 1890 and 1927 were then analyzed. Of particular interest are the MI 7.2 9 February 1890 and MI 6.6 28 May 1892 earthquakes, which were previously assumed to have occurred near the southern San Jacinto fault; a more likely location is in the Eastern California Shear Zone (ECSZ). These events, and the 1992 M 7.3 Landers and 1999 M 7.1 Hector Mine events, suggest that the ECSZ has been seismically active since at least the end of the nineteenth century. The earthquake catalog completeness level in the ECSZ is ≈ M 6.5 at least until the early twentieth century.
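
    The published relation can be coded directly. A minimal sketch (function names are ours): evaluating it forward predicts MMI at a site, while solving it for M at one site gives the per-site magnitude whose average over all intensity assignments is the intensity magnitude MI:

```python
import math

def mmi_predicted(M, d_hyp_km):
    """Bakun (2006) southern California relation:
    MMI = 1.64 + 1.41*M - 0.00526*dh - 2.63*log10(dh)."""
    return 1.64 + 1.41 * M - 0.00526 * d_hyp_km - 2.63 * math.log10(d_hyp_km)

def magnitude_from_mmi(mmi, d_hyp_km):
    """The same relation solved for M at a single site."""
    return (mmi - 1.64 + 0.00526 * d_hyp_km + 2.63 * math.log10(d_hyp_km)) / 1.41

# Example: an MMI 7 assignment 30 km (epicentral) from a 10-km-deep source
dh = math.hypot(30.0, 10.0)
print(f"predicted MMI for M 6.5: {mmi_predicted(6.5, dh):.1f}")
print(f"implied M from MMI 7:    {magnitude_from_mmi(7.0, dh):.2f}")
```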

  20. Estimation of furrow irrigation sediment loss using an artificial neural network

    USDA-ARS?s Scientific Manuscript database

    The area irrigated by furrow irrigation in the U.S. has been steadily decreasing but still represents about 20% of the total irrigated area in the U.S. Furrow irrigation sediment loss is a major water quality issue and a method for estimating sediment loss is needed to quantify the environmental imp...

  1. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  4. Estimation of source processes of the 2016 Kumamoto earthquakes from strong motion waveforms

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Suzuki, W.; Aoi, S.; Sekiguchi, H.

    2016-12-01

    In this study, we estimated the source processes for two large events of the 2016 Kumamoto earthquakes (the M7.3 event at 1:25 JST on April 16, 2016 and the M6.5 event at 21:26 JST on April 14, 2016) from strong motion waveforms using multiple-time-window linear waveform inversion (Hartzell and Heaton 1983; Sekiguchi et al. 2000). Based on the observations of surface ruptures, the spatial distribution of aftershocks, and the geodetic data, a realistic curved fault model was developed for the source-process analysis of the M7.3 event. The source model obtained for the M7.3 event, with a seismic moment of 5.5 × 10^19 Nm (Mw 7.1), had two significant ruptures. One rupture propagated toward the northeastern shallow region from 4 s after rupture initiation, and continued with large slips to approximately 16 s. This rupture caused a large slip region with a peak slip of 3.8 m that was located 10-30 km northeast of the hypocenter and reached the caldera of Mt. Aso. The contribution of the large slip region to the seismic waveforms was large at many stations. Another rupture propagated toward the surface from the hypocenter at 2-6 s, and then propagated toward the northeast along the near surface at 6-10 s. This rupture contributed largely to the seismic waveforms at the stations south of the fault and close to the hypocenter. A comparison with the results obtained using a single fault plane model demonstrates that the use of the curved fault model led to an improved waveform fit at the stations south of the fault. The extent of the large near-surface slips in this source model for the M7.3 event is roughly consistent with the extent of the observed large surface ruptures. The source model obtained for the M6.5 event, with a seismic moment of 1.7 × 10^18 Nm (Mw 6.1), had large slips in the region around the hypocenter and in the shallow region north-northeast of the hypocenter, both of which had a maximum slip of 0.7 m. The rupture of the M6.5 event propagated from the former region
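
    At its core, the multiple-time-window method is a large linear least-squares problem with a positivity constraint on subfault slip. A toy skeleton follows; random matrices stand in for the real Green's functions, and plain damping replaces the spatial smoothing used in actual studies:

```python
import numpy as np
from scipy.optimize import nnls

# Toy multiple-time-window inversion: each column of G is the waveform from
# unit slip on one subfault in one time window; nnls enforces slip >= 0.
n_sub, n_win, n_samp = 40, 5, 2000
n_par = n_sub * n_win
rng = np.random.default_rng(4)

G = rng.standard_normal((n_samp, n_par))             # stand-in Green's functions
m_true = np.abs(rng.standard_normal(n_par)) * (rng.random(n_par) < 0.2)
d = G @ m_true + 0.05 * rng.standard_normal(n_samp)  # "observed" seismograms

# Append damping rows; real studies use spatial/temporal smoothing instead
lam = 0.5
G_reg = np.vstack([G, lam * np.eye(n_par)])
d_reg = np.concatenate([d, np.zeros(n_par)])
m_est, _ = nnls(G_reg, d_reg)
print(f"recovered fraction of total slip: {m_est.sum() / m_true.sum():.2f}")
```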

  5. Estimation of soil loss by water erosion in the Chinese Loess Plateau using Universal Soil Loss Equation and GRACE

    NASA Astrophysics Data System (ADS)

    Schnitzer, S.; Seitz, F.; Eicker, A.; Güntner, A.; Wattenbach, M.; Menzel, A.

    2013-06-01

    For the estimation of soil loss by erosion in the strongly affected Chinese Loess Plateau we applied the Universal Soil Loss Equation (USLE) using a number of input data sets (monthly precipitation, soil types, digital elevation model, land cover and soil conservation measures). Calculations were performed in ArcGIS and SAGA. The large-scale soil erosion in the Loess Plateau results in a strong non-hydrological mass change. In order to investigate whether the resulting mass change from USLE may be validated by the gravity field satellite mission GRACE (Gravity Recovery and Climate Experiment), we processed different GRACE level-2 products (ITG, GFZ and CSR). The mass variations estimated in the GRACE trend were relatively close to the observed sediment yield data of the Yellow River. However, the soil losses resulting from two USLE parameterizations were comparatively high since USLE does not consider the sediment delivery ratio. Most eroded soil stays in the study area and only a fraction is exported by the Yellow River. Thus, the resultant mass loss appears to be too small to be resolved by GRACE.
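
    For orientation, the USLE itself is a simple product of factors, so a per-cell evaluation is a one-liner; the factor values below are purely illustrative, not calibrated for the Loess Plateau:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: mean annual soil loss A = R*K*LS*C*P,
    with rainfall erosivity R, soil erodibility K, slope length-steepness
    factor LS, cover-management factor C and support practice factor P."""
    return R * K * LS * C * P

# One grid cell with purely illustrative factor values:
A = usle_soil_loss(R=120.0, K=0.45, LS=3.2, C=0.25, P=1.0)
print(f"soil loss: {A:.1f} t ha^-1 yr^-1")
```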

  6. Combining MODIS and Landsat imagery to estimate and map boreal forest cover loss

    USGS Publications Warehouse

    Potapov, P.; Hansen, Matthew C.; Stehman, S.V.; Loveland, T.R.; Pittman, K.

    2008-01-01

    Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to its unique role in global timber stock, carbon sequestration and deposition, and high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% versus 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss.
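
    The sampling design described (MODIS-based strata, Landsat blocks interpreted within each stratum) leads to the standard stratified estimator of a proportion and its standard error. A sketch with hypothetical per-stratum numbers, not those of the study:

```python
import math

# Hypothetical per-stratum inputs: N = blocks in the biome, n = sampled
# blocks, mean/var = sample mean and variance of the per-block loss fraction.
strata = {
    "high":   dict(N=900,   n=50, mean=0.080, var=0.0030),
    "medium": dict(N=2500,  n=38, mean=0.020, var=0.0010),
    "low":    dict(N=11000, n=30, mean=0.004, var=0.0002),
}

N_total = sum(s["N"] for s in strata.values())
loss = sum(s["N"] / N_total * s["mean"] for s in strata.values())
se = math.sqrt(sum((s["N"] / N_total) ** 2 * s["var"] / s["n"]
                   for s in strata.values()))
print(f"gross forest cover loss: {100 * loss:.2f}% +/- {100 * se:.2f}% (1 s.e.)")
```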

  7. Real time earthquake information and tsunami estimation system for Indonesia, Philippines and Central-South American regions

    NASA Astrophysics Data System (ADS)

    Pulido Hernandez, N. E.; Inazu, D.; Saito, T.; Senda, J.; Fukuyama, E.; Kumagai, H.

    2015-12-01

    Southeast Asia as well as the Central-South American regions are among the most seismically active regions in the world. To contribute to the understanding of the source processes of earthquakes, the National Research Institute for Earth Science and Disaster Prevention (NIED) has maintained the International Seismic Network (ISN) since 2007. Continuous seismic waveforms from 294 broadband seismic stations in the Indonesia, Philippines, and Central-South America regions are received in real time at NIED and used for automatic location of seismic events. Using these data we perform automatic and manual estimation of the moment tensors of seismic events (Mw>4.5) using the SWIFT program developed at NIED. We simulate the propagation of local tsunamis in these regions using a tsunami simulation code and visualization system developed at NIED, combined with the CMT parameters estimated by SWIFT. The goals of the system are to provide rapid and reliable earthquake and tsunami information, in particular for large seismic events, and to produce an appropriate database of earthquake source parameters and tsunami simulations for research. The system uses the hypocenter location and magnitude of earthquakes automatically determined at NIED by the SeisComP3 system (GFZ) from the continuous seismic waveforms in the region to perform the automated calculation of moment tensors by SWIFT, and then carries out the automatic simulation and visualization of the tsunami. The system generates maps of maximum tsunami heights within the target regions and along the coasts and displays them with the fault model parameters used for the tsunami simulations. Tsunami calculations are performed for all events with available automatic SWIFT/CMT solutions. Tsunami calculations are re-computed using SWIFT manual solutions for events with Mw>5.5 and centroid depths shallower than 100 km. Revised maximum tsunami heights as well as animations of tsunami propagation are also calculated and displayed for the two double couple solutions by SWIFT

  8. Moment Tensor Estimation using a Grid-Search approach for the Pawnee, Oklahoma Mw 5.8 Earthquake

    NASA Astrophysics Data System (ADS)

    Friberg, P. A.; Stachnik, J.; Baker, B. I.

    2016-12-01

    Following the Mw 5.8 earthquake in Pawnee, Oklahoma, a series of moment tensor solutions were published by the National Earthquake Information Center (NEIC). While all solutions agree that the focal mechanism was vertical strike-slip in nature, there is a great deal of variance in the optimal depth and double couple percentage. Such variance in depth is particularly important for interpretation, as Oklahoma is actively engaged in hydrocarbon production and waste-water disposal. The GCMT solution using teleseismic long period surface waves favors a deep (~18 km), strongly double couple solution (~97%). In contrast, the long period body wave and regional moment tensor solutions favor an intermediate depth (~10 km) and double couple percentage (~75%), and the high frequency body wave solution prefers a shallow depth (~2 km), marginally double couple solution (~56%). The depth from traditional travel time location techniques at the NEIC was 5.6 +/- 1.3 km. The intent of this study is to assess the uncertainty in the moment tensor estimation using an exhaustive parameter grid-search strategy recently developed by Tape and Tape (2015). Using this approach we systematically scan through depths and moment tensors and, for each depth-moment tensor pair, compute a misfit function. To better understand how our different datasets add information to estimating the unknown parameters, we apply the proposed methodology to regional surface waves, teleseismic body waves, and, finally, both surface and teleseismic body waves. We present not only the best solution but a set of plausible moment tensors at each depth. A desirable outcome of this study would be to demonstrate that the differing solutions submitted to the NEIC are plausible in the sense that they best explain the data used in their respective inversions but may not necessarily adequately resolve a particular model parameter. http://earthquake.usgs.gov/earthquakes/eventpage/us10006jxs
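
    The essence of the grid search is to tabulate misfit over the whole parameter space rather than keeping only the optimum. A two-parameter caricature (depth and strike only, with a synthetic stand-in for the forward model; all names and values are ours) shows the structure:

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)          # time axis (s)

def synthetic(depth_km, strike_deg):
    """Placeholder forward model; a real search would generate full
    moment-tensor synthetics from Green's functions here."""
    return (np.exp(-t / (10.0 + depth_km))
            * np.sin(2 * np.pi * 0.08 * t + np.radians(strike_deg)))

rng = np.random.default_rng(1)
observed = synthetic(6.0, 35.0) + 0.05 * rng.standard_normal(t.size)

# Exhaustive scan: keep the whole misfit table, not just the minimum, so the
# flatness of the misfit surface in depth (the ambiguity at issue) is visible.
results = []
for depth in np.arange(2.0, 20.5, 0.5):
    for strike in np.arange(0.0, 180.0, 5.0):
        r = observed - synthetic(depth, strike)
        results.append((float(r @ r), depth, strike))
misfit, depth, strike = min(results)
print(f"best fit: depth {depth} km, strike {strike} deg (misfit {misfit:.2f})")
```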

  9. An atlas of ShakeMaps for selected global earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  10. Simultaneous estimation of b-values and detection rates of earthquakes for the application to aftershock probability forecasting

    NASA Astrophysics Data System (ADS)

    Katsura, K.; Ogata, Y.

    2004-12-01

    Reasenberg and Jones [Science, 1989, 1994] proposed aftershock probability forecasting based on the joint distribution [Utsu, J. Fac. Sci. Hokkaido Univ., 1970] of the modified Omori formula of aftershock decay and the Gutenberg-Richter law of magnitude frequency, where the respective parameters are estimated by the maximum likelihood method [Ogata, J. Phys. Earth, 1983; Utsu, Geophys. Bull. Hokkaido Univ., 1965; Aki, Bull. Earthq. Res. Inst., 1965]. The public forecast has been implemented by the responsible agencies in California and Japan. However, a considerable difficulty in the above procedure is that, owing to the contamination of arriving seismic waves, the detection rate of aftershocks is extremely low during the period immediately after the main shock, say, during the first day, when the forecasting is most critical for the public in the affected area. Therefore, for forecasting a probability during such a period, a generic model with a set of standard parameter values for California or Japan is adopted. For an effective and realistic estimation, I propose to utilize the statistical model introduced by Ogata and Katsura [Geophys. J. Int., 1993] for the simultaneous estimation of the b-value of the Gutenberg-Richter law together with the detection rate (probability) of earthquakes in each magnitude band from the data of all detected events, where both parameters are allowed to change in time. Thus, by using all detected aftershocks from the beginning of the period, we can estimate the underlying modified Omori rate of both detected and undetected events and the b-value changes, taking the time-varying missing rates of events into account. A similar computation is applied to the ETAS model for complex aftershock activity or regional seismicity where substantial missing events are expected immediately after a large aftershock or another strong earthquake in the vicinity. Demonstrations of the present procedure will be shown for recent examples
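
    In the Ogata-Katsura model the observed magnitude density is the Gutenberg-Richter density thinned by a detection probability, commonly a cumulative normal q(m). A time-independent sketch of the joint maximum-likelihood fit on a synthetic catalog (the full model additionally lets the parameters vary in time):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, mags):
    """Ogata-Katsura (1993) magnitude density: G-R term beta*exp(-beta*m)
    thinned by a cumulative-normal detection rate q(m) = Phi((m-mu)/sigma)."""
    beta, mu, sigma = params
    if beta <= 0 or sigma <= 0:
        return np.inf
    log_unnorm = np.log(beta) - beta * mags + norm.logcdf((mags - mu) / sigma)
    grid = np.linspace(mags.min() - 3.0, mags.max() + 3.0, 2000)
    dens = beta * np.exp(-beta * grid) * norm.cdf((grid - mu) / sigma)
    return -(log_unnorm.sum() - mags.size * np.log(trapezoid(dens, grid)))

# Synthetic catalog: true b = 1.0, detection midpoint mu = 1.5, width 0.3
rng = np.random.default_rng(2)
m_all = rng.exponential(1.0 / (1.0 * np.log(10.0)), 50000)
kept = m_all[rng.random(m_all.size) < norm.cdf((m_all - 1.5) / 0.3)]

fit = minimize(neg_log_lik, x0=[2.0, 1.0, 0.5], args=(kept,),
               method="Nelder-Mead")
beta_hat, mu_hat, sigma_hat = fit.x
print(f"b = {beta_hat / np.log(10.0):.2f}, mu = {mu_hat:.2f}, "
      f"sigma = {sigma_hat:.2f}, n = {kept.size}")
```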

  11. Rapid uncertainty estimation in finite fault inversion: Case study for the 2015, Mw 8.3 Illapel earthquake

    NASA Astrophysics Data System (ADS)

    Cummins, P. R.; Benavente, R. F.; Dettmer, J.; Williamson, A.

    2016-12-01

    Rapid estimation of the slip distribution for large earthquakes can be useful for the early phases of emergency response, for rapid impact assessment, and for tsunami early warning. Model parameter uncertainties can be crucial for meaningful interpretation of such slip models, but they are often ignored. However, estimation of uncertainty in linear finite fault inversion is difficult because of the positivity constraints that are almost always applied. We have shown in previous work that positivity can be realized by imposing a prior such that the logs of the subfault scalar moments are smoothly distributed on the fault surface; each scalar moment is then intrinsically non-negative while the posterior PDF can still be approximated as Gaussian. The inversion is nonlinear, but we showed that the most probable solution can be found by iterative methods that are not computationally demanding. In addition, the posterior covariance matrix (which provides the uncertainties) can be estimated from the most probable solution, using an analytic expression for the Hessian of the cost function. We studied this approach previously for synthetic W-phase data and showed that a first-order estimate of the uncertainty in the slip model can be obtained. Here we apply this method to the seismic W-phase recorded following the 2015, Mw 8.3 Illapel earthquake. Our results show a slip distribution with maximum slip near the subduction zone trench axis, with uncertainties that scale roughly with the slip value. We also consider application of this method to multiple data types: seismic W-phase, geodetic, and tsunami.

  12. Estimating timber losses from a town ant colony with aerial photographs

    Treesearch

    John C. Moser

    1986-01-01

    Aerial photographs were used to locate an individual nest of Atta texana (Buckley) and to estimate the area of damage in a plantation of loblolly pine (Pinus taeda L.). Stumpage loss from the nest over a period of 30 years was estimated to be $653.

  13. Visual estimation versus gravimetric measurement of postpartum blood loss: a prospective cohort study.

    PubMed

    Al Kadri, Hanan M F; Al Anazi, Bedayah K; Tamim, Hani M

    2011-06-01

    One of the major problems in international literature is how to measure postpartum blood loss with accuracy. We aimed in this research to assess the accuracy of visual estimation of postpartum blood loss (by each of two main health-care providers) compared with the gravimetric calculation method. We carried out a prospective cohort study at King Abdulaziz Medical City, Riyadh, Saudi Arabia between 1 November 2009 and 31 December 2009. All women who were admitted to labor and delivery suite and delivered vaginally were included in the study. Postpartum blood loss was visually estimated by the attending physician and obstetrics nurse and then objectively calculated by a gravimetric machine. Comparison between the three methods of blood loss calculation was carried out. A total of 150 patients were included in this study. There was a significant difference between the gravimetric calculated blood loss and both health-care providers' estimation with a tendency to underestimate the loss by about 30%. The background and seniority of the assessing health-care provider did not affect the accuracy of the estimation. The corrected incidence of postpartum hemorrhage in Saudi Arabia was found to be 1.47%. Health-care providers tend to underestimate the volume of postpartum blood loss by about 30%. Training and continuous auditing of the diagnosis of postpartum hemorrhage is needed to avoid missing cases and thus preventing associated morbidity and mortality.

  14. Research on earthquake prediction from infrared cloud images

    NASA Astrophysics Data System (ADS)

    Fan, Jing; Chen, Zhong; Yan, Liang; Gong, Jing; Wang, Dong

    2015-12-01

    In recent years, large earthquakes have occurred frequently all over the world. In the face of these inevitable natural disasters, earthquake prediction is particularly important for avoiding further loss of life and property. Many achievements in the field of predicting earthquakes from remote sensing images have been obtained in the last few decades. However, the traditional prediction methods have the limitation that they cannot forecast the epicenter location accurately and automatically. In order to solve this problem, a new earthquake prediction method based on extracting the texture and frequency of appearance of earthquake clouds is proposed in this paper. First, the infrared cloud images are enhanced. Second, the texture feature vector of each pixel is extracted. The pixels are then classified and merged into several small suspect areas. Finally, the suspect areas are tracked to estimate the possible epicenter location. A retrospective experiment on the Ludian earthquake shows that this approach can forecast the epicenter feasibly and accurately.

  15. Estimation of slip scenarios of mega-thrust earthquakes and strong motion simulations for Central Andes, Peru

    NASA Astrophysics Data System (ADS)

    Pulido, N.; Tavera, H.; Aguilar, Z.; Chlieh, M.; Calderon, D.; Sekiguchi, T.; Nakai, S.; Yamazaki, F.

    2012-12-01

    We have developed a methodology for the estimation of slip scenarios for megathrust earthquakes based on a model of interseismic coupling (ISC) distribution in subduction margins obtained from geodetic data, as well as information on the recurrence of historical earthquakes. This geodetic slip model (GSM) delineates the long wavelength asperities within the megathrust. For the simulation of strong ground motion it becomes necessary to introduce short wavelength heterogeneities into the source slip to be able to efficiently simulate high frequency ground motions. To achieve this purpose we elaborate "broadband" source models constructed by combining the GSM with several short wavelength slip distributions obtained from a von Karman PSD function with random phases. Our application of the method to the Central Andes in Peru shows that this region presently has the potential of generating an earthquake with moment magnitude 8.9, with a peak slip of 17 m and a source area of approximately 500 km along strike and 165 km along dip. For the strong motion simulations we constructed 12 broadband slip models and considered 9 possible hypocenter locations for each model. We performed strong motion simulations for the whole Central Andes region (Peru), spanning an area from the Nazca ridge (16°S) to the Mendana fracture (9°S). For this purpose we use the hybrid strong motion simulation method of Pulido et al. (2004), improved to handle a general slip distribution. Our simulated PGA and PGV distributions indicate that a region of at least 500 km along the coast of the Central Andes is subjected to an MMI intensity of approximately 8, for the slip model that yielded the largest ground motions among the 12 slip models considered, averaged over all assumed hypocenter locations. This result is in agreement with the macroseismic intensity distribution estimated for the great 1746 earthquake (M~9) in the Central Andes (Dorbath et al. 1990). Our results indicate that the simulated PGA and PGV for
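
    The broadband models hinge on generating random slip whose spectrum follows a von Karman PSD. A minimal recipe is to shape random phases in the wavenumber domain; the correlation lengths, Hurst exponent, and grid values below are illustrative, not the study's:

```python
import numpy as np

def von_karman_slip(nx, nz, dx, ax, az, hurst, seed=0):
    """Zero-mean, unit-variance random field whose power spectrum follows a
    von Karman PSD, P(k) ~ (1 + k^2)^-(hurst+1), with correlation lengths
    ax, az (km); built by shaping random phases in the wavenumber domain."""
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    kz = 2 * np.pi * np.fft.fftfreq(nz, dx)
    KX, KZ = np.meshgrid(kx, kz)
    k2 = (ax * KX) ** 2 + (az * KZ) ** 2
    amp = (1.0 + k2) ** (-(hurst + 1.0) / 2.0)       # sqrt of the PSD
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    field = np.fft.ifft2(amp * phase).real
    return (field - field.mean()) / field.std()

# 500 km (strike) x 165 km (dip) fault at 5 km spacing; the short-wavelength
# field would then be scaled and added to the long-wavelength GSM slip.
slip_hf = von_karman_slip(nx=100, nz=33, dx=5.0, ax=20.0, az=10.0, hurst=0.8)
print(slip_hf.shape, f"std = {slip_hf.std():.2f}")
```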

  16. A smartphone application for earthquakes that matter!

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many of the felt earthquakes as possible, regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the US Geological Survey, where the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking they have just felt. Altogether, we estimate the number of detected felt earthquakes to be around 1,000 per year, compared with the 35,000 earthquakes annually reported by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service @LastQuake. We will present the identification process of the earthquakes that matter, the smartphone application itself (to be released in May) and its future evolutions.

  17. Tag loss can bias Jolly-Seber capture-recapture estimates

    USGS Publications Warehouse

    McDonald, T.L.; Amstrup, Steven C.; Manly, B.F.J.

    2003-01-01

    We identified cases where the Jolly-Seber estimator of population size is biased under tag loss and tag-induced mortality by examining the mathematical arguments and performing computer simulations. We found that, except under certain tag-loss models and high sample sizes, the population size estimators (uncorrected for tag loss) are severely biased high when tag loss or tag-induced mortality occurs. Our findings verify that this misconception about effects of tag loss and tag-induced mortality could have serious consequences for field biologists interested in population size. Reiterating common sense, we encourage those engaged in capture-recapture studies to be careful and humane when handling animals during tagging, to use tags with high retention rates, to double-tag animals when possible, and to strive for the highest capture probabilities possible.

  18. An estimation method of the fault wind turbine power generation loss based on correlation analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Zhu, Shourang; Wang, Wei

    2017-01-01

    A method for estimating the power generation loss of a faulty wind turbine is proposed in this paper. In this method, the wind speed is estimated, and the estimated value of the lost power generation is obtained by combining the estimated wind speed with the actual output power characteristic curve of the wind turbine. In the wind speed estimation, correlation analysis is used: wind speed data from periods of normal operation of the faulty wind turbine are selected, and regression analysis is then applied to obtain the estimated wind speed. Based on this estimation method, the paper presents an implementation in the monitoring system of the wind turbine and verifies the effectiveness of the proposed method.

  19. Estimation of fault propagation distance from fold shape: Implications for earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Allmendinger, Richard W.; Shaw, John H.

    2000-12-01

    A numerical grid search using the trishear kinematic model can be used to extract both slip and the distance that a fault tip line has propagated during growth of a fault-propagation fold. The propagation distance defines the initial position of the tip line at the onset of slip. In the Santa Fe Springs anticline of the Los Angeles basin, we show that the tip line of the underlying Puente Hills thrust fault initiated at the same position as the 1987 magnitude 6.0 Whittier Narrows earthquake.

  20. Estimating Phosphorus Loss at the Whole-Farm Scale with User-Friendly Models

    NASA Astrophysics Data System (ADS)

    Vadas, P.; Powell, M.; Brink, G.; Busch, D.; Good, L.

    2014-12-01

    Phosphorus (P) loss from agricultural fields and delivery to surface waters persists as a water quality impairment issue. For dairy farms, P can be lost from cropland, pastures, barnyards, and open-air cattle lots; and all these sources must be evaluated to determine which ones are a priority for P loss remediation. We used interview surveys to document land use, cattle herd characteristics, and manure management for four grazing-based dairy farms in Wisconsin, USA. We then used the APLE and Snap-Plus models to estimate annual P loss from all areas on these farms and determine their relative contribution to whole-farm P loss. At the whole-farm level, average annual P loss (kg ha-1) from grazing-based dairy farms was low (0.6 to 1.8 kg ha-1), generally because a significant portion of land was in permanently vegetated pastures or hay and had low erosion. However, there were areas on the farms that represented sources of significant P loss. For cropland, the greatest P loss was from areas with exposed soil, typically for corn production, and especially on steeper sloping land. The farm areas with the greatest P loss had concentrated animal housing, including barnyards, and over-wintering and young-stock lots. These areas can represent from about 5% to almost 30% of total farm P loss, depending on lot management and P loss from other land uses. Our project builds on research to show that producer surveys can provide reliable management information to assess whole-farm P loss. It also shows that we can use models like RUSLE2, Snap-Plus, and APLE to rapidly, reliably, and quantitatively estimate P loss in runoff from all areas on a dairy farm and identify areas in greatest need of alternative management to reduce P loss.

  1. The size of earthquakes

    USGS Publications Warehouse

    Kanamori, H.

    1980-01-01

    How we should measure the size of an earthquake has been historically a very important, as well as a very difficult, seismological problem. For example, figure 1 shows the loss of life caused by earthquakes in recent times and clearly demonstrates that 1976 was the worst year for earthquake casualties in the 20th century. However, the damage caused by an earthquake is due not only to its physical size but also to other factors such as where and when it occurs; thus, figure 1 is not necessarily an accurate measure of the "size" of earthquakes in 1976. The point is that the physical process underlying an earthquake is highly complex; we therefore cannot express every detail of an earthquake by a simple straightforward parameter. Indeed, it would be very convenient if we could find a single number that represents the overall physical size of an earthquake. This was in fact the concept behind the Richter magnitude scale introduced in 1935.

  2. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, in many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that the source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures.

  3. Estimation of fault parameters using GRACE observations and analytical model. Case study: The 2010 Chile earthquake

    NASA Astrophysics Data System (ADS)

    Fatolazadeh, Farzam; Naeeni, Mehdi Raoofian; Voosoghi, Behzad; Rahimi, Armin

    2017-07-01

    In this study, an inversion method is used to constrain the fault parameters of the 2010 Chile earthquake using gravimetric observations. The formulation consists of using monthly geopotential coefficients from GRACE observations in conjunction with the analytical model of Okubo (1992), which accounts for the gravity changes resulting from an earthquake. At first, it is necessary to eliminate the hydrological and oceanic effects from the GRACE monthly coefficients; then a spatio-spectral localization analysis, based on wavelet local analysis, is used to filter the GRACE observations and better refine the tectonic signal. Finally, the corrected GRACE observations are compared with the analytical model using a nonlinear inversion algorithm. Our results show discernible differences between the average slip computed using gravity observations and those predicted from other co-seismic models. In this study, fault parameters such as length, width, depth, dip, strike and slip are computed using the changes in gravity and gravity gradient components. Using the variations of the gravity gradient components, the above-mentioned parameters are determined as 428 ± 6 km, 203 ± 5 km, 5 km, 10°, 13° and 8 ± 1.2 m, respectively. Moreover, the values of the seismic moment and moment magnitude are 2.09 × 10^22 N m and Mw 8.88, respectively, which show small differences from the values reported by the USGS (1.8 × 10^22 N m and Mw 8.83).

  4. To what extent does earthquake variability affect slip rate estimates; a test using San Andreas fault paleoseismology.

    NASA Astrophysics Data System (ADS)

    Weldon, R. J.

    2011-12-01

    Slip rate is one of the most fundamental properties of a fault, defining its role in local or global tectonics and largely controlling the hazard it poses to society. Unquestionably a fault's slip rate changes through time because faults have a finite lifetime (i.e. they form where they once weren't and they eventually die and become inactive); it is generally accepted that this time scale is related to the rate at which the tectonic driving forces vary, so faults are expected to have relatively constant slip rates over 10s of thousands to millions of years. Because most faults slip in discrete events (earthquakes) it is also clear that one must measure the slip rate over enough time to average the variability due to the seismic cycle. The rapid growth in geologic, geodetic and geochronologic tools to determine slip rate over a broad range of time intervals has generated tremendous interest in and speculation of other processes that vary slip rate between these two accepted extremes. To test hypotheses of processes that vary slip rate between the seismic cycle and a fault's tectonic lifetime one needs to fully understand the uncertainties that are associated with a slip rate estimate, including those introduced by these two accepted variations. This abstract addresses the extent to which variability in the timing and size of earthquakes affects slip rate estimates, and uses the rapidly growing paleoseismic data set for the southern San Andreas fault to provide the necessary information to quantitatively address this issue. Good paleoseismic evidence exists for intervals without earthquakes of at least three times the average long term recurrence interval. Often these long hiatuses are balanced by clusters of earthquakes during which at least three earthquakes can occur in a time period less than a single average interval. Thus, slip rates from sample intervals that are similar to the long term average recurrence interval are useless and even slip rates determined

  5. Identification and Estimation of Postseismic Deformation: Implications for Plate Motion Models, Models of the Earthquake Cycle, and Terrestrial Reference Frame Definition

    NASA Astrophysics Data System (ADS)

    Kedar, S.; Bock, Y.; Moore, A. W.; Argus, D. F.; Fang, P.; Liu, Z.; Haase, J. S.; Su, L.; Owen, S. E.; Goldberg, D.; Squibb, M. B.; Geng, J.

    2015-12-01

    Postseismic deformation indicates a viscoelastic response of the lithosphere. It is critical, then, to identify and estimate the extent of postseismic deformation in both space and time, not only for its inherent information on crustal rheology and earthquake physics, but also because it must be considered in plate motion models that are derived geodetically from "steady-state" interseismic velocities, in models of the earthquake cycle that provide interseismic strain accumulation and earthquake probability forecasts, and in the terrestrial reference frame definition that is the basis for space geodetic positioning. As part of the Solid Earth Science ESDR System (SESES) project under a NASA MEaSUREs grant, JPL and SIO estimate combined daily position time series for over 1800 GNSS stations, both globally and at plate boundaries, independently using the GIPSY and GAMIT software packages, but with a consistent set of a priori epoch-date coordinates and metadata. The longest time series began in 1992, and many of them contain postseismic signals. For example, about 90 of the more than 400 global GNSS stations that define the ITRF have experienced one or more major earthquakes and 36 have had multiple earthquakes; as expected, most plate boundary stations have as well. We quantify the spatial (distance from rupture) and temporal (decay time) extent of postseismic deformation. We examine parametric models (logarithmic, exponential) and a physical model (rate- and state-dependent friction) to fit the time series. Using a PCA analysis, we determine whether or not a particular earthquake can be uniformly fit by a single underlying postseismic process; otherwise we fit individual stations. Then we investigate whether the estimated time series velocities can be directly used as input to plate motion models, rather than arbitrarily removing the apparent postseismic portion of a time series and/or eliminating the stations closest to earthquake epicenters.

  6. A trial estimation of frictional properties, focusing on aperiodicity off Kamaishi just after the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Ariyoshi, Keisuke; Uchida, Naoki; Matsuzawa, Toru; Hino, Ryota; Hasegawa, Akira; Hori, Takane; Kaneda, Yoshiyuki

    2014-12-01

    Motivated by the fact that temporal earthquake aperiodicity was observed off Kamaishi just after the 2011 Tohoku earthquake, we performed numerical simulations of chain reactions due to the postseismic slip of large earthquakes by applying rate- and state-dependent friction laws. If the repeater is composed of a single asperity, our results show that (i) a mixture of partial and whole rupturing of a single asperity can explain some of the observed variability in the timing and size of the repeating earthquakes off Kamaishi; (ii) the partial rupturing can be reproduced with moderate frictional instability under the aging law, but not under the slip or Nagata laws; (iii) the perturbation of the activated earthquake hypocenters, observed mostly in the ESE-WNW direction, may reflect the fact that the large postseismic slip of the 2011 Tohoku earthquake propagated from ESE to WNW off Kamaishi; and (iv) the observed region of repeating-earthquake quiescence may reflect the strong plate coupling of megathrust earthquakes.
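
    The aging law mentioned in (ii) is the state-evolution equation dθ/dt = 1 - vθ/dc. A self-contained velocity-step experiment (illustrative laboratory-scale parameters, not those of the Kamaishi simulations) shows the velocity-weakening behaviour it produces when a < b:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Velocity-step test with rate-and-state friction and the aging law.
a, b, dc = 0.008, 0.012, 1e-5         # a, b dimensionless; dc in metres
mu0, v0 = 0.6, 1e-6                   # reference friction and slip rate (m/s)

def v_imposed(t):
    return 1e-6 if t < 1.0 else 1e-5  # ten-fold velocity step at t = 1 s

def aging_law(t, theta):
    return [1.0 - v_imposed(t) * theta[0] / dc]   # d(theta)/dt = 1 - v*theta/dc

sol = solve_ivp(aging_law, [0.0, 10.0], [dc / 1e-6],
                max_step=1e-2, dense_output=True)
t = np.linspace(0.0, 10.0, 1000)
theta = sol.sol(t)[0]
v = np.where(t < 1.0, 1e-6, 1e-5)
mu = mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

# With a < b the new steady state is weaker (velocity weakening), the
# ingredient that allows partial vs. whole rupture of a single asperity.
print(f"friction change observed:  {mu[-1] - mu[0]:+.4f}")
print(f"steady-state (a-b)*ln(10): {(a - b) * np.log(10.0):+.4f}")
```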

  7. A Temperature-Based Bioimpedance Correction for Water Loss Estimation During Sports.

    PubMed

    Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Mester, Joachim; Eskofier, Bjoern M

    2016-11-01

    The amount of total body water (TBW) can be estimated based on bioimpedance measurements of the human body. In sports, TBW estimations are of importance because mild water losses can impair muscular strength and aerobic endurance. Severe water losses can even be life threatening. TBW estimations based on bioimpedance, however, fail during sports because the increased body temperature corrupts bioimpedance measurements. Therefore, this paper proposes a machine learning method that eliminates the effects of increased temperature on bioimpedance and, consequently, reveals the changes in bioimpedance that are due to TBW loss. This is facilitated by utilizing changes in skin and core temperature. The method was evaluated in a study in which bioimpedance, temperature, and TBW loss were recorded every 15 min during a 2-h running workout. The evaluation demonstrated that the proposed method is able to reduce the error of TBW loss estimation by up to 71%, compared to the state of the art. In the future, the proposed method in combination with portable bioimpedance devices might facilitate the development of wearable systems for continuous and noninvasive TBW loss monitoring during sports.

  8. Perspectives on earthquake hazards in the New Madrid seismic zone, Missouri

    USGS Publications Warehouse

    Thenhaus, P.C.

    1990-01-01

    A sequence of three great earthquakes struck the Central United States during the winter of 1811-1812 in the area of New Madrid, Missouri. They are considered to be the greatest earthquakes in the conterminous U.S. because they were felt and caused damage at far greater distances than any other earthquakes in U.S. history. The large population currently living within the damage area of these earthquakes means that widespread destruction and loss of life is likely if the sequence were repeated. In contrast to California, where earthquakes are felt frequently, the damaging earthquakes that have occurred in the Eastern U.S. (in 1755 at Cape Ann, Mass.; in 1811-12 at New Madrid, Mo.; in 1886 at Charleston, S.C.; and in 1897 in Giles County, Va.) are generally regarded as only historical phenomena (fig. 1). The social memory of these earthquakes no longer exists. A fundamental problem in the Eastern U.S., therefore, is that the earthquake hazard is not generally considered today in land-use and civic planning. This article offers perspectives on the earthquake hazard of the New Madrid seismic zone through discussions of the geology of the Mississippi Embayment, the historical earthquakes that have occurred there, the earthquake risk, and the "tools" that geoscientists have to study the region. The so-called earthquake hazard is defined by the characterization of the physical attributes of the geological structures that cause earthquakes, the estimation of the recurrence times of the earthquakes, their potential size, and the expected ground motions. The term "earthquake risk," on the other hand, refers to aspects of the expected damage to man-made structures and to lifelines as a result of the earthquake hazard.

  9. Estimation of menstrual blood loss volume based on menstrual diary and laboratory data

    PubMed Central

    2012-01-01

    Background Abnormal uterine bleeding is often investigated in clinical studies and critical to identify during gynecological consultation. The current standard for quantification of menstrual blood loss is the alkaline-hematin-method. However, this method is expensive and inconvenient for patients. Bleeding diaries, although widely used, provide only qualitative information on menstrual blood loss. Other methods have been developed, but still do not provide reliable quantitative data. Methods We estimated blood loss volume using data from two clinical studies in women suffering abnormal menstrual bleeding. These estimations were derived from mixed linear models based on diary data, hematological parameters and age. To validate the models, we applied our results to data from a third study with a similar patient population. Results The resulting best fitting model uses diary entries on bleeding intensity at a particular day, information on occurrence and frequency of single bleeding intensities in defined time windows, hemoglobin and ferritin values and age of the patient all as predictors of menstrual blood loss volume. Sensitivity and specificity for the diagnosis of excessive bleeding were 87% and 70%, respectively. Our model-based estimates reflect the subjective assessment by physicians and patients in the same way as the measured values do. When applying the model to an independent study, we found a correlation of 0.73 between estimated and measured values for the blood loss in a single day. Further models with reduced number of parameters (simplified for easier practical use) still showed correlation values between 0.69 and 0.73. Conclusions We present a method for estimating menstrual blood loss volume in women suffering from prolonged or excessive menstrual bleeding. Our statistical model includes entries from bleeding diaries, laboratory parameters and age and produces results which correlate well with data derived by the alkaline-hematin-method. Therefore

  10. The Use of Streambed Temperatures to Estimate Losses in an Arid Environment

    NASA Astrophysics Data System (ADS)

    Naranjo, R. C.; Young, M. H.; Niswonger, R.; Miller, J. J.; French, R. H.

    2001-12-01

    Quantifying channel transmission losses in arid environments is important for a variety of reasons, ranging from designing flood control mitigation structures to estimating ground water recharge. To quantify the losses in an alluvial channel, an experiment was performed on a 2 km reach of a channel on an alluvial fan, located on the U.S. Department of Energy's Nevada Test Site. The channel was subjected to three separate flow events. Transmission losses were estimated using discharge monitoring and a subsurface temperature modeling approach. Four stations were equipped to continuously monitor stage and temperature. Streambed temperatures measured at 0-, 30-, 50- and 100-cm depths were used to calibrate VS2DH, a two-dimensional, variably saturated flow model (Healy and Ronan, 1996). Average losses based on the difference in flow between each reach indicate that 21, 27, and 53 percent of the flow was reduced downstream of the source. Lower losses occurred within the reaches that contained caliche, and the largest losses were measured at the lower reach, which mostly contained loose, unconsolidated material. As expected, the thermal gradients corresponded well with the bed material and the measured losses. Large thermal gradients were detected at the locations where caliche was present, suggesting conduction-dominated heat transfer, while the lower reach corresponded to the smallest thermal gradient, suggesting advection-dominated heat transfer. Losses predicted by VS2DH are within an order of magnitude of the estimated losses based on discharge measurements. The differences in losses are a result of both the spatial extent to which the modeling results are applied and unmeasured lateral subsurface flow.
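
    The link between temperature profiles and seepage can be made quantitative with the classical steady-state solution of Bredehoeft and Papadopulos (1965), in which the curvature of the profile between the bed surface and depth L reflects the vertical water flux. A sketch fitting that solution to hypothetical sensor temperatures (the thermal properties and readings below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Bredehoeft & Papadopulos (1965): steady 1-D temperature profile in a
# streambed with downward water flux q (positive = losing stream):
#   (T(z)-T0)/(TL-T0) = (exp(beta*z/L)-1)/(exp(beta)-1), beta = q*rho_c_w*L/k
L = 1.0             # depth of the deepest sensor (m), cf. the 0-100 cm array
rho_c_w = 4.18e6    # volumetric heat capacity of water (J m^-3 K^-1)
k_bed = 1.4         # bulk thermal conductivity (W m^-1 K^-1), assumed

def profile(z, T0, TL, q):
    beta = q * rho_c_w * L / k_bed
    return T0 + (TL - T0) * np.expm1(beta * z / L) / np.expm1(beta)

# Hypothetical temperatures at the 0-, 30-, 50- and 100-cm sensor depths
z_obs = np.array([0.0, 0.3, 0.5, 1.0])
T_obs = np.array([24.0, 22.5, 21.1, 15.0])
(T0, TL, q), _ = curve_fit(profile, z_obs, T_obs, p0=[24.0, 15.0, 1e-6])
print(f"estimated downward flux q = {q:.2e} m/s")
# Near-linear profiles (beta -> 0) indicate conduction-dominated transfer,
# as over the caliche-cemented reaches; strong curvature indicates advection.
```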

  11. The use of streambed temperatures to estimate transmission losses on an experimental channel.

    SciTech Connect

    Ramon C. Naranjo; Michael H. Young; Richard Niswonger; Julianne J. Miller; Richard H. French

    2001-10-18

    Quantifying channel transmission losses in arid environments is important for a variety of reasons, from the engineering design of flood control structures to evaluating recharge. To quantify the losses in an alluvial channel, an experiment was performed on a 2-km reach of an alluvial fan located on the Nevada Test Site. The channel was subjected to three separate flow events. Transmission losses were estimated using standard discharge monitoring and a subsurface temperature modeling approach. Four stations were equipped to continuously monitor stage, temperature, and water content. Streambed temperatures measured at 0, 30, 50 and 100 cm depths were used to calibrate VS2DH, a two-dimensional, variably saturated flow model. Average losses based on the difference in flow between each station indicate that 21 percent, 27 percent, and 53 percent of the flow was reduced downgradient of the source. Results from the temperature monitoring identified locations with large thermal gradients, suggesting conduction-dominated heat transfer in streambed sediments where caliche-cemented surfaces were present. Transmission losses at the lowermost segment corresponded to the smallest thermal gradient, suggesting advection-dominated heat transfer. Losses predicted by VS2DH are within an order of magnitude of the estimated losses based on discharge measurements. The differences in losses are a result of the spatial extent to which the modeling results are applied and lateral subsurface flow.

  12. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, whose major goal is to transform the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software, which is an elegant solution for managing and analysing seismic data and for creating automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the November 23, 1980, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time streaming of data, and then the software performs phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. Then, the software automatically provides three different magnitude estimates. First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude

  13. SEISMIC SITE RESPONSE ESTIMATION IN THE NEAR SOURCE REGION OF THE 2009 L’AQUILA, ITALY, EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Bertrand, E.; Azzara, R.; Bergamashi, F.; Bordoni, P.; Cara, F.; Cogliano, R.; Cultrera, G.; di Giulio, G.; Duval, A.; Fodarella, A.; Milana, G.; Pucillo, S.; Régnier, J.; Riccio, G.; Salichon, J.

    2009-12-01

    On the 6th of April 2009, at 3:32 local time, a Mw 6.3 earthquake hit the Abruzzo region (central Italy), causing more than 300 casualties. The epicenter of the earthquake was 95 km NE of Rome and 10 km from the center of the city of L’Aquila, the administrative capital of the Abruzzo region. This city has a population of about 70,000 and was severely damaged by the earthquake, the total cost of the building damage being estimated at around 3 Bn €. Historical masonry buildings particularly suffered from the seismic shaking, but some reinforced concrete structures of more modern construction were also heavily damaged. To better estimate the seismic solicitation of these structures during the earthquake, we deployed temporary arrays in the near source region. Downtown L’Aquila, as well as a rural quarter composed of ancient dwelling-centers located west of L’Aquila (Roio area), were instrumented. The array set up downtown consisted of nearly 25 stations including velocimetric and accelerometric sensors. In the Roio area, 6 stations operated for almost one month. The data have been processed in order to study the spectral ratios of the horizontal component of ground motion at the soil sites relative to a reference site, as well as the spectral ratio of the horizontal and the vertical motion at a single recording site. Downtown L’Aquila is set on a Quaternary fluvial terrace (breccias with limestone boulders and clasts in a marly matrix), which forms the left bank of the Aterno River and slopes down in the southwest direction towards the river. The alluvial deposits lie on lacustrine sediments, reaching their maximum thickness (about 250 m) in the center of L’Aquila. According to De Luca et al. (2005), these Quaternary deposits seem to lead to an important amplification factor in the low frequency range (0.5-0.6 Hz). However, the level of amplification varies strongly from one point to another in the center of the city. This new experiment allows new and more
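
    Both estimates mentioned here (the standard spectral ratio against a reference site and the single-station horizontal-to-vertical ratio) are straightforward to compute. A minimal H/V sketch on synthetic three-component noise with an artificial 0.6 Hz horizontal resonance (all names and values are ours):

```python
import numpy as np
from scipy.signal import butter, sosfilt, welch

def hv_ratio(north, east, vertical, fs, nperseg=4096):
    """Single-station horizontal-to-vertical spectral ratio from Welch PSDs."""
    f, pnn = welch(north, fs, nperseg=nperseg)
    _, pee = welch(east, fs, nperseg=nperseg)
    _, pzz = welch(vertical, fs, nperseg=nperseg)
    return f, np.sqrt((pnn + pee) / 2.0 / pzz)

# Synthetic demo: white noise plus an artificial 0.6 Hz horizontal resonance,
# mimicking the low-frequency amplification reported for the L'Aquila basin.
fs, nsamp = 100.0, 200000
rng = np.random.default_rng(3)
z = rng.standard_normal(nsamp)
sos = butter(4, [0.5, 0.7], btype="bandpass", fs=fs, output="sos")
n = rng.standard_normal(nsamp) + 5.0 * sosfilt(sos, rng.standard_normal(nsamp))
e = rng.standard_normal(nsamp) + 5.0 * sosfilt(sos, rng.standard_normal(nsamp))

f, hv = hv_ratio(n, e, z, fs)
band = (f > 0.2) & (f < 10.0)
print(f"H/V peak at {f[band][np.argmax(hv[band])]:.2f} Hz")
```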

  14. The radiated seismic energy and apparent stress of interplate and intraplate earthquakes at subduction zone environments; implications for seismic hazard estimation

    USGS Publications Warehouse

    Choy, George L.; Boatwright, John L.; Kirby, Stephen H.

    2001-01-01

    The radiated seismic energies (ES) of 980 shallow subduction-zone earthquakes with magnitudes ≥ 5.8 are used to examine global patterns of energy release and apparent stress. In contrast to traditional methods, which have relied upon empirical formulas, these energies are computed through direct spectral analysis of broadband seismic waveforms. Energy gives a physically different measure of earthquake size than moment. Moment, being derived from the low-frequency asymptote of the displacement spectra, is related to the final static displacement. Thus, moment is crucial to the long-term tectonic implication of an earthquake. In contrast, energy, being derived from the velocity power spectra, is more a measure of the seismic potential for damage to anthropogenic structures. There is considerable scatter in the plot of ES-M0 for worldwide earthquakes. For any given M0, the ES can vary by as much as an order of magnitude about the mean regression line. The global variation between ES and M0, while large, is not random. When subsets of ES-M0 are plotted as a function of seismic region, tectonic setting and faulting type, the scatter in the data is often substantially reduced. There are two profound implications for the estimation of seismic and tsunami hazard. First, it is now feasible to characterize the apparent stress for particular regions. Second, a given M0 does not have a unique ES. This means that M0 alone is not sufficient to describe all aspects of an earthquake. In particular, we have found examples of interplate thrust-faulting earthquakes and intraslab normal-faulting earthquakes occurring in the same epicentral region with vastly different macroseismic effects. Despite the gross macroseismic disparities, the MW values in these examples were identical. However, the Me values (energy magnitudes) successfully distinguished the earthquakes that were more damaging.
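
    Apparent stress ties the two size measures together: sigma_a = mu * ES / M0. A two-line illustration (the rigidity and the example numbers are generic assumptions, not values from the study):

```python
def apparent_stress(E_s, M_0, mu=3.0e10):
    """Apparent stress sigma_a = mu*E_s/M_0 (Pa), with rigidity mu in Pa;
    a generic crustal value of 30 GPa is assumed here."""
    return mu * E_s / M_0

# Two events with identical moment (roughly Mw 7, M0 ~ 3.5e19 N m) but
# radiated energies an order of magnitude apart:
for E_s in (1e14, 1e15):
    sigma_a = apparent_stress(E_s, 3.5e19)
    print(f"E_s = {E_s:.0e} J -> apparent stress = {sigma_a / 1e6:.2f} MPa")
```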

  15. Earthquake Monitoring and Early Warning Systems in Taiwan (Invited)

    NASA Astrophysics Data System (ADS)

    Wu, Y.

    2010-12-01

    The Taiwan region is characterized by a high shortening rate and strong seismic activity. The Central Weather Bureau (CWB) is responsible for earthquake monitoring in Taiwan. The CWB seismic network consists of 71 real-time short-period seismic stations in the Taiwan region for routine earthquake monitoring and has recorded about 18,000 events each year in a roughly 400 km x 550 km region. There are also 53 real-time broadband stations installed for seismological research and for reporting moment tensor solutions in Taiwan. With the implementation of a real-time strong-motion network by the CWB, earthquake rapid reporting and early warning systems have been developed in Taiwan. The network consists of 110 stations. For the rapid reporting system, when a potentially felt earthquake occurs around the Taiwan area, the location, magnitude and shake map of seismic intensities can be automatically reported within about 40 to 60 sec. For large earthquakes, the shaking map and losses can be estimated within 2 min after the earthquake occurrence. For the earthquake early warning system, earthquake information can be determined about 15 to 20 sec after a large earthquake occurs. The system can therefore provide warning before the arrival of S waves for metropolitan areas located more than 70 km from the epicenter. Recently, an onsite earthquake early warning device has been developed using MEMS sensors, aimed at offering early warning for areas close to the epicenter.
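
    The quoted ~70 km blind-zone radius follows from simple travel-time arithmetic: the lead time is the S-wave travel time to a site minus the time needed to detect the event and issue the alert. A back-of-the-envelope sketch (the 3.5 km/s shear velocity and 18 s processing delay are illustrative assumptions, not CWB system parameters):

```python
def warning_time_s(epicentral_km, vs_km_s=3.5, processing_s=18.0):
    """Approximate lead time before S-wave arrival (negative = blind zone)."""
    return epicentral_km / vs_km_s - processing_s

for d in (30, 70, 120):
    print(f"{d:4d} km: {warning_time_s(d):+.1f} s")
# With ~18 s of processing, sites ~70 km away get only a small positive lead
# time, consistent with the blind-zone radius quoted in the abstract.
```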

  16. Estimation of network bias in the shift in Helmert parameters induced by great earthquakes

    NASA Astrophysics Data System (ADS)

    Zannat, U. J.; Tregoning, P.

    2016-12-01

    Geophysical models and their interpretations of several measurements of interest, such as sea-level rise, post-seismic relaxation, and glacial isostatic adjustment, are intertwined, at the state-of-the-art level of precision, with the problem of realizing the International Terrestrial Reference Frame (ITRF). However, usually there is a discrepancy, known as the network bias, between the theoretically convenient Center of Figure (CF) and the physically accessible Center of Network (CN) frames, because of unavoidable factors such as uneven station distribution, lack of stations in the oceans, disparity in the coverage between the two hemispheres, and the existence of deforming zones. In order to quantify the expected network bias in the observed CF motion for networks of finite size, we propose to calculate the sampling distribution of the change in Helmert parameters, as predicted by theoretical earthquake or loading models, by Monte Carlo integration. Here, we evaluate the network bias due to the coseismic displacement fields of the 2004 Sumatra-Andaman and the 2011 Tohoku-Oki earthquakes for the ITRF2008 and the ITRF2014 core site networks. For the theoretical prediction of the coseismic field, we have applied the normal mode summation technique on the layered Preliminary Reference Earth Model (PREM) for a set of fault models proposed in the literature. It is found that the network bias ranges from about 5% to about 20% for the different components of the Helmert parameters regardless of the fault model used and thus cannot be neglected. Furthermore, the estimated bias is well within the uncertainty predicted by our simulation. We also show that the error can be mitigated (approximately 10-15%) by adopting the realization of the CF by the center of station positions weighted by the areas of the corresponding cells in a Voronoi decomposition of the surface of the earth.
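
    The Monte Carlo integration idea can be sketched as follows: compare the mean displacement over a very dense sampling of the sphere (a proxy for the CF shift) with the means over many random finite networks (CN). The displacement field below is a synthetic stand-in, not a normal-mode prediction on PREM, and stations are drawn uniformly rather than from the ITRF core-site lists.

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_coseismic(points):
    """Toy displacement field (m) at unit-sphere points; a stand-in for a
    normal-mode-summation prediction of a coseismic field."""
    x, y, z = points.T
    return np.column_stack([1e-3 * x * z, 5e-4 * y, 2e-4 * z**2])

def random_points(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Dense sampling as a proxy for the CF (center-of-figure) translation:
cf_shift = synthetic_coseismic(random_points(200_000)).mean(axis=0)

# Sampling distribution of the CN translation for finite networks:
n_sites, n_trials = 100, 5000
cn_shifts = np.array([synthetic_coseismic(random_points(n_sites)).mean(axis=0)
                      for _ in range(n_trials)])
bias = cn_shifts - cf_shift  # Monte Carlo distribution of the network bias
print("bias std per component (m):", bias.std(axis=0))
```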

  17. Estimating field-of-view loss in bathymetric lidar: application to large-scale simulations.

    PubMed

    Carr, Domenic; Tuell, Grady

    2014-07-20

    When designing a bathymetric lidar, it is important to study simulated waveforms for various combinations of system and environmental parameters. To predict a system's ranging accuracy, it is often necessary to analyze thousands of waveforms. In these large-scale simulations, estimating field-of-view loss is a challenge because the calculation is complex and computationally intensive. This paper describes a new procedure for quickly approximating this loss, and illustrates how it can be used to efficiently predict ranging accuracy.

  18. Programmable calculator program for linear somatic cell scores to estimate mastitis yield losses.

    PubMed

    Kirk, J H

    1984-02-01

    A programmable calculator program calculates loss of milk yield in dairy cows based on linear somatic cell count scores. The program displays the distribution of the herd by lactation number and linear score for the present situation and for an optimal goal situation. Loss of yield is given in pounds and dollars by cow and herd. The program also estimates optimal milk production and the number of fewer cows needed at the goal level of mastitis infection.

  19. Is CO radio line emission a reliable mass-loss-rate estimator for AGB stars?

    NASA Astrophysics Data System (ADS)

    Ramstedt, Sofia; Schöier, Fredrik; Olofsson, Hans

    The final evolutionary stage of low- to intermediate-mass stars, as they evolve along the asymptotic giant branch (AGB), is characterized by mass loss so intense (10^-8 to 10^-4 Msol yr^-1) that eventually the AGB lifetime is determined by it. The material lost by the star is enriched in nucleosynthesized material and thus AGB stars play an important role in the chemical evolution of galaxies. A reliable mass-loss-rate estimator is of utmost importance in order to increase our understanding of late stellar evolution and to reach conclusions about the amount of enriched material recycled by AGB stars. For low-mass-loss-rate AGB stars, modelling of observed rotational CO radio line emission has proven to be a good tool for estimating mass-loss rates [Olofsson et al. (2002) for M-type stars and Schöier & Olofsson (2001) for carbon stars], but several lines are needed to get good constraints. For high-mass-loss-rate objects the situation is more complicated, the main reason being saturation of the optically thick CO lines. Moreover, Kemper et al. (2003) introduced temporal changes in the mass-loss rate, or alternatively, spatially varying turbulent motions, in order to explain observed line-intensity ratios. This puts into question whether it is possible to model the circumstellar envelope using a constant mass-loss rate, or whether the physical structure of the outflow is more complex than normally assumed. We present observations of CO radio line emission for a sample of intermediate- to high-mass-loss-rate AGB stars. The lowest rotational transition line (J = 1-0) was observed at OSO and the higher-frequency lines (J = 2-1, 3-2, 4-3 and in some cases 6-5) were observed at the JCMT. Using a detailed, non-LTE, radiative transfer model we are able to reproduce observed line ratios (Figure 1) and constrain the mass-loss rates for the whole sample, using a constant mass-loss rate and a "standard" circumstellar envelope model. However, for some objects only a lower limit to

  20. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the
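
    The station time-shift device shared by the two methods can be sketched in a few lines: slide the synthetic against the observed segment and keep the lag that maximizes the normalized cross-correlation. The arrays and the allowed lag are hypothetical; this reproduces neither the TDMT nor the CAP code, only the idea.

```python
import numpy as np

def best_time_shift(obs, syn, max_lag):
    """Return (lag_samples, correlation) maximizing the zero-normalized
    cross-correlation of syn shifted against obs within +/- max_lag."""
    best = (0, -np.inf)
    o = (obs - obs.mean()) / obs.std()
    s = (syn - syn.mean()) / syn.std()
    n = len(o)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(o[lag:], s[:n - lag]) / (n - lag)
        else:
            c = np.dot(o[:lag], s[-lag:]) / (n + lag)
        if c > best[1]:
            best = (lag, c)
    return best
```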

  1. Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model

    USGS Publications Warehouse

    Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David

    2012-01-01

    The Atlantic horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast which has, in recent years, experienced new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters, such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting or complete loss of tags could be occurring. As is apparent from the assumptions of a Jolly-Seber model, loss of tags can result in biased estimates and underestimate survival rates. Given that uncertainty, as a first step towards an unbiased estimate of adult survival, we estimated the rate of tag loss. Using data from a double-tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model that allows the loss of each tag to be estimated separately and simultaneously.

  2. Method for estimating spatially variable seepage loss and hydraulic conductivity in intermittent and ephemeral streams

    USGS Publications Warehouse

    Niswonger, R.G.; Prudic, D.E.; Fogg, G.E.; Stonestrom, D.A.; Buckland, E.M.

    2008-01-01

    A method is presented for estimating seepage loss and streambed hydraulic conductivity along intermittent and ephemeral streams using streamflow front velocities in initially dry channels. The method uses the kinematic wave equation for routing streamflow in channels coupled to Philip's equation for infiltration. The coupled model considers variations in seepage loss both across and along the channel. Water redistribution in the unsaturated zone is also represented in the model. Sensitivity of the streamflow front velocity to parameters used for calculating seepage loss and for routing streamflow shows that the streambed hydraulic conductivity has the greatest sensitivity for moderate to large seepage loss rates. Channel roughness, geometry, and slope are most important for low seepage loss rates; however, streambed hydraulic conductivity is still important for values greater than 0.008 m/d. Two example applications are presented to demonstrate the utility of the method. Copyright 2008 by the American Geophysical Union.
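
    The infiltration half of the coupled model rests on Philip's two-term equation; a small sketch with illustrative parameter values (the sorptivity S is invented; the conductivity default equals the 0.008 m/d threshold quoted above):

```python
import numpy as np

def philip_rate(t_s, S=1e-3, Ks=0.008 / 86400.0):
    """Infiltration rate f(t) = 0.5*S*t**-0.5 + Ks in m/s, for t > 0 since
    wetting; S is sorptivity (m/s**0.5), Ks saturated conductivity (m/s)."""
    t_s = np.asarray(t_s, dtype=float)
    return 0.5 * S * t_s**-0.5 + Ks

def philip_cumulative(t_s, S=1e-3, Ks=0.008 / 86400.0):
    """Cumulative infiltration I(t) = S*sqrt(t) + Ks*t in metres."""
    t_s = np.asarray(t_s, dtype=float)
    return S * np.sqrt(t_s) + Ks * t_s

# Seepage loss decays toward Ks as the wetting front matures:
print(philip_rate([60.0, 3600.0, 86400.0]))
```

    In the full model, this rate enters the kinematic wave routing as a spatially variable sink term, which is why the streamflow front slows where streambed conductivity is high.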

  3. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimating the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.

  4. Estimation of insurance-related losses resulting from coastal flooding in France

    NASA Astrophysics Data System (ADS)

    Naulin, J. P.; Moncoulon, D.; Le Roy, S.; Pedreros, R.; Idier, D.; Oliveros, C.

    2016-01-01

    A model has been developed in order to estimate insurance-related losses caused by coastal flooding in France. The deterministic part of the model aims at identifying the potentially flood-impacted sectors and the subsequent insured losses a few days after the occurrence of a storm surge event on any part of the French coast. This deterministic component is a combination of three models: a hazard model, a vulnerability model, and a damage model. The first model uses the PREVIMER system to estimate the water level resulting from the simultaneous occurrence of a high tide and a surge caused by a meteorological event along the coast. A storage-cell flood model propagates these water levels over the land and thus determines the probable inundated areas. The vulnerability model, for its part, is derived from the insurance schedules and claims database, combining information such as risk type, class of business, and insured values. The outcome of the vulnerability and hazard models are then combined with the damage model to estimate the event damage and potential insured losses. This system shows satisfactory results in the estimation of the magnitude of the known losses related to the flood caused by the Xynthia storm. However, it also appears very sensitive to the water height estimated during the flood period, conditioned by the junction between seawater levels and coastal topography, the accuracy for which is still limited by the amount of information in the system.

  5. Estimation of insurance related losses resulting from coastal flooding in France

    NASA Astrophysics Data System (ADS)

    Naulin, J. P.; Moncoulon, D.; Le Roy, S.; Pedreros, R.; Idier, D.; Oliveros, C.

    2015-04-01

    A model has been developed in order to estimate insurance-related losses caused by coastal flooding in France. The deterministic part of the model aims at identifying the potentially flood-impacted sectors and the subsequent insured losses a few days after the occurrence of a storm surge event on any part of the French coast. This deterministic component is a combination of three models: a hazard model, a vulnerability model and a damage model. The first model uses the PREVIMER system to estimate the water level along the coast. A storage-cell flood model propagates these water levels over the land and thus determines the probable inundated areas. The vulnerability model, for its part, is derived from the insurance schedules and claims database, combining information such as risk type, class of business and insured values. The outcomes of the vulnerability and hazard models are then combined with the damage model to estimate the event damage and potential insured losses. This system shows satisfactory results in the estimation of the magnitude of the known losses related to the flood caused by the Xynthia storm. However, it also appears very sensitive to the water height estimated during the flood period, conditioned by the junction between seawater levels and coastal topography, for which the accuracy of the system is still limited.

  6. Proceedings of Conference XVIII: a workshop on "Continuing actions to reduce losses from earthquakes in the Mississippi Valley area," 24-26 May, 1982, St. Louis, Missouri

    USGS Publications Warehouse

    Gori, Paula L.; Hays, Walter W.; Kitzmiller, Carla

    1983-01-01

    payoff and the lowest cost and effort requirements. These action plans, which identify steps that can be undertaken immediately to reduce losses from earthquakes in each of the seven States in the Mississippi Valley area, are contained in this report. The draft 5-year plan for the Central United States, prepared in the Knoxville workshop, was the starting point of the small group discussions in the St. Louis workshop which led to the action plans contained in this report. For completeness, the draft 5-year plan for the Central United States is reproduced as Appendix B.

  7. Uncertainty in sample estimates and the implicit loss function for soil information.

    NASA Astrophysics Data System (ADS)

    Lark, Murray

    2015-04-01

    One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
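
    The idea of the implicit loss function can be made concrete for the simplest case of a linear sampling cost and a quadratic loss on estimation error; the numbers below are hypothetical, not from the Donegal survey or the carbon-process model.

```python
# If sampling cost is c*n and the estimator variance is sigma**2/n, a
# quadratic loss L = k * variance gives total cost T(n) = c*n + k*sigma**2/n.
# The k that makes a chosen n* optimal (dT/dn = 0) is the loss implied by
# that decision -- the "implicit loss function" evaluated at n*.
def implicit_loss_coefficient(n_star, sigma=1.0, cost_per_sample=50.0):
    """k such that n_star minimizes cost_per_sample*n + k*sigma**2/n."""
    # dT/dn = cost_per_sample - k*sigma**2/n**2 = 0  =>  k = c*n**2/sigma**2
    return cost_per_sample * n_star**2 / sigma**2

for n in (50, 200, 1000):
    print(n, implicit_loss_coefficient(n))
# A larger chosen sample size implies a larger cost implicitly attributed to
# estimation error, which can then be compared with the stated value of the
# soil information.
```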

  8. Estimation of Source Parameters for the Aftershocks of the 2001 Mw 7.7 Bhuj Earthquake, India

    NASA Astrophysics Data System (ADS)

    Mandal, Prantik; Johnston, A.

    2006-08-01

    The source parameters for 213 Bhuj aftershocks of moment magnitude varying from 2.16 to 5.74 have been estimated using spectral analysis of the SH-waveform on the transverse component of three-component digital seismograms as well as accelerograms. The estimated stress drop values for Bhuj aftershocks show more scatter (Δσ ∝ Mo^0.5 to Mo^1) toward the larger seismic moment values (Mo ≥ 10^14.5 N m, larger aftershocks), whereas they show a more systematic behavior (Δσ ∝ Mo^3) for smaller seismic moment values (Mo < 10^14.5 N m, smaller aftershocks). This size dependency of stress drop is also seen in the relation between our estimated seismic moment and source radius; however, this size-dependent stress drop is not observed in the source parameter estimates for other stable continental region earthquakes in India and around the world. The estimated seismic moment (Mo), source radius (r) and stress drop (Δσ) for aftershocks of moment magnitude 2.16 to 5.74 range from 1.95 × 10^12 to 4.5 × 10^17 N m, 239 to 2835 m and 0.63 to 20.7 MPa, respectively. The near-surface attenuation factor (k) is found to be large, of the order of 0.03, for the Kachchh region, suggesting thick low-velocity sediments beneath the region. The estimated stress drop values show an increasing trend with depth, indicating that the base of the seismogenic layer (as characterized by larger stress drop values (>15 MPa)) lies in the 22-26 km depth range beneath the region. We suggest that the concentration of large stress drop values at 10-36 km depth may be related to the large stress/strain associated with a brittle, competent intrusive body of mafic nature.
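
    The quoted ranges can be checked against the standard circular-crack relations commonly used in such studies (Brune, 1970; Eshelby, 1957); this is a sketch of those formulas, not the authors' exact processing chain, and the example pairing of values is illustrative only.

```python
import math

def source_radius_m(fc_hz, beta_m_s=3500.0):
    """Brune (1970) source radius from corner frequency:
    r = 2.34 * beta / (2 * pi * fc)."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop_mpa(m0_nm, r_m):
    """Eshelby (1957) static stress drop for a circular crack:
    d_sigma = 7 * M0 / (16 * r**3), returned in MPa."""
    return 7.0 * m0_nm / (16.0 * r_m**3) / 1.0e6

# Illustrative pairing at the large-aftershock end of the quoted ranges:
print(f"{stress_drop_mpa(4.5e17, 2835.0):.1f} MPa")  # ~8.6 MPa, within 0.63-20.7
```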

  9. Estimation of Age Using Alveolar Bone Loss: Forensic and Anthropological Applications.

    PubMed

    Ruquet, Michel; Saliba-Serre, Bérengère; Tardivo, Delphine; Foti, Bruno

    2015-09-01

    The objective of this study was to apply a new odontological methodological approach based on radiography for age estimation. The study comprised 397 participants aged between 9 and 87 years. A clinical examination and a radiographic assessment of alveolar bone loss were performed. Direct measures of alveolar bone level were recorded using CT scans. A medical examination report was attached to the investigation file. Because of the link between alveolar bone loss and age, a model was proposed to enable simple, reliable, and quick age estimation. This work adds new arguments for age estimation. The study aimed to develop a simple, standardized, and reproducible technique for age estimation of adults in actual populations in forensic medicine and of ancient populations in funeral anthropology. © 2015 American Academy of Forensic Sciences.

  10. Landslides in Colorado, USA--Impacts and loss estimation for 2010

    USGS Publications Warehouse

    Highland, Lynn M.

    2012-01-01

    The focus of this study is to investigate landslides and consequent losses which affected Colorado in the year 2010. By obtaining landslide reports from a variety of sources, this report will demonstrate the feasibility of creating a profile of landslides and their effects on communities. A short overview of the current status of landslide-loss studies for the United States is introduced, followed by a compilation of landslide occurrence and associated losses and impacts which affected Colorado for the year 2010. Direct costs are summarized in descriptive and tabular form, and where possible, indirect costs are also noted or estimated. Total direct costs of landslides in Colorado for the year 2010 were approximately $9,149,335.00 (2010 U.S. dollars). (Since not all data for damages and costs were obtained, this figure realistically could be considerably higher.) Indirect costs were noted where available but are not totaled due to the fact that most indirect costs were not obtainable for various reasons outlined later in this report. Casualty data are considered as being within the scope of loss evaluation, and are reported in Appendix 1, but are not assigned dollar losses. More details on the source material for loss data not found in the reference section are reported in Appendix 2, and Appendix 3 summarizes notes on landslide-loss investigations in general and lessons learned during the process of loss-data collection.

  11. Estimating earthquake-rupture rates on a fault or fault system

    USGS Publications Warehouse

    Field, E.H.; Page, M.T.

    2011-01-01

    Previous approaches used to determine the rates of different earthquakes on a fault have made assumptions regarding segmentation, have been difficult to document and reproduce, and have lacked the ability to satisfy all available data constraints. We present a relatively objective and reproducible inverse methodology for determining the rate of different ruptures on a fault or fault system. The data used in the inversion include slip rate, event rate, and other constraints such as an optional a priori magnitude-frequency distribution. We demonstrate our methodology by solving for the long-term rate of ruptures on the southern San Andreas fault. Our results imply that a Gutenberg-Richter distribution is consistent with the data available for this fault; however, more work is needed to test the robustness of this assertion. More importantly, the methodology is extensible to an entire fault system (thereby including multifault ruptures) and can be used to quantify the relative benefits of collecting additional paleoseismic data at different sites.
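
    A toy version of such an inversion can be posed as nonnegative least squares: find rupture rates that jointly fit per-section slip rates and a weak a priori event-rate constraint. The three-section geometry and all numbers below are invented for illustration; the actual methodology solves a much larger, more richly constrained system.

```python
import numpy as np
from scipy.optimize import nnls

# 3 fault sections, 4 candidate ruptures (columns). Entries give the slip (m)
# each rupture contributes on each section per event.
G_slip = np.array([[1.0, 0.0, 1.0, 1.0],
                   [0.0, 1.0, 1.0, 1.0],
                   [0.0, 0.0, 0.0, 1.0]])
slip_rates = np.array([0.02, 0.015, 0.01])   # target slip rates, m/yr

# Append weak a priori rate rows (stand-in for paleoseismic event rates):
G = np.vstack([G_slip, np.eye(4)])
d = np.concatenate([slip_rates, np.full(4, 1e-3)])

rates, residual = nnls(G, d)                 # nonnegative long-term rates
print("rupture rates (1/yr):", rates, "residual:", residual)
```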

  12. Estimation of return periods of multiple losses per winter associated with historical windstorm series over Germany

    NASA Astrophysics Data System (ADS)

    Karremann, Melanie; Pinto, Joaquim G.; von Bomhard, Philipp; Klawa, Matthias

    2014-05-01

    During the last decades, several windstorm series hit Western Europe, leading to large cumulative economic losses. Such storm series are an example of serial clustering of extreme cyclones and present a considerable risk for the insurance industry. Here, clustering of events and return periods of storm series for Germany are quantified based on potential losses using empirical models. Two reanalysis datasets and observations from 123 German Weather Service stations are considered for the winters 1981/1982 to 2010/2011. Based on these datasets, histograms of events exceeding selected return levels (1-, 2- and 5-year) are derived. Return periods of historical storm series are estimated based on the Poisson and the negative binomial distributions. About 4680 years of global circulation model simulations forced with current climate conditions are analysed to provide a better assessment of historical return periods. Estimates differ between the two distributions. Except for frequent and weak events, the return period estimates obtained with the Poisson distribution clearly deviate from the empirical data. This documents overdispersion in the loss data, indicating clustering of potential loss events. Better assessments are achieved with the negative binomial distribution, e.g. 34 to 53 years for a storm series like that of winter 1989/1990. The overdispersion (clustering) of potential loss events underscores the importance of an adequate risk assessment of multiple events per winter for economic applications.
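
    The dispersion argument is easy to reproduce: fit both distributions to annual event counts and compare tail probabilities. The counts below are invented; the point is that when the variance exceeds the mean, the negative binomial assigns much shorter return periods to multi-event winters than the Poisson does.

```python
import numpy as np
from scipy import stats

counts = np.array([0, 0, 1, 0, 3, 0, 0, 2, 0, 0, 4, 0, 1, 0, 0,
                   0, 2, 0, 0, 0, 3, 0, 0, 1, 0, 0, 0, 5, 0, 0])  # events/winter

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean={mean:.2f}, var={var:.2f}, dispersion={var/mean:.2f}")  # >1: clustered

# Method-of-moments negative binomial: var = mu + mu**2/r
r = mean**2 / (var - mean)
p = r / (r + mean)

# Return period of a winter with >= 3 loss events under each model:
rp_pois = 1.0 / stats.poisson.sf(2, mean)
rp_nb = 1.0 / stats.nbinom.sf(2, r, p)
print(f"RP(>=3 events): Poisson {rp_pois:.0f} yr, NegBinom {rp_nb:.0f} yr")
```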

  13. Estimation of Maximum Ground Motions in the Form of ShakeMaps and Assessment of Potential Human Fatalities from Scenario Earthquakes on the Chishan Active Fault in southern Taiwan

    NASA Astrophysics Data System (ADS)

    Liu, Kun Sung; Huang, Hsiang Chi; Shen, Jia Rong

    2017-04-01

    Historically, there were many damaging earthquakes in southern Taiwan during the last century. Some of these earthquakes resulted in heavy loss of human lives. Accordingly, assessment of potential seismic hazards has become increasingly important in southern Taiwan, including the Kaohsiung, Tainan and northern Pingtung areas, since the Central Geological Survey upgraded the Chishan active fault from a suspected fault to Category I in 2010. In this study, we first estimate the maximum seismic ground motions in terms of PGA, PGV and MMI by incorporating a site-effect term in attenuation relationships, aiming to show high seismic hazard areas in southern Taiwan. Furthermore, we assess potential death tolls due to large future earthquakes occurring on the Chishan active fault. From the maximum PGA ShakeMap for an Mw 7.2 scenario earthquake on the Chishan active fault, areas with high PGA, above 400 gal, are located in the northeastern, central and northern parts of southwestern Kaohsiung as well as the southern part of central Tainan. In addition, cities located in Tainan City at similar distances from the Chishan fault have relatively greater PGA and PGV than those in Kaohsiung City and Pingtung County, mainly due to large site response factors in Tainan. On the other hand, seismic hazard in terms of PGA and PGV is not particularly high in the areas near the Chishan fault, mainly because these areas have low site response factors. Finally, the estimated fatalities in Kaohsiung City, at 5230, 4285 and 2786 for Mw 7.2, 7.0 and 6.8, respectively, are higher than those estimated for Tainan City and Pingtung County. The main reason is the high population density, above 10,000 persons per km2, in Fongshan, Zuoying, Sanmin, Cianjin, Sinsing, Yancheng and Lingya Districts, and between 5,000 and 10,000 persons per km2 in Nanzih and Gushan Districts in
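
    A hedged sketch of a ground-motion estimate with a multiplicative site term, of the kind the study incorporates; the functional form is generic and every coefficient is a placeholder, not one of the attenuation relationships actually used.

```python
import numpy as np

def pga_gal(mw, rrup_km, site_factor, a=0.8, b=1.0, c=1.0, h=10.0):
    """Generic form ln(PGA) = a + b*Mw - c*ln(sqrt(R**2 + h**2)) + ln(site);
    all coefficients are illustrative placeholders, PGA in gal."""
    r = np.sqrt(rrup_km**2 + h**2)
    return site_factor * np.exp(a + b * mw - c * np.log(r))

# Same magnitude and distance, different site response (soft basin vs. rock):
for s in (1.0, 2.5):
    print(f"site factor {s}: {pga_gal(7.2, 20.0, s):.0f} gal")
```

    The point mirrored from the abstract: at equal distance from the fault, a larger site response factor can dominate the predicted shaking, which is why hazard is not necessarily highest right next to the fault trace.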

  14. Combining double difference and amplitude ratio approaches for Q estimates at the NW Bohemia earthquake swarm region

    NASA Astrophysics Data System (ADS)

    Kriegerowski, Marius; Cesca, Simone; Krüger, Frank; Dahm, Torsten; Horálek, Josef

    2016-04-01

    Aside from the propagation velocity of seismic waves, their attenuation can provide a direct measure of rock properties in the sampled subspace. We present a new attenuation tomography approach exploiting relative amplitude spectral ratios of earthquake pairs. We focus our investigation on North West Bohemia, a region characterized by intense earthquake swarm activity in a confined source region. The inter-event distances are small compared to the epicentral distances to the receivers, meeting a fundamental requirement of the method. Because the event locations are similar, the ray paths are also very similar. Consequently, the relative spectral ratio is affected mostly by rock properties along the inter-event path and is thus representative of the focal region. In order to exclude effects of the seismic source spectra, only the high-frequency content beyond the corner frequency is taken into consideration. This requires high-quality records with high sampling rates. Future improvements in that respect can be expected from the ICDP proposal "Eger rift", which includes plans to install borehole monitoring in the investigated region. 1D and 3D synthetic tests show the feasibility of the presented method. Furthermore, we demonstrate the influence of perturbations in source locations and travel time estimates on the determination of Q. Errors in Q scale linearly with errors in the differential travel times. These sources of error can be attributed to the complex velocity structure of the investigated region. A critical aspect is the signal-to-noise ratio, which imposes a strong limitation and emphasizes the demand for high-quality recordings. Hence, the presented method is expected to benefit from borehole installations. Since we focus our analysis on the NW Bohemia case study, a synthetic earthquake catalog incorporating source characteristics deduced from preceding moment tensor inversions, coupled with a realistic velocity model, provides us with a realistic
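
    The core of the spectral-ratio measurement fits in a few lines: above both corner frequencies, the log amplitude ratio of an event pair decays linearly with frequency with slope -pi*dt*, and Q follows from the differential travel time. The numbers below are synthetic, not NW Bohemia data.

```python
import numpy as np

def q_from_spectral_ratio(freqs, ratio, dt_diff_s):
    """Fit ln(ratio) = const - pi*dt_star*f; return Q = dt_diff / dt_star."""
    slope, _ = np.polyfit(freqs, np.log(ratio), 1)
    dt_star = -slope / np.pi
    return dt_diff_s / dt_star

# Synthetic check: Q = 200 over a 0.5 s differential travel time
f = np.linspace(20.0, 60.0, 50)              # band above both corner frequencies
ratio = 3.0 * np.exp(-np.pi * f * 0.5 / 200.0)
print(q_from_spectral_ratio(f, ratio, 0.5))  # recovers ~200
```

    The linear dependence of the recovered Q on the differential travel time is visible directly in the last line, which is the error scaling the abstract reports.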

  15. Bayesian Tsunami-Waveform Inversion and Tsunami-Source Uncertainty Estimation for the 2011 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dettmer, J.; Hossen, M. J.; Cummins, P. R.

    2014-12-01

    This paper develops a Bayesian inversion to infer spatio-temporal parameters of the tsunami source (sea surface) due to megathrust earthquakes. To date, tsunami-source parameter uncertainties are poorly studied. In particular, the effects of parametrization choices (e.g., discretisation, finite rupture velocity, dispersion) on uncertainties have not been quantified. This approach is based on a trans-dimensional self-parametrization of the sea surface, avoids regularization, and provides rigorous uncertainty estimation that accounts for model-selection ambiguity associated with the source discretisation. The sea surface is parametrized using self-adapting irregular grids which match the local resolving power of the data and provide parsimonious solutions for complex source characteristics. Finite and spatially variable rupture velocity fields are addressed by obtaining causal delay times from the Eikonal equation. Data are considered from ocean-bottom pressure and coastal wave gauges. Data predictions are based on Green-function libraries computed from ocean-basin scale tsunami models for cases that include/exclude dispersion effects. Green functions are computed for elementary waves of Gaussian shape and grid spacing which is below the resolution of the data. The inversion is applied to tsunami waveforms from the great Mw=9.0 2011 Tohoku-Oki (Japan) earthquake. Posterior results show a strongly elongated tsunami source along the Japan trench, as obtained in previous studies. However, we find that the tsunami data is fit with a source that is generally simpler than obtained in other studies, with a maximum amplitude less than 5 m. In addition, the data are sensitive to the spatial variability of rupture velocity and require a kinematic source model to obtain satisfactory fits which is consistent with other work employing linear multiple time-window parametrizations.

  16. Nitrogen losses from dairy manure estimated through nitrogen mass balance and chemical markers

    USGS Publications Warehouse

    Hristov, Alexander N.; Zaman, S.; Vander Pol, M.; Ndegwa, P.; Campbell, L.; Silva, S.

    2009-01-01

    Ammonia is an important air and water pollutant, but the spatial variation in its concentrations presents technical difficulties in accurate determination of ammonia emissions from animal feeding operations. The objectives of this study were to investigate the relationship between ammonia volatilization and δ15N of dairy manure and the feasibility of estimating ammonia losses from a dairy facility using chemical markers. In Exp. 1, the N/P ratio in manure decreased by 30% in 14 d as cumulative ammonia losses increased exponentially. δ15N of manure increased throughout the course of the experiment and δ15N of emitted ammonia increased (p < 0.001) quadratically from -31‰ to -15‰. The relationship between cumulative ammonia losses and δ15N of manure was highly significant (p < 0.001; r2 = 0.76). In Exp. 2, using a mass balance approach, approximately half of the N excreted by dairy cows (Bos taurus) could not be accounted for in 24 h. Using N/P and N/K ratios in fresh and 24-h manure, an estimated 0.55 and 0.34 (respectively) of the N excreted with feces and urine could not be accounted for. This study demonstrated that chemical markers (P, K) can be successfully used to estimate ammonia losses from cattle manure. The relationship between manure δ15N and cumulative ammonia loss may also be useful for estimating ammonia losses. Although promising, the latter approach needs to be further studied and verified in various experimental conditions and in the field. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
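
    The marker arithmetic behind the N/P (or N/K) approach is compact: if P is conserved, the fractional N loss is one minus the ratio of final to initial N/P. The values below are made up to mirror the 30% drop reported in Exp. 1.

```python
def n_loss_fraction(n_initial, p_initial, n_final, p_final):
    """Fraction of initial N lost, inferred from the drop in the N/P ratio,
    assuming the marker (P) is fully conserved in the manure."""
    return 1.0 - (n_final / p_final) / (n_initial / p_initial)

# A 30% drop in N/P implies ~30% of manure N was volatilized:
print(n_loss_fraction(100.0, 20.0, 70.0, 20.0))  # 0.30
```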

  17. Estimated crop yield losses due to surface ozone exposure and economic damage in India.

    PubMed

    Debaje, S B

    2014-06-01

    In this study, we estimate yield losses and economic damage of two major crops (winter wheat and rabi rice) due to surface ozone (O3) exposure using hourly O3 concentrations for the period 2002-2007 in India. This study estimates crop yield losses according to two indices of O3 exposure established by field studies: the 7-h seasonal daytime (0900-1600 hours) mean measured O3 concentration (M7) and AOT40 (the accumulated exposure to O3 concentrations over a threshold of 40 parts per billion by volume during daylight hours, 0700-1800 hours). Our results indicate relative yield losses of 5-11% (6-30%) for winter wheat and 3-6% (9-16%) for rabi rice using the M7 (AOT40) index, relative to the mean total production per year for 2002-2007 of 81 million metric tons (Mt) for winter wheat and 12 Mt for rabi rice. The estimated mean crop production loss (CPL) for winter wheat is 9 to 29 Mt, accounting for an economic cost of 1,222 to 4,091 million US$ annually. Similarly, the mean CPL for rabi rice is 0.64 to 2.1 Mt, worth 86-276 million US$. Our calculated winter wheat and rabi rice losses agree well with previous results, providing further evidence that large crop yield losses are occurring in India due to current O3 concentrations, and that further elevated O3 concentrations in the future may pose a threat to food security.
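
    Both exposure indices are simple to compute from hourly data; a sketch assuming ppbv concentrations and local-hour labels (the exact handling of window boundaries is a detail of the index definitions):

```python
import numpy as np

def aot40_ppb_h(hourly_ppb, hour_of_day):
    """AOT40: sum of hourly excesses over 40 ppbv during daylight (0700-1800)."""
    conc = np.asarray(hourly_ppb, dtype=float)
    hrs = np.asarray(hour_of_day)
    daylight = (hrs >= 7) & (hrs < 18)
    return np.clip(conc - 40.0, 0.0, None)[daylight].sum()

def m7_ppb(hourly_ppb, hour_of_day):
    """M7: mean concentration over the 7-h daytime window (0900-1600)."""
    conc = np.asarray(hourly_ppb, dtype=float)
    hrs = np.asarray(hour_of_day)
    return conc[(hrs >= 9) & (hrs < 16)].mean()
```

    Note the design difference the abstract exploits: M7 averages all daytime values, while AOT40 accumulates only exceedances, so AOT40-based loss estimates respond much more strongly to peak episodes.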

  18. Maximum Earthquake Magnitude Assessments by Japanese Government Committees (Invited)

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2013-12-01

    The 2011 Tohoku earthquake (M 9.0) was the largest earthquake in Japanese history, and such a gigantic earthquake was not foreseen around Japan. After the 2011 disaster, various government committees in Japan have discussed and assessed the maximum credible earthquake size around Japan, but their values vary without definite consensus. I will review them with earthquakes along the Nankai Trough as an example. The Central Disaster Management Council (CDMC), under the Cabinet Office, set up a policy for future tsunami disaster mitigation. Possible future tsunamis are classified into two levels: L1 and L2. The L2 tsunamis are the largest possible tsunamis with low frequency of occurrence, for which saving people's lives is the first priority, with soft measures such as tsunami hazard maps, evacuation facilities or disaster education. The L1 tsunamis are expected to occur more frequently, typically once in a few decades, for which hard countermeasures such as breakwaters must be prepared. The assessments of L1 and L2 events are left to local governments. The CDMC also assigned M 9.1 as the maximum size of earthquake along the Nankai trough, then computed the ground shaking and tsunami inundation for several scenario earthquakes. The estimated loss is about ten times that of the 2011 disaster, with maximum casualties of 320,000 and economic loss of 2 trillion dollars. The Headquarters for Earthquake Research Promotion (HERP), under MEXT, was set up after the 1995 Kobe earthquake and has made long-term forecasts of large earthquakes and published national seismic hazard maps. The future probability of earthquake occurrence, for example in the next 30 years, was calculated from past data of large earthquakes, on the basis of a characteristic earthquake model. The HERP recently revised the long-term forecast of the Nankai trough earthquake; while the 30-year probability (60-70%) is similar to the previous estimate, they noted the size can be M 8 to 9, considering the variability of past

  19. Improved estimates of the European winter wind storm climate and the risk of reinsurance loss

    NASA Astrophysics Data System (ADS)

    Della-Marta, P. M.; Liniger, M. A.; Appenzeller, C.; Bresch, D. N.; Koellner-Heck, P.; Muccione, V.

    2009-04-01

    Current estimates of the European wind storm climate and the associated losses are often hampered by relatively short, coarse-resolution or inhomogeneous datasets. This study estimates the European wind storm climate using dynamical seasonal-to-decadal (s2d) climate forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF). The current s2d models have limited predictive skill for European storminess, making the ensemble forecasts ergodic samples on which to build pseudo-climates of 310 to 396 years in length. Extended winter (ONDJFMA) wind storm climatologies are created using a scalar extreme wind index considering only data above a high threshold. The method identifies between 2331 and 2471 wind storms using s2d data and 380 wind storms in ERA-40. Classical extreme value analysis (EVA) techniques are used to determine the wind storm climatologies. We suggest that the ERA-40 climatology, by virtue of its length, limiting form, and the fitting method, overestimates the return period (RP) of wind storms with RPs between 10 and 300 years and underestimates the return period of wind storms with RPs greater than 300 years. A 50-year event in ERA-40 is approximately a 33-year event using s2d. The largest influence on ERA-40 RP uncertainties is the sampling variability associated with only 45 seasons of storms. The climatologies are linked to the Swiss Reinsurance Company (Swiss Re) European wind storm loss model. New estimates of the risk of loss are compared with those from historical and stochastically generated wind storm fields used by Swiss Re. The resulting loss-frequency relationship matches well with the two independently modelled estimates and clearly demonstrates the added value of using alternative data and methods, as proposed in this study, to estimate the RP of high-RP losses.

  20. Period-dependent source rupture behavior of the 2011 Tohoku earthquake estimated by multi period-band Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.

    2014-12-01

    Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained with different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from multi period-band waveform data using a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal rupture behavior of this event in more detail, we introduce a new fault surface model with finer sub-fault size and estimate the source models in multiple period bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this period band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each period band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 x 16 km^2. The estimated source models in the multiple period bands show the following source image: (1) a first deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short-period (10-25 s) seismic waves; (2) a shallow rupture off Miyagi at 45-90 s toward up-dip, with long duration, radiating long-period (50-100 s) seismic waves; (3) a second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer-period seismic waves than the first deep rupture; (4) a deep

  1. Development of an online tool for tsunami inundation simulation and tsunami loss estimation

    NASA Astrophysics Data System (ADS)

    Srivihok, P.; Honda, K.; Ruangrassamee, A.; Muangsin, V.; Naparat, P.; Foytong, P.; Promdumrong, N.; Aphimaeteethomrong, P.; Intavee, A.; Layug, J. E.; Kosin, T.

    2014-05-01

    The devastating impacts of the 2004 Indian Ocean tsunami highlighted the need for an effective end-to-end tsunami early warning system in the region that connects the scientific components of warning with the preparedness of institutions and communities to respond to an emergency. Essential to preparedness planning is knowledge of tsunami risks. In this study, the development of an online tool named "INSPIRE" for tsunami inundation simulation and tsunami loss estimation is presented. The tool is designed to accommodate various accuracy levels of tsunami exposure data, supporting users in undertaking preliminary tsunami risk assessment from existing data, with progressive improvement as more detailed and accurate datasets are used. A sampling survey technique is introduced to improve local vulnerability data at lower cost and manpower. The performance of the proposed methodology and the INSPIRE tool was tested against the dataset for the Kamala and Patong municipalities, Phuket province, Thailand. The estimated building type ratios from the sampling survey show satisfactory agreement with the actual building data at the test sites. Sub-area classification by land use can improve the accuracy of the building type ratio estimation. For the resulting loss estimation, exposure data generated from a detailed field survey provide results in good agreement with the actual building damage recorded for the 2004 Indian Ocean tsunami event. However, lower-accuracy exposure data derived from sampling surveys and remote sensing can still provide a comparative overview of estimated loss.

  2. Earthquake Prediction and Forecasting

    NASA Astrophysics Data System (ADS)

    Jackson, David D.

    Prospects for earthquake prediction and forecasting, and even their definitions, are actively debated. Here, "forecasting" means estimating the future earthquake rate as a function of location, time, and magnitude. Forecasting becomes "prediction" when we identify special conditions that make the immediate probability much higher than usual and high enough to justify exceptional action. Proposed precursors run from aeronomy to zoology, but no identified phenomenon consistently precedes earthquakes. The reported prediction of the 1975 Haicheng, China earthquake is often proclaimed as the most successful, but the success is questionable. An earthquake predicted to occur near Parkfield, California in 1988±5 years has not happened. Why is prediction so hard? Earthquakes start in a tiny volume deep within an opaque medium; we do not know their boundary conditions, initial conditions, or material properties well; and earthquake precursors, if any, hide amongst unrelated anomalies. Earthquakes cluster in space and time, and following a quake, earthquake probability spikes. Aftershocks illustrate this clustering, and later earthquakes may even surpass earlier ones in size. However, the main shock in a cluster usually comes first and causes the most damage. Specific models help reveal the physics and allow intelligent disaster response. Modeling stresses from past earthquakes may improve forecasts, but this approach has not yet been validated prospectively. Reliable prediction of individual quakes is not realistic in the foreseeable future, but probabilistic forecasting provides valuable information for reducing risk. Recent studies are also leading to exciting discoveries about earthquakes.

  3. Preventing land loss in coastal Louisiana: estimates of WTP and WTA.

    PubMed

    Petrolia, Daniel R; Kim, Tae-Goun

    2011-03-01

    A dichotomous-choice contingent-valuation survey was conducted in the State of Louisiana (USA) to estimate compensating surplus (CS) and equivalent surplus (ES) welfare measures for the prevention of future coastal wetland losses in Louisiana. Valuations were elicited using both willingness to pay (WTP) and willingness to accept compensation (WTA) payment vehicles. The mean CS (WTP) estimate, based on a probit model with a Box-Cox specification on income, was $825 per household annually, and the mean ES (WTA) was estimated at $4444 per household annually. Regression results indicate that the major factors influencing support for land-loss prevention were income (positive, WTP model only), perceived hurricane protection benefits (positive), environmental and recreation protection (positive), distrust of government (negative), age (positive, WTA model only), and race (positive for whites). Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Source parameters of the 2014 Mw 6.1 South Napa earthquake estimated from the Sentinel 1A, COSMO-SkyMed and GPS data

    NASA Astrophysics Data System (ADS)

    Guangcai, Feng; Zhiwei, Li; Xinjian, Shan; Bing, Xu; Yanan, Du

    2015-08-01

    Using the combination of two InSAR data sets and one GPS data set, we present a detailed source model of the 2014 Mw 6.1 South Napa earthquake, the biggest tremor to hit the San Francisco Bay Area since the 1989 Mw 6.9 Loma Prieta earthquake. The InSAR data are from the Sentinel-1A (S1A) and COSMO-SkyMed (CS) satellites, and the GPS data are provided by the Nevada Geodetic Laboratory. We first obtain the complete coseismic deformation fields of this event and estimate the InSAR data errors, then use the S1A data to construct the fault geometry: one main fault and two short parallel sub-faults that had not been identified by field investigation. As expected, the geometry is in good agreement with the aftershock distribution. By inverting the InSAR and GPS data, we derive a three-segment slip and rake model. Our model indicates that this event was a right-lateral strike-slip earthquake with a slight reverse component on the West Napa Fault, as we estimated. The fault is ~30 km long and more than 80% of the seismic moment was released at the center of the fault segment, where the slip reached its maximum (up to 1 m). We also find a geodetic moment of 2.07 × 10^18 N m, corresponding to Mw 6.18, larger than the USGS (Mw 6.0) and GCMT (Mw 6.1) estimates. This difference may partly be explained by our InSAR data including about one week of postseismic deformation and aftershocks. The results also demonstrate the high signal-to-noise ratio and capability of the newly launched Sentinel-1A for earthquake studies. Furthermore, this study suggests that this earthquake has the potential to trigger nearby faults, especially the Green Valley fault, where Coulomb stress was imparted by the 2014 South Napa earthquake.
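
    The quoted conversion from geodetic moment to magnitude can be checked with the standard relation; the small difference from the reported Mw 6.18 presumably reflects the constant the authors adopted.

```python
import math

def mw_from_m0(m0_nm):
    """Mw = (2/3)*(log10(M0) - 9.1), with M0 in N m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

print(f"{mw_from_m0(2.07e18):.2f}")  # ~6.14 with this constant
```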

  5. Earthquake Facts

    MedlinePlus

    ... landslide (usually triggered by an earthquake) displacing the ocean water. The hypocenter of an earthquake is the ... is the zone of earthquakes surrounding the Pacific Ocean — about 90% of the world’s earthquakes occur ...

  6. Annual South American Forest Loss Estimates (1989-2011) Based on Passive Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    van Marle, M.; van der Werf, G.; de Jeu, R.; Liu, Y.

    2014-12-01

    Vegetation dynamics, such as forest loss, are an important factor in global climate, but long-term and consistent information on these dynamics on continental scales is lacking. We have quantified large-scale forest loss over the 1990s and 2000s in the tropical biomes of South America using a passive-microwave satellite-based vegetation product. Our forest loss estimates are based on remotely sensed vegetation optical depth (VOD), which is an indicator of vegetation water content simultaneously retrieved with soil moisture. The advantage of low-frequency microwave remote sensing is that aerosols and clouds do not affect the observations. Furthermore, the longer wavelengths of passive microwaves penetrate deeper into vegetation than other products derived from optical and thermal sensors. This has the consequence that both woody parts of vegetation and leaves can be observed. The merged VOD product of AMSR-E and SSM/I observations, which covers over 23 years of daily observations, is used. We used this data stream and an outlier detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Qualitatively, our results compared favorably to the newly developed Global Forest Change (GFC) maps based on Landsat data (r2=0.96), and this allowed us to convert the VOD outlier count to forest loss. Our results are spatially explicit with a 0.25-degree resolution and annual time step, and we will present our estimates at the country level. The added benefit of our results compared to GFC is the longer time period. The results indicate a relatively steady increase in forest loss in Brazil from 1989 until 2003, followed by two high forest loss years and a declining trend afterwards. This contrasts with other South American countries such as Bolivia and Peru, where forest losses increased in almost the whole 2000s in comparison with the 1990s.

  7. Estimation of coseismic deformation and a fault model of the 2010 Yushu earthquake using PALSAR interferometry data

    NASA Astrophysics Data System (ADS)

    Tobita, Mikio; Nishimura, Takuya; Kobayashi, Tomokazu; Hao, Ken Xiansheng; Shindo, Yoshikuni

    2011-07-01

    We present a map of the coseismic displacement field and slip distributions resulting from the Yushu earthquake of 14 April 2010 in Qinghai, China. The wide coverage of ScanSAR data increases the opportunity for interferometric synthetic aperture radar (InSAR) observation of a specific location on Earth, for estimating surface slip, and for constraining slip distribution on the fault plane. We increased InSAR sensitivity to the horizontal and vertical surface displacements by combining ascending and descending interferograms. We find that the maximum left-lateral surface slip is 166 cm at ~9.7 km WNW of Yushu. The end-to-end length of surface and subsurface faults is about 73 km, and the estimated lengths of the two surface fault lines are 30 km and ~9 km. The slip distribution on the fault plane inverted from InSAR data shows an almost pure left-lateral strike-slip, with two slip peaks near the epicentre and Yushu, and a maximum slip of ~2.6 m. No postseismic deformation is observed southeast of the source region; however, we detected a significant postseismic displacement northwest of the source region. The deformation area is located a few kilometres northwest of the coseismic displacement area observed around Longbao Lake, suggesting that the coseismic and postseismic slips are spatially isolated.
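
    Combining ascending and descending interferograms amounts to solving a small linear system per pixel; here is a sketch for the east and up components (north is poorly resolved by near-polar SAR geometries and is neglected). The headings, incidence angle and sign conventions are illustrative assumptions; they vary between sensors and processors.

```python
import numpy as np

def east_up_from_los(los_asc, los_desc, inc_deg=34.3,
                     heading_asc=-12.0, heading_desc=192.0):
    """Solve for (east, up) displacement from two LOS observations."""
    def unit_vector(heading_deg, inc_deg):
        # Ground-to-satellite LOS (east, up) for a right-looking SAR;
        # look azimuth is 90 degrees clockwise from the flight heading.
        look_az = np.radians(heading_deg + 90.0)
        i = np.radians(inc_deg)
        return np.array([-np.sin(i) * np.sin(look_az), np.cos(i)])
    A = np.vstack([unit_vector(heading_asc, inc_deg),
                   unit_vector(heading_desc, inc_deg)])
    return np.linalg.solve(A, np.array([los_asc, los_desc]))

print(east_up_from_los(0.10, -0.08))  # east, up in the units of the inputs
```

    Because the two geometries view the east component with opposite signs while sharing nearly the same vertical sensitivity, the combination separates horizontal from vertical motion, which is the sensitivity gain the abstract describes.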

  8. Estimating the permanent loss of groundwater storage in the southern San Joaquin Valley, California

    NASA Astrophysics Data System (ADS)

    Smith, R. G.; Knight, R.; Chen, J.; Reeves, J. A.; Zebker, H. A.; Farr, T.; Liu, Z.

    2017-03-01

    In the San Joaquin Valley, California, recent droughts starting in 2007 have increased the pumping of groundwater, leading to widespread subsidence. In the southern portion of the San Joaquin Valley, vertical subsidence as high as 85 cm has been observed between June 2007 and December 2010 using Interferometric Synthetic Aperture Radar (InSAR). This study seeks to map regions where inelastic (not recoverable) deformation occurred during the study period, resulting in permanent compaction and loss of groundwater storage. We estimated the amount of permanent compaction by incorporating multiple data sets: the total deformation derived from InSAR, estimated skeletal-specific storage and hydraulic parameters, geologic information, and measured water levels during our study period. We used two approaches, one that we consider to provide an estimate of the lowest possible amount of inelastic deformation, and one that provides a more reasonable estimate. These two approaches resulted in a spatial distribution of values for the percentage of the total deformation that was inelastic, with the former estimating a spatially averaged value of 54%, and the latter a spatially averaged value of 98%. The former corresponds to the permanent loss of 4.14 × 10^8 m^3 of groundwater storage, or roughly 5% of the volume of groundwater used over the study time period; the latter corresponds to the loss of 7.48 × 10^8 m^3 of groundwater storage, or roughly 9% of the volume of groundwater used. This study demonstrates that a data-driven approach can be used effectively to estimate the permanent loss of groundwater storage.

  9. Annual South American forest loss estimates based on passive microwave remote sensing (1990-2010)

    NASA Astrophysics Data System (ADS)

    van Marle, M. J. E.; van der Werf, G. R.; de Jeu, R. A. M.; Liu, Y. Y.

    2016-02-01

    Consistent forest loss estimates are important to understand the role of forest loss and deforestation in the global carbon cycle, for biodiversity studies, and to estimate the mitigation potential of reducing deforestation. To date, most studies have relied on optical satellite data and new efforts have greatly improved our quantitative knowledge on forest dynamics. However, most of these studies yield results for only a relatively short time period or are limited to certain countries. We have quantified large-scale forest loss over a 21-year period (1990-2010) in the tropical biomes of South America using remotely sensed vegetation optical depth (VOD). This passive microwave satellite-based indicator of vegetation water content and vegetation density has a much coarser spatial resolution than optical data but its temporal resolution is higher and VOD is not impacted by aerosols and cloud cover. We used the merged VOD product of the Advanced Microwave Scanning Radiometer (AMSR-E) and Special Sensor Microwave Imager (SSM/I) observations, and developed a change detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Our results compared reasonably well with the newly developed Landsat-based Global Forest Change (GFC) maps, available for the 2001 onwards period (r2 = 0.90 when comparing annual country-level estimates). This allowed us to convert our identified changes in VOD to forest loss area and compute these from 1990 onwards. We also compared these calibrated results to PRODES (r2 = 0.60 when comparing annual state-level estimates). We found that South American forest exhibited substantial interannual variability without a clear trend during the 1990s, but increased from 2000 until 2004. After 2004, forest loss decreased again, except for two smaller peaks in 2007 and 2010. For a large part, these trends were driven by changes in Brazil, which was responsible for 56 % of the total South American forest loss area over our study

  10. Senescence-related changes in nitrogen in fine roots: mass loss affects estimation.

    PubMed

    Kunkle, Justin M; Walters, Michael B; Kobe, Richard K

    2009-05-01

    The fate of nitrogen (N) in senescing fine roots has broad implications for whole-plant N economies and ecosystem N cycling. Studies to date have generally shown negligible changes in fine root N per unit root mass during senescence. However, unmeasured loss of mobile non-N constituents during senescence could lead to underestimates of fine root N loss. For N-fertilized and unfertilized potted seedlings of Populus tremuloides Michx., Acer rubrum L., Acer saccharum Marsh. and Betula alleghaniensis Britton, we predicted that the fine roots would lose mass and N during senescence. We estimated mass loss as the product of changes in root mass per length and root length between live and recently dead fine roots. Changes in root N were compared among treatments on uncorrected mass, length (which is independent of changes in mass per length), calcium (Ca) and corrected mass bases, and by evaluating the relationships of dead root N as a function of live root N, species and fertilization treatments. Across species, from live to dead roots, mass decreased 28-40%, N uncorrected for mass loss increased 10-35%, N per length decreased 5-16%, N per Ca declined 14-48% and N corrected for mass loss declined 12-28%. Given the magnitude of senescence-related root mass loss and uncertainties about Ca dynamics in senescing roots, N loss corrected for mass loss is likely the most reliable estimate of N loss. We re-evaluated published estimates of N changes during root senescence based on our values of mass loss and found an average of 28% lower N in dead roots than in live roots. Despite uncertainty about the contributions of resorption, leaching and microbial immobilization to the net loss of N during root senescence, live root N was a strong and proportional predictor of dead root N across species and fertilization treatments, suggesting that live root N alone could be used to predict the contributions of senescing fine roots to whole-plant N economies and N cycling.
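
    The mass-loss correction is simple arithmetic: if a gram of live root shrinks to (1 - mass loss) grams of dead root, the dead-root N concentration must be scaled down by that factor before comparison. A minimal sketch with hypothetical concentrations, chosen to fall inside the ranges reported above:

        # Hypothetical concentrations (mg N per g root) and mass loss,
        # chosen to fall within the ranges reported in the abstract.
        n_live = 12.0       # N concentration of live fine roots
        n_dead = 14.0       # N concentration of dead fine roots (uncorrected)
        mass_loss = 0.35    # fraction of root mass lost during senescence

        # Uncorrected, N concentration appears to rise during senescence:
        pct_uncorrected = 100.0 * (n_dead - n_live) / n_live

        # Corrected: each gram of live root becomes (1 - mass_loss) grams
        # of dead root, so scale dead-root N before comparing.
        n_dead_corr = n_dead * (1.0 - mass_loss)
        pct_corrected = 100.0 * (n_dead_corr - n_live) / n_live

        print(f"apparent change: {pct_uncorrected:+.0f}%")      # ~ +17%
        print(f"mass-corrected change: {pct_corrected:+.0f}%")  # ~ -24%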

  11. Estimating loss of Brucella abortus antibodies from age-specific serological data in elk

    USGS Publications Warehouse

    Benavides, J. A.; Caillaud, D.; Scurlock, B. M.; Maichak, E. J.; Edwards, W.H.; Cross, Paul C.

    2017-01-01

    Serological data are one of the primary sources of information for disease monitoring in wildlife. However, the duration of the seropositive status of exposed individuals is almost always unknown for many free-ranging host species. Directly estimating rates of antibody loss typically requires difficult longitudinal sampling of individuals following seroconversion. Instead, we propose a Bayesian statistical approach linking age and serological data to a mechanistic epidemiological model to infer brucellosis infection, the probability of antibody loss, and recovery rates of elk (Cervus canadensis) in the Greater Yellowstone Ecosystem. We found that seroprevalence declined above the age of ten, with no evidence of disease-induced mortality. The probability of antibody loss was estimated to be 0.70 per year after a five-year period of seropositivity, and the basic reproduction number for brucellosis to be 2.13. Our results suggest that individuals are unlikely to become re-infected, because models with this mechanism were unable to reproduce the observed decline in seroprevalence in older individuals. This study highlights the possible implications of antibody loss, which could bias our estimation of critical epidemiological parameters for wildlife disease management based on serological data.
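
    As a toy illustration of how delayed antibody loss can produce declining seroprevalence at older ages, the discrete-time cohort sketch below lets animals serorevert only after a fixed seropositive period, using the paper's 0.70/yr loss probability and five-year delay. The force of infection and everything else are hypothetical, and this is far simpler than the paper's Bayesian mechanistic model.

        import numpy as np

        # Discrete-time cohort sketch: animals seroconvert at a constant
        # annual probability and lose antibodies (0.70/yr, per the paper)
        # only after five years of seropositivity.
        lam = 0.08        # hypothetical annual seroconversion probability
        p_loss = 0.70     # annual antibody-loss probability after the delay
        delay = 5         # years seropositive before loss can begin
        max_age = 20

        susceptible = 1.0
        seropos = np.zeros(delay + 1)   # seropos[k]: seropositive k years

        for age in range(1, max_age + 1):
            lost = p_loss * seropos[delay]        # seroreversion, oldest class
            seropos[delay] += seropos[delay - 1] - lost
            for k in range(delay - 1, 0, -1):     # age the younger classes
                seropos[k] = seropos[k - 1]
            new_pos = lam * susceptible           # new seroconversions
            seropos[0] = new_pos
            susceptible += lost - new_pos
            print(f"age {age:2d}: seroprevalence {seropos.sum():.2f}")

    With these numbers, seroprevalence rises for the first several years and then drifts down toward an equilibrium once seroreversion in long-seropositive animals outpaces new seroconversions, qualitatively matching the decline above age ten noted above.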

  12. Calorie Estimation in Adults Differing in Body Weight Class and Weight Loss Status

    PubMed Central

    Brown, Ruth E; Canning, Karissa L; Fung, Michael; Jiandani, Dishay; Riddell, Michael C; Macpherson, Alison K; Kuk, Jennifer L

    2016-01-01

    Purpose Ability to accurately estimate calories is important for weight management, yet few studies have investigated whether individuals can accurately estimate calories during exercise, or in a meal. The objective of this study was to determine if accuracy of estimation of moderate or vigorous exercise energy expenditure and calories in food is associated with body weight class or weight loss status. Methods Fifty-eight adults who were either normal weight (NW) or overweight (OW), and either attempting (WL) or not attempting weight loss (noWL), exercised on a treadmill at a moderate (60% HRmax) and a vigorous intensity (75% HRmax) for 25 minutes. Subsequently, participants estimated the number of calories they expended through exercise, and created a meal that they believed to be calorically equivalent to the exercise energy expenditure. Results The mean difference between estimated and measured calories in exercise and food did not differ within or between groups following moderate exercise. Following vigorous exercise, OW-noWL overestimated energy expenditure by 72%, and overestimated the calories in their food by 37% (P<0.05). OW-noWL also significantly overestimated exercise energy expenditure compared to all other groups (P<0.05), and significantly overestimated calories in food compared to both WL groups (P<0.05). However, among all groups there was a considerable range of over and underestimation (−280 kcal to +702 kcal), as reflected by the large and statistically significant absolute error in calorie estimation of exercise and food. Conclusion There was a wide range of under and overestimation of calories during exercise and in a meal. Error in calorie estimation may be greater in overweight adults who are not attempting weight loss. PMID:26469988

  13. Calorie Estimation in Adults Differing in Body Weight Class and Weight Loss Status.

    PubMed

    Brown, Ruth E; Canning, Karissa L; Fung, Michael; Jiandani, Dishay; Riddell, Michael C; Macpherson, Alison K; Kuk, Jennifer L

    2016-03-01

    Ability to accurately estimate calories is important for weight management, yet few studies have investigated whether individuals can accurately estimate calories during exercise or in a meal. The objective of this study was to determine if accuracy of estimation of moderate or vigorous exercise energy expenditure and calories in food is associated with body weight class or weight loss status. Fifty-eight adults who were either normal weight (NW) or overweight (OW), and either attempting (WL) or not attempting weight loss (noWL), exercised on a treadmill at a moderate (60% HRmax) and a vigorous intensity (75% HRmax) for 25 min. Subsequently, participants estimated the number of calories they expended through exercise and created a meal that they believed to be calorically equivalent to the exercise energy expenditure. The mean difference between estimated and measured calories in exercise and food did not differ within or between groups after moderate exercise. After vigorous exercise, OW-noWL overestimated energy expenditure by 72% and overestimated the calories in their food by 37% (P < 0.05). OW-noWL also significantly overestimated exercise energy expenditure compared with all other groups (P < 0.05) and significantly overestimated calories in food compared with both WL groups (P < 0.05). However, among all groups, there was a considerable range of overestimation and underestimation (-280 to +702 kcal), as reflected by the large and statistically significant absolute error in calorie estimation of exercise and food. There was a wide range of underestimation and overestimation of calories during exercise and in a meal. Error in calorie estimation may be greater in overweight adults who are not attempting weight loss.

  14. Routine estimate of focal depths for moderate and small earthquakes by modelling regional depth phase sPmP in eastern Canada

    NASA Astrophysics Data System (ADS)

    Ma, S.; Peci, V.; Adams, J.; McCormack, D.

    2003-04-01

    Earthquake focal depths are critical parameters for basic seismological research, seismotectonic study, seismic hazard assessment, and event discrimination. Focal depths for most earthquakes with Mw >= 4.5 can be estimated from teleseismic arrival times of P, pP and sP. For smaller earthquakes, focal depths can be estimated from Pg and Sg arrival times recorded at close stations. However, for most earthquakes in eastern Canada, teleseismic signals are too weak and seismograph spacing is too sparse for depth estimation. The regional phase sPmP is very sensitive to focal depth, generally well developed at epicentral distances greater than 100 km, and clearly recorded at many stations in eastern Canada for earthquakes with mN >= 2.8. We developed a procedure to estimate focal depth routinely with sPmP. We select vertical waveforms recorded at distances from about 100 to 300 km (using Geotool and SAC2000), generate synthetic waveforms (using the reflectivity method) for a typical focal mechanism and for a suitable range of depths, and choose the depth at which the synthetic best matches the selected waveform. The software is easy to operate. For routine work an experienced operator can get a focal depth with waveform modelling within 10 minutes after the waveform is selected, or in a couple of minutes get a rough focal depth from sPmP and Pg or PmP arrival times without waveform modelling. We have confirmed our sPmP modelling results by two comparisons: (1) to depths
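
    The depth-picking step can be caricatured as a grid search over a bank of synthetics, choosing the trial depth whose waveform best correlates with the observed vertical trace. The sketch below substitutes random noise for real reflectivity synthetics; the bank, the window, and the correlation measure are all illustrative assumptions, not the authors' code.

        import numpy as np

        def norm_xcorr_max(a, b):
            """Peak normalized cross-correlation between two traces."""
            a = (a - a.mean()) / (a.std() * len(a))
            b = (b - b.mean()) / b.std()
            return float(np.correlate(a, b, mode="full").max())

        # Hypothetical bank of synthetics, one trace per trial depth (km);
        # in practice these would come from the reflectivity method for a
        # typical focal mechanism and the regional crustal model.
        rng = np.random.default_rng(1)
        depths = np.arange(2.0, 20.0, 1.0)
        synthetics = {z: rng.standard_normal(500) for z in depths}

        # Toy "observed" vertical trace: the 8 km synthetic plus noise.
        observed = synthetics[8.0] + 0.3 * rng.standard_normal(500)

        # Grid search: keep the depth whose synthetic best matches the
        # observed sPmP-bearing window.
        best = max(depths, key=lambda z: norm_xcorr_max(observed, synthetics[z]))
        print(f"best-fitting focal depth: {best:.0f} km")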

  15. Handbook for the estimation of microwave propagation effects: Link calculations for earth-space paths (path loss and noise estimation)

    NASA Technical Reports Server (NTRS)

    Crane, R. K.; Blood, D. W.

    1979-01-01

    A single model is proposed as a standard of comparison for other models when dealing with rain attenuation problems in system design and experimentation. Refinements to the Global Rain Prediction Model are incorporated. Path loss and noise estimation procedures are provided as the basic input to system design for earth-to-space microwave links operating at frequencies from 1 to 300 GHz. Topics covered include gaseous absorption, attenuation by rain, ionospheric and tropospheric scintillation, low elevation angle effects, radome attenuation, diversity schemes, link calculation, and receiver noise emission by atmospheric gases, rain, and antenna contributions.
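
    A bare-bones version of the kind of link calculation the handbook standardizes might combine free-space path loss with gaseous and rain attenuation and estimate the rain-induced sky noise. All parameter values below are hypothetical, and the handbook's procedures are far more detailed.

        import math

        f_ghz = 20.0          # carrier frequency (GHz)
        d_km = 38_000.0       # slant range to a geostationary satellite
        eirp_dbw = 50.0       # satellite EIRP
        g_rx_dbi = 45.0       # receive antenna gain
        a_gas_db = 0.6        # gaseous absorption along the path
        a_rain_db = 6.0       # rain attenuation exceeded 0.1% of the year

        # Free-space path loss: 92.45 + 20 log10(f_GHz) + 20 log10(d_km).
        fspl_db = 92.45 + 20 * math.log10(f_ghz) + 20 * math.log10(d_km)

        p_rx_dbw = eirp_dbw + g_rx_dbi - fspl_db - a_gas_db - a_rain_db

        # Rain also raises receiver noise: an attenuating medium at physical
        # temperature T_m emits with brightness T_m * (1 - 1/L).
        t_medium = 275.0                          # K
        loss_lin = 10 ** (a_rain_db / 10)
        t_sky = t_medium * (1 - 1 / loss_lin)     # noise added by rain

        print(f"FSPL = {fspl_db:.1f} dB, received power = {p_rx_dbw:.1f} dBW")
        print(f"rain-induced sky noise temperature ~ {t_sky:.0f} K")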

  16. Site Effect Estimation Based on the Aftershock Strong-Motion Data of the 2013 Lushan, China, earthquake

    NASA Astrophysics Data System (ADS)

    Yu, X.; Li, H.; Zhang, W.

    2016-12-01

    On April 20, 2013, an earthquake of magnitude MS 7.0 occurred in the Lushan region, Sichuan Province, China. The purpose of this paper is to investigate the site effects and Q factor for this area based on the aftershock data of this event. The generalized inversion technique (GIT) and the genetic algorithm (GA) are used in this study. Moreover, the GIT is modified to improve its analytical ability. Usually, the GIT needs a reference station as a standard. Ideally the reference station is located at a rock site, and its site effect is considered to be a constant. In the GIT process, the earthquake data available for analysis are limited to events recorded by the reference station, and site effects can be estimated only for stations that recorded common events with the reference station. To relax this limitation of the GIT, a modified GIT is put forward in this study, namely the transfer-station generalized inversion method (TSGI). Besides the reference station, a transfer station is added into the GIT. Compared with the GIT, this modified GIT can be used to enlarge the data set and increa
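
    Although the abstract is truncated, the core GIT factorization it describes, splitting path-corrected log spectral amplitudes into source and site terms with a reference rock site pinned to remove the trade-off, can be sketched as a constrained least-squares problem. Everything below is synthetic and illustrative, not the authors' implementation.

        import numpy as np

        # Synthetic GIT demo: path-corrected log10 spectral amplitudes
        # factor as log10 O_ij = src_i + site_j. Pinning the reference
        # station's site term to zero removes the source/site trade-off.
        rng = np.random.default_rng(2)
        n_ev, n_st = 30, 8
        src = rng.uniform(0.0, 2.0, n_ev)       # true log source terms
        site = rng.uniform(-0.5, 0.5, n_st)     # true log site terms
        site[0] = 0.0                           # station 0: rock reference

        rows, rhs = [], []
        for i in range(n_ev):
            for j in range(n_st):
                row = np.zeros(n_ev + n_st)
                row[i] = 1.0                    # source term of event i
                row[n_ev + j] = 1.0             # site term of station j
                rows.append(row)
                rhs.append(src[i] + site[j] + 0.05 * rng.standard_normal())

        # Heavily weighted constraint row: reference site term = 0.
        con = np.zeros(n_ev + n_st)
        con[n_ev] = 1.0
        A = np.vstack(rows + [100.0 * con])
        b = np.array(rhs + [0.0])

        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("recovered site terms:", np.round(sol[n_ev:], 2))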