Science.gov

Sample records for earthquake loss estimation

  1. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment performed just after strong earthquakes by worldwide systems operating in emergency mode. Timely and correct action just after an event can yield significant benefits in saving lives; in this case, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of a disaster. The uncertainties on the parameters used in the estimation process are numerous and large: knowledge of the physical phenomena and of the parameters used to describe them; the global adequacy of the modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of shaking; etc. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws that are poorly known (if not beyond the reach of engineers for a large portion of the building stock); while a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. The way the population inside the buildings at the time of shaking is affected by the physical damage to those buildings is also far from precisely known. The paper analyzes the influence of uncertainties in strong event parameters determined by alert seismological surveys, and of the simulation models used at all stages, from estimating shaking intensity

  2. Loss estimation of Mamberamo earthquake

    NASA Astrophysics Data System (ADS)

    Damanik, R.; Sedayo, H.

    2016-05-01

    Papua tectonics are dominated by the oblique collision of the Pacific plate along the north side of the island. Very high relative plate motion (about 120 mm/year) between the Pacific and Papua-Australian Plates gives this region a very high earthquake production rate, about twice that of Sumatra, the western margin of Indonesia. Most of the seismicity beneath the island of New Guinea is clustered near the Huon Peninsula, the Mamberamo region, and the Bird's Neck. At 04:41 local time (GMT+9) on July 28th, 2015, a large earthquake of Mw = 7.0 occurred on the West Mamberamo Fault System. The earthquake focal mechanisms are dominated by northwest-trending thrust mechanisms. A GMPE and ATC vulnerability curves were used to estimate the distribution of damage. The mean loss estimated for this earthquake is IDR 78.6 billion. We estimate that insured losses will be only a small portion of the total, owing to deductibles.

  3. Regional Earthquake Shaking and Loss Estimation

    NASA Astrophysics Data System (ADS)

    Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR, and ETH-Zurich, is capable of incorporating regional variability and the sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic database and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 3. Incorporating strong ground motion and other empirical macroseismic data to improve the Shake Map. 4. Estimating the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Both the Level 0 (similar to the PAGER system of the USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory databases (Level 1). For given
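
    As a rough illustration of how a Level 0-style chain fits together (an intensity field from basic source parameters, then an intensity-casualty correlation), the sketch below uses a generic point-source attenuation and a hypothetical casualty function; both functions and all coefficients are illustrative assumptions, not ELER's calibrated relationships.

```python
# Minimal Level 0-style loss pass (illustrative, not ELER code):
# 1) attenuate epicentral intensity out to each settlement,
# 2) apply a hypothetical intensity-casualty correlation.
import math

def mmi_at(i0, dist_km, depth_km=10.0):
    """Generic point-source macroseismic attenuation (placeholder form)."""
    r = math.hypot(dist_km, depth_km)
    return i0 - 3.0 * math.log10(r / depth_km)

def casualties(population, mmi, c=5e-7, k=2.0):
    """Hypothetical intensity-casualty correlation, exponential in MMI."""
    return population * c * math.exp(k * max(mmi - 5.0, 0.0))

# Settlements: (name, epicentral distance in km, population) - invented data.
settlements = [("A", 5, 50_000), ("B", 20, 200_000), ("C", 60, 1_000_000)]
total = 0.0
for name, d, pop in settlements:
    mmi = mmi_at(8.0, d)
    total += casualties(pop, mmi)
    print(f"{name}: MMI ~ {mmi:.1f}")
print(f"estimated casualties (illustrative): {total:.0f}")
```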

  4. Estimating economic losses from earthquakes using an empirical approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2013-01-01

    We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
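
    To make the structure of such an empirical model concrete, here is a minimal sketch under assumed parameters: a lognormal loss-ratio function of shaking intensity applied to GDP-derived exposure, in the spirit of the approach described above. The values of theta, beta, and the exposure multiplier alpha are placeholders, not the calibrated country-specific parameters of the paper.

```python
# Sketch of an intensity-based economic loss model (illustrative parameters).
from math import log, sqrt, erf

def loss_ratio(mmi, theta=9.0, beta=0.3):
    """Lognormal CDF of intensity: fraction of exposure lost at a given MMI."""
    return 0.5 * (1.0 + erf(log(mmi / theta) / (beta * sqrt(2.0))))

def economic_exposure(population, gdp_per_capita, alpha=3.0):
    """A country-specific multiplier scales per-capita GDP to exposure."""
    return alpha * gdp_per_capita * population

# Population exposed per intensity bin (e.g. from ShakeMap x LandScan).
exposure_by_mmi = {6.0: 2_000_000, 7.0: 500_000, 8.0: 100_000}
gdp_pc = 5_000  # USD per capita, invented
loss = sum(loss_ratio(mmi) * economic_exposure(pop, gdp_pc)
           for mmi, pop in exposure_by_mmi.items())
print(f"estimated economic loss: ${loss / 1e6:,.0f} M (illustrative)")
```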

  5. Estimating annualized earthquake losses for the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Bausch, Douglas; Chen, Rui; Bouabid, Jawhar; Seligson, Hope

    2015-01-01

    We make use of the most recent National Seismic Hazard Maps (the years 2008 and 2014 cycles), updated census data on population, and economic exposure estimates of general building stock to quantify annualized earthquake loss (AEL) for the conterminous United States. The AEL analyses were performed using the Federal Emergency Management Agency's (FEMA) Hazus software, which facilitated a systematic comparison of the influence of the 2014 National Seismic Hazard Maps in terms of annualized loss estimates in different parts of the country. The losses from an individual earthquake could easily exceed many tens of billions of dollars, and the long-term averaged value of losses from all earthquakes within the conterminous U.S. has been estimated to be a few billion dollars per year. This study estimated nationwide losses to be approximately $4.5 billion per year (in 2012$), roughly 80% of which can be attributed to the States of California, Oregon and Washington. We document the change in estimated AELs arising solely from the change in the assumed hazard map. The change from the 2008 map to the 2014 map results in a 10 to 20% reduction in AELs for the highly seismic States of the Western United States, whereas the reduction is even more significant for Central and Eastern United States.
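
    As a worked miniature of the annualization step: an AEL is, in essence, the loss at each hazard level weighted by the annual occurrence frequency of that level. The hazard-curve numbers below are invented for illustration and are unrelated to Hazus's internal values.

```python
# Annualized earthquake loss from a discretized loss exceedance curve.
# Pairs of (annual exceedance frequency, loss in $ billions at that level).
hazard_curve = [(0.01, 10.0), (0.002, 50.0), (0.0004, 150.0)]

ael = 0.0
for i, (freq, loss) in enumerate(hazard_curve):
    # Occurrence frequency of this bin = exceedance frequency of this level
    # minus the exceedance frequency of the next (rarer) level.
    next_freq = hazard_curve[i + 1][0] if i + 1 < len(hazard_curve) else 0.0
    ael += (freq - next_freq) * loss
print(f"AEL ~ ${ael:.2f} B/yr (illustrative)")
```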

  6. Property loss estimation for wind and earthquake perils.

    PubMed

    Chandler, A M; Jones, E J; Patel, M H

    2001-04-01

    This article describes the development of a generic loss assessment methodology applicable to earthquake and windstorm perils worldwide. The latest information on hazard estimation is first integrated with the parameters that best describe the intensity of the action of windstorms and earthquakes on building structures, for events with defined average return periods or recurrence intervals. The subsequent evaluation of building vulnerability (damageability) under earthquake and windstorm loadings uses information on damage and loss from past events, along with an assessment of key building properties (including age and quality of design and construction), to assess the ability of buildings to withstand such loadings and hence to assign a building type to the particular risk or portfolio of risks. This predicted damage information is then translated into risk-specific mathematical vulnerability functions, which enable numerical evaluation of the probability of building damage arising at various defined levels. By assigning cost factors to the defined damage levels, the total loss at a given level of hazard may be computed. The methodology is universal in the sense that it may be applied successfully to buildings situated in a variety of earthquake and windstorm environments, ranging from very low to extreme levels of hazard. As a loss prediction tool, it enables accurate estimation of losses from potential scenario events linked to defined return periods and, hence, can greatly assist risk assessment and planning.
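
    The final step described above, assigning cost factors to damage levels, reduces to a weighted sum. A small worked example with invented damage-state probabilities and cost factors:

```python
# Expected loss from damage-state probabilities and cost factors (invented).
damage_states = {           # P(state | hazard level) from a vulnerability curve
    "none": 0.60, "slight": 0.25, "moderate": 0.10,
    "severe": 0.04, "collapse": 0.01,
}
cost_factor = {"none": 0.00, "slight": 0.02, "moderate": 0.10,
               "severe": 0.50, "collapse": 1.00}

building_value = 500_000  # replacement cost, invented
mean_damage_ratio = sum(p * cost_factor[s] for s, p in damage_states.items())
print(f"mean damage ratio = {mean_damage_ratio:.3f}, "
      f"expected loss = ${mean_damage_ratio * building_value:,.0f}")
```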

  7. Global Building Inventory for Earthquake Loss Estimation and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David; Porter, Keith

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types, for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis for the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat’s demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia, and (5) other literature.

  8. Earthquake Loss Estimates in Near Real-Time

    NASA Astrophysics Data System (ADS)

    Wyss, Max; Wang, Rongjiang; Zschau, Jochen; Xia, Ye

    2006-10-01

    The usefulness to rescue teams of near-real-time loss estimates after major earthquakes is advancing rapidly. The difference in the quality of data available in highly developed compared with developing countries dictates that different approaches be used to maximize mitigation efforts. In developed countries, extensive information from tax and insurance records, together with accurate census figures, furnishes detailed data on the fragility of buildings and on the number of people at risk. For example, these data are exploited by the loss estimation method used in the Hazards U.S. Multi-Hazard (HAZUS-MH) software program (http://www.fema.gov/plan/prevent/hazus/). However, in developing countries, the population at risk is estimated from inferior data sources, and the fragility of the building stock often is derived empirically, using past disastrous earthquakes for calibration [Wyss, 2004].

  9. Rapid estimation of earthquake loss based on instrumental seismic intensity: design and realization

    NASA Astrophysics Data System (ADS)

    Huang, Hongsheng; Chen, Lin; Zhu, Gengqing; Wang, Lin; Lin, Yanzhao; Wang, Huishan

    2013-11-01

    Thanks to the ability to acquire large volumes of real-time earthquake observation data, coupled with increased computer performance, near-real-time instrumental seismic intensity can be obtained from ground motion data observed by instruments, using appropriate spatial interpolation methods. By combining vulnerability results from earthquake disaster research with earthquake disaster assessment models, we can estimate the losses caused by devastating earthquakes, in an attempt to provide more reliable information for earthquake emergency response and decision support. This paper analyzes the latest progress on methods of rapid earthquake loss estimation in China and abroad. A new method that uses rapid reporting of instrumental seismic intensity to estimate earthquake loss is proposed, and the relevant software is developed. Finally, a case study using the ML 4.9 earthquake that occurred in Shunchang County, Fujian Province on March 13, 2007 is given as an example of the proposed method.
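
    The interpolation step can be illustrated with inverse-distance weighting, one common choice for gridding station intensities; the abstract does not state which interpolation method the authors use, so treat this as an assumed stand-in.

```python
# Inverse-distance weighting of instrumental intensities onto a query point.
def idw_intensity(stations, x, y, power=2.0):
    """stations: list of (x, y, intensity); returns interpolated intensity."""
    num = den = 0.0
    for sx, sy, val in stations:
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return val  # query point coincides with a station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * val
        den += w
    return num / den

obs = [(0.0, 0.0, 7.2), (10.0, 0.0, 6.1), (0.0, 15.0, 5.4)]  # invented
print(f"intensity at (4, 5): {idw_intensity(obs, 4.0, 5.0):.2f}")
```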

  10. Earthquake loss estimates in real time begin to assist rescue teams worldwide

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    Recent advances are improving the speed and accuracy of loss estimates immediately after earthquakes, so that injured people may be rescued more efficiently. After major and large earthquakes, rescue agencies and civil defense managers rapidly need quantitative estimates of the extent of the potential disaster, at a time when data from the affected area may not yet have reached the outside world. Loss estimates for hypothetical future earthquakes are also reaching a level where they are useful for motivating and planning earthquake disaster mitigation. In many developing countries, urbanization and population are increasing at an unprecedented pace, so the extent of future earthquake disasters cannot easily be estimated from historical experience, which typically dates from about a hundred years ago. Even for order-of-magnitude estimates of future losses, it is necessary to include information on the current quality of buildings, the soil properties, and the present population.

  11. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WG02). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, and data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined by the Working Group. The study considers the effect of relaxing certain assumptions in the WG02 model and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: What would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  12. A global building inventory for earthquake loss estimation and risk management

    USGS Publications Warehouse

    Jaiswal, K.; Wald, D.; Porter, K.

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types, for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia, and (5) other literature. © 2010, Earthquake Engineering Research Institute.

  13. Comparing population exposure to multiple Washington earthquake scenarios for prioritizing loss estimation studies

    USGS Publications Warehouse

    Wood, Nathan J.; Ratliff, Jamie L.; Schelling, John; Weaver, Craig S.

    2014-01-01

    Scenario-based, loss-estimation studies are useful for gauging potential societal impacts from earthquakes but can be challenging to undertake in areas with multiple scenarios and jurisdictions. We present a geospatial approach using various population data for comparing earthquake scenarios and jurisdictions to help emergency managers prioritize where to focus limited resources on data development and loss-estimation studies. Using 20 earthquake scenarios developed for the State of Washington (USA), we demonstrate how a population-exposure analysis across multiple jurisdictions based on Modified Mercalli Intensity (MMI) classes helps emergency managers understand and communicate where potential loss of life may be concentrated and where impacts may be more related to quality of life. Results indicate that certain well-known scenarios may directly impact the greatest number of people, whereas other, potentially lesser-known, scenarios impact fewer people but consequences could be more severe. The use of economic data to profile each jurisdiction’s workforce in earthquake hazard zones also provides additional insight on at-risk populations. This approach can serve as a first step in understanding societal impacts of earthquakes and helping practitioners to efficiently use their limited risk-reduction resources.
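
    At its core, such a population-exposure analysis is a cross-tabulation of population by jurisdiction and MMI class. A minimal sketch with invented cell data:

```python
# Sum population per (jurisdiction, MMI class) over exposure cells.
from collections import defaultdict

# (jurisdiction, MMI class, population) per raster cell or census block.
cells = [
    ("CountyA", "VIII", 1200), ("CountyA", "VII", 5400),
    ("CountyB", "VII", 800), ("CountyB", "VI", 9100),
]

exposure = defaultdict(int)
for county, mmi, pop in cells:
    exposure[(county, mmi)] += pop

for (county, mmi), pop in sorted(exposure.items()):
    print(f"{county}, MMI {mmi}: {pop:,} people")
```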

  14. A new Tool for Estimating Losses due to Earthquakes: QUAKELOSS2

    NASA Astrophysics Data System (ADS)

    Kaestli, P.; Wyss, M.; Bonjour, C.; Wiemer, S.; Wyss, B. M.

    2007-12-01

    WAPMERR and the Swiss Seismological Service are developing new software for estimating the mean damage to buildings and the numbers of injured and fatalities due to earthquakes worldwide. The focus for applications is real-time estimates of losses after earthquakes in countries without dense seismograph networks, with results that are easy for relief agencies to digest. Therefore, the standard version of the software addresses losses by settlement, subdivisions of settlements, and important pieces of infrastructure. However, a generic design, an open source policy, and well-defined interfaces will allow the software to work on any gridded or discrete building stock data, to do Monte Carlo simulations for error assessment, and to plug in more elaborate source models than simple point and line sources, and thus to compute realistic loss scenarios as well as probabilistic risk maps. It will provide interfaces to SHAKEMAP and PAGER, such that innovations developed for the latter programs may be used in QUAKELOSS2, and vice versa. A client-server design will provide a front-end web interface through which the user may directly manage servers, as well as the option to run the software in one's own laboratory. The input-output features and mapping will be designed to allow the user to run QUAKELOSS2 remotely with basic functions, as well as in a laboratory setting including a full-featured GIS setup for additional analysis. In many cases, the input data (earthquake parameters as well as population and building stock data) are poorly known for developing countries. Calibration of loss estimates, using past earthquakes that have caused damage and WAPMERR's four years of experience estimating losses, will help to produce approximately correct results in countries with strong earthquake activity. A worldwide standard dataset on population and building stock will be provided as open source together with the software. The dataset will be improved successively, based on input from satellite images

  15. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    USGS Publications Warehouse

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the ongoing developments of PAGER’s loss estimation models and discuss value-added web content that can be generated related to exposure, damage, and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary to the underlying building stock and vulnerability data at a global scale. Existing efforts with GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  16. Loss estimates for a Puente Hills blind-thrust earthquake in Los Angeles, California

    USGS Publications Warehouse

    Field, E.H.; Seligson, H.A.; Gupta, N.; Gupta, V.; Jordan, T.H.; Campbell, K.W.

    2005-01-01

    Based on OpenSHA and HAZUS-MH, we present loss estimates for an earthquake rupture on the recently identified Puente Hills blind-thrust fault beneath Los Angeles. Given a range of possible magnitudes and ground motion models, and presuming a full fault rupture, we estimate the total economic loss to be between $82 and $252 billion. This range is not only considerably higher than a previous estimate of $69 billion, but also implies the event would be the costliest disaster in U.S. history. The analysis has also provided the following predictions: 3,000-18,000 fatalities, 142,000-735,000 displaced households, 42,000-211,000 in need of short-term public shelter, and 30,000-99,000 tons of debris generated. Finally, we show that the choice of ground motion model can be more influential than the earthquake magnitude, and that reducing this epistemic uncertainty (e.g., via model improvement and/or rejection) could reduce the uncertainty of the loss estimates by up to a factor of two. We note that a full Puente Hills fault rupture is a rare event (once every ~3,000 years), and that other seismic sources pose significant risk as well. © 2005, Earthquake Engineering Research Institute.

  17. Estimating earthquake potential

    USGS Publications Warehouse

    Page, R.A.

    1980-01-01

    The hazards to life and property from earthquakes can be minimized in three ways. First, structures can be designed and built to resist the effects of earthquakes. Second, the location of structures and human activities can be chosen to avoid or to limit the use of areas known to be subject to serious earthquake hazards. Third, preparations for an earthquake in response to a prediction or warning can reduce the loss of life and damage to property as well as promote a rapid recovery from the disaster. The success of the first two strategies, earthquake engineering and land use planning, depends on being able to reliably estimate the earthquake potential. The key considerations in defining the potential of a region are the location, size, and character of future earthquakes and frequency of their occurrence. Both historic seismicity of the region and the geologic record are considered in evaluating earthquake potential. 

  18. Regional earthquake loss estimation in the Autonomous Province of Bolzano - South Tyrol (Italy)

    NASA Astrophysics Data System (ADS)

    Huttenlau, Matthias; Winter, Benjamin

    2013-04-01

    Besides storm events, geophysical events cause a majority of natural hazard losses on a global scale. However, in alpine regions with a moderate earthquake risk potential, like the study area, and a correspondingly weak presence of this hazard in the collective memory, this source of risk is often neglected in contrast to gravitational and hydrological hazard processes. In this context, the comparative analysis of potential disasters and emergencies on a national level in Switzerland (the Katarisk study) has shown that earthquakes are the most serious source of risk in general. In order to estimate the potential losses of earthquake events for different return periods and the loss dimensions of extreme events, the following study was conducted in the Autonomous Province of Bolzano - South Tyrol (Italy). The applied methodology follows the generally accepted risk concept based on the risk components hazard, elements at risk and vulnerability, whereby risk is not defined holistically (direct, indirect, tangible and intangible) but with the risk category losses on buildings and inventory as a general risk proxy. The hazard analysis is based on a regional macroseismic scenario approach. The settlement centre of each of the 116 communities is defined as a potential epicentre. For each epicentre, four epicentral scenarios (return periods of 98, 475, 975 and 2475 years) are calculated based on the simple but approved and generally accepted attenuation law of Sponheuer (1960). The relevant input parameters for the epicentral scenarios are (i) the macroseismic intensity and (ii) the focal depth. The macroseismic intensities are based on a probabilistic seismic hazard analysis (PSHA) of the Italian earthquake catalogue at the community level (Dipartimento della Protezione Civile). The relevant focal depths are taken as a mean within a defined buffer of the focal depths in the harmonized earthquake catalogues of Italy and Switzerland as well as
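
    For reference, Sponheuer's (1960) law builds on the Kövesligethy relation and is commonly quoted in the form below, where I_0 is the epicentral intensity, h the focal depth, d the epicentral distance, and α a region-specific absorption coefficient; exact constants vary between presentations, so treat this as indicative rather than the study's exact formulation.

```latex
I(d) = I_0 \;-\; 3\,\log_{10}\!\frac{\sqrt{d^{2}+h^{2}}}{h}
       \;-\; 3\,\alpha\,\bigl(\sqrt{d^{2}+h^{2}}-h\bigr)\log_{10}e
```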

  19. Ways to increase the reliability of earthquake loss estimations in emergency mode

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander

    2016-04-01

    The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many other countries show that the authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can yield significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of the rough and rapid information provided in emergency mode by "global systems" (i.e., systems operated without regard to where the earthquake has occurred) depends strongly on many factors related to the input data and simulation models used in such systems. The paper analyses the contributions of the different factors to the total "error" of fatality estimation in emergency mode. Examples from four strong events in Nepal, Italy and China lead to the conclusion that the reliability of loss estimations is influenced first of all by uncertainties in the determination of event parameters (coordinates, magnitude, source depth); this factor group ranks highest, with a degree of influence on the reliability of loss estimations of about 50%. The second place is taken by the factor group responsible for macroseismic field simulation, with a degree of influence of about 30%. The last place is taken by the factor group describing the built environment distribution and regional vulnerability functions, which contributes about 20% to the error of loss estimation. Ways to minimize the influence of the different factors on the reliability of loss assessment in near real time are proposed. The first is to rate the seismological surveys for different zones, in an attempt to decrease the uncertainties in the earthquake parameter inputs in emergency mode. The second is to "calibrate" the "global systems" drawing advantage
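
    One way to read the 50/30/20% ratings above is as a variance budget. Under the simplifying assumption, made only for this sketch, that the three factor groups contribute independently and additively to the variance of the loss estimate, halving the uncertainty of the dominant group shrinks the total error far more than improving the smaller ones:

```python
# Error-budget arithmetic for the three factor groups (variance shares of
# 50/30/20% per the abstract; independence and additivity are simplifying
# assumptions of this sketch, not claims from the paper).
import math

shares = {"event parameters": 0.50,
          "macroseismic field": 0.30,
          "exposure/vulnerability": 0.20}

def total_sigma(scale=None):
    """Relative total error; 'scale' optionally shrinks one group's sigma."""
    var = 0.0
    for name, share in shares.items():
        sigma = math.sqrt(share)
        if scale and name in scale:
            sigma *= scale[name]
        var += sigma ** 2
    return math.sqrt(var)

print(f"baseline: {total_sigma():.2f}")                       # 1.00
print(f"event-parameter sigma halved: "
      f"{total_sigma({'event parameters': 0.5}):.2f}")        # ~0.79
```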

  20. When the Big One Strikes Again: Estimated Losses due to a Repeat of the 1906 San Francisco Earthquake

    NASA Astrophysics Data System (ADS)

    Kircher, C. A.

    2006-12-01

    The 1906 San Francisco Earthquake, estimated to have a magnitude of 7.9, changed the history of California and indeed the whole nation. The earthquake and the associated fires caused about 3,000 deaths and $524 million in property damage. Much has changed since 1906. What would happen if an earthquake like that of 1906 were to happen today? Here I present results of an ongoing study of the building damage and losses likely to occur due to a repeat of the 1906 San Francisco earthquake, using the HAZUS technology. Recent work by Boatwright et al. (2006) provides estimates of spectral response accelerations derived from observations of modified Mercalli intensities (MMI) in the 1906 event. In one scenario we calculate damage and loss estimates using those estimated ground motions. In another we use a method consistent with current seismic provisions of building codes for a magnitude M7.9 event on the San Andreas Fault. Our study region includes 19 counties covering 24,000 square miles, with a population of more than ten million people and about $1.5 trillion of building and contents exposure. The majority of this property and population is within 40 km (25 miles) of the San Andreas Fault. The current population of this Northern California region is about ten times what it was in 1906, and the replacement value of buildings is about 500 times greater. Despite improvements in building codes and construction practices, the growth of the region over the past 100 years causes the range of estimated fatalities, from approximately 800 to about 3,400 depending on time of day and other variables, to be comparable to what it was in 1906. The forecast property loss to buildings ranges from $90 to $120 billion. From 7,000 to 10,000 commercial buildings in the region may be closed due to serious damage, and about 160,000 to 250,000 households may be displaced from damaged residences. Losses due to fire following earthquake, as well as losses to utility and transportation systems, would be

  21. Impact of Uncertainty on Loss Estimates for a Repeat of the 1908 Messina-Reggio Calabria Earthquake in Southern Italy

    NASA Astrophysics Data System (ADS)

    Franco, Guillermo; Shen-Tu, BingMing; Goretti, Agostino; Bazzurro, Paolo; Valensise, Gianluca

    2008-07-01

    Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive usage of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. Here the potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature that encompass 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates that allows the model user to estimate the impact of the uncertainty on input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million euros. The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, reaching approximately 45%. The choice of ground motion prediction equations and vulnerability functions of the building stock contribute the most to the uncertainty in loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.

  22. Impact of Uncertainty on Loss Estimates for a Repeat of the 1908 Messina-Reggio Calabria Earthquake in Southern Italy

    SciTech Connect

    Franco, Guillermo; Shen-Tu, Bing Ming; Bazzurro, Paolo; Goretti, Agostino; Valensise, Gianluca

    2008-07-08

    Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive usage of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. Here the potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature that encompass 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates that allows the model user to estimate the impact of the uncertainty on input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million euros. The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, reaching approximately 45%. The choice of ground motion prediction equations and vulnerability functions of the building stock contribute the most to the uncertainty in loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.

  23. Observed and estimated economic losses in Guadeloupe (French Antilles) after Les Saintes Earthquake (2004). Application to risk comparison

    NASA Astrophysics Data System (ADS)

    Monfort, Daniel; Reveillère, Arnaud; Lecacheux, Sophie; Muller, Héloise; Grisanti, Ludovic; Baills, Audrey; Bertil, Didier; Sedan, Olivier; Tinard, Pierre

    2013-04-01

    The main objective of this work is to compare the potential direct economic losses from two different hazards in Guadeloupe (French Antilles), earthquakes and storm surges, for different return periods. In order to validate some of the hypotheses made concerning building typologies and their insured values, estimated economic losses are compared with real loss data from an actual event. In 2004, an Mw 6.3 earthquake struck Les Saintes, a small archipelago in the south of Guadeloupe. The heaviest intensities were VIII in the municipalities of Les Saintes, decreasing from VII to IV in the other municipalities of Guadeloupe. CCR, the French reinsurance organization, provided the total insured economic losses estimated per municipality (as of 2011) and the insurance penetration ratio, that is, the ratio of insured exposed elements per municipality. Other information about observed damaged structures is quite irregular across the archipelago, the only reliable element being the observed macroseismic intensity per municipality (field survey by BCSF). These data at Guadeloupe's scale were compared with the results of a retrospective damage scenario for this earthquake, built from vulnerability data for current buildings and the mean economic value of each building type, and taking into account local amplification effects on earthquake propagation. In general the results are quite similar, but with some significant differences. The scenario results correlate strongly with the spatial attenuation of the earthquake intensity; the heaviest economic losses are concentrated in the municipalities exposed to considerable, damaging intensities (VII to VIII). On the other side, the CCR data show that heavy economic damages are located not only in the most impacted cities but also in the most important municipalities of the archipelago in terms of economic activity

  24. Planning a Preliminary program for Earthquake Loss Estimation and Emergency Operation by Three-dimensional Structural Model of Active Faults

    NASA Astrophysics Data System (ADS)

    Ke, M. C.

    2015-12-01

    Large-scale earthquakes often cause serious economic losses and many deaths. Because the magnitude, time and location of earthquakes still cannot be predicted, pre-disaster risk modeling and post-disaster operation are important means of reducing earthquake damage. In order to understand the disaster risk of earthquakes, earthquake simulation is usually used to build earthquake scenarios, with point sources, fault line sources and fault plane sources as the seismic source models of the scenarios. The assessment results made with these different models serve earthquake risk assessment and emergency operation well, but their accuracy could still be improved. This program invites experts and scholars from Taiwan University, National Central University, and National Cheng Kung University, and uses historical earthquake records, geological data and geophysical data to build three-dimensional underground structural planes of active faults. The purpose is to replace projected fault planes with underground fault planes that are closer to reality. The accuracy of earthquake prevention analyses can be improved with this database, and the three-dimensional data will be applied to different stages of disaster prevention. Before a disaster, the results of earthquake risk analysis obtained from the three-dimensional fault plane data are closer to the real damage. During a disaster, the three-dimensional fault plane data can help estimate the distribution of aftershocks and the areas of serious damage. In 2015 the program used 14 geological profiles to build the three-dimensional data of the Hsinchu and Hsincheng faults. Other active faults will be completed in 2018 and actually applied to earthquake disaster prevention.

  25. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR, and ETH-Zurich, is capable of incorporating regional variability and the sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic database and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 3. Incorporating strong ground motion and other empirical macroseismic data to improve the Shake Map. 4. Estimating the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). The Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macroeconomic loss quantifiers) in urban areas. The basic Shake Mapping is similar to the Level 0 and Level 1 analyses; however, options are available for more sophisticated treatment of site response through externally entered data and improvement of the shake map through incorporation

  26. Too generous to a fault? Is reliable earthquake safety a lost art? Errors in expected human losses due to incorrect seismic hazard estimates

    NASA Astrophysics Data System (ADS)

    Bela, James

    2014-11-01

    "One is well advised, when traveling to a new territory, to take a good map and then to check the map with the actual territory during the journey." In just such a reality check, Global Seismic Hazard Assessment Program (GSHAP) maps (prepared using PSHA) portrayed a "low seismic hazard," which was then also assumed to be the "risk to which the populations were exposed." But time-after-time-after-time the actual earthquakes that occurred were not only "surprises" (many times larger than those implied on the maps), but they were often near the maximum potential size (Maximum Credible Earthquake or MCE) that geologically could occur. Given these "errors in expected human losses due to incorrect seismic hazard estimates" revealed globally in these past performances of the GSHAP maps (> 700,000 deaths 2001-2011), we need to ask not only: "Is reliable earthquake safety a lost art?" but also: "Who and what were the `Raiders of the Lost Art?' "

  27. A new method for the production of social fragility functions and the result of its use in worldwide fatality loss estimation for earthquakes

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    A review of over 200 fatality models for earthquake loss estimation from various authors over the past 50 years has identified the key parameters that influence fatality estimation in each of these models. These are often very specific and cannot readily be adapted globally. In the doctoral dissertation of the author, a new method is used for the regression of fatalities against intensity, using loss functions based not only on fatalities but also on population models and other socioeconomic parameters created through time for every country worldwide for the period 1900-2013. A calibration of the functions was undertaken for 1900-2008, and each individual quake was analysed in real time from 2009-2013, in conjunction with www.earthquake-report.com. Using the CATDAT Damaging Earthquakes Database, which contains socioeconomic loss information for 7208 damaging earthquake events from 1900-2013 including the disaggregation of secondary effects, fatality estimates for over 2035 events have been re-examined from 1900-2013. In addition, 99 of these events have detailed data for individual cities and towns or have been reconstructed to create a death rate as a percentage of population. Many historical isoseismal maps and macroseismic intensity datapoint surveys collected globally have been digitised and modelled, covering around 1353 of these 2035 fatal events, to include an estimate of the population, occupancy and socioeconomic climate at the time of the event at each intensity bracket. In addition, 1651 events without fatalities but causing damage have been examined in the same way. Socioeconomic and engineering indices such as HDI and building vulnerability have been produced at the country level and state/province level, leading to a dataset allowing regressions using not only a static view of risk but also allowing for the change in the socioeconomic climate between earthquake events. This means that a year 1920 event in a country will not simply be

  28. Rapid exposure and loss estimates for the May 12, 2008 Mw 7.9 Wenchuan earthquake provided by the U.S. Geological Survey's PAGER system

    USGS Publications Warehouse

    Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.

    2008-01-01

    One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by governments, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manual, revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.

  29. Trends in global earthquake loss

    NASA Astrophysics Data System (ADS)

    Arnst, Isabel; Wenzel, Friedemann; Daniell, James

    2016-04-01

    Based on the CATDAT damage and loss database, we analyse global trends of earthquake losses (in current values) and fatalities for the period between 1900 and 2015 from a statistical perspective. For this time period the data are complete for magnitudes above 6. First, we study the basic statistics of losses and find that losses below 10 billion US$ approximately satisfy a power law with an exponent of 1.7 for the cumulative distribution. Higher loss values are modelled with the Generalized Pareto Distribution (GPD). The 'transition' between power law and GPD is determined with the Mean Excess Function. We split the data set into periods of pre-1955 and post-1955 loss data, as the exposure in those periods is significantly different due to population growth. The Annual Average Loss (AAL) for direct damage from events below 10 billion US$ differs by a factor of 6, whereas the incorporation of the extreme loss events increases the AAL from 25 billion US$/yr to 30 billion US$/yr. Annual Average Deaths (AAD) show little (30%) difference for events below 6,000 fatalities, and AAD values of 19,000 and 26,000 deaths per year if extreme values are incorporated. With data on the global Gross Domestic Product (GDP), which reflects annual expenditures (consumption, investment, government spending), and on capital stock, we relate losses to the economic capacity of societies and find that GDP (in real terms) grows much faster than losses, so that the latter play a decreasing role given the growing prosperity of mankind. This reasoning does not necessarily apply on a regional scale. The main conclusions of the analysis are that (a) a correct projection of historical loss values to present-day US$ values is critical; (b) extreme value analysis is mandatory; (c) growing exposure is reflected in the AAL and AAD results for the pre- and post-1955 periods; (d) scaling loss values with global GDP data indicates that the relative size of losses, from a global perspective, decreases rapidly over time.
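
    The tail workflow named above, mean-excess diagnostics followed by a GPD fit to threshold exceedances, can be sketched as follows on synthetic heavy-tailed data standing in for the CATDAT losses; the threshold and all numbers are illustrative.

```python
# Mean excess function and GPD tail fit on synthetic loss data.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
losses = rng.pareto(1.7, size=2000) * 0.5  # synthetic losses, bn US$

def mean_excess(x, u):
    """Empirical mean excess e(u) = E[X - u | X > u]."""
    exceed = x[x > u]
    return exceed.mean() - u if exceed.size else np.nan

for u in (1.0, 2.0, 5.0):
    print(f"e({u}) = {mean_excess(losses, u):.2f}")

u = 5.0  # threshold where the mean-excess diagnostics suggest GPD behaviour
shape, loc, scale = genpareto.fit(losses[losses > u] - u, floc=0.0)
print(f"GPD fit above {u}: shape={shape:.2f}, scale={scale:.2f}")
```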

  30. Origin of Human Losses due to the Emilia Romagna, Italy, M5.9 Earthquake of 20 May 2012 and their Estimate in Real Time

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2012-12-01

    Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues concerning the influence of error sources. If the final observations and real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake in the Emilia Romagna region on May 20th, the real-time epicenter estimates of the GFZ and the USGS differed from the ultimate location by the INGV by 6 and 9 km, respectively. Fatalities estimated within an hour of the earthquake by the loss estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the ultimate epicenter released by INGV had been used. These four numbers, all being small, do not differ statistically. Thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of INGV reported intensities of I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that the calculations in the 17 instances with significant differences were too high on average by one unit. By assuming higher than average attenuation, within standard bounds for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher than average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to 7 reported ones. Thus, attenuation in this case decreased the discrepancy between estimated and reported deaths by approximately a factor of two. The source of the fatalities is

  31. Pan-European Seismic Risk Assessment: A proof of concept using the Earthquake Loss Estimation Routine (ELER)

    NASA Astrophysics Data System (ADS)

    Corbane, Christina; Hancilar, Ufuk; Silva, Vitor; Ehrlich, Daniele; De Groeve, Tom

    2016-04-01

    One of the key objectives of the new EU civil protection mechanism is an enhanced understanding of the risks the EU is facing. Developing a European perspective may create significant opportunities for successfully combining resources toward the common objective of preventing and mitigating shared risks. Risk assessments and mapping represent the first step in these preventive efforts. The EU is facing an increasing number of natural disasters. Among them, earthquakes are the second deadliest after extreme temperatures. A better shared understanding of where seismic risk lies in the EU is useful for identifying which regions are most at risk and where more detailed seismic risk assessments are needed. In that scope, seismic risk assessment models at a pan-European level have great potential for providing an overview of the expected economic and human losses using a homogeneous quantitative approach and harmonized datasets. This study strives to demonstrate the feasibility of performing a probabilistic seismic risk assessment at a pan-European level with an open-access methodology and open datasets available across the EU. It also aims to highlight the challenges, the needs in datasets, and the information gaps for a consistent seismic risk assessment at the pan-European level. The study constitutes a "proof of concept" that can complement the information provided by Member States in their National Risk Assessments. Its main contribution lies in pooling open-access data from different sources into a homogeneous format, which could serve as baseline data for performing more in-depth risk assessments in Europe.

  32. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
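
    The structure of such a model can be sketched as a two-parameter fatality-rate function of shaking intensity applied to the population exposed in each intensity bin; the lognormal shape follows PAGER's published empirical approach, but theta and beta below are placeholders rather than calibrated country values.

```python
# Empirical fatality model sketch: lognormal fatality rate vs. intensity.
from math import log, sqrt, erf

def fatality_rate(mmi, theta=12.0, beta=0.2):
    """nu(S) = Phi(ln(S / theta) / beta), Phi the standard normal CDF."""
    return 0.5 * (1.0 + erf(log(mmi / theta) / (beta * sqrt(2.0))))

# Population exposed per intensity bin (invented numbers).
exposed = {6.0: 1_500_000, 7.0: 400_000, 8.0: 80_000, 9.0: 10_000}
deaths = sum(pop * fatality_rate(mmi) for mmi, pop in exposed.items())
print(f"expected fatalities (illustrative parameters): {deaths:.0f}")
```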

  33. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving the speed and accuracy of earthquake disaster loss estimation is one of the key factors in effective earthquake response and rescue. The presentation of exposure data via a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake losses associated with different seismic intensities and store them in a 30'' × 30'' grid format; this phase has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage, and calculating human losses. In the co-earthquake phase, loss estimation has two stages: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field, and then using the seismic intensity field to extract loss statistics from the pre-calculated estimation data. This yields the final loss estimation results. The method is validated against four actual earthquakes that occurred in China. It not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which is effective in aiding earthquake emergency response and rescue. Additionally, the related pre-calculated earthquake loss estimation data for China can serve disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
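
    A toy version of the two-phase flow, with a handful of grid cells and a circular stand-in for the theoretical isoseismal map (in practice the cells number in the millions and the intensity field comes from a seismological model):

```python
# Two-phase loss estimation: pre-computed per-cell losses, event-time lookup.
import math

# Pre-earthquake phase: per-cell losses pre-calculated for each intensity.
grid = {  # cell id: ((x, y) in km, {intensity: pre-computed loss, M CNY})
    "c1": ((0, 0), {6: 1.0, 7: 8.0, 8: 40.0}),
    "c2": ((20, 0), {6: 0.5, 7: 4.0, 8: 20.0}),
    "c3": ((50, 0), {6: 0.2, 7: 1.5, 8: 7.0}),
}

def intensity_at(dist_km):
    """Toy isoseismal field: intensity 8 at the epicentre, decaying outward."""
    return max(8 - int(dist_km // 20), 0)

# Co-earthquake phase: epicentre arrives; look up and sum the stored losses.
ex, ey = 0.0, 0.0
total = sum(loss_by_i.get(intensity_at(math.hypot(x - ex, y - ey)), 0.0)
            for (x, y), loss_by_i in grid.values())
print(f"estimated loss: {total:.1f} M (illustrative)")
```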

  14. Spatial correlation of probabilistic earthquake ground motion and loss

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.

    2001-01-01

    Spatial correlation of annual earthquake ground motions and losses can be used to estimate the variance of annual losses to a portfolio of properties exposed to earthquakes. A direct method is described for calculating the spatial correlation of earthquake ground motions and losses. Calculations for the direct method can be carried out using either numerical quadrature or a discrete, matrix-based approach. Numerical results for this method are compared with those calculated from a simple Monte Carlo simulation. Spatial correlation of ground motion and loss is induced by the systematic attenuation of ground motion with distance from the source, by common site conditions, and by the finite length of fault ruptures. Spatial correlation is also strongly dependent on the partitioning of the variability, given an event, into interevent and intraevent components. Intraevent variability reduces the spatial correlation of losses; interevent variability increases it. The higher the spatial correlation, the larger the variance in losses to a portfolio, and the more likely extreme values become. This result underscores the importance of accurately determining the relative magnitudes of intraevent and interevent variability in ground-motion studies, because of their strong impact on estimates of earthquake losses to a portfolio. The direct method offers an alternative to simulation for calculating the variance of losses to a portfolio, which may reduce the amount of calculation required.
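
    The effect of the interevent/intraevent split on portfolio variance can be seen with a small Monte Carlo experiment in the spirit of the comparison mentioned above. Everything here (site count, median ground motion, the toy loss function, and the total sigma of 0.6 ln-units) is an illustrative assumption; the point is only the qualitative behavior, namely that a larger interevent share inflates the standard deviation of portfolio loss.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_events = 100, 20_000
total_sigma = 0.6          # total ln-unit ground-motion variability (illustrative)

def portfolio_std(tau_frac):
    """Std of simulated per-event portfolio loss when a fraction `tau_frac`
    of the ground-motion variance is interevent (shared by all sites)."""
    tau = total_sigma * np.sqrt(tau_frac)          # interevent std
    phi = total_sigma * np.sqrt(1.0 - tau_frac)    # intraevent std
    eta = rng.normal(0.0, tau, size=(n_events, 1))        # one draw per event
    eps = rng.normal(0.0, phi, size=(n_events, n_sites))  # one draw per site
    gm = np.exp(np.log(0.2) + eta + eps)           # ground motion, median 0.2 g
    loss = np.clip(gm - 0.15, 0.0, None)           # toy site loss function
    return loss.sum(axis=1).std()

for f in (0.0, 0.5, 1.0):
    print(f"interevent share {f:.1f}: portfolio loss std {portfolio_std(f):.2f}")
```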

  15. Earthquake interdependence and insurance loss modeling

    NASA Astrophysics Data System (ADS)

    Muir Wood, R.

    2005-12-01

    Probabilistic catastrophe loss modeling generally assumes that earthquakes are independent events that occur far enough apart in time that damage from one event is fully restored before another earthquake occurs. While time dependence and cascade fault rupturing are today standard elements of the earthquake hazard engine, in the next generation of catastrophe loss models one can expect to find a more comprehensive range of earthquake interdependence represented in a full simulation modeling environment. Such behavior includes the ways in which earthquakes relate to one another in both space and time (including foreshock, aftershock, and triggered-mainshock distinctions) and the damage that can be predicted from overlapping damage fields, as related to the length of time for reconstruction that has elapsed between events. For insurance purposes, losses are framed by the 168-hour clause for classifying losses as falling within the same 'event' for reinsurance recoveries, as well as by the annual insurance contract. The understanding of the ways in which stress changes associated with fault rupture affect the probabilities of earthquakes on surrounding faults has also expanded the predictability of potential earthquake sequences, and has highlighted the potential to identify locations where, for some time window, risk can be discounted. While it can be illuminating to explore the loss and insurance implications of the patterns of historical earthquake occurrence along the Nankaido subduction zone of southern Japan, in New Madrid in 1811-1812, or in Nevada in 1954, the sequences to be expected in the future are unlikely to have historical precedent in the region in which they form.
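
    The 168-hour clause mentioned above is, computationally, a rule for grouping shocks into reinsurance 'events'. A minimal sketch, assuming a greedy grouping of loss-generating shocks into 7-day windows (a simplification of actual contract wording), could look like this:

```python
from datetime import datetime, timedelta

# Toy (time, loss) records; the 168-hour clause groups shocks whose losses
# fall within one 7-day window into a single reinsurance "event".
shocks = [
    (datetime(2011, 3, 9, 11), 1.2),     # foreshock
    (datetime(2011, 3, 11, 14), 210.0),  # mainshock, same event window
    (datetime(2011, 3, 20, 6), 4.0),     # later aftershock, separate event
]

def group_events(shocks, window=timedelta(hours=168)):
    """Greedy grouping: a new event window opens when a shock falls
    outside the current window (a simplification of contract wording)."""
    events, start, total = [], None, 0.0
    for t, loss in sorted(shocks):
        if start is None or t - start > window:
            if start is not None:
                events.append(total)
            start, total = t, 0.0
        total += loss
    events.append(total)
    return events

print(group_events(shocks))  # -> [211.2, 4.0]
```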

  16. Rapid estimation of the economic consequences of global earthquakes

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2011-01-01

    The U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, operational since mid-2007, rapidly estimates the most affected locations and the population exposure at different levels of shaking intensity. The PAGER system has significantly improved the way aid agencies determine the scale of response needed in the aftermath of an earthquake. For example, the PAGER exposure estimates provided reasonably accurate assessments of the scale and spatial extent of the damage and losses following the 2008 Wenchuan earthquake (Mw 7.9) in China, the 2009 L'Aquila earthquake (Mw 6.3) in Italy, the 2010 Haiti earthquake (Mw 7.0), and the 2010 Chile earthquake (Mw 8.8). Nevertheless, some engineering and seismological expertise is often required to digest PAGER's exposure estimate and turn it into estimated fatalities and economic losses. This has been the focus of PAGER's most recent development. With the new loss-estimation component of the PAGER system, it is now possible to produce rapid estimates of expected fatalities for global earthquakes (Jaiswal and others, 2009). While an estimate of earthquake fatalities is a fundamental indicator of potential human consequences in developing countries (for example, Iran, Pakistan, Haiti, Peru, and many others), economic consequences often drive the responses in much of the developed world (for example, New Zealand, the United States, and Chile), where the improved structural behavior of seismically resistant buildings significantly reduces earthquake casualties. Rapid availability of estimates of both fatalities and economic losses can be a valuable resource. The total time needed to determine the actual scope of an earthquake disaster and to respond effectively varies from country to country; it can take days or sometimes weeks before the damage and consequences of a disaster can be understood both socially and economically. The objective of the U.S. Geological Survey's PAGER system is

  17. Ten Years of Real-Time Earthquake Loss Alerts

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2013-12-01

    In order of priority, the most important parameters of an earthquake disaster are: the number of fatalities, the number of injured, the mean damage as a function of settlement, and the expected intensity of shaking at critical facilities. The requirements for calculating these parameters in real time are: 1) availability of reliable earthquake source parameters within minutes; 2) capability of calculating expected intensities of strong ground shaking; 3) data sets on population distribution and the condition of the building stock as a function of settlement; 4) data on the locations of critical facilities; 5) verified methods of calculating damage and losses; 6) personnel available on a 24/7 basis to perform and review these calculations. Three services are available that distribute information about the likely consequences of earthquakes within about half an hour of the event. Two of these calculate losses; one gives only general information. Although much progress has been made during the last ten years in improving the data sets and calculation methods, much remains to be done. The data sets are only first-order approximations and the methods need refinement. Nevertheless, the quantitative loss estimates issued in real time after damaging earthquakes are generally correct in the sense that they allow distinguishing disastrous from inconsequential events.

  18. Strategies for rapid global earthquake impact estimation: the Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, D.J.

    2013-01-01

    This chapter summarizes the state-of-the-art for rapid earthquake impact estimation. It details the needs and challenges associated with quick estimation of earthquake losses following global earthquakes, and provides a brief literature review of various approaches that have been used in the past. With this background, the chapter introduces the operational earthquake loss estimation system developed by the U.S. Geological Survey (USGS) known as PAGER (for Prompt Assessment of Global Earthquakes for Response). It also details some of the ongoing developments of PAGER’s loss estimation models to better supplement the operational empirical models, and to produce value-added web content for a variety of PAGER users.

  19. Hypocenter Estimation of Induced Earthquakes in Groningen

    NASA Astrophysics Data System (ADS)

    Spetzler, Jesper; Dost, Bernard

    2017-01-01

    Induced earthquakes due to gas production have taken place in the province of Groningen in the northeast of the Netherlands since 1986. In the first years of seismicity, a sparse seismological network with large station distances from the seismogenic area in Groningen was used, and the location of induced earthquakes was limited by the few, widely spaced stations. Recently, the station network has been extended significantly, and locating induced earthquakes in Groningen has become routine work, except for the depth estimation of the events. In the hypocenter method used for source location by the Royal Netherlands Meteorological Institute (KNMI), the depth of induced earthquakes is by default set to 3 km, the average depth of the gas reservoir. Alternatively, a differential P-wave travel-time approach to source location is applied to data recorded by the extended network. The epicenters and depths of 87 induced earthquakes from 2014 to July 2016 have been estimated. The newly estimated epicenters are close to the induced-earthquake locations from the current method applied by the KNMI. It is observed that most induced earthquakes take place at reservoir level. Several events of similar magnitude are found near a brittle anhydrite layer in the overburden, which consists mainly of rock salt evaporites.
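
    A differential P-wave travel-time location of the kind described above can be sketched as a grid search that minimizes the misfit of station-pair time differences; because differential times cancel the unknown origin time, the depth trade-off is constrained by the network geometry alone. The station layout, uniform 4 km/s velocity, and true source below are invented for illustration and are far simpler than the Groningen velocity model.

```python
import numpy as np

vp = 4.0  # km/s, uniform P velocity for illustration
stations = np.array([[0.0, 0.0, 0.0], [5.0, 1.0, 0.0],
                     [2.0, 6.0, 0.0], [-3.0, 4.0, 0.0]])  # x, y, z in km
true_src = np.array([1.0, 2.0, 3.0])                      # 3 km deep

def ptimes(src):
    """P travel times from a trial source to all stations."""
    return np.linalg.norm(stations - src, axis=1) / vp

# "Observed" differential times between all station pairs (origin time cancels).
t = ptimes(true_src)
i, j = np.triu_indices(len(stations), k=1)
dt_obs = t[i] - t[j]

# Grid search over trial hypocenters, including depth.
best, best_cost = None, np.inf
for x in np.linspace(-2, 4, 25):
    for y in np.linspace(0, 6, 25):
        for z in np.linspace(0, 5, 21):
            tt = ptimes(np.array([x, y, z]))
            cost = np.sum((tt[i] - tt[j] - dt_obs) ** 2)
            if cost < best_cost:
                best, best_cost = (x, y, z), cost
print("estimated hypocenter:", best)  # recovers (1.0, 2.0, 3.0)
```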

  20. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J.; Petersen, M.D.

    2004-01-01

    The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes of similar magnitude. The results show promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
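
    Fitting a gamma distribution to loss data of this kind is straightforward with standard tools. The sketch below uses synthetic loss ratios in place of the proprietary insurance data, fits a gamma with the location pinned at zero, and evaluates a conditional exceedance probability of the type the paper regresses against ground motion; the shape and scale values are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for insured loss ratios in one ZIP code; the true
# gamma parameters used to generate them are illustrative assumptions.
losses = rng.gamma(shape=0.5, scale=0.04, size=500)

# Fit a two-parameter gamma (location pinned at zero, as for loss ratios).
shape, loc, scale = stats.gamma.fit(losses, floc=0.0)
print(f"fitted shape={shape:.2f}, scale={scale:.3f}")

# Conditional probability of exceeding a 10% loss ratio given the fit:
# the kind of quantity regressed against ShakeMap ground motion.
print("P(loss ratio > 0.10) =", stats.gamma.sf(0.10, shape, loc, scale))
```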

  1. Development of fragility functions to estimate homelessness after an earthquake

    NASA Astrophysics Data System (ADS)

    Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    used to estimate homelessness as a function of information that is readily available immediately after an earthquake. These fragility functions could be used by relief agencies and governments to provide an initial assessment of the need for allocation of emergency shelter immediately after an earthquake. Daniell, J.E. (2014). The development of socio-economic fragility functions for use in worldwide rapid earthquake loss estimation procedures, Ph.D. thesis (in press), Karlsruhe, Germany. Daniell, J.E., Khazai, B., Wenzel, F., & Vervaeck, A. (2011). The CATDAT damaging earthquakes database. Natural Hazards and Earth System Science, 11(8), 2235-2251. doi:10.5194/nhess-11-2235-2011. Daniell, J.E., Wenzel, F., & Vervaeck, A. (2012). "The normalisation of socio-economic losses from historic worldwide earthquakes from 1900 to 2012", 15th WCEE, Lisbon, Portugal, Paper No. 2027. Jaiswal, K., & Wald, D. (2010). An empirical model for global earthquake fatality estimation. Earthquake Spectra, 26(4), 1017-1037. doi:10.1193/1.3480331.

  2. Creating a Global Building Inventory for Earthquake Loss Assessment and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2008-01-01

    Earthquakes have claimed approximately 8 million lives over the last 2,000 years (Dunbar, Lockridge and others, 1992), and fatality rates are likely to continue to rise with increased population and urbanization of global settlements, especially in developing countries. More than 75% of earthquake-related human casualties are caused by the collapse of buildings or structures (Coburn and Spence, 2002). It is disheartening to note that large fractions of the world's population still reside in informal, poorly constructed and non-engineered dwellings with high susceptibility to collapse during earthquakes. Moreover, with increasing urbanization, half of the world's population now lives in urban areas (United Nations, 2001), and half of these urban centers are located in earthquake-prone regions (Bilham, 2004). The poor performance of most building stocks during earthquakes remains a primary societal concern. However, despite this dark history and bleaker future trends, there are no comprehensive global building inventories of sufficient quality and coverage to adequately address and characterize future earthquake losses. Such an inventory is vital both for earthquake loss mitigation and for earthquake disaster response purposes. While the latter purpose is the motivation of this work, we hope that the global building inventory database described herein will find widespread use for other mitigation efforts as well. For a real-time earthquake impact alert system, such as the U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) (Wald, Earle and others, 2006), we seek to rapidly evaluate potential casualties associated with earthquake ground shaking for any region of the world. The casualty estimation is based primarily on (1) rapid estimation of the ground shaking hazard, (2) aggregating the population exposure within different building types, and (3) estimating the casualties from the collapse of vulnerable buildings. Thus, the
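
    The three-step casualty chain sketched above can be written down compactly. In the toy calculation below, the building mix, collapse probabilities, and fatality rates given collapse are invented placeholders, not PAGER values; the structure (exposure times collapse probability times fatality rate, summed over building types) is the point.

```python
# Minimal sketch of the three-step casualty chain described above;
# all rates and shares are invented placeholders, not PAGER values.
exposed_population = 100_000  # people exposed at a given shaking intensity

building_mix = {              # share of population per construction type
    "adobe": 0.2,
    "unreinforced_masonry": 0.3,
    "reinforced_concrete": 0.5,
}
p_collapse = {"adobe": 0.10, "unreinforced_masonry": 0.05,
              "reinforced_concrete": 0.01}        # collapse probability
fatality_given_collapse = {"adobe": 0.06, "unreinforced_masonry": 0.10,
                           "reinforced_concrete": 0.15}

casualties = sum(
    exposed_population * share * p_collapse[b] * fatality_given_collapse[b]
    for b, share in building_mix.items()
)
print(f"expected deaths: {casualties:.0f}")
```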

  3. An approximate estimate of the earthquake risk in the United Arab Emirates

    NASA Astrophysics Data System (ADS)

    Al-Homoud, A.; Wyss, M.

    2003-04-01

    The UAE is not as safe from earthquake disasters as often assumed. The magnitude 5.1 earthquake of 11 March 2002 in Fujairah Masafi demonstrated that earthquakes can occur in the UAE. The threat of large earthquakes in southern Iran is well known to seismologists, but people generally do not realize that the international expert team that assessed the earthquake hazard for the entire world placed the UAE in the same class as many parts of Iran and Turkey, as well as California. There is no question that large earthquakes will occur again in southern Iran and that moderate earthquakes will happen again in the UAE. The only question is: when will they happen? From the history of earthquakes, we have an understanding, although limited to the last few decades, of what size earthquakes may be expected. For this reason, it is timely to estimate the probable consequences in the UAE of a large to great earthquake in southern Iran and of a moderate earthquake in the UAE itself. We propose to estimate the number of possible injuries, fatalities, and the financial loss in building value that might occur in the UAE in several likely future earthquakes. This estimate will be based on scenario earthquakes with positions and magnitudes determined by us from seismic hazard maps. Scenario earthquakes are events that are very likely to occur in the future, because similar ones have happened in the past. The time when they may happen will not be estimated in this work. The input for calculating the earthquake risk in the UAE will be the census figures for the population and the estimated properties of the building stock. WAPMERR is the only research group capable of making these estimates for the UAE. The deliverables will be a scientific manuscript to be submitted to a reviewed journal, which will contain tables and figures showing the estimated numbers of (a) people killed and (b) people injured (slightly and seriously counted separately), (c) buildings

  4. Earthquakes trigger the loss of groundwater biodiversity

    PubMed Central

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-01-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and “ecosystem engineers”, we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems. PMID:25182013

  5. Earthquakes trigger the loss of groundwater biodiversity.

    PubMed

    Galassi, Diana M P; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-09-03

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and "ecosystem engineers", we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.

  6. Earthquakes trigger the loss of groundwater biodiversity

    NASA Astrophysics Data System (ADS)

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; di Cioccio, Alessia; di Lorenzo, Tiziana; Petitta, Marco; di Carlo, Piero

    2014-09-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and ``ecosystem engineers'', we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.

  7. Development of Rapid Earthquake Loss Assessment Methodologies for Euro-Med Region

    NASA Astrophysics Data System (ADS)

    Erdik, M.

    2009-04-01

    For almost-real-time estimation of the ground shaking and losses after a major earthquake in the Euro-Mediterranean region, the JRA-3 component of the EU project entitled "Network of Research Infrastructures for European Seismology, NERIES" foresees: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base, supported, if and when possible, by the estimation of fault rupture parameters from rapid inversion of data from on-line regional broadband stations. 2. Estimation of the spatial distribution of selected ground motion parameters at engineering bedrock through region-specific ground motion attenuation relationships and/or actual physical simulation of ground motion. 3. Estimation of the spatial distribution of selected site-specific ground motion parameters using a regional geology (or urban geotechnical information) data base and appropriate amplification models. 4. Estimation of the losses and uncertainties at various orders of sophistication (buildings, casualties). The main objective of the JRA-3 work package is to develop a methodology for real-time estimation of losses after a major earthquake in the Euro-Mediterranean region. The multi-level methodology being developed together with researchers from Imperial College, NORSAR and ETH-Zurich is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. A comprehensive methodology has been developed and the related software, ELER, is under preparation. The applications of the ELER software are presented in the two accompanying papers: 1. Regional Earthquake Shaking and Loss Estimation; 2. Urban Earthquake Shaking and Loss Assessment.

  8. Real-time earthquake shake, damage, and loss mapping for Istanbul metropolitan area

    NASA Astrophysics Data System (ADS)

    Zülfikar, A. Can; Fercan, N. Özge Zülfikar; Tunç, Süleyman; Erdik, Mustafa

    2017-01-01

    The past devastating earthquakes in densely populated urban centers, such as the 1994 Northridge; 1995 Kobe; 1999 series of Kocaeli, Düzce, and Athens; and 2011 Van-Erciş events, showed that substantial social and economic losses can be expected. Previous studies indicate that inadequate emergency response can increase the number of casualties by up to a factor of 10, which underlines the need for research on rapid estimation of earthquake shaking, damage, and loss. Casualties in urban areas can be reduced immediately following an earthquake if the location and severity of damage can be rapidly assessed through information from rapid response systems. In this context, a research project (TUBITAK-109M734) titled "Real-time Information of Earthquake Shaking, Damage, and Losses for Target Cities of Thessaloniki and Istanbul" was conducted during 2011-2014 to establish the rapid estimation of ground motion shaking and related earthquake damages and casualties for the target cities. In the present study, the application to the Istanbul metropolitan area is presented. In order to fulfill this objective, the earthquake hazard and risk assessment methodology known as the Earthquake Loss Estimation Routine, which was developed for the Euro-Mediterranean region within the Network of Research Infrastructures for European Seismology EC-FP6 project, was used. The current application to the Istanbul metropolitan area provides real-time ground motion information obtained by strong motion stations distributed throughout the densely populated areas of the city. According to this ground motion information, building damage estimation is computed by using a grid-based building inventory, and the related loss is then estimated. Through this application, the rapidly estimated information enables public and private emergency management authorities to take action and allocate and prioritize resources to minimize the casualties in urban areas during immediate post-earthquake periods. Moreover, it

  9. A Method for Estimation of Death Tolls in Disastrous Earthquake

    NASA Astrophysics Data System (ADS)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls caused by disastrous earthquakes are among the most important items of earthquake damage and loss. If we can precisely estimate the potential tolls and the distribution of fatalities in individual districts as soon as an earthquake occurs, it not only makes emergency programs and disaster management more effective but also supplies critical information for planning and managing the disaster and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps, and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, the types and usage habits of buildings, the distribution of population, and socio-economic situations, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is at present the greatest in the world. In the meantime, it is easy to obtain complete seismic data from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute after an earthquake has happened. Therefore, it becomes possible to estimate death tolls caused by an earthquake in Taiwan based on this preliminary information. Firstly, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give the PGA Index for each individual seismic station, according to the mainshock data of the Chi-Chi earthquake. To supply the distribution of iso-seismic intensity contours in all districts, and to resolve the problem of districts that contain no seismic station, the PGA Index and the geographical coordinates of the individual seismic stations are interpolated using the Kriging Interpolation Method and GIS software. The population density depends on
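
    The PGA Index construction and its interpolation into station-less districts can be sketched as follows. The station coordinates and accelerations are invented, and scipy's linear griddata interpolation stands in for the kriging used in the paper, purely for brevity.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical stations (lon, lat) and three-component peak accelerations (gal).
stations = np.array([[120.6, 23.8], [120.8, 24.0],
                     [121.0, 23.7], [120.7, 23.5]])
pga_3comp = np.array([[120, 150, 90],
                      [300, 280, 210],
                      [80, 95, 60],
                      [160, 170, 120]])

# PGA Index: arithmetic mean of the three components at each station.
pga_index = pga_3comp.mean(axis=1)

# Interpolate onto district centroids that have no station of their own
# (linear interpolation here; the paper uses kriging).
districts = np.array([[120.75, 23.8], [120.85, 23.8]])
print(griddata(stations, pga_index, districts, method="linear"))
```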

  10. Future Earth: Reducing Loss By Automating Response to Earthquake Shaking

    NASA Astrophysics Data System (ADS)

    Allen, R. M.

    2014-12-01

    Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet, the same interconnected infrastructure also allows us to rapidly detect earthquakes as they begin, and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Los Angelinos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts, but could also be used to collect data and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake detectors. As our development of the technology continues, we can anticipate ever-more automated responses to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US. We will illustrate how smartphones can contribute to the system. Finally, we will review applications of the information to reduce future losses.

  11. Estimation of earthquake risk curves of physical building damage

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias; Janouschkowetz, Silke; Fischer, Thomas; Simon, Christian

    2014-05-01

    In this study, a new approach to quantifying seismic risk is presented. Earthquake risk curves for the number of buildings with a defined physical damage state are estimated for South Africa. The physical damage states are defined according to the current European macroseismic intensity scale (EMS-98). The advantage of this kind of risk curve is that its plausibility can be checked more easily than that of other types. The earthquake risk curve for physical building damage can be compared with historical damage and the corresponding empirical return periods. The number of damaged buildings from historical events is generally explored and documented in more detail than the corresponding monetary losses; the latter are also influenced by changing economic conditions, such as inflation and price hikes. Furthermore, the monetary risk curve can be derived from the developed risk curve of physical building damage. The earthquake risk curve can also be used for the validation of underlying sub-models such as the hazard and vulnerability modules.
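
    A risk curve of this type can be read directly off a long stochastic simulation: count, for each synthetic year, the buildings reaching the chosen EMS-98 damage grade, then report annual exceedance probabilities (or their reciprocal return periods) for a set of thresholds. The event rate and damage-size distribution below are invented placeholders, not a calibrated South African model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 50_000

# Toy hazard/vulnerability model: a Poisson number of damaging events per
# year, each damaging a lognormal number of buildings to at least the
# chosen EMS-98 grade. All parameters are illustrative assumptions.
events_per_year = rng.poisson(0.3, size=n_years)
damaged = np.array([
    rng.lognormal(mean=4.0, sigma=1.5, size=k).sum() if k else 0.0
    for k in events_per_year
])

# Empirical risk curve: annual exceedance probability per damage threshold.
for threshold in (10, 100, 1000):
    p = (damaged >= threshold).mean()
    rp = 1.0 / p if p > 0 else float("inf")
    print(f">= {threshold:5d} buildings damaged: "
          f"annual p = {p:.4f}, return period ~ {rp:.0f} yr")
```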

  12. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically derived set of ShakeMap hazard outputs. We illustrate two example uses of EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes; and (2) the influence of time of day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time of day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
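
    The core overlay behind EXPO-CAT, binning population by the shaking intensity of its grid cell, reduces to a weighted histogram. The sketch below uses randomly generated intensity and population grids as stand-ins for a ShakeMap and the global population database.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for a ShakeMap intensity grid and a co-registered population
# grid; both are flattened to one value per grid cell.
mmi = rng.uniform(3.0, 9.5, size=10_000)           # per-cell intensity
population = rng.integers(0, 2_000, size=10_000)   # per-cell population

# Population exposure per intensity bin (MMI V through IX).
bins = np.arange(4.5, 10.5)
exposure, _ = np.histogram(mmi, bins=bins, weights=population)
for lo, count in zip(bins[:-1], exposure):
    print(f"MMI {lo + 0.5:.0f}: {count:,.0f} people exposed")
```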

  13. Earthquake detection by new motion estimation algorithm in video processing

    NASA Astrophysics Data System (ADS)

    Hong, Chien-Shiang; Wang, Chuen-Ching; Tai, Shen-Chuan; Chen, Ji-Feng; Wang, Chung-Yao

    2011-01-01

    As increasing urbanization takes place worldwide, earthquake hazards pose serious threats to lives and property in urban areas. A practical earthquake prediction method appears to be far from realization. Moreover, traditional instruments for earthquake detection have the disadvantages of high cost and large size. To address these problems, this paper presents a new method that can detect earthquake intensity using a video capture device. The method is based on a newly proposed motion vector algorithm that uses simple but effective techniques to immediately calculate the acceleration of a predefined target object. By estimating the motion vector variation, the movement distance of the predefined target object can be computed, and the earthquake amplitude can therefore be derived. The effectiveness of the proposed scheme is demonstrated in a series of experimental simulations, which show that the scheme successfully detects earthquake occurrence and identifies the earthquake amplitude from video streams.
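
    Motion-vector estimation of the kind the paper builds on is commonly implemented as block matching: the displacement of a target block between frames is the shift minimizing the sum of absolute differences (SAD). The sketch below uses synthetic frames with a known shift; the block size and search window are arbitrary choices, and the paper's specific algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))  # known motion

y0, x0, b = 30, 30, 8                      # target block position and size
block = frame0[y0:y0 + b, x0:x0 + b]

# Exhaustive block matching over a small search window.
best, best_sad = (0, 0), np.inf
for dy in range(-5, 6):
    for dx in range(-5, 6):
        cand = frame1[y0 + dy:y0 + dy + b, x0 + dx:x0 + dx + b]
        sad = np.abs(cand - block).sum()   # sum of absolute differences
        if sad < best_sad:
            best, best_sad = (dy, dx), sad
print("estimated motion vector:", best)    # expect (2, -3)
```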

  14. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  15. Application of the loss estimation tool QLARM in Algeria

    NASA Astrophysics Data System (ADS)

    Rosset, P.; Trendafiloski, G.; Yelles, K.; Semmane, F.; Wyss, M.

    2009-04-01

    During the last six years, WAPMERR has used Quakeloss for real-time loss estimation for more than 440 earthquakes worldwide. Loss reports, posted with an average delay of 30 minutes, include a map showing the average degree of damage in settlements near the epicenter, the total number of fatalities, the total number of injured, and a detailed list of casualties and damage rates in these settlements. After the M6.7 Boumerdes earthquake in 2003, we reported 1690-3660 fatalities; the official death toll was around 2270. Since the El Asnam earthquake, seismic events in Algeria have killed about 6,000 people, injured more than 20,000, and left more than 300,000 homeless. On average, one earthquake with the potential to kill people (M>5.4) happens every three years in Algeria. In the framework of a collaborative project between WAPMERR and CRAAG, we propose to calibrate our new loss estimation tool QLARM (qlarm.ethz.ch) and estimate human losses for likely future earthquakes in Algeria. The parameters needed for this calculation are: (1) ground motion relations and soil amplification factors; (2) the distribution of the building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98), as given in the PAGER database; and (3) population by settlement. Considering the resolution of the available data, we construct (1) point city models for cases where only summary data for the city are available and (2) discrete city models when data regarding city districts are available. Damage and losses are calculated using (a) vulnerability models pertinent to EMS-98 vulnerability classes, previously validated against the existing ones in Algeria (Tipaza and Chlef), (b) building collapse models pertinent to Algeria as given in the World Housing Encyclopedia, and (c) casualty matrices pertinent to EMS-98 vulnerability classes assembled from HAZUS casualty rates. As a first trial, we simulated the 2003 Boumerdes earthquake to check the validity of the proposed

  16. Precise estimation of repeating earthquake moment: Example from parkfield, california

    USGS Publications Warehouse

    Rubinstein, J.L.; Ellsworth, W.L.

    2010-01-01

    We offer a new method for estimating the relative size of repeating earthquakes using the singular value decomposition (SVD). This method takes advantage of the highly coherent waveforms of repeating earthquakes and arrives at far more precise and accurate descriptions of earthquake size than standard catalog techniques allow. We demonstrate that uncertainty in relative moment estimates is reduced from ±75% for the standard coda-duration techniques employed by the network to ±6.6% when the SVD method is used. This implies that a single-station estimate of moment using the SVD method has far less uncertainty than whole-network estimates of moment based on coda duration. The SVD method offers a significant improvement in our ability to describe the size of repeating earthquakes and thus an opportunity to better understand how they accommodate slip as a function of time.
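
    The essence of the SVD approach, as described, is that a family of repeating events shares one waveform shape, so the data matrix is close to rank one and each event's loading on the first singular vector measures its relative size. A minimal sketch with synthetic waveforms (the pulse shape, amplitudes, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 4, 400)
shape = np.sin(2 * np.pi * 2 * t) * np.exp(-t)   # common source pulse
true_amp = np.array([1.0, 2.5, 0.7, 1.4])        # relative event sizes

# Data matrix: one highly coherent waveform per event, plus noise.
data = np.outer(true_amp, shape) + 0.01 * rng.standard_normal((4, 400))

# First singular vector captures the common waveform; each event's
# projection onto it measures relative amplitude (hence relative moment).
U, s, Vt = np.linalg.svd(data, full_matrices=False)
rel = U[:, 0] * s[0]
rel /= rel[0]                     # sizes relative to the first event
print("recovered relative amplitudes:", np.round(np.abs(rel), 3))
```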

  17. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  18. Seismic Risk Assessment and Loss Estimation for Tbilisi City

    NASA Astrophysics Data System (ADS)

    Tsereteli, Nino; Alania, Victor; Varazanashvili, Otar; Gugeshashvili, Tengiz; Arabidze, Vakhtang; Arevadze, Nika; Tsereteli, Emili; Gaphrindashvili, Giorgi; Gventcadze, Alexander; Goguadze, Nino; Vephkhvadze, Sophio

    2013-04-01

    The proper assessment of seismic risk is of crucial importance for protecting society and for sustainable urban economic development, as it is an essential part of seismic risk reduction. Estimating seismic risk and losses is a complicated task: there is always a deficiency of knowledge on the real seismic hazard, local site effects, the inventory of elements at risk, and infrastructure vulnerability, especially in developing countries. Recently, great efforts were made in the framework of the EMME (Earthquake Model for the Middle East Region) project, whose work packages WP1, WP2, WP3 and WP4 addressed gaps related to seismic hazard assessment and vulnerability analysis. Finally, in the framework of work package WP5, "City Scenario", additional work in this direction was carried out, including detailed investigation of the local site conditions and of the 3D geometry of the active fault beneath Tbilisi. For estimating economic losses, an algorithm was prepared that takes the compiled inventory into account. The long-term use of buildings is complex; it relates to their reliability and durability, and is captured by the concept of depreciation. Depreciation of an entire building is calculated by summing the products of the individual construction units' depreciation rates and the corresponding value of these units within the building. This method of calculation is based on the assumption that depreciation is proportional to the building's useful life. We used this methodology to create a matrix that provides a way to evaluate the depreciation rates of buildings of different types and construction periods and to determine their corresponding value. Finally, losses were estimated for shaking with 10%, 5% and 2% exceedance probability in 50 years. Losses resulting from a scenario earthquake (an event with the maximum possible magnitude) were also estimated.
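
    The depreciation bookkeeping described above is a value-weighted sum over construction units. A minimal sketch, with invented value shares and unit depreciation rates:

```python
# Building depreciation as the value-weighted sum of construction units'
# depreciation rates; the unit breakdown, value shares, and rates are
# invented examples, not the matrix developed in the study.
units = {
    # unit: (share of building value, depreciation rate after N years)
    "foundation":  (0.15, 0.20),
    "walls/frame": (0.40, 0.25),
    "roof":        (0.15, 0.50),
    "finishes":    (0.20, 0.60),
    "services":    (0.10, 0.70),
}

depreciation = sum(share * rate for share, rate in units.values())
print(f"building depreciation: {depreciation:.1%}")  # -> 39.5%
```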

  19. Building losses assessment for Lushan earthquake utilization multisource remote sensing data and GIS

    NASA Astrophysics Data System (ADS)

    Nie, Juan; Yang, Siquan; Fan, Yida; Wen, Qi; Xu, Feng; Li, Lingling

    2015-12-01

    On 20 April 2013, a catastrophic earthquake of magnitude 7.0 struck Lushan County, northwestern Sichuan Province, China. This event is known in China as the Lushan earthquake. The Lushan earthquake damaged many buildings, and the extent of building loss is one basis for emergency relief and reconstruction; thus, the building losses of the Lushan earthquake must be assessed. Remote sensing data and geographic information systems (GIS) can be employed for this assessment. The building loss assessment results for the Lushan earthquake disaster, obtained using multisource remote sensing data and GIS, are reported in this paper. The assessment results indicate that 3.2% of buildings in the affected areas completely collapsed, while 12% and 12.5% of buildings were heavily and slightly damaged, respectively. The completely collapsed, heavily damaged, and slightly damaged buildings were mainly located in Danling County, Hongya County, Lushan County, Mingshan County, Qionglai County, Tianquan County, and Yingjing County.

  20. Earthquake Loss Assessment for Post-2000 Buildings in Istanbul

    NASA Astrophysics Data System (ADS)

    Hancilar, Ufuk; Cakti, Eser; Sesetyan, Karin

    2016-04-01

    Current building inventory of Istanbul city, which was compiled by street surveys in 2008, consists of more than 1.2 million buildings. The inventory provides information on lateral-load carrying system, number of floors and construction year, where almost 200,000 buildings are reinforced concrete frame type structures built after 2000. These buildings are assumed to be designed based on the provisions of Turkish Earthquake Resistant Design Code (1998) and are tagged as high-code buildings. However, there are no empirical or analytical fragility functions associated with these types of buildings. In this study we perform a damage and economic loss assessment exercise focusing on the post-2000 building stock of Istanbul. Three M7.4 scenario earthquakes near the city represent the input ground motion. As for the fragility functions, those provided by Hancilar and Cakti (2015) for code complying reinforced concrete frames are used. The results are compared with the number of damaged buildings given in the loss assessment studies available in the literature wherein expert judgment based fragilities for post-2000 buildings were used.

  1. Benefits of multidisciplinary collaboration for earthquake casualty estimation models: recent case studies

    NASA Astrophysics Data System (ADS)

    So, E.

    2010-12-01

    Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Increasing our understanding of what contributes to casualties in earthquakes requires coordinated data-gathering efforts among disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multivariate outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering these complex physical pathways, loss models purely based on historic casualty data, or worse, rates derived from other countries, will be of very limited value. What is more, as the world's population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise, including engineering, public health, and medicine. Research is needed to find consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities and cities that most need them. Coupling the theories and findings from

  2. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits. DOI: 10.1193/1.3480331
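
    The model as described reduces to a small amount of arithmetic once the country parameters are known: a lognormal CDF gives the fatality rate at each intensity, and rates are multiplied by exposure and summed. In the sketch below, the parameters theta and beta and the exposure table are invented, not fitted PAGER values.

```python
from math import log, erf, sqrt

# Invented country parameters (NOT fitted PAGER values): theta is the
# intensity at which the fatality rate reaches 0.5, beta its spread.
theta, beta = 12.0, 0.25

def fatality_rate(intensity):
    """Two-parameter lognormal CDF of shaking intensity."""
    z = log(intensity / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical population exposed at each intensity level (MMI VI-IX),
# e.g. as produced by a ShakeMap/population overlay.
exposure = {6: 500_000, 7: 120_000, 8: 30_000, 9: 4_000}

deaths = sum(pop * fatality_rate(mmi) for mmi, pop in exposure.items())
print(f"estimated fatalities: {deaths:.0f}")
```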

  3. A cluster-based decision support system for estimating earthquake damage and casualties.

    PubMed

    Aleskerov, Fuad; Say, Arzu Iseri; Toker, Aysegül; Akin, H Levent; Altay, Gülay

    2005-09-01

    This paper describes a Decision Support System for Disaster Management (DSS-DM) to aid operational and strategic planning and policy-making for disaster mitigation and preparedness in a less-developed infrastructural context. Such contexts require a more flexible and robust system for fast prediction of damage and losses. The proposed system is specifically designed for earthquake scenarios, estimating the extent of human losses and injuries, as well as the need for temporary shelters. The DSS-DM uses a scenario approach to calculate the aforementioned parameters at the district and sub-district level at different earthquake intensities. The following system modules have been created: clusters (buildings) with respect to use; buildings with respect to construction typology; and estimations of damage to clusters, human losses and injuries, and the need for shelters. The paper not only examines the components of the DSS-DM, but also looks at its application in Besiktas municipality in the city of Istanbul, Turkey.

  4. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, it is difficult to accurately estimate earthquake magnitude in the early nucleation stage of an earthquake, because only a few stations have been triggered and the recorded seismic waveforms are short. One feasible method of measuring the size of an earthquake is to extract amplitude parameters from the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the rupture of the causative fault. Instead of adopting amplitudes in a fixed-length time window, which may underestimate the magnitude of large events, we propose a fast, robust, and saturation-free approach to estimating earthquake magnitudes. In this new method, the EEW system can initially give a lower-bound magnitude within a time window of a few seconds and then update the magnitude, without saturation, by extending the time window. Here we compared two kinds of time windows for measuring amplitudes: one is the pure P-wave time window (PTW); the other is the whole-wave time window after the P-wave arrival (WTW). Peak displacement amplitudes in the vertical component were measured in PTWs and WTWs of 1- to 10-s length, respectively. Linear regression analysis was implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude, using earthquake records from 1993 to 2012 with magnitudes greater than 5.5 and focal depths less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation; in addition, large uncertainties exist in the 1-s time window. Therefore, for magnitude estimation we suggest that the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTWs.
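
    The regression step described above, an empirical relation between peak displacement, hypocentral distance, and magnitude, can be sketched with ordinary least squares on synthetic data. The functional form log10(Pd) = a·M + b·log10(R) + c is a common choice in the peak-displacement literature, and the coefficients below are invented rather than the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
M = rng.uniform(5.5, 7.5, n)                 # magnitudes
R = rng.uniform(20, 150, n)                  # hypocentral distance, km

# Synthetic "observations" of peak displacement Pd from invented
# coefficients plus scatter (NOT the paper's regression results).
a_t, b_t, c_t = 0.8, -1.5, -3.0
logPd = a_t * M + b_t * np.log10(R) + c_t + 0.2 * rng.standard_normal(n)

# Least-squares fit of log10(Pd) = a*M + b*log10(R) + c.
G = np.column_stack([M, np.log10(R), np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(G, logPd, rcond=None)

def magnitude(pd_cm, r_km):
    """Invert the fitted relation for magnitude, as an EEW system would."""
    return (np.log10(pd_cm) - b * np.log10(r_km) - c) / a

print("fitted coefficients:", np.round([a, b, c], 2))
print("M estimate for Pd = 0.5 cm at 60 km:", round(magnitude(0.5, 60.0), 2))
```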

  5. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.

  6. Rapid Ice Mass Loss: Does It Have an Influence on Earthquake Occurrence in Southern Alaska?

    NASA Technical Reports Server (NTRS)

    Sauber, Jeanne M.

    2008-01-01

    The glaciers of southern Alaska are extensive, and many of them have undergone gigatons of ice wastage on time scales on the order of the seismic cycle. Since the ice loss occurs directly above a shallow main thrust zone associated with subduction of the Pacific-Yakutat plate beneath continental Alaska, the region between the Malaspina and Bering Glaciers is an excellent test site for evaluating the importance of recent ice wastage on earthquake faulting potential. We demonstrate the influence of cumulative glacial mass loss following the 1899 Yakataga earthquake (M=8.1) by using a two-dimensional finite element model with a simple representation of ice fluctuations to calculate the incremental stresses and the change in the fault stability margin (FSM) along the main thrust zone (MTZ) and on the surface. Along the MTZ, our results indicate a decrease in FSM between 1899 and the 1979 St. Elias earthquake (M=7.4) of 0.2-1.2 MPa over an 80 km region between the coast and the 1979 aftershock zone; at the surface, the estimated FSM change was larger but more localized to the lower reaches of glacial ablation zones. The ice-induced stresses were large enough, in theory, to promote the occurrence of shallow thrust earthquakes. To empirically test the influence of short-term ice fluctuations on fault stability, we compared the seismic rate from a reference background time period (1988-1992) against other time periods (1993-2006) with variable ice or tectonic change characteristics. We found that the frequency of small tectonic events in the Icy Bay region increased in 2002-2006 relative to the background seismic rate. We hypothesize that this was due to a significant increase in the rate of ice wastage in 2002-2006 rather than to the M=7.9, 2002 Denali earthquake, located more than 100 km away.

  7. A Spectral Estimate of Average Slip in Earthquakes

    NASA Astrophysics Data System (ADS)

    Boatwright, J.; Hanks, T. C.

    2014-12-01

    We demonstrate that the high-frequency acceleration spectral level ao of an ω-square source spectrum is directly proportional to the average slip of the earthquake Δu divided by the travel time to the station r/β and multiplied by the radiation pattern Fs, that is, ao = 1.37 Fs (β/r) Δu. This simple relation is robust but depends implicitly on the assumed relation between the corner frequency and the source radius, which we take from the Brune (1970, JGR) model. We use this relation to estimate average slip by fitting spectral ratios with smaller earthquakes as empirical Green's functions. For a pair of Mw = 1.8 and 1.2 earthquakes in Parkfield, we fit the spectral ratios published by Nadeau et al. (1994, BSSA) to obtain 0.39 and 0.10 cm. For the Mw = 3.9 earthquake that occurred on Oct 29, 2012, at the Pinnacles, we fit spectral ratios formed with respect to an Md = 2.4 aftershock to obtain 4.4 cm. Using the Sato and Hirasawa (1973, JPE) model instead of the Brune model increases the estimates of average slip by 75%. These estimates of average slip are factors of 5-40 (or 3-23) smaller than the average slips of 3.89 cm and 23.3 cm estimated by Nadeau and Johnson (1998, BSSA) from the slip rates, average seismic moments, and recurrence intervals of the two sequences to which they associate these earthquakes. The most reasonable explanation for this discrepancy is that the stress release and rupture processes of these earthquakes are strongly heterogeneous. However, the fits to the spectral ratios do not indicate that the spectral shapes are distorted in the first two octaves above the corner frequency.
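
    Solving the quoted relation for slip gives Δu = ao r / (1.37 Fs β), a one-line calculation; the shear velocity, radiation pattern coefficient, and observed spectral level below are illustrative values only, not the paper's data.

```python
def average_slip(a_o, r, beta=3500.0, Fs=0.6):
    """Solve a_o = 1.37 * Fs * (beta / r) * du for the average slip du.

    a_o:  high-frequency acceleration spectral level (m/s)
    r:    hypocentral distance (m)
    beta: shear-wave velocity (m/s), illustrative value
    Fs:   radiation pattern coefficient, illustrative average value
    """
    return a_o * r / (1.37 * Fs * beta)

# Illustrative numbers only: a_o = 0.002 m/s observed at 10 km distance.
print(f"average slip ~ {average_slip(0.002, 10_000):.4f} m")
```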

  8. An Account of Preliminary Landslide Damage and Losses Resulting from the February 28, 2001, Nisqually, Washington, Earthquake

    USGS Publications Warehouse

    Highland, Lynn M.

    2003-01-01

    The February 28, 2001, Nisqually, Washington, earthquake (Mw = 6.8) damaged an area of the northwestern United States that had previously experienced two major historical earthquakes, in 1949 and in 1965. Preliminary estimates of direct monetary losses from damage due to earthquake-induced landslides are approximately $34.3 million. However, this figure does not include the cost of damage to the elevated portion of the Alaskan Way Viaduct, a major highway through downtown Seattle, Washington, that will be repaired or rebuilt, depending on the future decision of local and state authorities. There is much debate as to the cause of the damage to this viaduct, with evaluations ranging from earthquake shaking and liquefaction to lateral spreading to a combination of these effects. If the viaduct is included in the costs, the losses increase to more than $500 million (if it is repaired) or to more than $1 billion (if it is replaced). The preliminary estimate of losses due to all causes of earthquake damage is approximately $2 billion, which includes temporary repairs to the Alaskan Way Viaduct. These preliminary dollar figures will no doubt increase when plans and decisions regarding the viaduct are completed.

  9. Development of a Global Slope Dataset for Estimation of Landslide Occurrence Resulting from Earthquakes

    USGS Publications Warehouse

    Verdin, Kristine L.; Godt, Jonathan W.; Funk, Christopher C.; Pedreros, Diego; Worstell, Bruce; Verdin, James

    2007-01-01

    Landslides resulting from earthquakes can cause widespread loss of life and damage to critical infrastructure. The U.S. Geological Survey (USGS) has developed an alarm system, PAGER (Prompt Assessment of Global Earthquakes for Response), that aims to provide timely information to emergency relief organizations on the impact of earthquakes. Landslides are responsible for many of the damaging effects following large earthquakes in mountainous regions, and thus data defining the topographic relief and slope are critical to the PAGER system. A new global topographic dataset was developed to aid in rapidly estimating landslide potential following large earthquakes. We used the remotely-sensed elevation data collected as part of the Shuttle Radar Topography Mission (SRTM) to generate a slope dataset with nearly global coverage. Slopes from the SRTM data, computed at 3-arc-second resolution, were summarized at 30-arc-second resolution, along with statistics developed to describe the distribution of slope within each 30-arc-second pixel. Because there are many small areas lacking SRTM data and the northern limit of the SRTM mission was lat 60°N., statistical methods referencing other elevation data were used to fill the voids within the dataset and to extrapolate the data north of 60°N. The dataset will be used in the PAGER system to rapidly assess the susceptibility of areas to landsliding following large earthquakes.
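
    A toy version of the slope-and-summarize workflow described above might look as follows (our sketch: synthetic elevations, an assumed ~90 m cell size for 3-arc-second data at low latitude, and simple block statistics standing in for the full SRTM processing chain).

      import numpy as np

      def slope_deg(elev, cell_m):
          """Maximum-gradient slope (degrees) via central differences."""
          dzdy, dzdx = np.gradient(elev.astype(float), cell_m)
          return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

      def summarize_30s(slope_3s):
          """Aggregate 3-arc-second slopes into 30-arc-second (10x10 block) statistics."""
          h = slope_3s.shape[0] // 10 * 10
          w = slope_3s.shape[1] // 10 * 10
          blocks = slope_3s[:h, :w].reshape(h // 10, 10, w // 10, 10).swapaxes(1, 2)
          flat = blocks.reshape(blocks.shape[0], blocks.shape[1], -1)
          return {"mean": flat.mean(-1), "max": flat.max(-1),
                  "p90": np.percentile(flat, 90, axis=-1)}

      # synthetic 100x100 tile of 3-arc-second cells (~90 m at the equator)
      dem = np.cumsum(np.random.default_rng(0).normal(0, 2.0, (100, 100)), axis=0)
      stats = summarize_30s(slope_deg(dem, 90.0))
      print(stats["mean"].shape, round(float(stats["p90"].max()), 1))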

  10. Monitoring road losses for Lushan 7.0 earthquake disaster utilization multisource remote sensing images

    NASA Astrophysics Data System (ADS)

    Huang, He; Yang, Siquan; Li, Suju; He, Haixia; Liu, Ming; Xu, Feng; Lin, Yueguan

    2015-12-01

    Earthquakes are among the major natural disasters in the world. At 8:02 on 20 April 2013, a catastrophic earthquake of surface-wave magnitude Ms 7.0 occurred in Sichuan province, China. The epicenter of this earthquake was located in the administrative region of Lushan County, and the event was named the Lushan earthquake. The Lushan earthquake caused heavy casualties and property losses in Sichuan province. After the earthquake, various emergency relief supplies had to be transported to the affected areas. The transportation network is the basis for the transportation and allocation of emergency relief supplies; thus, the road losses of the Lushan earthquake had to be monitored. The road loss monitoring results for the Lushan earthquake disaster, obtained using multisource remote sensing images, are reported in this paper. The results indicate that 166 m of national roads, 3707 m of provincial roads, 3396 m of county roads, 7254 m of township roads, and 3943 m of village roads were damaged during the Lushan earthquake disaster. The damaged roads were mainly located in Lushan County, Baoxing County, Tianquan County, Yucheng County, Mingshan County, and Qionglai County. The results can also be used as a decision-making information source by the disaster management government in China.

  11. Large Earthquakes in Developing Countries: Estimating and Reducing their Consequences

    NASA Astrophysics Data System (ADS)

    Tucker, B. E.

    2003-12-01

    Recent efforts to reduce the risk of earthquakes in developing countries have been diverse, earnest, and inadequate. The earthquake risk in developing countries is large and growing rapidly. It is largely ignored. Unless something is done - quickly - to reduce it, both developing and developed countries will suffer human and economic losses far greater than have been experienced in the past. GeoHazards International (GHI) is a nonprofit organization that has attempted to reduce the death and suffering caused by earthquakes in the world's most vulnerable communities, through preparedness, mitigation and prevention. Its approach has included raising awareness, strengthening local institutions and launching mitigation activities, particularly for schools. GHI and its partners around the world have achieved some success: thousands of school children are safer, hundreds of cities are aware of their risk, tens of cities have been assessed and advised, and some local organizations have been strengthened. But there is disturbing evidence that what is being done is insufficient. The problem outpaces the cure. A new program is now being considered that would attempt to improve earthquake-resistant construction of schools, internationally, by publicizing well-managed programs around the world that design, construct and maintain earthquake-resistant schools. While focused on schools, this program might have broader applications in the future.

  12. Emergency Physician Estimation of Blood Loss

    DTIC Science & Technology

    2011-01-01


  13. Estimation of the magnitudes and epicenters of Philippine historical earthquakes

    NASA Astrophysics Data System (ADS)

    Bautista, Maria Leonila P.; Oike, Kazuo

    2000-02-01

    The magnitudes and epicenters of Philippine earthquakes from 1589 to 1895 are estimated based on the review, evaluation and interpretation of historical accounts and descriptions. The first step involves the determination of magnitude-felt area relations for the Philippines for use in the magnitude estimation. Data used were the earthquake reports of 86 recent, shallow events with well-described effects and known magnitude values. Intensities are assigned according to the modified Mercalli intensity scale of I to XII. The areas enclosed by intensities III to IX [A(III) to A(IX)] are measured and related to magnitude values. The most robust relations are found for magnitudes relating to A(VI), A(VII), A(VIII) and A(IX). Historical earthquake data are obtained from primary sources in libraries in the Philippines and Spain. Most of these accounts were made by Spanish priests and officials stationed in the Philippines during the 16th to 19th centuries. More than 3000 events are catalogued, interpreted and their intensities determined by considering the possible effects of local site conditions, the type of construction and the number and locations of existing towns to assess completeness of reporting. Of these events, 485 earthquakes with the largest number of accounts or with at least a minimum report of damage are selected. The historical epicenters are estimated based on the resulting generalized isoseismal maps, augmented by information on recent seismicity and the locations of known tectonic structures. Their magnitudes are estimated by using the previously determined magnitude-felt area equations for recent events. Although historical epicenters are mostly found to lie on known tectonic structures, a few are found to lie along structures that have shown little activity during the instrumented period. A comparison of the magnitude distributions of historical and recent events showed that only the period 1850 to 1900 may be considered well-reported in terms of
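
    The magnitude-felt-area step lends itself to a one-line regression. The sketch below (ours) fits M = a + b log10 A(VI) to a hypothetical calibration set of recent events and then applies it to a historical isoseismal area; all numbers are illustrative, not the paper's.

      import numpy as np

      # hypothetical calibration data: area within intensity VI (km^2) vs. magnitude
      area_vi = np.array([1.2e3, 4.0e3, 9.5e3, 2.1e4, 6.0e4, 1.5e5])
      mag = np.array([5.1, 5.6, 6.0, 6.4, 7.0, 7.5])

      b, a = np.polyfit(np.log10(area_vi), mag, 1)   # M = a + b * log10(A)
      print(f"M = {a:.2f} + {b:.2f} log10 A(VI)")

      # a historical event whose intensity VI isoseismal encloses 3.0e4 km^2:
      print(round(a + b * np.log10(3.0e4), 1))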

  14. An empirical evolutionary magnitude estimation for early warning of earthquakes

    NASA Astrophysics Data System (ADS)

    Chen, Da-Yi; Wu, Yih-Min; Chin, Tai-Lin

    2017-03-01

    It is difficult for an earthquake early warning (EEW) system to provide a consistent magnitude estimate in the early stage of an earthquake occurrence, because only a few stations have been triggered and few seismic signals have been recorded. One feasible method to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the recorded waveforms after the P-wave arrival. However, for a large-magnitude earthquake (Mw > 7.0), the time required to complete the entire rupture of the corresponding fault may be very long, and the magnitude may not be correctly predicted from the initial portion of the seismograms. To estimate the magnitude of a large earthquake in real time, the amplitude parameters should be updated with the ongoing waveforms instead of being taken from a predefined fixed-length time window, since the latter may underestimate the magnitude of large events. In this paper, we propose a fast, robust and less-saturated approach to estimate earthquake magnitudes. The EEW system initially gives a lower bound on the magnitude within a time window of a few seconds and then updates the magnitude, with less saturation, by extending the time window. Here we compared two kinds of time windows for measuring amplitudes: the P-wave time window (PTW) after the P-wave arrival, and the whole-wave time window (WTW) after the P-wave arrival, which may include both P and S waves. Time windows of one to ten seconds for both PTW and WTW are considered to measure the peak ground displacement from the vertical component of the waveforms. Linear regression analyses are run at each time step (1- to 10-s time intervals) to find the empirical relationships among peak ground displacement, hypocentral distance, and magnitude using earthquake records from 1993 to 2012 in Taiwan with magnitudes greater than 5.5 and focal depths less than 30 km. The results show that using the WTW to estimate magnitudes gives a smaller standard deviation than the PTW. The
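
    The regression step described above can be sketched with ordinary least squares (our illustration; the data are hypothetical, and the real study fits a separate relation for each 1- to 10-s PTW and WTW window).

      import numpy as np

      # hypothetical records: peak ground displacement Pd (cm) in one window,
      # hypocentral distance R (km), and catalog moment magnitude Mw
      pd_cm = np.array([0.02, 0.15, 0.60, 0.08, 1.80, 0.35])
      r_km = np.array([80.0, 45.0, 30.0, 120.0, 25.0, 60.0])
      mw = np.array([5.6, 6.3, 6.9, 6.1, 7.3, 6.6])

      # fit Mw = c1*log10(Pd) + c2*log10(R) + c3
      A = np.column_stack([np.log10(pd_cm), np.log10(r_km), np.ones_like(mw)])
      coef, *_ = np.linalg.lstsq(A, mw, rcond=None)
      sigma = (mw - A @ coef).std(ddof=3)
      print(np.round(coef, 2), round(float(sigma), 2))

      # in operation the same fit is redone as the window grows, so the estimate
      # starts as a lower bound and saturates less as more waveform arrives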

  15. Rapid estimate of earthquake source duration: application to tsunami warning.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier

    2016-04-01

    We present a method for estimating the source duration of the fault rupture, based on the high-frequency envelope of teleseismic P-waves, inspired by the original work of Ni et al. (2005). The main interest of this seismic parameter is to detect abnormally slow ruptures, which are characteristic of so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated by comparison with two other independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process, which determines the source time function (M. Vallée et al., 2011). The estimated source duration is also compared with the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (named PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the point of view of operational tsunami warning, numerical simulations of tsunamis depend deeply on the source estimation: the better the source estimate, the better the tsunami forecast. The source duration is not directly injected into the numerical simulations of tsunamis, because the kinematics of the source are presently ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake that occurs in the shallower part of a subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions will be decreased while the slip distribution is increased, like a 'compact' source (Okal and Hébert, 2007). Inversely, a rapid 'snappy' earthquake that has poor tsunami excitation power will be characterized by a higher rigidity modulus and will produce weaker displacement and smaller source dimensions than a 'normal' earthquake. References: Clément, J

  16. Global Earthquake Casualties due to Secondary Effects: A Quantitative Analysis for Improving PAGER Losses

    USGS Publications Warehouse

    Wald, David J.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey’s (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER’s overall assessment of earthquake losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra–Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address the potential for each hazard (Earle et al., Proceedings of the 14th World Conference on Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability.

  17. Global earthquake casualties due to secondary effects: A quantitative analysis for improving rapid loss analyses

    USGS Publications Warehouse

    Marano, K.D.; Wald, D.J.; Allen, T.I.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER's overall assessment of earthquake losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra-Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address the potential for each hazard (Earle et al., Proceedings of the 14th World Conference on Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability. © Springer Science+Business Media B.V. 2009.

  18. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  19. Post-Earthquake People Loss Evaluation Based on Seismic Multi-Level Hybrid Grid: A Case Study on Yushu Ms 7.1 Earthquake in China

    NASA Astrophysics Data System (ADS)

    Yang, Xiaohong; Xie, Zhong; Ling, Feng; Luo, Xiangang; Zhong, Ming

    2016-01-01

    People loss is one of the most important pieces of information a government needs after an earthquake, because it determines the appropriate level of rescue response. However, existing evaluation methods often treat an entire stricken region as a single assessment area and disregard the spatial disparity of influencing factors; as a consequence, results are inaccurate. In order to address this problem, this paper proposes a post-earthquake approach for evaluating people loss based on the seismic multi-level hybrid grid (SMHG). In SMHG, the whole area is divided into grids at different levels with various sizes, which improves the efficiency of data management. With SMHG, disaster statistics can easily be counted both by administrative unit and per unit area. The proposed approach was then applied to the Yushu Ms 7.1 earthquake in China. Results revealed that the estimated number of deaths varied with different exposure grids. Among all the grids tested, the 50×50 exposure grid gave the most satisfactory results, with an estimated death toll of 2,203, an 18.3% deviation from the actual loss. People loss results obtained through the proposed approach were more accurate than those obtained through traditional GIS-based methods.

  20. Centralized web-based loss estimation tool: INLET for disaster response

    NASA Astrophysics Data System (ADS)

    Huyck, C. K.; Chung, H.-C.; Cho, S.; Mio, M. Z.; Ghosh, S.; Eguchi, R. T.; Mehrotra, S.

    2006-03-01

    In the years following the 1994 Northridge earthquake, many researchers in the earthquake community focused on the development of GIS-based loss estimation tools such as HAZUS. Because these highly customizable programs have many users, divergent results after an event can be problematic. Online IMS (Internet Map Servers) offer a centralized system in which data, model updates and results cascade to all users. INLET (Internet-based Loss Estimation Tool) is the first online real-time loss estimation system available to the emergency management and response community within Southern California. In the event of a significant earthquake, Perl scripts written to respond to USGS ShakeCast notifications call INLET routines that use USGS ShakeMaps to estimate losses within minutes after an event. INLET incorporates extensive publicly available GIS databases and uses damage functions simplified from FEMA's HAZUS software. INLET currently estimates building damage, transportation impacts, and casualties. The online model simulates the effects of earthquakes, in the context of the larger RESCUE project, in order to test the integration of IT in evacuation routing. The simulation tool provides a "testbed" environment for researchers to model the effect that disaster awareness and route familiarity can have on traffic congestion and evacuation time.
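
    The core loop of such a system is small: look up shaking at each asset from the ShakeMap grid, then apply a damage function to the exposed value. The sketch below is ours, with an invented intensity-based vulnerability curve standing in for the HAZUS-derived functions INLET actually uses.

      import numpy as np
      from scipy.stats import norm

      def mmi_at_sites(grid_xy, grid_mmi, sites_xy):
          """Nearest-neighbour lookup of ShakeMap intensity at asset locations."""
          d2 = ((grid_xy[None, :, :] - sites_xy[:, None, :]) ** 2).sum(-1)
          return grid_mmi[d2.argmin(axis=1)]

      def loss_estimate(mmi, replacement_cost, mmi50=8.5, beta=1.0):
          """Illustrative mean-damage-ratio curve (smooth in MMI) times exposure."""
          mean_damage_ratio = norm.cdf((mmi - mmi50) / beta)
          return float((mean_damage_ratio * replacement_cost).sum())

      # toy ShakeMap grid and two-building inventory (all values hypothetical)
      grid_xy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
      grid_mmi = np.array([7.8, 7.2, 6.9, 6.1])
      bldg_xy = np.array([[0.1, 0.2], [0.9, 0.8]])
      cost = np.array([2.0e6, 5.0e6])  # replacement values (USD)
      print(f"${loss_estimate(mmi_at_sites(grid_xy, grid_mmi, bldg_xy), cost):,.0f}")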

  1. Blood Loss Estimation Using Gauze Visual Analogue

    PubMed Central

    Ali Algadiem, Emran; Aleisa, Abdulmohsen Ali; Alsubaie, Huda Ibrahim; Buhlaiqah, Noora Radhi; Algadeeb, Jihad Bagir; Alsneini, Hussain Ali

    2016-01-01

    Background: Estimating intraoperative blood loss can be a difficult task, especially when blood is mostly absorbed by gauze. In this study, we have provided an improved method for estimating blood absorbed by gauze. Objectives: To develop a guide to estimate blood absorbed by surgical gauze. Materials and Methods: A clinical experiment was conducted using aspirated blood and common surgical gauze to create a realistic amount of absorbed blood in the gauze. Different percentages of staining were photographed to create an analogue for the amount of blood absorbed by the gauze. Results: A visual analogue scale was created to aid the estimation of blood absorbed by the gauze. The absorptive capacity of different gauze sizes was determined when the gauze was dripping with blood. The amount of reduction in absorption was also determined when the gauze was wetted with normal saline before use. Conclusions: The use of a visual analogue may increase the accuracy of blood loss estimation and decrease the consequences related to over- or underestimation of blood loss. PMID:27626017

  2. Estimating the confidence of earthquake damage scenarios: examples from a logic tree approach

    NASA Astrophysics Data System (ADS)

    Molina, S.; Lindholm, C. D.

    2007-07-01

    Earthquake loss estimation is now becoming an important tool in mitigation planning, where the loss modeling usually is based on a parameterized mathematical representation of the damage problem. In parallel with the development and improvement of such models, the question of sensitivity to parameters that carry uncertainties becomes increasingly important. To this end we have applied the capacity spectrum method (CSM) as described in FEMA's HAZUS-MH Multi-hazard Loss Estimation Methodology (Earthquake Model, Advanced Engineering Building Module; Federal Emergency Management Agency, 2003) and investigated the effects of selected parameters. The results demonstrate that loss scenarios may easily vary by as much as a factor of two because of simple parameter variations. Of particular importance for the uncertainty is the construction quality of the structure. These results represent a warning against simple acceptance of unbounded damage scenarios and strongly support the development of computational methods in which parameter uncertainties are propagated through the computations to facilitate confidence bounds for the damage scenarios.
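
    The kind of sensitivity the authors describe is easy to reproduce with Monte Carlo propagation. The sketch below is not the CSM itself; it is a deliberately simplified stand-in in which capacity, demand and a construction-quality factor are sampled, showing how modest parameter uncertainty can spread the loss by a factor of two or more.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      # hypothetical uncertain inputs to a simplified capacity-vs-demand loss model
      capacity = rng.lognormal(np.log(0.15), 0.3, n)   # yield capacity (g)
      demand = rng.lognormal(np.log(0.25), 0.4, n)     # spectral demand (g)
      quality = rng.uniform(0.7, 1.3, n)               # construction-quality factor

      damage_ratio = np.clip(0.5 * (demand / capacity) * quality - 0.3, 0.0, 1.0)
      loss = damage_ratio * 1.0e9                      # portfolio value (hypothetical)

      lo, med, hi = np.percentile(loss, [16, 50, 84])
      print(f"median ${med/1e6:.0f}M, 68% band ${lo/1e6:.0f}M-${hi/1e6:.0f}M "
            f"(spread x{hi/lo:.1f})")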

  3. Locating earthquakes with surface waves and centroid moment tensor estimation

    NASA Astrophysics Data System (ADS)

    Wei, Shengji; Zhan, Zhongwen; Tan, Ying; Ni, Sidao; Helmberger, Don

    2012-04-01

    Traditionally, P-wave arrival times have been used to locate regional earthquakes. In contrast, the travel times of surface waves depend on source excitation, so the source parameters and depth must be determined independently. Thus surface wave path delays need to be known before such data can be used for location. These delays can be estimated from previous earthquakes using the cut-and-paste technique, ambient seismic noise tomography, and 3D models. Taking the Chino Hills event as an example, we show consistency of path corrections for (>10 s) Love and Rayleigh waves to within about 1 s obtained from these methods. We then use these empirically derived delay maps to determine centroid locations of 138 Southern California moderate-sized (3.5 < Mw < 5.7) earthquakes using surface waves alone. It appears that these methods are capable of locating the main zone of rupture within a few (~3) km accuracy relative to Southern California Seismic Network locations with 5 stations that are well distributed in azimuth. We also address the timing accuracy required to resolve non-double-couple source parameters, which trade off with location, with less than a km of error required for a 10% Compensated Linear Vector Dipole resolution.

  4. Soil amplification maps for estimating earthquake ground motions in the Central US

    USGS Publications Warehouse

    Bauer, R.A.; Kiefer, J.; Hester, N.

    2001-01-01

    The State Geologists of the Central United States Earthquake Consortium (CUSEC) are developing maps to assist State and local emergency managers and community officials in evaluating the earthquake hazards for the CUSEC region. The state geological surveys have worked together to produce a series of maps that show seismic shaking potential for eleven 1 × 2 degree (scale 1:250,000, or 1 in. ≈ 3.9 miles) quadrangles that cover the high-risk area of the New Madrid Seismic Zone in eight states. Shear wave velocity values for the surficial materials were gathered and used to classify the soils according to their potential to amplify earthquake ground motions. Geologic base maps of surficial materials or 3-D material maps, either existing or produced for this project, were used in conjunction with shear wave velocities to classify the soils for the upper 15-30 m. These maps are available in an electronic form suitable for inclusion in the Federal Emergency Management Agency's earthquake loss estimation program (HAZUS). © 2001 Elsevier Science B.V. All rights reserved.
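
    The Vs30-based classification step is simple enough to show directly; the boundaries below are the standard NEHRP site-class limits (this snippet is ours, not from the CUSEC maps, and it ignores the additional geotechnical criteria that can force class E or F).

      def nehrp_site_class(vs30_m_per_s):
          """NEHRP site class from the average shear wave velocity of the upper 30 m."""
          if vs30_m_per_s > 1500.0:
              return "A"   # hard rock
          if vs30_m_per_s > 760.0:
              return "B"   # rock
          if vs30_m_per_s > 360.0:
              return "C"   # very dense soil / soft rock
          if vs30_m_per_s > 180.0:
              return "D"   # stiff soil
          return "E"       # soft soil

      for v in (1600.0, 900.0, 500.0, 250.0, 140.0):
          print(v, nehrp_site_class(v))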

  5. Atmospheric Baseline Monitoring Data Losses Due to the Samoa Earthquake

    NASA Astrophysics Data System (ADS)

    Schnell, R. C.; Cunningham, M. C.; Vasel, B. A.; Butler, J. H.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA) operates an Atmospheric Baseline Observatory at Cape Matatula on the north-eastern point of American Samoa, opened in 1973. The manned observatory conducts continuous measurements of a wide range of climate forcing and atmospheric composition data, including greenhouse gas concentrations, solar radiation, CFC and HFC concentrations, aerosols and ozone, as well as less frequent measurements of many other parameters. The onset of the September 29, 2009, earthquake is clearly visible in the continuous data streams in a variety of ways. The station electrical generator came online when the Samoa power grid failed, so instruments were powered during and after the earthquake. Some instruments ceased operation in a spurt of spurious data followed by silence. Other instruments stopped sending data abruptly when the shaking from the earthquake broke data or power links, or when an integral part of the instrument was damaged. Others survived the shaking but were put out of calibration. Still others suffered damage after the earthquake as heaters ran uncontrolled or rotating shafts continued operating in a damaged environment, grinding away until they seized up or chewed out a new operating space. Some instruments operated as if there had been no earthquake; others were brought back online within a few days. Many of the more complex (and in most cases most expensive) instruments will be out of service for 6 months or more. This presentation will show these results and discuss the impact of the earthquake on long-term measurements of climate forcing agents and other critical climate measurements.

  6. Real-Time Earthquake Intensity Estimation Using Streaming Data Analysis of Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.

    2016-10-01

    Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce the total loss and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is that derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches that use additional data sources or combine sources of both types tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data are acquired from the U.S. Geological Survey (USGS) seismic network in California, and the social sensor data are based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real time to produce one intensity map. The second implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data. To handle the large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from different data sources over 10-min time intervals immediately following the earthquake. Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher

  7. Estimation of Europa's exosphere loss rates

    NASA Astrophysics Data System (ADS)

    Lucchetti, Alice; Plainaki, Christina; Cremonese, Gabriele; Milillo, Anna; Shematovich, Valery; Jia, Xianzhe; Cassidy, Timothy

    2015-04-01

    Reactions in Europa's exosphere are dominated by plasma interactions with neutrals. The cross-sections for these processes are energy dependent, and therefore the respective loss rates of the exospheric species depend on the speed distribution of the charged particles relative to the neutrals, as well as on the densities of each reactant. In this work we review the average H2O, O2, and H2 loss rates due to plasma-neutral interactions in order to estimate Europa's total exosphere loss. Since the electron density at Europa's orbit varies significantly with the magnetic latitude of the moon in Jupiter's magnetosphere, the dissociation and ionization rates for electron-impact processes are subject to spatial and temporal variations. Therefore, the resulting neutral loss rates that determine the actual spatial distribution of the neutral density are not homogeneous. In addition, ion-neutral interactions contribute to the loss of exospheric species as well as to the modification of the energy distribution of the existing species (for example, the O2 energy distribution is modified through charge exchange between O2 and O2+). In our calculations, the photoreactions were considered for conditions of quiet and active Sun.

  8. Earthquake Loss Assessment for the Evaluation of the Sovereign Risk and Financial Sustainability of Countries and Cities

    NASA Astrophysics Data System (ADS)

    Cardona, O. D.

    2013-05-01

    Recently earthquakes have struck cities in both developing and developed countries, revealing significant knowledge gaps and the need to improve the quality of input data and of the assumptions of the risk models. The earthquake and tsunami in Japan (2011) and the disasters due to earthquakes in Haiti (2010), Chile (2010), New Zealand (2011) and Spain (2011), to mention only some unexpected impacts in different regions, have left several concerns regarding hazard assessment as well as the uncertainties associated with the estimation of future losses. Understanding probable losses and reconstruction costs due to earthquakes creates powerful incentives for countries to develop planning options and tools to cope with sovereign risk, including allocating the sustained budgetary resources necessary to reduce those potential damages and safeguard development. Therefore robust risk models are needed to assess future economic impacts, a country's fiscal responsibilities and the contingent liabilities of governments, and to formulate, justify and implement risk reduction measures and optimal financial strategies of risk retention and transfer. Special attention should be paid to the understanding of risk metrics such as the Loss Exceedance Curve (empirical and analytical) and the Expected Annual Loss in the context of conjoint and cascading hazards.
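
    The two metrics named at the end are related by a single integral: the Expected Annual Loss is the area under the Loss Exceedance Curve. A minimal sketch with an invented curve (the numbers are purely illustrative):

      import numpy as np

      # hypothetical loss exceedance curve: annual exceedance rate vs. loss (USD)
      loss = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
      rate = np.array([0.5, 0.1, 0.02, 0.004, 0.0005])

      eal = np.trapz(rate, loss)   # Expected Annual Loss = area under the curve
      print(f"EAL = ${eal/1e6:.1f}M per year")

      # loss at a 500-year return period, by interpolating the curve
      pml_500 = np.interp(1.0 / 500.0, rate[::-1], loss[::-1])
      print(f"500-yr loss = ${pml_500/1e9:.1f}B")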

  9. Estimating the Threat of Tsunamigenic Earthquakes and Earthquake Induced-Landslide Tsunami in the Caribbean

    NASA Astrophysics Data System (ADS)

    McCann, W. R.

    2007-05-01

    more likely to produce slow earthquakes. Subduction of rough seafloor may activate thrust faults within the accretionary prism above the main decollement, causing indentation of the prism toe. Later reactivation of a dormant decollement would enhance the possibility of slow earthquakes. Subduction of significant seafloor relief and corresponding indentation of the accretionary prism toe would then be another parameter to estimate the likelihood of slow earthquakes. Using these criteria, several regions of the Northeastern Caribbean stand out as more likely sources for slow earthquakes.

  10. Estimation of earthquake effects associated with a great earthquake in the New Madrid seismic zone

    USGS Publications Warehouse

    Hopper, Margaret G.; Algermissen, Sylvester Theodore; Dobrovolny, Ernest E.

    1983-01-01

    Estimates have been made of the effects of a large Ms = 8.6, Io = XI earthquake hypothesized to occur anywhere in the New Madrid seismic zone. The estimates are based on the distributions of intensities associated with the earthquakes of 1811-12, 1843 and 1895, although the effects of other historical shocks are also considered. The resulting composite-type intensity map for a maximum intensity of XI is believed to represent the upper level of shaking likely to occur. Specific intensity maps have been developed for six cities near the epicentral region, taking into account the most likely distribution of site response in each city. Intensities found are: IX for Carbondale, IL; VIII and IX for Evansville, IN; VI and VIII for Little Rock, AR; IX and X for Memphis, TN; VIII, IX, and X for Paducah, KY; and VIII and X for Poplar Bluff, MO. On a regional scale, intensities are found to attenuate from the New Madrid seismic zone most rapidly on the west and southwest sides of the zone, and most slowly to the northwest along the Mississippi River, to the northeast along the Ohio River, and to the southeast toward Georgia and South Carolina. Intensities attenuate toward the north, east, and south in a more normal fashion. Known liquefaction effects are documented, but much more research is needed to define the liquefaction potential.

  11. The global historical and future economic loss and cost of earthquakes during the production of adaptive worldwide economic fragility functions

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    Over the past decade, the production of economic indices behind the CATDAT Damaging Earthquakes Database has allowed for the conversion of historical earthquake economic loss and cost events into today's terms using long-term spatio-temporal series of the consumer price index (CPI), construction costs, wage indices, and GDP from 1900-2013. As part of the doctoral thesis of Daniell (2014), databases and GIS layers at country and sub-country level have been produced for population, GDP per capita, and net and gross capital stock (depreciated and non-depreciated) using studies, census information and the perpetual inventory method. In addition, a detailed study has been undertaken to collect and reproduce as many historical isoseismal maps, macroseismic intensity results and reproductions of earthquakes as possible out of the 7208 damaging events in the CATDAT database from 1900 onwards. a) The isoseismal database and population bounds from 3000+ collected damaging events were compared with the output parameters of GDP and net and gross capital stock per intensity bound and administrative unit, creating a spatial join for analysis. b) The historical costs were divided into shaking/direct ground motion effects and secondary effects costs. The shaking costs were further divided into gross-capital-stock-related and GDP-related costs for each administrative unit and intensity bound couplet. c) Costs were then estimated based on the optimisation of the function in terms of costs vs. gross capital stock and costs vs. GDP via regression. Losses were estimated based on net capital stock, looking at the infrastructure age and value at the time of the event. This dataset was then used to develop an economic exposure for each historical earthquake in comparison with the loss recorded in the CATDAT Damaging Earthquakes Database. The production of economic fragility functions for each country was possible using a temporal regression based on the parameters of

  12. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as the ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  13. Near-Real-Time Loss-Estimation for Instrumented Buildings

    NASA Astrophysics Data System (ADS)

    Porter, K. A.; Beck, J. L.; Ching, J.; Mitrani, J.

    2003-12-01

    Building owners make several important decisions in the hours after an earthquake occurs: whether to engage a structural engineer to inspect the building; what to tell investors, rating agencies, or other financial stakeholders; and how to assess the safety of tenants. A current research project seeks to develop the means to perform an automated, building-specific, probabilistic evaluation of detailed physical damage, safety, and loss for instrumented buildings. The project relies on three recent developments: real-time monitoring, an unscented particle filter, and the assembly-based vulnerability (ABV) technique. Real-time monitoring systems such as COMET and R-Shape continuously record and analyze accelerometer and other building data for several instrumented buildings. Sparse response information can be input to a new unscented particle filter to estimate the potentially highly nonlinear structural response at all of the building's degrees of freedom. The complete structural response is then input to the ABV framework, which applies a set of empirical component fragility functions to estimate the probabilistic damage state of every damageable component in the building. Damage data are then input within ABV to standard safety-evaluation criteria to estimate the likely action of safety inspectors. The probabilistic damage state is also input within ABV to a construction-cost-estimation algorithm to evaluate probabilistic repair cost. The project will combine these three elements in a software implementation so that damage, safety, and loss can be calculated and transmitted to a decision maker within minutes of the cessation of strong motion. The research is illustrated using two buildings: one in California, the other in Japan.
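
    The ABV costing step can be sketched in a few lines: each component class gets a lognormal fragility in an engineering demand parameter (peak interstory drift here), and the expected repair cost is summed over the inventory. Everything in the snippet (component list, medians, dispersions, unit costs) is hypothetical.

      import numpy as np
      from scipy.stats import norm

      # (name, count, median drift capacity, log-std dispersion, unit repair cost USD)
      components = [
          ("partitions", 200, 0.005, 0.50, 2_000.0),
          ("glazing", 80, 0.015, 0.45, 4_000.0),
          ("moment connections", 24, 0.030, 0.40, 60_000.0),
      ]

      def expected_repair_cost(peak_drift):
          """Expected repair cost from lognormal component fragilities (ABV-style)."""
          total = 0.0
          for name, count, median, beta, cost in components:
              p_damage = norm.cdf(np.log(peak_drift / median) / beta)
              total += count * p_damage * cost
          return total

      print(f"${expected_repair_cost(0.012):,.0f}")  # at 1.2% peak drift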

  14. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  15. Earthquakes

    MedlinePlus

    Earthquakes are sudden rolling or shaking events caused ... at any time of the year. Before an earthquake: Look around places where you spend time. Identify ...

  16. Rupture Process of the 1969 and 1975 Kurile Earthquakes Estimated from Tsunami Waveform Analyses

    NASA Astrophysics Data System (ADS)

    Ioki, Kei; Tanioka, Yuichiro

    2016-12-01

    The 1969 and 1975 great Kurile earthquakes occurred along the Kurile trench. Tsunamis generated by these earthquakes were observed at tide gauge stations around the coasts of the Okhotsk Sea and the Pacific Ocean. To understand the rupture process of the 1969 and 1975 earthquakes, slip distributions of the two events were estimated using a tsunami waveform inversion technique. Seismic moments estimated from the slip distributions of the 1969 and 1975 earthquakes were 1.1 × 10^21 Nm (Mw 8.0) and 0.6 × 10^21 Nm (Mw 7.8), respectively. The 1973 Nemuro-Oki earthquake occurred at the plate interface adjacent to that ruptured by the 1969 Kurile earthquake. The 1975 Shikotan earthquake occurred in a shallow region of the plate interface that was not ruptured by the 1969 Kurile earthquake. Further, as in the sequence of the 1969 and 1975 earthquakes, it is possible that a great earthquake may occur in the shallow part of the plate interface a few years after a great earthquake in a deeper part of the same region along the trench.
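
    Tsunami waveform inversion of this kind is, at its core, a linear non-negative least-squares problem: observed waveforms are matched by a combination of precomputed Green's functions, one per subfault. A toy sketch with synthetic data (ours; real inversions add smoothing and many more subfaults):

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      G = np.abs(rng.normal(size=(200, 3)))          # waveform samples x subfaults
      true_slip = np.array([2.0, 0.5, 1.2])          # metres
      d = G @ true_slip + rng.normal(0.0, 0.05, 200) # "observed" tide gauge data

      slip, _ = nnls(G, d)                           # slip >= 0 on every subfault
      print(np.round(slip, 2))

      mu, area = 4.0e10, 1.0e9                       # rigidity (Pa), subfault area (m^2)
      m0 = mu * area * slip.sum()
      print(f"Mo = {m0:.2e} Nm, Mw = {(2.0 / 3.0) * (np.log10(m0) - 9.1):.1f}")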

  17. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  18. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  19. Global Earthquake and Volcanic Eruption Economic losses and costs from 1900-2014: 115 years of the CATDAT database - Trends, Normalisation and Visualisation

    NASA Astrophysics Data System (ADS)

    Daniell, James; Skapski, Jens-Udo; Vervaeck, Armand; Wenzel, Friedemann; Schaefer, Andreas

    2015-04-01

    Over the past 12 years, an in-depth database has been constructed for socio-economic losses from earthquakes and volcanoes. The effects of earthquakes and volcanic eruptions have been documented in many databases; however, many errors and incorrect details are often encountered. To combat this, the database was formed with socioeconomic checks of GDP, capital stock, population and other elements, as well as by providing upper and lower bounds for each available event loss. The definition of economic losses within the CATDAT Damaging Earthquakes Database (Daniell et al., 2011a), as of v6.1, has now been revised to provide three options of natural disaster loss pricing, including reconstruction cost, replacement cost and actual loss, in order to better define the impact of historical disasters. For volcanoes as for earthquakes, a reassessment has been undertaken looking at the historical net and gross capital stock and GDP at the time of the event, including the depreciated stock, in order to calculate the actual loss. A normalisation has then been undertaken using updated population, GDP and capital stock. The difference between depreciated and gross capital can be removed from the historical loss estimates, which have all been calculated without taking depreciation of the building stock into account. The culmination of time series from 1900-2014 of net and gross capital stock, GDP and direct economic loss data, together with detailed studies of infrastructure age and existing damage surveys, has allowed the first estimate of this nature. The death tolls in earthquakes from 1900-2014 are presented in various forms, showing around 2.32 million deaths due to earthquakes (with a range of 2.18 to 2.63 million), around 59% of them due to masonry buildings and 28% due to secondary effects. For the volcanic eruption database, a death toll of about 98,000, with a range from around 83,000 to 107,000, is seen from 1900-2014. The application of VSL life costing from death and injury

  20. A comparison of socio-economic loss analysis from the 2013 Haiyan Typhoon and Bohol Earthquake events in the Philippines in near real-time

    NASA Astrophysics Data System (ADS)

    Daniell, James; Mühr, Bernhard; Kunz-Plapp, Tina; Brink, Susan A.; Kunz, Michael; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    In the aftermath of a disaster, the extent of the socioeconomic loss (fatalities, homelessness and economic losses) is often not known, and it may take days before a reasonable estimate is available. Using socio-economic fragility functions (Daniell, 2014), developed by regressing socio-economic indicators through time against historical empirical loss-versus-intensity data, a first estimate can be established. With more information from the region as the disaster unfolds, a more detailed estimate can be provided via a calibration of the initial loss estimate parameters. In 2013, two main disasters hit the Philippines: the Bohol earthquake in October and Typhoon Haiyan in November. Although the two disasters were contrasting and hit different regions, the same generalised methodology was used for the initial rapid estimates and then for updating the disaster loss estimate through time. The CEDIM Forensic Disaster Analysis Group of KIT and GFZ produced 6 reports for Bohol and 2 reports for Haiyan detailing various aspects of the disasters, from the losses to building damage, the socioeconomic profile, and the social networking and disaster response. This study focusses on the loss analysis undertaken. The following technique was used: 1. A regression of historical earthquake and typhoon losses for the Philippines was examined using the CATDAT Damaging Earthquakes Database and various Philippine databases, respectively. 2. The historical intensity impacts of the examined events were placed in a GIS environment in order to allow correlation with the population and capital stock databases from 1900-2013 to create a loss function. The modified human development index from 1900-2013 was also used to calibrate events through time. 3. The earthquake intensity and the wind speed intensity from the 2013 events, together with the 2013 capital stock and population, were used to calculate the number of fatalities (except in Haiyan), homeless and

  1. Estimates of loss rates of jaw tags on walleyes

    USGS Publications Warehouse

    Newman, Steven P.; Hoff, Michael H.

    1998-01-01

    The rate of jaw tag loss was evaluated for walleye Stizostedion vitreum in Escanaba Lake, Wisconsin. We estimated tag loss using two recapture methods, a creel census and fyke netting. Average annual tag loss estimates were 17.5% for fish recaptured by anglers and 27.8% for fish recaptured in fyke nets. However, the fyke-net data were biased by tag loss during netting. The loss rate of jaw tags increased with time and with walleye length.

  2. Estimation of strong ground motions from hypothetical earthquakes on the Cascadia subduction zone, Pacific Northwest

    USGS Publications Warehouse

    Heaton, T.H.; Hartzell, S.H.

    1989-01-01

    Strong ground motions are estimated for the Pacific Northwest assuming that large shallow earthquakes, similar to those experienced in southern Chile, southwestern Japan, and Colombia, may also occur on the Cascadia subduction zone. Fifty-six strong motion recordings for twenty-five subduction earthquakes of Ms ≥ 7.0 are used to estimate the response spectra that may result from earthquakes of Mw < 8¼. Large variations in observed ground motion levels are noted for a given site distance and earthquake magnitude. When compared with motions that have been observed in the western United States, large subduction zone earthquakes produce relatively large ground motions at surprisingly large distances. An earthquake similar to the 22 May 1960 Chilean earthquake (Mw 9.5) is the largest event that is considered to be plausible for the Cascadia subduction zone. This event has a moment which is two orders of magnitude larger than the largest earthquake for which we have strong motion records. The empirical Green's function technique is used to synthesize strong ground motions for such giant earthquakes. Observed teleseismic P-waveforms from giant earthquakes are also modeled using the empirical Green's function technique in order to constrain model parameters. The teleseismic modeling in the period range of 1.0 to 50 sec strongly suggests that fewer Green's functions should be randomly summed than is required to match the long-period moments of giant earthquakes. It appears that a large portion of the moment associated with giant earthquakes occurs at very long periods that are outside the frequency band of interest for strong ground motions. Nevertheless, the occurrence of a giant earthquake in the Pacific Northwest may produce quite strong shaking over a very large region. © 1989 Birkhäuser Verlag.
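
    The random-summation idea is easy to demonstrate: lag many copies of a small event's record across the assumed rupture duration and add them. The sketch below is a bare-bones illustration with a synthetic record (it omits the moment-ratio bookkeeping and spectral corrections a real empirical Green's function simulation requires).

      import numpy as np

      rng = np.random.default_rng(0)

      def egf_random_sum(small_rec, dt, n_sub, rupture_dur_s):
          """Sum n_sub randomly lagged copies of a small-event record."""
          n_lag = int(rupture_dur_s / dt)
          out = np.zeros(len(small_rec) + n_lag)
          for _ in range(n_sub):
              k = int(rng.integers(0, n_lag))
              out[k:k + len(small_rec)] += small_rec
          return out

      dt = 0.01
      t = np.arange(0.0, 5.0, dt)
      small = rng.normal(size=t.size) * np.exp(-t)   # toy small-event accelerogram

      for n_sub in (1000, 300):   # moment-matching count vs. a smaller count
          big = egf_random_sum(small, dt, n_sub, rupture_dur_s=60.0)
          print(n_sub, round(float(np.abs(big).max()), 1))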

  3. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  4. Earthquakes

    MedlinePlus


  5. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid estimation of casualties after an event for humanitarian response. Both of these events resulted in surprisingly high death tolls, casualties and numbers of survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, a further 11,000 people with serious or moderate injuries, and 100,000 people left homeless in this mountainous region of China. In such events relief efforts can benefit significantly from the availability of rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
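
    The semi-empirical chain described above multiplies out per building class: occupants x damage rate x fatality rate given damage. A toy version for one settlement (all counts and rates below are invented for illustration):

      # hypothetical building stock: class -> (building count, occupants per building)
      stock = {
          "adobe": (5000, 5.0),
          "unreinforced_masonry": (3000, 6.0),
          "rc_frame": (1500, 12.0),
      }
      # P(heavy damage | shaking) and fatality rate given heavy damage, per class
      damage_rate = {"adobe": 0.35, "unreinforced_masonry": 0.25, "rc_frame": 0.08}
      fatality_rate = {"adobe": 0.06, "unreinforced_masonry": 0.08, "rc_frame": 0.12}

      deaths = sum(count * occ * damage_rate[c] * fatality_rate[c]
                   for c, (count, occ) in stock.items())
      print(round(deaths), "expected fatalities")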

  6. Improving Estimates of Coseismic Subsidence from southern Cascadia Subduction Zone Earthquakes at northern Humboldt Bay, California

    NASA Astrophysics Data System (ADS)

    Padgett, J. S.; Engelhart, S. E.; Hemphill-Haley, E.; Kelsey, H. M.; Witter, R. C.

    2015-12-01

    Geological estimates of subsidence from past earthquakes help to constrain Cascadia subduction zone (CSZ) earthquake rupture models. To improve subsidence estimates for past earthquakes along the southern CSZ, we apply transfer function analysis to microfossils from 3 intertidal marshes in northern Humboldt Bay, California, ~60 km north of the Mendocino Triple Junction. The transfer function method uses elevation-dependent intertidal foraminiferal and diatom assemblages to reconstruct relative sea-level (RSL) change indicated by shifts in microfossil assemblages. We interpret stratigraphic evidence associated with sudden shifts in microfossils to reflect sudden RSL rise due to subsidence during past CSZ earthquakes. Laterally extensive (>5 km) and sharp mud-over-peat contacts beneath marshes at Jacoby Creek, Mad River Slough, and McDaniel Slough demonstrate widespread earthquake subsidence in northern Humboldt Bay. C-14 ages of plant macrofossils taken from above and below three contacts that correlate across all three sites provide estimates of the times of subsidence at ~250 yr BP, ~1300 yr BP and ~1700 yr BP. Two further contacts observed at only two sites provide evidence for subsidence during possible CSZ earthquakes at ~900 yr BP and ~1100 yr BP. Our study contributes 20 AMS radiocarbon ages of identifiable plant macrofossils that improve estimates of the timing of past earthquakes along the southern CSZ. We anticipate that our results will provide more accurate and precise reconstructions of RSL change induced by southern CSZ earthquakes. Prior to our work, studies in northern Humboldt Bay provided subsidence estimates with vertical uncertainties >±0.5 m, too imprecise to adequately constrain earthquake rupture models. Our method, applied recently in coastal Oregon, has shown that subsidence during past CSZ earthquakes can be reconstructed with a precision of ±0.3 m and substantially improves constraints on rupture models used for seismic hazard

  7. Estimating surface faulting impacts from the ShakeOut scenario earthquake

    USGS Publications Warehouse

    Treiman, J.A.; Ponti, D.J.

    2011-01-01

    An earthquake scenario, based on a kinematic rupture model, has been prepared for a Mw 7.8 earthquake on the southern San Andreas Fault. The rupture distribution, in the context of other historic large earthquakes, is judged reasonable for the purposes of this scenario. This model is used as the basis for generating a surface rupture map and for assessing potential direct impacts on lifelines and other infrastructure. Modeling the surface rupture involves identifying fault traces on which to place the rupture, assigning slip values to the fault traces, and characterizing the specific displacements that would occur to each lifeline impacted by the rupture. Different approaches were required to address variable slip distribution in response to a variety of fault patterns. Our results, involving judgment and experience, represent one plausible outcome and are not predictive because of the variable nature of surface rupture. © 2011, Earthquake Engineering Research Institute.

  8. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes with magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan—the western, eastern, and northeastern Taiwan regions—using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually yielded for clustered events, such as events with foreshocks and events occurring within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake, while the probability obtained from the activation model increases as large earthquakes occur. These results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  9. Estimation of economic losses caused by disruption of lifeline service: An analysis of the Memphis Light, Gas and Water system

    SciTech Connect

    Chang, S.E.; Seligson, H.A.; Eguchi, R.T.

    1995-12-31

    The assessment of economic impact remains an important missing link in earthquake loss estimation procedures. This paper presents a general methodology for evaluating the economic losses caused by seismically-induced disruption of lifeline service in an urban area. The methodology consists of three steps: (1) development of a lifeline usage model on an industry basis; (2) estimation of the spatial distribution of economic activity throughout the urban area; and (3) assessment of direct losses through evaluation of the spatial coincidence of economic activity with lifeline service disruption. To demonstrate this methodology, a pilot analysis was conducted on the Memphis Light, Gas and Water electric power system for a magnitude 7.5 earthquake in the New Madrid seismic zone. Using newly-available empirical data, business interruption in Shelby County, Tennessee, was estimated for major industries in the local economy. Extensions of the methodology are also discussed.
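    A minimal sketch of the three-step methodology, with hypothetical numbers (the zone and industry names, power-dependence factors, and outage durations are assumptions, not values from the Memphis study):

```python
# Sketch of the three-step lifeline-loss methodology (all names and numbers
# are hypothetical, not values from the Memphis Light, Gas and Water study).

daily_output = {                 # step 2: economic activity by zone and industry ($/day)
    ("zone_A", "manufacturing"): 2.0e6,
    ("zone_A", "services"):      1.2e6,
    ("zone_B", "manufacturing"): 0.8e6,
}
power_dependence = {             # step 1: share of daily output lost without power
    "manufacturing": 0.9,
    "services": 0.5,
}
outage_days = {"zone_A": 3.0, "zone_B": 0.5}   # step 3: disruption by zone

# Direct loss = spatial coincidence of activity with service disruption.
loss = sum(
    output * power_dependence[industry] * outage_days[zone]
    for (zone, industry), output in daily_output.items()
)
print(f"direct business-interruption loss: ${loss:,.0f}")
```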

  10. Uncertainty of earthquake losses due to model uncertainty of input ground motions in the Los Angeles area

    USGS Publications Warehouse

    Cao, T.; Petersen, M.D.

    2006-01-01

    In a recent study we used the Monte Carlo simulation method to evaluate the ground-motion uncertainty of the 2002 update of the California probabilistic seismic hazard model. The resulting ground-motion distribution is used in this article to evaluate the contribution of the hazard model to the uncertainty in earthquake loss ratio, the ratio of the expected loss to the total value of a structure. We use the Hazards U.S. (HAZUS) methodology for loss estimation because it is a widely used and publicly available risk model intended for regional studies by public agencies and for use by governmental decision makers. We found that the loss ratio uncertainty depends not only on the ground-motion uncertainty but also on the mean ground-motion level. The ground-motion uncertainty, as measured by the coefficient of variation (COV), is amplified when converting to the loss ratio uncertainty because loss increases nonlinearly with ground motion. By comparing the ground-motion uncertainty with the corresponding loss ratio uncertainty for the structural damage of light wood-frame buildings in the Los Angeles area, we show that the COV of loss ratio is almost twice the COV of ground motion with a return period of 475 years around the San Andreas fault and other major faults in the area. The loss ratio for the 2475-year ground-motion maps is about a factor of three higher than for the 475-year maps. However, the uncertainties in ground motion and loss ratio for the longer return periods are lower than for the shorter return periods because the uncertainty parameters in the hazard logic tree are independent of the return period, but the mean ground motion increases with return period.
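    The COV amplification can be reproduced with a small Monte Carlo experiment: sample lognormal ground motions and push them through an assumed power-law loss-ratio curve. The curve shape and all parameters below are illustrative stand-ins, not the HAZUS loss functions.

```python
# Monte Carlo sketch: lognormal ground motion pushed through an assumed
# power-law loss-ratio curve; exponents > 1 amplify the relative spread.
import numpy as np

rng = np.random.default_rng(0)
mean_sa, cov_gm = 0.4, 0.3                    # mean SA (g) and its COV (assumed)
sigma = np.sqrt(np.log(1.0 + cov_gm**2))      # lognormal shape parameter
mu = np.log(mean_sa) - 0.5 * sigma**2
sa = rng.lognormal(mu, sigma, size=100_000)

# Hypothetical loss-ratio curve, capped at total loss (ratio = 1).
loss_ratio = np.clip(0.05 * (sa / mean_sa) ** 1.8, 0.0, 1.0)

cov_loss = loss_ratio.std() / loss_ratio.mean()
print(f"COV of ground motion: {cov_gm:.2f} -> COV of loss ratio: {cov_loss:.2f}")
```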

  11. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  12. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; the consideration of such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude where the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
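    A sketch of the Ogata & Katsura (1993)-style model named above, in which the observed magnitude-frequency distribution is the Gutenberg-Richter law multiplied by a cumulative-normal detection-rate function. All parameter values are illustrative, and the 99.9% completeness convention used here is only one of several in use.

```python
# Sketch of a Gutenberg-Richter law multiplied by a cumulative-normal
# detection-rate function (Ogata & Katsura-style); parameters are illustrative.
import math

b = 1.0        # Gutenberg-Richter b-value
mu50 = 1.2     # magnitude with 50% detection rate (shifts between weekdays/weekends)
sigma = 0.25   # width of the detection ramp

def detection_rate(m: float) -> float:
    """Cumulative-normal probability that an event of magnitude m is detected."""
    return 0.5 * (1.0 + math.erf((m - mu50) / (sigma * math.sqrt(2.0))))

def detected_rate(m: float, a: float = 5.0) -> float:
    """Expected detected-event rate: Gutenberg-Richter times detection rate."""
    return 10.0 ** (a - b * m) * detection_rate(m)

# One convention for Mc: the smallest magnitude with near-complete (99.9%) detection.
mc = next(m / 100 for m in range(0, 500) if detection_rate(m / 100) >= 0.999)
print(f"Mc ~ {mc:.2f}; detected rate at M1.0 ~ {detected_rate(1.0):.0f} events")
```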

  13. Building vulnerability and human loss assessment in different earthquake intensity and time: a case study of the University of the Philippines, Los Baños (UPLB) Campus

    NASA Astrophysics Data System (ADS)

    Rusydy, I.; Faustino-Eslava, D. V.; Muksin, U.; Gallardo-Zafra, R.; Aguirre, J. J. C.; Bantayan, N. C.; Alam, L.; Dakey, S.

    2017-02-01

    Studies of seismic hazard, building vulnerability and human loss assessment are essential for educational institutions, since their buildings are used by many students, lecturers, researchers, and guests. The University of the Philippines, Los Baños (UPLB) is located in an earthquake-prone area. An earthquake could cause structural damage and injuries within the UPLB community. We conducted earthquake assessments for different magnitudes and occupancy times to predict the possible ground shaking, building vulnerability, and estimated number of casualties in the UPLB community. The data preparation in this study includes earthquake scenario modeling using Intensity Prediction Equations (IPEs) for shallow crustal shaking attenuation to produce intensity maps of bedrock and surface. Earthquake models were generated from segment IV and segment X of the Valley Fault System (VFS). The vulnerability of different building types was calculated using fragility curves for Philippine buildings. Population data for each building at various occupancy times, together with damage-ratio and injury-ratio data, were used to compute the number of casualties. The results reveal that earthquake models from segment IV and segment X of the VFS could generate earthquake intensities between 7.6 – 8.1 MMI on the UPLB campus. A 7.7 Mw earthquake on segment IV (scenario I) could damage 32% - 51% of buildings, and a 6.5 Mw earthquake on segment X (scenario II) could cause structural damage to 18% - 39% of UPLB buildings. If the earthquake occurs at 2 PM (daytime), it could injure 10.2% - 18.8% of the UPLB population in scenario I and 7.2% - 15.6% in scenario II. A 5 PM event is predicted to injure 5.1% - 9.4% in scenario I and 3.6% - 7.8% in scenario II. A nighttime event (2 AM) would cause injuries to students and guests staying in dormitories: an estimated 13 - 66 people in scenario I and 9 - 47 people in

  14. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions, such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquakes are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.
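    A toy version of such a genetic-algorithm inversion is sketched below, with synthetic Green's functions standing in for the elastic half-space response at the coral sites; the population size, mutation scale, and selection rule are arbitrary choices, not the authors' settings.

```python
# Toy genetic-algorithm slip inversion with synthetic Green's functions; the
# GA settings are arbitrary and not the authors' implementation.
import numpy as np

rng = np.random.default_rng(42)
n_patch, n_obs = 12, 8
G = rng.normal(size=(n_obs, n_patch))          # stand-in elastic Green's functions
true_slip = rng.uniform(0.0, 5.0, n_patch)
d_obs = G @ true_slip                          # synthetic "coral" displacements

def misfit(slip: np.ndarray) -> float:
    return float(np.linalg.norm(G @ slip - d_obs))

pop = rng.uniform(0.0, 5.0, size=(200, n_patch))   # random initial slip models
for _ in range(300):
    order = np.argsort([misfit(s) for s in pop])
    parents = pop[order[:50]]                  # keep the fittest quarter
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, len(parents), 2)]
        cut = rng.integers(1, n_patch)         # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0.0, 0.1, n_patch) # mutation
        children.append(np.clip(child, 0.0, 5.0))
    pop = np.vstack([parents, children])

best = min(pop, key=misfit)
print("best-model misfit:", round(misfit(best), 3))
```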

  15. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of earthquakes of different magnitudes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes best-estimate values of the probability distribution of different magnitude earthquakes recurring on a fault, taken from literature sources. Our study aims to apply this model to the Taipei metropolitan area, with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years 13.54% and within 300 years 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probabilities of Taiwan residents experiencing heart disease or malignant neoplasm are 11.5% and 29%, respectively. The inference of this study is that the calculated risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is just as great as that of suffering from heart disease or other health ailments.
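    For reference, a plain Poisson occurrence probability, P = 1 - exp(-t/T), can be computed as below. Note that the paper's revised model folds in a magnitude-recurrence distribution, so its figures differ from this simple baseline.

```python
# Plain-Poisson occurrence probability over a time window; the paper's revised
# model adds a magnitude-recurrence distribution, so its figures differ.
import math

def poisson_probability(return_period_years: float, window_years: float) -> float:
    """P(at least one event in the window) for a Poisson process."""
    return 1.0 - math.exp(-window_years / return_period_years)

for window in (20, 79, 300):
    p = poisson_probability(543.0, window)   # 543-yr return period, as in the abstract
    print(f"P(>=1 event in {window} yr) = {p:.2%}")
```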

  16. Inter-plate aseismic slip on the subducting plate boundaries estimated from repeating earthquakes

    NASA Astrophysics Data System (ADS)

    Igarashi, T.

    2015-12-01

    Sequences of repeating earthquakes are caused by repeated slip of small patches surrounded by aseismic slip areas in plate boundary zones. Recently, they have been detected in many regions. In this study, I detected repeating earthquakes that occurred in Japan and worldwide using seismograms observed by the Japanese seismic network, and investigated the space-time characteristics of inter-plate aseismic slip on the subducting plate boundaries. To extract repeating earthquakes, I calculate cross-correlation coefficients of band-pass filtered seismograms at each station, following Igarashi [2010]. I used two datasets: one based on the USGS catalog for about 25 years from May 1990, and one based on the JMA catalog for about 13 years from January 2002. As a result, I found many sequences of repeating earthquakes on the subducting plate boundaries of the Andaman-Sumatra-Java and Japan-Kuril-Kamchatka-Aleutian subduction zones. Applying the scaling relations among seismic moment, recurrence interval and slip proposed by Nadeau and Johnson [1998], these sequences indicate the space-time changes of inter-plate aseismic slip. The pair of repeating earthquakes with the longest time interval occurred in the Solomon Islands area, with a recurrence interval of about 18.5 years. The estimated slip-rate is about 46 mm/year, which corresponds to about half of the relative plate motion in this area. Several sequences with fast slip-rates correspond to the post-seismic slip after the 2004 Sumatra-Andaman earthquake (M9.0), the 2006 Kuril earthquake (M8.3), the 2007 southern Sumatra earthquake (M8.5), and the 2011 Tohoku-oki earthquake (M9.0). A database of global repeating earthquakes enables comparison of inter-plate aseismic slip across the various plate boundary zones of the world. Extending the analysis periods is likely to reveal more sequences in areas where none were found in this analysis.
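    A back-of-the-envelope slip-rate estimate from a repeating sequence, using the commonly quoted form of the Nadeau and Johnson [1998] scaling; the constants and the example magnitude/interval below are illustrative, and the scaling form should be verified against the original paper before reuse.

```python
# Slip rate from a repeating sequence via the commonly quoted Nadeau & Johnson
# (1998) scaling, d[cm] = 10**(-2.36) * M0**0.17 with M0 in dyne-cm; verify the
# constants against the original paper before reuse.

def moment_dyne_cm(mw: float) -> float:
    """Seismic moment (dyne-cm) from moment magnitude."""
    return 10.0 ** (1.5 * mw + 16.1)

def slip_per_event_cm(mw: float) -> float:
    return 10.0 ** (-2.36) * moment_dyne_cm(mw) ** 0.17

mw, recurrence_yr = 4.0, 5.0          # illustrative repeating-sequence values
slip_rate_mm_yr = 10.0 * slip_per_event_cm(mw) / recurrence_yr
print(f"inferred aseismic slip rate ~ {slip_rate_mm_yr:.0f} mm/yr")
```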

  17. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. (2013, Bulletin of the Seismological Society of America 103), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates from PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using
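    The core of any GMPE-based magnitude estimate is inverting the prediction equation for M station by station and averaging. The sketch below uses a generic log-linear GMPE form with made-up coefficients and station data; the actual Eshaghi et al. coefficients are not reproduced here.

```python
# Per-station magnitude from a generic GMPE, log10(PGV) = c0 + c1*M + c2*log10(R);
# the coefficients and station data are made up for illustration.
import numpy as np

c0, c1, c2 = -4.0, 1.0, -1.3     # assumed coefficients, not Eshaghi et al. values

def magnitude_from_pgv(pgv_cm_s, r_km):
    """Invert the GMPE for M given one station's PGV and source distance."""
    return (np.log10(pgv_cm_s) - c0 - c2 * np.log10(r_km)) / c1

pgv = np.array([12.0, 30.0, 4.5, 8.0])     # cm/s, as stations report in
r = np.array([150.0, 90.0, 300.0, 210.0])  # km

m_est = magnitude_from_pgv(pgv, r)
print("per-station M:", np.round(m_est, 2), "-> median:", round(float(np.median(m_est)), 2))
```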

  18. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  19. Estimation of flood losses to agricultural crops using remote sensing

    NASA Astrophysics Data System (ADS)

    Tapia-Silva, Felipe-Omar; Itzerott, Sibylle; Foerster, Saskia; Kuhlmann, Bernd; Kreibich, Heidi

    2011-01-01

    The estimation of flood damage is an important component of risk-oriented flood design, risk mapping, financial appraisals and comparative risk analyses. However, research on flood loss modelling, especially in the agricultural sector, has not yet gained much attention. Agricultural losses strongly depend on the crops affected, which need to be predicted accurately. Therefore, three different methods to predict flood-affected crops using remote sensing and ancillary data were developed, applied and validated. These methods are: (a) a hierarchical classification based on standard curves of spectral response using satellite images, (b) disaggregation of crop statistics using a Monte Carlo simulation and probabilities of crops to be cultivated on specific soils and (c) analysis of crop rotation with data mining Net Bayesian Classifiers (NBC) using soil data and crop data derived from a multi-year satellite image analysis. A flood loss estimation model for crops was applied and validated in flood detention areas (polders) at the Havel River (Untere Havelniederung) in Germany. The polders were used for temporary storage of flood water during the extreme flood event in August 2002. The flood loss to crops during the extreme flood event in August 2002 was estimated based on the results of the three crop prediction methods. The loss estimates were then compared with official loss data for validation purposes. The analysis of crop rotation with NBC obtained the best result, with 66% of crops correctly classified. The accuracy of the other methods reached 34% with identification using Normalized Difference Vegetation Index (NDVI) standard curves and 19% using disaggregation of crop statistics. The results were confirmed by evaluating the loss estimation procedure, in which the damage model using affected crops estimated by NBC showed the smallest overall deviation (1%) when compared to the official losses. Remote sensing offers various possibilities for the improvement of

  20. Housing type after the Great East Japan Earthquake and loss of motor function in elderly victims: a prospective observational study

    PubMed Central

    Tomata, Yasutake; Kogure, Mana; Sugawara, Yumi; Watanabe, Takashi; Asaka, Tadayoshi; Tsuji, Ichiro

    2016-01-01

    Objective Previous studies have reported that elderly victims of natural disasters might be prone to a subsequent decline in motor function. Victims of the Great East Japan Earthquake (GEJE) relocated to a wide range of different types of housing. As the evacuee lifestyle varies according to the type of housing available to them, their degree of motor function loss might also vary accordingly. However, the association between postdisaster housing type and loss of motor function has never been investigated. The present study was conducted to investigate the association between housing type after the GEJE and loss of motor function in elderly victims. Methods We conducted a prospective observational study of 478 Japanese individuals aged ≥65 years living in Miyagi Prefecture, one of the areas most significantly affected by the GEJE. Information on housing type after the GEJE, motor function as assessed by the Kihon checklist and other lifestyle factors was collected by interview and questionnaire in 2012. Information on motor function was then collected 1 year later. The multiple logistic regression model was used to estimate the multivariate adjusted ORs of motor function loss. Results We classified 53 (11.1%) of the respondents as having loss of motor function. The multivariate adjusted OR (with 95% CI) for loss of motor function among participants who were living in privately rented temporary housing/rental housing was 2.62 (1.10 to 6.24) compared to those who had remained in the same housing as that before the GEJE, and this increase was statistically significant. Conclusions The proportion of individuals with loss of motor function was higher among persons who had relocated to privately rented temporary housing/rental housing after the GEJE. This result may reflect the influence of a move to a living environment where few acquaintances are located (lack of social capital). PMID:27810976

  1. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    SciTech Connect

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  2. USGS approach to real-time estimation of earthquake-triggered ground failure - Results of 2015 workshop

    USGS Publications Warehouse

    Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.

    2016-03-30

    The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.

  3. A Probabilistic Estimate of the Most Perceptible Earthquake Magnitudes in the NW Himalaya and Adjoining Regions

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Koravos, G. Ch.; Tsapanos, T. M.; Vougiouka, G. E.

    2015-02-01

    NW Himalaya and its neighboring region (25°-40°N and 65°-85°E) is one of the most seismically hazardous regions in the Indian subcontinent, a region that has historically experienced large to great damaging earthquakes. In the present study, the most perceptible earthquake magnitudes, Mp, are estimated for intensity I = VII, horizontal peak ground acceleration a = 300 cm/s2 and horizontal peak ground velocity v = 10 cm/s in 28 seismogenic zones using the two earthquake recurrence models of Kijko and Sellevoll (Bulletin of the Seismological Society of America 82(1):120-134, 1992) and Gumbel's third asymptotic distribution of extremes (GIII). Both methods deal with maximum magnitudes. The earthquake perceptibility is calculated by combining earthquake recurrence models with ground motion attenuation relations at a particular level of intensity, acceleration and velocity. The estimated results reveal that the values of Mp for velocity v = 10 cm/s are higher than the corresponding values for intensity I = VII and acceleration a = 300 cm/s2. It is also observed that differences in perceptible magnitudes calculated by the Kijko-Sellevoll method and GIII statistics reach significantly high values, up to 0.7, 0.6 and 1.7 for intensity, acceleration and velocity, respectively, revealing the importance of earthquake recurrence model selection. The estimated most perceptible earthquake magnitudes, Mp, in the present study vary from MW 5.1 to 7.7 over the entire study area. Results for perceptible magnitudes are also represented in the form of spatial maps of the 28 seismogenic zones for the aforementioned threshold levels of intensity, acceleration and velocity, estimated from the two recurrence models. The spatial maps show that the Quetta region of Pakistan, the Hindukush-Pamir Himalaya, the Caucasus mountain belt and the Himalayan frontal thrust belt (Kashmir-Kangra-Uttarkashi-Chamoli regions) exhibit higher values of the most perceptible earthquake magnitudes (M

  4. Coastal land loss and gain as potential earthquake trigger mechanism in SCRs

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2007-12-01

    In stable continental regions (SCRs), historic data show earthquakes can be triggered by natural tectonic sources in the interior of the crust and also by sources stemming from the Earth's sub/surface. Building off this framework, the following abstract discusses both as potential sources that might have triggered the 2007 ML4.2 Folkestone earthquake in Kent, England. Folkestone, located along the southeast coast of Kent in England, is a mature aseismic region. However, a shallow earthquake with a local magnitude of ML = 4.2 occurred on April 28, 2007 at 07:18 UTC about 1 km east of Folkestone (51.008° N, 1.206° E), between Dover and New Romney. The epicentral error is about ±5 km. While coastal land loss has major effects to the southwest and the northeast of Folkestone, research observations suggest that erosion and landsliding do not exist in the immediate Folkestone city area (<1 km). Furthermore, erosion removes rock material from the surface. This mass reduction decreases the gravitational stress component and would bring a fault away from failure, given a tectonic normal and strike-slip fault regime. In contrast, land gain by geoengineering (e.g., shingle accumulation) in the harbor of Folkestone dates back to 1806. The accumulated mass of sand and gravel amounted to 2.8·10^9 kg (2.8 Mt) in 2007. This concentrated mass change, less than 1 km away from the epicenter of the mainshock, was able to change the tectonic stress in the strike-slip/normal stress regime. Since 1806, shear and normal stresses increased most on oblique faults dipping 60±10°. The stresses reached values ranging between 1.0 kPa and 30.0 kPa at depths of up to 2 km, which are critical for triggering earthquakes. Furthermore, the ratio between holding and driving forces continuously decreased for 200 years. In conclusion, coastal engineering at the surface most likely dominates as the potential trigger mechanism for the 2007 ML4.2 Folkestone earthquake. It can be anticipated that
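    The order of magnitude of the quoted stress changes can be checked with the classical Boussinesq point-load solution, treating the accumulated shingle as a single surface point load. This is a deliberate simplification: the mass and geometry come from the abstract, the point-load idealization is ours.

```python
# Boussinesq point-load check of the stress-change magnitudes quoted above;
# mass and geometry from the abstract, the point-load idealization is ours.
import math

F = 2.8e9 * 9.81    # surface load (N) from the accumulated shingle mass

def boussinesq_sigma_z(r_m: float, z_m: float) -> float:
    """Vertical stress change at radial distance r and depth z under a point load."""
    R = math.hypot(r_m, z_m)
    return 3.0 * F * z_m**3 / (2.0 * math.pi * R**5)

for z in (1000.0, 2000.0):
    print(f"depth {z/1000:.0f} km, offset 1 km: ~{boussinesq_sigma_z(1000.0, z)/1e3:.1f} kPa")
```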

  5. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground-motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  6. Using Modified Mercalli Intensities to estimate acceleration response spectra for the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Boatwright, J.; Bundock, H.; Seekins, L.C.

    2006-01-01

    We derive and test relations between the Modified Mercalli Intensity (MMI) and the pseudo-acceleration response spectra at 1.0 and 0.3 s - SA(1.0 s) and SA(0.3 s) - in order to map response spectral ordinates for the 1906 San Francisco earthquake. Recent analyses of intensity have shown that MMI ≥ 6 correlates both with peak ground velocity and with response spectra for periods from 0.5 to 3.0 s. We use these recent results to derive a linear relation between MMI and log SA(1.0 s), and we refine this relation by comparing the SA(1.0 s) estimated from Boatwright and Bundock's (2005) MMI map for the 1906 earthquake to the SA(1.0 s) calculated from recordings of the 1989 Loma Prieta earthquake. South of San Jose, the intensity distributions for the 1906 and 1989 earthquakes are remarkably similar, despite the difference in magnitude and rupture extent between the two events. We use recent strong motion regressions to derive a relation between SA(1.0 s) and SA(0.3 s) for a M7.8 strike-slip earthquake that depends on soil type, acceleration level, and source distance. We test this relation by comparing SA(0.3 s) estimated for the 1906 earthquake to SA(0.3 s) calculated from recordings of both the 1989 Loma Prieta and 1994 Northridge earthquakes, as functions of distance from the fault. © 2006, Earthquake Engineering Research Institute.
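    A relation of this type is linear in log spectral acceleration and is trivially invertible; the sketch below uses placeholder coefficients, not the regression actually derived in the paper.

```python
# Invertible linear MMI-log(SA) relation of the kind derived above; the
# intercept and slope are placeholders, not the paper's regression.

A, B = 3.8, 2.7   # assumed: MMI = A + B * log10(SA(1.0 s) in %g)

def mmi_to_sa_1s(mmi: float) -> float:
    """Map an intensity value to SA(1.0 s), in %g."""
    return 10.0 ** ((mmi - A) / B)

for mmi in (6, 7, 8):
    print(f"MMI {mmi} -> SA(1.0 s) ~ {mmi_to_sa_1s(mmi):.0f} %g")
```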

  7. A discussion of the socio-economic losses and shelter impacts from the Van, Turkey Earthquakes of October and November 2011

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Kunz-Plapp, T.; Vervaeck, A.; Muehr, B.; Markus, M.

    2012-04-01

    The Van earthquake in 2011 hit at 10:41 GMT (13:41 local) on Sunday, October 23rd, 2011. It was a Mw7.1-7.3 event located at a depth of around 10 km, with the epicentre located directly between Ercis (pop. 75,000) and Van (pop. 370,000). Since then, the CEDIM Forensic Analysis Group (a team of seismologists, engineers, sociologists and meteorologists) and www.earthquake-report.com have reported on and analysed the Van event. In addition, many damaging aftershocks occurring after the main event were analysed, including a major aftershock centered on Van-Edremit on November 9th, 2011, which caused significant additional losses. The province of Van has around 1.035 million people as of the last census. The Van province is one of the poorest in Turkey and has much inequality between the rural and urban centers, with an average HDI (Human Development Index) around that of Bhutan or Congo. The earthquakes are estimated to have caused 604 deaths (23 October) and 40 deaths (9 November), mostly due to falling debris and house collapse. In addition, between 1 billion and 4 billion TRY (approx. 555 million USD - 2.2 billion USD) is estimated as total economic losses. This represents around 17 to 66% of the provincial GDP of the Van province (approx. 3.3 billion USD) as of 2011. According to the CATDAT Damaging Earthquakes Database, major earthquakes such as this one have struck before: in the year 1111, an earthquake of magnitude around 6.5-7 caused major damage. In the year 1646 or 1648, Van was again struck by a M6.7 quake killing around 2000 people. In 1881, a M6.3 earthquake near Van killed 95 people. Again, in 1941, a M5.9 earthquake affected Ercis and Van, killing between 190 and 430 people. The years 1945-1946 and 1972 also brought damaging, casualty-bearing earthquakes to the Van province. In 1976, the Van-Muradiye earthquake struck the border region with a M7, killing around 3840 people and leaving around 51,000 people homeless. Key immediate lessons from similar historic

  8. Earthquakes

    EPA Pesticide Factsheets

    Information on this page will help you understand environmental dangers related to earthquakes and what you can do to prepare and recover. It will also help you recognize possible environmental hazards and learn what you can do to protect yourself and your family.

  9. Estimating convective energy losses from solar central receivers

    SciTech Connect

    Siebers, D L; Kraabel, J S

    1984-04-01

    This report outlines a method for estimating the total convective energy loss from a receiver of a solar central receiver power plant. Two types of receivers are considered in detail: a cylindrical, external-type receiver and a cavity-type receiver. The method is intended to provide the designer with a tool for estimating the total convective energy loss that is based on current knowledge of convective heat transfer from receivers to the environment and that is adaptable to new information as it becomes available. The current knowledge consists of information from two recent large-scale experiments, as well as information already in the literature. Also outlined is a method for estimating the uncertainty in the convective loss estimates. Sample estimations of the total convective energy loss and the uncertainties in those convective energy loss estimates for the external receiver of the 10 MWe Solar Thermal Central Receiver Plant (Barstow, California) and the cavity receiver of the International Energy Agency Small Solar Power Systems Project (Almeria, Spain) are included in the appendices.

  10. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional Mo or regional Mw. Conceptually, this method uses an earthquake source model along with an attenuation model and geometrical spreading, which accounts for the propagation, to utilize regional phase amplitudes of any phase and frequency. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined with sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional Mo to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller events. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields the more meaningful quantity of seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
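    Once amplitudes have been corrected to a seismic moment, the final step is the standard moment-to-magnitude conversion; the sketch below assumes the attenuation and spreading corrections have already produced per-phase moment estimates (the values shown are invented).

```python
# Final step of a regional-moment method: average per-phase moment estimates,
# then apply the standard moment-magnitude relation (per-phase values invented;
# the attenuation/spreading corrections are assumed already applied).
import math

def mw_from_moment(m0_newton_m: float) -> float:
    """Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_m) - 9.1)

phase_moments = [3.2e15, 2.8e15, 4.1e15, 3.5e15]   # e.g., Pn, Pg, Sn, Lg estimates
m0 = sum(phase_moments) / len(phase_moments)
print(f"regional Mw ~ {mw_from_moment(m0):.2f}")
```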

  11. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  12. The importance of in-situ observations for rapid loss estimates in the Euro-Med region

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Mazet Roux, G.; Gilles, S.

    2009-04-01

    A major (M>7) earthquake occurring in a densely populated area will inevitably cause significant damage, and generally speaking the poorer the country, the higher the number of fatalities. It was clear to any earthquake monitoring agency that the M7.8 Wenchuan earthquake in May 2008 was a disaster as soon as its magnitude and location had been estimated. However, estimating the losses of moderate to strong earthquakes (M5 to M6) occurring close to an urban area is much trickier, because the losses result from the convolution of many parameters (location, magnitude, depth, directivity, seismic attenuation, site effects, building vulnerability, distribution of the population at the time of the event…) which are either affected by non-negligible uncertainties or poorly constrained, at least at a global scale. Consider just one of these parameters, the epicentral location: in this magnitude range, the characteristic size of the potentially damaged area is comparable to the typical epicentral location uncertainty obtained in real time, i.e. 10 to 15 km. It is then not possible to discriminate in real time between an earthquake located right below a town, which could cause significant damage, and a location 15 km away, whose impact would be much lower. Clearly, even if the uncertainties affecting each of the parameters are properly taken into account, the resulting loss scenarios for such earthquakes will range from no impact to very significant impact, and the results will not be of much use. The way to reduce the uncertainties on the loss estimates in such cases is to collect in-situ information on the local shaking level and/or on the actual damage at a number of localities. In areas of low seismic hazard, the cost of installing dense accelerometric networks is, in practice, too high, and the only remaining solution is to rapidly collect observations of the damage. That is what the EMSC has been developing for the last few years by involving the Citizen in

  13. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  14. Modified Mercalli Intensity for scenario earthquakes in Evansville, Indiana

    USGS Publications Warehouse

    Cramer, Chris; Haase, Jennifer; Boyd, Oliver

    2012-01-01

    Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the fact that Evansville is close to the Wabash Valley and New Madrid seismic zones, there is concern about the hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake. Earthquake-hazard maps provide one way of conveying such estimates of strong ground shaking and will help the region prepare for future earthquakes and reduce earthquake-caused losses.

  15. LOSS ESTIMATE FOR ITER ECH TRANSMISSION LINE INCLUDING MULTIMODE PROPAGATION

    SciTech Connect

    Shapiro, Michael; Bigelow, Tim S; Caughman, John B; Rasmussen, David A

    2010-01-01

    The ITER electron cyclotron heating (ECH) transmission lines (TLs) are 63.5-mm-diam corrugated waveguides that will each carry 1 MW of power at 170 GHz. The TL is defined here as the corrugated waveguide system connecting the gyrotron mirror optics unit (MOU) to the entrance of the ECH launcher and includes miter bends and other corrugated waveguide components. The losses on the ITER TL have been calculated for four possible cases corresponding to having HE(11) mode purity at the input of the TL of 100, 97, 90, and 80%. The losses due to coupling, ohmic, and mode conversion loss are evaluated in detail using a numerical code and analytical approaches. Estimates of the calorimetric loss on the line show that the output power is reduced by about 5 +/- 1% because of ohmic loss in each of the four cases. Estimates of the mode conversion loss show that the fraction of output power in the HE(11) mode is ~3% smaller than the fraction of input power in the HE(11) mode. High output mode purity therefore can be achieved only with significantly higher input mode purity. Combining both ohmic and mode conversion loss, the efficiency of the TL from the gyrotron MOU to the ECH launcher can be roughly estimated in theory as 92% times the fraction of input power in the HE(11) mode.
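    The arithmetic behind the quoted figures is simple enough to check directly; the ~5% ohmic loss and ~3-point HE11 purity drop are taken from the abstract, and treating them as independent multiplicative factors is our simplification.

```python
# Check of the quoted transmission-line efficiency: ~5% ohmic loss combined
# with a ~3-point drop in HE11 fraction; treating these as independent
# multiplicative factors is our simplification.
ohmic_efficiency = 0.95        # ~5% calorimetric/ohmic loss
mode_conversion_drop = 0.03    # HE11 fraction falls ~3 points along the line

for he11_in in (1.00, 0.97, 0.90, 0.80):
    delivered = ohmic_efficiency * (he11_in - mode_conversion_drop)
    print(f"input HE11 purity {he11_in:.0%} -> delivered HE11 power ~ {delivered:.0%}")
```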

  16. Estimating and Presenting Individualized Earthquake Risk Using Web-Based Information Services

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.; Donnellan, A.

    2009-12-01

    Great natural disasters have occurred many times throughout human history. Events such as the San Francisco earthquake of 1906, the 2004 Sumatra earthquake and tsunami, and the 2005 Hurricane Katrina have caused massive destruction and suffering. With the modern tools of risk analysis, forecasting, and the world wide web available, human societies should no longer tolerate the human and economic losses these disasters produce. Thanks to new technologies and web-based applications, it will soon be possible to enable a more sustainable human society in the face of severe, recurring natural disasters in the complex earth system. Web-based information services make it easy to specify geographical locations and describe specific building structures. Couple this with publicly available earthquake forecasts and web-based mapping tools and the public can make more informed choices about how to manage their personal exposure to risk from natural catastrophes.

  17. Estimating soil erosion changes in the Wenchuan earthquake disaster area using geo-spatial information technology

    NASA Astrophysics Data System (ADS)

    Zhang, Bing; Jiao, Quanjun; Wu, Yanhong; Zhang, Wenjuan

    2009-05-01

    The secondary disasters induced by the Wenchuan earthquake of May 12, 2008, such as landslides, collapsing rocks, debris flows, floods, etc., have changed the local natural landscape tremendously and caused heavy soil erosion in the earthquake-hit areas. Using thematic mapper images taken before the earthquake and airborne images taken after the earthquake, we extracted information about the destroyed landscape by utilizing remote sensing and geographical information system techniques. Then, taking into account multi-year precipitation, vegetation cover, soil type, land use, and elevation data, we evaluated the soil erosion area and intensity using the revised universal soil loss equation. Results indicate that the soil erosion in earthquake-hit areas was exacerbated, with the severe erosion area increasing by 279.2 km2, or 1.9% of the total statistical area. Large amounts of soil and debris blocked streams and formed many barrier lakes over an area of more than 3.9 km2. It was evident from the spatial distribution of soil erosion areas that the intensity of soil erosion accelerated in the stream valley areas, especially in the valleys of the Min River and the Jian River.
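    The revised universal soil loss equation used above is a product of empirical factors, A = R·K·LS·C·P; the sketch below shows how a post-earthquake drop in vegetation cover (a higher C factor) raises the predicted loss. All factor values are illustrative, not the study's inputs.

```python
# RUSLE is a product of empirical factors; the values below are illustrative,
# chosen only to show how a post-earthquake rise in the cover factor C
# (vegetation stripped by landslides) scales the predicted loss.

def rusle_soil_loss(R, K, LS, C, P):
    """A = R*K*LS*C*P, mean annual soil loss (t/ha/yr)."""
    return R * K * LS * C * P

pre  = rusle_soil_loss(R=2500, K=0.03, LS=8.0, C=0.05, P=1.0)
post = rusle_soil_loss(R=2500, K=0.03, LS=8.0, C=0.35, P=1.0)
print(f"soil loss: {pre:.0f} -> {post:.0f} t/ha/yr after vegetation loss")
```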

  18. Loss Estimation Modeling Of Scenario Lahars From Mount Rainier, Washington State, Using HAZUS-MH

    NASA Astrophysics Data System (ADS)

    Walsh, T. J.; Cakir, R.

    2011-12-01

    We have adapted the lahar hazard zones developed by Hoblitt and others (1998), converted to digital data by Schilling and others (2008), into the appropriate format for HAZUS-MH, FEMA's loss estimation model. We assume that structures engulfed by cohesive lahars will suffer complete loss, and that structures affected by post-lahar flooding will be appropriately modeled by the HAZUS-MH flood model. Another approach investigated is to estimate the momentum of lahars, calculate a lateral force, and apply the earthquake model, substituting the lahar lateral force for PGA. Our initial model used the HAZUS default data, which include estimates of building type and value from census data. This model estimated a loss of about $12 billion for a repeat of a lahar similar to the Electron Mudflow down the Puyallup River. Because HAZUS data are based on census tracts, this estimated damage includes everything in the census tract, even buildings outside of the lahar hazard zone. To correct this, we acquired assessors' data from all of the affected counties and converted them into HAZUS format. We then clipped the data to the boundaries of the lahar hazard zone to more precisely delineate those properties actually at risk in each scenario. This refined our initial loss estimate to about $6 billion, excluding building content values. We are also investigating rebuilding the lahar hazard zones by applying Lahar-Z to a more accurate topographic grid derived from recent lidar data acquired from the Puget Sound Lidar Consortium and Mount Rainier National Park. Final results of these models for the major drainages of Mount Rainier will be posted to the Washington Interactive Geologic Map (http://www.dnr.wa.gov/ResearchScience/Topics/GeosciencesData/Pages/geology_portal.aspx).

  19. Rapid Estimation of Macroseismic Intensity for On-site Earthquake Early Warning in Italy from Early Radiated Energy

    NASA Astrophysics Data System (ADS)

    Emolo, A.; Zollo, A.; Brondi, P.; Picozzi, M.; Mucciarelli, M.

    2015-12-01

    Earthquake Early Warning Systems (EEWS) are effective tools for risk mitigation in active seismic regions. Recently, a feasibility study of a nation-wide earthquake early warning system has been conducted for Italy, considering the RAN network and the EEW software platform PRESTo. This work showed that reliable estimates of magnitude and epicentral location would be available within 3-4 seconds after the first P-wave arrival. On the other hand, given the RAN's density, a regional EEWS approach would result in a Blind Zone (BZ) of 25-30 km on average. Such a BZ would provide lead-times greater than zero only for events having magnitude larger than 6.5. Considering that in Italy even smaller events are capable of generating great losses, both in human and economic terms, as dramatically experienced during the recent 2009 L'Aquila (ML 5.9) and 2012 Emilia (ML 5.9) earthquakes, it has become urgent to develop and test on-site approaches. The present study is focused on the development of a new on-site EEW methodology for the estimation of macroseismic intensity at a target site or area. In this analysis we used a few thousand accelerometric traces recorded by the RAN for the largest earthquakes (ML>4) that occurred in Italy in the period 1997-2013. The work is focused on the integral EW parameter Squared Velocity Integral (IV2) and on its capability to predict peak ground velocity (PGV) and Housner Intensity (IH); from these we parameterized a new relation between IV2 and macroseismic intensity. To assess the performance of the developed on-site EEW relation, we used data from the largest events that occurred in Italy in the last 6 years, recorded by the Osservatorio Sismico delle Strutture, as well as recordings of moderate earthquakes reported by INGV Strong Motion Data. The results show that the macroseismic intensity values predicted from IV2 and those estimated from PGV and IH are in good agreement.
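    The IV2 parameter itself is just the time integral of squared ground velocity over the early portion of the record; the sketch below computes it for a synthetic trace and maps it to intensity through a hypothetical log-linear regression (the coefficients are placeholders, not the paper's).

```python
# IV2 = integral of squared ground velocity over the early record; synthetic
# trace and placeholder regression coefficients (not the paper's values).
import numpy as np

dt = 0.01                                   # 100 samples/s
t = np.arange(0.0, 3.0, dt)                 # first 3 s after the P arrival
velocity = 0.02 * np.sin(2 * np.pi * 2.0 * t) * np.exp(t)   # toy trace, m/s

iv2 = float(np.sum(velocity**2) * dt)       # squared velocity integral

a, b = 1.2, 9.0                             # hypothetical regression constants
intensity = a * np.log10(iv2) + b
print(f"IV2 = {iv2:.2e} m^2/s, predicted macroseismic intensity ~ {intensity:.1f}")
```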

  20. Strong earthquake motion estimates for three sites on the U.C. San Diego campus

    SciTech Connect

    Day, S; Doroudian, M; Elgamal, A; Gonzales, S; Heuze, F; Lai, T; Minster, B; Oglesby, D; Riemer, M; Vernon, F; Vucetic, M; Wagoner, J; Yang, Z

    2002-05-07

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill, sample, and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling
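
    As a sketch of the conjugate downhole/uphole analysis mentioned above, the low-strain site response can be approximated by the spectral ratio of surface to borehole recordings. Spectral smoothing and water-level stabilization, which a production implementation would need, are omitted here.

    import numpy as np

    def site_transfer_function(uphole, downhole, dt):
        """Empirical transfer function: amplitude spectral ratio of the
        uphole (surface) record to the conjugate downhole (bedrock) record."""
        freqs = np.fft.rfftfreq(len(uphole), d=dt)
        ratio = np.abs(np.fft.rfft(uphole)) / np.abs(np.fft.rfft(downhole))
        return freqs, ratio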

  1. Strong Earthquake Motion Estimates for Three Sites on the U.C. Riverside Campus

    SciTech Connect

    Archuleta, R.; Elgamal, A.; Heuze, F.; Lai, T.; Lavalle, D.; Lawrence, B.; Liu, P.C.; Matesic, L.; Park, S.; Riemar, M.; Steidl, J.; Vucetic, M.; Wagoner, J.; Yang, Z.

    2000-11-01

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling, geophysical

  2. Toward reliable automated estimates of earthquake source properties from body wave spectra

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda

    2016-06-01

    We develop a two-stage methodology for automated estimation of earthquake source properties from body wave spectra. An automated picking algorithm is used to window and calculate spectra for both P and S phases. Empirical Green's functions are stacked to minimize nongeneric source effects such as directivity and are used to deconvolve the spectra of target earthquakes for analysis. In the first stage, window lengths and frequency ranges are defined automatically from the event magnitude and used to get preliminary estimates of the P and S corner frequencies of the target event. In the second stage, the preliminary corner frequencies are used to update various parameters to increase the amount of data and overall quality of the deconvolved spectral ratios (target event over stacked Empirical Green's function). The obtained spectral ratios are used to estimate the corner frequencies, strain/stress drops, radiated seismic energy, apparent stress, and the extent of directivity for both P and S waves. The technique is applied to data generated by five small to moderate earthquakes in southern California at hundreds of stations. Four of the five earthquakes are found to have significant directivity. The developed automated procedure is suitable for systematic processing of large seismic waveform data sets with no user involvement.
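
    Deconvolved spectral ratios of the kind described above are commonly fit with an omega-squared source model to recover the two corner frequencies. The following is a minimal sketch of such a fit on synthetic data, not the authors' implementation.

    import numpy as np
    from scipy.optimize import curve_fit

    def brune_ratio(f, moment_ratio, fc_target, fc_egf):
        """Spectral ratio of two omega-squared (Brune) sources."""
        return moment_ratio * (1 + (f / fc_egf) ** 2) / (1 + (f / fc_target) ** 2)

    # Synthetic demonstration data with multiplicative log-normal noise
    freqs = np.linspace(0.5, 40.0, 200)
    ratio = brune_ratio(freqs, 120.0, 4.0, 18.0)
    ratio *= np.random.lognormal(0.0, 0.05, freqs.size)

    popt, _ = curve_fit(brune_ratio, freqs, ratio, p0=(100.0, 5.0, 20.0))
    print("moment ratio %.0f, target fc %.1f Hz, EGF fc %.1f Hz" % tuple(popt))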

  3. Source scaling relationships of small earthquakes estimated from the inversion method using stopping phases

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Takeo, M.; Ito, H.; Ellsworth, W.; Matsuzawa, T.; Kuwahara, Y.; Iio, Y.; Horiuchi, S.; Ohmi, S.

    2002-12-01

    We estimate source parameters of small earthquakes from stopping phases and investigate the scaling relationships between source parameters. The method we employed [Imanishi and Takeo, 2002] assumes an elliptical fault model proposed by Savage [1966]. In this model, two high-frequency stopping phases, Hilbert transforms of each other, are radiated, and the difference in arrival times between the two stopping phases depends on the average rupture velocity, the source dimension, the aspect ratio of the elliptical fault, the direction of rupture propagation, and the orientation of the fault plane. These parameters can be estimated by a nonlinear least squares inversion method. The earthquakes studied occurred between May and August 1999 in western Nagano prefecture, Japan, an area characterized by high levels of shallow seismicity. The data consist of seismograms recorded in an 800 m deep borehole and by a 46-station surface seismic array whose spacing is a few km. In particular, the 800 m borehole data provide a wide frequency bandwidth and greatly reduce ground noise and coda wave amplitude compared to surface recordings. High-frequency stopping phases are readily detected on accelerograms recorded in the borehole. After correcting both borehole and surface data for attenuation, we also measure the rise time, defined as the time lag from the arrival time of the direct wave to the first slope change in the displacement pulse. Using these durations, we estimate source parameters of 25 earthquakes ranging in size from M1.2 to M2.6. The rupture aspect ratio is estimated to be about 0.8 on average. This suggests that the assumption of a circular crack model is valid as a first-order approximation for the earthquakes analyzed in this study. Static stress drops range from approximately 0.1 to 5 MPa and do not vary with seismic moment. It seems that the breakdown seen in the previous studies by other authors using surface data is simply an artifact of
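
    For reference, static stress drops like those quoted above follow from the standard circular-crack relation once moment and source radius are known; a worked example under assumed numbers (not taken from the paper):

    import numpy as np

    def circular_crack_stress_drop(m0, radius):
        """Eshelby (1957) static stress drop for a circular crack:
        delta_sigma = (7/16) * M0 / a^3, with M0 in N m and a in m."""
        return 7.0 * m0 / (16.0 * radius ** 3)

    # Assumed example: an M 2.0 event with a 60 m source radius
    m0 = 10 ** (1.5 * 2.0 + 9.1)  # Hanks & Kanamori moment, ~1.3e12 N m
    print(circular_crack_stress_drop(m0, 60.0) / 1e6, "MPa")  # ~2.5 MPa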

  4. Towards Practical, Real-Time Estimation of Spatial Aftershock Probabilities: A Feasibility Study in Earthquake Hazard

    NASA Astrophysics Data System (ADS)

    Morrow, P.; McCloskey, J.; Steacy, S.

    2001-12-01

    It is now widely accepted that the goal of deterministic earthquake prediction is unattainable in the short term and may even be forbidden by nonlinearity in the generating dynamics. This nonlinearity does not, however, preclude the estimation of earthquake probability and, in particular, how this probability might change in space and time; earthquake hazard estimation might be possible in the absence of earthquake prediction. Recently, there has been a major development in the understanding of stress triggering of earthquakes which allows accurate calculation of the spatial variation of aftershock probability following any large earthquake. Over the past few years this Coulomb stress technique (CST) has been the subject of intensive study in the geophysics literature and has been extremely successful in explaining the spatial distribution of aftershocks following several major earthquakes. The power of current micro-computers, the great number of local, telemetered seismic networks, and the rapid acquisition of data from satellites, coupled with the speed of modern telecommunications and data transfer, mean that it may now be possible to apply these new techniques in a forward sense. In other words, it is theoretically possible today to make predictions of the likely spatial distribution of aftershocks in near-real-time following a large earthquake. Approximate versions of such predictions could be available within, say, 0.1 days after the mainshock and might be continually refined and updated over the next 100 days. The European Commission has recently provided funding for a project to assess the extent to which it is currently possible to move CST predictions into a practically useful time frame so that low-confidence estimates of aftershock probability might be made within a few hours of an event and improved in near-real-time, as data of better quality become available over the following days to tens of days. Specifically, the project aims to assess the
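
    At the core of the CST is the Coulomb failure stress change resolved on receiver faults; a one-function sketch of that quantity (sign conventions vary across the literature):

    def coulomb_stress_change(delta_tau, delta_sigma_n, mu_eff=0.4):
        """Coulomb failure stress change dCFS = d(tau) + mu' * d(sigma_n):
        delta_tau is the shear stress change in the slip direction,
        delta_sigma_n the normal stress change (tension positive, so
        unclamping promotes failure), mu_eff the effective friction."""
        return delta_tau + mu_eff * delta_sigma_n

    # Example: 0.1 MPa shear increase, 0.05 MPa clamping -> dCFS = 0.08 MPa
    print(coulomb_stress_change(0.1e6, -0.05e6) / 1e6, "MPa")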

  5. Strong Earthquake Motion Estimates for the UCSB Campus, and Related Response of the Engineering 1 Building

    SciTech Connect

    Archuleta, R.; Bonilla, F.; Doroudian, M.; Elgamal, A.; Hueze, F.

    2000-06-06

    This is the second report on the UC/CLC Campus Earthquake Program (CEP), concerning the estimation of exposure of the U.C. Santa Barbara campus to strong earthquake motions (Phase 2 study). The main results of Phase 1 are summarized in the current report. This document describes the studies which resulted in site-specific strong motion estimates for the Engineering I site, and discusses the potential impact of these motions on the building. The main elements of Phase 2 are: (1) determining that an M 6.8 earthquake on the North Channel-Pitas Point (NCPP) fault is the largest threat to the campus, with a recurrence interval estimated at 350 to 525 years; (2) recording earthquakes from that fault on March 23, 1998 (M 3.2) and May 14, 1999 (M 3.2) at the new UCSB seismic station; (3) using these recordings as empirical Green's functions (EGF) in scenario earthquake simulations which provided strong motion estimates (seismic syntheses) at a depth of 74 m under the Engineering I site; 240 such simulations were performed, each with the same seismic moment, but giving a broad range of motions that were analyzed for their mean and standard deviation; (4) laboratory testing, at U.C. Berkeley and U.C. Los Angeles, of soil samples obtained from drilling at the UCSB station site, to determine their response to earthquake-type loading; (5) performing nonlinear soil dynamic calculations, using the soil properties determined in-situ and in the laboratory, to calculate the surface strong motions resulting from the seismic syntheses at depth; (6) comparing these CEP-generated strong motion estimates to acceleration spectra based on the application of state-of-practice methods - the IBC 2000 code, UBC 97 code and Probabilistic Seismic Hazard Analysis (PSHA); this comparison will be used to formulate design-basis spectra for future buildings and retrofits at UCSB; and (7) comparing the response of the Engineering I building to the CEP ground motion estimates and to the design

  6. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body‐wave onset and the arrival time of the peak high‐frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high‐frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high‐frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower‐frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high‐frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
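
    A minimal sketch of the scaling quoted above; the additive constant below is a placeholder, since the calibrated intercept is not given in this abstract.

    import numpy as np

    def magnitude_from_top(top_seconds, c=5.0):
        """M = 2*log10(Top) + c, the self-similar scaling described above.
        c is a placeholder intercept, not the paper's calibrated value."""
        return 2.0 * np.log10(top_seconds) + c

    # With this placeholder, Top ~ 100 s maps to M ~ 9, in the spirit of
    # the 2011 Tohoku example quoted above.
    print(magnitude_from_top(100.0))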

  7. Earthquake slip vectors and estimates of present-day plate motions

    NASA Technical Reports Server (NTRS)

    Demets, Charles

    1993-01-01

    Two alternative models for present-day global plate motions are derived from subsets of the NUVEL-1 data in order to investigate the degree to which earthquake slip vectors affect the NUVEL-1 model and to provide estimates of present-day plate velocities that are independent of earthquake slip vectors. The data set used to derive the first model excludes subduction zone slip vectors. The primary purpose of this model is to demonstrate that the 240 subduction zone slip vectors in the NUVEL-1 data set do not greatly affect the plate velocities predicted by NUVEL-1. A data set that excludes all of the 724 earthquake slip vectors used to derive NUVEL-1 is used to derive the second model. This model is suitable as a reference model for kinematic studies that require plate velocity estimates unaffected by earthquake slip vectors. The slip-dependent slip vector bias along transform faults is investigated using the second model, and evidence is sought for biases in slip directions along spreading centers.

  8. A spatially explicit estimate of avoided forest loss.

    PubMed

    Honey-Rosés, Jordi; Baylis, Kathy; Ramírez, M Isabel

    2011-10-01

    With the potential expansion of forest conservation programs spurred by climate-change agreements, there is a need to measure the extent to which such programs achieve their intended results. Conventional methods for evaluating conservation impact tend to be biased because they do not compare like areas or account for spatial relations. We assessed the effect of a conservation initiative that combined designation of protected areas with payments for environmental services to conserve overwintering habitat for the monarch butterfly (Danaus plexippus) in Mexico. To do so, we used a spatial-matching estimator that matches covariates among polygons and their neighbors. We measured avoided forest loss (avoided disturbance and deforestation) by comparing forest cover on protected and unprotected lands that were similar in terms of accessibility, governance, and forest type. Whereas conventional estimates of avoided forest loss suggest that conservation initiatives did not protect forest cover, we found evidence that the conservation measures are preserving forest cover. We found that the conservation measures protected between 200 ha and 710 ha (3-16%) of forest that is high-quality habitat for monarch butterflies, but had a smaller effect on total forest cover, preserving between 0 ha and 200 ha (0-2.5%) of forest with canopy cover >70%. We suggest that future estimates of avoided forest loss be analyzed spatially to account for how forest loss occurs across the landscape. Given the forthcoming demand from donors and carbon financiers for estimates of avoided forest loss, we anticipate our methods and results will contribute to future studies that estimate the outcome of conservation efforts.

  9. Southern California regional earthquake probability estimated from continuous GPS geodetic data

    NASA Astrophysics Data System (ADS)

    Anderson, G.

    2002-12-01

    Current seismic hazard estimates are primarily based on seismic and geologic data, but geodetic measurements from large, dense arrays such as the Southern California Integrated GPS Network (SCIGN) can also be used to estimate earthquake probabilities and seismic hazard. Geodetically derived earthquake probability estimates are particularly important in regions with poorly constrained fault slip rates. In addition, they are useful because such estimates come with well-determined error bounds. Long-term planning is underway to incorporate geodetic data in the next generation of United States national seismic hazard maps, and techniques for doing so need further development. I present a new method for estimating the expected rates of earthquakes using strain rates derived from geodetic station velocities. I compute the strain rates using a new technique devised by Y. Hsu and M. Simons [pers. comm.], which computes the horizontal strain rate tensor at each node of a pre-defined regular grid, using all geodetic velocities in the data set weighted by distance and estimated uncertainty. In addition, they use a novel weighting to handle the effects of station distribution: they divide the region covered by the geodetic network into Voronoi cells using the station locations and weight each station's contribution to the strain rate tensor by the area of the Voronoi cell centered at that station. I convert the strain rate tensor into the equivalent seismic moment rate density using the method of Savage and Simpson [1997] and maximum seismogenic depths estimated from regional seismicity; the moment rate density gives the expected rate of seismic moment release in a region, based on the geodetic strain rates. Assuming the seismicity in the given region follows a Gutenberg-Richter relationship, I convert the moment rate to an expected rate of earthquakes of a given magnitude. I will present results of a study applying this method to data from the SCIGN array to estimate
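
    The strain-rate-to-moment-rate conversion can be sketched as follows; the depths, areas, and single-magnitude release assumption below are illustrative simplifications, not the study's calibrated choices.

    import numpy as np

    MU = 3.0e10  # shear modulus, Pa

    def moment_rate_savage_simpson(e1_dot, e2_dot, depth_m, area_m2, mu=MU):
        """Scalar moment rate from principal surface strain rates (1/yr),
        after Savage and Simpson (1997):
        Mdot = 2*mu*H*A*max(|e1|, |e2|, |e1+e2|)."""
        return 2.0 * mu * depth_m * area_m2 * max(
            abs(e1_dot), abs(e2_dot), abs(e1_dot + e2_dot))

    def event_rate(moment_rate, magnitude):
        """Crude upper-bound rate of magnitude-`magnitude` events if the
        whole geodetic moment budget were released at that single size."""
        m0 = 10 ** (1.5 * magnitude + 9.1)  # Hanks & Kanamori (N m)
        return moment_rate / m0

    # e.g. 0.1 microstrain/yr over a 100 km x 100 km region, 15 km depth
    mdot = moment_rate_savage_simpson(1e-7, -0.5e-7, 15e3, 1e10)
    print(event_rate(mdot, 7.0), "M7 events/yr (about one per ~45 years)")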

  10. Estimating refractivity from propagation loss in turbulent media

    NASA Astrophysics Data System (ADS)

    Wagner, Mark; Gerstoft, Peter; Rogers, Ted

    2016-12-01

    This paper estimates lower atmospheric refractivity (M-profile) given an electromagnetic (EM) propagation loss (PL) measurement. Specifically, height-independent PL measurements over a range of 10-80 km are used to infer information about the existence and potential parameters of atmospheric ducts in the lowest 1 km of the atmosphere. The main improvement made on previous refractivity estimations is inclusion of range-dependent fluctuations due to turbulence in the forward propagation model. Using this framework, the maximum likelihood (ML) estimate of atmospheric refractivity has good accuracy, and with prior information about ducting the maximum a posteriori (MAP) refractivity estimate can be found. Monte Carlo methods are used to estimate the mean and covariance of PL, which are fed into a Gaussian likelihood function for evaluation of estimated refractivity probability. Comparisons were made between inversions performed on propagation loss data simulated by a wide angle parabolic equation (PE) propagation model with added homogeneous and inhomogeneous turbulence. It was found that the turbulence models produce significantly different results, suggesting that accurate modeling of turbulence is key.
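
    The likelihood evaluation described above reduces to a multivariate Gaussian in the PL vector; a compact sketch, with the mean and covariance assumed to come from Monte Carlo runs of the propagation model for one candidate refractivity profile:

    import numpy as np

    def gaussian_loglike(pl_obs, pl_mean, pl_cov):
        """Log-likelihood of an observed propagation-loss vector given
        Monte Carlo estimates of its mean and covariance."""
        r = pl_obs - pl_mean
        sign, logdet = np.linalg.slogdet(pl_cov)
        return -0.5 * (r.size * np.log(2 * np.pi) + logdet
                       + r @ np.linalg.solve(pl_cov, r))

    # ML estimate: evaluate over candidate profiles and keep the best one.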

  11. A phase coherence approach to estimating the spatial extent of earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, Jessica C.; Ampuero, Jean-Paul

    2016-04-01

    We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources---if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal to noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur at similar wavelengths to the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components on a single station, which see the same apparent source time functions.
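
    A bare-bones sketch of the inter-station phase coherence idea, assuming equal-length traces of two events at a common set of stations (windowing, alignment, and noise weighting omitted):

    import numpy as np

    def phase_coherence(event_a, event_b, dt):
        """Inter-station phase coherence (after Hawthorne & Ampuero, 2014).
        event_a, event_b: lists of same-length traces of two earthquakes at
        the same stations. Returns frequencies and the magnitude of the
        station-averaged, unit-amplitude inter-event cross-spectrum: near 1
        at frequencies where both events behave as co-located point sources,
        since the path Green's function then cancels at every station."""
        phasors = []
        for ua, ub in zip(event_a, event_b):
            cross = np.fft.rfft(ua) * np.conj(np.fft.rfft(ub))
            phasors.append(cross / np.abs(cross))  # keep phase only
        freqs = np.fft.rfftfreq(len(event_a[0]), d=dt)
        return freqs, np.abs(np.mean(phasors, axis=0))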

  12. Probabilistic estimates of surface coseismic slip and afterslip for Hayward fault earthquakes

    USGS Publications Warehouse

    Aagaard, Brad T.; Lienkaemper, James J.; Schwartz, David P.

    2012-01-01

    We examine the partition of long‐term geologic slip on the Hayward fault into interseismic creep, coseismic slip, and afterslip. Using Monte Carlo simulations, we compute expected coseismic slip and afterslip at three alinement array sites for Hayward fault earthquakes with nominal moment magnitudes ranging from about 6.5 to 7.1. We consider how interseismic creep might affect the coseismic slip distribution as well as the variability in locations of large and small slip patches and the magnitude of an earthquake for a given rupture area. We calibrate the estimates to be consistent with the ratio of interseismic creep rate at the alinement array sites to the geologic slip rate for the Hayward fault. We find that the coseismic slip at the surface is expected to comprise only a small fraction of the long‐term geologic slip. The median values of coseismic slip are less than 0.2 m in nearly all cases as a result of the influence of interseismic creep and afterslip. However, afterslip makes a substantial contribution to the long‐term geologic slip and may be responsible for up to 0.5–1.5 m (median plus one standard deviation [S.D.]) of additional slip following an earthquake rupture. Thus, utility and transportation infrastructure could be severely impacted by afterslip in the hours and days following a large earthquake on the Hayward fault that generated little coseismic slip. Inherent spatial variability in earthquake slip combined with the uncertainty in how interseismic creep affects coseismic slip results in large uncertainties in these slip estimates.

  13. Estimation of postfire nutrient loss in the Florida everglades.

    PubMed

    Qian, Y; Miao, S L; Gu, B; Li, Y C

    2009-01-01

    Postfire nutrient release into the ecosystem via plant ash is critical to the understanding of fire impacts on the environment. Factors determining a postfire nutrient budget are the prefire nutrient content in the combustible biomass, the burn temperature, and the amount of combustible biomass. Our objective was to quantitatively describe the relationships between nutrient losses (or concentrations in ash) and burning temperature in laboratory-controlled combustion and to further predict nutrient losses in field fires by applying predictive models established from the laboratory data. The percentage losses of total nitrogen (TN), total carbon (TC), and material mass showed a significant linear correlation with a slope close to 1, indicating that TN or TC loss occurred predominantly through volatilization during combustion. Data obtained in laboratory experiments suggest that the losses of TN and TC, as well as the ratio of ash total phosphorus (TP) concentration to leaf TP concentration, have strong relationships with burning temperature, and these relationships can be quantitatively described by nonlinear equations. The potential use of these nonlinear models relating nutrient loss (or concentration) to temperature for predicting nutrient concentrations in field ash appears promising. During a prescribed fire in the northern Everglades, 73.1% of TP was estimated to be retained in ash while 26.9% was lost to the atmosphere, agreeing well with the distribution of TP during previously reported wildfires. The use of predictive models would greatly reduce the cost associated with measuring field ash nutrient concentrations.

  14. Estimating Phosphorus Loss in Runoff from Manure and Fertilizer for a Phosphorus Loss Quantification Tool

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-point source pollution of fresh waters by phosphorus (P) is a concern because it contributes to accelerated eutrophication. Qualitative P Indexes that estimate the risk of field-scale P loss have been developed in the USA and Europe. However, given the state of the science concerning agricultura...

  15. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors. Journal compilation © 2006 RAS.
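
    The Bakun & Wentworth technique referenced above is a grid search over trial epicenters; a simplified sketch follows, with the intensity attenuation model left as a user-supplied callable since its regional coefficients are the subject of the paper.

    import numpy as np

    def intensity_magnitude(obs, trial_epicenters, attenuation):
        """Simplified Bakun & Wentworth-style grid search.
        obs: list of ((x, y), intensity) site assignments (km coordinates);
        attenuation(m, dist_km): predicted intensity, e.g. of the form
        a + b*m - c*log10(dist) - e*dist with regional coefficients.
        Returns (epicenter, best MI, rms misfit) for each trial point; the
        misfit surface maps out the location confidence region."""
        mags = np.arange(4.0, 8.0, 0.05)
        results = []
        for ex, ey in trial_epicenters:
            dists = [np.hypot(ex - sx, ey - sy) for (sx, sy), _ in obs]
            rms = [np.sqrt(np.mean([(i - attenuation(m, d)) ** 2
                                    for (_, i), d in zip(obs, dists)]))
                   for m in mags]
            k = int(np.argmin(rms))
            results.append(((ex, ey), mags[k], rms[k]))
        return results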

  16. Dose estimates in a loss of lead shielding truck accident.

    SciTech Connect

    Dennis, Matthew L.; Osborn, Douglas M.; Weiner, Ruth F.; Heames, Terence John

    2009-08-01

    The radiological transportation risk and consequence program RADTRAN has recently added an updated loss of lead shielding (LOS) model to its most recent version, RADTRAN 6.0. The LOS model was used to determine dose estimates to first responders during a spent nuclear fuel transportation accident. Results varied according to the following: type of accident scenario, percent of lead slump, distance to the shipment, and time spent in the area. This document presents a method of creating dose estimates for first responders using RADTRAN with potential accident scenarios. This may be of particular interest in the event of high-speed accidents or fires involving cask punctures.

  17. Probabilistic seismic loss estimation via endurance time method

    NASA Astrophysics Data System (ADS)

    Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.

    2017-01-01

    Probabilistic Seismic Loss Estimation is a methodology that expresses the performance of buildings quantitatively and explicitly, in terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses and in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness was evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA-driven response predictions of 34 code-conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 methodology, and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of the damage and loss prediction functions provided by ATC 58.

  18. Sufficient dimension reduction via squared-loss mutual information estimation.

    PubMed

    Suzuki, Taiji; Sugiyama, Masashi

    2013-03-01

    The goal of sufficient dimension reduction in supervised learning is to find the low-dimensional subspace of input features that contains all of the information about the output values that the input features possess. In this letter, we propose a novel sufficient dimension-reduction method using a squared-loss variant of mutual information as a dependency measure. We apply a density-ratio estimator for approximating squared-loss mutual information that is formulated as a minimum contrast estimator on parametric or nonparametric models. Since cross-validation is available for choosing an appropriate model, our method does not require any prespecified structure on the underlying distributions. We elucidate the asymptotic bias of our estimator on parametric models and the asymptotic convergence rate on nonparametric models. The convergence analysis utilizes the uniform tail-bound of a U-process, and the convergence rate is characterized by the bracketing entropy of the model. We then develop a natural gradient algorithm on the Grassmann manifold for sufficient subspace search. The analytic formula of our estimator allows us to compute the gradient efficiently. Numerical experiments show that the proposed method compares favorably with existing dimension-reduction approaches on artificial and benchmark data sets.

  19. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and the response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lend themselves to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake-resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should

  20. Estimating Earthquake Source Parameters from P-wave Spectra: Lessons from Theory and Observations

    NASA Astrophysics Data System (ADS)

    Shearer, P. M.; Denolle, M.; Kaneko, Y.

    2015-12-01

    Observations make clear that some earthquakes radiate relatively more high-frequency energy than others of the same moment. But translating these differences into traditional source parameter measures, such as stress drop and radiated energy, can be problematic. Some of the issues include: (1) Because of directivity and other rupture propagation details, theoretical results show that recorded spectra will vary in shape among stations. Observational studies often neglect this effect or assume it will average out when multiple stations are used, but this averaging is rarely perfect, particularly considering the narrow range of takeoff angles used in teleseismic studies. (2) Depth phases for shallow events create interference in the spectra that can severely bias spectral estimates, unless depth phases are taken into account. (3) Corner frequency is not a well-defined parameter and different methods for its computation will yield different results. In addition, stress drop estimates inferred from corner frequencies rely on specific theoretical rupture models, and different assumed crack geometries and rupture velocities will yield different stress drop values. (4) Attenuation corrections may be inaccurate or not fully reflect local 3D near-source attenuation structure. The use of empirical Green's function (EGF) events can help, but these often have signal-to-noise issues or are not very close to the target earthquake. (5) Energy estimates typically rely on some degree of extrapolation of spectra beyond their observational band, introducing model assumptions into what is intended to be a direct measure of an earthquake property. (6) P-wave spectra are analyzed much more than S-wave spectra because of their greater frequency content, but they carry only a small fraction of the total radiated seismic energy, and thus total energy estimates may rely on poorly known Es/Ep scaling relations. We will discuss strategies to address these problems and to compute improved source

  1. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce an M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault, conducted by Bay Area County Offices of Emergency Services. Sixty other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos, designed for school classrooms, promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  2. Update earthquake risk assessment in Cairo, Egypt

    NASA Astrophysics Data System (ADS)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan

    2016-12-01

    The Cairo earthquake (12 October 1992; mb = 5.8) remains, 25 years on, one of the most painful events in Egyptian memory. This is not due to the strength of the earthquake but to the accompanying losses and damage (561 dead; 10,000 injured; and 3,000 families left homeless). Nowadays, the most pressing question is "what if this earthquake were repeated today?" In this study, we simulate the ground motion shaking of an earthquake of the same size as the 12 October 1992 event and the consequent socio-economic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Overall, the risk assessment clearly indicates that losses and damage in Cairo could be two to three times those of the 1992 earthquake. The earthquake risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr sharq) are at high seismic risk, and three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at low seismic risk. Moreover, the building damage estimates indicate that Gharb is the most vulnerable district. The analysis shows that the Cairo urban area faces high risk. Deteriorating buildings and infrastructure make the city particularly vulnerable to earthquakes. For instance, more than 90% of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb). Moreover, about 75% of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management. Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety

  3. Microzonation of Seismic Hazards and Estimation of Human Fatality for Scenario Earthquakes in Chianan Area, Taiwan

    NASA Astrophysics Data System (ADS)

    Liu, K. S.; Chiang, C. L.; Ho, T. T.; Tsai, Y. B.

    2015-12-01

    In this study, we assess seismic hazards in the 57 administrative districts of the Chianan area, Taiwan, in the form of ShakeMaps, and estimate potential human fatalities from scenario earthquakes on the three Type I active faults in this area. Two regions show high MMI intensity, greater than IX, in the map of maximum ground motion. One is in the Chiayi area around Minsyong, Dalin and Meishan, due to the presence of the Meishan fault and large site amplification factors, which reach as high as 2.38 and 2.09 for PGA and PGV, respectively, in Minsyong. The other is in the Tainan area around Jiali, Madou, Siaying, Syuejia, Jiangjyun and Yanshuei, due to a disastrous earthquake of magnitude Mw 6.83 that occurred near the border between Jiali and Madou in 1862, and large site amplification factors, which reach as high as 2.89 and 2.97 for PGA and PGV, respectively, in Madou. In addition, the probabilities over 10, 30, and 50-year periods of seismic intensity exceeding MMI VIII in the above areas are greater than 45%, 80% and 95%, respectively. Moreover, from the distribution of probabilities, values greater than 95% over a 10-year period for seismic intensity corresponding to CWBI V and MMI VI are found in central and northern Chiayi and northern Tainan. Finally, from the estimation of human fatalities for scenario earthquakes on the three active faults in the Chianan area, the numbers of fatalities increase rapidly for people above age 45. Compared to the 1946 Hsinhua earthquake, the number of fatalities estimated for the scenario earthquake on the Hsinhua active fault is significantly higher. However, the higher number of fatalities in this case is reasonable considering the probable causes. Hence, we urge local and central governments to pay special attention to seismic hazard mitigation in this highly urbanized area with a large number of old buildings.

  4. Earthquake shaking hazard estimates and exposure changes in the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Petersen, Mark D.; Rukstales, Kenneth S.; Leith, William S.

    2015-01-01

    A large portion of the population of the United States lives in areas vulnerable to earthquake hazards. This investigation aims to quantify population and infrastructure exposure within the conterminous U.S. that are subjected to varying levels of earthquake ground motions by systematically analyzing the last four cycles of the U.S. Geological Survey's (USGS) National Seismic Hazard Models (published in 1996, 2002, 2008 and 2014). Using the 2013 LandScan data, we estimate the numbers of people who are exposed to potentially damaging ground motions (peak ground accelerations at or above 0.1g). At least 28 million (~9% of the total population) may experience 0.1g level of shaking at relatively frequent intervals (annual rate of 1 in 72 years or 50% probability of exceedance (PE) in 50 years), 57 million (~18% of the total population) may experience this level of shaking at moderately frequent intervals (annual rate of 1 in 475 years or 10% PE in 50 years), and 143 million (~46% of the total population) may experience such shaking at relatively infrequent intervals (annual rate of 1 in 2,475 years or 2% PE in 50 years). We also show that there is a significant number of critical infrastructure facilities located in high earthquake-hazard areas (Modified Mercalli Intensity ≥ VII with moderately frequent recurrence interval).
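
    The three hazard levels quoted above are Poisson conversions between a window exceedance probability and a mean return period; a quick numerical check:

    import numpy as np

    def return_period(prob_exceed, window_years):
        """Poisson relation P = 1 - exp(-T / tau), solved for the mean
        return period tau given the exceedance probability P in T years."""
        return window_years / -np.log1p(-prob_exceed)

    for p in (0.50, 0.10, 0.02):
        print(f"{p:.0%} PE in 50 yr -> ~{return_period(p, 50):.0f}-yr return period")
    # -> ~72, ~475, and ~2475 years, the three hazard levels quoted above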

  5. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

    The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values were assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw 7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results that Lg attenuation in the Basin and Range province is comparable to that in California.

  6. Earthquake source scaling and self-similarity estimation from stacking P and S spectra

    NASA Astrophysics Data System (ADS)

    Prieto, GermáN. A.; Shearer, Peter M.; Vernon, Frank L.; Kilb, Debi

    2004-08-01

    We study the scaling relationships of source parameters and the self-similarity of earthquake spectra by analyzing a cluster of over 400 small earthquakes (ML = 0.5 to 3.4) recorded by the Anza seismic network in southern California. We compute P, S, and pre-event noise spectra from each seismogram using a multitaper technique and approximate source and receiver terms by iteratively stacking the spectra. To estimate scaling relationships, we average the spectra in size bins based on their relative moment. We correct for attenuation by using the smallest moment bin as an empirical Green's function (EGF) for the stacked spectra in the larger moment bins. The shapes of the log spectra agree within their estimated uncertainties after shifting along the ω^-3 line expected for self-similarity of the source spectra. We also estimate corner frequencies and radiated energy from the relative source spectra using a simple source model. The ratio between radiated seismic energy and seismic moment (proportional to apparent stress) is nearly constant with increasing moment over the magnitude range of our EGF-corrected data (ML = 1.8 to 3.4). Corner frequencies vary inversely as the cube root of moment, as expected from the observed self-similarity in the spectra. The ratio between P and S corner frequencies is observed to be 1.6 ± 0.2. We obtain values for absolute moment and energy by calibrating our results to local magnitudes for these earthquakes. This yields an S to P energy ratio of 9 ± 1.5 and a value of apparent stress of about 1 MPa.
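
    The inverse-cube-root scaling of corner frequency with moment quoted above can be made concrete. In the sketch below, the prefactor k bundles stress drop, rupture velocity, and the source model; its value is a placeholder, not a result from the paper.

    import numpy as np

    def corner_frequency(m0, k=3.4e5):
        """Self-similar (constant stress drop) scaling: fc ~ M0^(-1/3).
        k is a placeholder prefactor chosen to give plausible values."""
        return k * m0 ** (-1.0 / 3.0)

    for mw in (1.8, 2.6, 3.4):
        m0 = 10 ** (1.5 * mw + 9.1)  # Hanks & Kanamori moment (N m)
        print(f"Mw {mw}: fc ~ {corner_frequency(m0):.1f} Hz")
    # One magnitude unit down in Mw raises fc by a factor of 10^0.5 ~ 3.2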

  7. Testing Earthquake Source Inversion Methodologies

    NASA Astrophysics Data System (ADS)

    Page, Morgan; Mai, P. Martin; Schorlemmer, Danijel

    2011-03-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  8. Model parameter estimation bias induced by earthquake magnitude cut-off

    NASA Astrophysics Data System (ADS)

    Harte, D. S.

    2016-02-01

    We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models: first, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude; and second, the assumption in the Gutenberg-Richter relationship that the number of events increases exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.
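
    For readers unfamiliar with the model, a minimal temporal ETAS conditional intensity is sketched below; the parameter values in the example are illustrative, not estimates from this study, and m0 is the triggering boundary magnitude discussed above.

    import numpy as np

    def etas_intensity(t, history, mu, K, alpha, c, p, m0):
        """Temporal ETAS conditional intensity:
        lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)^p,
        summed over past events (t_i, m_i) with t_i < t."""
        lam = mu
        for ti, mi in history:
            if ti < t:
                lam += K * np.exp(alpha * (mi - m0)) / (t - ti + c) ** p
        return lam

    # e.g. the rate one day after an m 5 event on a quiet background
    print(etas_intensity(1.0, [(0.0, 5.0)],
                         mu=0.02, K=0.01, alpha=1.8, c=0.01, p=1.1, m0=3.0))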

  9. Estimation of Future Changes in Flood Disaster Losses

    NASA Astrophysics Data System (ADS)

    Konoshima, L.; Hirabayashi, Y.; Roobavannan, M.

    2012-12-01

    Disaster losses can be estimated from hazard intensity, exposure, and vulnerability. Many studies have addressed future economic losses from river floods, most of them focused on Europe (Bouwer et al., 2010). Here, flood disaster losses are calculated using the output of multi-model ensembles of CMIP5 GCMs in order to estimate the changes in damage loss due to climate change. For the global distribution of expected future population and GDP, the ALPS scenario of RITE is used. A flood event is defined as river discharge with a 100-year return period. The time series of annual maximum daily discharge was fitted to an extreme-value distribution at each grid cell, with the L-moments method (Hosking and Wallis, 1997) used to estimate the distribution parameters. Both the Gumbel and Generalized Extreme Value (GEV) distributions were tested to examine future changes in the 100-year value. Using the 100-year flood calculated for the present condition and the annual maximum discharge for present and future climate conditions, the area exceeding the 100-year flood is calculated for each 30-year period. To estimate the economic impact of future changes in the occurrence of the 100-year flood, affected total GDP is calculated by multiplying the affected population by the country's GDP in areas exceeding the present-climate 100-year flood value, for both present and future conditions. The 100-year flood threshold is fixed at its present-condition value when calculating affected values under the future condition. To consider the effects of climatic conditions and differing economic growth, the regions are classified by continent. Southeast Asia is divided into Japan and South Korea (No.1) and other countries (No.2), since the GDP and GDP growth rates of the two areas are quite different compared to other regions. Figure 1 shows the average and standard deviation (1-sigma) of the future change ratio
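
    A minimal sketch of fitting a GEV distribution to annual maxima and reading off the 100-year return level follows. Note that scipy fits by maximum likelihood, whereas the study used L-moments (Hosking and Wallis, 1997), which is more robust for short 30-year samples; the data below are synthetic.

    import numpy as np
    from scipy.stats import genextreme

    # Synthetic annual maximum daily discharge at one grid cell
    rng = np.random.default_rng(0)
    annual_max = genextreme.rvs(c=-0.1, loc=1000.0, scale=250.0,
                                size=30, random_state=rng)

    # Fit the GEV parameters (shape, location, scale)
    shape, loc, scale = genextreme.fit(annual_max)

    # 100-year return level = quantile with annual exceedance prob. 1/100
    q100 = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
    print(f"100-year discharge: {q100:.0f} m^3/s")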

  10. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and expand greatly upon existing global databases, and to better understand the trends in vulnerability, exposure, and possible future impacts of historic earthquakes. In the authors' view, the lack of consistency and the errors in other frequently cited earthquake loss databases were major shortcomings that needed to be improved upon. Over 17,000 sources of information have been utilised, primarily in the last few years, to present data from over 12,200 damaging earthquakes historically, with over 7,000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. Comparison of the 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars) with the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes shows the increasing concern for economic loss in urban areas, a trend that should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  11. Twitter as Information Source for Rapid Damage Estimation after Major Earthquakes

    NASA Astrophysics Data System (ADS)

    Eggert, Silke; Fohringer, Joachim

    2014-05-01

    Natural disasters like earthquakes require a fast response from local authorities. Well-trained rescue teams have to be available, equipment and technology have to be set up and ready, and information has to be directed to the right positions so the headquarters can manage the operation precisely. The main goal is to reach the most affected areas in a minimum of time. But even with the best preparation for these cases, there will always be uncertainty about what really happened in the affected area. Modern geophysical sensor networks provide high quality data. These measurements, however, only map disjoint values at their respective locations for a limited number of parameters. Using observations of witnesses represents one approach to enhance measured values from sensors ("humans as sensors"). These observations are increasingly disseminated via social media platforms. These "social sensors" offer several advantages over common sensors, e.g. high mobility, high versatility of captured parameters as well as rapid distribution of information. Moreover, the amount of data offered by social media platforms is quite extensive. We analyze messages distributed via Twitter after major earthquakes to get rapid information on what eye-witnesses report from the epicentral area. We use this information to (a) quickly learn about damage and losses to support fast disaster response and to (b) densify geophysical networks in areas where there is sparse information to gain a more detailed insight on felt intensities. We present a case study from the Mw 7.1 Philippines (Bohol) earthquake that happened on Oct. 15, 2013. We extract Twitter messages, so-called tweets, containing one or more specified keywords from the semantic field of "earthquake" and use them for further analysis. For the time frame of Oct. 15 to Oct. 18 we get a database of 50,000 tweets in total, of which 2,900 are geo-localized and 470 have a photo attached. Analyses for both national level and locally for

  12. τp^max magnitude estimation: the case of the April 6, 2009 L'Aquila earthquake

    NASA Astrophysics Data System (ADS)

    Olivieri, Marco

    2013-04-01

    Rapid magnitude estimation procedures are a crucial part of proposed earthquake early warning systems. Most of these estimates focus on the first part of the P-wave train, the earlier and less destructive part of the ground motion that follows an earthquake. Allen and Kanamori (Science 300:786-789, 2003) proposed using the predominant period of the P wave to determine the magnitude of a large earthquake at local distances, and Olivieri et al. (Bull Seismol Soc Am 185:74-81, 2008) calibrated a specific relation for the Italian region. The Mw 6.3 earthquake that hit Central Italy on April 6, 2009 and its largest aftershocks provide a useful dataset to validate the proposed relation and to discuss the risks of extrapolating magnitude relations calibrated with few waveforms of large earthquakes. A large discrepancy between the local magnitude (ML) estimated by means of tau_p^{max} and the standard ML (6.8 ± 1.5 vs. 5.9 ± 0.4) suggests caution when ML vs. tau_p^{max} calibrations do not include a substantial dataset of large earthquakes. Effects of large residuals could be mitigated or removed by introducing selection rules on the tau_p function, by regionalizing the ML vs. tau_p^{max} relation in the presence of significant tectonic or geological heterogeneity, and by using probabilistic and evolutionary methods.
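
    The predominant-period computation referred to above is commonly implemented with the recursive scheme of Allen and Kanamori (2003). Below is a minimal Python sketch; the smoothing constant alpha and the 4 s window are assumptions for illustration, not values from the paper:

      import numpy as np

      def tau_p_max(velocity, dt, alpha=0.99, window_s=4.0):
          """Predominant period over the first window_s seconds of P wave,
          following the recursive scheme of Allen & Kanamori (2003):
          X_i = a*X_{i-1} + v_i^2, D_i = a*D_{i-1} + (dv/dt)_i^2,
          tau_p = 2*pi*sqrt(X/D); tau_p^max is its running maximum."""
          n = min(len(velocity), int(window_s / dt))
          deriv = np.gradient(velocity[:n], dt)
          X = D = 0.0
          tau = 0.0
          for v, vdot in zip(velocity[:n], deriv):
              X = alpha * X + v * v
              D = alpha * D + vdot * vdot
              if D > 0.0:
                  tau = max(tau, 2.0 * np.pi * np.sqrt(X / D))
          return tau

      # Sanity check: a sinusoid of period 1 s should give tau_p^max ~ 1 s.
      dt = 0.01
      t = np.arange(0.0, 4.0, dt)
      print(tau_p_max(np.sin(2.0 * np.pi * t), dt))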

  13. Estimating the 2008 Quetame (Colombia) earthquake source parameters from seismic data and InSAR measurements

    NASA Astrophysics Data System (ADS)

    Dicelis, Gabriel; Assumpção, Marcelo; Kellogg, James; Pedraza, Patricia; Dias, Fábio

    2016-12-01

    Seismic waveforms and geodetic measurements (InSAR) were used to determine the location, focal mechanism and coseismic surface displacements of the Mw 5.9 earthquake which struck the center of Colombia on May 24, 2008. We determined the focal mechanism of the main event using teleseismic P wave arrivals and regional waveform inversion for the moment tensor. We relocated the best set of aftershocks (30 events) with magnitudes larger than 2.0 recorded from May to June 2008 by a temporary local network as well as by stations of the Colombian national network. We successfully estimated coseismic deformation using SAR interferometry, despite distortion in some areas of the interferogram by atmospheric noise. The deformation was compared to synthetic data for rectangular dislocations in an elastic half-space. Nine source parameters (strike, dip, length, width, strike-slip deformation, dip-slip deformation, latitude shift, longitude shift, and minimum depth) were inverted to fit the observed changes in line-of-sight (LOS) toward the satellite; four derived parameters were also estimated (rake, average slip, maximum depth and seismic moment). The aftershock relocation, the focal mechanism and the coseismic dislocation model agree with a right-lateral strike-slip fault with nodal planes oriented NE-SW and NW-SE. We use the results of the waveform inversion, radar interferometry and aftershock relocations to identify the high-angle NE-SW nodal plane as the primary fault. The inferred subsurface rupture length is roughly 11 km, which is consistent with the 12 km long distribution of aftershocks. This coseismic model can provide insights into earthquake mechanisms and seismic hazard assessments for the area, including the 8 million residents of Colombia's nearby capital city, Bogota. The 2008 Quetame earthquake appears to be associated with the northeastward "escape" of the North Andean block, and it may help to illuminate how margin-parallel shear slip is partitioned in the
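
    The quantity being fit in such inversions is the change in line-of-sight (LOS) range. As a small illustration of the forward step, the sketch below projects an east-north-up surface displacement onto the radar LOS; the geometry values and the sign convention are assumptions that vary between processors, not parameters from the paper:

      import numpy as np

      def los_displacement(de, dn, du, incidence_deg, heading_deg):
          """Project an (east, north, up) displacement (m) onto the radar
          line of sight; positive = motion toward the satellite. Assumes a
          right-looking SAR with track azimuth heading_deg clockwise from
          north; conventions differ between processors, so results should
          be checked against a known case."""
          theta, alpha = np.radians(incidence_deg), np.radians(heading_deg)
          p = np.array([-np.sin(theta) * np.cos(alpha),   # east component
                         np.sin(theta) * np.sin(alpha),   # north component
                         np.cos(theta)])                  # up component
          return p @ np.array([de, dn, du])

      # 10 cm of uplift, no horizontal motion, illustrative ascending geometry:
      print(los_displacement(0.0, 0.0, 0.10, incidence_deg=38.7, heading_deg=-10.0))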

  14. Estimation of Coda Wave Attenuation for the National Capital Region, Delhi, India Using Local Earthquakes

    NASA Astrophysics Data System (ADS)

    Mohanty, William K.; Prakash, Rajesh; Suresh, G.; Shukla, A. K.; Yanger Walling, M.; Srivastava, J. P.

    2009-03-01

    Attenuation of seismic waves is essential for the study of earthquake source parameters and for ground-motion simulations, and is therefore important for the seismic hazard estimation of a region. Digital data acquired by 16 short-period seismic stations of the Delhi Telemetric Network for 55 earthquakes of magnitude 1.5 to 4.2, which occurred within an epicentral distance of 100 km around Delhi, have been used to estimate the coda attenuation Qc. Using the single backscattering model, the seismograms have been analyzed at 10 central frequencies. The frequency-dependent average attenuation relationship Qc = 142 f^1.04 has been obtained. Four lapse-time windows, from 20 to 50 seconds duration at 10 second intervals, have been analyzed to study the lapse-time dependence of Qc. The Qc values show that the frequency dependence (exponent n) remains similar for all lapse-time window lengths, whereas the change in Q0 is significant; the change in Q0 with larger lapse time reflects the degree of homogeneity at depth. The variation of Qc indicates a definite trend from west to east, in accordance with the geology of the region.
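
    With the single backscattering model, Qc follows from a straight-line fit to the logarithm of the lapse-time-corrected coda envelope. A minimal Python sketch on a synthetic envelope (the fit window and synthetic parameters are illustrative only):

      import numpy as np

      def coda_q(envelope, lapse_t, freq):
          """Single backscattering model (Aki & Chouet, 1975): the coda
          envelope obeys ln(A(t) * t) = const - (pi*f/Qc) * t, so Qc follows
          from the slope of a straight-line fit over the coda window."""
          slope, _ = np.polyfit(lapse_t, np.log(envelope * lapse_t), 1)
          return -np.pi * freq / slope

      # Synthetic coda at 1 Hz with Qc = 142, lapse times 20-50 s:
      t = np.linspace(20.0, 50.0, 300)
      A = t ** -1.0 * np.exp(-np.pi * 1.0 * t / 142.0)
      print(coda_q(A, t, 1.0))   # ~142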

  15. Quasi-static Slips Around the Source Areas of the 2003 Tokachi-oki (M8.0) and 2005 Miyagi-oki (M7.2) Earthquakes, Japan Estimated From Small Repeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Uchida, N.; Matsuzawa, T.; Hirahara, S.; Igarashi, T.; Hasegawa, A.; Kasahara, M.

    2005-12-01

    We have estimated the spatio-temporal distribution of interplate quasi-static slip around the source areas of the 2003 Tokachi-oki (M8.0) and 2005 Miyagi-oki (M7.2) earthquakes by using small repeating earthquakes. Small repeating earthquakes are thought to be caused by the repeated rupture of small asperities surrounded by stably sliding areas on the fault. Here we estimated cumulative slips for small repeating earthquake sequences, assuming that they equal the quasi-static slip histories of the surrounding areas on the plate boundary (Igarashi et al., 2003; Uchida et al., 2003). The 2003 Tokachi-oki earthquake occurred on September 26, 2003 off the southeast of Hokkaido, Japan. The present analyses show that slip in the areas around and to the east of the asperity of this earthquake was slow before the event but accelerated significantly after it. The slip-rate acceleration to the east of the asperity probably triggered the M7.1 event that occurred on November 29, 2004 at the eastern edge of the accelerated area (about 100 km east of the hypocenter of the Tokachi-oki earthquake). It seems that the quasi-static slip released the slip deficit in the locked area between the asperities of the 2003 Tokachi-oki and 1973 Nemuro-oki (M7.4) earthquakes. The 2005 Miyagi-oki earthquake occurred on August 16, 2005 in the anticipated source area of the recurrent "Miyagi-oki earthquake". However, it is estimated that this earthquake did not rupture the whole asperity that caused the previous Miyagi-oki earthquake in 1978 (The Headquarters for Earthquake Research Promotion, 2005). Our results show that the quasi-static slip during the 20 years before the earthquake was almost constant to the west of the source area of the 2005 event. Slip after the earthquake was not significant for a period of 15 days, which suggests the plate boundary around the asperity of this earthquake is still locked.
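
    Cumulative-slip estimation from repeating earthquakes typically converts each event's seismic moment to slip with an empirical scaling law. The sketch below uses the Nadeau and Johnson (1998) relation as one such calibration; whether this study used that particular relation is not stated in the abstract, and the event moments are invented for illustration:

      import numpy as np

      def slip_from_moment(m0_nm):
          """Slip (m) on a repeating-earthquake patch from seismic moment,
          using the Nadeau & Johnson (1998) empirical scaling
          log10 d[cm] = -2.36 + 0.17 * log10 M0[dyne cm]."""
          m0_dyne_cm = m0_nm * 1.0e7            # 1 N m = 1e7 dyne cm
          return 10.0 ** (-2.36 + 0.17 * np.log10(m0_dyne_cm)) / 100.0

      # Cumulative quasi-static slip of one sequence, in time order
      # (the moments, ~Mw 3.6 events, are invented for illustration):
      moments = [3.5e14, 4.1e14, 3.8e14]        # N m
      print(np.cumsum([slip_from_moment(m) for m in moments]))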

  16. A plate boundary earthquake record from a wetland adjacent to the Alpine fault in New Zealand refines hazard estimates

    NASA Astrophysics Data System (ADS)

    Cochran, U. A.; Clark, K. J.; Howarth, J. D.; Biasi, G. P.; Langridge, R. M.; Villamor, P.; Berryman, K. R.; Vandergoes, M. J.

    2017-04-01

    Discovery and investigation of millennial-scale geological records of past large earthquakes improve understanding of earthquake frequency, recurrence behaviour, and likelihood of future rupture of major active faults. Here we present a ∼2000 year-long, seven-event earthquake record from John O'Groats wetland adjacent to the Alpine fault in New Zealand, one of the most active strike-slip faults in the world. We linked this record with the 7000 year-long, 22-event earthquake record from Hokuri Creek (20 km along strike to the north) to refine estimates of earthquake frequency and recurrence behaviour for the South Westland section of the plate boundary fault. Eight cores from John O'Groats wetland revealed a sequence that alternated between organic-dominated and clastic-dominated sediment packages. Transitions from a thick organic unit to a thick clastic unit that were sharp, involved a significant change in depositional environment, and were basin-wide, were interpreted as evidence of past surface-rupturing earthquakes. Radiocarbon dates of short-lived organic fractions either side of these transitions were modelled to provide estimates for earthquake ages. Of the seven events recognised at the John O'Groats site, three post-date the most recent event at Hokuri Creek, two match events at Hokuri Creek, and two events at John O'Groats occurred in a long interval during which the Hokuri Creek site may not have been recording earthquakes clearly. The preferred John O'Groats-Hokuri Creek earthquake record consists of 27 events since ∼6000 BC for which we calculate a mean recurrence interval of 291 ± 23 years, shorter than previously estimated for the South Westland section of the fault and shorter than the current interseismic period. The revised 50-year conditional probability of a surface-rupturing earthquake on this fault section is 29%. The coefficient of variation is estimated at 0.41. We suggest the low recurrence variability is likely to be a feature of
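
    A conditional rupture probability of the kind quoted above follows from a renewal model. The sketch below uses a lognormal recurrence distribution with the abstract's mean interval and coefficient of variation; the paper's preferred distribution (e.g. BPT) and the elapsed time assumed here (~300 yr since the most recent Alpine fault rupture) may differ, so the result only approximates the quoted 29%:

      import numpy as np
      from scipy import stats

      def conditional_prob(mean_ri, cov, elapsed, horizon=50.0):
          """P(rupture within `horizon` yr | quiet for `elapsed` yr) for a
          lognormal renewal model with the given mean recurrence interval
          and coefficient of variation."""
          sigma2 = np.log(1.0 + cov ** 2)
          dist = stats.lognorm(s=np.sqrt(sigma2),
                               scale=mean_ri * np.exp(-0.5 * sigma2))
          F = dist.cdf
          return (F(elapsed + horizon) - F(elapsed)) / (1.0 - F(elapsed))

      # Mean RI and CoV from the abstract; ~300 yr elapsed time is assumed.
      print(conditional_prob(291.0, 0.41, elapsed=300.0))   # roughly 0.3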

  17. Uncertainty estimations for moment tensor inversions: the issue of the 2012 May 20 Emilia earthquake

    NASA Astrophysics Data System (ADS)

    Scognamiglio, Laura; Magnoni, Federica; Tinti, Elisa; Casarotti, Emanuele

    2016-08-01

    Seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Geoscientists routinely use moment tensor catalogues; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia main shock is a representative event, since the literature assigns it moment magnitude (Mw) values spanning 5.63 to 6.12. A variability of ~0.5 magnitude units leaves the real size of the event in question and reveals how poorly constrained the solutions can be. In this work, we investigate the stability of the moment tensor solution for this earthquake, studying the effects of five different 1-D velocity models and of the number and distribution of the stations used in the inversion procedure. We also introduce a 3-D velocity model to account for structural heterogeneity. We finally estimate the uncertainties associated with the computed focal planes and the obtained Mw. We conclude that our reliable source solutions provide a moment magnitude that ranges from 5.87 (1-D model) to 5.96 (3-D model), reducing the variability in the literature to ~0.1. We stress that estimating the seismic moment from moment tensor solutions, like estimating the other kinematic source parameters, requires disclosed assumptions and explicit processing workflows. Finally, and probably more importantly, when a moment tensor solution is used for secondary analyses it has to be combined with the same main boundary conditions (e.g. the wave-velocity propagation model) to avoid conflicting results.
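
    The practical weight of a 0.5-unit spread in Mw is easiest to see in moment units. A two-line check using the standard definition Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m:

      import numpy as np

      def m0_from_mw(mw):
          """Seismic moment (N m) from Mw = (2/3) * (log10 M0 - 9.1)."""
          return 10.0 ** (1.5 * mw + 9.1)

      # The Mw 5.63-6.12 spread quoted in the literature for the Emilia
      # main shock corresponds to a factor of ~5.4 in seismic moment:
      print(m0_from_mw(6.12) / m0_from_mw(5.63))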

  18. Estimation of Pd and τc parameters for earthquakes of SW Iberia (S. Vicente Cape)

    NASA Astrophysics Data System (ADS)

    Buforn, E.; Pro, C.; Carranza, M.; Zollo, A.; Pazos, A.; Lozano, L.; Carrilho, F.

    2012-04-01

    The S. Vicente Cape (SW Iberia) is a region where potentially large and damaging earthquakes may occur, such as the 1755 Lisbon (Imax = X) or 1969 S. Vicente Cape (Ms = 8.1) events. In order to study the feasibility of an earthquake early warning system (EEWS) for earthquakes in this region (ALERT-ES project), we have estimated the Pd and τc parameters for a rapid estimation of magnitude from the first seconds of the P wave. We selected earthquakes that occurred in the period 2006-2011 with magnitude larger than 3.8, recorded at regional distances (less than 500 km) by real-time broad-band seismic stations of the Instituto Geográfico Nacional, Western Mediterranean, and Portuguese National networks. We studied time windows from 2 to 4 s and applied different filters. Because the foci occur offshore with very poor azimuthal coverage, we corrected the Pd parameter by the radiation pattern obtained from focal mechanisms of the largest earthquakes of the region. We normalized the Pd value to a reference distance (100 km) and then derived empirical correlation laws relating Pd and τc to magnitude, in order to obtain a rapid estimate of the magnitude.
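
    The distance normalization and correlation-law fit described above can be sketched as follows; the observations, the attenuation exponent n, and the resulting coefficients are all hypothetical stand-ins, not the paper's values:

      import numpy as np

      # Hypothetical observations: peak initial-P displacement Pd (m) in a
      # 3 s window, hypocentral distance R (km), and catalog magnitude M.
      Pd = np.array([1.8e-6, 1.1e-5, 3.1e-5, 2.3e-4])
      R = np.array([310.0, 180.0, 220.0, 95.0])
      M = np.array([4.0, 4.6, 5.3, 5.8])

      # Normalize Pd to the 100 km reference distance with an assumed
      # attenuation exponent n (the fitted value may differ).
      n = 1.3
      logPd100 = np.log10(Pd) + n * np.log10(R / 100.0)

      # Empirical correlation law log10 Pd100 = a*M + b, then invert it
      # for a rapid magnitude estimate from a new observation.
      a, b = np.polyfit(M, logPd100, 1)
      print(f"log10 Pd100 = {a:.2f} M {b:+.2f}")
      pd_new, r_new = 5.0e-5, 150.0
      print("M estimate:", (np.log10(pd_new) + n * np.log10(r_new / 100.0) - b) / a)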

  19. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window.

    PubMed

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-10-27

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many seismically active regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method applied to the MW9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the subsequent 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude.
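
    The progressive-window logic can be sketched as a loop over stations whose P arrivals allow successively longer windows. Everything numeric below (the regression coefficients, the minimum 2 s window, the station data) is a placeholder; the paper's calibrated relations are not reproduced here:

      import numpy as np

      def magnitude_from_pd(pd, r_km, a=1.0, b=-9.0, c=1.3):
          """Invert a generic law log10 Pd = a*M + b - c*log10(R/100) for M
          (coefficients are placeholders, not calibrated values)."""
          return (np.log10(pd) - b + c * np.log10(r_km / 100.0)) / a

      def progressive_estimate(stations, t_since_origin, max_ptw=8.0):
          """stations maps name -> (p_arrival_s, r_km, pd_fn), where pd_fn(w)
          returns the peak displacement within the first w seconds of P wave."""
          estimates = []
          for tp, r, pd_fn in stations.values():
              window = min(t_since_origin - tp, max_ptw)
              if window >= 2.0:                # require at least 2 s of P wave
                  estimates.append(magnitude_from_pd(pd_fn(window), r))
          return np.mean(estimates) if estimates else None

      stations = {"ST1": (5.0, 40.0, lambda w: 1e-4 * min(w, 4.0)),
                  "ST2": (12.0, 90.0, lambda w: 6e-5 * min(w, 4.0))}
      print(progressive_estimate(stations, t_since_origin=20.0))   # ~M 5.2 here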

  1. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
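
    The primary-plus-backup case reduces, in its simplest form, to multiplying two conditional failure probabilities under the usual assumption that failures are independent once the shaking at each site is fixed. A minimal sketch with illustrative lognormal fragilities (all numbers are assumptions, not values from the paper):

      import numpy as np
      from scipy import stats

      def p_fail(im, median, beta):
          """Lognormal fragility: failure probability at shaking level im."""
          return stats.norm.cdf(np.log(im / median) / beta)

      # One scenario event delivering 0.40 g at the primary site and 0.25 g
      # at the backup; fragility parameters are illustrative only.
      p_both = p_fail(0.40, 0.55, 0.45) * p_fail(0.25, 0.55, 0.45)
      print(f"P(primary and backup both inoperative) = {p_both:.3f}")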

  2. The range split-spectrum method for ionosphere estimation applied to the 2008 Kyrgyzstan earthquake

    NASA Astrophysics Data System (ADS)

    Gomba, Giorgio; Eineder, Michael

    2015-04-01

    L-band remote sensing systems, like the future Tandem-L mission, are disrupted by the ionized upper part of the atmosphere, the ionosphere. The ionosphere is a region of the upper atmosphere composed of gases that are ionized by solar radiation. The size of the effects induced on a SAR measurement depends on the electron density integrated along the radio-wave path and on its spatial variations. The main effect of the ionosphere on microwaves is an additional delay, which introduces a phase difference between SAR measurements, modifying the interferometric phase. The objectives of the Tandem-L mission are the systematic monitoring of dynamic Earth processes like Earth surface deformation, vegetation structure, ice and glacier changes, and ocean surface currents. The scientific requirements regarding the mapping of surface deformation due to tectonic processes, earthquakes, volcanic cycles, and anthropogenic factors demand deformation measurements, namely one-, two- or three-dimensional displacement maps with resolutions of a few hundred meters and accuracies at the centimeter to millimeter level. Ionospheric effects can make it impossible to produce deformation maps with such accuracy and must therefore be estimated and compensated. As an example of this process, the implementation of the range split-spectrum method proposed in [1,2] is presented and applied to an example dataset. The 2008 Kyrgyzstan earthquake of October 5 is imaged by an ALOS PALSAR interferogram; apart from the earthquake, many fringes due to strong ionospheric variations can also be seen. The compensated interferogram shows how the ionosphere-related fringes were successfully estimated and removed. [1] Rosen, P.A.; Hensley, S.; Chen, C., "Measurement and mitigation of the ionosphere in L-band Interferometric SAR data," Radar Conference, 2010 IEEE, pp. 1459-1463, 10-14 May 2010. [2] Brcic, R.; Parizzi, A.; Eineder, M.; Bamler, R.; Meyer, F., "Estimation and
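
    The core of the range split-spectrum method is the separation of the interferometric phase into a non-dispersive part (proportional to frequency) and a dispersive ionospheric part (inversely proportional to frequency), using interferograms formed on two range sub-bands. A minimal sketch of that combination step, with ALOS-like frequencies chosen only for illustration:

      import numpy as np

      def split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0):
          """Dispersive (ionospheric) phase at the carrier f0 from two
          sub-band interferograms, assuming phi(f) = a*f + b/f with the
          b/f term ionospheric (cf. Rosen et al., 2010)."""
          b = f_low * f_high * (phi_low * f_high - phi_high * f_low) \
              / (f_high ** 2 - f_low ** 2)
          return b / f0

      # Illustrative ALOS PALSAR-like numbers: 1270 MHz carrier, +/-7 MHz sub-bands.
      f0, fl, fh = 1.270e9, 1.263e9, 1.277e9
      b_true = 2.0 * f0                    # pure ionospheric phase, 2 rad at f0
      print(split_spectrum_iono(b_true / fl, b_true / fh, fl, fh, f0))   # ~2.0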

  3. Nowcasting earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.; Donnellan, A.; Grant Ludwig, L.; Luginbuhl, M.; Gong, G.

    2016-11-01

    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system and its current level of progress through the earthquake cycle. In our implementation, we use the global catalog of earthquakes, counting "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 or more large-earthquake cycles in the region. We then compute the earthquake potential score (EPS), defined as the cumulative probability P(n < n(t)), where n(t) is the current count of small earthquakes in the region since the last large earthquake. EPS is therefore the current level of hazard and assigns a number between 0% and 100% to every region so defined, thus providing a unique measure. Physically, the EPS corresponds to an estimate of the level of progress through the earthquake cycle in the defined region at the current time.
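
    The EPS calculation itself is a small empirical-CDF evaluation. A minimal sketch with synthetic cycle counts (the magnitude thresholds and counts are illustrative, not taken from the paper):

      import numpy as np

      def eps(counts_per_cycle, current_count):
          """EPS = P(n < n(t)): empirical cumulative probability of the
          current small-earthquake count against past large-earthquake cycles."""
          return np.mean(np.asarray(counts_per_cycle) < current_count)

      # Synthetic counts of small events in 25 past large-earthquake cycles:
      rng = np.random.default_rng(0)
      past_counts = rng.poisson(120, size=25)
      print(f"EPS = {eps(past_counts, 135):.0%}")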

  4. Estimation of earthquake source parameters by the inversion of waveform data: synthetic waveforms

    USGS Publications Warehouse

    Sipkin, S.A.

    1982-01-01

    Two methods are presented for the recovery of a time-dependent moment-tensor source from waveform data. One procedure utilizes multichannel signal-enhancement theory; in the other a multichannel vector-deconvolution approach, developed by Oldenburg (1982) and based on Backus-Gilbert inverse theory, is used. These methods have the advantage of being extremely flexible; both may be used either routinely or as research tools for studying particular earthquakes in detail. Both methods are also robust with respect to small errors in the Green's functions and may be used to refine estimates of source depth by minimizing the misfits to the data. The multichannel vector-deconvolution approach, although it requires more interaction, also allows a trade-off between resolution and accuracy, and complete statistics for the solution are obtained. The procedures have been tested using a number of synthetic body-wave data sets, including point and complex sources, with satisfactory results. © 1982.

  5. Simultaneous estimation of earthquake source parameters and crustal Q value from broadband data of selected aftershocks of the 2001 Mw 7.7 Bhuj earthquake

    NASA Astrophysics Data System (ADS)

    Saha, A.; Lijesh, S.; Mandal, P.

    2012-12-01

    This paper presents the simultaneous estimation of source parameters and crustal Q values for small to moderate-size aftershocks (Mw 2.1-5.1) of the Mw 7.7 2001 Bhuj earthquake. The horizontal-component S waves of 144 well-located earthquakes (2001-2010), recorded at 3-10 broadband seismograph sites in the Kachchh Seismic Zone, Gujarat, India, are analyzed, and their seismic corner frequencies, long-period spectral levels and crustal Q values are simultaneously estimated by inverting the horizontal-component S-wave displacement spectra using the Levenberg-Marquardt nonlinear inversion technique, with the inversion scheme formulated on the ω-square source spectral model. The static stress drops (Δσ) are then calculated from the corner frequency and seismic moment. The estimated source parameters suggest that the seismic moment (M0) and source radius (r) of the aftershocks range from 1.12 × 10^12 to 4.00 × 10^16 N·m and from 132.57 to 513.20 m, respectively, while the estimated stress drops (Δσ) and multiplicative factor (Emo) values range from 0.01 to 20.0 MPa and 1.05 to 3.39, respectively. The corner frequencies range from 2.36 to 8.76 Hz. The crustal S-wave quality factor varies from 256 to 1882, with an average of 840 for the Kachchh region, which agrees well with the crustal Q value of the seismically active New Madrid region, USA. Our estimated stress-drop values are quite large compared to other similar-size Indian intraplate earthquakes, which can be attributed to the presence of crustal mafic intrusives and aqueous fluids in the lower crust, as revealed by an earlier tomographic study of the region.
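
    The inversion described above can be miniaturized as a nonlinear least-squares fit of the ω-square spectrum with whole-path attenuation. The sketch below fits a synthetic spectrum with scipy's curve_fit (Levenberg-Marquardt for unbounded problems) and converts the result to a Brune source radius and stress drop; the assumed S-wave speed, density, distance, radiation coefficient, and all synthetic values are illustrative assumptions, not the paper's:

      import numpy as np
      from scipy.optimize import curve_fit

      BETA, RHO = 3500.0, 2700.0    # assumed S speed (m/s), density (kg/m^3)
      DIST, TRAVEL = 90e3, 25.0     # assumed hypocentral distance (m), S travel time (s)

      def model(f, omega0, fc, q):
          """Omega-square S-wave displacement spectrum with path attenuation:
          A(f) = omega0 * exp(-pi*f*t/Q) / (1 + (f/fc)^2)."""
          return omega0 * np.exp(-np.pi * f * TRAVEL / q) / (1.0 + (f / fc) ** 2)

      # Synthetic "observed" spectrum so the script runs stand-alone:
      f = np.logspace(-0.5, 1.2, 60)
      rng = np.random.default_rng(1)
      obs = model(f, 1e-6, 4.0, 800.0) * (1.0 + 0.05 * rng.standard_normal(f.size))

      # curve_fit defaults to Levenberg-Marquardt for unbounded problems.
      (omega0, fc, q), _ = curve_fit(model, f, obs, p0=[1e-7, 2.0, 400.0])

      m0 = 4.0 * np.pi * RHO * BETA ** 3 * DIST * omega0 / 0.63   # avg radiation 0.63
      r = 2.34 * BETA / (2.0 * np.pi * fc)                        # Brune radius (m)
      dsigma = 7.0 / 16.0 * m0 / r ** 3                           # stress drop (Pa)
      print(f"fc={fc:.2f} Hz, Q={q:.0f}, r={r:.0f} m, stress drop={dsigma/1e6:.1f} MPa")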

  6. Estimation of co-seismic stress change of the 2008 Wenchuan Ms8.0 earthquake

    SciTech Connect

    Sun Dongsheng; Wang Hongcai; Ma Yinsheng; Zhou Chunjing

    2012-09-26

    In-situ stress change near a fault before and after a great earthquake is a key issue in the geosciences. In this work, based on a fault slip dislocation model of the 2008 great Wenchuan earthquake, the co-seismic stress tensor change due to the earthquake and its distribution around the Longmen Shan fault are given. Our calculated results are largely consistent with in-situ measurements made before and after the great Wenchuan earthquake. The quantitative assessment results provide a reference for the study of earthquake mechanisms.

  7. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  8. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    USGS Publications Warehouse

    Aagaard, B.T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake, making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  9. Mass Loss and Surface Displacement Estimates in Greenland from GRACE

    NASA Astrophysics Data System (ADS)

    Jensen, Tim; Forsberg, Rene

    2015-04-01

    The estimation of ice sheet mass changes from GRACE is basically an inverse problem; the solution is non-unique, and several procedures for determining the mass distribution exist. We present Greenland mass loss results from two such procedures, namely a direct spherical harmonic inversion made possible by a thin-layer assumption, and a generalized-inverse mascon procedure. These results are updated to the end of 2014, include the unusual 2013 mass-gain anomaly, and show good agreement once leakage from the Canadian ice caps is taken into account. The GRACE mass changes are further compared with GPS uplift data on the bedrock along the edge of the ice sheet. The solid Earth deformation is assumed to consist of an elastic deformation of the crust and an anelastic deformation of the underlying mantle (GIA). The crustal deformation is due to current surface loading and therefore contains a strong seasonal component of variation superimposed on a secular trend. The majority of the anelastic GIA deformation of the mantle is believed to be approximately constant. An accelerating secular trend and seasonal changes, as seen in Greenland, are therefore assumed to be due to elastic deformation from changes in surface mass loading by the ice sheet. The GRACE and GPS comparison is only valid if the signal content of the two observables is consistent. The GPS receivers measure movement at a single point on the bedrock surface and are therefore sensitive to a limited loading footprint, while the GRACE satellites measure a filtered, attenuated gravitational field at an altitude of approximately 500 km, making them sensitive to a much larger area. Despite this, the seasonal loading signals in the two observables show reasonably good agreement.

  10. The source model and recurrence interval of Genroku-type Kanto earthquakes estimated from paleo-shoreline data

    NASA Astrophysics Data System (ADS)

    Sato, Toshinori; Higuchi, Harutaka; Miyauchi, Takahiro; Endo, Kaori; Tsumura, Noriko; Ito, Tanio; Noda, Akemi; Matsu'ura, Mitsuhiro

    2016-02-01

    In the southern Kanto region of Japan, where the Philippine Sea plate is descending at the Sagami trough, two different types of large interplate earthquakes have occurred repeatedly. The 1923 (Taisho) and 1703 (Genroku) Kanto earthquakes characterize the first and second types, respectively. A reliable source model has been obtained for the 1923 event from seismological and geodetic data, but not for the 1703 event, for which only historical records and paleo-shoreline data are available. We developed an inversion method to estimate the fault slip distribution of repeating interplate earthquakes from paleo-shoreline data, based on the idea of crustal deformation cycles associated with subduction-zone earthquakes. By applying the inversion method to the present heights of the Genroku and Holocene marine terraces developed along the coasts of the southern Boso and Miura peninsulas, we estimated the fault slip distribution of the 1703 Genroku earthquake as follows. The source region extends along the Sagami trough from the Miura peninsula to off the southern Boso peninsula, covering the southern two-thirds of the source region of the 1923 Kanto earthquake. The coseismic slip reaches its maximum of 20 m at the southern tip of the Boso peninsula, and the moment magnitude (Mw) is calculated as 8.2. From the interseismic slip-deficit rates at the plate interface obtained by GPS data inversion, and assuming that the total slip deficit is compensated by coseismic slip, we can roughly estimate the average recurrence interval as 350 years for large interplate events of any type and 1400 years for Genroku-type events.

  11. The energy radiated by the 26 December 2004 Sumatra-Andaman earthquake estimated from 10-minute P-wave windows

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2007-01-01

    The rupture process of the Mw 9.1 Sumatra-Andaman earthquake lasted for approximately 500 sec, nearly twice as long as the teleseismic time windows between the P and PP arrival times generally used to compute radiated energy. In order to measure the P waves radiated by the entire earthquake, we analyze records that extend from the P-wave to the S-wave arrival times from stations at distances Δ > 60°. These 8- to 10-min windows contain the PP, PPP, and ScP arrivals, along with other multiply reflected phases. To gauge the effect of including these additional phases, we form the spectral ratio of the source spectrum estimated from extended windows (between TP and TS) to the source spectrum estimated from normal windows (between TP and TPP). The extended windows are analyzed as though they contained only the P-pP-sP wave group. We analyze four smaller earthquakes that occurred in the vicinity of the Mw 9.1 mainshock, with similar depths and focal mechanisms. These smaller events range in magnitude from an Mw 6.0 aftershock of 9 January 2005 to the Mw 8.6 Nias earthquake that occurred to the south of the Sumatra-Andaman earthquake on 28 March 2005. We average the spectral ratios for these four events to obtain a frequency-dependent operator for the extended windows. We then correct the source spectrum estimated from the extended records of the 26 December 2004 mainshock to obtain a complete or corrected source spectrum for the entire rupture process (~600 sec) of the great Sumatra-Andaman earthquake. Our estimate of the total seismic energy radiated by this earthquake is 1.4 × 10^17 J. When we compare the corrected source spectrum for the entire earthquake to the source spectrum from the first ~250 sec of the rupture process (obtained from normal teleseismic windows), we find that the mainshock radiated much more seismic energy in the first half of the rupture process than in the second half, especially over the period range from 3 sec to 40 sec.

  12. BEAM LOSS ESTIMATES AND CONTROL FOR THE BNL NEUTRINO FACILITY.

    SciTech Connect

    WENG, W.-T.; LEE, Y.Y.; RAPARIA, D.; TSOUPAS, N.; BEEBE-WANG, J.; WEI, J.; ZHANG, S.Y.

    2005-05-16

    The requirement of low beam loss is very important, both to protect machine components and to make hands-on maintenance possible. In this report, the design considerations for achieving high intensity and low loss are presented. We start by specifying the beam-loss limit for every physical process, followed by the design choices and parameters for realizing the required goals. The processes considered in this paper include emittance growth in the linac, H⁻ injection, transition crossing, coherent instabilities, and extraction losses.

  13. Exploration of deep sedimentary layers in Tacna city, southern Peru, using microtremors and earthquake data for estimation of local amplification

    NASA Astrophysics Data System (ADS)

    Yamanaka, Hiroaki; Gamero, Mileyvi Selene Quispe; Chimoto, Kosuke; Saguchi, Kouichiro; Calderon, Diana; La Rosa, Fernándo Lázares; Bardales, Zenón Aguilar

    2016-01-01

    S-wave velocity profiles of sedimentary layers in Tacna, southern Peru, based on analysis of microtremor array data and earthquake records, have been determined for estimation of site amplification. We investigated the vertical component of microtremors recorded by temporary arrays at two sites in the city to obtain Rayleigh-wave phase velocities. A receiver function was also estimated from existing earthquake data at a strong-motion station near one of the microtremor exploration sites. The phase velocity and the receiver function were jointly inverted for S-wave velocity profiles. The depths to the basement, defined by an S-wave velocity of 2.8 km/s, are similar at the two sites, about 1 km. The top soil at the site in the severely damaged area of the city has a lower S-wave velocity than that in the area only slightly damaged during the 2001 southern Peru earthquake. We subsequently estimate site amplifications from the velocity profiles and find that amplification in the damaged area is large at periods from 0.2 to 0.8 s, indicating a possible reason for the differences in damage observed during the 2001 southern Peru earthquake.

  14. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.

  15. Using safety inspection data to estimate shaking intensity for the 1994 Northridge earthquake

    USGS Publications Warehouse

    Thywissen, K.; Boatwright, J.

    1998-01-01

    We map the shaking intensity suffered in Los Angeles County during the 17 January 1994, Northridge earthquake using municipal safety inspection data. The intensity is estimated from the number of buildings given red, yellow, or green tags, aggregated by census tract. Census tracts contain from 200 to 4000 residential buildings and have an average area of 6 km² but are as small as 2 and 1 km² in the most densely populated areas of the San Fernando Valley and downtown Los Angeles, respectively. In comparison, the zip code areas on which standard MMI intensity estimates are based are six times larger, on average, than the census tracts. We group the buildings by age (before and after 1940 and 1976), by number of housing units (one, two to four, and five or more), and by construction type, and we normalize the tags by the total number of similar buildings in each census tract. We analyze the seven most abundant building categories. The fragilities (the fraction of buildings in each category tagged within each intensity level) for these seven building categories are adjusted so that the intensity estimates agree. We calibrate the shaking intensity to correspond with the modified Mercalli intensities (MMI) estimated and compiled by Dewey et al. (1995); the shapes of the resulting isoseismals are similar, although we underestimate the extent of the MMI = 6 and 7 areas. The fragility varies significantly between different building categories (by factors of 10 to 20) and building ages (by factors of 2 to 6). The post-1940 wood-frame multi-family (≥5 units) dwellings make up the most fragile building category, and the post-1940 wood-frame single-family dwellings make up the most resistant building category.

  16. Source study of two small earthquakes of Delhi, India, and estimation of ground motion from future moderate, local events

    NASA Astrophysics Data System (ADS)

    Bansal, B. K.; Singh, S. K.; Dharmaraju, R.; Pacheco, J. F.; Ordaz, M.; Dattatrayam, R. S.; Suresh, G.

    2009-01-01

    We study source characteristics of two small, local earthquakes which occurred in Delhi on 28 April 2001 (Mw3.4) and 18 March 2004 (Mw2.6). Both earthquakes were located in the heart of New Delhi, and were recorded in the epicentral region by digital accelerographs. The depths of the events are 15 km and 8 km, respectively. First motions and waveform modeling yield a normal-faulting mechanism with large strike-slip component. The strike of one of the nodal planes roughly agrees with NE-SW orientation of faults and lineaments mapped in the region. We use the recordings of the 2004 event as empirical Green’s functions to synthesize expected ground motions in the epicentral region of a Mw5.0 earthquake in Delhi. It is possible that such a local event may control the hazard in Delhi. Our computations show that a Mw5.0 earthquake would give rise to PGA of ~200 to 450 gal, the smaller values occurring at hard sites. The estimate of corresponding PGV is ~6 to 15 cm/s. The recommended response spectra, Sa, 5% damping, for Delhi, which falls in zone IV of the Indian seismic zoning map, may not be conservative enough at soft sites for a postulated Mw5.0 local earthquake.

  17. Historic Earthquake Damage for Buildings and Damage Estimated by the Rapid Seismic Analysis Procedure: A Comparison.

    DTIC Science & Technology

    1986-03-01

    ...the 1971 San Fernando earthquake and later code changes to reflect these lessons. Criteria 2 and 5 eliminate the smaller buildings in the 2,500... Fernando earthquake. The spurious peak was caused by the amplification of the base motion through the rock ridge and the fracturing of the ridge... Fernando earthquake (Ref 12). For pre-1933 buildings, the damage threshold is 0.15g. Maximum ground accelerations of 0.3g or greater are associated...

  18. Spatial and temporal variations of radiated seismic energy estimated for repeating earthquakes in northeastern Japan; implication for healing process

    NASA Astrophysics Data System (ADS)

    Ara, M.; Ide, S.; Uchida, N.

    2015-12-01

    Repeating earthquakes are shear slip events on the plate interface and are helpful for monitoring long-term deformation in subduction zones. Previous studies have measured the size of repeating earthquakes mainly using seismic moment, to calculate the slip in each event. As another measure of event size, radiated seismic energy may provide information related to the frictional properties of the plate interface. We estimated the radiated seismic energy for 620 repeating earthquakes of MJMA 2.5 to 5.9, detected by the method of Uchida and Matsuzawa [2013], in the Tohoku-Oki region. The study period is from 2001 to 2013, extending before and after the 2011 Mw 9 Tohoku-Oki earthquake, which was also accompanied by large afterslip [e.g., Ozawa et al., 2012]. Seismograms recorded by NIED Hi-net were used. We measured coda-wave amplitudes by the method of Mayeda et al. [2003] and estimated source spectra and radiated seismic energy by the method of Baltay et al. [2010] after slight modifications. The estimated scaled energy, the ratio of radiated seismic energy to seismic moment, shows a slight increase with seismic moment. The scaled energy increases with depth, while its temporal change before and after the Tohoku-Oki earthquake is not systematic. The scaled energy also increases with the inter-event time of repeating earthquakes. This might be explained by a difference in fault strength proportional to the logarithm of time. In addition to this healing relation, a scaling relationship between seismic moment and the inter-event time of repeating earthquakes is well known [Nadeau and Johnson, 1998]. From these healing and scaling relationships, scaled energy is expected to be proportional to the logarithm of seismic moment. This prediction is generally consistent with our observation, though the moment dependence is too weak for its functional form (power law or logarithmic) to be resolved. This healing-related scaling may be applicable to earthquakes in general, and might be associated with the

  19. Effects of tag loss on direct estimates of population growth rate

    USGS Publications Warehouse

    Rotella, J.J.; Hines, J.E.

    2005-01-01

    The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).

  20. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
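
    The decay-rate idea behind IRDM can be condensed to a few lines: backward-integrate the squared impulse response (Schroeder integration), fit the decibel decay slope, and convert decay rate DR (dB/s) to a loss factor via eta = DR/(27.3 f). The sketch below demonstrates this on a synthetic single-band response; the automated procedure in the paper involves more care with band filtering and fit-range selection:

      import numpy as np

      def loss_factor_irdm(h, fs, f_center):
          """IRDM sketch: Schroeder backward integration of the squared
          (band-limited) impulse response, straight-line fit of the dB decay,
          and conversion eta = decay_rate / (27.3 * f)."""
          decay = np.cumsum((h ** 2)[::-1])[::-1]
          db = 10.0 * np.log10(decay / decay[0])
          t = np.arange(len(db)) / fs
          sel = (db < -5.0) & (db > -25.0)       # avoid onset and noise floor
          slope, _ = np.polyfit(t[sel], db[sel], 1)
          return -slope / (27.3 * f_center)

      # Synthetic 500 Hz band response with loss factor 0.01:
      fs, f0, eta = 8192, 500.0, 0.01
      t = np.arange(0.0, 1.0, 1.0 / fs)
      h = np.exp(-np.pi * f0 * eta * t) * np.sin(2.0 * np.pi * f0 * t)
      print(loss_factor_irdm(h, fs, f0))         # ~0.01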

  1. Estimates of aseismic slip associated with small earthquakes near San Juan Bautista, CA

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Simons, M.; Ampuero, J.-P.

    2016-11-01

    Postseismic slip observed after large (M > 6) earthquakes typically has an equivalent moment of a few tens of percent of the coseismic moment. Some observations of the recurrence intervals of repeating earthquakes suggest that postseismic slip following small (M≲4) earthquakes could be much larger—up to 10 or 100 times the coseismic moment. We use borehole strain data from U.S. Geological Survey strainmeter SJT to analyze deformation in the days before and after 1000 1.9 < M < 5 earthquakes near San Juan Bautista, CA. We find that on average, postseismic strain is roughly equal in magnitude to coseismic strain for the magnitude range considered, suggesting that postseismic moment following these small earthquakes is roughly equal to coseismic moment. This postseismic to coseismic moment ratio is larger than typically observed in earthquakes that rupture through the seismogenic zone but is much smaller than was hypothesized from modeling repeating earthquakes. Our results are consistent with a simple, self-similar model of earthquakes.

  2. Estimation of human heat loss in five Mediterranean regions.

    PubMed

    Bilgili, M; Simsek, E; Sahin, B; Yasar, A; Ozbek, A

    2015-10-01

    This study investigates the effects of seasonal weather differences on the human body's heat losses in the Mediterranean region of Turkey. The provinces of Adana, Antakya, Osmaniye, Mersin and Antalya were chosen for the research, and monthly atmospheric temperature, relative humidity, wind speed and atmospheric pressure data from 2007 were used. In all these provinces, radiative, convective and evaporative heat losses from the human body via the skin surface and respiration were computed from the meteorological data using the heat balance equation. According to the results, the rates of radiative, convective and evaporative heat loss from the human body vary considerably from season to season. In all the provinces, 90% of heat loss was caused by heat transfer from the skin, with the remaining 10% taking place through respiration. Furthermore, radiative and convective heat loss through the skin reached the highest values in the winter months, at approximately 110-140 W/m², with the lowest values coming in the summer months at roughly 30-50 W/m².

  3. The tsunami source area of the 2003 Tokachi-oki earthquake estimated from tsunami travel times and its relationship to the 1952 Tokachi-oki earthquake

    USGS Publications Warehouse

    Hirata, K.; Tanioka, Y.; Satake, K.; Yamaki, S.; Geist, E.L.

    2004-01-01

    We estimate the tsunami source area of the 2003 Tokachi-oki earthquake (Mw 8.0) from observed tsunami travel times at 17 Japanese tide gauge stations. The estimated tsunami source area (~1.4 × 10^4 km²) coincides with the western half of the ocean-bottom deformation area (~2.52 × 10^4 km²) of the 1952 Tokachi-oki earthquake (Mw 8.1), previously inferred from tsunami waveform inversion. This suggests that the 2003 event ruptured only the western half of the 1952 rupture extent. Geographical distribution of the maximum tsunami heights in 2003 differs significantly from that of the 1952 tsunami, supporting this hypothesis. Analysis of first-peak tsunami travel times indicates that a major uplift of the ocean bottom occurred approximately 30 km to the NNW of the mainshock epicenter, just above a major asperity inferred from seismic waveform inversion. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.

  4. Estimates of stress drop and crustal tectonic stress from the 27 February 2010 Maule, Chile, earthquake: Implications for fault strength

    USGS Publications Warehouse

    Luttrell, K.M.; Tong, X.; Sandwell, D.T.; Brooks, B.A.; Bevis, M.G.

    2011-01-01

    The great 27 February 2010 Mw 8.8 earthquake off the coast of southern Chile ruptured a ~600 km length of subduction zone. In this paper, we make two independent estimates of shear stress in the crust in the region of the Chile earthquake. First, we use a coseismic slip model constrained by geodetic observations from interferometric synthetic aperture radar (InSAR) and GPS to derive a spatially variable estimate of the change in static shear stress along the ruptured fault. Second, we use a static force balance model to constrain the crustal shear stress required to simultaneously support observed fore-arc topography and the stress orientation indicated by the earthquake focal mechanism. This includes the derivation of a semianalytic solution for the stress field exerted by surface and Moho topography loading the crust. We find that the deviatoric stress exerted by topography is minimized in the limit when the crust is considered an incompressible elastic solid, with a Poisson ratio of 0.5, and is independent of Young's modulus. This places a strict lower bound on the critical stress state maintained by the crust supporting plastically deformed accretionary wedge topography. We estimate the coseismic shear stress change from the Maule event ranged from −6 MPa (stress increase) to 17 MPa (stress drop), with a maximum depth-averaged crustal shear-stress drop of 4 MPa. We separately estimate that the plate-driving forces acting in the region, regardless of their exact mechanism, must contribute at least 27 MPa trench-perpendicular compression and 15 MPa trench-parallel compression. This corresponds to a depth-averaged shear stress of at least 7 MPa. The comparable magnitude of these two independent shear stress estimates is consistent with the interpretation that the section of the megathrust fault ruptured in the Maule earthquake is weak, with the seismic cycle relieving much of the total sustained shear stress in the crust. Copyright 2011 by the American

  5. Experimental study of permanent displacement estimate method based on strong-motion earthquake accelerograms

    NASA Astrophysics Data System (ADS)

    Lu, Tao; Hu, Guorui

    2016-04-01

    In engineering seismology, the seismic permanent displacement of a near-fault site is often obtained by processing the ground-motion accelerogram recorded by an instrument at a station. Because of differences in the estimation methods and algorithm parameters chosen, strongly different values of the permanent displacement are often obtained. The reliability of these methods has not been demonstrated in practice, and the selection of the algorithm parameters has to be considered carefully. To address this problem, an experimental study of permanent displacement estimated from accelerograms was carried out using a large shaking table and a sliding mechanism in an earthquake engineering laboratory. In the experiments, the large shaking table generated dynamic excitation without permanent displacement, the sliding mechanism fixed on the shaking table generated the permanent displacement, and the accelerogram containing the permanent-displacement information was recorded by an instrument on the sliding mechanism. The permanent displacement was then estimated from the accelerogram and compared with the displacement measured by a displacement meter and by digital close-range photogrammetry. The experiments showed that a reliable permanent displacement could be obtained by the existing processing method under these simple laboratory conditions, provided the algorithm parameters are selected carefully.
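
    The sensitivity to processing choices that motivates this experiment is easy to reproduce: double integration of a synthetic accelerogram carrying a genuine 1 m static offset returns very different "permanent displacements" depending on whether a linear baseline is removed first. A minimal sketch (the synthetic pulse record is illustrative):

      import numpy as np
      from scipy import signal, integrate

      def double_integrate(acc, dt, detrend=False):
          """Displacement history from an accelerogram by double integration.
          detrend=True removes a linear baseline first, a common processing
          choice that can destroy a real static offset."""
          if detrend:
              acc = signal.detrend(acc, type="linear")
          vel = integrate.cumulative_trapezoid(acc, dx=dt, initial=0.0)
          return integrate.cumulative_trapezoid(vel, dx=dt, initial=0.0)

      # Synthetic 30 s record: a push-then-stop acceleration pulse pair that
      # leaves a true permanent offset of 1 m (net velocity returns to zero).
      dt = 0.01
      acc = np.zeros(3000)
      acc[100:200], acc[200:300] = 1.0, -1.0
      print(double_integrate(acc, dt)[-1])                 # ~1.0 m
      print(double_integrate(acc, dt, detrend=True)[-1])   # far from 1 m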

  6. Magnitude estimates of two large aftershocks of the 16 December 1811 New Madrid earthquake

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.

    2002-01-01

    The three principal New Madrid mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for either the mainshocks or the aftershocks, available historical accounts do provide information that can be used to estimate magnitudes and locations for the large events. In this article we investigate two of the largest aftershocks: one near dawn following the first mainshock on 16 December 1811, and one near midday on 17 December 1811. We reinterpret original felt reports to obtain sets of 48 and 20 modified Mercalli intensity values for the two aftershocks, respectively. For the dawn aftershock, we infer an Mw of approximately 7.0 based on a comparison of its intensities with those of the smallest New Madrid mainshock. Based on a detailed account that appears to describe near-field ground motions, we further propose a new fault rupture scenario for the dawn aftershock. We suggest that the aftershock had a thrust mechanism and occurred on a southeastern limb of the Reelfoot fault. For the 17 December 1811 aftershock, we infer an Mw of approximately 6.1 ± 0.2. This value is determined using the method of Bakun et al. (2002), which is based on a new calibration of intensity versus distance for earthquakes in central and eastern North America. The location of this event is not well constrained, but the available accounts suggest an epicenter beyond the southern end of the New Madrid Seismic Zone.

  7. Improved phase arrival estimate and location for local earthquakes in South Korea

    NASA Astrophysics Data System (ADS)

    Morton, E. A.; Rowe, C. A.; Begnaud, M. L.

    2012-12-01

    The Korean Institute of Geoscience and Mineral Resources (KIGAM) and the Korean Meteorological Agency (KMA) regularly report local (distance < ~1200 km) seismicity recorded with their networks; we obtain preliminary event location estimates as well as waveform data, but no phase arrivals are reported, so the data are not immediately useful for earthquake location. Our goal is to identify seismic events that are sufficiently well located to provide accurate seismic travel-time information for events within the KIGAM and KMA networks that are also recorded by some regional stations. Toward that end, we are using a combination of manual phase identification and arrival-time picking, with waveform cross-correlation, to cluster events that have occurred in close proximity to one another, which allows for improved phase identification by comparing the highly correlating waveforms. We cross-correlate the known events with one another at five seismic stations and cluster events that correlate above a correlation coefficient threshold of 0.7, which reveals a few clusters containing a few events each. The small number of repeating events suggests that the online catalogs have had mining and quarry blasts removed before publication, as these can contribute significantly to repeating seismic sources in relatively aseismic regions such as South Korea. The dispersed source locations in our catalog, however, are ideal for seismic velocity modeling because they provide superior sampling through the dense seismic station arrangement, which produces favorable event-to-station ray path coverage. Following careful manual phase picking on 104 events chosen to provide adequate ray coverage, we re-locate the events to obtain improved source coordinates. The re-located events are used with Thurber's Simul2000 pseudo-bending local tomography code to estimate the crustal structure of the Korean Peninsula, which is an important contribution to ongoing calibration for events of interest in the region.
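
    The correlate-and-threshold step can be sketched as below; the greedy single-linkage grouping is an assumption, since the abstract does not specify the clustering algorithm.

        import numpy as np

        def max_norm_xcorr(a, b):
            """Maximum normalized cross-correlation between two waveforms."""
            a = (a - a.mean()) / (a.std() * len(a))
            b = (b - b.mean()) / b.std()
            return np.correlate(a, b, mode="full").max()

        def cluster_events(waveforms, threshold=0.7):
            """Group events whose waveforms correlate above the threshold."""
            n = len(waveforms)
            labels = [-1] * n
            cluster = 0
            for i in range(n):
                if labels[i] == -1:
                    labels[i] = cluster
                    for j in range(i + 1, n):
                        if (labels[j] == -1 and
                                max_norm_xcorr(waveforms[i], waveforms[j]) >= threshold):
                            labels[j] = cluster
                    cluster += 1
            return labels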

  8. Source rupture processes of the 2016 Kumamoto, Japan, earthquakes estimated from strong-motion waveforms

    NASA Astrophysics Data System (ADS)

    Kubo, Hisahiko; Suzuki, Wataru; Aoi, Shin; Sekiguchi, Haruko

    2016-10-01

    The detailed source rupture process of the M 7.3 event (April 16, 2016, 01:25, JST) of the 2016 Kumamoto, Japan, earthquakes was derived from strong-motion waveforms using multiple-time-window linear waveform inversion. Based on the observations of surface ruptures, the spatial distribution of aftershocks, and the geodetic data, a realistic curved fault model was developed for source-process analysis of this event. The seismic moment and maximum slip were estimated as 5.5 × 10¹⁹ N m (Mw 7.1) and 3.8 m, respectively. The source model of the M 7.3 event had two significant ruptures. One rupture propagated toward the northeastern shallow region at 4 s after rupture initiation and continued with large slips to approximately 16 s. This rupture caused a large slip region 10-30 km northeast of the hypocenter that reached the caldera of Mt. Aso. Another rupture propagated toward the surface from the hypocenter at 2-6 s and then propagated toward the northeast along the near surface at 6-10 s. A comparison with the result of using a single fault plane model demonstrated that the use of the curved fault model led to improved waveform fit at the stations south of the fault. The source process of the M 6.5 event (April 14, 2016, 21:26, JST) was also estimated. In the source model obtained for the M 6.5 event, the seismic moment was 1.7 × 10¹⁸ N m (Mw 6.1), and the rupture with large slips propagated from the hypocenter to the surface along the north-northeast direction at 1-6 s. The results in this study are consistent with observations of the surface ruptures.
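
    The quoted moments and magnitudes can be checked with the standard Hanks-Kanamori moment magnitude relation:

        import math

        def moment_to_mw(m0_nm):
            """Moment magnitude from seismic moment in N*m (Hanks & Kanamori)."""
            return (math.log10(m0_nm) - 9.1) / 1.5

        print(round(moment_to_mw(5.5e19), 1))  # M 7.3 event -> Mw 7.1
        print(round(moment_to_mw(1.7e18), 1))  # M 6.5 event -> Mw 6.1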

  9. Estimating extreme losses for the Florida Public Hurricane Model—part II

    NASA Astrophysics Data System (ADS)

    Gulati, Sneh; George, Florence; Hamid, Shahid

    2017-01-01

    Rising global temperatures are leading to an increase in the number of extreme events and associated losses (http://www.epa.gov/climatechange/science/indicators/). Accurate estimation of these extreme losses is critical to insurance companies seeking to protect themselves against them. In a previous paper, Gulati et al. (2014) discussed probable maximum loss (PML) estimation for the Florida Public Hurricane Loss Model (FPHLM) using parametric and nonparametric methods. In this paper, we investigate the use of semi-parametric methods to do the same. Detailed analysis of the data shows that the annual losses from the FPHLM do not tend to be very heavy tailed, and therefore neither the popular Hill estimator nor the moment estimator works well. However, the Pickands estimator with a threshold around the 84th percentile provides a good fit for the extreme quantiles of the losses.
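
    A minimal sketch of the Pickands estimator on a loss sample follows; the choice of k (tied to the ~84th-percentile threshold mentioned above) is left to the user, and the toy data are illustrative.

        import numpy as np

        def pickands_xi(losses, k):
            """Pickands estimator of the extreme-value index from the k-th,
            2k-th and 4k-th largest observations (requires 4k <= n)."""
            x = np.sort(losses)[::-1]              # descending order statistics
            num = x[k - 1] - x[2 * k - 1]
            den = x[2 * k - 1] - x[4 * k - 1]
            return np.log(num / den) / np.log(2.0)

        sample = np.random.default_rng(0).pareto(3.0, 2000)  # toy losses
        print(pickands_xi(sample, k=80))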

  10. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
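
    The empirical model's country-specific casualty rates follow a two-parameter lognormal form; the sketch below uses hypothetical parameters (theta, beta) and a hypothetical exposure table, not calibrated PAGER values.

        from math import erf, log, sqrt

        def fatality_rate(mmi, theta, beta):
            """Lognormal casualty-rate curve, Phi(ln(S/theta)/beta), of the
            kind used in PAGER's empirical model."""
            return 0.5 * (1.0 + erf(log(mmi / theta) / (beta * sqrt(2.0))))

        def expected_fatalities(exposure_by_mmi, theta, beta):
            """Population exposure per intensity bin times its casualty rate."""
            return sum(pop * fatality_rate(mmi, theta, beta)
                       for mmi, pop in exposure_by_mmi.items())

        exposure = {6.0: 500_000, 7.0: 120_000, 8.0: 15_000}  # hypothetical
        print(expected_fatalities(exposure, theta=13.2, beta=0.25))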

  11. Source parameters of the 2008 Bukavu-Cyangugu earthquake estimated from InSAR and teleseismic data

    NASA Astrophysics Data System (ADS)

    D'Oreye, Nicolas; González, Pablo J.; Shuler, Ashley; Oth, Adrien; Bagalwa, Louis; Ekström, Göran; Kavotha, Déogratias; Kervyn, François; Lucas, Celia; Lukaya, François; Osodundu, Etoy; Wauthier, Christelle; Fernández, José

    2011-02-01

    Earthquake source parameter determination is of great importance for hazard assessment, as well as for a variety of scientific studies concerning regional stress and strain release and volcano-tectonic interaction. This is especially true for poorly instrumented, densely populated regions such as encountered in Africa, where even the distribution of seismicity remains poorly documented. In this paper, we combine data from satellite radar interferometry (InSAR) and teleseismic waveforms to determine the source parameters of the Mw 5.9 earthquake that occurred on 2008 February 3 near the cities of Bukavu (DR Congo) and Cyangugu (Rwanda). This was the second largest earthquake ever to be recorded in the Kivu basin, a section of the western branch of the East African Rift (EAR). This earthquake is of particular interest due to its shallow depth and proximity to active volcanoes and Lake Kivu, which contains high concentrations of dissolved carbon dioxide and methane. The shallow depth and possible similarity with dyking events recognized in other parts of EAR suggested the potential association of the earthquake with a magmatic intrusion, emphasizing the necessity of accurate source parameter determination. In general, we find that estimates of fault plane geometry, depth and scalar moment are highly consistent between teleseismic and InSAR studies. Centroid-moment-tensor (CMT) solutions locate the earthquake near the southern part of Lake Kivu, while InSAR studies place it under the lake itself. CMT solutions characterize the event as a nearly pure double-couple, normal faulting earthquake occurring on a fault plane striking 350° and dipping 52° east, with a rake of -101°. This is consistent with locally mapped faults, as well as InSAR data, which place the earthquake on a fault striking 355° and dipping 55° east, with a rake of -98°. The depth of the earthquake was constrained by a joint analysis of teleseismic P and SH waves and the CMT data set, showing that

  12. Earthquake Analysis.

    ERIC Educational Resources Information Center

    Espinoza, Fernando

    2000-01-01

    Indicates the importance of the development of students' measurement and estimation skills. Analyzes earthquake data recorded at seismograph stations and explains how to read and modify the graphs. Presents an activity for student evaluation. (YDS)

  13. Efficient Acoustic Uncertainty Estimation for Transmission Loss Calculations

    DTIC Science & Technology

    2011-09-01

    Kevin R. James and David R. Dowling, Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109-2133. Related publication: Kundu, P.K., Cohen, I.M., and Dowling, D.R., Fluid Mechanics, 5th Ed. (Academic Press, Oxford, 2012), 891 pages.

  14. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among the recurrence intervals of large earthquakes estimated before and after the event from slip rates and paleoseismologic results. Post-seismic trenching showed that the central Longmen Shan fault zone probably undergoes events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake, based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data, and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
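
    The moment-balance arithmetic behind such an estimate is simple: the recurrence interval is the characteristic coseismic moment divided by the moment accumulation rate. The characteristic moment below is an assumed round number consistent with the quoted rate and interval, not a value taken from the paper's slip model.

        moment_rate = 2.7e17   # N*m/yr, from the abstract
        m0_char = 1.05e21      # N*m, assumed characteristic coseismic moment

        recurrence_yr = m0_char / moment_rate
        print(f"recurrence interval ~ {recurrence_yr:.0f} yr")  # ~3900 yr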

  15. A teleseismic study of the 2002 Denali fault, Alaska, earthquake and implications for rapid strong-motion estimation

    USGS Publications Warehouse

    Ji, C.; Helmberger, D.V.; Wald, D.J.

    2004-01-01

    Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, developed in successive phases, progressively improve the fit to the waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography, and further (Phase III) by using the calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004 Earthquake Engineering Research Institute.

  16. Loss of Information in Estimating Item Parameters in Incomplete Designs

    ERIC Educational Resources Information Center

    Eggen, Theo J. H. M.; Verelst, Norman D.

    2006-01-01

    In this paper, the efficiency of conditional maximum likelihood (CML) and marginal maximum likelihood (MML) estimation of the item parameters of the Rasch model in incomplete designs is investigated. The use of the concept of F-information (Eggen, 2000) is generalized to incomplete testing designs. The scaled determinant of the F-information…

  17. Long-period earthquake simulations in the Wasatch Front, UT: misfit characterization and ground motion estimates

    USGS Publications Warehouse

    Moschetti, Morgan P.; Ramírez-Guzmán, Leonardo

    2011-01-01

    In this research we characterize the goodness-of-fit between observed and synthetic seismograms from three small magnitude (M3.6-4.5) earthquakes in the region using the Wasatch Front community velocity model (WCVM) in order to determine the ability of the WCVM to predict earthquake ground motions for scenario earthquake modeling efforts. We employ the goodness-of-fit algorithms and criteria of Olsen and Mayhew (2010). In focusing comparisons on the ground motion parameters that are of greatest importance in engineering seismology, we find that the synthetic seismograms calculated using the WCVM produce a fair fit to the observed ground motion records up to a frequency of 0.5 Hz for two of the modeled earthquakes and up to 0.1 Hz for one of the earthquakes. In addition to the reference seismic material model (WCVM), we carry out earthquake simulations using material models with perturbations to the regional seismic model and with perturbations to the deep sedimentary basins. Simple perturbations to the regional seismic velocity model and to the seismic velocities of the sedimentary basin result in small improvements in the observed misfit but do not indicate a significantly improved material model. Unresolved differences between the observed and synthetic seismograms are likely due to un-modeled heterogeneities and incorrect basin geometries in the WCVM. These differences suggest that ground motion prediction accuracy from deterministic modeling varies across the region and further efforts to improve the WCVM are needed.

  18. Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning

    DTIC Science & Technology

    2008-01-01

    …ranging from income level to age, and her preference order over a set of products (e.g., movies in Netflix). The ranking task is to learn a mapping… learners in RankBoost. However, in both cases, the proposed strategy selects the samples which are estimated to produce a faster convergence…

  19. Estimation of the Demand for Hospital Care After a Possible High-Magnitude Earthquake in the City of Lima, Peru.

    PubMed

    Bambarén, Celso; Uyen, Angela; Rodriguez, Miguel

    2017-02-01

    A model prepared by National Civil Defense (INDECI; Lima, Peru) estimated that a magnitude 8.0 (Mw) earthquake off the central coast of Peru would result in 51,019 deaths and 686,105 injured in districts of Metropolitan Lima and Callao. Using this information as a base, a study was designed to determine the characteristics of the demand for treatment in public hospitals and to estimate gaps in care in the hours immediately after such an event.

  1. Combined UAVSAR and GPS Estimates of Fault Slip for the M 6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Donnellan, A.; Parker, J. W.; Hawkins, B.; Hensley, S.; Jones, C. E.; Owen, S. E.; Moore, A. W.; Wang, J.; Pierce, M. E.; Rundle, J. B.

    2014-12-01

    The South Napa to Santa Rosa area has been observed with NASA's UAVSAR since late 2009 as part of an experiment to monitor areas identified as having a high probability of an earthquake. The M 6.0 South Napa earthquake occurred on 24 August 2014. The area was flown on 29 May 2014, preceding the earthquake, and again on 29 August 2014, five days after the earthquake. The UAVSAR results show slip on a single fault at the south end of the rupture near the epicenter of the event. The rupture branches out into multiple faults further north near the Napa area. A combined inversion of rapid GPS results and the unwrapped UAVSAR interferogram indicates nearly pure strike-slip motion. Using this assumption, the UAVSAR data show horizontal right-lateral slip across the fault of 19 cm at the south end of the rupture, increasing to 70 cm northward over a distance of 6.5 km. The joint inversion indicates that slip of ~30 cm on a network of sub-parallel faults is concentrated in a zone about 17 km long. The lower depths of the faults are 5-8.5 km. The eastern two sub-parallel faults break the surface, while three faults to the west are buried at depths ranging from 2-6 km, with deeper depths to the north and west. The geodetic moment release is equivalent to a M 6.1 event. Additional ruptures are observed in the interferogram, but the inversions suggest that they represent superficial slip that does not contribute to the overall moment release.
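
    A minimal sketch of such a joint geodetic inversion, assuming Green's functions from an elastic dislocation code (e.g. an Okada half-space) and simple Tikhonov damping, is given below; names, weights, and the toy demo are illustrative.

        import numpy as np

        def invert_slip(G_insar, d_insar, G_gps, d_gps, w_gps=2.0, damping=0.1):
            """Weighted, damped least-squares slip inversion: solve G m = d."""
            G = np.vstack([G_insar, w_gps * G_gps])
            d = np.concatenate([d_insar, w_gps * d_gps])
            n = G.shape[1]
            G_reg = np.vstack([G, damping * np.eye(n)])  # zeroth-order damping
            d_reg = np.concatenate([d, np.zeros(n)])
            slip, *_ = np.linalg.lstsq(G_reg, d_reg, rcond=None)
            return slip

        # toy demo with synthetic Green's functions
        rng = np.random.default_rng(0)
        G_i, G_g = rng.normal(size=(50, 10)), rng.normal(size=(12, 10))
        true_slip = np.linspace(0.1, 0.7, 10)
        print(invert_slip(G_i, G_i @ true_slip, G_g, G_g @ true_slip).round(2))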

  2. Pictorial estimation of blood loss in a birthing pool--an aide memoire.

    PubMed

    Goodman, Anushia

    2015-04-01

    The aim of this article is to share some photographic images to help midwives visually estimate blood loss at water births. The PubMed, CINAHL and MEDLINE databases were searched for relevant research; there is little evidence to inform the practice of visually estimating blood loss in water, as discussed further in the article. This article outlines a simulation in which varying amounts of blood were poured into a birthing pool and captured in photo images. Photo images of key amounts such as 150 ml, 300 ml and 450 ml can be useful visual markers when estimating blood loss at water births. The speed of spread across the pool may be a significant factor in assessing blood loss. The author recommends that midwives and educators embark on similar simulations to inform their skill in estimating blood loss at water births.

  3. Broadband Ground Motion Estimates for Scenario Earthquakes in the San Francisco Bay Region

    NASA Astrophysics Data System (ADS)

    Graves, R. W.

    2006-12-01

    Using broadband (0-10 Hz) simulation procedures, we are assessing the ground motions that could be generated by different earthquake scenarios occurring on major strike-slip faults of the San Francisco Bay region. These simulations explicitly account for several important ground motion features, including rupture directivity, 3D basin response, and the depletion of high frequency ground motions that occurs for surface rupturing events. This work complements ongoing USGS efforts to quantify the ground shaking hazards throughout the San Francisco Bay region. These efforts involve development and testing of a 3D velocity model for northern California (USGS Bay Area Velocity Model, version 05.1.0) using observations from the 1989 Loma Prieta earthquake, characterization of 1906 rupture scenarios and ground motions, and the development and analysis of rupture scenarios on other Bay Area faults. The adequacy of the simulation model has been tested using ground motion data recorded during the 1989 Loma Prieta earthquake and by comparison with the reported intensity data from the 1906 earthquake. Comparisons of the simulated broadband (0-10 Hz) ground motions with the recorded motions for the 1989 Loma Prieta earthquake demonstrate that the modeling procedure matches the observations without significant bias over a broad range of frequencies, site types, and propagation distances. The Loma Prieta rupture model is based on a wavenumber-squared refinement of the Wald et al. (1991) slip distribution, with the rupture velocity set at 75 percent of the local shear wave velocity and a Kostrov-type slip function having a rise time of about 1.4 sec. Simulations of 1906 scenario ruptures indicate very strong directivity effects to the north and south of the assumed epicenter, adjacent to San Francisco. We are currently analyzing additional earthquake scenarios on the Hayward-Rodgers Creek and San Andreas faults in order to provide a more comprehensive framework for assessing

  4. Estimating Intensities and/or Strong Motion Parameters Using Civilian Monitoring Videos: The May 12, 2008, Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolin; Wu, Zhongliang; Jiang, Changsheng; Xia, Min

    2011-05-01

    One of the important issues in macroseismology and engineering seismology is how to get as much intensity and/or strong motion data as possible. We collected and studied several cases in the May 12, 2008, Wenchuan earthquake, exploring the possibility of estimating intensities and/or strong ground motion parameters using civilian monitoring videos that were deployed originally for security purposes. We used 53 video recordings in different places to determine the intensity distribution of the earthquake, which is shown to be consistent with the intensity distribution mapped by field investigation, and even better than that given by the Community Internet Intensity Map. In some of the videos, the seismic wave propagation is clearly visible and can be measured with reference to artificial objects such as cars and/or trucks. By measuring the propagating wave, strong motion parameters can be roughly but quantitatively estimated. As a demonstration of this 'propagating-wave method', we used a series of civilian videos recorded in different parts of Sichuan and Shaanxi and estimated the local PGAs. The estimates are compared with the measurements reported by strong motion instruments. The result shows that civilian monitoring videos provide a practical way of collecting and estimating intensity and/or strong motion parameters; they have the advantage of being dynamic and of allowing playback for further analysis, reflecting a new trend for macroseismology in our digital era.

  5. Defeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the Century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  6. Tsunami Waveform Inversion Technique to Estimate the Initial Sea Surface Displacement - Application to the 2007 Niigataken Chuetsu-oki Earthquake Tsunami

    NASA Astrophysics Data System (ADS)

    Tanioka, Y.; Namegaya, Y.; Satake, K.

    2008-12-01

    Recent earthquake source studies using the tsunami waveform inversion technique generally estimate slip distributions of large earthquakes by assuming the fault geometries. However, if an earthquake source is complex or not obvious, it is better to first estimate the initial sea surface displacement of the tsunami using the tsunami waveform inversion; that result can then be used to estimate or discuss the source process of the large earthquake. In this study, in order to estimate the initial sea surface displacement due to an earthquake, a new inversion technique using observed tsunami waveforms is developed. The sea surface in the possible tsunami source region is divided into small cells. Tsunami waveforms, or Green's functions for the inversion, are numerically computed at tide gauge stations for each cell with a unit amount of uplift. The sea surface displacements for each cell are estimated by inversion of the observed tsunami waveforms at those tide gauges. We apply the above technique to estimate the initial sea surface displacement due to the 2007 Niigataken Chuetsu-oki earthquake (MJMA 6.8). The earthquake occurred off the coast of Niigata prefecture, on the Japan Sea coast of central Japan, at 10:13 a.m. (JST) on 16th July, 2007. Various source models of the earthquake were suggested using aftershock distribution data, seismological waveform data or geodetic data, but the fault plane of the earthquake is still controversial. The earthquake was accompanied by a tsunami, which was recorded at tide gauge stations along the Japan Sea coast. A maximum height of about 1 m was observed at a tide gauge station at Banjin, Kashiwazaki city, near the source region. Observed tsunami waveforms at ten tide gauge stations located around the source region are used for the inversion. The sea surface above the source region, or the aftershock area, is divided into 26 cells (4 km × 4 km) to estimate the initial sea surface displacement. The result shows that uplifts are
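
    The inversion itself is linear: stack the precomputed unit-uplift Green's functions into a matrix and solve for the uplift of each cell by damped least squares. The sketch below is schematic; the damping value and array shapes are assumptions.

        import numpy as np

        def invert_initial_displacement(green, observed, damping=0.05):
            """green: (n_waveform_samples, n_cells) stacked tide-gauge Green's
            functions for unit uplift; observed: stacked observed waveforms."""
            n = green.shape[1]
            A = np.vstack([green, damping * np.eye(n)])
            b = np.concatenate([observed, np.zeros(n)])
            uplift, *_ = np.linalg.lstsq(A, b, rcond=None)
            return uplift  # initial uplift/subsidence per cell, in meters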

  7. Exploring the uncertainty range of coseismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2017-01-01

    A new finite fault inversion strategy is developed to explore the uncertainty range for the energy-based average coseismic stress drop (Δτ_E) of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a Δτ_E matching a specified value. We do the inversions for a wide range of stress drops. These results produce a trade-off curve between the misfit to the observations and Δτ_E, which allows one to define the range of Δτ_E that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of Δτ_E (5-10 MPa) for this earthquake is successfully constrained. However, the same data set exhibits no sensitivity to the upper bound of Δτ_E because there is limited resolution to the fine-scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of Δτ_E cannot be constrained with them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of Δτ_E leads to the conclusions that (1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated and (2) the upper bound of the average fracture energy EG cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of Δτ_E can be used as an energy-based smoothing scheme during finite fault inversions.

  8. An Optimum Model to Estimate Path Losses for 400 MHz Band Land Mobile Radio

    NASA Astrophysics Data System (ADS)

    Miyashita, Michifumi; Terada, Takashi; Serizawa, Yoshizumi

    It is difficult to estimate path loss for land mobile radio using a single path loss model, such as the diffraction model or the Okumura model alone, when mobile radios are used over a widespread area. Furthermore, high accuracy in path loss estimation is needed when the radio system is digitized, because degradation of CNR due to interference deteriorates communications. In this paper, conventional path loss models, i.e., the diffraction model, the Okumura model, and the two-ray model, were evaluated against 400 MHz land mobile radio field measurements, and a method of improving path loss estimation by applying each of these conventional models selectively was proposed. The proportion of estimates with errors between -10 dB and +10 dB for the method applying the correction factors derived from our field measurements was 71.41%, while the proportions for the conventional diffraction and Okumura models without any correction factors were 26.71% and 49.42%, respectively.
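
    The idea of switching between simple propagation models can be illustrated as below. The free-space/two-ray switch at the crossover distance is a stand-in for the paper's actual selection among diffraction, Okumura and two-ray models with measurement-derived correction factors; antenna heights are assumed.

        import math

        def free_space_loss_db(d_m, f_hz):
            """Free-space (Friis) path loss in dB."""
            return 20 * math.log10(4 * math.pi * d_m * f_hz / 3e8)

        def two_ray_loss_db(d_m, h_tx_m, h_rx_m):
            """Two-ray ground-reflection loss, beyond the crossover distance."""
            return 40 * math.log10(d_m) - 20 * math.log10(h_tx_m * h_rx_m)

        def path_loss_db(d_m, f_hz=400e6, h_tx=30.0, h_rx=1.5):
            crossover = 4 * math.pi * h_tx * h_rx * f_hz / 3e8
            if d_m > crossover:
                return two_ray_loss_db(d_m, h_tx, h_rx)
            return free_space_loss_db(d_m, f_hz)

        print(round(path_loss_db(5000.0), 1))  # dB at 5 km, 400 MHz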

  9. Q Estimates using the Coda of Local Earthquakes in Western Turkey

    NASA Astrophysics Data System (ADS)

    Akyol, Nihal

    2015-04-01

    The regional extension in central west Turkey has been associated with different deformation processes, such as spreading and thinning of over-thickened crust following the latest collision across the Neotethys, Arabia-Eurasia convergence resulting in westward extrusion of the Anatolian Plate, and Africa-Eurasia convergence forming regional tectonics in the back-arc extensional area. Utilizing a single isotropic scattering model, the coda quality factor (Qc) at five frequency bands (1.5, 3, 5, 7, 10 Hz) and for eight window lengths (25-60 s, in steps of 5 s) was estimated in the region. The data come from 228 earthquakes with local magnitudes of 2.9-4.9 and depths of 2.2-27.0 km. The source-to-receiver distances of the records range between 11 and 72 km. Spatial differences in attenuation characteristics were examined by dividing the region into four subregions. The frequency dependence of Qc values between 1.5 and 10 Hz has been inferred utilizing the relationship Qc = Q0 f^n. Q0 values range between 32.7 and 82.1, while n values range between 0.91 and 0.79 for the main region and the four subregions, respectively. The obtained frequency dependence of Qc values for a lapse time of 40 s in the main region is Qc(f) = (49.6 ± 1.0) f^(0.85 ± 0.02). The obtained low Q0 values show that the central west Turkey region is, in general, characterized by high seismic attenuation. Strong frequency and lapse-time dependencies of Qc values for the main region and the four subregions imply tectonic complexity in the region. The attenuation and its frequency-dependence values versus lapse time for the easternmost subregion confirm the slab tear inferred from previous studies. The highest frequency-dependence values, at all lapse times, in the westernmost subregion imply a high degree of heterogeneity, supported by severe anti-clockwise rotation in this area. Lapse-time dependencies of attenuation and its frequency dependencies were examined for two different ranges of event depth
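
    The power-law form Qc = Q0 f^n is usually fit in log-log space; a minimal sketch with hypothetical band estimates (chosen to be roughly consistent with the relation quoted above) follows.

        import numpy as np

        freqs = np.array([1.5, 3.0, 5.0, 7.0, 10.0])       # band centers, Hz
        qc = np.array([70.0, 130.0, 200.0, 260.0, 350.0])  # hypothetical Qc

        n, log_q0 = np.polyfit(np.log10(freqs), np.log10(qc), 1)
        print(f"Q0 = {10 ** log_q0:.1f}, n = {n:.2f}")  # ~49.6 and ~0.85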

  10. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  11. Estimation of furrow irrigation sediment loss using an artificial neural network

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The area irrigated by furrow irrigation in the U.S. has been steadily decreasing but still represents about 20% of the total irrigated area in the U.S. Furrow irrigation sediment loss is a major water quality issue and a method for estimating sediment loss is needed to quantify the environmental imp...

  12. Impact-based earthquake alerts with the U.S. Geological Survey's PAGER system: what's next?

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Garcia, D.; So, E.; Hearne, M.

    2012-01-01

    In September 2010, the USGS began publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses with its Prompt Assessment of Global Earthquakes for Response (PAGER) system. These estimates significantly enhanced the utility of the USGS PAGER system which had been, since 2006, providing estimated population exposures to specific shaking intensities. Quantifying earthquake impacts and communicating estimated losses (and their uncertainties) to the public, the media, humanitarian, and response communities required a new protocol—necessitating the development of an Earthquake Impact Scale—described herein and now deployed with the PAGER system. After two years of PAGER-based impact alerting, we now review operations, hazard calculations, loss models, alerting protocols, and our success rate for recent (2010-2011) events. This review prompts analyses of the strengths, limitations, opportunities, and pressures, allowing clearer definition of future research and development priorities for the PAGER system.

  13. Fuzzy Discrimination Analysis Method for Earthquake Energy K-Class Estimation with respect to Local Magnitude Scale

    NASA Astrophysics Data System (ADS)

    Mumladze, T.; Gachechiladze, J.

    2014-12-01

    The purpose of the present study is to establish a relation between the earthquake energy K-class (the relative energy characteristic, defined as the logarithm of the seismic wave energy E in joules) obtained from analog station data and the local (Richter) magnitude ML obtained from digital seismograms. As these data contain uncertainties, the effective tools of fuzzy discrimination analysis are suggested for subjective estimates. Application of fuzzy analysis methods is an innovative approach to solving the complicated problem of constructing a uniform energy scale through the whole earthquake catalogue; it also avoids many of the data collection problems associated with probabilistic approaches, and it can handle incomplete information, partial inconsistency and fuzzy descriptions of data in a natural way. Another important task is to obtain the frequency-magnitude relation based on the K parameter, calculate the Gutenberg-Richter parameters (a, b), and examine seismic activity in Georgia. Earthquake data files are used for the periods 1985-1990 and 2004-2009, for the area φ = 41°-43.5°, λ = 41°-47°.
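
    Once a uniform magnitude (or K-class) scale is in hand, the Gutenberg-Richter parameters can be estimated; a minimal sketch using Aki's maximum-likelihood b value is given below, with a synthetic catalogue in place of the Georgian data.

        import numpy as np

        def gutenberg_richter(mags, m_min):
            """Aki (1965) maximum-likelihood b value, plus the a value from
            log10 N(M >= m_min) = a - b*m_min. Binned catalogues need an
            additional half-bin correction to m_min."""
            m = np.asarray(mags, dtype=float)
            m = m[m >= m_min]
            b = np.log10(np.e) / (m.mean() - m_min)
            a = np.log10(m.size) + b * m_min
            return a, b

        rng = np.random.default_rng(1)
        mags = 3.0 + rng.exponential(np.log10(np.e), 500)  # synthetic, b ~ 1
        print(gutenberg_richter(mags, m_min=3.0))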

  14. Estimation of soil loss by water erosion in the Chinese Loess Plateau using Universal Soil Loss Equation and GRACE

    NASA Astrophysics Data System (ADS)

    Schnitzer, S.; Seitz, F.; Eicker, A.; Güntner, A.; Wattenbach, M.; Menzel, A.

    2013-06-01

    For the estimation of soil loss by erosion in the strongly affected Chinese Loess Plateau we applied the Universal Soil Loss Equation (USLE) using a number of input data sets (monthly precipitation, soil types, digital elevation model, land cover and soil conservation measures). Calculations were performed in ArcGIS and SAGA. The large-scale soil erosion in the Loess Plateau results in a strong non-hydrological mass change. In order to investigate whether the resulting mass change from USLE may be validated by the gravity field satellite mission GRACE (Gravity Recovery and Climate Experiment), we processed different GRACE level-2 products (ITG, GFZ and CSR). The mass variations estimated in the GRACE trend were relatively close to the observed sediment yield data of the Yellow River. However, the soil losses resulting from two USLE parameterizations were comparatively high since USLE does not consider the sediment delivery ratio. Most eroded soil stays in the study area and only a fraction is exported by the Yellow River. Thus, the resultant mass loss appears to be too small to be resolved by GRACE.
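
    For reference, the USLE itself is a simple product of factors; the values below are illustrative, not those derived for the Loess Plateau.

        def usle(R, K, L, S, C, P):
            """Average annual soil loss A = R*K*L*S*C*P (units depend on the
            factor system used; commonly t ha^-1 yr^-1)."""
            return R * K * L * S * C * P

        # hypothetical factor values for a steep, sparsely covered slope
        print(usle(R=1200.0, K=0.045, L=1.4, S=2.1, C=0.25, P=1.0))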

  15. GPS estimates of microplate motions, northern Caribbean: evidence for a Hispaniola microplate and implications for earthquake hazard

    NASA Astrophysics Data System (ADS)

    Benford, B.; DeMets, C.; Calais, E.

    2012-09-01

    We use elastic block modelling of 126 GPS site velocities from Jamaica, Hispaniola, Puerto Rico and other islands in the northern Caribbean to test for the existence of a Hispaniola microplate and estimate angular velocities for the Gônave, Hispaniola, Puerto Rico-Virgin Islands and two smaller microplates relative to each other and the Caribbean and North America plates. A model in which the Gônave microplate spans the whole plate boundary between the Cayman spreading centre and Mona Passage west of Puerto Rico is rejected at a high confidence level. The data instead require an independently moving Hispaniola microplate between the Mona Passage and a likely diffuse boundary within or offshore from western Hispaniola. Our updated angular velocities predict 6.8 ± 1.0 mm yr-1 of left-lateral slip along the seismically hazardous Enriquillo-Plantain Garden fault zone of southwest Hispaniola, 9.8 ± 2.0 mm yr-1 of slip along the Septentrional fault of northern Hispaniola and ~14-15 mm yr-1 of left-lateral slip along the Oriente fault south of Cuba. They also predict 5.7 ± 1 mm yr-1 of fault-normal motion in the vicinity of the Enriquillo-Plantain Garden fault zone, faster than previously estimated and possibly accommodated by folds and faults in the Enriquillo-Plantain Garden fault zone borderlands. Our new and a previous estimate of Gônave-Caribbean plate motion suggest that enough elastic strain accumulates to generate one to two Mw ~7 earthquakes per century along the Enriquillo-Plantain Garden and nearby faults of southwest Hispaniola. That the 2010 M = 7.0 Haiti earthquake ended a 240-yr-long period of seismic quiescence in this region raises concerns that it could mark the onset of a new earthquake sequence that will relieve elastic strain that has accumulated since the late 18th century.
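
    A back-of-envelope moment-balance check of the "one to two Mw ~7 per century" statement is sketched below; the rigidity and locked fault dimensions are assumed round numbers, not values from the paper.

        import math

        mu = 3.0e10            # rigidity, Pa (assumed)
        length_m = 200e3       # locked fault length, m (assumed)
        width_m = 15e3         # locked seismogenic width, m (assumed)
        slip_m = 0.0068 * 100  # 6.8 mm/yr of slip deficit over a century

        m0 = mu * length_m * width_m * slip_m
        mw = (math.log10(m0) - 9.1) / 1.5
        print(f"M0 = {m0:.2e} N*m -> Mw {mw:.1f}")  # ~ Mw 7.1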

  16. Slip distribution of the 2014 Mw = 8.1 Pisagua, northern Chile, earthquake sequence estimated from coseismic fore-arc surface cracks

    NASA Astrophysics Data System (ADS)

    Loveless, John P.; Scott, Chelsea P.; Allmendinger, Richard W.; González, Gabriel

    2016-10-01

    The 2014 Mw = 8.1 Iquique (Pisagua), Chile, earthquake sequence ruptured a segment of the Nazca-South America subduction zone that last hosted a great earthquake in 1877. The sequence opened >3700 surface cracks in the fore arc, of decameter-scale length and millimeter- to centimeter-scale aperture. We use the strikes of measured cracks, inferred to be perpendicular to coseismically applied tension, to estimate the slip distribution of the main shock and largest aftershock. The slip estimates are compatible with those based on seismic, geodetic, and tsunami data, indicating that geologic observations can also place quantitative constraints on rupture properties. The earthquake sequence ruptured between two asperities inferred from a regional-scale distribution of surface cracks, interpreted to represent a modal or most common rupture scenario for the northern Chile subduction zone. We suggest that past events, including the 1877 earthquake, broke the 2014 Pisagua source area together with adjacent sections in a throughgoing rupture.

  17. Napa Earthquake impact on water systems

    NASA Astrophysics Data System (ADS)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California, on August 24 at 3 a.m. local time, with a magnitude of 6.0. It was the largest earthquake in the San Francisco Bay Area since the 1989 Loma Prieta earthquake. Economic loss topped $1 billion; wine makers cleaned up and estimated the damage to tourism, and around 15,000 cases of cabernet spilled into the garden at the Hess Collection. Earthquakes can raise water pollution risks and potentially cause a water crisis. California has suffered water shortages in recent years, so understanding how to prevent groundwater and surface water pollution from earthquakes could be valuable. This research gives a clear view of the drinking water system in California and of pollution in river systems, as well as an estimate of earthquake impacts on water supply. The Sacramento-San Joaquin River Delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water exports, and saltwater intrusion has reduced freshwater outflows. Strong shaking from a nearby earthquake can cause liquefaction of saturated, loose, sandy soils and could potentially damage major Delta levee systems near Napa. The Napa earthquake is a wake-up call for Southern California: a future event could damage the freshwater supply system.

  18. Combining MODIS and Landsat imagery to estimate and map boreal forest cover loss

    USGS Publications Warehouse

    Potapov, P.; Hansen, M.C.; Stehman, S.V.; Loveland, T.R.; Pittman, K.

    2008-01-01

    Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to its unique role in global timber stock, carbon sequestration and deposition, and high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% versus 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss. © 2008 Elsevier Inc.
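
    The biome-wide estimate and its standard error follow from the standard stratified random sampling estimator; the stratum weights and per-block loss fractions below are illustrative, not the study's data.

        import numpy as np

        def stratified_estimate(weights, samples):
            """Stratum-weighted mean and its standard error."""
            means = np.array([np.mean(s) for s in samples])
            var_of_means = np.array([np.var(s, ddof=1) / len(s) for s in samples])
            w = np.asarray(weights)
            return float(np.sum(w * means)), float(np.sqrt(np.sum(w**2 * var_of_means)))

        high = [0.061, 0.052, 0.070, 0.048]  # sampled block loss fractions
        med = [0.014, 0.009, 0.017]
        low = [0.002, 0.001, 0.004]
        print(stratified_estimate([0.1, 0.3, 0.6], [high, med, low]))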

  19. Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions

    NASA Astrophysics Data System (ADS)

    White, Randall; McCausland, Wendy

    2016-01-01

    We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity including: swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and large non-double couple component to focal mechanisms. Most importantly we show that the intruded magma volume can be simply estimated from the cumulative seismic moment of the VT seismicity as log10 V = 0.77 log10 ΣM0 - 5.32, with volume V in cubic meters and seismic moment ΣM0 in newton meters. Because the cumulative seismic moment can be approximated from the size of just the few largest events, and is quite insensitive to precise locations
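
    The quoted regression converts directly to code:

        import math

        def intruded_volume_m3(cum_moment_nm):
            """log10 V = 0.77*log10(sum M0) - 5.32; V in m^3, M0 in N*m."""
            return 10 ** (0.77 * math.log10(cum_moment_nm) - 5.32)

        # e.g. a VT swarm with a cumulative moment of 1e15 N*m (dominated,
        # as noted above, by the few largest events)
        print(f"{intruded_volume_m3(1e15):.2e} m^3")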

  1. Tag loss can bias Jolly-Seber capture-recapture estimates

    USGS Publications Warehouse

    McDonald, T.L.; Amstrup, Steven C.; Manly, B.F.J.

    2003-01-01

    We identified cases where the Jolly-Seber estimator of population size is biased under tag loss and tag-induced mortality by examining the mathematical arguments and performing computer simulations. We found that, except under certain tag-loss models and high sample sizes, the population size estimators (uncorrected for tag loss) are severely biased high when tag loss or tag-induced mortality occurs. Our findings verify that this misconception about effects of tag loss and tag-induced mortality could have serious consequences for field biologists interested in population size. Reiterating common sense, we encourage those engaged in capture-recapture studies to be careful and humane when handling animals during tagging, to use tags with high retention rates, to double-tag animals when possible, and to strive for the highest capture probabilities possible.

  2. An estimation method of the fault wind turbine power generation loss based on correlation analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Zhu, Shourang; Wang, Wei

    2017-01-01

    A method for estimating the power generation loss of a faulted wind turbine is proposed in this paper. In this method, the wind speed at the faulted turbine is estimated, and the estimated value of the lost power generation is obtained by combining the estimated wind speed with the actual output power characteristic curve of the wind turbine. In the wind speed estimation, correlation analysis is used to select wind speed data correlated with the faulted turbine during its normal operation, and regression analysis is then used to obtain the estimated value of the wind speed. Based on this estimation method, the paper presents an implementation in the wind turbine monitoring system and verifies the effectiveness of the proposed method.
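
    A sketch of the two-step estimate, assuming a linear regression against a correlated reference wind speed and a tabulated power curve (all values hypothetical), is given below.

        import numpy as np

        def estimate_lost_energy(ref_speed_fault_period, slope, intercept,
                                 curve_speeds, curve_power_kw, hours_per_sample):
            """(1) regress the faulted turbine's wind speed from a correlated
            reference signal; (2) map it through the power curve."""
            est_speed = slope * np.asarray(ref_speed_fault_period) + intercept
            est_power = np.interp(est_speed, curve_speeds, curve_power_kw)
            return est_power.sum() * hours_per_sample  # kWh lost during outage

        # regression fit from the normal-operation period (illustrative)
        slope, intercept = 0.96, 0.3
        curve_v = [3, 5, 8, 11, 14, 25]            # m/s
        curve_p = [0, 150, 900, 1900, 2000, 2000]  # kW, hypothetical 2 MW turbine
        print(estimate_lost_energy([7.2, 8.5, 9.1], slope, intercept,
                                   curve_v, curve_p, hours_per_sample=1.0))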

  3. Estimating Phosphorus Loss at the Whole-Farm Scale with User-Friendly Models

    NASA Astrophysics Data System (ADS)

    Vadas, P.; Powell, M.; Brink, G.; Busch, D.; Good, L.

    2014-12-01

    Phosphorus (P) loss from agricultural fields and delivery to surface waters persists as a water quality impairment issue. For dairy farms, P can be lost from cropland, pastures, barnyards, and open-air cattle lots; and all these sources must be evaluated to determine which ones are a priority for P loss remediation. We used interview surveys to document land use, cattle herd characteristics, and manure management for four grazing-based dairy farms in Wisconsin, USA. We then used the APLE and Snap-Plus models to estimate annual P loss from all areas on these farms and determine their relative contribution to whole-farm P loss. At the whole-farm level, average annual P loss (kg ha-1) from grazing-based dairy farms was low (0.6 to 1.8 kg ha-1), generally because a significant portion of land was in permanently vegetated pastures or hay and had low erosion. However, there were areas on the farms that represented sources of significant P loss. For cropland, the greatest P loss was from areas with exposed soil, typically for corn production, and especially on steeper sloping land. The farm areas with the greatest P loss had concentrated animal housing, including barnyards, and over-wintering and young-stock lots. These areas can represent from about 5% to almost 30% of total farm P loss, depending on lot management and P loss from other land uses. Our project builds on research to show that producer surveys can provide reliable management information to assess whole-farm P loss. It also shows that we can use models like RUSLE2, Snap-Plus, and APLE to rapidly, reliably, and quantitatively estimate P loss in runoff from all areas on a dairy farm and identify areas in greatest need of alternative management to reduce P loss.

  4. Estimation of ground motion for Bhuj (26 January 2001; Mw 7.6 and for future earthquakes in India

    USGS Publications Warehouse

    Singh, S.K.; Bansal, B.K.; Bhattacharya, S.N.; Pacheco, J.F.; Dattatrayam, R.S.; Ordaz, M.; Suresh, G.; ,; Hough, S.E.

    2003-01-01

    Only five moderate and large earthquakes (Mw ≥ 5.7) in India (three in the Indian shield region and two in the Himalayan arc region) have given rise to multiple strong ground-motion recordings. Near-source data are available for only two of these events. The Bhuj earthquake (Mw 7.6), which occurred in the shield region, gave rise to useful recordings at distances exceeding 550 km. Because of the scarcity of the data, we use the stochastic method to estimate ground motions. We assume that (1) S waves dominate at R < 100 km and Lg waves at R ≥ 100 km, (2) Q = 508 f^0.48 is valid for the Indian shield as well as the Himalayan arc region, (3) the effective duration is given by fc^-1 + 0.05R, where fc is the corner frequency and R is the hypocentral distance in kilometers, and (4) the acceleration spectra are sharply cut off beyond 35 Hz. We use two finite-source stochastic models. One is an approximate model that reduces to the ω^2-source model at distances greater than about twice the source dimension. This model has the advantage that the ground motion is controlled by the familiar stress parameter, Δσ. In the other finite-source model, which is more reliable for near-source ground-motion estimation, the high-frequency radiation is controlled by the strength factor, sfact, a quantity that is physically related to the maximum slip rate on the fault. We estimate the Δσ needed to fit the observed Amax and Vmax data of each earthquake (which are mostly in the far field). The corresponding sfact is obtained by requiring that the predicted curves from the two models match each other in the far field up to a distance of about 500 km. The results show: (1) The Δσ that explains Amax data for shield events may be a function of depth, increasing from ~50 bars at 10 km to ~400 bars at 36 km. The corresponding sfact values range from 1.0-2.0. The Δσ values for the two Himalayan arc events are 75 and 150 bars (sfact = 1.0 and 1.4). (2) The Δσ required to explain Vmax data
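
    A compressed sketch of the point-source stochastic spectrum assembled from the ingredients above (Brune ω^2 source, Q(f) = 508 f^0.48, sharp 35 Hz cutoff) follows. The radiation-pattern/free-surface spectral constant and site terms are omitted, and the event parameters are illustrative.

        import numpy as np

        def fourier_accel_shape(f, m0_nm, stress_bar, r_km, beta_kms=3.5, fmax=35.0):
            """Unscaled omega-squared acceleration spectrum times path terms.
            The spectral constant (radiation pattern, free surface) is omitted."""
            m0_dyn_cm = m0_nm * 1e7
            fc = 4.9e6 * beta_kms * (stress_bar / m0_dyn_cm) ** (1.0 / 3.0)  # Brune
            source = m0_nm * (2 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
            q = 508.0 * f ** 0.48
            path = np.exp(-np.pi * f * r_km / (q * beta_kms)) / r_km
            return source * path * (f <= fmax)   # sharp cutoff beyond 35 Hz

        f = np.linspace(0.1, 40.0, 400)
        spec = fourier_accel_shape(f, m0_nm=3.2e20, stress_bar=100.0, r_km=100.0)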

  5. Characteristics of radiation and propagation of seismic waves in the Baikal Rift Zone estimated by simulations of acceleration time histories of the recorded earthquakes

    NASA Astrophysics Data System (ADS)

    Pavlenko, O. V.; Tubanov, Ts. A.

    2017-01-01

    The regularities in the radiation and propagation of seismic waves within the Baikal Rift Zone in Buryatia are studied to estimate the ground motion parameters of probable future strong earthquakes. The regional parameters of seismic radiation and propagation are estimated by stochastic simulation (which provides the closest agreement between the calculations and observations) of the acceleration time histories of the earthquakes recorded by the Ulan-Ude seismic station. The acceleration time histories of the strongest earthquakes (Mw 3.4-4.8) that occurred in 2006-2011 at epicentral distances of 96-125 km with source depths of 8-12 km have been modeled. The calculations are conducted with estimates of the Q-factor previously obtained for the region. The frequency-dependent attenuation and geometrical spreading are estimated from data on the deep structure of the crust and upper mantle (velocity sections) in the Ulan-Ude region, and the parameters determining the waveforms and duration of the acceleration time histories are found by fitting. These parameters describe all the considered earthquakes fairly well. The Ulan-Ude station can be considered a reference bedrock station with minimal local effects. The obtained estimates of the parameters of seismic radiation and propagation can be used for forecasting ground motion from future strong earthquakes and for constructing seismic zoning maps for Buryatia.

  6. Preliminary estimation of high-frequency (4-20 Hz) energy released from the 2016 Kumamoto, Japan, earthquake sequence

    NASA Astrophysics Data System (ADS)

    Sawazaki, Kaoru; Nakahara, Hisashi; Shiomi, Katsuhiko

    2016-11-01

    We estimate the high-frequency (4-20 Hz) energy release due to the 2016 Kumamoto, Japan, earthquake sequence within the time period from April 14 to 26 through envelope inversion analysis applied to the Hi-net continuous seismograms. We especially focus on energy releases after each of the April 14 MJMA 6.5 and the April 16 MJMA 7.3 earthquakes. The cumulative energy release from aftershocks of the April 14 event reaches 60% of that from the April 14 event itself by the lapse time of 27 h (pre-April 16 period). On the other hand, the cumulative energy release from aftershocks of the April 16 event reaches only 11 and 13% of that from the April 16 event itself by the lapse times of 27 h and 10 days (post-April 16 period), respectively. This discrepancy in the normalized cumulative energy release (NCER) indicates that the April 14 event was followed by much larger relative aftershock productivity than the April 16 event. Thus, NCER would provide information that reflects relative aftershock productivity and the ongoing seismicity pattern after a large earthquake. We also find that the temporal decay of the energy release rate obeys a power law. The exponent pE of the power-law decay is estimated to be 1.7-2.1, which is much larger than the typical p value of the Omori-Utsu law, which is only slightly larger than 1. We propose a simple relationship, pE = βp/b, which combines the p value of the Omori-Utsu law, the b value of the Gutenberg-Richter law, and the β value of the magnitude-energy release relationship.
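    A quick plausibility check of the proposed relation, with representative values that are assumptions rather than the paper's estimates:

```python
# Illustrative check of p_E = beta * p / b:
# beta = 1.5 from the standard magnitude-energy scaling log10(E) ~ 1.5 M,
# p ~ 1.1 (Omori-Utsu), b ~ 0.9 (Gutenberg-Richter); all assumed values.
beta, p, b = 1.5, 1.1, 0.9
p_E = beta * p / b
print(f"p_E = {p_E:.2f}")  # ~1.83, inside the 1.7-2.1 range reported above
```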

  7. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated using field methods and a database of coseismic effects created as a GIS MapInfo application with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effect from the epicenter and from the causative fault (Lunina et al., 2014). The estimated limit distance to the fault for the largest event (Ms = 8.1) is 130 km, about 3.5 times shorter than the limit distance to the epicenter, which is 450 km. Moreover, the farther a site lies from the causative fault, the fewer liquefaction cases occur: 93% of them are within 40 km of the fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows distances within 8 km, with 69% of all cases within 1 km. As a result, predictive models have been created for the locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested relating the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height, and intensity index of clastic dikes) to Ms and to local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The obtained results form a basis for modeling the distribution of this geohazard for prediction purposes and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author would like to express their gratitude to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing the laboratory to carry out this research, and to the Russian Scientific Foundation for financial support (Grant 14-17-00007).

  8. Variable anelastic attenuation and site effect in estimating source parameters of various major earthquakes including the Mw 7.8 Nepal and Mw 7.5 Hindu Kush earthquakes by using far-field strong-motion data

    NASA Astrophysics Data System (ADS)

    Kumar, Naresh; Kumar, Parveen; Chauhan, Vishal; Hazarika, Devajit

    2016-12-01

    Strong-motion records of the recent Gorkha, Nepal earthquake (Mw 7.8), its strong aftershocks, and seismic events in the Hindu Kush region have been analysed to estimate source parameters. The Mw 7.8 Gorkha, Nepal earthquake of 25 April 2015 and six of its aftershocks in the magnitude range 5.3-7.3 were recorded at the Multi-Parametric Geophysical Observatory, Ghuttu, Garhwal Himalaya (India), >600 km west of the epicentre of the Gorkha main shock. Acceleration data from eight earthquakes that occurred in the Hindu Kush region were also recorded at this observatory, which is located >1000 km east of the epicentre of the Mw 7.5 Hindu Kush earthquake of 26 October 2015. The shear-wave spectra of the acceleration records are corrected for the possible effects of anelastic attenuation at both the source and the recording site, as well as for site amplification. The strong-motion data of six local earthquakes are used to estimate the site amplification and the shear-wave quality factor (Qβ) at the recording site. The frequency-dependent Qβ(f) = 124f^0.98 is computed at the Ghuttu station by using an inversion technique. The corrected spectrum is compared with the theoretical spectrum obtained from Brune's circular model for the horizontal components using a grid search algorithm. The computed seismic moment, stress drop, and source radius of the earthquakes used in this work range over 8.20 × 10^16 to 5.72 × 10^20 Nm, 7.1 to 50.6 bars, and 3.55 to 36.70 km, respectively. The results match the available values obtained by other agencies.
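    A minimal sketch of the source-parameter step, using Brune's circular-model relations; the moment matches the lower end of the range above, while the corner frequency and shear-wave velocity are assumed values:

```python
import numpy as np

def brune_source_params(m0_nm, fc_hz, beta_kms=3.5):
    """Source radius and stress drop from Brune's circular model:
    r = 2.34 beta / (2 pi fc), stress drop = 7 M0 / (16 r^3).
    m0 in N*m, fc in Hz; beta is an assumed shear-wave velocity."""
    beta_ms = beta_kms * 1e3
    r_m = 2.34 * beta_ms / (2.0 * np.pi * fc_hz)  # source radius (m)
    dsigma_pa = 7.0 * m0_nm / (16.0 * r_m**3)     # stress drop (Pa)
    return r_m / 1e3, dsigma_pa / 1e5             # km, bars

r_km, dsigma_bars = brune_source_params(m0_nm=8.2e16, fc_hz=0.4)
print(f"radius ~ {r_km:.1f} km, stress drop ~ {dsigma_bars:.1f} bars")
```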

  9. The Use of Streambed Temperatures to Estimate Losses in an Arid Environment

    NASA Astrophysics Data System (ADS)

    Naranjo, R. C.; Young, M. H.; Niswonger, R.; Miller, J. J.; French, R. H.

    2001-12-01

    Quantifying channel transmission losses in arid environments is important for a variety of reasons, ranging from designing flood-control mitigation structures to estimating ground-water recharge. To quantify the losses in an alluvial channel, an experiment was performed on a 2 km reach of a channel on an alluvial fan located on the U.S. Department of Energy's Nevada Test Site. The channel was subjected to three separate flow events. Transmission losses were estimated using discharge monitoring and a subsurface temperature modeling approach. Four stations were equipped to continuously monitor stage and temperature. Streambed temperatures measured at 0-, 30-, 50- and 100-cm depths were used to calibrate VS2DH, a two-dimensional, variably saturated flow model (Healy and Ronan, 1996). Average losses based on the difference in flow between each reach indicate that 21, 27, and 53 percent of the flow was lost downstream of the source. Lower losses occurred within the reaches that contained caliche, and the largest losses were measured at the lower reach, which mostly contained loose, unconsolidated material. As expected, the thermal gradients corresponded well with the bed material and the measured losses. Large thermal gradients were detected at the locations where caliche was present, suggesting conduction-dominated heat transfer, whereas the lower reach showed the smallest thermal gradient, suggesting advection-dominated heat transfer. Losses predicted by VS2DH are within an order of magnitude of the losses estimated from discharge measurements. The differences in losses are a result of both the spatial extent to which the modeling results are applied and unmeasured lateral subsurface flow.

  10. The use of streambed temperatures to estimate transmission losses on an experimental channel.

    SciTech Connect

    Ramon C. Naranjo; Michael H. Young; Richard Niswonger; Julianne J. Miller; Richard H. French

    2001-10-18

    Quantifying channel transmission losses in arid environments is important for a variety of reasons, from the engineering design of flood-control structures to evaluating recharge. To quantify the losses in an alluvial channel, an experiment was performed on a 2-km reach of an alluvial fan located on the Nevada Test Site. The channel was subjected to three separate flow events. Transmission losses were estimated using standard discharge monitoring and a subsurface temperature modeling approach. Four stations were equipped to continuously monitor stage, temperature, and water content. Streambed temperatures measured at 0-, 30-, 50-, and 100-cm depths were used to calibrate VS2DH, a two-dimensional, variably saturated flow model. Average losses based on the difference in flow between each station indicate that 21 percent, 27 percent, and 53 percent of the flow was lost downgradient of the source. Results from the temperature monitoring identified locations with large thermal gradients, suggesting conduction-dominated heat transfer in streambed sediments where caliche-cemented surfaces were present. Transmission losses at the lowermost segment corresponded to the smallest thermal gradient, suggesting advection-dominated heat transfer. Losses predicted by VS2DH are within an order of magnitude of the losses estimated from discharge measurements. The differences in losses are a result of the spatial extent to which the modeling results are applied and lateral subsurface flow.

  11. Real time earthquake information and tsunami estimation system for Indonesia, Philippines and Central-South American regions

    NASA Astrophysics Data System (ADS)

    Pulido Hernandez, N. E.; Inazu, D.; Saito, T.; Senda, J.; Fukuyama, E.; Kumagai, H.

    2015-12-01

    Southeast Asia and the Central-South American regions are among the most seismically active regions in the world. To contribute to the understanding of earthquake source processes, the National Research Institute for Earth Science and Disaster Prevention (NIED) has maintained the International Seismic Network (ISN) since 2007. Continuous seismic waveforms from 294 broadband seismic stations in the Indonesia, Philippines, and Central-South America regions are received in real time at NIED and used for automatic location of seismic events. Using these data, we perform automatic and manual estimation of the moment tensors of seismic events (Mw>4.5) by using the SWIFT program developed at NIED. We simulate the propagation of local tsunamis in these regions using a tsunami simulation code and visualization system developed at NIED, combined with the CMT parameters estimated by SWIFT. The goals of the system are to provide rapid and reliable earthquake and tsunami information, in particular for large seismic events, and to produce an appropriate database of earthquake source parameters and tsunami simulations for research. The system uses the hypocenter location and magnitude of earthquakes automatically determined at NIED by the SeisComP3 system (GFZ) from the continuous seismic waveforms in the region to perform the automated calculation of moment tensors by SWIFT, and then carries out the automatic simulation and visualization of the tsunami. The system generates maps of maximum tsunami heights within the target regions and along the coasts and displays them with the fault model parameters used for the tsunami simulations. Tsunami calculations are performed for all events with available automatic SWIFT/CMT solutions. Tsunami calculations are re-computed using SWIFT manual solutions for events with Mw>5.5 and centroid depths shallower than 100 km. Revised maximum tsunami heights as well as animations of tsunami propagation are also calculated and displayed for the two double-couple solutions by SWIFT

  12. Simultaneous estimation of b-values and detection rates of earthquakes for the application to aftershock probability forecasting

    NASA Astrophysics Data System (ADS)

    Katsura, K.; Ogata, Y.

    2004-12-01

    Reasenberg and Jones [Science, 1989, 1994] proposed aftershock probability forecasting based on the joint distribution [Utsu, J. Fac. Sci. Hokkaido Univ., 1970] of the modified Omori formula of aftershock decay and the Gutenberg-Richter law of magnitude frequency, where the respective parameters are estimated by the maximum likelihood method [Ogata, J. Phys. Earth, 1983; Utsu, Geophys. Bull. Hokkaido Univ., 1965; Aki, Bull. Earthq. Res. Inst., 1965]. The public forecast has been implemented by the responsible agencies in California and Japan. However, a considerable difficulty in the above procedure is that, owing to the contamination of arriving seismic waves, the detection rate of aftershocks is extremely low during the period immediately after the main shock, say, during the first day, when the forecasting is most critical for the public in the affected area. Therefore, for forecasting a probability during such a period, these agencies adopt a generic model with a set of standard parameter values for California or Japan. For an effective and realistic estimation, we propose utilizing the statistical model introduced by Ogata and Katsura [Geophys. J. Int., 1993] for the simultaneous estimation of the b-value of the Gutenberg-Richter law together with the detection rate (probability) of earthquakes in each magnitude band from the data of all detected events, where both parameters are allowed to change in time. Thus, by using all detected aftershocks from the beginning of the period, we can estimate the underlying modified Omori rate of both detected and undetected events and their b-value changes, taking the time-varying missing rates of events into account. A similar computation is applied to the ETAS model for complex aftershock activity or regional seismicity where substantial missing events are expected immediately after a large aftershock or another strong earthquake in the vicinity. Demonstrations of the present procedure will be shown for the recent examples
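    A minimal sketch of the detection-rate idea in the spirit of Ogata and Katsura (1993): the Gutenberg-Richter density is thinned by a cumulative-normal detection probability. In the actual method these parameters are allowed to vary with time; all values below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def detected_rate(m, a=4.0, b=0.9, mu=1.8, sigma=0.3):
    """Expected rate of *detected* events per unit magnitude: the
    Gutenberg-Richter density b ln(10) 10^(a - b m) thinned by a
    detection probability q(m) = Phi((m - mu) / sigma). All parameter
    values here are illustrative, and in practice vary with time."""
    gr_density = np.log(10) * b * 10**(a - b * m)
    q = norm.cdf((m - mu) / sigma)
    return gr_density * q

mags = np.arange(0.5, 5.1, 0.5)
print(detected_rate(mags))  # rolls over below mu, follows G-R above it
```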

  13. An atlas of ShakeMaps for selected global earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  14. Estimation of slip scenarios of mega-thrust earthquakes and strong motion simulations for Central Andes, Peru

    NASA Astrophysics Data System (ADS)

    Pulido, N.; Tavera, H.; Aguilar, Z.; Chlieh, M.; Calderon, D.; Sekiguchi, T.; Nakai, S.; Yamazaki, F.

    2012-12-01

    We have developed a methodology for the estimation of slip scenarios for megathrust earthquakes based on a model of interseismic coupling (ISC) distribution in subduction margins obtained from geodetic data, as well as information on the recurrence of historical earthquakes. This geodetic slip model (GSM) delineates the long-wavelength asperities within the megathrust. For the simulation of strong ground motion it becomes necessary to introduce short-wavelength heterogeneities into the source slip to be able to efficiently simulate high-frequency ground motions. To achieve this purpose we elaborate "broadband" source models constructed by combining the GSM with several short-wavelength slip distributions obtained from a von Karman PSD function with random phases. Our application of the method to the Central Andes in Peru shows that this region presently has the potential of generating an earthquake with moment magnitude 8.9, with a peak slip of 17 m and a source area of approximately 500 km along strike and 165 km along dip. For the strong-motion simulations we constructed 12 broadband slip models and considered 9 possible hypocenter locations for each model. We performed strong-motion simulations for the whole Central Andes region (Peru), spanning an area from the Nazca ridge (16°S) to the Mendana fracture (9°S). For this purpose we use the hybrid strong-motion simulation method of Pulido et al. (2004), improved to handle a general slip distribution. Our simulated PGA and PGV distributions indicate that a region of at least 500 km along the coast of the Central Andes is subjected to an MMI intensity of approximately 8, for the slip model that yielded the largest ground motions among the 12 slip models considered, averaged over all assumed hypocenter locations. This result is in agreement with the macroseismic intensity distribution estimated for the great 1746 earthquake (M~9) in the Central Andes (Dorbath et al. 1990). Our results indicate that the simulated PGA and PGV for
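    A minimal sketch of the short-wavelength slip perturbation described above, assuming a von Karman power spectrum with illustrative correlation length and Hurst exponent (not the study's values); taking the real part is a shortcut for enforcing Hermitian symmetry:

```python
import numpy as np

def von_karman_slip(nx=128, nz=64, dx_km=4.0, a_km=40.0, hurst=0.75, seed=0):
    """Random slip perturbation with a von Karman power spectrum
    ~ (1 + k^2 a^2)^-(H + 1) and uniformly random phases. Correlation
    length a_km and Hurst exponent are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx_km)
    kz = 2 * np.pi * np.fft.fftfreq(nz, d=dx_km)
    k2 = kx[None, :]**2 + kz[:, None]**2
    amp = (1.0 + k2 * a_km**2) ** (-(hurst + 1) / 2.0)  # sqrt of the PSD
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    slip = np.real(np.fft.ifft2(amp * phase))
    return (slip - slip.mean()) / slip.std()  # zero-mean, unit-variance field

# Short-wavelength heterogeneity to superimpose on the long-wavelength GSM:
print(von_karman_slip().shape)  # (64, 128)
```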

  15. Estimating field-of-view loss in bathymetric lidar: application to large-scale simulations.

    PubMed

    Carr, Domenic; Tuell, Grady

    2014-07-20

    When designing a bathymetric lidar, it is important to study simulated waveforms for various combinations of system and environmental parameters. To predict a system's ranging accuracy, it is often necessary to analyze thousands of waveforms. In these large-scale simulations, estimating field-of-view loss is a challenge because the calculation is complex and computationally intensive. This paper describes a new procedure for quickly approximating this loss, and illustrates how it can be used to efficiently predict ranging accuracy.

  16. Programmable calculator program for linear somatic cell scores to estimate mastitis yield losses.

    PubMed

    Kirk, J H

    1984-02-01

    A programmable calculator program calculates the loss of milk yield in dairy cows based on linear somatic cell count scores. The program displays the distribution of the herd by lactation number and linear score for the present situation and for an optimal goal situation. Yield loss is given in pounds and dollars by cow and herd. The program also estimates optimal milk production and how many fewer cows would be needed at the goal level of mastitis infection.
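    A minimal sketch of the kind of calculation such a program encodes. The linear score definition is standard; the 200/400 lb per-unit loss coefficients are common dairy extension guidance assumed here, not values taken from the paper:

```python
import math

def linear_score(scc_per_ml):
    """Somatic cell linear score: LS = log2(SCC / 100,000) + 3."""
    return math.log2(scc_per_ml / 100_000) + 3

def lactation_milk_loss_lb(ls, first_lactation=False, base_ls=2.0):
    """Milk loss per lactation grows with each linear score unit above a
    base score. The 200/400 lb coefficients are assumed extension
    guidance, not values from the paper."""
    per_unit = 200.0 if first_lactation else 400.0
    return max(ls - base_ls, 0.0) * per_unit

ls = linear_score(400_000)             # SCC of 400,000 cells/mL -> LS 5.0
print(ls, lactation_milk_loss_lb(ls))  # 5.0 1200.0
```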

  17. Is CO radio line emission a reliable mass-loss-rate estimator for AGB stars?

    NASA Astrophysics Data System (ADS)

    Ramstedt, Sofia; Schöier, Fredrik; Olofsson, Hans

    The final evolutionary stage of low- to intermediate-mass stars, as they evolve along the asymptotic giant branch (AGB), is characterized by mass loss so intense (10^-8 to 10^-4 Msol yr^-1) that eventually the AGB lifetime is determined by it. The material lost by the star is enriched in nucleosynthesized material, and thus AGB stars play an important role in the chemical evolution of galaxies. A reliable mass-loss-rate estimator is of utmost importance in order to increase our understanding of late stellar evolution and to reach conclusions about the amount of enriched material recycled by AGB stars. For low-mass-loss-rate AGB stars, modelling of observed rotational CO radio line emission has proven to be a good tool for estimating mass-loss rates [Olofsson et al. (2002) for M-type stars and Schöier & Olofsson (2001) for carbon stars], but several lines are needed to obtain good constraints. For high-mass-loss-rate objects the situation is more complicated, the main reason being saturation of the optically thick CO lines. Moreover, Kemper et al. (2003) introduced temporal changes in the mass-loss rate, or alternatively spatially varying turbulent motions, in order to explain observed line-intensity ratios. This calls into question whether it is possible to model the circumstellar envelope using a constant mass-loss rate, or whether the physical structure of the outflow is more complex than normally assumed. We present observations of CO radio line emission for a sample of intermediate- to high-mass-loss-rate AGB stars. The lowest rotational transition line (J = 1-0) was observed at OSO, and the higher-frequency lines (J = 2-1, 3-2, 4-3 and in some cases 6-5) were observed at the JCMT. Using a detailed non-LTE radiative transfer model we are able to reproduce observed line ratios (Figure 1) and constrain the mass-loss rates for the whole sample, using a constant mass-loss rate and a "standard" circumstellar envelope model. However, for some objects only a lower limit to

  18. Research on earthquake prediction from infrared cloud images

    NASA Astrophysics Data System (ADS)

    Fan, Jing; Chen, Zhong; Yan, Liang; Gong, Jing; Wang, Dong

    2015-12-01

    In recent years, large earthquakes have occurred frequently all over the world. In the face of these inevitable natural disasters, earthquake prediction is particularly important for avoiding further loss of life and property. Many achievements in predicting earthquakes from remote sensing images have been obtained in the last few decades, but the traditional prediction methods have the limitation that they cannot forecast the epicenter location accurately and automatically. To solve this problem, a new earthquake prediction method based on extracting the texture and occurrence frequency of earthquake clouds is proposed in this paper. First, the infrared cloud images are enhanced. Second, the texture feature vector of each pixel is extracted. Those pixels are then classified and converted into several small suspect areas. Finally, the suspect areas are tracked to estimate the possible epicenter location. An inversion experiment on the Ludian earthquake shows that this approach can forecast the seismic center feasibly and accurately.

  19. Estimation of fault propagation distance from fold shape: Implications for earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Allmendinger, Richard W.; Shaw, John H.

    2000-12-01

    A numerical grid search using the trishear kinematic model can be used to extract both slip and the distance that a fault tip line has propagated during growth of a fault-propagation fold. The propagation distance defines the initial position of the tip line at the onset of slip. In the Santa Fe Springs anticline of the Los Angeles basin, we show that the tip line of the underlying Puente Hills thrust fault initiated at the same position as the 1987 magnitude 6.0 Whittier Narrows earthquake.

  20. Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model

    USGS Publications Warehouse

    Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David

    2012-01-01

    The Atlantic horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast which has, in recent years, experienced new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters, such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting or complete loss of tags could be occurring. As is apparent from the assumptions of a Jolly-Seber model, loss of tags can bias estimates and lead to underestimation of survival. Given that uncertainty, as a first step towards obtaining an unbiased estimate of adult survival, we estimated the rate of tag loss. Using data from a double-tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model to allow the mortality of each tag to be estimated separately and simultaneously.
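    Before fitting a full multi-state model, a closed-form check is possible; a minimal sketch under the assumption of independent, equal per-tag loss, with hypothetical counts:

```python
# If each tag is retained independently with probability r by resighting,
# then among resighted crabs E[one tag]/E[both tags] = 2(1 - r)/r, so
# r = 2 / (2 + observed ratio). The counts below are hypothetical.
n_both, n_one = 420, 96           # resighted with both tags / exactly one
ratio = n_one / n_both
r_hat = 2.0 / (2.0 + ratio)       # per-tag retention probability
print(f"estimated per-tag retention: {r_hat:.3f}")
```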

  1. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimating the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.
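    A toy screening function in the spirit of the chemodynamic premise; the thresholds are illustrative rules of thumb assumed here, not values from the paper:

```python
def holding_time_loss_flags(kh_dimensionless, log_kow):
    """Flag probable holding-time loss processes from two chemodynamic
    properties: a dimensionless (air/water) Henry's law constant and
    log Kow. Both thresholds are assumed rules of thumb."""
    flags = []
    if kh_dimensionless > 1e-3:
        flags.append("volatilization to headspace")
    if log_kow > 4.0:
        flags.append("sorption to bottle walls/cap")
    if not flags:
        flags.append("no major physical loss flagged; consider transformation")
    return flags

print(holding_time_loss_flags(2.3e-2, 2.1))  # a volatile, weakly sorbing analyte
```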

  2. Method for estimating spatially variable seepage loss and hydraulic conductivity in intermittent and ephemeral streams

    USGS Publications Warehouse

    Niswonger, R.G.; Prudic, D.E.; Fogg, G.E.; Stonestrom, D.A.; Buckland, E.M.

    2008-01-01

    A method is presented for estimating seepage loss and streambed hydraulic conductivity along intermittent and ephemeral streams using streamflow front velocities in initially dry channels. The method uses the kinematic wave equation for routing streamflow in channels coupled to Philip's equation for infiltration. The coupled model considers variations in seepage loss both across and along the channel. Water redistribution in the unsaturated zone is also represented in the model. Sensitivity of the streamflow front velocity to parameters used for calculating seepage loss and for routing streamflow shows that the streambed hydraulic conductivity has the greatest sensitivity for moderate to large seepage loss rates. Channel roughness, geometry, and slope are most important for low seepage loss rates; however, streambed hydraulic conductivity is still important for values greater than 0.008 m/d. Two example applications are presented to demonstrate the utility of the method. Copyright 2008 by the American Geophysical Union.
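    A minimal sketch of the infiltration side of the coupled model, using Philip's two-term equation; the sorptivity is an illustrative value, the 0.008 m/d conductivity echoes the sensitivity threshold quoted above, and the kinematic-wave streamflow router is omitted:

```python
import numpy as np

def philip_infiltration_rate(t_hr, sorptivity=0.05, k_md=0.008):
    """Philip's two-term infiltration rate i(t) = 0.5 S t^-0.5 + K.
    S (m/hr^0.5) is an assumed value; K = 0.008 m/d matches the
    conductivity threshold noted in the abstract."""
    k_hr = k_md / 24.0  # m/d -> m/hr
    return 0.5 * sorptivity * t_hr**-0.5 + k_hr

t = np.array([0.1, 0.5, 1.0, 4.0])  # hours since the wetting front arrived
print(philip_infiltration_rate(t))  # rate decays toward K as the bed wets up
```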

  3. Estimation of insurance-related losses resulting from coastal flooding in France

    NASA Astrophysics Data System (ADS)

    Naulin, J. P.; Moncoulon, D.; Le Roy, S.; Pedreros, R.; Idier, D.; Oliveros, C.

    2016-01-01

    A model has been developed in order to estimate insurance-related losses caused by coastal flooding in France. The deterministic part of the model aims at identifying the potentially flood-impacted sectors and the subsequent insured losses a few days after the occurrence of a storm surge event on any part of the French coast. This deterministic component is a combination of three models: a hazard model, a vulnerability model, and a damage model. The first model uses the PREVIMER system to estimate the water level resulting from the simultaneous occurrence of a high tide and a surge caused by a meteorological event along the coast. A storage-cell flood model propagates these water levels over the land and thus determines the probable inundated areas. The vulnerability model, for its part, is derived from the insurance schedules and claims database, combining information such as risk type, class of business, and insured values. The outcome of the vulnerability and hazard models are then combined with the damage model to estimate the event damage and potential insured losses. This system shows satisfactory results in the estimation of the magnitude of the known losses related to the flood caused by the Xynthia storm. However, it also appears very sensitive to the water height estimated during the flood period, conditioned by the junction between seawater levels and coastal topography, the accuracy for which is still limited by the amount of information in the system.

  4. Estimation of insurance related losses resulting from coastal flooding in France

    NASA Astrophysics Data System (ADS)

    Naulin, J. P.; Moncoulon, D.; Le Roy, S.; Pedreros, R.; Idier, D.; Oliveros, C.

    2015-04-01

    A model has been developed in order to estimate insurance-related losses caused by coastal flooding in France. The deterministic part of the model aims at identifying the potentially flood-impacted sectors and the subsequent insured losses a few days after the occurrence of a storm surge event on any part of the French coast. This deterministic component is a combination of three models: a hazard model, a vulnerability model and a damage model. The first model uses the PREVIMER system to estimate the water level along the coast. A storage-cell flood model propagates these water levels over the land and thus determines the probable inundated areas. The vulnerability model, for its part, is derived from the insurance schedules and claims database; combining information such as risk type, class of business and insured values. The outcome of the vulnerability and hazard models are then combined with the damage model to estimate the event damage and potential insured losses. This system shows satisfactory results in the estimation of the magnitude of the known losses related to the flood caused by the Xynthia storm. However, it also appears very sensitive to the water height estimated during the flood period, conditioned by the junction between sea water levels and coastal topography for which the accuracy is still limited in the system.

  5. A smartphone application for earthquakes that matter!

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many felt earthquakes as possible, regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the US Geological Survey, in which the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking they have just felt. Altogether, we estimate that the number of detected felt earthquakes is around 1,000 per year, compared with the 35,000 earthquakes annually reported by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service @LastQuake. We will present the identification process of the earthquakes that matter, the smartphone application itself (to be released in May) and its future evolutions.

  6. Uncertainty in sample estimates and the implicit loss function for soil information.

    NASA Astrophysics Data System (ADS)

    Lark, Murray

    2015-04-01

    One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
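    A toy version of the idea, under simple assumed models (linear sampling cost, squared-error loss proportional to variance over sample size); all coefficients are hypothetical:

```python
# Total expected cost: c0 + c1*n + L * sigma^2 / n. Minimizing over n gives
# n* = sqrt(L * sigma^2 / c1); inverting yields the loss-function weight L
# that makes an actually-chosen sample size rational (the "implicit loss").
def implicit_loss_weight(n_chosen, c1=50.0, sigma2=4.0):
    """L implied by a chosen sample size, in cost units per unit squared
    error. c1 (cost per sample) and sigma2 (variance) are hypothetical."""
    return n_chosen**2 * c1 / sigma2

for n in (25, 100, 400):
    print(n, implicit_loss_weight(n))  # larger chosen n implies larger L
```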

  7. Estimation of Age Using Alveolar Bone Loss: Forensic and Anthropological Applications.

    PubMed

    Ruquet, Michel; Saliba-Serre, Bérengère; Tardivo, Delphine; Foti, Bruno

    2015-09-01

    The objective of this study was to apply a new odontological methodological approach, based on radiography, to age estimation. The study comprised 397 participants aged between 9 and 87 years. A clinical examination and a radiographic assessment of alveolar bone loss were performed. Direct measures of alveolar bone level were recorded using CT scans. A medical examination report was attached to the investigation file. Because of the link between alveolar bone loss and age, a model was proposed to enable simple, reliable, and quick age estimation. This work adds new arguments for age estimation. The study aimed to develop a simple, standardized, and reproducible technique for age estimation of adults in present-day populations in forensic medicine and in ancient populations in funeral anthropology.

  8. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used, and the source-time function is constrained to be a step function at the source (δ-function in the far field). Four of the events were of special interest, and long-period P- and SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, in many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that the source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.
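    A schematic of the linear time-domain inversion step at the heart of such algorithms, with random stand-ins for the Green's function matrix and data (not real seismograms):

```python
import numpy as np

# Observed waveforms d are modeled as G @ m, where the six columns of G
# hold Green's function responses for the independent moment tensor
# elements; m is recovered by linear least squares.
rng = np.random.default_rng(1)
n_samples, n_elements = 2400, 6
G = rng.standard_normal((n_samples, n_elements))
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.1, -0.2])  # hypothetical Mij
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 2))  # recovers m_true to within the noise
```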

  9. Identification and Estimation of Postseismic Deformation: Implications for Plate Motion Models, Models of the Earthquake Cycle, and Terrestrial Reference Frame Definition

    NASA Astrophysics Data System (ADS)

    Kedar, S.; Bock, Y.; Moore, A. W.; Argus, D. F.; Fang, P.; Liu, Z.; Haase, J. S.; Su, L.; Owen, S. E.; Goldberg, D.; Squibb, M. B.; Geng, J.

    2015-12-01

    Postseismic deformation indicates a viscoelastic response of the lithosphere. It is critical, then, to identify and estimate the extent of postseismic deformation in both space and time, not only for its inherent information on crustal rheology and earthquake physics, but also because it must be considered in plate motion models that are derived geodetically from "steady-state" interseismic velocities, in models of the earthquake cycle that provide interseismic strain accumulation and earthquake probability forecasts, and in the terrestrial reference frame definition that is the basis for space geodetic positioning. As part of the Solid Earth Science ESDR System (SESES) project under a NASA MEaSUREs grant, JPL and SIO estimate combined daily position time series for over 1800 GNSS stations, both globally and at plate boundaries, independently using the GIPSY and GAMIT software packages, but with a consistent set of a priori epoch-date coordinates and metadata. The longest time series began in 1992, and many of them contain postseismic signals. For example, about 90 of the global GNSS stations, out of the more than 400 that define the ITRF, have experienced one or more major earthquakes, and 36 have had multiple earthquakes; as expected, most plate boundary stations have as well. We quantify the spatial (distance from rupture) and temporal (decay time) extent of postseismic deformation. We examine parametric models (logarithmic, exponential) and a physical model (rate- and state-dependent friction) to fit the time series. Using a PCA analysis, we determine whether or not a particular earthquake can be uniformly fit by a single underlying postseismic process; otherwise we fit individual stations. We then investigate whether the estimated time series velocities can be directly used as input to plate motion models, rather than arbitrarily removing the apparent postseismic portion of a time series and/or eliminating stations closest to earthquake epicenters.

  10. The size of earthquakes

    USGS Publications Warehouse

    Kanamori, H.

    1980-01-01

    How we should measure the size of an earthquake has historically been a very important, as well as a very difficult, seismological problem. For example, figure 1 shows the loss of life caused by earthquakes in recent times and clearly demonstrates that 1976 was the worst year for earthquake casualties in the 20th century. However, the damage caused by an earthquake is due not only to its physical size but also to other factors, such as where and when it occurs; thus, figure 1 is not necessarily an accurate measure of the "size" of earthquakes in 1976. The point is that the physical process underlying an earthquake is highly complex; we therefore cannot express every detail of an earthquake by a simple, straightforward parameter. Indeed, it would be very convenient if we could find a single number that represents the overall physical size of an earthquake. This was in fact the concept behind the Richter magnitude scale introduced in 1935.
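    A worked instance of the single-number concept discussed above is the moment magnitude scale that grew out of this line of work; a sketch using the standard relation between seismic moment and Mw:

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude from seismic moment (N*m), the now-standard
    single-number size measure: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

print(moment_magnitude(3.5e16))  # ~5.0 for a moderate event
```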

  11. Landslides in Colorado, USA--Impacts and loss estimation for 2010

    USGS Publications Warehouse

    Highland, Lynn M.

    2012-01-01

    The focus of this study is to investigate landslides and consequent losses which affected Colorado in the year 2010. By obtaining landslide reports from a variety of sources, this report will demonstrate the feasibility of creating a profile of landslides and their effects on communities. A short overview of the current status of landslide-loss studies for the United States is introduced, followed by a compilation of landslide occurrence and associated losses and impacts which affected Colorado for the year 2010. Direct costs are summarized in descriptive and tabular form, and where possible, indirect costs are also noted or estimated. Total direct costs of landslides in Colorado for the year 2010 were approximately $9,149,335.00 (2010 U.S. dollars). (Since not all data for damages and costs were obtained, this figure realistically could be considerably higher.) Indirect costs were noted where available but are not totaled due to the fact that most indirect costs were not obtainable for various reasons outlined later in this report. Casualty data are considered as being within the scope of loss evaluation, and are reported in Appendix 1, but are not assigned dollar losses. More details on the source material for loss data not found in the reference section are reported in Appendix 2, and Appendix 3 summarizes notes on landslide-loss investigations in general and lessons learned during the process of loss-data collection.

  12. Estimation of return periods of multiple losses per winter associated with historical windstorm series over Germany

    NASA Astrophysics Data System (ADS)

    Karremann, Melanie; Pinto, Joaquim G.; von Bomhard, Philipp; Klawa, Matthias

    2014-05-01

    During the last decades, several windstorm series hit Western Europe, leading to large cumulative economic losses. Such storm series are an example of serial clustering of extreme cyclones and present a considerable risk for the insurance industry. Here, the clustering of events and the return periods of storm series for Germany are quantified based on potential losses using empirical models. Two reanalysis datasets and observations from 123 German Weather Service stations are considered for the winters 1981/1982 to 2010/2011. Based on these datasets, histograms of events exceeding selected return levels (1-, 2- and 5-year) are derived. Return periods of historical storm series are estimated based on the Poisson and the negative binomial distributions. About 4680 years of global circulation model simulations forced with current climate conditions are analysed to provide a better assessment of historical return periods. The estimates differ between the considered distributions. Except for frequent and weak events, the return period estimates obtained with the Poisson distribution clearly deviate from the empirical data. This documents overdispersion in the loss data, indicating the clustering of potential loss events. Better assessments are achieved with the negative binomial distribution, e.g. 34 to 53 years for a storm series like that of 1989/1990. The overdispersion (clustering) of potential loss events clearly demonstrates the importance of an adequate risk assessment of multiple events per winter for economic applications.
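    The dispersion check that motivates the negative binomial choice is straightforward; a sketch with hypothetical per-winter counts and method-of-moments parameters:

```python
import numpy as np

# Hypothetical counts of loss events per winter over 30 winters:
counts = np.array([0, 2, 0, 1, 5, 0, 0, 3, 1, 0, 4, 0, 1, 0, 6,
                   0, 0, 2, 0, 1, 0, 3, 0, 0, 7, 1, 0, 0, 2, 0])
mean, var = counts.mean(), counts.var(ddof=1)
print(f"dispersion index var/mean = {var / mean:.2f}")  # >1 => clustering

# Method-of-moments negative binomial parameters (size k, probability p);
# valid only when var > mean, i.e. when the data are overdispersed:
k = mean**2 / (var - mean)
p = mean / var
print(f"NB size = {k:.2f}, p = {p:.2f}")
```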

  13. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, and has the major goal of transforming the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software, which is an elegant solution for managing and analysing seismic data and for creating automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters, whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each of them aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and then the software performs phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. The software is then able to automatically provide three different magnitude estimates. First, the local magnitude (Ml) is computed using the peak-to-peak amplitude

  14. Perspectives on earthquake hazards in the New Madrid seismic zone, Missouri

    USGS Publications Warehouse

    Thenhaus, P.C.

    1990-01-01

    A sequence of three great earthquakes struck the Central United States during the winter of 1811-1812 in the area of New Madrid, Missouri. They are considered to be the greatest earthquakes in the conterminous U.S. because they were felt and caused damage at far greater distances than any other earthquakes in U.S. history. The large population currently living within the damage area of these earthquakes means that widespread destruction and loss of life are likely if the sequence were repeated. In contrast to California, where earthquakes are felt frequently, the damaging earthquakes that have occurred in the Eastern U.S. (in 1755 (Cape Ann, Mass.), 1811-12 (New Madrid, Mo.), 1886 (Charleston, S.C.), and 1897 (Giles County, Va.)) are generally regarded as only historical phenomena (fig. 1). The social memory of these earthquakes no longer exists. A fundamental problem in the Eastern U.S., therefore, is that the earthquake hazard is not generally considered today in land-use and civic planning. This article offers perspectives on the earthquake hazard of the New Madrid seismic zone through discussions of the geology of the Mississippi Embayment, the historical earthquakes that have occurred there, the earthquake risk, and the "tools" that geoscientists have to study the region. The so-called earthquake hazard is defined by the characterization of the physical attributes of the geological structures that cause earthquakes, the estimation of the recurrence times of the earthquakes, their potential size, and the expected ground motions. The term "earthquake risk," on the other hand, refers to aspects of the expected damage to manmade structures and to lifelines as a result of the earthquake hazard.

  15. SEISMIC SITE RESPONSE ESTIMATION IN THE NEAR SOURCE REGION OF THE 2009 L'AQUILA, ITALY, EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Bertrand, E.; Azzara, R.; Bergamashi, F.; Bordoni, P.; Cara, F.; Cogliano, R.; Cultrera, G.; di Giulio, G.; Duval, A.; Fodarella, A.; Milana, G.; Pucillo, S.; Régnier, J.; Riccio, G.; Salichon, J.

    2009-12-01

    On the 6th of April 2009, at 3:32 local time, a Mw 6.3 earthquake hit the Abruzzo region (central Italy), causing more than 300 casualties. The epicenter of the earthquake was 95 km NE of Rome and 10 km from the center of the city of L'Aquila, the administrative capital of the Abruzzo region. This city has a population of about 70,000 and was severely damaged by the earthquake, with the total cost of building damage estimated at around 3 billion euros. Historical masonry buildings particularly suffered from the seismic shaking, but some reinforced-concrete structures of more modern construction were also heavily damaged. To better estimate the seismic solicitation of these structures during the earthquake, we deployed temporary arrays in the near-source region. Downtown L'Aquila, as well as a rural quarter composed of ancient dwelling-centers located west of L'Aquila (Roio area), was instrumented. The array set up downtown consisted of nearly 25 stations including velocimetric and accelerometric sensors. In the Roio area, 6 stations operated for almost one month. The data have been processed in order to study the spectral ratios of the horizontal component of ground motion at the soil site and at a reference site, as well as the spectral ratio of the horizontal and the vertical motion at a single recording site. Downtown L'Aquila is set on a Quaternary fluvial terrace (breccias with limestone boulders and clasts in a marly matrix), which forms the left bank of the Aterno River and slopes down in the southwest direction towards the river. The alluvial deposits lie on lacustrine sediments, reaching their maximum thickness (about 250 m) in the center of L'Aquila. After De Luca et al. (2005), these Quaternary deposits seem to produce an important amplification factor in the low-frequency range (0.5-0.6 Hz). However, the level of amplification varies strongly from one point to another in the center of the city. This new experiment allows new and more
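    A minimal sketch of the single-station horizontal-to-vertical (H/V) spectral ratio mentioned above; the smoothing and window selection used in practice are omitted, and the input here is synthetic noise:

```python
import numpy as np

def hv_ratio(h1, h2, v, fs):
    """Single-station H/V spectral ratio: geometric mean of the two
    horizontal amplitude spectra divided by the vertical spectrum."""
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    H = np.sqrt(np.abs(np.fft.rfft(h1)) * np.abs(np.fft.rfft(h2)))
    V = np.abs(np.fft.rfft(v))
    return freqs, H / np.maximum(V, 1e-12)

# Synthetic three-component noise sampled at 100 Hz:
fs, n = 100.0, 4096
rng = np.random.default_rng(2)
f, ratio = hv_ratio(*rng.standard_normal((3, n)), fs)
print(f[np.argmax(ratio[1:]) + 1])  # frequency of the (noise) H/V peak
```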

  16. Proceedings of Conference XVIII: a workshop on "Continuing actions to reduce losses from earthquakes in the Mississippi Valley area," 24-26 May, 1982, St. Louis, Missouri

    USGS Publications Warehouse

    Gori, Paula L.; Hays, Walter W.; Kitzmiller, Carla

    1983-01-01

    payoff and the lowest cost and effort requirements. These action plans, which identify steps that can be undertaken immediately to reduce losses from earthquakes in each of the seven States in the Mississippi Valley area, are contained in this report. The draft 5-year plan for the Central United States, prepared in the Knoxville workshop, was the starting point of the small-group discussions in the St. Louis workshop which led to the action plans contained in this report. For completeness, the draft 5-year plan for the Central United States is reproduced as Appendix B.

  17. The radiated seismic energy and apparent stress of interplate and intraplate earthquakes at subduction zone environments; implications for seismic hazard estimation

    USGS Publications Warehouse

    Choy, George L.; Boatwright, John L.; Kirby, Stephen H.

    2001-01-01

    The radiated seismic energies (ES) of 980 shallow subduction-zone earthquakes with magnitudes ≥ 5.8 are used to examine global patterns of energy release and apparent stress. In contrast to traditional methods, which have relied upon empirical formulas, these energies are computed through direct spectral analysis of broadband seismic waveforms. Energy gives a physically different measure of earthquake size than moment. Moment, being derived from the low-frequency asymptote of the displacement spectra, is related to the final static displacement. Thus, moment is crucial to the long-term tectonic implication of an earthquake. In contrast, energy, being derived from the velocity power spectra, is more a measure of seismic potential for damage to anthropogenic structures. There is considerable scatter in the plot of ES versus M0 for worldwide earthquakes. For any given M0, the ES can vary by as much as an order of magnitude about the mean regression line. The global variation between ES and M0, while large, is not random. When subsets of ES-M0 data are plotted as a function of seismic region, tectonic setting, and faulting type, the scatter is often substantially reduced. There are two profound implications for the estimation of seismic and tsunami hazard. First, it is now feasible to characterize the apparent stress for particular regions. Second, a given M0 does not have a unique ES. This means that M0 alone is not sufficient to describe all aspects of an earthquake. In particular, we have found examples of interplate thrust-faulting earthquakes and intraslab normal-faulting earthquakes occurring in the same epicentral region with vastly different macroseismic effects. Despite the gross macroseismic disparities, the Mw's in these examples were identical. However, the Me's (energy magnitudes) successfully distinguished the earthquakes that were more damaging.
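    Two standard quantities built from ES and M0 illustrate the point that moment alone does not fix energy; a sketch using the Choy-Boatwright energy magnitude and an assumed crustal rigidity:

```python
import math

def apparent_stress(es_nm, m0_nm, mu_pa=3.0e10):
    """Apparent stress tau_a = mu * Es / M0 (Pa); mu is an assumed
    crustal rigidity, not a value from the paper."""
    return mu_pa * es_nm / m0_nm

def energy_magnitude(es_nm):
    """Choy-Boatwright energy magnitude: Me = (2/3) log10(Es) - 2.9,
    with Es in N*m."""
    return (2.0 / 3.0) * math.log10(es_nm) - 2.9

# Two hypothetical events with identical moment but a tenfold difference
# in radiated energy separate cleanly in Me and apparent stress:
m0 = 1.0e20
for es in (5.0e14, 5.0e15):
    print(f"Me = {energy_magnitude(es):.1f}, "
          f"tau_a = {apparent_stress(es, m0) / 1e6:.2f} MPa")
```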

  18. Nitrogen losses from dairy manure estimated through nitrogen mass balance and chemical markers

    USGS Publications Warehouse

    Hristov, Alexander N.; Zaman, S.; Vander Pol, M.; Ndegwa, P.; Campbell, L.; Silva, S.

    2009-01-01

    Ammonia is an important air and water pollutant, but the spatial variation in its concentrations presents technical difficulties in accurate determination of ammonia emissions from animal feeding operations. The objectives of this study were to investigate the relationship between ammonia volatilization and δ15N of dairy manure and the feasibility of estimating ammonia losses from a dairy facility using chemical markers. In Exp. 1, the N/P ratio in manure decreased by 30% in 14 d as cumulative ammonia losses increased exponentially. δ15N of manure increased throughout the course of the experiment, and δ15N of emitted ammonia increased (p < 0.001) quadratically from -31‰ to -15‰. The relationship between cumulative ammonia losses and δ15N of manure was highly significant (p < 0.001; r2 = 0.76). In Exp. 2, using a mass balance approach, approximately half of the N excreted by dairy cows (Bos taurus) could not be accounted for in 24 h. Using N/P and N/K ratios in fresh and 24-h manure, an estimated 0.55 and 0.34 (respectively) of the N excreted with feces and urine could not be accounted for. This study demonstrated that chemical markers (P, K) can be successfully used to estimate ammonia losses from cattle manure. The relationship between manure δ15N and cumulative ammonia loss may also be useful for estimating ammonia losses. Although promising, the latter approach needs to be further studied and verified in various experimental conditions and in the field. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  19. Estimated crop yield losses due to surface ozone exposure and economic damage in India.

    PubMed

    Debaje, S B

    2014-06-01

    In this study, we estimate the yield losses and economic damage of two major crops (winter wheat and rabi rice) due to surface ozone (O3) exposure, using hourly O3 concentrations for the period 2002-2007 in India. The study estimates crop yield losses according to two indices of O3 exposure established by field studies: M7, the 7-h seasonal daytime (0900-1600 hours) mean measured O3 concentration, and AOT40, the accumulated exposure above a threshold of 40 parts per billion by volume during daylight hours (0700-1800 hours). Our results indicate relative yield losses of 5-11% (6-30%) for winter wheat and 3-6% (9-16%) for rabi rice using the M7 (AOT40) index, relative to mean annual production of 81 million metric tons (Mt) for winter wheat and 12 Mt for rabi rice over the period 2002-2007. The estimated mean crop production losses (CPL) for winter wheat are 9 to 29 Mt, corresponding to an economic loss of 1,222 to 4,091 million US$ annually. Similarly, the mean CPL for rabi rice is 0.64 to 2.1 Mt, worth 86-276 million US$. Our calculated winter wheat and rabi rice losses agree well with previous results, providing further evidence that large crop yield losses are occurring in India at current O3 concentrations and that further elevated O3 concentrations in the future may pose a threat to food security.
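    Both exposure indices are simple to compute from hourly data; a sketch using the definitions given above, applied to one hypothetical day:

```python
import numpy as np

def m7_index(o3_ppb_hourly, hours):
    """M7: 7-h daytime (0900-1600 hours) mean of hourly O3, in ppb."""
    day = (hours >= 9) & (hours < 16)
    return o3_ppb_hourly[day].mean()

def aot40_index(o3_ppb_hourly, hours):
    """AOT40: sum of hourly exceedances above 40 ppb accumulated over
    daylight hours (0700-1800), in ppb*h."""
    day = (hours >= 7) & (hours < 18)
    return np.maximum(o3_ppb_hourly[day] - 40.0, 0.0).sum()

# One hypothetical day of hourly data with a mid-afternoon peak:
hours = np.arange(24)
o3 = 30 + 25 * np.exp(-((hours - 14) / 4.0) ** 2)
print(m7_index(o3, hours), aot40_index(o3, hours))
```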

  20. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as for the calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical- and radial-component Pnl; vertical- and radial-component Rayleigh waves; and transverse-component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the
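
    A schematic of the CAP idea: grid-search the double-couple angles while letting each phase window shift in time. This is a minimal sketch, not the published implementation; `synthetics` stands in for a hypothetical forward model (e.g. precomputed Green's-function synthetics):

    ```python
    import numpy as np

    def shifted_misfit(obs, syn, max_shift):
        """Minimum L2 misfit over time shifts of up to max_shift samples,
        mimicking CAP's per-window shifts that absorb Green's function errors.
        (np.roll wraps around; adequate for padded windows in a sketch.)"""
        return min(float(np.sum((obs - np.roll(syn, s)) ** 2))
                   for s in range(-max_shift, max_shift + 1))

    def cap_grid_search(observed, synthetics, weights, max_shift=10):
        """observed: {phase_name: np.ndarray}; weights: {phase_name: float};
        synthetics(strike, dip, rake, phase) -> np.ndarray (hypothetical)."""
        best = (None, np.inf)
        for strike in range(0, 360, 10):
            for dip in range(5, 91, 10):
                for rake in range(-180, 180, 10):
                    cost = sum(weights[p] * shifted_misfit(
                                   tr, synthetics(strike, dip, rake, p), max_shift)
                               for p, tr in observed.items())
                    if cost < best[1]:
                        best = ((strike, dip, rake), cost)
        return best  # best double-couple mechanism and its misfit
    ```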

  1. Development of an online tool for tsunami inundation simulation and tsunami loss estimation

    NASA Astrophysics Data System (ADS)

    Srivihok, P.; Honda, K.; Ruangrassamee, A.; Muangsin, V.; Naparat, P.; Foytong, P.; Promdumrong, N.; Aphimaeteethomrong, P.; Intavee, A.; Layug, J. E.; Kosin, T.

    2014-05-01

    The devastating impacts of the 2004 Indian Ocean tsunami highlighted the need for an effective end-to-end tsunami early warning system in the region that connects the scientific components of warning with the preparedness of institutions and communities to respond to an emergency. Essential to preparedness planning is knowledge of tsunami risks. In this study, the development of an online tool named "INSPIRE" for tsunami inundation simulation and tsunami loss estimation is presented. The tool is designed to accommodate various accuracy levels of tsunami exposure data, supporting users in undertaking preliminary tsunami risk assessment from existing data, with progressive improvement as more detailed and accurate datasets become available. A sampling survey technique is introduced to improve local vulnerability data at lower cost and manpower. The performance of the proposed methodology and the INSPIRE tool was tested against the dataset for the Kamala and Patong municipalities, Phuket province, Thailand. The estimated building type ratios from the sampling survey show satisfactory agreement with the actual building data at the test sites. Sub-area classification by land use can improve the accuracy of the building type ratio estimation. For the resulting loss estimation, exposure data generated from detailed field surveys provide results in good agreement with the actual building damage recorded for the 2004 Indian Ocean tsunami event. However, lower-accuracy exposure data derived from sampling surveys and remote sensing can still provide a comparative overview of estimated loss.

  2. Earthquake Monitoring and Early Warning Systems in Taiwan (Invited)

    NASA Astrophysics Data System (ADS)

    Wu, Y.

    2010-12-01

    The Taiwan region is characterized by a high shortening rate and strong seismic activity. The Central Weather Bureau (CWB) is responsible for earthquake monitoring in Taiwan. The CWB seismic network consists of 71 real-time short-period seismic stations in the Taiwan region for routine earthquake monitoring and has recorded about 18,000 events each year in a roughly 400 km x 550 km region. There are also 53 real-time broadband stations installed for seismological research purposes and for reporting moment tensor solutions in Taiwan. With the implementation of a real-time strong-motion network by the CWB, earthquake rapid reporting and early warning systems have been developed in Taiwan. The network consists of 110 stations. For the rapid reporting system, when a potentially felt earthquake occurs around the Taiwan area, the location, magnitude and shake map of seismic intensities can be automatically reported within about 40 to 60 sec. For large earthquakes, the shaking map and losses can be estimated within 2 min after the earthquake occurrence. For the earthquake early warning system, earthquake information can be determined about 15 to 20 sec after a large earthquake occurs. Therefore, this system can provide early warning before the arrival of S-waves for metropolitan areas located more than 70 km away from the epicenter. Recently, an onsite earthquake early warning device has been developed using MEMS sensors, which focuses on offering early warning for areas close to the epicenter.
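
    The quoted 70 km radius follows directly from the reporting delay and the S-wave travel time. A back-of-the-envelope sketch, assuming an average S-wave speed of 3.5 km/s:

    ```python
    VS = 3.5  # assumed average S-wave speed, km/s

    def warning_time_s(epicentral_km, report_delay_s=20.0):
        """Seconds of warning before S-wave arrival; negative means the site
        sits inside the 'blind zone' and receives no warning."""
        return epicentral_km / VS - report_delay_s

    print(warning_time_s(70.0))   # ~0 s: the blind-zone edge cited above
    print(warning_time_s(150.0))  # ~23 s of warning
    ```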

  3. Annual South American Forest Loss Estimates (1989-2011) Based on Passive Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    van Marle, M.; van der Werf, G.; de Jeu, R.; Liu, Y.

    2014-12-01

    Vegetation dynamics, such as forest loss, are an important factor in global climate, but long-term and consistent information on these dynamics at continental scales is lacking. We have quantified large-scale forest loss over the 1990s and 2000s in the tropical biomes of South America using a passive-microwave satellite-based vegetation product. Our forest loss estimates are based on remotely sensed vegetation optical depth (VOD), which is an indicator of vegetation water content simultaneously retrieved with soil moisture. The advantage of low-frequency microwave remote sensing is that aerosols and clouds do not affect the observations. Furthermore, the longer wavelengths of passive microwaves penetrate deeper into vegetation than other products derived from optical and thermal sensors, with the consequence that both the woody parts of vegetation and leaves can be observed. The merged VOD product of AMSR-E and SSM/I observations, which covers over 23 years of daily observations, is used. We used this data stream and an outlier detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Qualitatively, our results compared favorably to the newly developed Global Forest Change (GFC) maps based on Landsat data (r2 = 0.96), and this allowed us to convert the VOD outlier count to forest loss. Our results are spatially explicit with a 0.25-degree resolution and annual time step, and we will present our estimates at country level. The added benefit of our results compared to GFC is the longer time period. The results indicate a relatively steady increase in forest loss in Brazil from 1989 until 2003, followed by two high forest loss years and a declining trend afterwards. This contrasts with other South American countries such as Bolivia and Peru, where forest losses increased throughout most of the 2000s compared with the 1990s.

  4. Annual South American forest loss estimates based on passive microwave remote sensing (1990-2010)

    NASA Astrophysics Data System (ADS)

    van Marle, M. J. E.; van der Werf, G. R.; de Jeu, R. A. M.; Liu, Y. Y.

    2016-02-01

    Consistent forest loss estimates are important to understand the role of forest loss and deforestation in the global carbon cycle, for biodiversity studies, and to estimate the mitigation potential of reducing deforestation. To date, most studies have relied on optical satellite data and new efforts have greatly improved our quantitative knowledge on forest dynamics. However, most of these studies yield results for only a relatively short time period or are limited to certain countries. We have quantified large-scale forest loss over a 21-year period (1990-2010) in the tropical biomes of South America using remotely sensed vegetation optical depth (VOD). This passive microwave satellite-based indicator of vegetation water content and vegetation density has a much coarser spatial resolution than optical data but its temporal resolution is higher and VOD is not impacted by aerosols and cloud cover. We used the merged VOD product of the Advanced Microwave Scanning Radiometer (AMSR-E) and Special Sensor Microwave Imager (SSM/I) observations, and developed a change detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Our results compared reasonably well with the newly developed Landsat-based Global Forest Change (GFC) maps, available for the 2001 onwards period (r2 = 0.90 when comparing annual country-level estimates). This allowed us to convert our identified changes in VOD to forest loss area and compute these from 1990 onwards. We also compared these calibrated results to PRODES (r2 = 0.60 when comparing annual state-level estimates). We found that South American forest loss exhibited substantial interannual variability without a clear trend during the 1990s, but increased from 2000 until 2004. After 2004, forest loss decreased again, except for two smaller peaks in 2007 and 2010. For a large part, these trends were driven by changes in Brazil, which was responsible for 56 % of the total South American forest loss area over our study
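
    The conversion step, calibrating VOD-based change detections against GFC loss areas on the overlap years and then hindcasting the earlier years, amounts to a simple linear regression. A sketch with hypothetical variable names:

    ```python
    import numpy as np

    def calibrate_counts_to_area(counts_overlap, gfc_area_overlap):
        """Least-squares slope/intercept mapping annual VOD change counts to
        forest-loss area, fitted on years where both datasets exist (2001-2010).
        Both arguments are 1-D numpy arrays of annual values."""
        A = np.column_stack([counts_overlap, np.ones(len(counts_overlap))])
        slope, intercept = np.linalg.lstsq(A, gfc_area_overlap, rcond=None)[0]
        return slope, intercept

    # slope, b = calibrate_counts_to_area(counts_2001_2010, gfc_2001_2010)
    # loss_area_1990_2010 = slope * counts_1990_2010 + b  # hindcast pre-GFC years
    ```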

  5. Calorie Estimation in Adults Differing in Body Weight Class and Weight Loss Status

    PubMed Central

    Brown, Ruth E; Canning, Karissa L; Fung, Michael; Jiandani, Dishay; Riddell, Michael C; Macpherson, Alison K; Kuk, Jennifer L

    2016-01-01

    Purpose Ability to accurately estimate calories is important for weight management, yet few studies have investigated whether individuals can accurately estimate calories during exercise, or in a meal. The objective of this study was to determine if accuracy of estimation of moderate or vigorous exercise energy expenditure and calories in food is associated with body weight class or weight loss status. Methods Fifty-eight adults who were either normal weight (NW) or overweight (OW), and either attempting (WL) or not attempting weight loss (noWL), exercised on a treadmill at a moderate (60% HRmax) and a vigorous intensity (75% HRmax) for 25 minutes. Subsequently, participants estimated the number of calories they expended through exercise, and created a meal that they believed to be calorically equivalent to the exercise energy expenditure. Results The mean difference between estimated and measured calories in exercise and food did not differ within or between groups following moderate exercise. Following vigorous exercise, OW-noWL overestimated energy expenditure by 72%, and overestimated the calories in their food by 37% (P<0.05). OW-noWL also significantly overestimated exercise energy expenditure compared to all other groups (P<0.05), and significantly overestimated calories in food compared to both WL groups (P<0.05). However, among all groups there was a considerable range of over and underestimation (−280 kcal to +702 kcal), as reflected by the large and statistically significant absolute error in calorie estimation of exercise and food. Conclusion There was a wide range of under and overestimation of calories during exercise and in a meal. Error in calorie estimation may be greater in overweight adults who are not attempting weight loss. PMID:26469988

  6. Body protein losses estimated by nitrogen balance and potassium-40 counting

    SciTech Connect

    Belyea, R.L.; Babbitt, C.L.; Sedgwick, H.T.; Zinn, G.M.

    1986-07-01

    Body protein losses estimated from N balance were compared with those estimated by 40K counting. Six nonlactating dairy cows were fed an adequate N diet for 7 wk, a low N diet for 9 wk, and a replete N diet for 3 wk. The low N diet contained high cell wall grass hay plus ground corn, starch, and molasses. Soybean meal was added to the low N diet to increase N in the adequate N and replete N diets. Intake was measured daily. Digestibilities, N balance, and body composition (estimated by 40K counting) were determined during each dietary regimen. During low N treatment, hay dry matter intake declined 2 kg/d, and supplement increased about .5 kg/d. Dry matter digestibility was not altered by N treatment. Protein and acid detergent fiber digestibilities decreased from 40 and 36% during adequate N to 20 and 2%, respectively, during low N. Fecal and urinary N also declined when cows were fed the low N diet. By the end of repletion, total intake, fiber, and protein digestibilities as well as N partition were similar to or exceeded those during adequate N intake. Body protein (N) loss was estimated by N balance to be about 3 kg compared with 8 kg by 40K counting. Body fat losses (32 kg) were large because of low energy digestibility and intake. Seven kilograms of body fat were regained during repletion, but there was no change in body protein.

  7. Handbook for the estimation of microwave propagation effects: Link calculations for earth-space paths (path loss and noise estimation)

    NASA Technical Reports Server (NTRS)

    Crane, R. K.; Blood, D. W.

    1979-01-01

    A single model for a standard of comparison for other models when dealing with rain attenuation problems in system design and experimentation is proposed. Refinements to the Global Rain Production Model are incorporated. Path loss and noise estimation procedures as the basic input to systems design for earth-to-space microwave links operating at frequencies from 1 to 300 GHz are provided. Topics covered include gaseous absorption, attenuation by rain, ionospheric and tropospheric scintillation, low elevation angle effects, radome attenuation, diversity schemes, link calculation, and receiver noise emission by atmospheric gases, rain, and antenna contributions.

  8. Estimating earthquake-rupture rates on a fault or fault system

    USGS Publications Warehouse

    Field, E.H.; Page, M.T.

    2011-01-01

    Previous approaches used to determine the rates of different earthquakes on a fault have made assumptions regarding segmentation, have been difficult to document and reproduce, and have lacked the ability to satisfy all available data constraints. We present a relatively objective and reproducible inverse methodology for determining the rate of different ruptures on a fault or fault system. The data used in the inversion include slip rate, event rate, and other constraints such as an optional a priori magnitude-frequency distribution. We demonstrate our methodology by solving for the long-term rate of ruptures on the southern San Andreas fault. Our results imply that a Gutenberg-Richter distribution is consistent with the data available for this fault; however, more work is needed to test the robustness of this assertion. More importantly, the methodology is extensible to an entire fault system (thereby including multifault ruptures) and can be used to quantify the relative benefits of collecting additional paleoseismic data at different sites.
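
    In spirit, such an inversion solves a non-negative linear system in which each unknown is the long-term rate of one rupture. A schematic sketch (the constraint-matrix contents are illustrative, not the paper's exact formulation):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def invert_rupture_rates(G, d, w):
        """Weighted non-negative least squares for rupture rates x >= 0.
        Row i of G encodes one datum: e.g. G[i, r] = average slip of rupture r
        on a fault subsection (so G @ x matches that subsection's slip rate
        d[i]), or an indicator row for a paleoseismic event-rate or a-priori
        magnitude-frequency constraint. w holds the row weights."""
        ws = np.sqrt(np.asarray(w, dtype=float))
        rates, rnorm = nnls(ws[:, None] * G, ws * d)
        return rates, rnorm
    ```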

  9. Annual South American forest loss estimates based on passive microwave remote sensing (1990-2010)

    NASA Astrophysics Data System (ADS)

    van Marle, M. J. E.; van der Werf, G. R.; de Jeu, R. A. M.; Liu, Y. Y.

    2015-07-01

    Consistent forest loss estimates are important to understand the role of forest loss and deforestation in the global carbon cycle, for biodiversity studies, and to estimate the mitigation potential of reducing deforestation. To date, most studies have relied on optical satellite data and new efforts have greatly improved our quantitative knowledge on forest dynamics. However, most of these studies yield results for only a relatively short time period or are limited to certain countries. We have quantified large-scale forest losses over a 21-year period (1990-2010) in the tropical biomes of South America using remotely sensed vegetation optical depth (VOD). This passive microwave satellite-based indicator of vegetation water content and vegetation density has a much coarser spatial resolution than optical data, but its temporal resolution is higher and VOD is not impacted by aerosols and cloud cover. We used the merged VOD product of the Advanced Microwave Scanning Radiometer (AMSR-E) and Special Sensor Microwave Imager (SSM/I) observations, and developed a change detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Our results compared favorably to the newly developed Global Forest Change (GFC) maps based on Landsat data and available for the 2001 onwards period (r2 = 0.90 when comparing annual country-level estimates), which allowed us to convert our results to forest loss area and compute these from 1990 onwards. We found that South American forest loss exhibited substantial interannual variability without a clear trend during the 1990s, but increased from 2000 until 2004. After 2004, forest loss decreased again, except for two smaller peaks in 2007 and 2010. For a large part, these trends were driven by changes in Brazil, which was responsible for 56 % of the total South American forest loss over our study period according to our results. One of the key findings of our study is that while forest losses decreased in Brazil after 2005

  10. Combining double difference and amplitude ratio approaches for Q estimates at the NW Bohemia earthquake swarm region

    NASA Astrophysics Data System (ADS)

    Kriegerowski, Marius; Cesca, Simone; Krüger, Frank; Dahm, Torsten; Horálek, Josef

    2016-04-01

    Aside from the propagation velocity of seismic waves, their attenuation can provide a direct measure of rock properties in the sampled subsurface. We present a new attenuation tomography approach exploiting relative amplitude spectral ratios of earthquake pairs. We focus our investigation on North West Bohemia, a region characterized by intense earthquake swarm activity in a confined source region. The inter-event distances are small compared to the epicentral distances to the receivers, meeting a fundamental requirement of the method. Due to the similar event locations, the ray paths are also very similar. Consequently, the relative spectral ratio is affected mostly by rock properties along the path of the vector distance and is thus representative of the focal region. In order to exclude effects of the seismic source spectra, only the high-frequency content beyond the corner frequency is taken into consideration. This requires high-quality as well as highly sampled records. Future improvements in that respect can be expected from the ICDP proposal "Eger rift", which includes plans to install borehole monitoring in the investigated region. 1D and 3D synthetic tests show the feasibility of the presented method. Furthermore, we demonstrate the influence of perturbations in source locations and travel time estimates on the determination of Q. Errors in Q scale linearly with errors in the differential travel times. These sources of error can be attributed to the complex velocity structure of the investigated region. A critical aspect is the signal-to-noise ratio, which imposes a strong limitation and emphasizes the demand for high-quality recordings. Hence, the presented method is expected to benefit from borehole installations. Since we focus our analysis on the NW Bohemia case study, a synthetic earthquake catalog incorporating source characteristics deduced from preceding moment tensor inversions, coupled with a realistic velocity model, provides us with a realistic
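
    The underlying measurement reduces to fitting the slope of a log spectral ratio above the corner frequency (the standard t* formalism). A minimal sketch, not necessarily the authors' exact parameterization:

    ```python
    import numpy as np

    def q_from_spectral_ratio(freqs, amp_ratio, dt_diff):
        """Estimate Q on the differential path between two nearby events.
        With amplitude decay exp(-pi * f * t / Q), the log spectral ratio is
        linear in f with slope -pi * dt_diff / Q, so Q = -pi * dt_diff / slope.
        freqs, amp_ratio: spectral samples above the corner frequency;
        dt_diff: differential travel time (s) along the inter-event path."""
        slope, _ = np.polyfit(freqs, np.log(amp_ratio), 1)
        return -np.pi * dt_diff / slope
    ```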

  11. Urbanization and agricultural land loss in India: comparing satellite estimates with census data.

    PubMed

    Pandey, Bhartendu; Seto, Karen C

    2015-01-15

    We examine the impacts of urbanization on agricultural land loss in India from 2001 to 2010. We combined a hierarchical classification approach with econometric time series analysis to reconstruct land-cover change histories, using time series MODIS 250 m VI images composited at 16-day intervals and nighttime lights (NTL) data. We compared estimates of agricultural land loss from satellite data with agricultural census data. Our analysis highlights six key results. First, agricultural land loss is occurring around smaller cities more than around bigger cities. Second, from 2001 to 2010, each state lost less than 1% of its total geographical area through conversion of agricultural land to urban use. Third, the northeastern states experienced the least amount of agricultural land loss. Fourth, agricultural land loss is largely in states and districts which have a larger number of operational or approved SEZs. Fifth, urban conversion of agricultural land is concentrated in a few districts and states with high rates of economic growth. Sixth, agricultural land loss is predominantly in states with higher agricultural land suitability compared to other states. Although the total area of agricultural land lost to urban expansion has been relatively low, our results show that the amount of agricultural land converted has been increasing steadily since 2006. Given that the preponderance of India's urban population growth has yet to occur, the results suggest an increase in the conversion of agricultural land going into the future.

  12. Estimating the mitigation of anthropogenic loss of phosphorus in New Zealand grassland catchments.

    PubMed

    McDowell, R W

    2014-01-15

    Managing phosphorus in catchments is central to improving surface water quality, but knowing how much can be mitigated from agricultural land, and at what cost relative to a natural baseline (or reference condition), is difficult to assess. The difference between median concentrations now and under reference conditions was defined as the anthropogenic loss, while the manageable loss was defined as the median P concentration achievable without costing more than 10% of farm profitability (measured as earnings before interest and tax, EBIT). Nineteen strategies to mitigate P loss were ranked according to cost (low, medium, high, very high). Using the average dairy and drystock farms in 14 grassland catchments as test cases, the potential to mitigate P loss from land to water was then modelled for different strategies, beginning with strategies within the lowest cost category from most to least effective, before applying a strategy from a more expensive category. The anthropogenic contribution to stream median FRP and TP concentrations was estimated at 44 and 69%, respectively. However, applying up to three strategies per farm theoretically enabled mitigation of FRP and TP losses sufficient for aesthetic and trout fishery values to be maintained, at a cost of <1% EBIT for drystock farms and <6% EBIT for dairy farms. This shows that defining and acting upon the manageable loss in grassland catchments (with few point sources) has the potential to achieve a water quality outcome within an ecological target at little cost.

  13. Bayesian Tsunami-Waveform Inversion and Tsunami-Source Uncertainty Estimation for the 2011 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dettmer, J.; Hossen, M. J.; Cummins, P. R.

    2014-12-01

    This paper develops a Bayesian inversion to infer spatio-temporal parameters of the tsunami source (sea surface displacement) due to megathrust earthquakes. To date, tsunami-source parameter uncertainties have been poorly studied. In particular, the effects of parametrization choices (e.g., discretisation, finite rupture velocity, dispersion) on uncertainties have not been quantified. Our approach is based on a trans-dimensional self-parametrization of the sea surface, avoids regularization, and provides rigorous uncertainty estimation that accounts for the model-selection ambiguity associated with the source discretisation. The sea surface is parametrized using self-adapting irregular grids which match the local resolving power of the data and provide parsimonious solutions for complex source characteristics. Finite and spatially variable rupture velocity fields are addressed by obtaining causal delay times from the Eikonal equation. Data are considered from ocean-bottom pressure and coastal wave gauges. Data predictions are based on Green-function libraries computed from ocean-basin scale tsunami models for cases that include/exclude dispersion effects. Green functions are computed for elementary waves of Gaussian shape and grid spacing below the resolution of the data. The inversion is applied to tsunami waveforms from the great Mw = 9.0 2011 Tohoku-Oki (Japan) earthquake. Posterior results show a strongly elongated tsunami source along the Japan trench, as obtained in previous studies. However, we find that the tsunami data are fit by a source that is generally simpler than those obtained in other studies, with a maximum amplitude of less than 5 m. In addition, the data are sensitive to the spatial variability of rupture velocity and require a kinematic source model to obtain satisfactory fits, which is consistent with other work employing linear multiple time-window parametrizations.

  14. Period-dependent source rupture behavior of the 2011 Tohoku earthquake estimated by multi period-band Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.

    2014-12-01

    Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained with different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from multi period-band waveform data with a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal source rupture behavior of this event in more detail, we introduce a new fault surface model with finer sub-fault size and estimate the source models in multi period-bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this period band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each period band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 x 16 km^2. The estimated source models in multi period-bands show the following source image: (1) a first deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short-period (10-25 s) seismic waves; (2) a shallow rupture off Miyagi at 45-90 s toward up-dip with long duration, radiating long-period (50-100 s) seismic waves; (3) a second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer-period seismic waves than the first deep rupture; (4) a deep

  15. Estimates of methane loss and energy recovery potential in anaerobic reactors treating domestic wastewater.

    PubMed

    Lobato, L C S; Chernicharo, C A L; Souza, C L

    2012-01-01

    This work aimed at developing a mathematical model that could estimate more precisely the fraction of chemical oxygen demand (COD) recovered as methane in the biogas and which, effectively, represented the potential for energy recovery in upflow anaerobic sludge blanket (UASB) reactors treating domestic wastewater. The model sought to include all routes of conversion and losses in the reactor, including the portion of COD used for the reduction of sulfates and the loss of methane in the residual gas and dissolved in the effluent. Results from the production of biogas in small- and large-scale UASB reactors were used to validate the model. The results showed that the model allowed a more realistic estimate of biogas production and of its energy potential.
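
    The COD balance behind such a model reduces to subtracting the competing sinks before converting to methane volume. A sketch with illustrative default fractions (the paper's calibrated values are not reproduced here):

    ```python
    def methane_recovery(cod_in, cod_effluent, cod_sulfate, cod_sludge,
                         dissolved_frac=0.2):
        """All inputs in kg COD/d. Returns recoverable CH4 (m3/d at STP) and
        its energy content (MJ/d). dissolved_frac is an assumed share of
        methane lost dissolved in the effluent or with the residual gas."""
        cod_to_ch4 = cod_in - cod_effluent - cod_sulfate - cod_sludge
        ch4_m3 = 0.35 * cod_to_ch4 * (1.0 - dissolved_frac)  # 0.35 m3 CH4 per kg COD
        return ch4_m3, ch4_m3 * 35.8  # ~35.8 MJ per m3 CH4 (lower heating value)
    ```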

  16. Estimation of the Iron Loss in Deep-Sea Permanent Magnet Motors considering Seawater Compressive Stress

    PubMed Central

    Wei, Yanyu; Zou, Jibin; Li, Jianjun; Qi, Wenjuan; Li, Yong

    2014-01-01

    Deep-sea permanent magnet motors equipped with a fluid-compensated pressure-tolerant system are compressed by high-pressure fluid both outside and inside. The induced stress distribution in the stator core is significantly different from that in a land-type motor. Its effect on the magnetic properties of the stator core is important for deep-sea motor designers but seldom reported. In this paper, the stress distribution in the stator core under seawater compressive stress is calculated by the 2D finite element method (FEM). The effect of compressive stress on the magnetic properties of electrical steel sheet, that is, permeability, BH curves, and BW curves, is also measured. Then, based on the measured magnetic properties and the calculated stress distribution, the stator iron loss is estimated by stress-electromagnetics-coupled FEM. Finally, the estimation is verified by experiment. Both the calculated and measured results show that stator iron loss increases appreciably with seawater compressive stress. PMID:25177717

  17. Speech quality estimation of voice over internet protocol codec using a packet loss impairment model.

    PubMed

    Lee, Min-Ki; Kang, Hong-Goo

    2013-11-01

    This letter proposes a degradation and cognition model to estimate speech quality impairment due to the packet loss concealment (PLC) algorithm implemented in the speech codec SILK. Considering that the quality degradation caused by packet loss is highly related to the PLC algorithm, the impact of quality degradation for various types of previous and lost packet classes is analyzed. The PLC effects for the proposed class types are then measured by the class-conditional expectation of the degradation scores. Finally, a cognition module is derived to estimate the total quality degradation on a mean opinion score (MOS) scale. When assessed for correlation with subjective test results, the correlation coefficient of the encoder-based class model is 0.93, and that of the decoder-based model is 0.87.
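
    The cognition step described above is essentially a class-conditional expectation. A minimal sketch of that bookkeeping (class definitions and scores are placeholders, not the letter's trained values):

    ```python
    def expected_impairment(class_probs, mean_degradation):
        """Total expected degradation = sum over packet-loss classes of
        P(class) * E[degradation | class]."""
        return sum(p * mean_degradation[c] for c, p in class_probs.items())

    # mos_estimate = clean_mos - expected_impairment(class_probs, mean_degradation)
    ```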

  18. Estimation of the iron loss in deep-sea permanent magnet motors considering seawater compressive stress.

    PubMed

    Xu, Yongxiang; Wei, Yanyu; Zou, Jibin; Li, Jianjun; Qi, Wenjuan; Li, Yong

    2014-01-01

    Deep-sea permanent magnet motors equipped with a fluid-compensated pressure-tolerant system are compressed by high-pressure fluid both outside and inside. The induced stress distribution in the stator core is significantly different from that in a land-type motor. Its effect on the magnetic properties of the stator core is important for deep-sea motor designers but seldom reported. In this paper, the stress distribution in the stator core under seawater compressive stress is calculated by the 2D finite element method (FEM). The effect of compressive stress on the magnetic properties of electrical steel sheet, that is, permeability, BH curves, and BW curves, is also measured. Then, based on the measured magnetic properties and the calculated stress distribution, the stator iron loss is estimated by stress-electromagnetics-coupled FEM. Finally, the estimation is verified by experiment. Both the calculated and measured results show that stator iron loss increases appreciably with seawater compressive stress.

  19. Estimating formation properties from early-time recovery in wells subject to turbulent head losses

    USGS Publications Warehouse

    Shapiro, A.M.; Oki, D.S.; Greene, E.A.

    1998-01-01

    A mathematical model is developed to interpret the early-time recovering water level following the termination of pumping in wells subject to turbulent head losses. The model assumes that turbulent head losses dissipate immediately when pumping ends. In wells subject to both borehole storage and turbulent head losses, the early-time recovery exhibits a slope equal to 1/2 on log-log plots of recovery versus time. This half-slope response should not be confused with the half-slope response associated with a linear flow regime during aquifer tests. The presence of a borehole skin due to formation damage or stimulation around the pumped well alters the early-time recovery in wells subject to turbulent head losses and gives the appearance of borehole storage, where the recovery exhibits a unit slope on log-log plots of recovery versus time. Type curves can be used to estimate the formation storativity from the early-time recovery data. In wells that are suspected of having formation damage or stimulation, the type curves can be used to estimate the 'effective' radius of the pumped well, if an estimate of the formation storativity is available from observation wells or other information. Type curves for a homogeneous and isotropic dual-porosity aquifer are developed and applied to estimate formation properties and the effect of formation stimulation from a single-well test conducted in the Madison limestone near Rapid City, South Dakota.

  20. Systems, methods and computer readable media for estimating capacity loss in rechargeable electrochemical cells

    DOEpatents

    Gering, Kevin L.

    2013-06-18

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples charge characteristics of the electrochemical cell. The computing system periodically determines cell information from the charge characteristics of the electrochemical cell. The computing system also periodically adds a first degradation characteristic from the cell information to a first sigmoid expression, periodically adds a second degradation characteristic from the cell information to a second sigmoid expression and combines the first sigmoid expression and the second sigmoid expression to develop or augment a multiple sigmoid model (MSM) of the electrochemical cell. The MSM may be used to estimate a capacity loss of the electrochemical cell at a desired point in time and analyze other characteristics of the electrochemical cell. The first and second degradation characteristics may be loss of active host sites and loss of free lithium for Li-ion cells.
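
    A generic two-sigmoid capacity-fade model in the spirit of this patent can be fitted with ordinary nonlinear least squares. The logistic form and starting values below are illustrative assumptions, not the patented parameterization:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def msm(t, a1, k1, t1, a2, k2, t2):
        """Capacity loss as the sum of two sigmoids, one per degradation
        mechanism (e.g. loss of active host sites and loss of free lithium
        in Li-ion cells)."""
        s = lambda a, k, t0: a / (1.0 + np.exp(-k * (t - t0)))
        return s(a1, k1, t1) + s(a2, k2, t2)

    # t_obs: cycle count or calendar time; loss_obs: measured fractional loss
    # params, _ = curve_fit(msm, t_obs, loss_obs,
    #                       p0=[0.1, 0.1, 50.0, 0.2, 0.05, 200.0])
    # projected_loss = msm(t_future, *params)
    ```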

  1. Estimating the rate of retinal ganglion cell loss to detect glaucoma progression

    PubMed Central

    Hirooka, Kazuyuki; Izumibata, Saeko; Ukegawa, Kaori; Nitta, Eri; Tsujikawa, Akitaka

    2016-01-01

    Abstract This study aimed to evaluate the relationship between glaucoma progression and estimates of the retinal ganglion cells (RGCs) obtained by combining structural and functional measurements in patients with glaucoma. In the present observational cohort study, we examined 116 eyes of 62 glaucoma patients. Using Cirrus optical coherence tomography (OCT), a minimum of 5 serial retinal nerve fiber layer (RNFL) measurements were performed in all eyes. There was a 3-year separation between the first and last measurements. Visual field (VF) testing was performed on the same day as the RNFL imaging using the Swedish Interactive Threshold Algorithm Standard 30–2 program of the Humphrey Field Analyzer. Estimates of the RGC counts were obtained from standard automated perimetry (SAP) and OCT, with a weighted average then used to determine a final estimate of the number of RGCs for each eye. Linear regression was used to calculate the rate of the RGC loss, and trend analysis was used to evaluate both serial RNFL thicknesses and VF progression. Use of the average RNFL thickness parameter of OCT led to detection of progression in 14 of 116 eyes examined, whereas the mean deviation slope detected progression in 31 eyes. When the rates of RGC loss were used, progression was detected in 41 of the 116 eyes, with a mean rate of RGC loss of −28,260 ± 8110 cells/year. Estimation of the rate of RGC loss by combining structural and functional measurements resulted in better detection of glaucoma progression compared to either OCT or SAP. PMID:27472691

  2. Model for Estimating Life-Cycle Costs Associated with Noise-Induced Hearing Loss

    DTIC Science & Technology

    2007-01-10

    decisions. Currently, the cash outlays by the government for noise-induced hearing loss ( NIHL ) caused to service personnel by loud systems and spaces are...un-accounted for in estimates of life-cycle costs. A companion report demonstrated that a NIHL prediction algorithm from the American National...compensation costs of the predicted NIHL in this population. A numerical example of the algorithm operation was included. Using cost values applicable to

  3. Source parameters of the 2014 Mw 6.1 South Napa earthquake estimated from the Sentinel 1A, COSMO-SkyMed and GPS data

    NASA Astrophysics Data System (ADS)

    Guangcai, Feng; Zhiwei, Li; Xinjian, Shan; Bing, Xu; Yanan, Du

    2015-08-01

    Using the combination of two InSAR and one GPS data sets, we present a detailed source model of the 2014 Mw 6.1 South Napa earthquake, the biggest tremor to hit the San Francisco Bay Area since the 1989 Mw 6.9 Loma Prieta earthquake. The InSAR data are from the Sentinel-1A (S1A) and COSMO-SkyMed (CS) satellites, and the GPS data are provided by the Nevada Geodetic Laboratory. We first obtain the complete coseismic deformation fields of this event and estimate the InSAR data errors, then use the S1A data to construct the fault geometry: one main fault and two short parallel sub-faults which had not been identified by field investigation. As expected, the geometry is in good agreement with the aftershock distribution. By inverting the InSAR and GPS data, we derive a three-segment slip and rake model. Our model indicates that this event was a right-lateral strike-slip earthquake with a slight reverse component on the West Napa Fault, as we estimated. The fault is ~30 km long and more than 80% of the seismic moment was released at the center of the fault segment, where the slip reached its maximum (up to 1 m). We also find a geodetic moment of 2.07 × 10^18 Nm, corresponding to Mw 6.18, larger than the estimates of the USGS (Mw 6.0) and GCMT (Mw 6.1). This difference may partly be explained by our InSAR data including about one week of postseismic deformation and aftershocks. The results also demonstrate the high SNR and strong capability of the newly launched Sentinel-1A for earthquake studies. Furthermore, this study suggests that this earthquake has the potential to trigger nearby faults, especially the Green Valley fault, where Coulomb stress was imparted by the 2014 South Napa earthquake.
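
    The moment-to-magnitude conversion used in such comparisons is the standard Hanks-Kanamori relation; small differences in the constant between conventions explain offsets of a few hundredths of a magnitude unit:

    ```python
    import math

    def moment_magnitude(m0_nm):
        """Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
        return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

    print(round(moment_magnitude(2.07e18), 2))  # ~6.14 for the geodetic moment above
    ```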

  4. Estimation of Antarctic ozone loss from Ground-based total column measurements

    NASA Astrophysics Data System (ADS)

    Kuttippurath, J.; Goutail, F.; Pommereau, J.-P.; Lefèvre, F.; Roscoe, H. K.; Pazmiño, A.; Feng, W.; Chipperfield, M. P.

    2010-03-01

    The passive ozone method is used to estimate ozone loss from ground-based measurements in the Antarctic. A sensitivity study shows that the O3 loss can be estimated within an accuracy of ~4%. The method is then applied to the observations from Amundsen-Scott/South Pole, Arrival Heights, Belgrano, Concordia, Dumont d'Urville, Faraday, Halley, Marambio, Neumayer, Rothera, Syowa and Zhongshan for the diagnosis of ozone loss in the Antarctic. On average, the five-day running mean of the vortex-averaged ozone column loss deduced from the ground-based stations is about 53% in 2009, 59% in 2008, 55% in 2007, 56% in 2006 and 61% in 2005. The observed O3 loss and loss rates are in very good agreement with the satellite observations (Ozone Monitoring Instrument and SCIAMACHY) and are well reproduced by model (Reprobus and SLIMCAT) calculations. The historical ground-based total ozone measurements show that the depletion started in the late 1970s, reached a maximum in the early 1990s, and stabilised afterwards at this level until the present, with the exception of 2002, the year of an early vortex break-up. There is no indication of significant recovery yet. At southern mid-latitudes, a total ozone reduction of 40-50% is observed at the newly installed station Rio Gallegos and of 25-35% at Kerguelen, in October-November of 2008-2009 and 2005-2009 (except 2008) respectively, and of 10-20% at Macquarie Island in July-August of 2006-2009. This illustrates the significance of measurements at the edges of Antarctica.

  5. Maximum Earthquake Magnitude Assessments by Japanese Government Committees (Invited)

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2013-12-01

    The 2011 Tohoku earthquake (M 9.0) was the largest earthquake in Japanese history, and such a gigantic earthquake was not foreseen around Japan. After the 2011 disaster, various government committees in Japan have discussed and assessed the maximum credible earthquake size around Japan, but their values vary without definite consensus. I will review them with earthquakes along the Nankai Trough as an example. The Central Disaster Management Council (CDMC), under the Cabinet Office, set up a policy for future tsunami disaster mitigation. Possible future tsunamis are classified into two levels: L1 and L2. The L2 tsunamis are the largest possible tsunamis with low frequency of occurrence, for which saving people's lives is the first priority, with soft measures such as tsunami hazard maps, evacuation facilities or disaster education. The L1 tsunamis are expected to occur more frequently, typically once in a few decades, for which hard countermeasures such as breakwaters must be prepared. The assessments of L1 and L2 events are left to local governments. The CDMC also assigned M 9.1 as the maximum size of an earthquake along the Nankai trough, then computed the ground shaking and tsunami inundation for several scenario earthquakes. The estimated loss is about ten times that of the 2011 disaster, with maximum casualties of 320,000 and economic loss of 2 trillion dollars. The Headquarters for Earthquake Research Promotion (HERP), under MEXT, was set up after the 1995 Kobe earthquake and has made long-term forecasts of large earthquakes and published national seismic hazard maps. The future probability of earthquake occurrence, for example in the next 30 years, was calculated from past data of large earthquakes, on the basis of the characteristic earthquake model. The HERP recently revised the long-term forecast of the Nankai trough earthquake; while the 30-year probability (60-70%) is similar to the previous estimate, they noted the size can be M 8 to 9, considering the variability of past

  6. Estimating metabolic heat loss in birds and mammals by combining infrared thermography with biophysical modelling.

    PubMed

    McCafferty, D J; Gilbert, C; Paterson, W; Pomeroy, P P; Thompson, D; Currie, J I; Ancel, A

    2011-03-01

    Infrared thermography (IRT) is a technique that determines surface temperature based on physical laws of radiative transfer. Thermal imaging cameras have been used since the 1960s to determine the surface temperature patterns of a wide range of birds and mammals and how species regulate their surface temperature in response to different environmental conditions. As a large proportion of metabolic energy is transferred from the body to the environment as heat, biophysical models have been formulated to determine metabolic heat loss. These models are based on heat transfer equations for radiation, convection, conduction and evaporation and therefore surface temperature recorded by IRT can be used to calculate heat loss from different body regions. This approach has successfully demonstrated that in birds and mammals heat loss is regulated from poorly insulated regions of the body which are seen to be thermal windows for the dissipation of body heat. Rather than absolute measurement of metabolic heat loss, IRT and biophysical models have been most useful in estimating the relative heat loss from different body regions. Further calibration studies will improve the accuracy of models but the strength of this approach is that it is a non-invasive method of measuring the relative energy cost of an animal in response to different environments, behaviours and physiological states. It is likely that the increasing availability and portability of thermal imaging systems will lead to many new insights into the thermal physiology of endotherms.
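
    The biophysical step from IRT surface temperature to heat loss combines the standard radiative and convective transfer terms. A minimal sketch, with emissivity and the convection coefficient as assumed inputs (the latter varies strongly with wind speed and body geometry), and with conduction and evaporation omitted:

    ```python
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def surface_heat_loss(t_surface_c, t_air_c, area_m2,
                          emissivity=0.98, h_conv=10.0):
        """Radiative plus convective heat loss (W) from one body region,
        given its IRT-measured surface temperature."""
        ts, ta = t_surface_c + 273.15, t_air_c + 273.15
        q_rad = emissivity * SIGMA * area_m2 * (ts ** 4 - ta ** 4)
        q_conv = h_conv * area_m2 * (ts - ta)
        return q_rad + q_conv

    # e.g. a 0.05 m2 'thermal window' at 30 C in 5 C air: ~19 W
    print(surface_heat_loss(30.0, 5.0, 0.05))
    ```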

  7. Routine estimate of focal depths for moderate and small earthquakes by modelling regional depth phase sPmP in eastern Canada

    NASA Astrophysics Data System (ADS)

    Ma, S.; Peci, V.; Adams, J.; McCormack, D.

    2003-04-01

    Earthquake focal depths are critical parameters for basic seismological research, seismotectonic study, seismic hazard assessment, and event discrimination. Focal depths for most earthquakes with Mw >= 4.5 can be estimated from teleseismic arrival times of P, pP and sP. For smaller earthquakes, focal depths can be estimated from Pg and Sg arrival times recorded at close stations. However, for most earthquakes in eastern Canada, teleseismic signals are too weak and seismograph spacing too sparse for depth estimation. The regional phase sPmP is very sensitive to focal depth, generally well developed at epicentral distances greater than 100 km, and clearly recorded at many stations in eastern Canada for earthquakes with mN >= 2.8. We developed a procedure to estimate focal depth routinely with sPmP. We select vertical waveforms recorded at distances from about 100 to 300 km (using Geotool and SAC2000), generate synthetic waveforms (using the reflectivity method) for a typical focal mechanism and for a suitable range of depths, and choose the depth at which the synthetic best matches the selected waveform. The software is easy to operate. For routine work an experienced operator can get a focal depth with waveform modelling within 10 minutes after the waveform is selected, or in a couple of minutes get a rough focal depth from sPmP and Pg or PmP arrival times without waveform modelling. We have confirmed our sPmP modelling results by two comparisons: (1) to depths

  8. Flood control and loss estimation for paddy field at midstream of Chao Phraya River Basin, Thailand

    NASA Astrophysics Data System (ADS)

    Cham, T. C.; Mitani, Y.

    2015-09-01

    The 2011 Thailand flood brought serious impacts to the downstream of the Chao Phraya River Basin. The flood peak period lasted from August 2011 to the end of October 2011. This research focuses on the midstream of the Chao Phraya River Basin, the Nakhon Sawan area, which includes the confluence of the Nan River and Yom River as well as the confluence of the Ping River and Nan River. The main purposes of this research are to understand the flood generation, estimate the flood volume and the loss of paddy field, and recommend applicable flood countermeasures to ease the flood condition downstream of the Chao Phraya River Basin. In order to understand the flood condition, a post-analysis is conducted at Nakhon Sawan. The post-analysis consists of a field survey to measure the flood marks that remained and interviews with residents to understand living conditions during the flood. The 2011 flood generation at midstream is simulated using a coupled 1D-2D hydrodynamic model to understand the flood generation during the flood peak period. The model is calibrated and validated using the measured flood marks and streamflow data received from the Royal Irrigation Department (RID). Validation shows good agreement between the simulated results and the actual conditions. Subsequently, three scenarios of flood control are simulated, and a Geographic Information System (GIS) is used to assess the spatial distribution of flood extent and the reduction in estimated loss of paddy field. In addition, the loss estimation for paddy field at midstream is evaluated using GIS with the calculated inundation depth. Results show the proposed flood control at midstream is able to reduce the loss of paddy field in 26 provinces by 5%.

  9. Kinematic source parameter estimation for the 1995 Mw 7.2 Gulf of Aqaba Earthquake by using InSAR and teleseismic data in a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Bathke, Hannes; Feng, Guangcai; Heimann, Sebastian; Nikkhoo, Mehdi; Zielke, Olaf; Jónsson, Sigurjon; Mai, Martin

    2016-04-01

    The 1995 Mw 7.2 Gulf of Aqaba earthquake was primarily a left-lateral strike-slip earthquake, occurring on the Dead Sea transform fault at the western border of the Arabian plate. The tectonic setting within the trans-tensional Gulf of Aqaba is complex, consisting of several en echelon transform faults and pull-apart basins. Several studies have been published, focusing on this earthquake using either InSAR or teleseismic (P and SH waves) data. However, the published finite-fault rupture models of the earthquake differ significantly. For example, it still remains unclear whether the Aqaba fault, the Aragonese fault or the Arnona fault ruptured in the event. It is also possible that several segments were activated. The main problem with past studies is that either InSAR or teleseismic data were used, but not both. Teleseismic data alone are unable to locate the event well, while the InSAR data are limited in the near field due to the earthquake's offshore location. In addition, the source fault is roughly north-south oriented and InSAR has limited sensitivity to north-south displacements. Here we improve on previous studies by using InSAR and teleseismic data jointly to constrain the source model. In addition, we use InSAR data from two additional tracks that have not been used before, which provides a more complete displacement field of the earthquake. Furthermore, in addition to the fault model parameters themselves, we also estimate the parameter uncertainties, which were not reported in previous studies. Based on these uncertainties we estimate a model-prediction covariance matrix in addition to the data covariance matrix that we then use in Bayesian inference sampling to solve for the static slip-distribution on the fault. By doing so, we avoid using a Laplacian smoothing operator, which is often subjective and may pose an unphysical constraint to the problem. Our results show that fault slip on only the Aragonese fault can satisfactorily explain the InSAR data

  10. The impact of uncertain precipitation data on insurance loss estimates using a Flood Catastrophe Model

    NASA Astrophysics Data System (ADS)

    Sampson, C. C.; Fewtrell, T. J.; O'Loughlin, F.; Pappenberger, F.; Bates, P. B.; Freer, J. E.; Cloke, H. L.

    2014-01-01

    Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and re-insurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than commercial products. The model consists of four components, a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge corrected rainfall radar, meteorological re-analysis data (ERA-Interim) and a satellite rainfall product (CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find these loss estimates to be highly sensitive to uncertainties propagated from the driving observational datasets, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.

  11. The impact of uncertain precipitation data on insurance loss estimates using a flood catastrophe model

    NASA Astrophysics Data System (ADS)

    Sampson, C. C.; Fewtrell, T. J.; O'Loughlin, F.; Pappenberger, F.; Bates, P. B.; Freer, J. E.; Cloke, H. L.

    2014-06-01

    Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components, a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (The Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and
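
    Once the hazard module has produced per-event flood depths, the vulnerability and financial steps and the resulting exceedance curve map onto a small amount of code. A schematic sketch; all functions and data here are illustrative, not the paper's implementation:

    ```python
    import numpy as np

    def event_loss(depths_m, values, damage_ratio):
        """Financial loss for one synthetic event: sum over exposed properties
        of value * damage_ratio(inundation depth), where damage_ratio is an
        assumed, monotone vulnerability curve."""
        return sum(v * damage_ratio(d) for d, v in zip(depths_m, values))

    def exceedance_curve(event_losses, simulated_years):
        """Empirical annual exceedance rates from the synthetic event-loss
        table: the k-th largest loss is exceeded ~k times per simulated_years."""
        losses = np.sort(np.asarray(event_losses))[::-1]
        rates = np.arange(1, losses.size + 1) / simulated_years
        return losses, rates
    ```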

  12. Source process of large (M~7) earthquakes in Japan Sea estimated from seismic waveforms and tsunami simulations

    NASA Astrophysics Data System (ADS)

    Murotani, S.; Harada, T.; Satake, K.

    2014-12-01

    Inversion of teleseismic waveforms yielded fault parameters for four M~7 earthquakes that occurred between 1963 and 1983 in the Japan Sea. Tsunami waveforms were simulated from those parameters and compared with the waveforms observed at tide gauges. The eastern margin of the Japan Sea has been considered a nascent plate boundary between the Eurasian and North American plates rather than a typical subduction zone; hence the maximum magnitude of its earthquakes (M < 8) is smaller than in the Pacific Ocean. Nevertheless, several large earthquakes in the last century caused seismic and tsunami damage, such as the 2007 Chuetsu-oki (Mw 6.6), 2007 Noto (Mw 6.7), 1993 South off Hokkaido (Mw 7.7), 1983 Japan Sea (Mw 7.7), 1964 Niigata (Ms 7.5), and 1940 Shakotan-oki (Mw 7.5) earthquakes, and detailed source process studies have been performed for them. Smaller (M~7) earthquakes can also cause seismic and tsunami damage if their hypocenters are near land, yet few analyses exist for earthquakes around M7. We therefore study the characteristics of M~7 earthquakes in the Japan Sea: the 1983 West off Aomori (MJMA 7.1), 1971 West off Sakhalin (MJMA 6.9), 1964 off Oga Peninsula (MJMA 6.9), and 1963 offshore Cape Echizen (MJMA 6.9) earthquakes. The teleseismic waveform inversions yielded reverse-fault mechanisms for all but the 1963 earthquake, which has a strike-slip mechanism. The estimated fault areas are 900 km², 2800 km², 3600 km², and 3600 km², respectively. Tsunami numerical computations were made from the source models obtained by the teleseismic inversions. Tsunamis from the 1983 earthquake were recorded at 32 tide gauge stations along the Japan Sea; amplitudes of the calculated tsunami waveforms are much smaller than the observations. For the 1971 earthquake, the calculated amplitudes are also smaller than the observations at 18 tide gauge stations. For the 1964 earthquake, the amplitudes are

  13. Prediction of earthquake-triggered landslide event sizes

    NASA Astrophysics Data System (ADS)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

    Seismically induced landslides are a major environmental effect of earthquakes and may contribute significantly to related losses. Moreover, in paleoseismology, landslide event sizes are an important proxy for estimating the intensity and magnitude of past earthquakes, allowing seismic hazard assessment to be improved over longer terms. Not only earthquake intensity, but also factors such as fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake occurs. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970), the combination and relative weight of the factors were calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked. One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important
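    A minimal sketch of how such an "event-by-event" factor combination can be computed is given below; the factor scores and weights are placeholders of ours (the calibrated weights, in which 'Fault' carries the most influence, are not reproduced in the abstract).

```python
# Deliberately simple weighted-factor index for landslide event size.
# Scores and weights are illustrative assumptions, not the study's values.
factors = {"intensity": 0.9, "fault": 0.8, "topographic_energy": 0.6,
           "climatic_conditions": 0.4, "surface_geology": 0.5}   # scores, 0-1
weights = {"intensity": 0.30, "fault": 0.30, "topographic_energy": 0.20,
           "climatic_conditions": 0.10, "surface_geology": 0.10}

size_index = sum(factors[k] * weights[k] for k in factors)
print(f"landslide event-size index: {size_index:.2f}")  # 0.72 on a 0-1 scale
```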

  14. Estimating spatial distribution of soil loss over Seyhan River Basin in Turkey

    NASA Astrophysics Data System (ADS)

    İrvem, Ahmet; Topaloğlu, Fatih; Uygur, Veli

    2007-03-01

    The purpose of this study was to investigate the spatial distribution of annual soil loss in the Seyhan River Basin using the USLE model. A geographic information system (GIS) was used to generate maps of the USLE factors: rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover (C) and conservation practices (P). By integrating these maps in GIS, the spatial distribution of soil loss over the Seyhan River Basin was obtained. Annual average soil loss for the basin was 16.38 t ha⁻¹ y⁻¹. Annual soil loss of more than 200 t ha⁻¹ y⁻¹ at the pixel level occurred in the southern region, while the northern region showed lower annual values. These results were verified by comparison with sediment yield measurements in the basin. An area of about 198.25 km² (0.96%) experiences extremely severe erosion risk and needs suitable conservation measures to be adopted on a priority basis. The spatial distribution of erosion risk classes was estimated as 61.03% very low, 8.76% low, 23.52% moderate, 4.03% severe and 1.70% very severe. Thus, the USLE model was used in a GIS environment to identify regions susceptible to water erosion and needing immediate soil conservation planning and application in the Seyhan River Basin in Turkey.
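    For reference, the USLE estimate is the product of the five factors, A = R × K × LS × C × P. The sketch below applies it per pixel in the way a raster GIS overlay does; the factor values are illustrative, not the Seyhan data.

```python
# Per-pixel USLE soil loss, A = R * K * LS * C * P.
# The sample factor values below are illustrative, not the Seyhan data.
import numpy as np

R  = np.array([[900.0, 950.0]])   # rainfall erosivity (MJ mm ha^-1 h^-1 y^-1)
K  = np.array([[0.03, 0.04]])     # soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
LS = np.array([[1.2, 4.5]])       # slope length and steepness (dimensionless)
C  = np.array([[0.10, 0.45]])     # cover factor
P  = np.array([[1.0, 1.0]])       # conservation practice factor

A = R * K * LS * C * P            # annual soil loss (t ha^-1 y^-1) per pixel
print(A)                          # e.g. [[ 3.24 76.95]]
```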

  15. Estimates of Workers with Noise-Induced Hearing Loss and Population at Risk

    NASA Astrophysics Data System (ADS)

    Miyakita, T.; Ueda, A.

    1997-08-01

    Towards the goal of protecting workers from damage due to noise exposure, a vast store of knowledge has been generated about its nature, etiology and time course. There still exists, however, a strong need to reclarify the locations, nature and magnitude of the problem of noise-induced hearing loss (NIHL). Based on the rate of positive results in a hearing screening test in the workplace, this paper presents an attempt to estimate the total number of workers with more than 40 dB hearing loss at 4 kHz caused by occupational noise exposure. The estimated values in major industry groups were as follows: about 780 000 in manufacturing; 410 000 in construction; 360 000 in agriculture, forestry and fishing; and around 2 million in total. Although it is rather difficult to estimate the number of workers exposed to noise above 85 dB(A), it may be reasonable to believe that at least several million workers exposed to noise should be covered by the 1992 guidelines for the prevention of noise hazards.
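    The scaling behind such figures is a simple product of the exposed workforce and the screening-positive rate, as sketched below; the workforce sizes and rates are hypothetical placeholders chosen only to reproduce the order of magnitude quoted above.

```python
# Sketch of the scaling used above: multiply each industry's noise-exposed
# workforce by the rate of positive screening results. All numbers are
# hypothetical placeholders, not the paper's data.
workforces = {          # noise-exposed workers per industry group
    "manufacturing": 6_000_000,
    "construction": 3_400_000,
    "agriculture_forestry_fishing": 3_000_000,
}
positive_rate = {       # fraction with >40 dB loss at 4 kHz in screening
    "manufacturing": 0.13,
    "construction": 0.12,
    "agriculture_forestry_fishing": 0.12,
}
estimates = {k: round(workforces[k] * positive_rate[k]) for k in workforces}
print(estimates, "total:", sum(estimates.values()))
```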

  16. Estimating the loss in expectation of life due to cancer using flexible parametric survival models.

    PubMed

    Andersson, Therese M-L; Dickman, Paul W; Eloranta, Sandra; Lambe, Mats; Lambert, Paul C

    2013-12-30

    A useful summary measure for survival data is the expectation of life, which is calculated by obtaining the area under a survival curve. The loss in expectation of life due to a certain type of cancer is the difference between the expectation of life in the general population and the expectation of life among the cancer patients. This measure is little used in practice because its estimation generally requires extrapolation of both the expected and observed survival. A parametric distribution can be used for extrapolation of the observed survival, but it is difficult to find a distribution that captures the underlying shape of the survival function after the end of follow-up. In this paper, we base our extrapolation on relative survival, because it is more stable and reliable. Relative survival is defined as the observed survival divided by the expected survival, and its mortality analogue is excess mortality. Approaches have been suggested for extrapolation of relative survival within life-table data, by assuming that the excess mortality has reached zero (statistical cure) or has stabilized to a constant. We propose the use of flexible parametric survival models for relative survival, which enables estimation of the loss in expectation of life from individual-level data by making these assumptions or by extrapolating the estimated linear trend at the end of follow-up. We have evaluated the extrapolation from this model using data on four types of cancer, and the results agree well with observed data.
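    In symbols, with S*(t) the expected (general-population) survival and R(t) the modelled relative survival, the loss in expectation of life is the difference of the two areas; this is our rendering of the quantity described above:

```latex
% Observed survival is reconstructed as S(t) = R(t) S*(t), with R(t)
% extrapolated beyond the end of follow-up from the flexible parametric model.
\mathrm{LEL} \;=\; \int_{0}^{\infty} S^{*}(t)\,\mathrm{d}t
          \;-\; \int_{0}^{\infty} R(t)\,S^{*}(t)\,\mathrm{d}t
```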

  17. An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1994-01-01

    Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes) and specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods that applies classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses, based on the steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). In a preliminary verification, REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary-layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with the results of these more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons suggest an alternative to traditional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.

  18. Completeness of the fossil record: Estimating losses due to small body size

    NASA Astrophysics Data System (ADS)

    Cooper, Roger A.; Maxwell, Phillip A.; Crampton, James S.; Beu, Alan G.; Jones, Craig M.; Marshall, Bruce A.

    2006-04-01

    Size bias in the fossil record limits its use for interpreting patterns of past biodiversity and ecological change. Using comparative size frequency distributions of exceptionally good regional records of New Zealand Holocene and Cenozoic Mollusca in museum archive collections, we derive first-order estimates of the magnitude of the bias against small body size and the effect of this bias on completeness of the fossil record. Our database of 3907 fossil species represents an original living pool of 9086 species, from which ~36% have been removed by size culling, 27% from the smallest size class (<5 mm). In contrast, non-size-related losses compose only 21% of the total. In soft rocks, the loss of small taxa can be reduced by nearly 50% through the employment of exhaustive collection and preparation techniques.

  19. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  20. Rapidly Estimated Seismic Source Parameters for the 16 September 2015 Illapel, Chile M w 8.3 Earthquake

    NASA Astrophysics Data System (ADS)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo; Koper, Keith D.

    2016-02-01

    On 16 September 2015, a great (Mw 8.3) interplate thrust earthquake ruptured offshore Illapel, Chile, producing a 4.7-m local tsunami. The last major rupture in the region was a 1943 MS 7.9 event. Seismic methods for rapidly characterizing the source process, of value for tsunami warning, were applied. The source moment tensor could be obtained robustly by W-phase inversion both within minutes (Chilean researchers had a good solution using regional data within 5 min) and within an hour using broadband seismic data. Short-period teleseismic P wave back-projections indicate northward rupture expansion from the hypocenter at a modest rupture expansion velocity of 1.5-2.0 km/s. Finite-fault inversions of teleseismic P and SH waves using that range of rupture velocities and a range of dips from 16°, consistent with the local slab geometry and some moment tensor solutions, to 22°, consistent with long-period moment tensor inversions, indicate a 180- to 240-km bilateral along-strike rupture zone with larger slip northwest to north of the epicenter (with peak slip of 7-10 m). Using a shallower fault model dip shifts slip seaward toward the trench, while a steeper dip moves it closer to the coastline. Slip separates into two patches as the assumed rupture velocity increases. In all cases, localized ~5 m slip extends down-dip below the coast north of the epicenter. The seismic moment estimates for the range of faulting parameters considered vary from 3.7 × 10²¹ N·m (dip 16°) to 2.7 × 10²¹ N·m (dip 22°), the static stress drop estimates range from 2.6 to 3.5 MPa, and the radiated seismic energy, up to 1 Hz, is about 2.2-3.15 × 10¹⁶ J.
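    As a consistency check on the quoted numbers, the standard moment-magnitude relation (with M0 in N·m) recovers the reported Mw from both end-member moments:

```latex
M_w = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right):\qquad
M_0 = 3.7\times10^{21}\,\mathrm{N\,m} \Rightarrow M_w \approx 8.3,\qquad
M_0 = 2.7\times10^{21}\,\mathrm{N\,m} \Rightarrow M_w \approx 8.2
```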

  1. Regional Estimates of Drought-Induced Tree Canopy Loss across Texas

    NASA Astrophysics Data System (ADS)

    Schwantes, A.; Swenson, J. J.; González-Roglich, M.; Johnson, D. M.; Domec, J. C.; Jackson, R. B.

    2015-12-01

    The severe drought of 2011 killed millions of trees across the state of Texas. Drought-induced tree mortality can have significant impacts on carbon cycling, regional biophysics, and community composition. We quantified canopy cover loss across the state using remotely sensed imagery from before and after the drought at multiple scales. First, we classified ~200 orthophotos (1-m spatial resolution) from the National Agriculture Imagery Program, using a supervised maximum likelihood classification. The area of canopy cover loss in these classifications was highly correlated (R² = 0.8) with ground estimates of canopy cover loss measured in 74 plots across 15 different sites in Texas. These 1-m orthophoto classifications were then used to calibrate and validate coarser scale (30-m) Landsat imagery to create wall-to-wall tree canopy cover loss maps across the state of Texas. We quantified percent dead and live canopy within each Landsat pixel to create continuous maps of dead and live tree cover, using two approaches: (1) a zero-inflated beta distribution model and (2) a random forest algorithm. Widespread canopy loss occurred across all the major natural systems of Texas, with the Edwards Plateau region most affected. In this region, on average, 10% of the forested area was lost to the 2011 drought. We also identified climatic thresholds that controlled the spatial distribution of tree canopy loss across the state. Surprisingly, however, there were many local hot spots of canopy loss, suggesting that climatic factors alone cannot explain the spatial patterns; other factors related to soil, landscape, management, and stand density likely also played a role. As extreme droughts are predicted to become more frequent with climate change, it will be important to develop methods that can detect the associated drought-induced tree mortality across large regions. These maps could then be used (1) to quantify impacts to carbon cycling and regional

  2. Regional economic activity and absenteeism: a new approach to estimating the indirect costs of employee productivity loss.

    PubMed

    Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron

    2015-02-01

    This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers, and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using a method based on output per labor hour and extends employer-level results to the region. This new approach was applied to the employed populations of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of the employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.
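    The core of an output-per-labor-hour costing is a one-line calculation, sketched below with hypothetical inputs (the abstract does not disclose the carriers' actual hours or output):

```python
# Sketch of an output-per-labor-hour costing of absenteeism, as opposed to
# the wage-based human capital approach. All inputs are hypothetical.
def absenteeism_cost(absence_hours, annual_output, annual_labor_hours):
    """Lost output = hours absent x output produced per labor hour."""
    output_per_hour = annual_output / annual_labor_hours
    return absence_hours * output_per_hour

# A firm producing $500M/yr with 2M labor hours loses $250 per hour of absence.
print(absenteeism_cost(absence_hours=27_200,
                       annual_output=500e6,
                       annual_labor_hours=2e6))  # -> 6,800,000.0
```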

  3. New constraints on the rupture process of the 1999 August 17 Izmit earthquake deduced from estimates of stress glut rate moments

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.

    2004-12-01

    This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake, in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strongly unilateral rupture with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to explain the scatter among the source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake and (2) the apparent rupture velocity decreased on this segment.

  4. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisitions. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or are water of seas and lakes. The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using the SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends to the west past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul.
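    The 28 mm fringe spacing follows from the radar geometry: ERS operates at C band (wavelength about 56.6 mm) and the signal travels the path twice, so one full phase cycle corresponds to half a wavelength of line-of-sight motion; with ERS's roughly 23° incidence angle, a purely horizontal displacement of about 70 mm projects to one fringe:

```latex
\Delta r_{\mathrm{fringe}} = \frac{\lambda}{2}
\approx \frac{56.6\ \mathrm{mm}}{2} \approx 28\ \mathrm{mm},\qquad
\Delta x_{\mathrm{horiz}} \approx \frac{\lambda/2}{\sin 23^{\circ}}
\approx 72\ \mathrm{mm}
```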

  5. On Assessment and Estimation of Potential Losses due to Land Subsidence in Urban Areas of Indonesia

    NASA Astrophysics Data System (ADS)

    Abidin, Hasanuddin Z.; Andreas, Heri; Gumilar, Irwan; Sidiq, Teguh P.

    2016-04-01

    Because the factors contributing to subsidence are also interrelated, accurate quantification of the potential losses caused by land subsidence in urban areas is not an easy task to accomplish. The direct losses are easier to estimate than the indirect losses. For example, the direct losses due to land subsidence in Bandung were estimated to be at least 180 million USD, but the indirect losses are still unknown.

  6. Use of plume mapping data to estimate chlorinated solvent mass loss

    USGS Publications Warehouse

    Barbaro, J.R.; Neupane, P.P.

    2006-01-01

    Results from a plume mapping study from November 2000 through February 2001 in the sand-and-gravel surficial aquifer at Dover Air Force Base, Delaware, were used to assess the occurrence and extent of chlorinated solvent mass loss by calculating mass fluxes across two transverse cross sections and by observing changes in concentration ratios and mole fractions along a longitudinal cross section through the core of the plume. The plume mapping investigation was conducted to determine the spatial distribution of chlorinated solvents migrating from former waste disposal sites. Vertical contaminant concentration profiles were obtained with a direct-push drill rig and multilevel piezometers. These samples were supplemented with additional ground water samples collected with a minipiezometer from the bed of a perennial stream downgradient of the source areas. Results from the field program show that the plume, consisting mainly of tetrachloroethylene (PCE), trichloroethene (TCE), and cis-1,2-dichloroethene (cis-1,2-DCE), was approximately 670 m in length and 120 m in width, extended across much of the 9- to 18-m thickness of the surficial aquifer, and discharged to the stream in some areas. The analyses of the plume mapping data show that losses of the parent compounds, PCE and TCE, were negligible downgradient of the source. In contrast, losses of cis-1,2-DCE, a daughter compound, were observed in this plume. These losses very likely resulted from biodegradation, but the specific reaction mechanism could not be identified. This study demonstrates that plume mapping data can be used to estimate the occurrence and extent of chlorinated solvent mass loss from biodegradation and assess the effectiveness of natural attenuation as a remedial measure.

  7. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of a standard observer for discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented for obtaining the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed on an experimental test-bed fabricated for evaluating estimation techniques that operate over wireless networks under realistic radio channel conditions.
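    The flavor of such an observer can be conveyed with a toy scalar example: the correction term is gated by a Bernoulli variable that models packet loss, which is the random data loss mechanism the paper addresses. The gains, noise levels and arrival probability below are arbitrary demo values, not the paper's design.

```python
# Toy illustration (not the paper's observer design): a scalar Luenberger
# observer whose correction is gated by a Bernoulli packet-arrival variable.
import numpy as np

rng = np.random.default_rng(0)
a, L, p_receive = 0.95, 0.6, 0.7     # plant pole, observer gain, arrival prob.
x, x_hat = 1.0, 0.0

for k in range(50):
    w, v = rng.normal(0, 0.01), rng.normal(0, 0.05)
    x = a * x + w                     # true state
    y = x + v                         # measurement
    gamma = rng.random() < p_receive  # True if the packet arrives
    # the correction term is applied only when data is received
    x_hat = a * x_hat + (L * (y - x_hat) if gamma else 0.0)

print(f"final estimation error: {abs(x - x_hat):.4f}")
```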

  8. Source parameters of the Pinotepa Nacional, Mexico, earthquake of 27 March, 1996 (Mw = 5.4) estimated from near-field recordings of a single station

    NASA Astrophysics Data System (ADS)

    Singh, S.K.; Pacheco, J.; Courboulex, F.; Novelo, D.A.

    We use near-field accelerograms recorded by the very broadband seismographic station PNIG to locate the Pinotepa Nacional earthquake of 27 March, 1996 (Mw = 5.4) and to determine its source parameters. The P and S arrival times at PNIG, the azimuth of arrival of the P wave, and the angle of incidence of the P wave at the free surface permit determination of the location (16.365° N, 98.303° W, depth = 18 km) and the origin time (12:34:48.35) of the earthquake. The displacement seismograms of the earthquake clearly show contributions from the near-field terms. We compute a suite of synthetic seismograms for focal mechanisms in the vicinity of the mechanism reported by the U.S. Geological Survey (USGS) and compare them with the observed seismograms at PNIG. The solution whose synthetics best fit the observed records has the following parameters: seismic moment M0 = 1.2 × 10²⁴ dyne·cm; source time function: a triangular pulse of 0.9 s duration; fault plane: strike = 291°, dip = 10°, and rake = 80°. The location and source parameters obtained from the analysis of the PNIG records differ significantly from those reported by the USGS. This demonstrates again, as shown by previous researchers, that high-quality recordings from a single near-field station can considerably improve the estimation of the source parameters of an earthquake.

  9. Comparison of ground motions estimated from prediction equations and from observed damage during the M = 4.6 1983 Liège earthquake (Belgium)

    NASA Astrophysics Data System (ADS)

    García Moreno, D.; Camelbeeck, T.

    2013-08-01

    On 8 November 1983 an earthquake of magnitude 4.6 damaged more than 16 000 buildings in the region of Liège (Belgium). The extraordinary damage produced by this earthquake, considering its moderate magnitude, is extremely well documented, giving the opportunity to compare the consequences of a recent moderate earthquake in a typical old city of Western Europe with scenarios obtained by combining strong ground motions and vulnerability modelling. The present study compares 0.3 s spectral accelerations estimated from ground motion prediction equations typically used in Western Europe with those obtained locally by applying the statistical distribution of damaged masonry buildings to two fragility curves, one derived from the HAZUS programme of FEMA (FEMA, 1999) and another developed for high-vulnerability buildings by Lang and Bachmann (2004), and to a method proposed by Faccioli et al. (1999) relating the seismic vulnerability of buildings to damage and ground motions. The results of this comparison reveal good agreement between the maximum spectral accelerations calculated from these vulnerability and fragility curves and those predicted from the ground motion prediction equations, suggesting peak ground accelerations for the epicentral area of the 1983 earthquake of 0.13-0.20 g (g: gravitational acceleration).

  10. Estimating Loss-of-Coolant Accident Frequencies for the Standardized Plant Analysis Risk Models

    SciTech Connect

    S. A. Eide; D. M. Rasmuson; C. L. Atwood

    2008-09-01

    The U.S. Nuclear Regulatory Commission maintains a set of risk models covering the U.S. commercial nuclear power plants. These standardized plant analysis risk (SPAR) models include several loss-of-coolant accident (LOCA) initiating events such as small (SLOCA), medium (MLOCA), and large (LLOCA). All of these events involve a loss of coolant inventory from the reactor coolant system. In order to maintain a level of consistency across these models, initiating event frequencies generally are based on plant-type average performance, where the plant types are boiling water reactors and pressurized water reactors. For certain risk analyses, these plant-type initiating event frequencies may be replaced by plant-specific estimates. Frequencies for SPAR LOCA initiating events previously were based on results presented in NUREG/CR-5750, but the newest models use results documented in NUREG/CR-6928. The estimates in NUREG/CR-6928 are based on historical data from the initiating events database for pressurized water reactor SLOCA or an interpretation of results presented in the draft version of NUREG-1829. The information in NUREG-1829 can be used several ways, resulting in different estimates for the various LOCA frequencies. Various ways NUREG-1829 information can be used to estimate LOCA frequencies were investigated and this paper presents two methods for the SPAR model standard inputs, which differ from the method used in NUREG/CR-6928. In addition, results obtained from NUREG-1829 are compared with actual operating experience as contained in the initiating events database.

  11. Demand surge following earthquakes

    USGS Publications Warehouse

    Olsen, Anna H.

    2012-01-01

    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  12. Development of an Earthquake Impact Scale

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Marano, K. D.; Jaiswal, K. S.

    2009-12-01

    With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, domestic (U.S.) and international earthquake responders are reconsidering their automatic alert and activation levels as well as their response procedures. To help facilitate rapid and proportionate earthquake response, we propose and describe an Earthquake Impact Scale (EIS) founded on two alerting criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is more appropriate for most global events. Simple thresholds, derived from systematic analysis of past earthquake impact and response levels, turn out to be quite effective in communicating the predicted impact and response level of an event, characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). The corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1000, respectively. For damage impact, the yellow, orange, and red thresholds are triggered by estimated losses exceeding $1M, $10M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness dominate in countries where vernacular building practices typically lend themselves to high collapse and casualty rates, and it is these impacts that set prioritization for international response. In contrast, it is often financial and overall societal impacts that trigger the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and associated fatalities. Any newly devised alert protocols, whether financial or casualty based, must be intuitive and consistent with established lexicons and procedures. In this analysis, we make an attempt
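    The dual-criteria alerting logic reduces to a threshold lookup; the sketch below encodes the thresholds quoted above, with the function interface being our own invention rather than PAGER code.

```python
# The dual alerting criteria described above, mapped to PAGER-style colors.
# Thresholds follow the abstract; the interface is our own sketch.
def eis_alert(fatalities=None, losses_usd=None):
    """Return the higher of the fatality- and loss-based alert levels."""
    def level(value, thresholds):       # thresholds: (yellow, orange, red)
        if value is None:
            return 0
        return sum(value >= t for t in thresholds)
    f = level(fatalities, (1, 100, 1_000))
    d = level(losses_usd, (1e6, 1e7, 1e9))
    return ["green", "yellow", "orange", "red"][max(f, d)]

print(eis_alert(fatalities=0, losses_usd=5e6))   # yellow (domestic criterion)
print(eis_alert(fatalities=250))                 # orange (global criterion)
```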

  13. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies’ Functions

    PubMed Central

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-01

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole. PMID:26805869

  14. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies' Functions.

    PubMed

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-22

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies' functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident's origin and other indirect losses. In the valuation of damage to people's life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water's recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole.

  15. Sound absorption coefficient in situ: an alternative for estimating soil loss factors.

    PubMed

    Freire, Rosane; Meletti de Abreu, Marco Henrique; Okada, Rafael Yuri; Soares, Paulo Fernando; Granhen Tavares, Célia Regina

    2015-01-01

    The relationship between the sound absorption coefficient and factors of the Universal Soil Loss Equation (USLE) was determined in a section of the Maringá Stream basin, Paraná State, by using erosion plots. In the field, four erosion plots were built on a reduced scale, with dimensions of 2.0 × 12.5 m. With respect to plot coverage, one was kept as bare soil and the others contained forage grass (Brachiaria), corn and wheat crops, respectively. Planting was performed without any type of conservation practice in an area with a 9% slope. A sedimentation tank was placed at the end of each plot to collect the material transported. For the acoustic system, pink noise was used in the proposed monitoring to collect information on incident and reflected sound pressure levels. In general, the soil loss values obtained confirmed that 94.3% of the material exported to the basin water came from the bare soil plot, 2.8% from the corn plot, 1.8% from the wheat plot, and 1.1% from the forage grass plot. With respect to the acoustic monitoring, results indicated that at 16 kHz the erosion plot coverage type had a significant influence on the sound absorption coefficient. High correlation coefficients were found in estimations of the A and C factors of the USLE, confirming that the acoustic technique is feasible for the determination of soil loss directly in the field.

  16. Estimated Lifetime Medical and Work-Loss Costs of Fatal Injuries--United States, 2013.

    PubMed

    Florence, Curtis; Simon, Thomas; Haegerich, Tamara; Luo, Feijun; Zhou, Chao

    2015-10-02

    Injury-associated deaths have substantial economic consequences. In 2013, unintentional injury was the fourth leading cause of death, suicide was the tenth, and homicide was the sixteenth; these three causes accounted for approximately 187,000 deaths in the United States. To assess the economic impact of fatal injuries, CDC analyzed death data from the National Vital Statistics System for 2013, along with cost of injury data using the Web-Based Injury Statistics Query and Reporting System. This report updates a previous study that analyzed death data from the year 2000, and employs recently revised methodology for determining the costs of injury outcomes, which uses the most current economic data and incorporates improvements for estimating medical costs associated with injury. Number of deaths, crude and age-specific death rates, and total lifetime work-loss costs and medical costs were calculated for fatal injuries by sex, age group, intent (intentional versus unintentional), and mechanism of injury. During 2013, the rate of fatal injury was 61.0 per 100,000 population, with combined medical and work-loss costs exceeding $214 billion. Costs from fatal injuries represent approximately one third of the total $671 billion medical and work-loss costs associated with all injuries in 2013. The magnitude of the economic burden associated with injury-associated deaths underscores the need for effective prevention.

  17. Photogrammetrically Derived Estimates of Glacier Mass Loss in the Upper Susitna Drainage Basin, Alaska Range, Alaska

    NASA Astrophysics Data System (ADS)

    Wolken, G. J.; Whorton, E.; Murphy, N.

    2014-12-01

    Glaciers in Alaska are currently experiencing some of the highest rates of mass loss on Earth, with mass wastage rates accelerating during the last several decades. Glaciers, and other components of the hydrologic cycle, are expected to continue to change in response to anticipated future atmospheric warming, thus affecting the quantity and timing of river runoff. This study uses sequential digital elevation model (DEM) analysis to estimate the mass loss of glaciers in the upper Susitna drainage basin, Alaska Range, for the purpose of validating model simulations of past runoff changes. We use mainly stereo optical airborne and satellite data for several epochs between 1949 and 2014, and employ traditional stereo-photogrammetric and structure-from-motion processing techniques to derive DEMs of the upper Susitna basin glaciers. This work aims to improve the record of glacier change in the central Alaska Range and serves as a critical validation dataset for a hydrological model that simulates the potential effects of future glacier mass loss on changes in river runoff over the lifespan of the proposed Susitna-Watana Hydroelectric Project.
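    The geodetic method rests on simple arithmetic: sum the DEM elevation differences, multiply by pixel area to get volume change, and apply a density assumption to convert to mass. The sketch below illustrates this with made-up numbers; the commonly used 850 kg/m³ conversion is an assumption of ours, not a value from this study.

```python
# Geodetic mass-balance arithmetic behind DEM differencing. The elevation
# changes, pixel size, and density conversion are illustrative assumptions.
import numpy as np

dh = np.array([[-1.2, -0.8], [-2.1, 0.3]])  # elevation change (m) per pixel
pixel_area = 30.0 * 30.0                    # m^2 (e.g. 30 m DEM posting)
rho = 850.0                                 # kg/m^3, volume-to-mass conversion

volume_change = dh.sum() * pixel_area       # m^3
mass_change_t = volume_change * rho / 1000  # tonnes
print(f"{mass_change_t:,.0f} t over {dh.size} pixels")
```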

  18. Estimating nitrogen losses in furrow irrigated soil amended by compost using HYDRUS-2D model

    NASA Astrophysics Data System (ADS)

    Iqbal, Shahid; Guber, Andrey; Zaman Khan, Haroon; Ullah, Ehsan

    2014-05-01

    Furrow irrigation commonly results in high nitrogen (N) losses from the soil profile via deep infiltration. Estimating and reducing such losses is not a trivial task because furrow irrigation creates a highly nonuniform distribution of soil water that leads to preferential water and N fluxes in the soil profile, and direct measurements of such fluxes are impractical. The objective of this study was to assess the applicability of the HYDRUS-2D model for estimating the nitrogen balance in manure-amended soil under furrow irrigation. Field experiments were conducted in a sandy loam soil amended with poultry manure compost (PMC) and pressmud compost (PrMC) fertilizers. The PMC and PrMC contained 2.5% and 0.9% N and were applied at 5 rates: 2, 4, 6, 8 and 10 ton/ha. Plots were irrigated starting from the 26th day after planting using furrows with a 1×1 ridge-to-furrow aspect ratio. Irrigation depths were 7.5 cm and the interval between irrigations varied from 8 to 15 days. Results of the field experiments showed that approximately the same corn yield was obtained with considerably higher N application rates using PMC than using PrMC as a fertilizer. The HYDRUS-2D model was implemented to evaluate N fluxes in soil amended with PMC and PrMC fertilizers. Nitrogen exchange between two pools of organic N (compost and soil) and two pools of mineral N (soil NH4-N and soil NO3-N) was modeled using mineralization and nitrification reactions. Sources of mineral N losses from the soil profile included denitrification, root N uptake and leaching with deep infiltration of water. HYDRUS-2D simulations showed that the observed increases in N root water uptake and corn yields associated with compost application could not be explained by the amount of N added to the soil profile with the compost. Predicted N uptake by roots significantly underestimated the field data. Good agreement between simulated and field-estimated values of N root uptake was achieved when the rate of organic N mineralization was increased

  19. Validity of weight loss to estimate improvement in body composition in individuals attending a wellness center

    PubMed Central

    Cruz, Paulina; Johnson, Bruce D.; Karpinski, Susan C.; Limoges, Katherine A.; Warren, Beth A.; Olsen, Kerry D.; Somers, Virend K.; Jensen, Michael D.; Clark, Matthew M.; Lopez-Jimenez, Francisco

    2014-01-01

    The accuracy of weight loss in estimating successful changes in body composition (BC), namely fat mass loss, is not known and was addressed in our study. To assess the correlation between change in body weight and change in fat mass (FM), fat % and fat-free mass (FFM), 465 participants (41% male; 41 ± 13 years), who met the criteria for weight change at a wellness center, underwent air-displacement plethysmography. Body weight and BC were measured at the same time. We categorized the change in body weight, FM and FFM as an increase if there was >1 kg gain, a decrease if there was >1 kg loss and no change if the difference was ≤1 kg. We estimated the diagnostic performance of weight change to identify improvement in BC. After a median time of 132 days, of the 255 people who lost >1 kg of weight, 216 (84.7%) had lost >1 kg of FM, but 69 (27.1%) had lost >1 kg of FFM. Of the 143 people with no weight change, 42 (29.4%) had actually lost >1 kg of FM. Of the 67 who gained >1 kg of weight at follow-up, in 23 (34.3%) this was due to an increase in FFM but not in FM. Weight change had a negative predictive value (NPV) of 73%. Mean weight change was 2.4 kg. Our results indicate that favorable improvements in BC may go undetected in almost 1/3 of people whose weight remains the same and in 1/3 of people who gain weight after attending a wellness center. These results underscore the potential role of BC measurements in people attempting lifestyle changes. PMID:21566566
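    The ±1 kg categorization and the negative-predictive-value check can be written out as below; the helper names are ours, and the counts from the no-weight-change subgroup give ≈0.71, close to (but not exactly) the 73% NPV reported, which presumably reflects the exact analysis population.

```python
# Sketch of the paper's +/-1 kg categorization and an NPV check.
# Helper names are ours, not the authors'.
def categorize(delta_kg, tol=1.0):
    if delta_kg > tol:
        return "increase"
    if delta_kg < -tol:
        return "decrease"
    return "no change"

def npv(true_negatives, false_negatives):
    """NPV: fraction of 'no FM improvement' calls that are correct."""
    return true_negatives / (true_negatives + false_negatives)

print(categorize(-2.4))        # 'decrease' for a 2.4 kg loss
print(round(npv(101, 42), 2))  # ~0.71 from the no-weight-change subgroup
```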

  20. Estimating Annual Soil Carbon Loss in Agricultural Peatland Soils Using a Nitrogen Budget Approach

    PubMed Central

    Kirk, Emilie R.; van Kessel, Chris; Horwath, William R.; Linquist, Bruce A.

    2015-01-01

    Around the world, peatland degradation and soil subsidence is occurring where these soils have been converted to agriculture. Since initial drainage in the mid-1800s, continuous farming of such soils in the California Sacramento-San Joaquin Delta (the Delta) has led to subsidence of up to 8 meters in places, primarily due to soil organic matter (SOM) oxidation and physical compaction. Rice (Oryza sativa) production has been proposed as an alternative cropping system to limit SOM oxidation. Preliminary research on these soils revealed high N uptake by rice in N fertilizer omission plots, which we hypothesized was the result of SOM oxidation releasing N. Testing this hypothesis, we developed a novel N budgeting approach to assess annual soil C and N loss based on plant N uptake and fallow season N mineralization. Through field experiments examining N dynamics during growing season and winter fallow periods, a complete annual N budget was developed. Soil C loss was calculated from SOM-N mineralization using the soil C:N ratio. Surface water and crop residue were negligible in the total N uptake budget (3-4% combined). Shallow groundwater contributed 24-33%, likely representing subsurface SOM-N mineralization. Assuming 6 and 25 kg N ha⁻¹ from atmospheric deposition and biological N2 fixation, respectively, our results suggest 77-81% of plant N uptake (129-149 kg N ha⁻¹) was supplied by SOM mineralization. Considering a range of N uptake efficiency from 50-70%, estimated net C loss ranged from 1149-2473 kg C ha⁻¹. These findings suggest that rice systems, as currently managed, reduce the rate of C loss from organic delta soils relative to other agricultural practices. PMID:25822494

  1. Estimating annual soil carbon loss in agricultural peatland soils using a nitrogen budget approach.

    PubMed

    Kirk, Emilie R; van Kessel, Chris; Horwath, William R; Linquist, Bruce A

    2015-01-01

    Around the world, peatland degradation and soil subsidence is occurring where these soils have been converted to agriculture. Since initial drainage in the mid-1800s, continuous farming of such soils in the California Sacramento-San Joaquin Delta (the Delta) has led to subsidence of up to 8 meters in places, primarily due to soil organic matter (SOM) oxidation and physical compaction. Rice (Oryza sativa) production has been proposed as an alternative cropping system to limit SOM oxidation. Preliminary research on these soils revealed high N uptake by rice in N fertilizer omission plots, which we hypothesized was the result of SOM oxidation releasing N. Testing this hypothesis, we developed a novel N budgeting approach to assess annual soil C and N loss based on plant N uptake and fallow season N mineralization. Through field experiments examining N dynamics during growing season and winter fallow periods, a complete annual N budget was developed. Soil C loss was calculated from SOM-N mineralization using the soil C:N ratio. Surface water and crop residue were negligible in the total N uptake budget (3-4% combined). Shallow groundwater contributed 24-33%, likely representing subsurface SOM-N mineralization. Assuming 6 and 25 kg N ha⁻¹ from atmospheric deposition and biological N2 fixation, respectively, our results suggest 77-81% of plant N uptake (129-149 kg N ha⁻¹) was supplied by SOM mineralization. Considering a range of N uptake efficiency from 50-70%, estimated net C loss ranged from 1149-2473 kg C ha⁻¹. These findings suggest that rice systems, as currently managed, reduce the rate of C loss from organic delta soils relative to other agricultural practices.
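    The budget arithmetic underlying these numbers is short enough to sketch: scale plant N uptake by uptake efficiency to get plant-available N, subtract the non-SOM inputs, and convert the SOM-derived N to carbon through the soil C:N ratio. The C:N value below is an illustrative assumption; the study derived its own from soil data.

```python
# Nitrogen-budget arithmetic using the abstract's figures.
# The soil C:N ratio is an assumed illustrative value.
uptake_n = 140.0          # kg N/ha taken up by rice (midpoint of 129-149)
deposition_n = 6.0        # kg N/ha from atmospheric deposition
fixation_n = 25.0         # kg N/ha from biological N2 fixation
uptake_efficiency = 0.6   # midpoint of the 50-70% range considered

# N supplied by SOM mineralization = plant demand scaled by uptake efficiency,
# less the non-SOM inputs.
som_n = uptake_n / uptake_efficiency - deposition_n - fixation_n
cn_ratio = 10.0           # assumed soil C:N ratio (illustrative)
c_loss = som_n * cn_ratio
print(f"SOM-N mineralized: {som_n:.0f} kg N/ha; C loss: {c_loss:.0f} kg C/ha")
```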

  2. A new pan-tropical estimate of carbon loss in natural and managed forests in 2000-2012

    NASA Astrophysics Data System (ADS)

    Tyukavina, A.; Baccini, A.; Hansen, M.; Potapov, P.; Stehman, S. V.; Houghton, R. A.; Krylov, A.; Turubanova, S.; Goetz, S. J.

    2015-12-01

    Clearing of tropical forests, which includes semi-permanent conversion of forests to other land uses (deforestation) and more temporary forest disturbances, is a significant source of carbon emissions. Previous estimates of tropical forest carbon loss vary among studies due to differences in definitions, methodologies and data inputs. The best currently available satellite-derived datasets, such as the 30-m forest cover loss map by Hansen et al. (2013), may be used to produce methodologically consistent carbon loss estimates for the entire tropical region, but forest cover loss area derived from maps is biased due to classification errors. In this study we produced an unbiased estimate of forest cover loss area from a validation sample, as suggested by good-practice recommendations. Stratified random sampling was implemented with forest carbon stock strata defined based on Landsat-derived tree canopy cover, height, intactness (Potapov et al., 2008) and forest cover loss (Hansen et al., 2013). The largest difference between the sample-based and Hansen et al. (2013) forest loss area estimates occurred in humid tropical Africa. This result supports the earlier finding (Tyukavina et al., 2013) that Landsat-based forest cover loss maps may significantly underestimate loss area in regions with small-scale forest dynamics while performing well in regions with large industrial forest clearing, such as Brazil and Indonesia (where differences between sample-based and map estimates were within 10%). To produce final carbon loss estimates, sample-based forest loss area estimates for each stratum were related to GLAS-lidar-derived forest biomass (Baccini et al., 2012). Our sample-based results distinguish gross losses of aboveground carbon from natural forests (0.59 PgC/yr), which include primary, mature secondary forests and natural woodlands, and from managed forests (0.43 PgC/yr), which include plantations, agroforestry systems and areas of subsistence agriculture
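    The sample-based estimator is the standard stratified one: within each stratum, the sample proportion of loss scales the stratum area, and stratum biomass converts area to carbon. The sketch below shows the arithmetic with invented strata and values, not the study's data.

```python
# Stratified, sample-based estimation of loss area and carbon loss.
# Stratum areas, sampled loss proportions, and biomass are illustrative.
strata = {
    # name:             (area, Mha; sample loss proportion; biomass, MgC/ha)
    "intact_forest":    (400.0, 0.002, 160.0),
    "degraded_forest":  (250.0, 0.010,  90.0),
    "mapped_loss":      ( 30.0, 0.700, 120.0),
}

area_loss = {k: a * p for k, (a, p, _) in strata.items()}        # Mha
carbon_loss = sum(area_loss[k] * strata[k][2] for k in strata)   # Mha*MgC/ha
print(area_loss)
print(f"gross carbon loss: {carbon_loss:.1f} TgC")               # = 10^12 gC
```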

  3. Strong Motion Prediction Method Using Statistical Green's Function Estimated From K-Net Records and its Application to the Hypothesized Fukuoka Earthquake

    NASA Astrophysics Data System (ADS)

    Kawase, H.; Shigeo Itoh, S.; Kuhara, H.; Matsuo, H.

    2001-12-01

    First we extract statistical characteristics of seismic ground motions from K-Net records observed in the Kyushu region. We select ground motions for earthquakes with shallow depths (<60 km) and moderate magnitudes (>4.5), observed within 200 km of the hypocenters. For the envelope characteristics, we express them by Boore's envelope function (Boore, 1983), identify its model parameters, and then express those parameters as functions of the magnitude M and the hypocentral distance X using two-step regression analysis. For the spectral characteristics, we separate source, path, and site effects from the observed Fourier spectra and express them also as functions of M and X. Once we obtain these statistical parameters, we can synthesize the ground motion hypothetically observed at any K-Net site for an arbitrary source, and we validate the synthetics by comparing them with observed data. Next we use them to predict strong motions for future large earthquakes through the so-called statistical Green's function method. Before predicting ground motions for a hypothesized earthquake, we must test our method against ground motions observed in previous large earthquakes. We first apply the method to the Kagoshima-ken Hokuseibu earthquake (MJMA 6.3), for which strong directivity is observed at one K-Net station. Then we simulate strong motion at the bedrock level during the Hyogo-ken Nanbu earthquake. In both cases the synthetic waveforms match the observations well, demonstrating that we can predict ground motions using our statistical Green's function method if the source is properly specified. Finally, we apply this method to a hypothesized Fukuoka earthquake. Strong motions at the bedrock level are predicted first, and the strong motions at the ground surface are then obtained by 1-D wave propagation theory. We assume the same source scenario as in Kobe. The estimated peak ground velocity (PGV) reaches 100 cm/s at most, which is much less than the PGV observed in Kobe, primarily

  4. Estimation of Strong Ground Motion from a Great Earthquake Mw 8.5 in Central Seismic Gap Region, Himalaya (India) Using Empirical Green's Function Technique

    NASA Astrophysics Data System (ADS)

    Sharma, Babita; Chopra, Sumer; Sutar, Anup Kumar; Bansal, B. K.

    2013-12-01

    In the present study, ground motions for an Mw 8.5 scenario earthquake are estimated at 13 sites in the Kumaun-Garhwal region using the empirical Green's function technique. The recordings of the 1991 Uttarkashi earthquake (Mw 6.8) at these sites are used as the element earthquake. A heterogeneous source model consisting of two asperities is considered for simulating the ground motions. The entire central seismic gap (CSG) can expect accelerations in excess of 100 cm/s², with the NW portion in excess of 400 cm/s² and the SE between 100 and 200 cm/s². The central portion can expect peak ground accelerations (PGA) between 200 and 400 cm/s². The simulations indicate that sites located near the rupture initiation point can expect accelerations in excess of 1 g; in the present analysis, Bhatwari and Uttarkashi can expect ground accelerations in excess of 1 g. The PGA estimates are compared with earlier studies of the same region using different methodologies, and the results are found to be comparable, which places constraints on the expected PGAs in this region. The obtained PGA values can be used to identify vulnerable areas in the central Himalaya, thereby facilitating the planning, design and construction of new structures and the strengthening of existing structures in the region.

  5. Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization.

    PubMed

    Tangkaratt, Voot; Xie, Ning; Sugiyama, Masashi

    2015-01-01

    Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challenging in high-dimensional space. A naive approach to coping with high dimensionality is to first perform dimensionality reduction (DR) and then execute CDE. However, a two-step process does not perform well in practice because the error incurred in the first DR step can be magnified in the second CDE step. In this letter, we propose a novel single-shot procedure that performs CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR as the problem of minimizing a squared-loss variant of conditional entropy, and this is solved using CDE. Thus, an additional CDE step is not needed after DR. We demonstrate the usefulness of the proposed method through extensive experiments on various data sets, including humanoid robot transition and computer art.
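
    For contrast with the letter's single-shot procedure, here is a minimal sketch of the naive two-step baseline it argues against: unsupervised DR (PCA, which ignores the output) followed by CDE via a kernel density ratio p(z, y)/p(z). The toy data and bandwidths are assumptions of this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

# Toy data: y depends on one direction of 10-D x; noise is heteroskedastic.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = np.sin(X[:, 0]) + rng.standard_normal(500) * (0.1 + 0.5 * (X[:, 0] > 0))

# Step 1: DR that knows nothing about y -- exactly the weakness the letter
# targets, since any error here is magnified in the CDE step.
z = PCA(n_components=1).fit_transform(X)

# Step 2: CDE as a ratio of kernel density estimates, p(y|z) = p(z, y)/p(z).
kde_joint = KernelDensity(bandwidth=0.3).fit(np.column_stack([z[:, 0], y]))
kde_marginal = KernelDensity(bandwidth=0.3).fit(z)

def conditional_density(z0, y_grid):
    pts = np.column_stack([np.full_like(y_grid, z0), y_grid])
    return np.exp(kde_joint.score_samples(pts)) / np.exp(
        kde_marginal.score_samples([[z0]]))

print(conditional_density(0.0, np.linspace(-2.0, 2.0, 5)))
```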

  6. ESTIMATING SURFACE RUNOFF LOSS OF DISSOLVED PHOSPHORUS FROM MANURE APPLICATIONS TO CROPLAND FOR THE WISCONSIN PHOSPHOROUS INDEX

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Wisconsin Phosphorus (P) Index is a field-level runoff P loss risk assessment tool for evaluating agricultural management practices. It assigns an annual risk ranking to a field by estimating annual sediment-bound and dissolved P losses to the nearest surface water. On cropland with no recent ma...

  7. Source estimate and tsunami forecast from far-field deep-ocean tsunami waveforms—The 27 February 2010 Mw 8.8 Maule earthquake

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Masahiro; Watada, Shingo; Fujii, Yushiro; Satake, Kenji

    2016-01-01

    We inverted the 2010 Maule earthquake tsunami waveforms recorded at DART (Deep-ocean Assessment and Reporting of Tsunamis) stations in the Pacific Ocean, taking into account the effects of seawater compressibility, the elasticity of the solid Earth, and gravitational potential change. These effects slow down the tsunami and consequently move the inverted slip offshore, in the updip direction, consistent with the slip distribution obtained by a joint inversion of DART, tide gauge, GPS, and coastal geodetic data. Separate inversions of only near-field DART data and only far-field DART data produce similar slip distributions. The former demonstrates that accurate tsunami arrival times and waveforms of trans-Pacific tsunamis can be forecast in real time. The latter indicates that if the tsunami source area is as large as that of the 2010 Maule earthquake, the tsunami source can be accurately estimated from far-field deep-ocean tsunami records without near-field data.
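
    Schematically, this kind of tsunami waveform inversion is a linear problem d = Gs: observed waveforms stacked into d, precomputed unit-slip Green's functions (which would carry the compressibility, elasticity and gravity corrections) stacked into G, and subfault slips s recovered, often under a non-negativity constraint. The sketch below uses synthetic G and d; all shapes and values are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_samples, n_subfaults = 2000, 60
G = rng.random((n_samples, n_subfaults))     # stand-in for Green's functions
s_true = np.zeros(n_subfaults)
s_true[10:20] = 2.0                          # "true" slip patch (m)
d = G @ s_true + 0.01 * rng.standard_normal(n_samples)

# Non-negative least squares keeps the slip one-signed, a common
# physical constraint in tsunami and geodetic slip inversions.
s_est, misfit = nnls(G, d)
```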

  8. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function, based on a model of human response over time, is used to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in the initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight-loss medication phentermine or placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
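
    One plausible reading of the ratio construction (hedged, since the abstract does not give the exact formula): each outcome is the previous outcome times a ratio of two-parameter Weibull terms evaluated at consecutive times, with shape and scale fixed at the change point. All numbers below are illustrative.

```python
import numpy as np

def weibull_pdf(t, shape, scale):
    # Two-parameter Weibull density
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

def spre_step(prior_outcome, t_prev, t_next, shape, scale):
    """Step the outcome forward by a Weibull ratio; the published SPRE
    model's exact ratio may differ from this reading."""
    return prior_outcome * weibull_pdf(t_next, shape, scale) / weibull_pdf(t_prev, shape, scale)

weight = 100.0  # kg at the change point (illustrative)
for week in range(1, 6):
    weight = spre_step(weight, week, week + 1, shape=1.2, scale=30.0)
    print(week + 1, round(weight, 2))
```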

  9. Earthquake Risk Assessment and Risk Transfer

    NASA Astrophysics Data System (ADS)

    Liechti, D.; Zbinden, A.; Rüttener, E.

    Research on risk assessment of natural catastrophes is very important for estimating their economic and social impact. The loss potentials of such disasters (e.g., earthquakes and storms) for property owners, insurers and national economies are driven by the hazard and by the damageability (vulnerability) of buildings and infrastructure, and they depend on the ability to transfer these losses to different parties. In addition, the geographic distribution of the exposed values, the uncertainty of building vulnerability and the individual deductible are the main factors determining the size of a loss. The deductible is the key element that steers the distribution of losses between insured and insurer. The risk analysis therefore concentrates on the deductible and the vulnerability of insured buildings and maps their variations to allow efficient decisions. Using stochastic event sets, the corresponding event losses can be modelled as expected loss grades of a Beta probability density function. Based on the deductible and the standard deviation of the expected loss grades, the loss for the insured and for the insurer can be quantified, and the varying impact of the deductible on different geographic regions can be described. This analysis has been carried out for earthquake insurance portfolios with various building types and different deductibles. Besides quantifying loss distributions between insured and insurer based on uncertainty assumptions and deductible considerations, the mapping yields ideas for optimising the risk transfer process and can be used for developing risk mitigation strategies.
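
    A minimal sketch of the deductible split under a Beta-distributed loss grade (damage ratio): the insured retains losses up to the deductible, the insurer pays the excess. Parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

alpha, beta_shape = 0.5, 8.0          # Beta loss-grade parameters (assumed)
value = 1.0e6                         # insured value
deductible = 0.02 * value             # 2% deductible

grades = stats.beta(alpha, beta_shape).rvs(size=200_000, random_state=0)
losses = grades * value
insured_share = np.minimum(losses, deductible)      # retained by the insured
insurer_share = np.maximum(losses - deductible, 0)  # transferred to the insurer

print(insured_share.mean(), insurer_share.mean())
```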

  10. Estimating landslide losses - preliminary results of a seven-State pilot project

    USGS Publications Warehouse

    Highland, Lynn M.

    2006-01-01

    reliable information on economic losses associated with landslides. Each State survey examined the availability, distribution, and inherent uncertainties of economic loss data in its study area. The results provide the basis for identifying the most fruitful methods of collecting landslide loss data nationally, using methods that are consistent and serve common goals. These results can enhance and establish the future directions of scientific investigation priorities by convincingly documenting landslide risks and consequences that are universal throughout the 50 States. This report is organized as follows: a general summary of the pilot project history, goals, and preliminary conclusions from the Lincoln, Neb. workshop is presented first. Internet links are then provided for each State report; the reports appear on the internet in PDF format and have been placed at the end of this open-file report. A reference section follows the reports, and, lastly, an Appendix of categories of landslide loss and sources of loss information is included for the reader's information. Please note: the Oregon Geological Survey has also submitted a preliminary report on indirect loss estimation methodology, which is linked with the others. Each State report is unique and presented in the form in which it was submitted, having been independently peer reviewed by each respective State survey. As such, no universal style or format has been adopted, as there have been no decisions on which inventory methods will be recommended to the 50 States as of this writing. The reports are presented here as information for decision makers and for the record; although several reports provide recommendations on inventory methods that could be adopted nationwide, currently no decisions have been made on adopting a uniform methodology for the States.

  11. Magnetic Resonance Measurement of Turbulent Kinetic Energy for the Estimation of Irreversible Pressure Loss in Aortic Stenosis

    PubMed Central

    Dyverfeldt, Petter; Hope, Michael D.; Tseng, Elaine E.; Saloner, David

    2013-01-01

    OBJECTIVES The authors sought to measure the turbulent kinetic energy (TKE) in the ascending aorta of patients with aortic stenosis and to assess its relationship to irreversible pressure loss. BACKGROUND Irreversible pressure loss caused by energy dissipation in post-stenotic flow is an important determinant of the hemodynamic significance of aortic stenosis. The simplified Bernoulli equation used to estimate pressure gradients often misclassifies the ventricular overload caused by aortic stenosis. The current gold standard for estimation of irreversible pressure loss is catheterization, but this method is rarely used due to its invasiveness. Post-stenotic pressure loss is largely caused by dissipation of turbulent kinetic energy into heat. Recent developments in magnetic resonance flow imaging permit noninvasive estimation of TKE. METHODS The study was approved by the local ethics review board and all subjects gave written informed consent. Three-dimensional cine magnetic resonance flow imaging was used to measure TKE in 18 subjects (4 normal volunteers, 14 patients with aortic stenosis with and without dilation). For each subject, the peak total TKE in the ascending aorta was compared with a pressure loss index. The pressure loss index was based on a previously validated theory relating pressure loss to measures obtainable by echocardiography. RESULTS The total TKE did not appear to be related to global flow patterns visualized based on magnetic resonance-measured velocity fields. The TKE was significantly higher in patients with aortic stenosis than in normal volunteers (p < 0.001). The peak total TKE in the ascending aorta was strongly correlated with the pressure loss index (R2 = 0.91). CONCLUSIONS Peak total TKE in the ascending aorta correlated strongly with irreversible pressure loss estimated by a well-established method. Direct measurement of TKE by magnetic resonance flow imaging may, with further validation, be used to estimate irreversible pressure loss

  12. National-scale estimation of gross forest aboveground carbon loss: a case study of the Democratic Republic of the Congo

    NASA Astrophysics Data System (ADS)

    Tyukavina, A.; Stehman, S. V.; Potapov, P. V.; Turubanova, S. A.; Baccini, A.; Goetz, S. J.; Laporte, N. T.; Houghton, R. A.; Hansen, M. C.

    2013-12-01

    Recent advances in remote sensing enable the mapping and monitoring of carbon stocks without relying on extensive in situ measurements. The Democratic Republic of the Congo (DRC) is among the countries where national forest inventories (NFI) are either non-existent or out of date. Here we demonstrate a method for estimating national-scale gross forest aboveground carbon (AGC) loss and associated uncertainties using remotely sensed forest cover loss and biomass carbon density data. Lidar data were used as a surrogate for NFI plot measurements to estimate carbon stocks and AGC loss based on forest type and activity data derived using time-series multispectral imagery. Specifically, DRC forest type and loss from the FACET (Forêts d'Afrique Centrale Evaluées par Télédétection) product, created using Landsat data, were related to carbon data derived from the Geoscience Laser Altimeter System (GLAS). Validation data for FACET forest area loss were created at a 30-m spatial resolution and compared to the 60-m spatial resolution FACET map. We produced two gross AGC loss estimates for the DRC for the last decade (2000-2010): a map-scale estimate (53.3 ± 9.8 Tg C yr-1) accounting for whole-pixel classification errors in the 60-m resolution FACET forest cover change product, and a sub-grid estimate (72.1 ± 12.7 Tg C yr-1) that took into account 60-m cells that experienced partial forest loss. Our sub-grid forest cover and AGC loss estimates, which include smaller-scale forest disturbances, exceed published assessments. The results raise the issue of scale in forest cover change mapping and validation, and its subsequent impact on remotely sensed carbon stock change estimation, particularly for smallholder-dominated systems such as the DRC.

  13. An Estimation of the Climatic Effects of Stratospheric Ozone Losses during the 1980s. Appendix K

    NASA Technical Reports Server (NTRS)

    MacKay, Robert M.; Ko, Malcolm K. W.; Shia, Run-Lie; Yang, Yajaing; Zhou, Shuntai; Molnar, Gyula

    1997-01-01

    In order to study the potential climatic effects of the ozone hole more directly and to assess the validity of previous lower-resolution model results, the latest high-spatial-resolution version of the Atmospheric and Environmental Research, Inc., seasonal radiative dynamical climate model is used to simulate the climatic effects of ozone changes relative to the other greenhouse gases. The steady-state climatic effect of a sustained decrease in lower stratospheric ozone, similar in magnitude to the observed 1979-90 decrease, is estimated by comparing three steady-state climate simulations: I) 1979 greenhouse gas concentrations and 1979 ozone, II) 1990 greenhouse gas concentrations with 1979 ozone, and III) 1990 greenhouse gas concentrations with 1990 ozone. The simulated increase in surface air temperature resulting from non-ozone greenhouse gases is 0.272 K. When changes in lower stratospheric ozone are included, the greenhouse warming is 0.165 K, which is approximately 39% lower than when ozone is fixed at the 1979 concentrations. Ozone perturbations at high latitudes result in a cooling of the surface-troposphere system that is greater (by a factor of 2.8) than that estimated from the change in radiative forcing resulting from ozone depletion and the model's 2 × CO2 climate sensitivity. The results suggest that changes in meridional heat transport from low to high latitudes, combined with the decrease in the infrared opacity of the lower stratosphere, are very important in determining the steady-state response to high-latitude ozone losses. The 39% compensation in greenhouse warming resulting from lower stratospheric ozone losses is also larger than the 28% compensation simulated previously by the lower-resolution model. The higher-resolution model is able to resolve the high-latitude features of the assumed ozone perturbation, which are important in determining the overall climate sensitivity to these perturbations.

  14. Estimating Seismic Hazards from the Catalog of Taiwan Earthquakes from 1900 to 2014 in Terms of Maximum Magnitude

    NASA Astrophysics Data System (ADS)

    Chen, Kuei-Pao; Chang, Wen-Yen

    2017-02-01

    Maximum expected earthquake magnitude is an important parameter when designing mitigation measures for seismic hazards. This study calculated the maximum magnitude of potential earthquakes for each cell in a 0.1° × 0.1° grid of Taiwan. Two zones vulnerable to maximum magnitudes of Mw ≥ 6.0, which will cause extensive building damage, were identified: one extends from Hsinchu southward to Taichung, Nantou, Chiayi, and Tainan in western Taiwan; the other extends from Ilan southward to Hualian and Taitung in eastern Taiwan. These zones are also characterized by low b values, which are consistent with high peak ground shaking. We also employed an innovative method to calculate (at intervals of Mw 0.5) the bounds and median of recurrence time for earthquakes of magnitude Mw 6.0-8.0 in Taiwan.
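
    The abstract does not spell out its estimators; for the b values it mentions, a standard choice is Aki's (1965) maximum-likelihood formula, sketched below on a synthetic Gutenberg-Richter catalog (the dm/2 term is the usual half-bin correction for binned magnitudes).

```python
import numpy as np

def aki_b_value(mags, mc, dm=0.1):
    """Aki (1965) ML b-value for magnitudes complete above mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic catalog with b = 1: magnitudes above Mc are exponential with
# scale log10(e)/b.
rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(scale=np.log10(np.e), size=5000)
print(aki_b_value(mags, mc=4.0, dm=0.0))  # ~1.0 (continuous magnitudes)
```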

  15. Scaling relationship between corner frequencies and seismic moments of ultra micro earthquakes estimated with coda-wave spectral ratio - the Mponeng mine in South Africa

    NASA Astrophysics Data System (ADS)

    Wada, N.; Kawakata, H.; Murakami, O.; Doi, I.; Yoshimitsu, N.; Nakatani, M.; Yabe, Y.; Naoi, M. M.; Miyakawa, K.; Miyake, H.; Ide, S.; Igarashi, T.; Morema, G.; Pinder, E.; Ogasawara, H.

    2011-12-01

    The scaling relationship between corner frequencies, fc, and seismic moments, Mo, is an important clue for understanding seismic source characteristics. Aki (1967) showed that Mo is proportional to fc^-3 for large earthquakes (the cubic law). Iio (1986) claimed a breakdown of the cubic law between fc and Mo for smaller earthquakes (Mw < 2), and Gibowicz et al. (1991) also showed the breakdown for ultra micro and small earthquakes (Mw < -2). However, it has been reported that the cubic law holds even for micro earthquakes (-1 < Mw < 4) when using high-quality data observed in a deep borehole (Abercrombie, 1995; Ogasawara et al., 2001; Hiramatsu et al., 2002; Yamada et al., 2007). In order to clarify the scaling relationship for smaller earthquakes (Mw < -1), we analyzed ultra micro earthquakes using very high sampling rate records (48 kHz) from borehole seismometers installed within hard rock at the Mponeng mine in South Africa. We used four three-component accelerometers that have a flat response up to 25 kHz; they were installed 10 to 30 meters apart from each other at a depth of 3,300 meters. During the period from 2008/10/14 to 2008/10/30 (17 days), 8,927 events were recorded. We estimated fc and Mo for 60 events (-3 < Mw < -1) within 200 meters of the seismometers. Assuming Brune's source model, we estimated fc and Mo from spectral ratios. The common practice is to use direct waves from adjacent events; however, there were only 5 event pairs with an inter-event distance of less than 20 meters and an Mw difference over one. In addition, the observation array is very small (radius less than 30 m), which means that the effects of directivity and radiation pattern on direct waves are similar at all stations. Hence, we used spectral ratios of coda waves, since these effects are averaged and effectively reduced (Mayeda et al., 2007; Somei et al., 2010). Coda analysis was attempted only for the 20 relatively large events (called "coda events" hereafter) that have coda energy
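
    The spectral-ratio step can be sketched as fitting the ratio of two omega-square (Brune) spectra to the observed coda spectral ratio; the corner frequencies of both events and their moment ratio come out of the fit. The synthetic data and starting values below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_spectral_ratio(f, moment_ratio, fc_large, fc_small):
    """Ratio of two Brune spectra; coda ratios average out directivity,
    radiation pattern, path and site effects, as exploited in the study."""
    return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_large) ** 2)

f = np.logspace(0.0, 4.0, 200)  # 1 Hz to 10 kHz
observed = brune_spectral_ratio(f, 50.0, 300.0, 3000.0)
observed *= np.exp(0.05 * np.random.default_rng(0).standard_normal(f.size))

popt, _ = curve_fit(brune_spectral_ratio, f, observed, p0=[10.0, 100.0, 1000.0])
moment_ratio, fc_large, fc_small = popt  # corner frequencies in Hz
```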

  16. Exploring the uncertainty range of co-seismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2016-10-01

    A new finite fault inversion strategy is developed to explore the uncertainty range of the energy-based average co-seismic stress drop, Δτ_E, of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a Δτ_E matching a specified value. We run these inversions for a wide range of stress drops. The results produce a trade-off curve between the misfit to the observations and Δτ_E, which allows one to define the range of Δτ_E that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of Δτ_E (5-10 MPa) for this earthquake is successfully constrained. However, the same dataset exhibits no sensitivity to the upper bound of Δτ_E, because there is limited resolution of the fine-scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of Δτ_E cannot be constrained with them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of Δτ_E leads to the conclusions that 1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated, and 2) the upper bound of the average fracture energy EG cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of Δτ_E can be used as an energy-based smoothing scheme during finite fault inversions.

  17. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses. Their performance is compared using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when liquid loading was present was much higher than that of S0. A0 was also less sensitive to the presence of surface coatings than S0.

  18. Direct estimation of entropy loss due to reduced translational and rotational motions upon molecular binding.

    PubMed

    Lu, Benzhuo; Wong, Chung F

    2005-12-05

    The entropic cost due to the loss of translational and rotational (T-R) degrees of freedom upon binding has been well recognized for several decades. Tightly bound ligands have higher entropic costs than loosely bound ligands. Quantifying the ligand's residual T-R motions after binding, however, is not an easy task. We describe an approach that uses a reduced Hessian matrix to estimate the contributions of the translational and rotational degrees of freedom to the entropy change upon molecular binding. The calculations use a harmonic model for the bound state but include only the T-R degrees of freedom. This approximation significantly speeds up entropy calculations because only 6 × 6 matrices need to be treated, which makes the approach easier to use in computer-aided drug design for studying many ligands. The methodological connection with other methods is discussed as well. We tested this approximation by applying it to study the binding of ATP, a peptide inhibitor (PKI), and several bound water molecules to protein kinase A (PKA). These ligands span a wide range in size. The model gave reasonable estimates of the residual T-R entropy of bound ligands or water molecules. The residual T-R entropy spans a wide range of values, e.g., 4 to 16 cal/(K·mol) for the bound water molecules of PKA.
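
    A hedged sketch of the entropy evaluation: diagonalizing the (mass-weighted) 6 × 6 T-R Hessian yields six frequencies, and the harmonic-oscillator formula converts them to entropy. The frequencies below and the omission of the mass-weighting details are illustrative assumptions.

```python
import numpy as np

HBAR = 1.054571817e-34  # J*s
KB = 1.380649e-23       # J/K
RGAS = 8.314462618      # J/(mol*K)

def harmonic_entropy(omegas, temperature=300.0):
    """Vibrational entropy (J/(mol*K)) of harmonic modes with angular
    frequencies omegas (rad/s), e.g. the six residual T-R modes."""
    x = HBAR * np.asarray(omegas) / (KB * temperature)
    return RGAS * np.sum(x / np.expm1(x) - np.log1p(-np.exp(-x)))

# Six illustrative residual T-R modes, 10-100 cm^-1:
wavenumbers = np.array([10.0, 20.0, 30.0, 50.0, 80.0, 100.0])  # cm^-1
omegas = 2.0 * np.pi * 2.99792458e10 * wavenumbers             # rad/s
print(harmonic_entropy(omegas) / 4.184)                        # cal/(K*mol)
```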

  19. Estimation of endogenous phosphorus loss in growing and finishing pigs fed semi-purified diets.

    PubMed

    Pettey, L A; Cromwell, G L; Lindemann, M D

    2006-03-01

    Thirty-six barrows were used in a series of 3 P-balance experiments in which growing and finishing pigs were fed highly digestible, semi-purified diets at or below the dietary available P requirement to estimate the effect of BW on endogenous P loss. Experiments 1, 2, and 3 were conducted with pigs averaging 27, 59, and 98 kg of BW, respectively. In each experiment, pigs were placed in metabolism crates and allotted by weight and litter to 3 dietary treatments. The basal diet consisted of sucrose, dextrose, cornstarch, and casein fortified with minerals (except P) and vitamins. Diets 1, 2, and 3 in Exp. 1 were the basal diet with 0, 0.078, or 0.157% added P, respectively, from monosodium phosphate. In Exp. 2 and 3, diets 1, 2, and 3 were the basal diet with 0, 0.067, and 0.134% added P, respectively, from monosodium phosphate. Within replicate, pigs were fed equal amounts of feed twice daily. Pigs were adjusted to treatments for 7 d before a 6-d, marker-to-marker collection of feces and urine. Phosphorus intakes for pigs fed the 3 diets ranged from 1.73 to 3.91 g/d in Exp. 1, from 2.18 to 5.32 g/d in Exp. 2, and from 1.96 to 6.26 g/d in Exp. 3. Fecal P excretion and P absorption increased linearly (P < 0.05) with increasing P intake. In the 3 experiments, urinary P excretion (g/d) was low for pigs fed diet 1 (0.010, 0.011, 0.019) and diet 2 (0.013, 0.058, 0.084) and was low for pigs fed diet 3 in Exp. 1 (0.037); however, urinary P was greater in pigs fed diet 3 in Exp. 2 and 3 (0.550 and 0.486, respectively). When P absorption (Y, g/d) was regressed on P intake (X, g/d) in Exp. 1, 2, and 3, the relationships were linear (P < 0.01): Y = -0.110 + 0.971X (R2 = 0.999), Y = -0.156 + 0.939X (R2 = 0.998), and Y = -0.226 + 0.8919X (R2 = 0.982), respectively. Thus, our estimates of endogenous P loss at zero P intake were 110, 156, and 226 mg/d for 27-, 59-, and 98-kg pigs, respectively. When these Y-intercepts were regressed on BW, the relationship was Y = 63.06 + 1.632X (R
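
    The endogenous-loss logic is a straight-line fit: regress absorbed P on P intake and read the loss at zero intake off the intercept. The sketch below regenerates the Exp. 1 relationship from the abstract's fitted line (the middle intake level is an assumed value).

```python
import numpy as np

intake = np.array([1.73, 2.82, 3.91])      # g/d; middle value assumed
absorbed = -0.110 + 0.971 * intake         # Exp. 1 fitted line, noise-free

slope, intercept = np.polyfit(intake, absorbed, 1)
endogenous_loss = -intercept * 1000.0      # mg/d at zero P intake
print(endogenous_loss)                     # ~110 mg/d, as reported
```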

  20. Probabilistic Methodology for Estimation of Number and Economic Loss (Cost) of Future Landslides in the San Francisco Bay Region, California

    USGS Publications Warehouse

    Crovelli, Robert A.; Coe, Jeffrey A.

    2008-01-01

    The Probabilistic Landslide Assessment Cost Estimation System (PLACES) presented in this report estimates the number and economic loss (cost) of landslides during a specified future time in individual areas, and then calculates the sum of those estimates. The analytic probabilistic methodology is based upon conditional probability theory and the laws of expectation and variance. The probabilistic methodology is expressed in the form of a Microsoft Excel computer spreadsheet program. Using historical records, the PLACES spreadsheet is used to estimate the number of future damaging landslides and the total damage, as economic loss, from future landslides caused by rainstorms in 10 counties of the San Francisco Bay region in California. Estimates are made for any future 5-year period of time. The estimated total number of future damaging landslides for the entire 10-county region during any future 5-year period is about 330. Santa Cruz County has the highest estimated number of damaging landslides (about 90), whereas Napa, San Francisco, and Solano Counties have the lowest estimated numbers of damaging landslides (5-6 each). Estimated direct costs from future damaging landslides for the entire 10-county region for any future 5-year period are about US $76 million (year 2000 dollars). San Mateo County has the highest estimated costs ($16.62 million), and Solano County has the lowest estimated costs (about $0.90 million). Estimated direct costs are also subdivided into public and private costs.
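
    The "laws of expectation and variance" behind such a spreadsheet reduce, for a random number N of landslides with i.i.d. costs X, to the compound-sum formulas sketched below; the county-level inputs are illustrative, not values from the report.

```python
import numpy as np

mean_n, var_n = 90.0, 90.0            # damaging landslides per 5 years
mean_x, var_x = 5.0e4, (8.0e4) ** 2   # cost per landslide, year-2000 dollars

# Total cost C = X_1 + ... + X_N with N random:
mean_c = mean_n * mean_x
var_c = mean_n * var_x + var_n * mean_x ** 2
print(mean_c, np.sqrt(var_c))
```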

  1. Shear-tensile crack as a tool for reliable estimates of the non-double-couple mechanism: West Bohemia-Vogtland earthquake 1997 swarm

    NASA Astrophysics Data System (ADS)

    Šílený, Jan; Horálek, Josef

    2016-10-01

    The shear-tensile crack is a model for an earthquake mechanism that is more constrained than the moment tensor but can still describe a non-shear focus. As such, the shear-tensile crack model is more robust than the moment tensor and yields more reliable estimates of the earthquake mechanism. This robustness lends credibility to the non-double-couple components found for some events of the 1997 West Bohemia-Vogtland earthquake swarm: as expected, in several cases a significantly resolved non-double-couple component was obtained where the moment tensor approach failed. Additionally, for non-shear sources, the shear-tensile crack model offers optimization of the Poisson number within the focus concurrently with retrieval of the mechanism. However, results obtained for the joint inversion of the 1997 swarm indicate that this resolution is low. A series of synthetic experiments indicated that the limited observations available in 1997 were not the cause; rather, hypothetical experiments with both very good and extremely poor network configurations similarly yielded a low resolution for the Poisson number. Applying this optimization to data from recent swarms is irrelevant because the small non-double-couple components detected within the inversion are spurious and, thus, those events are pure double-couple phenomena.
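
    A sketch of the shear-tensile source parameterization: the slip vector deviates from the fault plane by a slope angle alpha, and the moment tensor follows from the usual dislocation formula. The Lame parameters and geometry below are illustrative; this is the generic Vavrycuk-style construction, not necessarily the authors' exact implementation.

```python
import numpy as np

def shear_tensile_moment_tensor(normal, slip_dir, alpha, lam=30e9, mu=30e9):
    """M = lam*(u.n)*I + mu*(u n^T + n u^T), with slip u rotated out of the
    fault plane by alpha (alpha = 0: pure shear; alpha = pi/2: pure opening)."""
    n = normal / np.linalg.norm(normal)
    s = slip_dir / np.linalg.norm(slip_dir)       # in-plane slip direction
    u = np.cos(alpha) * s + np.sin(alpha) * n     # slip with tensile part
    return lam * np.dot(u, n) * np.eye(3) + mu * (np.outer(u, n) + np.outer(n, u))

M = shear_tensile_moment_tensor(np.array([0.0, 0.0, 1.0]),
                                np.array([1.0, 0.0, 0.0]),
                                alpha=np.radians(20.0))
iso = np.trace(M) / 3.0   # isotropic part: zero for pure shear, grows with alpha
```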

  2. A multiple-approach radiometric age estimate for the Rotoiti and Earthquake Flat eruptions, New Zealand, with implications for the MIS 4/3 boundary

    USGS Publications Warehouse

    Wilson, C.J.N.; Rhoades, D.A.; Lanphere, M.A.; Calvert, A.T.; Houghton, B.F.; Weaver, S.D.; Cole, J.W.

    2007-01-01

    Pyroclastic fall deposits of the paired Rotoiti and Earthquake Flat eruptions from the Taupo Volcanic Zone (New Zealand) combine to form a widespread isochronous horizon over much of northern New Zealand and the southwest Pacific. This horizon is important for correlating climatic and environmental changes during the Last Glacial period, but has been the subject of numerous disparate age estimates between 35.1 ± 2.8 and 71 ± 6 ka (all errors are 1 s.d.), obtained by a variety of techniques. A potassium-argon (K-Ar) age of 64 ± 4 ka was previously determined on bracketing lavas at Mayor Island volcano, offshore from the Taupo Volcanic Zone. We present a new, more precise 40Ar/39Ar age determination on a lava flow on Mayor Island that shortly post-dates the Rotoiti/Earthquake Flat fall deposits, of 58.5 ± 1.1 ka. This value, coupled with existing ages from underlying lavas, yields a new estimate for the age of the combined eruptions of 61.0 ± 1.4 ka, which is consistent with U-Th disequilibrium model-age data for zircons from the Rotoiti deposits. Direct 40Ar/39Ar age determinations of plagioclase and biotite from the Rotoiti and Earthquake Flat eruption products yield variable values between 49.6 ± 2.8 and 125.3 ± 10.0 ka, with the scatter attributed to low radiogenic Ar yields, and/or alteration, and/or xenocrystic material with inherited Ar. Rotoiti/Earthquake Flat fall deposits occur in New Zealand in association with palynological indicators of mild climate, attributed to Marine Isotope Stage (MIS) 3 and thus used to suggest an age post-59 ka. The nature of the criteria used to define the MIS 4/3 boundary in the Northern and Southern Hemispheres, however, implies that the new 61 ka age for the Rotoiti/Earthquake Flat eruption deposits will instead provide a more accurate isochronous marker for correlating diverse changes across the MIS 4/3 boundary in the southwest Pacific.

  3. A Match-based approach to the estimation of polar stratospheric ozone loss using Aura Microwave Limb Sounder observations

    NASA Astrophysics Data System (ADS)

    Livesey, N. J.; Santee, M. L.; Manney, G. L.

    2015-04-01

    The well-established "Match" approach to quantifying chemical destruction of ozone in the polar lower stratosphere is applied to ozone observations from the Microwave Limb Sounder (MLS) on NASA's Aura spacecraft. Quantification of ozone loss requires distinguishing transport-induced from chemically induced changes in ozone abundance. This is accomplished in the Match approach by examining cases where trajectories indicate that the same airmass has been observed on multiple occasions. The method was pioneered using ozonesonde observations, for which hundreds of matched ozone observations per winter are typically available. The dense coverage of the MLS measurements, particularly at polar latitudes, allows matches to be made to thousands of observations each day. This study is enabled by recently developed MLS Lagrangian Trajectory Diagnostic (LTD) support products. Sensitivity studies indicate that the largest influences on the ozone loss estimates are the value of potential vorticity (PV) used to define the edge of the polar vortex (within which matched observations must lie) and the degree to which the PV of an airmass is allowed to vary between matched observations. Applying Match calculations to MLS observations of nitrous oxide, a long-lived tracer whose expected rate of change on these timescales is negligible, enables quantification of the impact of transport errors on the Match-based ozone loss estimates. Our loss estimates are generally in agreement with previous estimates for selected Arctic winters, though they indicate smaller losses than many other studies. Arctic ozone losses are greatest during the 2010/11 winter, as seen in prior studies, with 2.0 ppmv (parts per million by volume) of loss estimated at 450 K potential temperature. As expected, Antarctic winter ozone losses are consistently greater than those for the Arctic, with less interannual variability (e.g., ranging between 2.3 and 3.0 ppmv at 450 K). This study exemplifies the insights into atmospheric

  4. Soil loss estimation and prioritization of sub-watersheds of Kali River basin, Karnataka, India, using RUSLE and GIS.

    PubMed

    Markose, Vipin Joseph; Jayappa, K S

    2016-04-01

    Most of the mountainous regions in the tropical humid climatic zone experience severe soil loss due to natural factors. In the absence of measured data, modeling techniques play a crucial role in the quantitative estimation of soil loss in such regions. The objective of this research work is to estimate soil loss and prioritize the sub-watersheds of the Kali River basin using the Revised Universal Soil Loss Equation (RUSLE) model. Thematic layers of the RUSLE factors, rainfall erosivity (R), soil erodibility (K), topographic factor (LS), crop management factor (C), and support practice factor (P), were prepared from multiple spatial and non-spatial data sets, integrated in a geographic information system (GIS) environment, and used to estimate the soil loss, as sketched below. The results show that ∼42% of the study area falls under low erosion risk and only 6.97% of the area suffers from very high erosion risk. Based on the rate of soil loss, 165 sub-watersheds have been prioritized into four categories: very high, high, moderate, and low erosion risk. Anthropogenic activities such as deforestation, construction of dams, and rapid urbanization are the main reasons for the high rate of soil loss in the study area. The soil erosion rate and prioritization maps help in implementing a proper watershed management plan for the river basin.
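
    RUSLE itself is a per-cell product of the five factor rasters, A = R · K · LS · C · P; a minimal sketch with toy numpy rasters (the value ranges are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (3, 3)                        # toy raster grid
R = rng.uniform(500, 1200, shape)     # rainfall erosivity
K = rng.uniform(0.1, 0.4, shape)      # soil erodibility
LS = rng.uniform(0.5, 8.0, shape)     # topographic factor
C = rng.uniform(0.01, 0.5, shape)     # crop management factor
P = rng.uniform(0.5, 1.0, shape)      # support practice factor

A = R * K * LS * C * P                # annual soil loss per cell
```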

  5. Cascading uncertainties in flood inundation models to uncertain estimates of damage and loss

    NASA Astrophysics Data System (ADS)

    Fewtrell, Timothy; Michel, Gero; Ntelekos, Alexandros; Bates, Paul

    2010-05-01

    The complexity of flood processes, particularly in urban environments, and the difficulties of collecting data during flood events present significant and particular challenges to modellers, especially when considering large geographic areas. As a result, the modelling process incorporates a number of areas of uncertainty during model conceptualisation, construction and evaluation. There is a wealth of literature detailing the relative magnitudes of uncertainties in numerical flood input data (e.g. boundary conditions, model resolution and friction specification) for a wide variety of flood inundation scenarios (e.g. fluvial inundation and surface water flooding). Indeed, recent UK-funded projects (e.g. FREE) have explicitly examined the effect of cascading uncertainties in ensembles of GCM output through rainfall-runoff models to hydraulic flood inundation models. However, there has been little work examining the effect of cascading uncertainties in flood hazard ensembles to estimates of damage and loss, the quantity of interest when assessing flood risk. Furthermore, vulnerability is possibly the largest area of uncertainty for (re-)insurers, as in-depth and reliable knowledge of portfolios is difficult to obtain. Insurance industry CAT models attempt to represent a credible range of flood events over large geographic areas, and as such examining all sources of uncertainty is not computationally tractable. However, the insurance industry is also marked by an increasing need to understand the variability in flood loss estimates derived from these CAT models. In order to assess the relative importance of uncertainties in flood inundation models and depth/damage curves, hypothetical 1-in-100 and 1-in-200 year return period flood events are propagated through the Greenwich embayment in London, UK. Errors resulting from topographic smoothing, friction specification and inflow boundary conditions are cascaded to form an ensemble of flood levels and
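
    The cascade from hazard to loss can be sketched as pushing an ensemble of flood depths through a depth-damage curve and summing over the exposed portfolio; the curve, depths and exposure below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
depths = rng.normal(1.2, 0.4, size=(1000, 50)).clip(min=0.0)  # members x buildings

curve_depth = np.array([0.0, 0.5, 1.0, 2.0, 4.0])      # m
curve_fraction = np.array([0.0, 0.15, 0.35, 0.6, 0.9]) # damage fraction
value = 250_000.0                                      # exposure per building

damage = np.interp(depths, curve_depth, curve_fraction)
event_loss = (damage * value).sum(axis=1)              # one total per member
print(np.percentile(event_loss, [5, 50, 95]))          # cascaded spread
```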

  6. Estimating Fish Exploitation and Aquatic Habitat Loss across Diffuse Inland Recreational Fisheries

    PubMed Central

    de Kerckhove, Derrick Tupper; Minns, Charles Kenneth; Chu, Cindy

    2015-01-01

    The current state of many freshwater fish stocks worldwide is largely unknown but suspected to be vulnerable to exploitation from recreational fisheries and habitat degradation. Both these factors, combined with complex ecological dynamics and the diffuse nature of inland fisheries could lead to an invisible collapse: the drastic decline in fish stocks without great public or management awareness. In this study we provide a method to address the pervasive knowledge gaps in regional rates of exploitation and habitat degradation, and demonstrate its use in one of North America’s largest and most diffuse recreational freshwater fisheries (Ontario, Canada). We estimated that 1) fish stocks were highly exploited and in apparent danger of collapse in management zones close to large population centres, and 2) fish habitat was under a low but constant threat of degradation at rates comparable to deforestation in Ontario and throughout Canada. These findings confirm some commonly held, but difficult to quantify, beliefs in inland fisheries management but also provide some further insights including 1) large anthropogenic projects greater than one hectare could contribute much more to fish habitat loss on an area basis than the cumulative effect of smaller projects within one year, 2) hooking mortality from catch-and-release fisheries is likely a greater source of mortality than the harvest itself, and 3) in most northern management zones over 50% of the fisheries resources are not yet accessible to anglers. While this model primarily provides a framework to prioritize management decisions and further targeted stock assessments, we note that our regional estimates of fisheries productivity and exploitation were similar to broadscale monitoring efforts by the Province of Ontario. We discuss the policy implications from our results and extending the model to other jurisdictions and countries. PMID:25875790

  7. Estimating fish exploitation and aquatic habitat loss across diffuse inland recreational fisheries.

    PubMed

    de Kerckhove, Derrick Tupper; Minns, Charles Kenneth; Chu, Cindy

    2015-01-01

    The current state of many freshwater fish stocks worldwide is largely unknown but suspected to be vulnerable to exploitation from recreational fisheries and habitat degradation. Both these factors, combined with complex ecological dynamics and the diffuse nature of inland fisheries could lead to an invisible collapse: the drastic decline in fish stocks without great public or management awareness. In this study we provide a method to address the pervasive knowledge gaps in regional rates of exploitation and habitat degradation, and demonstrate its use in one of North America's largest and most diffuse recreational freshwater fisheries (Ontario, Canada). We estimated that (1) fish stocks were highly exploited and in apparent danger of collapse in management zones close to large population centres, and (2) fish habitat was under a low but constant threat of degradation at rates comparable to deforestation in Ontario and throughout Canada. These findings confirm some commonly held, but difficult to quantify, beliefs in inland fisheries management but also provide some further insights including (1) large anthropogenic projects greater than one hectare could contribute much more to fish habitat loss on an area basis than the cumulative effect of smaller projects within one year, (2) hooking mortality from catch-and-release fisheries is likely a greater source of mortality than the harvest itself, and (3) in most northern management zones over 50% of the fisheries resources are not yet accessible to anglers. While this model primarily provides a framework to prioritize management decisions and further targeted stock assessments, we note that our regional estimates of fisheries productivity and exploitation were similar to broadscale monitoring efforts by the Province of Ontario. We discuss the policy implications from our results and extending the model to other jurisdictions and countries.

  8. Variability of ozone loss during Arctic winter (1991 to 2000) estimated from UARS Microwave Limb Sounder measurement

    NASA Technical Reports Server (NTRS)

    Manney, G.; Froidevaux, F.; Santee, M. L.; Livesey, N. J.; Sabutis, J. L.; Waters, J. W.

    2002-01-01

    A comprehensive analysis of version 5 Upper Atmosphere Research Satellite (UARS) Microwave Limb Sounder (MLS) ozone data using a Lagrangian Transport (LT) model provides estimates of chemical ozone depletion for the 1991-1992 through 1997-1998 Arctic winters. These new estimates give a consistent, three-dimensional picture of ozone loss during seven Arctic winters; previous Arctic ozone loss estimates from MLS were based on various earlier data versions and were done only for late winter and only for a subset of the years observed by MLS. We find large interannual variability in the amount, timing, and patterns of ozone depletion and in the degree to which chemical loss is masked by dynamical processes.

  9. Earthquakes; March-April 1975

    USGS Publications Warehouse

    Person, W.J.

    1975-01-01

    There were no major earthquakes (magnitude 7.0-7.9) in March or April; however, there were earthquake fatalities in Chile, Iran, and Venezuela, and approximately 35 earthquake-related injuries were reported around the world. In the United States, a magnitude 6.0 earthquake struck the Idaho-Utah border region. Damage was estimated at about a million dollars. The shock was felt over a wide area and was the largest to hit the continental United States since the San Fernando earthquake of February 1971.

  10. Mass wasting triggered by the 5 March 1987 Ecuador earthquakes

    USGS Publications Warehouse

    Schuster, R.L.; Nieto, A.S.; O'Rourke, T. D.; Crespo, E.; Plaza-Nieto, G.

    1996-01-01

    On 5 March 1987, two earthquakes (Ms=6.1 and Ms=6.9) occurred about 25 km north of Reventador Volcano, along the eastern slopes of the Andes Mountains in northeastern Ecuador. Although the shaking damaged structures in towns and villages near the epicentral area, the economic and social losses directly due to earthquake shaking were small compared to the effects of catastrophic earthquake-triggered mass wasting and flooding. About 600 mm of rain fell in the region in the month preceding the earthquakes; thus, the surficial soils had high moisture contents. Slope failures commonly started as thin slides, which rapidly turned into fluid debris avalanches and debris flows. The surficial soils and the thick vegetation covering them flowed down the slopes into minor tributaries and then were carried into major rivers. Rock and earth slides, debris avalanches, debris and mud flows, and the resulting floods destroyed about 40 km of the Trans-Ecuadorian oil pipeline and the only highway from Quito to Ecuador's northeastern rain forests and oil fields. Estimates of the total volume of earthquake-induced mass wastage ranged from 75 to 110 million m3. Economic losses were about US$ 1 billion. Nearly all of the approximately 1,000 deaths from the earthquakes were a consequence of mass wasting and/or flooding.

  11. Source mechanism of May 24, 2013 Sea of Okhotsk deep earthquake (Mw8.3) estimated by broadband waveform modeling

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Nakamura, T.; Obayashi, M.; Tono, Y.

    2013-12-01

    The May 24, 2013 Sea of Okhotsk earthquake (Mw 8.3, depth 640 km, NEIC) is not only one of the largest events in this general region but also one of the largest deep earthquakes ever recorded. We apply the waveform inversion technique (Kikuchi & Kanamori, 1991) to obtain the slip distribution on the source fault of this earthquake in the same manner as in our previous work (Nakamura et al., 2010). We use 57 broadband seismograms from IRIS GSN seismic stations with epicentral distances between 30 and 90 degrees. The broadband original data are integrated into ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. Assuming a 1D velocity model and a fault size of 135 × 135 km (along strike and dip, respectively), we obtain source rupture models for both nodal planes, with a high dip angle (81 degrees) and a low dip angle (10 degrees). In order to determine which source rupture model better explains the observations, we calculate broadband synthetic seismograms for these source models in a realistic 3D Earth model using the spectral-element method (Komatitsch & Tromp, 2001). We performed the simulations on 24,576 processors in 3,072 nodes of the K computer at RIKEN, using a mesh with 200 million spectral elements, for a total of 13 billion global integration grid points. This translates into an approximate grid spacing of 2.0 km along the Earth's surface. On this number of nodes, a simulation of 50 minutes of wave propagation accurate at periods of 4.5 seconds and longer requires about 5 hours of CPU time. The comparison of the synthetic waveforms with the observations shows that the source rupture model with the low dip angle fault plane better explains the observations, especially at stations located south of the epicenter. Our results indicate that the source rupture of this deep earthquake occurred along the horizontal fault plane inside the subducting Pacific plate.

  12. Source Mechanism of May 30, 2015 Bonin Islands, Japan Deep Earthquake (Mw7.8) Estimated by Broadband Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Nakamura, T.; Miyoshi, T.

    2015-12-01

    The May 30, 2015 Bonin Islands, Japan earthquake (Mw 7.8, depth 679.9 km, GCMT) was one of the deepest earthquakes ever recorded. We apply the waveform inversion technique (Kikuchi & Kanamori, 1991) to obtain the slip distribution on the source fault of this earthquake in the same manner as in our previous work (Nakamura et al., 2010). We use 60 broadband seismograms from IRIS GSN seismic stations with epicentral distances between 30 and 90 degrees. The broadband original data are integrated into ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. We use the velocity structure model IASP91 to calculate the wavefield near the source and stations. We assume a square fault with a side length of 50 km. We obtain source rupture models for both nodal planes, with a high dip angle (74 degrees) and a low dip angle (26 degrees), and compare the synthetic seismograms with the observations to determine which source rupture model explains the observations better. We calculate broadband synthetic seismograms for these source models using the spectral-element method (Komatitsch & Tromp, 2001) on the new Earth Simulator system at JAMSTEC. The simulations are performed on 7,776 processors, which require 1,944 nodes of the Earth Simulator. On this number of nodes, a simulation of 50 minutes of wave propagation accurate at periods of 3.8 seconds and longer requires about 5 hours of CPU time. Comparisons of the synthetic waveforms with the observations at teleseismic stations show that the arrival time of the pP wave calculated for a depth of 679 km matches the observations well, which demonstrates that the earthquake really occurred below the 660-km discontinuity. In our present forward simulations, the source rupture model with the low dip angle is likely to better explain the observations.

  13. Testing the use of bulk organic δ13C, δ15N, and Corg:Ntot ratios to estimate subsidence during the 1964 great Alaska earthquake

    USGS Publications Warehouse

    Bender, Adrian M; Witter, Robert C.; Rogers, Matthew

    2015-01-01

    During the Mw 9.2 1964 great Alaska earthquake, Turnagain Arm near Girdwood, Alaska subsided 1.7 ± 0.1 m based on pre- and postearthquake leveling. The coseismic subsidence in 1964 caused equivalent sudden relative sea-level (RSL) rise that is stratigraphically preserved as mud-over-peat contacts where intertidal silt buried peaty marsh surfaces. Changes in intertidal microfossil assemblages across these contacts have been used to estimate subsidence in 1964 by applying quantitative microfossil transfer functions to reconstruct corresponding RSL rise. Here, we review the use of organic stable C and N isotope values and Corg:Ntot ratios as alternative proxies for reconstructing coseismic RSL changes, and report independent estimates of subsidence in 1964 by using δ13C values from intertidal sediment to assess RSL change caused by the earthquake. We observe that surface sediment δ13C values systematically decrease by ∼4‰ over the ∼2.5 m increase in elevation along three 60- to 100-m-long transects extending from intertidal mud flat to upland environments. We use a straightforward linear regression to quantify the relationship between modern sediment δ13C values and elevation (n = 84, R2 = 0.56). The linear regression provides a slope–intercept equation used to reconstruct the paleoelevation of the site before and after the earthquake based on δ13C values in sandy silt above and herbaceous peat below the 1964 contact. The regression standard error (average = ±0.59‰) reflects the modern isotopic variability at sites of similar surface elevation, and is equivalent to an uncertainty of ±0.4 m elevation with respect to Mean Higher High Water. To reduce potential errors in paleoelevation and subsidence estimates, we analyzed multiple sediment δ13C values in nine cores on a shore-perpendicular transect at Bird Point. Our method estimates 1.3 ± 0.4 m of coseismic RSL rise across the 1964 contact by taking the arithmetic mean of the
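
    The calibration-then-inversion logic is plain OLS: fit elevation against modern surface delta13C, then evaluate the line at the delta13C measured below and above the 1964 contact. The numbers below are chosen only to mimic the reported gradient (~4 per mil over ~2.5 m, n = 84); they are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
elev = rng.uniform(0.0, 2.5, 84)                        # m above MHHW (toy)
d13c = -22.0 - 1.6 * elev + rng.normal(0.0, 0.59, 84)   # per mil

slope, intercept = np.polyfit(d13c, elev, 1)            # elevation on delta13C

elev_pre = slope * (-25.5) + intercept    # peat just below the 1964 contact
elev_post = slope * (-23.0) + intercept   # silt just above the contact
subsidence = elev_pre - elev_post         # coseismic RSL rise estimate (m)
```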

  14. Applications of Multi-Cycle Earthquake Simulations to Earthquake Hazard

    NASA Astrophysics Data System (ADS)

    Gilchrist, Jacquelyn Joan

    This dissertation seeks to contribute to earthquake hazard analysis and forecasting by conducting a detailed study of the processes controlling the occurrence, and particularly the clustering, of large earthquakes, the probabilities of these large events, and the dynamics of their ruptures. We use the multi-cycle earthquake simulator RSQSim to investigate several fundamental aspects of earthquake occurrence in order to improve the understanding of earthquake hazard. RSQSim, a 3D boundary element code that incorporates rate- and state-dependent friction to simulate earthquakes in fully interacting, complex fault systems, has successfully modeled several aspects of fault slip and earthquake occurrence. Multi-event earthquake models with time-dependent nucleation based on rate- and state-dependent friction, such as RSQSim, provide a viable physics-based method for modeling earthquake processes, and they can improve the understanding of earthquake hazard by extending our knowledge of earthquake processes and probabilities. RSQSim is fast and efficient, and it is therefore able to simulate very long sequences of earthquakes (from hundreds of thousands to millions of events). This makes RSQSim an ideal instrument for filling in the current gaps in earthquake data, from short and incomplete earthquake catalogs to the unrealistic initial conditions used for dynamic rupture models. RSQSim catalogs include foreshocks, aftershocks, and occasional clusters of large earthquakes, the statistics of which are important for the estimation of earthquake probabilities. Additionally, RSQSim finds a near-optimal nucleation location that enables ruptures to propagate under minimal stress conditions, and it can thus provide suites of heterogeneous initial conditions for dynamic rupture models that produce reduced ground motions compared to models with homogeneous initial stresses and arbitrarily forced nucleation locations.
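
    For reference, the standard Dieterich-Ruina rate- and state-dependent friction law (here with the aging evolution law) on which simulators of this class are built; RSQSim's exact formulation may differ in detail:

```latex
\mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0 \theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V \theta}{D_c}
```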

  15. Estimating the magnitude of prediction uncertainties for field-scale P loss models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...

  16. A new tool for estimating phosphorus loss from cattle barnyards and outdoor lots

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorus (P) loss from agriculture can compromise quality of receiving water bodies. For cattle farms, P can be lost from cropland, pastures, and outdoor animal lots. We developed a new model that predicts annual runoff, total solids loss, and total and dissolved P loss from cattle lots. The model...

  17. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only, element-level system identification and input estimation technique for the simultaneous identification of modal parameters, the input excitation time history, and structural features at the element level from earthquake-induced structural response signals. The method, named the Full Dynamic Compound Inverse Method (FDCIM), relaxes the strong assumptions of earlier element-level techniques by working with a two-stage iterative algorithm. Jointly, a statistical average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence of the identified estimates. The proposed method works in a deterministic way and is completely developed in state-space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, including noise-corrupted cases. The achieved results demonstrate the effectiveness of the proposed identification method.

  18. Estimating the probability of occurrence of earthquakes (M>6) in the Western part of the Corinth rift using fault-based and classical seismotectonic approaches.

    NASA Astrophysics Data System (ADS)

    Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène

    2014-05-01

    The Corinth rift, Greece, is one of the regions with the highest strain rates in the Euro-Mediterranean area, and as such it has long been identified as a site of major importance for earthquake studies in Europe (20 years of research by the Corinth Rift Laboratory and 4 years of in-depth studies by the ANR-SISCOR project). This enhanced knowledge, acquired in particular in the western part of the Gulf of Corinth, an area about 50 by 40 km between the city of Patras to the west and the city of Aigion to the east, provides an excellent opportunity to compare the fault-based and classical seismotectonic approaches currently used in seismic hazard assessment studies. A homogeneous earthquake catalogue was first constructed for the Greek territory based on two existing earthquake catalogues available for Greece (National Observatory of Athens and Thessaloniki). In spite of numerous documented damaging earthquakes, only a limited number of macroseismic intensity data points are available in the existing databases for the damaging earthquakes affecting the west Corinth rift region. A re-interpretation of the macroseismic intensity field for numerous events was thus conducted, following an in-depth analysis of existing and newly found documentation (for details see Rovida et al. EGU2014-6346). In parallel, the construction of a comprehensive database of all relevant geological, geodetic and geophysical information (available in the literature and recently collected within the ANR-SISCOR project) made it possible to propose rupture geometries for the different fault systems identified in the study region. The combination of the new earthquake parameters and the newly defined fault geometries, together with the existing published paleoseismic data, supported a suite of rupture scenarios, including the activation of multiple fault segments. The methodology used to achieve this goal consisted of setting up a logic tree that reflected the opinion of all the members of the ANR

  19. Frictional Heat Generation and Slip Duration Estimated From Micro-fault in an Exhumed Accretionary Complex and Their Relations to the Scaling Law for Slow Earthquakes

    NASA Astrophysics Data System (ADS)

    Hashimoto, Y.; Morita, K.; Okubo, M.; Hamada, Y.; Lin, W.; Hirose, T.; Kitamura, M.

    2015-12-01

    Fault motion has been estimated from the diffusion pattern of frictional heating recorded in geology (e.g., Fulton et al., 2012). The same record of deeper subduction plate-interface slip can be observed in micro-faults within an exhumed accretionary complex. In this study, we focused on a micro-fault within the Cretaceous Shimanto Belt, SW Japan, to estimate fault motion from the frictional-heating diffusion pattern. A carbonaceous material concentrated layer (CMCL) approximately 2 m thick is observed in the study area. Some micro-faults cut the CMCL; the fault studied is about 3.7 mm thick. Injection veins and dilatant fractures were observed in thin sections, suggesting that high fluid pressure existed. Samples 10 cm long were collected to measure the distribution of vitrinite reflectance (Ro) as a function of distance from the center of the micro-fault. Ro of the host rock was ~1.0%. A diffusion pattern was detected as a decrease in Ro from ~1.2% to ~1.1%, with a characteristic diffusion distance of ~4-9 cm. We conducted a grid search to find the optimal frictional heat generation per unit area (Q, the product of friction coefficient, normal stress and slip velocity) and slip duration (t) that fit the diffusion pattern. Thermal diffusivity (0.98 × 10⁻⁸ m²/s) and thermal conductivity (2.0 W/mK) were measured. As a result, Q of 2000-2500 J/m² and t of 63,000-126,000 s were estimated. Moment magnitudes (M0) of slow earthquakes (slow EQs) follow a scaling law with slip duration whose dimension differs from that for normal earthquakes (normal EQs) (Ide et al., 2007). The slip duration estimated in this study (~10⁴-10⁵ s) is consistent with M0 of 4-5 and does not fit the scaling law for normal EQs. Heat generation inverted from M0 of 4-5 corresponds to ~10⁸-10¹¹ J, which is consistent with the rupture area of 10⁵-10⁸ m² in this study. The comparisons of heat generation and slip duration between geological measurements and geophysical remote observations give us the estimation of rupture area, M0, and
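
    The grid search itself is easy to sketch. The snippet below is a hypothetical reconstruction, not the authors' code: it fits the constant-flux plane-source conduction solution ΔT(x, t) = (2q/K)·√(κt)·ierfc(x/(2√(κt))), with flux q = Q/t, to a peak-temperature profile. The kinetic conversion of Ro values to peak temperatures is omitted, and the target profile is synthetic.

        import numpy as np
        from scipy.special import erfc

        KAPPA = 0.98e-8   # thermal diffusivity, m^2/s (measured value above)
        COND = 2.0        # thermal conductivity, W/(m K)

        def ierfc(z):
            """Integral of the complementary error function."""
            return np.exp(-z * z) / np.sqrt(np.pi) - z * erfc(z)

        def peak_dT(x, q, t):
            """Temperature rise at distance x from a plane source held at
            constant flux q (W/m^2) for duration t (Carslaw & Jaeger solution)."""
            s = np.sqrt(KAPPA * t)
            return (2.0 * q / COND) * s * ierfc(x / (2.0 * s))

        # Distances from the fault (m) and a synthetic "observed" profile; in
        # practice this would be the peak temperatures inferred from Ro.
        x_obs = np.array([0.01, 0.03, 0.05, 0.07, 0.09])
        dT_obs = peak_dT(x_obs, q=0.02, t=9.0e4)

        best = min(((np.sum((peak_dT(x_obs, q, t) - dT_obs) ** 2), q, t)
                    for q in np.linspace(0.005, 0.05, 46)
                    for t in np.linspace(3.0e4, 2.0e5, 35)),
                   key=lambda r: r[0])
        _, q_fit, t_fit = best
        print(f"q = {q_fit:.3f} W/m^2, t = {t_fit:.0f} s, Q = {q_fit * t_fit:.0f} J/m^2")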

  20. Aura's Microwave Limb Sounder Estimates of Ozone Loss, 2004/2005 Arctic Winter

    NASA Technical Reports Server (NTRS)

    2005-01-01

    These data maps from Aura's Microwave Limb Sounder depict levels of hydrogen chloride (top), chlorine monoxide (center), and ozone (bottom) at an altitude of approximately 19 kilometers (about 62,000 feet) on selected days during the 2004-05 Arctic winter. White contours demarcate the boundary of the winter polar vortex.

    The maps from December 23, 2004, illustrate vortex conditions shortly before significant chemical ozone destruction began. By January 23, 2005, chlorine is substantially converted from the 'safe' form of hydrogen chloride, which is depleted throughout the vortex, to the 'unsafe' form of chlorine monoxide, which is enhanced in the portions of the region that receive sunlight at that time of year. Ozone increased over the month as a result of dynamical effects, and chemical ozone destruction is just beginning at this time. A brief period of intense cold a few days later promotes further chlorine activation and consequent changes in hydrogen chloride and chlorine monoxide levels on January 27, 2005. Peak chlorine monoxide enhancement occurs in early February.

    By February 24, 2005, chlorine deactivation is well underway, with chlorine monoxide abundances dropping and hydrogen chloride abundances rising. Almost all chlorine monoxide has been quenched by March 10, 2005. The fact that hydrogen chloride has not fully rebounded to December abundances suggests that some of the chlorine was converted into another reservoir species.

    Ozone maps for January 27, 2005, through March 10, 2005, show indications of mixing of air from outside the polar vortex into it. Such occurrences throughout this winter, especially in late February and early March, complicate analyses, and detailed calculations are required to rigorously disentangle chemical and dynamical effects and accurately diagnose chemical ozone destruction.

    Based on various analyses of Microwave Limb Sounder data, we estimate a maximum local ozone loss of approximately 2 parts

  1. Influence of Agropastoral System Components on Mountain Grassland Vulnerability Estimated by Connectivity Loss.

    PubMed

    Gartzia, Maite; Fillat, Federico; Pérez-Cabello, Fernando; Alados, Concepción L

    2016-01-01

    Over recent decades, global changes have altered the structure and properties of natural and semi-natural mountain grasslands. Those changes have contributed to grassland loss, mainly through colonization by woody species at low elevations and increases in biomass and greenness at high elevations. Nevertheless, the interactions among the components of the agropastoral system, i.e., the ecological (grassland, environmental, and geolocation properties), social, and economic components, and their effects on the grasslands are still poorly understood. We estimated the vulnerability of dense grasslands in the Central Pyrenees, Spain, based on the connectivity loss (CL) among grassland patches between the 1980s and the 2000s resulting from i) an increase in biomass and greenness (CL-IBG), ii) woody encroachment (CL-WE), or iii) a decrease in biomass and greenness (CL-DBG). The environmental and grassland components of the agropastoral system were associated with all three processes, especially CL-IBG and CL-WE, in relation to the succession of vegetation toward climax communities, fostered by land abandonment and exacerbated by climate warming. CL-IBG occurred in pasture units that had a high proportion of dense grasslands and low current livestock pressure. CL-WE was most strongly associated with pasture units that had a high proportion of woody habitat and a large reduction in sheep and goat pressure between the 1930s and the 2000s. The economic component was correlated with CL-WE and CL-DBG; specifically, expensive pastures were the most productive and could maintain the highest rates of livestock grazing, which slowed woody encroachment but caused grassland degradation and DBG. In addition, CL-DBG was associated with the geolocation of grasslands, mainly because livestock tend to graze closer to passable roads and buildings, where they cause grassland degradation. To properly manage the grasslands, an integrated management plan must be developed that

  3. The integration of stress, strain, and seismogenic fault data: towards more robust estimates of the earthquake potential in Italy and its surroundings

    NASA Astrophysics Data System (ADS)

    Caporali, Alessandro; Braitenberg, Carla; Burrato, Pierfrancesco; Carafa, Michele; Di Giovambattista, Rita; Gentili, Stefania; Mariucci, Maria Teresa; Montone, Paola; Morsut, Federico; Nicolini, Luca; Pivetta, Tommaso; Roselli, Pamela; Rossi, Giuliana; Valensise, Gian Luca; Vigano, Alfio

    2016-04-01

    Italy is an earthquake-prone country with a long tradition in observational seismology. For many years, the country's unique historical earthquake record has revealed fundamental properties of Italian seismicity and has been used to determine earthquake rates. Paleoseismological studies conducted over the past 20 years have shown that the length of this record - 5 to 8 centuries, depending on areas - is just a fraction of the typical recurrence interval of Italian faults - consistently larger than a millennium. Hence, so far the earthquake potential may have been significantly over- or under-estimated. Based on a clear perception of these circumstances, over the past two decades large networks and datasets describing independent aspects of the seismic cycle have been developed. INGV, OGS, some universities and local administrations have built networks that together include nearly 500 permanent GPS/GNSS sites, routinely used to compute accurate horizontal velocity gradients reflecting the accumulation of tectonic strain. INGV developed the Italian present-day stress map, which includes over 700 data points based on geophysical in-situ measurements and fault plane solutions, and the Database of Individual Seismogenic Sources (DISS), a unique compilation featuring nearly 300 three-dimensional seismogenic faults over the entire nation. INGV also updates and maintains the Catalogo Parametrico dei Terremoti Italiani (CPTI) and the instrumental earthquake database ISIDe, whereas OGS operates its own seismic catalogue for northeastern Italy. We present preliminary results on the use of this wealth of homogeneously collected and updated observations of stress and strain as a source of loading/unloading of the faults listed in the DISS database. We use the geodetic strain rate - after converting it to stress rate in conjunction with the geophysical stress data of the Stress Map - to compute the Coulomb Failure Function on all fault planes described by the DISS database. This
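
    As a concrete illustration of the last step, the sketch below resolves a stress-rate tensor onto a receiver fault to obtain a Coulomb Failure Function rate. It is a generic implementation with illustrative numbers, not the project's code; the geometry follows the usual north-east-down convention with strike, dip, and rake in degrees.

        import numpy as np

        def fault_vectors(strike, dip, rake):
            """Unit fault-normal and slip vectors (Aki & Richards, NED axes)."""
            phi, delta, lam = np.radians([strike, dip, rake])
            n = np.array([-np.sin(delta) * np.sin(phi),
                           np.sin(delta) * np.cos(phi),
                          -np.cos(delta)])
            u = np.array([np.cos(lam) * np.cos(phi)
                          + np.sin(lam) * np.cos(delta) * np.sin(phi),
                          np.cos(lam) * np.sin(phi)
                          - np.sin(lam) * np.cos(delta) * np.cos(phi),
                          -np.sin(lam) * np.sin(delta)])
            return n, u

        def coulomb_rate(sigma_dot, strike, dip, rake, mu_eff=0.4):
            """dCFF/dt = shear-stress rate along rake + mu' * normal-stress rate
            (tension positive)."""
            n, u = fault_vectors(strike, dip, rake)
            traction = sigma_dot @ n
            return traction @ u + mu_eff * (traction @ n)

        # Illustrative stress-rate tensor in Pa/yr (NED components), e.g. derived
        # from geodetic strain rate via an assumed elastic rheology.
        sigma_dot = np.array([[ 800.0,  100.0,    0.0],
                              [ 100.0, -500.0,    0.0],
                              [   0.0,    0.0, -300.0]])
        print(coulomb_rate(sigma_dot, strike=120.0, dip=45.0, rake=-90.0), "Pa/yr")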

  4. Recent wetland land loss due to hurricanes: improved estimates based upon multiple source images

    USGS Publications Warehouse

    Kranenburg, Christine J.; Palaseanu-Lovejoy, Monica; Barras, John A.; Brock, John C.; Wang, Ping; Rosati, Julie D.; Roberts, Tiffany M.

    2011-01-01

    The objective of this study was to provide a moderate-resolution 30-m fractional water map of the Chenier Plain for 2003, 2006 and 2009 by using information contained in high-resolution satellite imagery of a subset of the study area. Indices and transforms pertaining to vegetation and water were created using the high-resolution imagery, and a threshold was applied to obtain a categorical land/water map. The high-resolution data were then used to train a decision-tree classifier to estimate percent water in a lower-resolution (Landsat) image. Two new water indices based on the tasseled cap transformation were proposed for IKONOS imagery in wetland environments, and more than 700 input parameter combinations were considered for each Landsat image classified. Final selection and thresholding of the resulting percent-water maps involved over 5,000 unambiguously classified random points from corresponding 1-m resolution aerial photographs, and a statistical optimization procedure to determine the threshold at which the maximum Kappa coefficient occurs. Each selected dataset has a Kappa coefficient and percent correctly classified (PCC) for water, land and total greater than 90%. An accuracy assessment using 1,000 independent random points was performed; using the validation points, the PCC values decreased to around 90%. The time-series change analysis indicated that the study area lost 6.5% of its marsh area due to Hurricane Rita, with transient changes of less than 3% for either land or water. Hurricane Ike resulted in an additional 8% land loss, although not enough time has passed to discriminate between persistent and transient changes.
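
    The threshold-selection step is easy to illustrate. The sketch below sweeps a cutoff over a predicted percent-water field and keeps the value maximizing Cohen's Kappa against binary reference points; the arrays are random stand-ins for the study's classified points and photo-interpreted truth.

        import numpy as np

        def cohens_kappa(truth, pred):
            po = np.mean(truth == pred)                    # observed agreement
            p1, q1 = np.mean(truth), np.mean(pred)
            pe = p1 * q1 + (1 - p1) * (1 - q1)             # chance agreement
            return (po - pe) / (1 - pe)

        def best_threshold(percent_water, truth,
                           candidates=np.linspace(0.05, 0.95, 91)):
            scores = [(cohens_kappa(truth, percent_water >= t), t)
                      for t in candidates]
            return max(scores)                             # (kappa, threshold)

        rng = np.random.default_rng(0)
        truth = rng.integers(0, 2, 5000).astype(bool)      # water = True
        noisy = np.clip(truth + rng.normal(0.0, 0.3, truth.size), 0, 1)
        kappa, thr = best_threshold(noisy, truth)
        print(f"max kappa {kappa:.2f} at threshold {thr:.2f}")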

  5. Estimating peak dynamic strains at the ground surface and at depth during earthquake shaking: application to the safety study of a geological storage of CO2

    NASA Astrophysics Data System (ADS)

    Sy, S.; Douglas, J.; Seyedi, D.

    2009-04-01

    Within the framework of a scenario-based methodology used to evaluate the risks related to the geological storage of CO2, the risk posed by earthquakes to storage safety must be evaluated. The main aim of this article is to predict, by a simple empirical method verified by numerical simulations, the peak dynamic strains in the reservoir during an earthquake. This allows, following an investigation of the risk of rock rupture or damage in the reservoir and sealing units (i.e. caprock and wells), an evaluation of the seismic risk. These subsequent calculations are not carried out in this article, which is limited to a determination of the dynamic stresses during an earthquake. A simplified procedure for the prediction of maximum soil strains was proposed by Newmark [1] and later used by several authors. In this approach, the peak strain is equal to the horizontal peak ground velocity (PGV) divided by the apparent 'propagation speed' of strong-motion waves, C. Using that approximate equation for strain is difficult because C is not known a priori and depends on both site and wave characteristics. A more recent approach simplifies it by replacing C with β1, the shear-wave velocity in the uppermost layer of the soil structure, and by including a site-specific corrective factor A [2]. However, all these studies were limited to determining the peak dynamic strains at the ground surface and not at depth. It is therefore necessary to evaluate the site-specific corrective factor A for the geological context of the studied area for the peak dynamic strain at the surface, and also to estimate a similar correlation between strains and PGV/β at depth, where β is the shear-wave velocity in the considered layer. A sophisticated one-dimensional site-response computer program is used to create a dataset for the analysis of peak strains. It calculates, for a given geological model consisting of parallel layers and a given input accelerogram, the ground
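
    The core empirical relation is one line; the snippet below applies it with placeholder numbers, not values from the study.

        # Simplified strain estimate described above: peak shear strain is
        # approximately A * PGV / beta, with beta the shear-wave velocity of
        # the layer of interest and A the site-specific corrective factor.
        def peak_strain(pgv_m_s, beta_m_s, corrective_a=1.0):
            return corrective_a * pgv_m_s / beta_m_s

        # Example: PGV of 0.3 m/s in a layer with beta = 400 m/s and A = 1.2
        print(f"peak dynamic strain ~ {peak_strain(0.3, 400.0, 1.2):.2e}")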

  6. Results of rainfall simulation to estimate sediment-bound carbon and nitrogen loss from an Atlantic Coastal Plain (USDA) ultisol

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The impact of erosion on soil and carbon loss and redistribution within landscapes is an important component for developing estimates of carbon sequestration potential, management plans to maintain soil quality, and transport of sediment bound agrochemicals. Soils of the Southeastern U.S. Coastal Pl...

  7. Gross margin losses due to Salmonella Dublin infection in Danish dairy cattle herds estimated by simulation modelling.

    PubMed

    Nielsen, T D; Kudahl, A B; Østergaard, S; Nielsen, L R

    2013-08-01

    Salmonella Dublin affects production and animal health in cattle herds. The objective of this study was to quantify the gross margin (GM) losses following introduction and spread of S. Dublin within dairy herds. The GM losses were estimated using an age-structured, stochastic, mechanistic and dynamic simulation model. The model incorporated six age groups (neonatal, pre-weaned calves, weaned calves, growing heifers, breeding heifers and cows) and five infection stages (susceptible, acutely infected, carrier, super shedder and resistant). The effects of introducing one infectious S. Dublin heifer were estimated through 1000 simulation iterations for each of 12 scenarios: combinations of three herd sizes (85, 200 and 400 cows) and four management levels (very good, good, poor and very poor). Input parameters for the effects of S. Dublin on production and animal health were based on the literature and on calibrations to mimic real-life observations. Mean annual GMs per cow stall were compared between herds experiencing within-herd spread of S. Dublin and non-infected reference herds over a 10-year period. The estimated GM losses were largest in the first year after infection and increased with poorer management and larger herd size: average annual GM losses were estimated at 49 euros per stall for the first year after infection, and at 8 euros per stall annually averaged over the 10 years after herd infection, for a 200-stall herd with very good management. In contrast, a 200-stall herd with very poor management lost on average 326 euros per stall during the first year, and 188 euros per stall annually averaged over the 10-year period following introduction of the infection. The GM losses arose from direct losses, such as reduced milk yield, dead animals, treatment costs and abortions, as well as indirect losses, such as reduced income from sold heifers and calves and lower milk yield of replacement animals. Through sensitivity analyses it was found that the

  8. Earthquake Facts

    MedlinePlus

    ... May 22, 1960. The earliest reported earthquake in California was felt in 1769 by the exploring expedition ... by wind or tides. Each year the southern California area has about 10,000 earthquakes . Most of ...

  9. Spatial and temporal estimation of soil loss for the sustainable management of a wet semi-arid watershed cluster.

    PubMed

    Rejani, R; Rao, K V; Osman, M; Srinivasa Rao, Ch; Reddy, K Sammi; Chary, G R; Pushpanjali; Samuel, Josily

    2016-03-01

    The ungauged wet semi-arid watershed cluster of Seethagondi lies in the Adilabad district of Telangana in India and is prone to severe erosion and water scarcity. Runoff and soil loss data at the watershed, catchment, and field level are necessary for planning soil and water conservation interventions. In this study, an attempt was made to develop a spatial soil loss estimation model for the Seethagondi cluster using RUSLE coupled with ArcGIS, and the model was used to estimate the soil loss spatially and temporally. Daily APHRODITE rainfall data for the period 1951-2007 were used; the annual rainfall varied from 508 to 1351 mm, with a mean annual rainfall of 950 mm and a mean erosivity of 6789 MJ mm ha⁻¹ h⁻¹ year⁻¹. Considerable variation in land use and land cover, especially in crop land and fallow land, was observed between normal and drought years, with corresponding variation in the erosivity, C factor, and soil loss. The mean C factor derived from NDVI for crop land was 0.42 in normal years and 0.22 in drought years. The topography is undulating, the major portion of the cluster has slopes of less than 10°, and 85.3% of the cluster has soil loss below 20 t ha⁻¹ year⁻¹. The soil loss from crop land varied from 2.9-3.6 t ha⁻¹ year⁻¹ in low-rainfall years to 31.8-34.7 t ha⁻¹ year⁻¹ in high-rainfall years, with a mean annual soil loss of 12.2 t ha⁻¹ year⁻¹. Soil loss from crop land was highest in August, with an annual soil loss of 13.1 and 2.9 t ha⁻¹ year⁻¹ in normal and drought years, respectively. Based on the soil loss in a normal year, the interventions recommended for 85.3% of the watershed area include agronomic measures such as contour cultivation, graded bunds, strip cropping, mixed cropping, crop rotations, mulching, summer plowing, vegetative bunds, and agri-horticultural systems, and management practices such as broad bed furrows, raised sunken beds, and harvesting available water
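
    The cell-wise RUSLE computation underlying such a model is a simple product of factor grids. The sketch below uses the mean erosivity and crop-land C factors quoted above; the remaining grids are illustrative assumptions.

        import numpy as np

        R = 6789.0                        # rainfall erosivity, MJ mm/(ha h yr)
        K = np.full((3, 3), 0.032)        # soil erodibility, t ha h/(ha MJ mm)
        LS = np.array([[0.4, 0.8, 1.2],
                       [0.6, 1.0, 1.5],
                       [0.3, 0.5, 0.9]])  # slope length-steepness factor
        P = 1.0                           # support-practice factor (none)
        C_NORMAL, C_DROUGHT = 0.42, 0.22  # crop-land C factors from the abstract

        # RUSLE: A = R * K * LS * C * P, in t/ha/yr, evaluated per grid cell
        A_normal = R * K * LS * C_NORMAL * P
        A_drought = R * K * LS * C_DROUGHT * P
        print(A_normal.round(1))
        print(A_drought.round(1))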

  10. Hurricane Loss Estimation Models: Opportunities for Improving the State of the Art.

    NASA Astrophysics Data System (ADS)

    Watson, Charles C., Jr.; Johnson, Mark E.

    2004-11-01

    The results of hurricane loss models are used regularly for multibillion-dollar decisions in the insurance and financial services industries. These models are proprietary, and this “black box” nature hinders analysis. The proprietary models produce a wide range of results, often producing loss costs that differ by a ratio of three to one or more. In a study for the state of North Carolina, 324 combinations of loss models were analyzed, based on combinations of nine wind models, four surface friction models, and nine damage models drawn from the published literature in insurance, engineering, and meteorology. These combinations were tested against losses from Hurricanes Hugo and Andrew as reported by a major insurance company, as well as storm-total losses for additional storms. Annual loss costs were then computed using these 324 combinations of models for both North Carolina and Florida, and compared with publicly available proprietary model results in Florida. The wide range of resulting loss costs for open, scientifically defensible models that perform well against observed losses mirrors the wide range of loss costs computed by the proprietary models currently in use. This outcome may be discouraging for governmental and corporate decision-makers relying on these data for policy and investment guidance (due to the high variability across model results), but it also provides guidance for future efforts to improve loss models. Although hurricane loss models are true multidisciplinary efforts, involving meteorology, engineering, statistics, and actuarial sciences, the field of meteorology offers the most promising opportunities for improvement of the state of the art.

  11. Applying the Land Use Portfolio Model to Estimate Natural-Hazard Loss and Risk - A Hypothetical Demonstration for Ventura County, California

    USGS Publications Warehouse

    Dinitz, Laura B.

    2008-01-01

    HAZUS-MH currently performs analyses for earthquakes, floods, and hurricane wind. HAZUS-MH loss estimates, however, do not account for some uncertainties associated with the specific natural-hazard scenarios, such as the likelihood of occurrence within a particular time horizon or the effectiveness of alternative risk-reduction options. Because of the uncertainties involved, it is challenging to make informed decisions about how to cost-effectively reduce risk from natural-hazard events. Risk analysis is one approach that decision-makers can use to evaluate alternative risk-reduction choices when outcomes are unknown. The Land Use Portfolio Model (LUPM), developed by the U.S. Geological Survey (USGS), is a geospatial scenario-based tool that incorporates hazard-event uncertainties to support risk analysis. The LUPM offers an approach to estimate and compare risks and returns from investments in risk-reduction measures. This paper describes and demonstrates a hypothetical application of the LUPM for Ventura County, California, and examines the challenges involved in developing decision tools that provide quantitative methods to estimate losses and analyze risk from natural hazards.

  12. Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...

  13. Phosphorus loss and its estimation in a small watershed of the Yimeng mountainous area, China

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-point source pollution is severe in the Yimeng Mountainous area of China. Few studies have been conducted to identify and predict phosphorus loss at a watershed scale in this region. The objectives of this study were to identify the characteristics of phosphorus loss and further to develop regre...

  14. A Match-based approach to the estimation of polar stratospheric ozone loss using Aura Microwave Limb Sounder observations

    NASA Astrophysics Data System (ADS)

    Livesey, N. J.; Santee, M. L.; Manney, G. L.

    2015-09-01

    The well-established "Match" approach to quantifying chemical destruction of ozone in the polar lower stratosphere is applied to ozone observations from the Microwave Limb Sounder (MLS) on NASA's Aura spacecraft. Quantification of ozone loss requires distinguishing transport-induced from chemically induced changes in ozone abundance. This is accomplished in the Match approach by examining cases where trajectories indicate that the same air mass has been observed on multiple occasions. The method was pioneered using ozonesonde observations, for which hundreds of matched ozone observations per winter are typically available. The dense coverage of the MLS measurements, particularly at polar latitudes, allows matches to be made to thousands of observations each day. This study is enabled by recently developed MLS Lagrangian trajectory diagnostic (LTD) support products. Sensitivity studies indicate that the largest influences on the ozone loss estimates are the value of potential vorticity (PV) used to define the edge of the polar vortex (within which matched observations must lie) and the degree to which the PV of an air mass is allowed to vary between matched observations. Applying Match calculations to MLS observations of nitrous oxide, a long-lived tracer whose expected rate of change is negligible on the weekly to monthly timescales considered here, enables quantification of the impact of transport errors on the Match-based ozone loss estimates. Our loss estimates are generally in agreement with previous estimates for selected Arctic winters, though they indicate smaller losses than many other studies. Arctic ozone losses were greatest during the 2010/11 winter, as seen in prior studies, with a loss of 2.0 ppmv (parts per million by volume) estimated at 450 K potential temperature (~18 km altitude). As expected, Antarctic winter ozone losses are consistently greater than those for the Arctic, with less interannual variability (e.g., ranging between 2.3 and 3.0 ppmv at 450 K). This

  15. Quantitative estimation of farmland soil loss by wind-erosion using improved particle-size distribution comparison method (IPSDC)

    NASA Astrophysics Data System (ADS)

    Rende, Wang; Zhongling, Guo; Chunping, Chang; Dengpan, Xiao; Hongjun, Jiang

    2015-12-01

    The rapid and accurate estimation of soil loss by wind erosion remains a challenge. This study presents an improved scheme for estimating the soil loss by wind erosion of farmland. The method estimates the soil loss based on a comparison of the relative contents of erodible and non-erodible particles between the surface and sub-surface layers of the farmland plough layer after wind erosion. It exploits the fact that the particle-size distribution within the sampled soil layer (approximately 2 cm) is relatively uniform, and that wind erosion decreases the relative content of erodible particles in the surface layer while increasing that of non-erodible particles. Estimates were made with this method for the wind erosion periods (WEP) from October 2012 to May 2013 and from October 2013 to April 2014, and for a large wind-erosion event (WEE) on May 3, 2014, in the Bashang area of Hebei Province. The results showed that the average soil loss of farmland by wind erosion from October 2012 to May 2013 was 2852.14 g/m² with an average depth of 0.21 cm, while the soil loss from October 2013 to April 2014 was 1199.17 g/m² with a mean depth of 0.08 cm. During the severe WEE on May 3, 2014, the average soil loss of farmland by wind erosion was 1299.19 g/m² with an average depth of 0.10 cm. The soil loss by wind erosion of ploughed and raked fields (PRF) was approximately twice that of oat-stubble fields (OSF). The improved particle-size distribution comparison method (IPSDC) has several advantages. It can estimate not only the amount of wind erosion but also the amount of deposition; slight changes in the sampling thickness and in the particle-diameter range of the non-erodible particles do not appreciably influence the results; and the method is convenient, rapid, and simple to implement. It is suitable for estimating the soil loss or deposition by wind erosion of farmland with flat surfaces and high
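
    One plausible mass-balance reading of this comparison (a hedged reconstruction, not necessarily the exact IPSDC formulae) treats the non-erodible particles in the sampled layer as conserved, so that their enrichment at the surface relative to the sub-surface yields the eroded mass:

        # f_sub/f_surf: non-erodible mass fractions below/at the surface after
        # erosion; bulk density and layer thickness are illustrative values.
        def wind_erosion_loss(f_sub, f_surf, bulk_density=1350.0, layer_m=0.02):
            """Returns (eroded mass in g/m^2, eroded depth in cm);
            a negative result would indicate deposition."""
            m0 = bulk_density * layer_m * 1000.0     # initial layer mass, g/m^2
            eroded = m0 * (1.0 - f_sub / f_surf)     # conservation of non-erodibles
            depth_cm = eroded / (bulk_density * 1000.0) * 100.0
            return eroded, depth_cm

        mass, depth = wind_erosion_loss(f_sub=0.62, f_surf=0.68)
        print(f"loss ~ {mass:.0f} g/m^2, depth ~ {depth:.2f} cm")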

  16. Forecasting Earthquakes

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This video shows scenes of damage from the Northridge earthquake and interviews with Dr. Andrea Donnelan, a geophysicist at JPL, and Dr. Jim Dolan, an earthquake geologist at Caltech. The interviews discuss earthquake forecasting by tracking changes in the Earth's crust using antennas that receive signals from the series of satellites known as the Global Positioning System (GPS).

  17. Hidden Earthquakes.

    ERIC Educational Resources Information Center

    Stein, Ross S.; Yeats, Robert S.

    1989-01-01

    Points out that large earthquakes can take place not only on faults that cut the earth's surface but also on blind faults under folded terrain. Describes four examples of fold earthquakes. Discusses the fold earthquakes using several diagrams and pictures. (YP)

  18. Data completeness of the Kumamoto earthquake sequence in the JMA catalog and its influence on the estimation of the ETAS parameters

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Ogata, Yosihiko; Wang, Ting

    2017-02-01

    This study investigates the missing-data problem in the Japan Meteorological Agency catalog of the Kumamoto aftershock sequence, which began on April 15, 2016, in Japan. Based on the assumption that earthquake magnitudes are independent of their occurrence times, we replenish the short-term missing data of small earthquakes by using a bi-scale transformation, and we study their influence on the maximum likelihood estimate (MLE) of the epidemic-type aftershock sequence (ETAS) parameters by comparing the analysis results from the original and the replenished datasets. The results show that the MLEs of the ETAS parameters vary when the model is fitted to the recorded catalog with different cutoff magnitudes, whereas those MLEs remain stable for the replenished dataset. Further analysis shows that the seismicity became quiescent after the occurrence of the second major shock, which can be regarded as a precursory phenomenon of the subsequent M_J7.3 mainshock. This relative quiescence is demonstrated more clearly by the analysis of the replenished dataset.
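
    For readers unfamiliar with ETAS, the model's conditional intensity, whose maximization gives the MLE discussed above, is easy to state and evaluate. The parameter values in the sketch below are illustrative, not the paper's estimates.

        import numpy as np

        # ETAS conditional intensity:
        # lambda(t) = mu + sum over past events i of
        #             K * exp(alpha * (M_i - Mc)) / (t - t_i + c)^p
        def etas_intensity(t, times, mags,
                           mu=0.2, K=0.05, alpha=1.5, c=0.01, p=1.1, Mc=3.0):
            past = times < t
            trig = (K * np.exp(alpha * (mags[past] - Mc))
                    / (t - times[past] + c) ** p)
            return mu + trig.sum()

        times = np.array([0.0, 0.5, 1.4])   # days since catalog start
        mags = np.array([6.5, 5.8, 7.3])    # e.g. foreshock, aftershock, mainshock
        print(etas_intensity(2.0, times, mags))  # events/day at t = 2.0 days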

  19. Earthquake early warning performance tests for Istanbul

    NASA Astrophysics Data System (ADS)

    Köhler, N.; Wenzel, F.; Erdik, M.; Alcik, H.; Mert, A.

    2009-04-01

    The Marmara Region is the most densely populated region in Turkey. The greater area of the mega-city of Istanbul is home to about 14 million people. The city is located in the direct vicinity of the Main Marmara Fault, a dextral strike-slip fault system crossing the Sea of Marmara that is the western continuation of the North Anatolian Fault [Le Pichon et al., 2001]. Its closest distance to the city of Istanbul ranges between 15 and 20 km. Recent estimates by Parsons [2004] give a probability of more than 40% that a M ≥ 7 earthquake will affect Istanbul within the next 30 years. Given this high seismic risk, earthquake early warning is an important task in disaster management and seismic risk reduction, increasing the safety of the millions of people living in and around Istanbul and reducing economic losses. The Istanbul Earthquake Rapid Response and Early Warning System (IERREWS) includes a set of 10 strong-motion sensors used for early warning, installed between Istanbul and the Main Marmara Fault. The system works on the exceedance of amplitude thresholds, with three alarm levels defined at three different thresholds [Erdik et al., 2003]. In the context of the research project EDIM (Earthquake Disaster Information System for the Marmara Region, Turkey), the early warning network is planned to be extended by an additional set of 10 strong-motion sensors installed around the Sea of Marmara to include the greater Marmara Region in the early warning process. We present performance tests of both the existing and the planned extended early warning networks, using ground-motion simulations for 280 synthetic earthquakes along the Main Marmara Fault with moment magnitudes between 4.5 and 7.5. We apply the amplitude thresholds of IERREWS, as well as, for comparison, an early warning algorithm based on artificial neural networks that estimates the hypocentral location and magnitude of the occurring earthquake. The estimates are updated continuously with
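
    The amplitude-threshold logic itself is simple; the sketch below maps peak acceleration to one of three alarm levels. The threshold values are placeholders, not the operational IERREWS settings.

        # Alarm levels 1-3 triggered at increasing acceleration thresholds
        # (illustrative values in units of g).
        THRESHOLDS_G = (0.02, 0.05, 0.10)

        def alarm_level(peak_acc_g):
            level = 0
            for i, thr in enumerate(THRESHOLDS_G, start=1):
                if peak_acc_g >= thr:
                    level = i
            return level  # 0 = no alarm

        for a in (0.01, 0.03, 0.12):
            print(f"acc {a:.2f} g -> alarm level {alarm_level(a)}")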

  20. One-Step Targeted Minimum Loss-based Estimation Based on Universal Least Favorable One-Dimensional Submodels

    PubMed Central

    van der Laan, Mark; Gruber, Susan

    2016-01-01

    Consider a study in which one observes n independent and identically distributed random variables whose probability distribution is known to be an element of a particular statistical model, and in which one is concerned with estimating a particular real-valued, pathwise differentiable target parameter of this data probability distribution. The targeted maximum likelihood estimator (TMLE) is an asymptotically efficient substitution estimator. It is obtained by constructing a so-called least favorable parametric submodel through an initial estimator whose score, at zero fluctuation of the initial estimator, spans the efficient influence curve, and by iteratively maximizing the corresponding parametric likelihood until no more updates occur, at which point the updated initial estimator solves the so-called efficient influence curve equation. In this article we construct a one-dimensional universal least favorable submodel for which the TMLE takes only one step, and thereby requires minimal extra data fitting to achieve its goal of solving the efficient influence curve equation. We generalize this to universal least favorable submodels through the relevant part of the data distribution, as required for targeted minimum loss-based estimation. Finally, remarkably, given a multidimensional target parameter, we develop a universal canonical one-dimensional submodel such that the one-step TMLE, maximizing the log-likelihood over only a univariate parameter, solves the multivariate efficient influence curve equation. This allows us to construct a one-step TMLE, based on a one-dimensional parametric submodel through the initial estimator, that solves any desired multivariate set of estimating equations. PMID:27227728

  1. Structural Constraints and Earthquake Recurrence Estimates for the West Tahoe-Dollar Point Fault, Lake Tahoe Basin, California

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Driscoll, N. W.; Kent, G.; Brothers, D. S.; Baskin, R. L.; Babcock, J. M.; Noble, P. J.; Karlin, R. E.

    2011-12-01

    Previous work in the Lake Tahoe Basin (LTB), California, identified the West Tahoe-Dollar Point Fault (WTDPF) as the most hazardous fault in the region. Onshore and offshore geophysical mapping delineated three segments of the WTDPF extending along the western margin of the LTB. The rupture patterns between the three WTDPF segments remain poorly understood. Fallen Leaf Lake (FLL), Cascade Lake, and Emerald Bay are three sub-basins of the LTB, located south of Lake Tahoe, that provide an opportunity to image primary earthquake deformation along the WTDPF and associated landslide deposits. We present results from recent (June 2011) high-resolution seismic CHIRP surveys in FLL and Cascade Lake, as well as complete multibeam swath bathymetry coverage of FLL. Radiocarbon dates obtained from the new piston cores acquired in FLL provide age constraints on the older FLL slide deposits and build on and complement previous work that dated the most recent event (MRE) in Fallen Leaf Lake at ~4.1-4.5 k.y. BP. The CHIRP data beneath FLL image slide deposits that appear to correlate with contemporaneous slide deposits in Emerald Bay and Lake Tahoe. A major slide imaged in FLL CHIRP data is slightly younger than the Tsoyowata ash (7950-7730 cal yrs BP) identified in sediment cores and appears synchronous with a major Lake Tahoe slide deposit (7890-7190 cal yrs BP). The equivalent age of these slides suggests the penultimate earthquake on the WTDPF may have triggered them. If correct, we postulate a recurrence interval of ~3-4 k.y. These results suggest the FLL segment of the WTDPF is near the end of its seismic recurrence cycle. Additionally, CHIRP profiles acquired in Cascade Lake image the WTDPF for the first time in this sub-basin, which is located near the transition zone between the FLL and Rubicon Point sections of the WTDPF. We observe two fault strands trending N45°W across southern Cascade Lake for ~450 m. The strands produce scarps of ~5 m and ~2.7 m, respectively, on the lake

  2. Shear-scaling-based approach for irreversible energy loss estimation in stenotic aortic flow - An in vitro study.

    PubMed

    Gülan, Utku; Binter, Christian; Kozerke, Sebastian; Holzner, Markus

    2017-03-12

    Today, the functional and risk assessment of stenosed arteries is mostly based on ultrasound Doppler blood-flow velocity measurements or catheter pressure measurements, which rely on several assumptions. Alternatively, blood velocity, including turbulent kinetic energy (TKE), may be measured using MRI. The aim of the present study is to validate a TKE-based approach, which relies on the fact that turbulence production is dominated by the flow's shear, for determining the total irreversible energy loss from MRI scans. Three-dimensional particle tracking velocimetry (3D-PTV) and phase-contrast magnetic resonance imaging (PC-MRI) simulations were performed in an anatomically accurate, compliant, silicone aortic phantom. We found that measuring only the laminar viscous losses does not reflect the true losses of stenotic flows, since the contribution of turbulent losses to the total loss becomes more dominant for more severe stenosis types (for example, the laminar loss is 0.0094±0.0015 W and the turbulent loss is 0.0361±0.0015 W for the Remax=13,800 case, where Remax is the Reynolds number based on the velocity in the vena contracta). We show that the commonly used simplified and modified Bernoulli approaches overestimate the total loss, while the new TKE-based method proposed here, referred to as the "shear scaling" approach, results in good agreement between 3D-PTV and simulated PC-MRI (the mean error is around 10%). In addition, we validated the shear-scaling approach on a geometry with post-stenotic dilatation using numerical data from Casas et al. (2016). The shear-scaling-based method may hence be an interesting alternative for irreversible energy loss estimation, replacing traditional approaches in clinical use. We expect that our results will evoke further research, in particular patient studies for clinical implementation of the new method.

  3. Use of starting condition score to estimate changes in body weight and composition during weight loss in obese dogs.

    PubMed

    German, A J; Holden, S L; Bissot, T; Morris, P J; Biourge, V

    2009-10-01

    Prior to starting a weight loss programme, target weight (TW) is often estimated using the starting body condition score (BCS). The current study assessed how well such estimates perform in clinical practice. Information on body weight, BCS and body composition was assessed before and after weight loss in 28 obese, client-owned dogs. The median decrease in starting weight per BCS unit was 10% (5-15%), with no significant difference between dogs losing moderate (1-2 BCS points) or marked (3-4 BCS points) amounts of weight (P=0.627). The mean decrease in body fat per BCS unit change was 5% (3-9%). A model based on a change of 10% of starting weight per unit of BCS above ideal (5/9) most closely estimated actual TW, but marked variability was seen. Therefore, although such calculations may provide a guide to the final TW in obese dogs, they can either over- or under-estimate the appropriate end point of weight loss.
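
    The best-performing rule above amounts to one line of arithmetic; the sketch below applies it with illustrative inputs.

        # Estimate target weight by assuming 10% of starting body weight is
        # lost per BCS unit above the ideal score of 5 on a 9-point scale.
        def target_weight(start_kg, bcs, pct_per_unit=0.10, ideal_bcs=5):
            units_over = max(bcs - ideal_bcs, 0)
            return start_kg * (1.0 - pct_per_unit * units_over)

        print(target_weight(40.0, 8))  # 40 kg dog at BCS 8/9 -> 28 kg target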

  4. Simplified Loss Estimation of Splice to Photonic Crystal Fiber using New Model

    NASA Astrophysics Data System (ADS)

    Karak, Anup; Kundu, Dipankar; Sarkar, Somenath

    2016-06-01

    For a range of fiber parameters and wavelengths, the splice losses between a photonic crystal fiber and a single-mode fiber are calculated using our simplified and effective model of the photonic crystal fiber, following a recently developed elaborate method. Since transverse offset and angular mismatch are serious factors that contribute crucially to splice losses between two optical fibers, these losses between the same pair of fibers are also studied using our formulation. The results match the rigorous ones closely and are consistent with earlier empirical results. Moreover, our formulation can be developed from a theoretical framework over the entire range of optogeometrical parameters of the photonic crystal fiber within the single-mode region, instead of using deeply involved full-vectorial methods. This simple, user-friendly approach to computing splice loss should find wide use among experimentalists and system users.
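
    For orientation, the classical Gaussian mode-field approximation (a generic textbook model, not the authors' formulation) already captures how mode-size mismatch, transverse offset, and tilt combine:

        import numpy as np

        # Splice loss between two fibers with mode-field radii w1, w2 (m),
        # transverse offset d (m) and angular misalignment theta (rad),
        # at wavelength lam in a medium of index n.
        def splice_loss_db(w1, w2, d=0.0, theta_rad=0.0, lam=1.55e-6, n=1.45):
            s = w1**2 + w2**2
            eta = (2.0 * w1 * w2 / s) ** 2                 # mode-size mismatch
            eta *= np.exp(-2.0 * d**2 / s)                 # transverse offset
            eta *= np.exp(-2.0 * (np.pi * n * w1 * w2 * theta_rad / lam) ** 2 / s)
            return -10.0 * np.log10(eta)

        # e.g. SMF (w ~ 5.2 um) to a PCF with effective w ~ 4.0 um, 1 um offset
        print(f"{splice_loss_db(5.2e-6, 4.0e-6, d=1.0e-6):.2f} dB")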

  5. Wildlife Loss Estimates and Summary of Previous Mitigation Related to Hydroelectric Projects in Montana, Volume Three, Hungry Horse Project.

    SciTech Connect

    Casey, Daniel

    1984-10-01

    This assessment addresses the impacts on wildlife populations and wildlife habitats of the Hungry Horse Dam project on the South Fork of the Flathead River, and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate the wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impact on target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were also identified.

  6. Estimation of Blood Loss: Comparing the Accuracy of Operating Room Personnel

    DTIC Science & Technology

    1991-02-01

    Operating Room Services to reserve an unutilized room for the day of the experiment . The experimental period was on June 14, 1990, from 8:30 AM to 12:00...moderate loss he may experience a decrease in pulse pressure, tachycardia, tachypnea, and postural hypotension. A major blood loss may constitute...during the procedure. In discussing his experience with 3,000 transfusions, Blain (1929) emphasized that the amount of blood lost during operations

  7. Extinction cascades partially estimate herbivore losses in a complete Lepidoptera--plant food web.

    PubMed

    Pearse, Ian S; Altermatt, Florian

    2013-08-01

    The loss of species from an ecological community can have cascading effects leading to the extinction of other species. Specialist herbivores are highly diverse and may be particularly susceptible to extinction due to host plant loss. We used a bipartite food web of 900 Lepidoptera (butterfly and moth) herbivores and 2403 plant species from Central Europe to simulate the cascading effect of plant extinctions on Lepidoptera extinctions. Realistic extinction sequences of plants, incorporating red-list status, range size, and native status, altered subsequent Lepidoptera extinctions. We compared simulated Lepidoptera extinctions to the number of actual regional Lepidoptera extinctions and found that all predicted scenarios underestimated total observed extinctions but accurately predicted observed extinctions attributed to host loss (n = 8, 14%). Likely, many regional Lepidoptera extinctions occurred for reasons other than loss of host plant alone, such as climate change and habitat loss. Ecological networks can be useful in assessing a component of extinction risk to herbivores based on host loss, but further factors may be equally important.

  8. Seismic Network Performance Estimation: Comparing Predictions of Magnitude of Completeness and Location Accuracy to Observations from an Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Greig, D. W.; Ackerley, N. J.

    2014-12-01

    The design of seismic networks for the monitoring of induced seismicity is of critical importance. The recent introduction of regulations in various locations around the world (with more upcoming) has created a need for a priori confirmation that certain performance standards are met. We develop a tool to assess two key measures of network performance without an earthquake catalogue: magnitude of completeness and location accuracy. Site noise measurements are taken at existing seismic stations or as part of a noise survey. We then interpolate between measured values to determine a noise map for the entire region. The site noise is summed with the instrument noise to determine the effective station noise at each of the proposed station locations. Location accuracy is evaluated by generating a covariance matrix that represents the error ellipsoid from the travel-time derivatives (Peters and Crosson, 1972). To determine the magnitude of completeness, we assume isotropic radiation and mandate a minimum signal-to-noise ratio for detection. For every grid point, we compute the Brune spectra for synthetic events and iterate to determine the smallest magnitude event that can be detected by at least four stations. We apply this methodology to an example network: we predict the magnitude of completeness and the location accuracy, and we compare the predicted values to observed values generated from the existing earthquake catalogue for the network. We discuss the effects of hypothetical station additions and removals on network performance to simulate network expansions and station failures. The ability to predict hypothetical station performance allows for the optimization of seismic network design and enables prediction of network performance even for a purely hypothetical seismic network. This allows the operators of networks for induced-seismicity monitoring to be confident that performance criteria are met from day one of operations.
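
    A toy version of the completeness calculation is sketched below: at one grid point, the smallest magnitude is sought whose predicted amplitude exceeds a minimum signal-to-noise ratio at four or more stations. The amplitude relation is a generic magnitude-distance scaling standing in for the Brune-spectrum computation, and all numbers are illustrative.

        import numpy as np

        SNR_MIN = 3.0

        def predicted_amplitude(mag, dist_km):
            # hypothetical relation: log10 A = M - 1.6 log10 r - 0.003 r - 2.0
            return 10.0 ** (mag - 1.6 * np.log10(dist_km) - 0.003 * dist_km - 2.0)

        def magnitude_of_completeness(dists_km, station_noise, n_required=4):
            for mag in np.arange(-1.0, 4.0, 0.05):
                snr = predicted_amplitude(mag, dists_km) / station_noise
                if np.count_nonzero(snr >= SNR_MIN) >= n_required:
                    return round(mag, 2)
            return None

        dists = np.array([3.0, 5.0, 8.0, 12.0, 20.0])     # epicentral distances
        noise = np.array([1e-4, 2e-4, 1e-4, 5e-4, 1e-3])  # effective noise levels
        print(magnitude_of_completeness(dists, noise))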

  9. Earthquake hazards: a national threat

    USGS Publications Warehouse

    ,

    2006-01-01

    Earthquakes are one of the most costly natural hazards faced by the Nation, posing a significant risk to 75 million Americans in 39 States. The risks that earthquakes pose to society, including death, injury, and economic loss, can be greatly reduced by (1) better planning, construction, and mitigation practices before earthquakes happen, and (2) providing critical and timely information to improve response after they occur. As part of the multi-agency National Earthquake Hazards Reduction Program, the U.S. Geological Survey (USGS) has the lead Federal responsibility to provide notification of earthquakes in order to enhance public safety and to reduce losses through effective forecasts based on the best possible scientific information.

  10. Ground motion estimation for the elevated bridges of the Kyushu Shinkansen derailment caused by the foreshock of the 2016 Kumamoto earthquake based on the site-effect substitution method

    NASA Astrophysics Data System (ADS)

    Hata, Yoshiya; Yabe, Masaaki; Kasai, Akira; Matsuzaki, Hiroshi; Takahashi, Yoshikazu; Akiyama, Mitsuyoshi

    2016-12-01

    An earthquake of JMA magnitude 6.5 (first event) hit Kumamoto Prefecture, Japan, at 21:26 JST on April 14, 2016. Subsequently, an earthquake of JMA magnitude 7.3 (second event) hit Kumamoto and Oita Prefectures at 01:46 JST on April 16, 2016. An out-of-service Kyushu Shinkansen train carrying no passengers and traveling on elevated bridges was derailed by the first event. This was the third derailment caused by an earthquake in the history of the Japanese Shinkansen, after one caused by the 2004 Mid-Niigata Prefecture Earthquake and another triggered by the 2011 Tohoku Earthquake. To analyze the mechanism of this third derailment, it is crucial to evaluate the strong ground motion at the derailment site with high accuracy. For this study, temporary earthquake observations were first carried out at a location near the bridge site, because although the JMA Kumamoto station site and the derailment site are closely located, the ground response characteristics at the two sites differ. Next, empirical site amplification and phase effects were evaluated based on the obtained observation records. Finally, seismic waveforms during the first event at the bridge site of interest were estimated based on the site-effect substitution method. The resulting estimated acceleration and velocity waveforms for the derailment site have much larger amplitudes than the waveforms recorded at the JMA Kumamoto and MLIT Kumamoto station sites. The reliability of these estimates is supported by the finding that the same methods accurately reproduce the strong ground motions at the MLIT Kumamoto station site. These estimated ground motions will be useful for reasonable safety assessment of anti-derailment devices on elevated railway bridges.

  11. Estimated tooth loss based on number of present teeth in Japanese adults using national surveys of dental disease.

    PubMed

    Yoshino, Koichi; Ishizuka, Yoichi; Fukai, Kakuhiro; Takiguchi, Toru; Sugihara, Naoki

    2015-01-01

    Oral health instruction for adults should take into account the potential effect of tooth loss, as existing tooth loss has been suggested to predict further tooth loss. The purpose of this study was therefore to determine whether further tooth loss can be predicted from the number of present teeth (PT). We employed the same method as in our previous study, this time using two national surveys of dental disease, which were deemed to represent a generational cohort. Percentiles were estimated using the cumulative frequency distribution of PT from the two surveys. The first was a survey of 704 participants aged 50-59 years conducted in 2005, and the second was a survey of 747 participants aged 56-65 years conducted in 2011. The 1st to 100th percentiles of the number of PT were calculated for both age groups. Using these percentiles and a generational cohort analysis based on the two surveys, the number of teeth lost per year could be calculated. The distribution of the number of teeth lost formed a convex curve. Peak tooth loss occurred at around 12-14 PT, with 0.54 teeth lost per year. The percentage of teeth lost (per number of PT) increased as the number of PT decreased. The results confirm that tooth loss promotes further tooth loss. These data should be made available for use in adult oral health education.
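
    The cohort calculation reduces to matching percentiles across the two surveys, six years apart, and dividing by the elapsed time. The percentile arrays below are illustrative stand-ins for the survey distributions.

        import numpy as np

        def loss_rate_per_year(pt_2005, pt_2011, years=6.0):
            """Teeth lost per year at matched percentiles of the PT distribution."""
            return (np.asarray(pt_2005) - np.asarray(pt_2011)) / years

        pt_2005 = [28, 26, 22, 14, 6]   # PT at the 90th..10th percentiles (demo)
        pt_2011 = [27, 25, 20, 11, 4]
        print(loss_rate_per_year(pt_2005, pt_2011))   # teeth lost per year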

  12. Estimating Earthquake Magnitude from the Kentucky Bend Scarp in the New Madrid Seismic Zone Using Field Geomorphic Mapping and High-Resolution LiDAR Topography

    NASA Astrophysics Data System (ADS)

    Kelson, K. I.; Kirkendall, W. G.

    2014-12-01

    Recent suggestions that the 1811-1812 earthquakes in the New Madrid Seismic Zone (NMSZ) ranged from M6.8-7.0 versus M8.0 have implications for seismic hazard estimation in the central US. We more accurately identify the location of the NW-striking, NE-facing Kentucky Bend scarp along the northern Reelfoot fault, which is spatially associated with the Lake County uplift, contemporary seismicity, and changes in the Mississippi River from the February 1812 earthquake. We use 1m-resolution LiDAR hillshades and slope surfaces, aerial photography, soil surveys, and field geomorphic mapping to estimate the location, pattern, and amount of late Holocene coseismic surface deformation. We define eight late Holocene to historic fluvial deposits, and delineate younger alluvia that are progressively inset into older deposits on the upthrown, western side of the fault. Some younger, clayey deposits indicate past ponding against the scarp, perhaps following surface deformational events. The Reelfoot fault is represented by sinuous breaks-in-slope cutting across these fluvial deposits, locally coinciding with shallow faults identified via seismic reflection data (Woolery et al., 1999). The deformation pattern is consistent with NE-directed reverse faulting along single or multiple SW-dipping fault planes, and the complex pattern of fluvial deposition appears partially controlled by intermittent uplift. Six localities contain scarps across correlative deposits and allow evaluation of cumulative surface deformation from LiDAR-derived topographic profiles. Displacements range from 3.4±0.2 m, to 2.2±0.2 m, 1.4±0.3 m, and 0.6±0.1 m across four progressively younger surfaces. The spatial distribution of the profiles argues against the differences being a result of along-strike uplift variability. We attribute the lesser displacements of progressively younger deposits to recurrent surface deformation, but do not yet interpret these initial data with respect to possible earthquake

  13. A preliminary report on the Great Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Wang, Zifa

    2008-06-01

    The May 12, 2008 Great Wenchuan Earthquake has resulted in more than 68,858 deaths and losses in the hundreds of billions of RMB as of May 30, 2008, and these numbers will undoubtedly increase as more information becomes available on the extent of the event. Immediately after the earthquake, the China Earthquake Administration (CEA) responded quickly by sending teams of experts to the affected region, eventually including over 60 staff members from the Institute of Engineering Mechanics (IEM). This paper reports preliminary information gathered in the first 18 days after the event, covering seismicity, search and rescue efforts, observed ground motions, and damage and loss estimates. The extensive field investigation has revealed a number of valuable findings that could be useful for improving earthquake engineering research in the future. Once again, this earthquake has shown that the vertical component of ground motion is as significant as the horizontal components in the near-source area. Finally, note that as more information is gathered, the numbers reported in this paper will need to be adjusted accordingly.

  14. Estimating losses in heat networks coated with modern liquid crystal thermal insulation

    NASA Astrophysics Data System (ADS)

    Ilyin, R. A.

    2015-07-01

    A persistent problem in the operation of Russian heat networks is the loss of thermal energy during transfer to consumers. According to experts, losses in heat networks reach 35-50%. In this work, some properties of currently used thermo-insulating materials are described. The innovative TLM Ceramic liquid-crystal thermal insulation is presented through its claimed technical and economic characteristics and field-performance data, together with experts' doubts about its declared properties. In-situ measurement data are presented for a section of the hot-water system of the Astrakhan Severnaya heat and power plant coated with a 2-mm-thick layer of the liquid-crystal thermal insulation. Specific heat losses from the hot-water system surface have been determined, and arguments against the use of TLM Ceramic liquid-crystal thermal insulation in heat-and-power engineering are discussed.
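
    The physics behind such an assessment is the steady radial conduction-convection balance for an insulated pipe. The generic sketch below (illustrative numbers, not the paper's measurements) shows why a 2-mm coating, even with an optimistic conductivity, reduces heat loss far less than conventional thick insulation.

        import math

        def heat_loss_per_m(t_pipe, t_amb, r_pipe, thickness, k_ins, h_out=10.0):
            """Steady radial heat loss per metre of pipe, W/m."""
            r_out = r_pipe + thickness
            r_cond = math.log(r_out / r_pipe) / (2.0 * math.pi * k_ins)  # conduction
            r_conv = 1.0 / (2.0 * math.pi * r_out * h_out)               # outer film
            return (t_pipe - t_amb) / (r_cond + r_conv)

        # 2 mm coating with an optimistic k = 0.06 W/(m K) vs 50 mm mineral wool
        print(heat_loss_per_m(90.0, 10.0, 0.05, 0.002, 0.06))   # ~195 W/m
        print(heat_loss_per_m(90.0, 10.0, 0.05, 0.050, 0.045))  # ~31 W/m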

  15. Turkish Compulsory Earthquake Insurance (TCIP)

    NASA Astrophysics Data System (ADS)

    Erdik, M.; Durukal, E.; Sesetyan, K.

    2009-04-01

    Through a World Bank project, the government-sponsored Turkish Catastrophe Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (from 226 mostly small earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in southeast Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about USD 90 for a 100-square-meter reinforced concrete building in the most hazardous zone, with a 2% deductible. The earthquake engineering related shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings) with only a 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation, and the sum insured) needs to be overhauled. Especially for large cities, models can be developed in which the expected earthquake performance of a housing unit (and consequently its insurance premium) is assessed on the basis of the unit's location (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, the TCIP could in the future contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.
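
    The quoted figures can be cross-checked with one line of arithmetic: the premium is the rate times the sum insured, and the deductible is subtracted from any claim. The sketch below assumes a sum insured of 70,000 USD for the 100-square-meter example unit; that value and the payout rule are illustrative assumptions, not TCIP tariff rules.

        # Rough consistency check of the TCIP figures quoted above: a 0.13% rate on a
        # sum insured near the cap should land near the quoted ~90 USD premium.
        rate = 0.0013            # average premium rate for reinforced concrete
        sum_insured = 70_000     # USD, assumed sum insured for a 100 m2 unit
        deductible_frac = 0.02   # 2% deductible, taken here as a share of sum insured

        premium = rate * sum_insured
        print(f"annual premium ~ {premium:.0f} USD")   # ~91 USD

        loss = 25_000            # USD, hypothetical earthquake damage to the unit
        payout = max(0.0, min(loss, sum_insured) - deductible_frac * sum_insured)
        print(f"claim payout   ~ {payout:.0f} USD")    # loss minus deductible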

  16. Model for Estimating Noise-Induced Hearing Loss Associated With Occupational Noise Exposure in a Specified US Navy Population

    DTIC Science & Technology

    2007-01-10

    U.S. government for noise-induced hearing loss (NIHL) caused to service personnel by noisy systems and spaces are unaccounted for in estimates of ... life-cycle costs. This pilot study explored whether a NIHL prediction algorithm from the American National Standards Institute (ANSI S3.44-1996) could ... medical and compensation costs of NIHL in this population. This population of Sailors has a “simple” exposure in that the main career-long noise
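
    For context, the ANSI S3.44-1996 model referenced above (harmonized with ISO 1999) predicts the median noise-induced permanent threshold shift (NIPTS) from the 8-h equivalent exposure level and the exposure duration. The sketch below implements the model's general functional form; u, v, and L0 are frequency-dependent table values in the standard, and the numbers used here are placeholders, not the standard's entries.

        # Functional form of the ISO 1999 / ANSI S3.44 median NIPTS model (sketch only;
        # consult the standard for the actual parameter tables).
        import math

        def median_nipts_db(l_ex_8h, years, u, v, l0):
            """Median NIPTS (dB) after `years` of exposure at l_ex_8h dB(A)."""
            if l_ex_8h <= l0:
                return 0.0
            if years >= 10:
                return (u + v * math.log10(years)) * (l_ex_8h - l0) ** 2
            # Below 10 years the standard interpolates logarithmically from the 10-year value
            return (math.log10(years + 1) / math.log10(11)) * median_nipts_db(l_ex_8h, 10, u, v, l0)

        # Placeholder parameters (NOT the standard's table values), just to exercise the form:
        print(f"{median_nipts_db(95, 20, u=-0.03, v=0.10, l0=78):.1f} dB median NIPTS")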

  17. DXA, bioelectrical impedance, ultrasonography and biometry for the estimation of fat and lean mass in cats during weight loss

    PubMed Central

    2012-01-01

    Background: Few equations have been developed in veterinary medicine, compared to human medicine, to predict body composition. The present study evaluated the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA), and ultrasonography (US) in cats, proposing equations to estimate fat mass (FM) and lean mass (LM), with dual-energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) enrolled in a weight loss program were used. DXA, BIO, BIA, and US were performed in the obese state (T0), after 10% weight loss (T1), and after 20% weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA, and US. The best models were evaluated by simple regression analysis, and predicted means were compared with those determined by DXA to verify the accuracy of the equations. Results: The independent variables determined by BIO, BIA, and US that best correlated (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance), and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, p value, and r2, 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted FM, and only one accurately predicted LM. Conclusions: The two-variable equations are preferable in practice and offer an alternative method for estimating body composition in the clinical routine. To estimate lean mass, equations using body weight together with biometric measures can be proposed; to estimate fat mass, equations using body weight together with bioimpedance analysis can be proposed. PMID:22781317
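
    A minimal sketch of fitting a two-variable prediction equation of the kind the study proposes, e.g. FM from BW and SFLT. The data below are made up for illustration; the study's actual coefficients come from its 16 cats measured against DXA, and the specific variable pairing is our assumption consistent with the abstract's conclusion.

        # Ordinary least squares fit: FM = b0 + b1*BW + b2*SFLT (hypothetical data).
        import numpy as np

        bw = np.array([6.1, 5.8, 7.0, 6.4, 5.5, 6.8])      # kg (hypothetical)
        sflt = np.array([4.2, 3.9, 5.1, 4.5, 3.5, 4.9])    # mm (hypothetical)
        fm_dxa = np.array([2.4, 2.2, 3.0, 2.6, 1.9, 2.9])  # kg, DXA reference (hypothetical)

        X = np.column_stack([np.ones_like(bw), bw, sflt])  # design matrix with intercept
        coef, *_ = np.linalg.lstsq(X, fm_dxa, rcond=None)
        b0, b1, b2 = coef
        print(f"FM ~ {b0:.2f} + {b1:.2f}*BW + {b2:.2f}*SFLT")
        print("predicted:", np.round(X @ coef, 2))         # compare against fm_dxa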

  18. Estimating Tempo and Mode of Y Chromosome Turnover: Explaining Y Chromosome Loss With the Fragile Y Hypothesis

    PubMed Central

    Blackmon, Heath; Demuth, Jeffery P.

    2014-01-01

    Chromosomal sex determination is phylogenetically widespread, having arisen independently in many lineages. Decades of theoretical work provide predictions about sex chromosome differentiation that are well supported by observations in both XY and ZW systems. However, the phylogenetic scope of previous work gives us a limited understanding of the pace of sex chromosome gain and loss and why Y or W chromosomes are more often lost in some lineages than others, creating XO or ZO systems. To gain phylogenetic breadth we therefore assembled a database of 4724 beetle species’ karyotypes and found substantial variation in sex chromosome systems. We used the data to estimate rates of Y chromosome gain and loss across a phylogeny of 1126 taxa estimated from seven genes. Contrary to our initial expectations, we find that highly degenerated Y chromosomes of many members of the suborder Polyphaga are rarely lost, and that cases of Y chromosome loss are strongly associated with chiasmatic segregation during male meiosis. We propose the “fragile Y” hypothesis, that recurrent selection to reduce recombination between the X and Y chromosome leads to the evolution of a small pseudoautosomal region (PAR), which, in taxa that require XY chiasmata for proper segregation during meiosis, increases the probability of aneuploid gamete production, with Y chromosome loss. This hypothesis predicts that taxa that evolve achiasmatic segregation during male meiosis will rarely lose the Y chromosome. We discuss data from mammals, which are consistent with our prediction. PMID:24939995
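
    Gain/loss rates of this kind are estimated with a continuous-time Markov model of character states evolving along a phylogeny. Below is a minimal two-state sketch (XY <-> XO) of the branch transition probabilities, P(t) = expm(Qt); the rates are arbitrary placeholders, and the study itself fits such rates by likelihood over a 1126-taxon tree with a richer state space.

        # Two-state Markov sketch of Y chromosome loss/gain along a branch of length t.
        import numpy as np
        from scipy.linalg import expm

        loss = 0.02   # XY -> XO rate (placeholder)
        gain = 0.002  # XO -> XY rate (placeholder)
        Q = np.array([[-loss,  loss],
                      [ gain, -gain]])  # rows: from-state XY, XO; each row sums to zero

        t = 10.0  # branch length
        P = expm(Q * t)  # P[i, j] = probability of ending in state j given start in i
        print("P(XY stays XY) =", round(P[0, 0], 3))
        print("P(XY -> XO)    =", round(P[0, 1], 3))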

  20. An approach to estimating radiological risk of offsite release from a design basis earthquake for the Process Experimental Pilot Plant (PREPP)

    SciTech Connect

    Lucero, V.; Meale, B.M.; Reny, D.A.; Brown, A.N.

    1990-09-01

    In compliance with Department of Energy (DOE) Order 6430.1A, a seismic analysis was performed on DOE's Process Experimental Pilot Plant (PREPP), a facility for processing low-level and transuranic (TRU) waste. Because no hazard curves were available for the Idaho National Engineering Laboratory (INEL), DOE guidelines were used to estimate the frequency of the specified design-basis earthquake (DBE). A dynamic structural analysis of the building was performed using the DBE parameters, followed by a probabilistic risk assessment (PRA). For the PRA, the facility equipment was organized functionally so that top events for a representative event tree model could be determined. Building response spectra (calculated from the structural analysis), in conjunction with generic fragility data, were used to generate fragility curves for the PREPP equipment. Using these curves, failure probabilities for each top event were calculated. These probabilities were integrated into the event tree model, and accident sequences and their respective probabilities were calculated through quantification. By combining the sequence failure probabilities with a transport analysis of the estimated airborne source term from a DBE, onsite and offsite consequences were calculated. The results of the comprehensive analysis substantiated the ability of the PREPP facility to withstand a DBE with negligible consequences (i.e., the estimated release was within personnel and environmental dose guidelines). 57 refs., 19 figs., 20 tabs.
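
    The fragility-curve step in a seismic PRA is commonly modeled with a lognormal capacity: the probability a component fails at demand level a is Pf(a) = Phi(ln(a / A_m) / beta), where A_m is the median capacity and beta the logarithmic standard deviation. The sketch below uses placeholder parameters, not the PREPP report's generic fragility data; the resulting per-component probabilities feed the event-tree top events.

        # Lognormal fragility: conditional failure probability vs. seismic demand.
        from math import log
        from statistics import NormalDist

        def fragility(a, a_median, beta):
            """Failure probability at demand a (same units as a_median, e.g. g)."""
            return NormalDist().cdf(log(a / a_median) / beta)

        a_median, beta = 0.8, 0.4       # g, placeholder fragility parameters
        for a in (0.2, 0.4, 0.8, 1.2):  # candidate demand levels, g
            print(f"Pf({a:.1f} g) = {fragility(a, a_median, beta):.3f}")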

  1. Motor unit loss estimation by the multipoint incremental MUNE method in children with spinal muscular atrophy--a preliminary study.

    PubMed

    Gawel, Malgorzata; Kostera-Pruszczyk, Anna; Lusakowska, Anna; Jedrzejowska, Maria; Ryniewicz, Barbara; Lipowska, Marta; Gawel, Damian; Kaminska, Anna

    2015-03-01

    Quantitative EMG reflects denervation of muscles after lower motor neuron degeneration in spinal muscular atrophy (SMA) but does not reflect actual motor unit loss. The aim of our study was to assess the value of the multipoint incremental motor unit number estimation (MUNE) method, in Shefner's modification, for estimating motor unit loss in SMA. The number of motor units, the mean amplitude of the average surface-detected single motor unit potential (SMUP), and the amplitude of compound motor action potentials (CMAP) were estimated in the abductor pollicis brevis (APB) in 14 children with SMA. Significant differences in MUNE values and in SMUP and CMAP amplitudes were found between the SMA and control groups (P < 0.0001). MUNE values correlated with Hammersmith Functional Motor Scale (HFMS) scores (P < 0.05). Increased SMUP amplitude values correlated with decreased HFMS scores (P < 0.05). The study confirms that the MUNE method in Shefner's modification is a useful tool reflecting motor unit loss in SMA, and that it is easy to perform and well tolerated. MUNE and SMUP amplitude appear to be sensitive parameters reflecting motor dysfunction in SMA, but a longitudinal study in a larger number of subjects is needed.
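
    The arithmetic at the core of MUNE is the maximal CMAP amplitude divided by the mean SMUP amplitude; in the multipoint incremental method, the SMUP estimate comes from amplitude increments recorded at several stimulation sites. The values below are hypothetical, for illustration only.

        # MUNE = maximal CMAP amplitude / mean SMUP amplitude (hypothetical values).
        increments_mv = [0.042, 0.055, 0.048, 0.060, 0.051]  # SMUP increments, mV
        cmap_max_mv = 5.8                                    # maximal CMAP amplitude, mV

        mean_smup = sum(increments_mv) / len(increments_mv)
        mune = cmap_max_mv / mean_smup
        print(f"mean SMUP = {mean_smup*1000:.0f} uV, MUNE ~ {mune:.0f} motor units")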

  2. The 1964 Great Alaska Earthquake and tsunamis: a modern perspective and enduring legacies

    USGS Publications Warehouse

    Brocher, Thomas M.; Filson, John R.; Fuis, Gary S.; Haeussler, Peter J.; Holzer, Thomas L.; Plafker, George; Blair, J. Luke

    2014-01-01

    The magnitude 9.2 Great Alaska Earthquake that struck south-central Alaska at 5:36 p.m. on Friday, March 27, 1964, is the largest recorded earthquake in U.S. history and the second-largest earthquake recorded with modern instruments. The earthquake was felt throughout most of mainland Alaska, as far west as Dutch Harbor in the Aleutian Islands some 480 miles away, and at Seattle, Washington, more than 1,200 miles to the southeast of the fault rupture, where the Space Needle swayed perceptibly. The earthquake caused rivers, lakes, and other waterways to slosh as far away as the coasts of Texas and Louisiana. Water-level recorders in 47 states—the entire Nation except for Connecticut, Delaware, and Rhode Island—registered the earthquake. It was so large that it caused the entire Earth to ring like a bell: vibrations that were among the first of their kind ever recorded by modern instruments. The Great Alaska Earthquake spawned thousands of lesser aftershocks and hundreds of damaging landslides, submarine slumps, and other ground failures. Alaska's largest city, Anchorage, located west of the fault rupture, sustained heavy property damage. Tsunamis produced by the earthquake resulted in deaths and damage as far away as Oregon and California. Altogether the earthquake and subsequent tsunamis caused 129 fatalities and an estimated $2.3 billion in property losses (in 2013 dollars). Most of the population of Alaska and its major transportation routes, ports, and infrastructure lie near the eastern segment of the Aleutian Trench that ruptured in the 1964 earthquake. Although the Great Alaska Earthquake was tragic because of the loss of life and property, it provided a wealth of data about subduction-zone earthquakes and the hazards they pose. The leap in scientific understanding that followed the 1964 earthquake has led to major breakthroughs in earth science research worldwide over the past half century. This fact sheet commemorates the Great Alaska Earthquake and

  3. Across-frequency behavioral estimates of the contribution of inner and outer hair cell dysfunction to individualized audiometric loss

    PubMed Central

    Johannesen, Peter T.; Pérez-González, Patricia; Lopez-Poveda, Enrique A.

    2014-01-01

    Identifying the multiple contributors to the audiometric loss of a hearing impaired (HI) listener at a particular frequency is becoming gradually more useful as