Science.gov

Sample records for earthquake loss estimation

  1. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment for strong earthquakes carried out by worldwide systems operating in emergency mode. Timely and correct action just after an event can yield significant benefits in saving lives; in this situation, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and the offer of humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of the disaster. The uncertainties on the parameters used in the estimation process are numerous and large: knowledge about the physical phenomena and uncertainties on the parameters used to describe them; the overall adequacy of the modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of shaking, etc. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws that are poorly known (if not out of the reach of engineers for a large portion of the building stock); while a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. The way the population inside the buildings at the time of shaking is affected by the physical damage caused to the buildings is also far from precisely known. The paper analyzes the influence of uncertainties in strong event parameters determined by alert seismological surveys and of the simulation models used at all stages, from estimating shaking intensity

  2. Loss estimation of Membramo earthquake

    NASA Astrophysics Data System (ADS)

    Damanik, R.; Sedayo, H.

    2016-05-01

    The tectonics of Papua are dominated by the oblique collision of the Pacific plate along the north side of the island. Very high relative plate motion (about 120 mm/year) between the Pacific and Papua-Australian plates gives this region a very high earthquake production rate, about twice that of Sumatra, the western margin of Indonesia. Most of the seismicity beneath the island of New Guinea is clustered near the Huon Peninsula, the Mamberamo region, and the Bird's Neck. At 04:41 local time (GMT+9) on July 28th, 2015, a large earthquake of Mw = 7.0 occurred on the West Mamberamo Fault System. The focal mechanism is dominated by northwest-trending thrust faulting. A GMPE and ATC vulnerability curves were used to estimate the distribution of damage. The mean estimated loss caused by this earthquake is IDR 78.6 billion. We estimate that the insured loss will be only a small portion of the total, owing to deductibles.

  3. Estimation of Future Earthquake Losses in California

    NASA Astrophysics Data System (ADS)

    Rowshandel, B.; Wills, C. J.; Cao, T.; Reichle, M.; Branum, D.

    2003-12-01

    Recent developments in earthquake hazard and damage modeling, computing, and data management and processing have made it possible to estimate the levels of damage from earthquakes that may be expected in California in the future. These developments have been mostly published in the open literature and provide an opportunity to estimate the levels of earthquake damage Californians can expect to suffer during the next several decades. Within the past 30 years, earthquake losses have increased dramatically, mostly because our exposure to earthquake hazards has increased. All but four of the recent damaging earthquakes have occurred distant from California's major population centers. Two, the Loma Prieta earthquake and the San Fernando earthquake, occurred on the edges of major populated areas. Loma Prieta caused significant damage in nearby Santa Cruz and in the more distant, heavily populated San Francisco Bay area. The 1971 San Fernando earthquake had an epicenter in the lightly populated San Gabriel Mountains, but caused slightly over 2 billion dollars in damage in the Los Angeles area. As urban areas continue to expand, the population and infrastructure at risk increase. When earthquakes occur closer to populated areas, damage is more significant. The relatively minor Whittier Narrows earthquake of 1987 caused over 500 million dollars in damage because it occurred in the Los Angeles metropolitan area, not at its fringes. The Northridge earthquake ruptured a fault directly beneath the San Fernando Valley and caused about 46 billion dollars in damage. This vast increase in damage relative to the San Fernando earthquake reflected both the location of the earthquake directly beneath the populated area and the 23 years of continued development and resulting greater exposure to potential damage. We have calculated losses from potential future earthquakes, both as scenarios of potential earthquakes and as annualized losses considering all the potential

  4. Building Loss Estimation for Earthquake Insurance Pricing

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Erdik, M.; Sesetyan, K.; Demircioglu, M. B.; Fahjan, Y.; Siyahi, B.

    2005-12-01

    After the 1999 earthquakes in Turkey, several changes took place in the insurance sector. A compulsory earthquake insurance scheme was introduced by the government. The reinsurance companies increased their rates, and some even suspended operations in the market. Most importantly, the insurance companies realized the importance of portfolio analysis in shaping their future market strategies. The paper describes an earthquake loss assessment methodology that can be used for insurance pricing and portfolio loss estimation, based on our working experience in the insurance market. The basic ingredients are probabilistic and deterministic regional site-dependent earthquake hazard, a regional building inventory (and/or portfolio), building vulnerabilities associated with typical construction systems in Turkey, and estimates of building replacement costs for different damage levels. Probable maximum and average annualized losses are estimated as the result of the analysis. There is a two-level earthquake insurance system in Turkey, the effect of which is incorporated in the algorithm: the national compulsory earthquake insurance scheme and the private earthquake insurance system. To buy private insurance one has to be covered by the national system, which has limited coverage. As a demonstration of the methodology we look at the case of Istanbul and use its building inventory data instead of a portfolio. A state-of-the-art time-dependent earthquake hazard model that portrays the increased earthquake expectancies in Istanbul is used. Intensity- and spectral-displacement-based vulnerability relationships are incorporated in the analysis. In particular we look at the uncertainty in the loss estimates that arises from the vulnerability relationships, and at the effect of the implemented repair cost ratios.

  5. Estimating economic losses from earthquakes using an empirical approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2013-01-01

    We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
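
    The published approach scales GDP-based economic exposure by a country-specific factor and converts exposure to loss with an intensity-dependent loss ratio. The sketch below illustrates that idea only; the alpha factor and the loss ratios are placeholders, not PAGER's calibrated country-specific values.

    # Hedged sketch of a GDP-scaled economic exposure and loss calculation in the
    # spirit of the approach described above. All numeric parameters are
    # illustrative placeholders.
    def economic_loss(pop_by_mmi, gdp_per_capita, alpha, loss_ratio_by_mmi):
        loss = 0.0
        for mmi, pop in pop_by_mmi.items():
            exposure = alpha * gdp_per_capita * pop          # US$ exposed at this MMI
            loss += loss_ratio_by_mmi.get(mmi, 0.0) * exposure
        return loss

    # Example inputs: population exposed per MMI level (from a ShakeMap/LandScan
    # overlay), per-capita GDP in US$, and placeholder loss ratios.
    pop_by_mmi = {6: 2.0e6, 7: 5.0e5, 8: 1.0e5, 9: 1.0e4}
    ratios = {6: 0.001, 7: 0.01, 8: 0.05, 9: 0.15}
    print(f"{economic_loss(pop_by_mmi, 5000, 3.0, ratios):,.0f} US$")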

  6. Estimating annualized earthquake losses for the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Bausch, Douglas; Chen, Rui; Bouabid, Jawhar; Seligson, Hope

    2015-01-01

    We make use of the most recent National Seismic Hazard Maps (the 2008 and 2014 cycles), updated census data on population, and economic exposure estimates of the general building stock to quantify annualized earthquake loss (AEL) for the conterminous United States. The AEL analyses were performed using the Federal Emergency Management Agency's (FEMA) Hazus software, which facilitated a systematic comparison of the influence of the 2014 National Seismic Hazard Maps on annualized loss estimates in different parts of the country. The losses from an individual earthquake could easily exceed many tens of billions of dollars, and the long-term averaged value of losses from all earthquakes within the conterminous U.S. has been estimated to be a few billion dollars per year. This study estimated nationwide losses to be approximately $4.5 billion per year (in 2012$), roughly 80% of which can be attributed to the States of California, Oregon, and Washington. We document the change in estimated AELs arising solely from the change in the assumed hazard map. The change from the 2008 map to the 2014 map results in a 10 to 20% reduction in AELs for the highly seismic States of the Western United States, whereas the reduction is even more significant for the Central and Eastern United States.

  7. Global Building Inventory for Earthquake Loss Estimation and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David; Porter, Keith

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia, and (5) other literature.

  8. Rapid estimation of earthquake loss based on instrumental seismic intensity: design and realization

    NASA Astrophysics Data System (ADS)

    Huang, Hongsheng; Chen, Lin; Zhu, Gengqing; Wang, Lin; Lin, Yanzhao; Wang, Huishan

    2013-11-01

    Our ability to acquire large volumes of real-time earthquake observation data, coupled with increased computer performance, means that near-real-time instrumental seismic intensity can be obtained from instrumentally observed ground motion data using appropriate spatial interpolation methods. By combining vulnerability results from earthquake disaster research with earthquake disaster assessment models, we can estimate the losses caused by devastating earthquakes, in an attempt to provide more reliable information for earthquake emergency response and decision support. This paper analyzes the latest progress in rapid earthquake loss estimation methods in China and abroad. A new method for estimating earthquake loss based on rapid reporting of instrumental seismic intensity is proposed and the relevant software is developed. Finally, a case study using the ML 4.9 earthquake that occurred in Shunchang County, Fujian Province, on March 13, 2007 is given as an example of the proposed method.
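
    The abstract does not specify the interpolation scheme; as one common choice, an inverse-distance-weighting sketch for mapping station-derived instrumental intensities onto grid nodes is shown below (a sketch only, not the paper's implementation).

    # Minimal inverse-distance-weighting (IDW) sketch for interpolating
    # instrumental intensities observed at stations onto arbitrary grid nodes.
    import numpy as np

    def idw_intensity(station_xy, station_intensity, grid_xy, power=2.0):
        station_xy = np.asarray(station_xy, dtype=float)      # (n, 2)
        vals = np.asarray(station_intensity, dtype=float)     # (n,)
        grid_xy = np.asarray(grid_xy, dtype=float)            # (m, 2)
        # pairwise distances between grid nodes and stations, shape (m, n)
        d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
        d = np.maximum(d, 1e-6)                               # avoid division by zero
        w = 1.0 / d**power
        return (w * vals).sum(axis=1) / w.sum(axis=1)

    # Example: three stations and two grid nodes (coordinates in km).
    stations = [(0, 0), (10, 0), (0, 10)]
    observed = [7.2, 6.1, 5.8]
    grid = [(2, 2), (8, 8)]
    print(idw_intensity(stations, observed, grid))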

  9. Method of expected earthquake losses estimation based on the frequency of seismic site intensity

    NASA Astrophysics Data System (ADS)

    Gao, Meng-Tan

    1995-05-01

    During a given exposure period, a site may be shaken by earthquakes several times, but this effect is neglected in the currently used models of earthquake loss estimation. When the occurrence rate of the affecting intensity is calculated from the difference of exceedance probabilities, the treatment underestimates the earthquake loss, especially when the exposure period is long. To overcome the shortcomings of the current model, a new framework for earthquake loss estimation is derived: during the given period, the expected earthquake loss corresponding to a specific affecting intensity equals the expected number of occurrences of that intensity multiplied by the expected loss conditional on that intensity, and the total expected loss is the sum of the contributions of all possible intensities. On the basis of the seismicity model used in compiling the “Chinese Seismic Intensity Zoning Map (1990)”, a new formula for the expected loss and the variance of the estimate are provided. The example and the comparison with the currently used method show that the new method is applicable and necessary. These results lay a scientific foundation for earthquake loss estimation, insurance, and disaster prevention.
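
    Written out symbolically (a direct transcription of the verbal definition above; the notation is ours), the framework is

        E[L_total] = \sum_{I} E[N_I] \, E[L \mid I],

    where N_I is the number of times the site experiences intensity I during the exposure period and E[L | I] is the expected loss given a single occurrence of intensity I. The conventional approach instead weights E[L | I] by a probability derived from differences of exceedance probabilities, which undercounts repeated shaking over long exposure periods.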

  10. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, and data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined by the WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  11. A global building inventory for earthquake loss estimation and risk management

    USGS Publications Warehouse

    Jaiswal, K.; Wald, D.; Porter, K.

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia, and (5) other literature. © 2010, Earthquake Engineering Research Institute.

  12. Comparing population exposure to multiple Washington earthquake scenarios for prioritizing loss estimation studies

    USGS Publications Warehouse

    Wood, Nathan J.; Ratliff, Jamie L.; Schelling, John; Weaver, Craig S.

    2014-01-01

    Scenario-based, loss-estimation studies are useful for gauging potential societal impacts from earthquakes but can be challenging to undertake in areas with multiple scenarios and jurisdictions. We present a geospatial approach using various population data for comparing earthquake scenarios and jurisdictions to help emergency managers prioritize where to focus limited resources on data development and loss-estimation studies. Using 20 earthquake scenarios developed for the State of Washington (USA), we demonstrate how a population-exposure analysis across multiple jurisdictions based on Modified Mercalli Intensity (MMI) classes helps emergency managers understand and communicate where potential loss of life may be concentrated and where impacts may be more related to quality of life. Results indicate that certain well-known scenarios may directly impact the greatest number of people, whereas other, potentially lesser-known, scenarios impact fewer people but with consequences that could be more severe. The use of economic data to profile each jurisdiction’s workforce in earthquake hazard zones also provides additional insight on at-risk populations. This approach can serve as a first step in understanding the societal impacts of earthquakes and helping practitioners use their limited risk-reduction resources efficiently.
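
    The core tabulation described above is a cross-classification of population by jurisdiction and MMI class. A minimal sketch follows; the jurisdictions and numbers are hypothetical, standing in for census blocks joined to a scenario ShakeMap.

    # Population exposure per jurisdiction and MMI class (illustrative data).
    import pandas as pd

    blocks = pd.DataFrame({
        "jurisdiction": ["Seattle", "Seattle", "Everett", "Everett", "Tacoma"],
        "population":   [12000, 8000, 5000, 9000, 11000],
        "mmi":          [8, 7, 6, 7, 8],
    })

    # Residents of each jurisdiction in each MMI class of the scenario.
    exposure = (blocks
                .groupby(["jurisdiction", "mmi"])["population"]
                .sum()
                .unstack(fill_value=0))
    print(exposure)

    # Share of each jurisdiction's population in damaging shaking (MMI >= VII).
    damaging = exposure.loc[:, exposure.columns >= 7].sum(axis=1)
    print((damaging / exposure.sum(axis=1)).round(2))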

  13. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    USGS Publications Warehouse

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  14. Loss estimates for a Puente Hills blind-thrust earthquake in Los Angeles, California

    USGS Publications Warehouse

    Field, E.H.; Seligson, H.A.; Gupta, N.; Gupta, V.; Jordan, T.H.; Campbell, K.W.

    2005-01-01

    Based on OpenSHA and HAZUS-MH, we present loss estimates for an earthquake rupture on the recently identified Puente Hills blind-thrust fault beneath Los Angeles. Given a range of possible magnitudes and ground motion models, and presuming a full fault rupture, we estimate the total economic loss to be between $82 and $252 billion. This range is not only considerably higher than a previous estimate of $69 billion, but also implies the event would be the costliest disaster in U.S. history. The analysis has also provided the following predictions: 3,000-18,000 fatalities, 142,000-735,000 displaced households, 42,000-211,000 in need of short-term public shelter, and 30,000-99,000 tons of debris generated. Finally, we show that the choice of ground motion model can be more influential than the earthquake magnitude, and that reducing this epistemic uncertainty (e.g., via model improvement and/or rejection) could reduce the uncertainty of the loss estimates by up to a factor of two. We note that a full Puente Hills fault rupture is a rare event (once every ~3,000 years), and that other seismic sources pose significant risk as well. © 2005, Earthquake Engineering Research Institute.

  15. Regional earthquake loss estimation in the Autonomous Province of Bolzano - South Tyrol (Italy)

    NASA Astrophysics Data System (ADS)

    Huttenlau, Matthias; Winter, Benjamin

    2013-04-01

    Besides storm events, geophysical events cause the majority of natural hazard losses on a global scale. However, in alpine regions with a moderate earthquake risk potential, such as the study area, and with correspondingly little presence in the collective memory, this source of risk is often neglected in comparison with gravitational and hydrological hazard processes. In this context, the comparative analysis of potential disasters and emergencies at the national level in Switzerland (Katarisk study) has shown that earthquakes are in general the most serious source of risk. In order to estimate the potential losses of earthquake events for different return periods, and the loss dimensions of extreme events, the following study was conducted in the Autonomous Province of Bolzano - South Tyrol (Italy). The applied methodology follows the generally accepted risk concept based on the risk components hazard, elements at risk and vulnerability, whereby risk is not defined holistically (direct, indirect, tangible and intangible) but through the risk category of losses to buildings and inventory as a general risk proxy. The hazard analysis is based on a regional macroseismic scenario approach in which the settlement centre of each of the 116 communities is defined as a potential epicentre. For each epicentre, four epicentral scenarios (return periods of 98, 475, 975 and 2475 years) are calculated with the simple but well-established attenuation law of Sponheuer (1960). The relevant input parameters for the epicentral scenarios are (i) the macroseismic intensity and (ii) the focal depth. The macroseismic intensities are taken from a probabilistic seismic hazard analysis (PSHA) of the Italian earthquake catalogue at the community level (Dipartimento della Protezione Civile). The relevant focal depths are taken as the mean, within a defined buffer, of the focal depths in the harmonized earthquake catalogues of Italy and Switzerland as well as
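
    The Sponheuer (1960) approach is usually applied in the form of the Kövesligethy-Sponheuer intensity attenuation relation. The sketch below uses that textbook form; the form and the absorption coefficient alpha are assumptions on our part, since the abstract does not reproduce the equation or the calibrated coefficients.

    # Epicentral-scenario intensity attenuation in the Koevesligethy-Sponheuer
    # form commonly associated with Sponheuer (1960). alpha (absorption, 1/km)
    # is an illustrative placeholder.
    import math

    def site_intensity(i0, epi_dist_km, depth_km, alpha=0.002):
        """Macroseismic intensity at epicentral distance epi_dist_km, given the
        epicentral intensity i0 and focal depth depth_km."""
        r = math.hypot(epi_dist_km, depth_km)        # hypocentral distance
        return (i0 - 3.0 * math.log10(r / depth_km)
                   - 3.0 * alpha * (r - depth_km) * math.log10(math.e))

    # Example: an intensity-VIII scenario with an 8 km focal depth.
    for d in (0, 5, 10, 20, 40):
        print(d, round(site_intensity(8.0, d, 8.0), 1))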

  16. Ways to increase the reliability of earthquake loss estimations in emergency mode

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander

    2016-04-01

    The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many other countries show that the authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can yield significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of the rough and rapid information provided by "global systems" (i.e. systems operated regardless of where the earthquake has occurred) in emergency mode depends strongly on many factors related to the input data and simulation models used in such systems. The paper analyses the contributions of different factors to the total "error" of fatality estimation in emergency mode. Examples from four strong events in Nepal, Italy and China lead to the conclusion that the reliability of loss estimations is influenced first of all by the uncertainties in the determination of event parameters (coordinates, magnitude, source depth); this group of factors has the highest rating, with a degree of influence on the reliability of loss estimations of about 50%. Second comes the group of factors responsible for the macroseismic field simulation, whose errors have a degree of influence of about 30%. Last comes the group of factors describing the built environment distribution and the regional vulnerability functions, which contributes about 20% to the error of loss estimation. Ways to minimize the influence of the different factors on the reliability of loss assessment in near real time are proposed. The first is to rank the seismological surveys for different zones, in an attempt to decrease the uncertainties in the determination of the input earthquake parameters in emergency mode. The second is to "calibrate" the "global systems", drawing advantage

  17. Estimating earthquake potential

    USGS Publications Warehouse

    Page, R.A.

    1980-01-01

    The hazards to life and property from earthquakes can be minimized in three ways. First, structures can be designed and built to resist the effects of earthquakes. Second, the location of structures and human activities can be chosen to avoid or to limit the use of areas known to be subject to serious earthquake hazards. Third, preparations for an earthquake in response to a prediction or warning can reduce the loss of life and damage to property as well as promote a rapid recovery from the disaster. The success of the first two strategies, earthquake engineering and land use planning, depends on being able to reliably estimate the earthquake potential. The key considerations in defining the potential of a region are the location, size, and character of future earthquakes and frequency of their occurrence. Both historic seismicity of the region and the geologic record are considered in evaluating earthquake potential. 

  18. Loss estimation in southeast Korea from a scenario earthquake using the deterministic method in HAZUS

    NASA Astrophysics Data System (ADS)

    Kang, S.; Kim, K.; Suk, B.; Yoo, H.

    2007-12-01

    A strong ground motion attenuation relationship represents the overall trend of ground shaking at sites as a function of distance from the source, geology, local soil conditions, and other factors. For reliable seismic hazard and risk assessments it is necessary to develop an attenuation relationship that carefully considers the characteristics of the target area. In this study, observed ground motions from the January 2007 magnitude 4.9 Odaesan earthquake and from events occurring in the Gyeongsang provinces are compared with previously proposed ground motion attenuation relationships for the Korean Peninsula in order to select the most appropriate one. In addition, several strong ground motion attenuation relationships designed for the Western United States and the Central and Eastern United States are available in HAZUS. The relationship selected for the Korean Peninsula has been compared with the attenuation relationships available in HAZUS, and the attenuation relation for the Western United States proposed by Sadigh et al. (1997) for Site Class B has been selected for this study. The reliability of the assessment is improved by using an appropriate attenuation relation. It has been used for earthquake loss estimation for the Gyeongju area in southeast Korea using the deterministic method in HAZUS with a scenario earthquake (M=6.7). Our preliminary estimates show damage to 15.6% of houses, shelter needs for about three thousand residents, and 75 fatalities in the study area for a scenario event occurring at 2 a.m. Approximately 96% of hospitals would be in normal operation within 24 hours of the proposed event. Losses related to houses would be more than 114 million US dollars. Application of the improved loss estimation methodology in Korea will help decision makers plan disaster response and hazard mitigation.
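
    For illustration, rock-site attenuation relationships of the Sadigh et al. (1997) type express the logarithm of ground motion as a function of magnitude and rupture distance with magnitude-dependent near-source saturation. The coefficients in the sketch below are placeholders chosen for readability, not the published, magnitude-range- and site-class-dependent values.

    # Generic functional form of a Sadigh-style rock GMPE; coefficients are
    # placeholders, NOT the published Sadigh et al. (1997) values.
    import math

    def ln_pga(mag, rrup_km, c1=-0.6, c2=1.0, c3=-2.1, c4=1.3, c5=0.4):
        # ln(PGA) = c1 + c2*M + c3*ln(Rrup + c4*exp(c5*M))
        return c1 + c2 * mag + c3 * math.log(rrup_km + c4 * math.exp(c5 * mag))

    # Median PGA (g) for an M 6.7 scenario at a few rupture distances.
    for r in (5, 10, 20, 50):
        print(r, round(math.exp(ln_pga(6.7, r)), 3))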

  19. Estimation of damage and human losses due to earthquakes worldwide - QLARM strategy and experience

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Rosset, P.; Wyss, M.; Wiemer, S.; Bonjour, C.; Cua, G.

    2009-04-01

    Within the framework of the IMPROVE project, we are constructing our second-generation loss estimation tool QLARM (earthQuake Loss Assessment for Response and Mitigation). At the same time, we are upgrading the input data to be used in real-time and scenario mode. The software and databases will be open to all scientific users. The estimates include: (1) the total number of fatalities and injured, (2) casualties by settlement, (3) the percentage of buildings in five damage grades in each settlement, (4) a map showing mean damage by settlement, and (5) the functionality of large medical facilities. We present here our strategy and progress so far in constructing and calibrating the new tool. The QLARM worldwide database of the elements at risk consists of point and discrete city models with the following parameters: (1) soil amplification factors; (2) distribution of the building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98); (3) the most recent population numbers by settlement or district; (4) information regarding medical facilities, where available. We calculate the seismic demand in terms of (a) macroseismic (seismic intensity) or (b) instrumental (PGA) parameters. Attenuation relationships predicting both parameters will be used for different regions worldwide, considering the tectonic regime and wave propagation characteristics. We estimate damage and losses using: (i) vulnerability models pertinent to EMS-98 vulnerability classes; (ii) building collapse rates pertinent to different regions worldwide; and (iii) casualty matrices pertinent to EMS-98 vulnerability classes. We also provide approximate estimates of the functionality of large medical facilities considering their structural and non-structural damage and the loss of function of medical equipment and installations. We calibrate the QLARM database and the loss estimation tool using macroseismic observations and information regarding damage and human losses from past earthquakes

  20. A simulation of Earthquake Loss Estimation in Southeastern Korea using HAZUS and the local site classification Map

    NASA Astrophysics Data System (ADS)

    Kang, S.; Kim, K.

    2013-12-01

    Regionally varying seismic hazards can be estimated using an earthquake loss estimation system (e.g. HAZUS-MH). Estimates for actual earthquakes help federal and local authorities develop rapid, effective recovery measures; estimates for scenario earthquakes help in designing a comprehensive earthquake hazard mitigation plan. Local site characteristics influence the ground motion. Although direct measurements are desirable for constructing a site-amplification map, such data are expensive and time consuming to collect. We therefore derived a site classification map of the southern Korean Peninsula using geologic and geomorphologic data, which are readily available for the entire region. Class B sites (mainly rock) are predominant in the area, although localized areas of softer soils are found along major rivers and seashores. The site classification map is compared with independent site classification studies to confirm that it effectively represents the local behavior of site amplification during an earthquake. We then estimated the losses due to a magnitude 6.7 scenario earthquake in Gyeongju, southeastern Korea, with and without the site classification map, and observed significant differences in the loss estimates. The loss without the site classification map decreased uniformly with increasing epicentral distance, while the loss with the site classification map varied from region to region, reflecting both the epicentral distance and local site effects. The major cause of the large loss expected in Gyeongju is the short epicentral distance. Pohang Nam-Gu is located farther from the earthquake source region; nonetheless, the loss estimates in that more distant city are as large as those in Gyeongju and are attributed to the site effect of the soft soil found widely in the area.

  1. Impact of Uncertainty on Loss Estimates for a Repeat of the 1908 Messina-Reggio Calabria Earthquake in Southern Italy

    SciTech Connect

    Franco, Guillermo; Shen-Tu, Bing Ming; Bazzurro, Paolo; Goretti, Agostino; Valensise, Gianluca

    2008-07-08

    Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive use of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. The potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature, encompassing 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, a total of 384 loss scenarios. These results constitute the basis for a sensitivity analysis of the different assumptions on the loss estimates that allows the model user to gauge the impact of the uncertainty in the input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million euros (i.e. 9 to 10 billion euros). The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, approximately 45%. The choice of ground motion prediction equations and of vulnerability functions for the building stock contribute the most to the uncertainty in the loss estimates. This indicates that the use of information that is not locally specific has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.
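
    The combinatorial structure of the study (12 event scenarios x 2 ground motion prediction equations x 16 damage-function sets = 384 loss scenarios) lends itself to a simple enumeration; the sketch below shows the bookkeeping only, with loss_model() standing in for the full catastrophe-model run.

    # Enumerate all scenario combinations and summarize the resulting losses.
    import itertools, statistics

    events   = [f"event_{i}" for i in range(12)]
    gmpes    = ["gmpe_A", "gmpe_B"]
    dmg_sets = [f"dmg_{i}" for i in range(16)]

    def loss_model(event, gmpe, dmg):
        """Placeholder returning a loss (million EUR) for one combination."""
        return 9500.0   # a real model would compute ground motion, damage, loss

    losses = [loss_model(e, g, d)
              for e, g, d in itertools.product(events, gmpes, dmg_sets)]
    assert len(losses) == 384

    mean_loss = statistics.mean(losses)
    cov = statistics.pstdev(losses) / mean_loss
    print(f"{len(losses)} scenarios, mean = {mean_loss:,.0f} M EUR, CoV = {cov:.2f}")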

  2. Observed and estimated economic losses in Guadeloupe (French Antilles) after Les Saintes Earthquake (2004). Application to risk comparison

    NASA Astrophysics Data System (ADS)

    Monfort, Daniel; Reveillère, Arnaud; Lecacheux, Sophie; Muller, Héloise; Grisanti, Ludovic; Baills, Audrey; Bertil, Didier; Sedan, Olivier; Tinard, Pierre

    2013-04-01

    The main objective of this work is to compare the potential direct economic losses in Guadeloupe (French Antilles) from two different hazards, earthquakes and storm surges, for different return periods. In order to validate some of the hypotheses made concerning building typologies and their insured values, a comparison between real and estimated economic losses is made for a real event. In 2004, an Mw 6.3 earthquake struck Les Saintes, a small archipelago south of Guadeloupe. The heaviest intensities were VIII in the municipalities of Les Saintes, decreasing from VII to IV in the other municipalities of Guadeloupe. The CCR, the French reinsurance organization, provided data on the total insured economic losses estimated per municipality (as of 2011) and on the insurance penetration ratio, that is, the ratio of insured exposed elements per municipality. Other information about observed damage to structures is quite irregular across the archipelago, the only reliable dataset being the observed macroseismic intensity per municipality (field survey by BCSF). These data at the scale of Guadeloupe have been compared with the results of a retrospective damage scenario for this earthquake, computed with vulnerability data for the current building stock, the mean economic value of each building type, and local amplification effects on earthquake ground motion. In general the results are quite similar, but with some significant differences. The scenario results are strongly correlated with the spatial attenuation of earthquake intensity: the heaviest economic losses are concentrated in the municipalities exposed to a considerable and damaging intensity (VII to VIII). On the other hand, the CCR data show that heavy economic damage is located not only in the most impacted cities but also in the municipalities of the archipelago that are most important in terms of economic activity

  3. Loss Estimates in Scenario Mode may Help to Harden Those in Real-Time: Repeat of the 1356 Basel Earthquake as an Example

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Kaestli, P.

    2007-12-01

    Estimating losses within minutes after earthquakes anywhere in the world can be difficult because of error sources, unexpected issues and the pressure of time. We therefore propose to compile a catalog of scenario loss estimates for locations where future earthquakes may be expected. These scenarios could then be consulted in real time to assist in estimating the order of magnitude of the losses and in identifying local problems that may exist. As an example, we present scenario loss estimates for a repeat of the 1356 M6.9 earthquake near Basel, Switzerland, which was the largest and most devastating historic earthquake north of the Alps. The losses we estimate are defined as the average damage to buildings for all settlements affected (intensity V or greater on the modified Mercalli scale), the number of injured and the number of fatalities. The results in which we have most confidence are the ratios of human losses within the city of Basel to those outside of it, because errors in the absolute values of the loss estimates tend to cancel. For a repeat of the 1356 earthquake with an assumed magnitude of 6.9 and three possible epicenters (distances of 6, 10, and 15 km from Basel), we calculate that the countryside would sustain 2 to 4 times the human losses of the city itself. For smaller earthquakes (M6.5 and M6.0) at the same distances from Basel, the countryside would sustain 2 to 16 times the losses of the city. With the current population, the number of fatalities is expected to lie in the range of 6,000 to 22,000 for M6.9, 1,700 to 8,400 for M6.5, and 160 to 1,400 for M6.0. However, these values should be taken as preliminary, pending recalculation with recent information on building stock properties. Our preliminary estimates suggest that the number of persons requiring hospitalization may range from 6,000 to 8,000 for an M6.9, from 2,300 to 4,500 for an M6.5, and from 350 to 1,000 for an M6.0 earthquake. The portion of

  4. Planning a Preliminary program for Earthquake Loss Estimation and Emergency Operation by Three-dimensional Structural Model of Active Faults

    NASA Astrophysics Data System (ADS)

    Ke, M. C.

    2015-12-01

    Large earthquakes often cause serious economic losses and many deaths. Because the magnitude, time and location of earthquakes still cannot be predicted, pre-disaster risk modeling and post-disaster operations are essential for reducing earthquake damage. To understand earthquake disaster risk, earthquake simulation is usually used to build earthquake scenarios, and point-source, fault-line-source and fault-plane-source models are often used as the seismic source of such scenarios. The assessments made with these models serve earthquake risk assessment and emergency operations well, but their accuracy could still be improved. This program brings together experts and scholars from Taiwan University, National Central University, and National Cheng Kung University, and uses historical earthquake records, geological data and geophysical data to build three-dimensional structural planes of active faults at depth. The purpose is to replace projected fault planes with underground fault planes that are closer to reality. The accuracy of earthquake disaster prevention analyses can be improved with this database. These three-dimensional data will then be applied at different stages of disaster prevention. Before a disaster, the results of earthquake risk analyses based on the three-dimensional fault-plane data are closer to the real damage. During a disaster, the three-dimensional fault-plane data can help infer the aftershock distribution and the areas of serious damage. In 2015 the program used 14 geological profiles to build the three-dimensional data for the Hsinchu and Hsincheng faults. Other active faults will be completed by 2018 and applied to earthquake disaster prevention.

  5. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH Zurich, is capable of incorporating regional variability and the sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic database and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (shake mapping). 4. Incorporating strong ground motion and other empirical macroseismic data to improve the shake map. 5. Estimating the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (loss mapping). Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macroeconomic loss quantifiers) in urban areas. The basic shake mapping is similar to the Level 0 and Level 1 analysis; however, options are available for more sophisticated treatment of site response through externally entered data and for improvement of the shake map through incorporation

  6. Too generous to a fault? Is reliable earthquake safety a lost art? Errors in expected human losses due to incorrect seismic hazard estimates

    NASA Astrophysics Data System (ADS)

    Bela, James

    2014-11-01

    "One is well advised, when traveling to a new territory, to take a good map and then to check the map with the actual territory during the journey." In just such a reality check, Global Seismic Hazard Assessment Program (GSHAP) maps (prepared using PSHA) portrayed a "low seismic hazard," which was then also assumed to be the "risk to which the populations were exposed." But time-after-time-after-time the actual earthquakes that occurred were not only "surprises" (many times larger than those implied on the maps), but they were often near the maximum potential size (Maximum Credible Earthquake or MCE) that geologically could occur. Given these "errors in expected human losses due to incorrect seismic hazard estimates" revealed globally in these past performances of the GSHAP maps (> 700,000 deaths 2001-2011), we need to ask not only: "Is reliable earthquake safety a lost art?" but also: "Who and what were the `Raiders of the Lost Art?' "

  7. A new method for the production of social fragility functions and the result of its use in worldwide fatality loss estimation for earthquakes

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    A review of over 200 fatality models for earthquake loss estimation published by various authors over the past 50 years has identified the key parameters that influence fatality estimation in each of these models. These are often very specific and cannot be readily adapted globally. In the doctoral dissertation of the author, a new method is used for the regression of fatalities against intensity, using loss functions based not only on fatalities but also on population models and other socioeconomic parameters created through time for every country worldwide for the period 1900-2013. A calibration of the functions was undertaken for 1900-2008, and each individual earthquake from 2009-2013 was analysed in real time, in conjunction with www.earthquake-report.com. Using the CATDAT Damaging Earthquakes Database, which contains socioeconomic loss information for 7208 damaging earthquake events from 1900-2013 including the disaggregation of secondary effects, fatality estimates for over 2035 events have been re-examined for 1900-2013. In addition, 99 of these events have detailed data for individual cities and towns or have been reconstructed to create a death rate as a percentage of population. Many historical isoseismal maps and macroseismic intensity datapoint surveys collected globally have been digitised and modelled, covering around 1353 of these 2035 fatal events, to include an estimate of population, occupancy and socioeconomic climate at the time of the event in each intensity bracket. In addition, 1651 events that caused damage but no fatalities have also been examined in this way. Socioeconomic and engineering indices such as HDI and building vulnerability have been produced at country and state/province level, leading to a dataset that allows regressions based not only on a static view of risk but also accounting for the change in the socioeconomic climate between earthquake events. This means that a year 1920 event in a country will not simply be

  8. Rapid exposure and loss estimates for the May 12, 2008 Mw 7.9 Wenchuan earthquake provided by the U.S. Geological Survey's PAGER system

    USGS Publications Warehouse

    Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.

    2008-01-01

    One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed, referenced by the major media outlets, and used by governmental, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manually revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.

  9. Trends in global earthquake loss

    NASA Astrophysics Data System (ADS)

    Arnst, Isabel; Wenzel, Friedemann; Daniell, James

    2016-04-01

    Based on the CATDAT damage and loss database, we analyse global trends of earthquake losses (in current values) and fatalities for the period between 1900 and 2015 from a statistical perspective. For this time period the data are complete for magnitudes above 6. First, we study the basic statistics of the losses and find that losses below US$ 10 billion approximately satisfy a power law with an exponent of 1.7 for the cumulative distribution. Higher loss values are modelled with the Generalized Pareto Distribution (GPD). The 'transition' between power law and GPD is determined with the mean excess function. We split the data set into pre-1955 and post-1955 loss data, as the exposure in those two periods differs significantly due to population growth. The Annual Average Loss (AAL) for direct damage from events below US$ 10 billion differs by a factor of 6 between the two periods, whereas incorporating the extreme loss events increases the AAL from US$ 25 billion/yr to US$ 30 billion/yr. Annual Average Deaths (AAD) show little (30%) difference for events below 6,000 fatalities, and AAD values of 19,000 and 26,000 deaths per year if extreme values are incorporated. With data on the global Gross Domestic Product (GDP), which reflects annual expenditures (consumption, investment, government spending), and on capital stock, we relate losses to the economic capacity of societies and find that GDP (in real terms) grows much faster than losses, so that the latter play a decreasing role given the growing prosperity of mankind. This reasoning does not necessarily apply on a regional scale. The main conclusions of the analysis are that (a) a correct projection of historic loss values to present-day US$ values is critical; (b) extreme value analysis is mandatory; (c) the growing exposure is reflected in the AAL and AAD results for the pre- and post-1955 periods; and (d) scaling loss values with global GDP data indicates that the relative size of losses, from a global perspective, decreases rapidly over time.
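
    The tail treatment described above (a power law below the threshold, a Generalized Pareto Distribution above it, with the threshold picked using the mean excess function) can be sketched as follows; the loss sample is synthetic and the threshold is illustrative.

    # Mean excess function and GPD fit to loss exceedances (synthetic data).
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    losses = rng.pareto(1.7, 2000) * 0.5            # synthetic losses, billion US$

    def mean_excess(x, thresholds):
        """Mean of (x - u) over x > u, for each threshold u."""
        return np.array([(x[x > u] - u).mean() if (x > u).any() else np.nan
                         for u in thresholds])

    thresholds = np.quantile(losses, np.linspace(0.5, 0.99, 20))
    me = mean_excess(losses, thresholds)             # inspect for the linear range

    # Fit a GPD to exceedances above a chosen threshold (here 10 billion US$).
    u = 10.0
    exceedances = losses[losses > u] - u
    shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
    print("threshold:", u, "GPD shape:", round(shape, 2), "scale:", round(scale, 2))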

  10. Origin of Human Losses due to the Emilia Romagna, Italy, M5.9 Earthquake of 20 May 2012 and their Estimate in Real Time

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2012-12-01

    Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues concerning the influence of error sources. If the final observations and the real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake in the Emilia Romagna region on May 20th, the real-time epicenter estimates of the GFZ and the USGS differed from the final location given by the INGV by 6 and 9 km, respectively. The fatalities estimated within an hour of the earthquake by the loss estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the final epicenter released by the INGV had been used. These four numbers, being small, do not differ statistically; thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of the INGV reported intensities of I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that, in the 17 instances with significant differences, the calculated values were on average too high by one unit. By assuming higher than average attenuation, within standard bounds for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher than average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to 7 reported. Thus, the adjusted attenuation in this case decreased the discrepancy between estimated and reported deaths by approximately a factor of two. The source of the fatalities is

  11. Pan-European Seismic Risk Assessment: A proof of concept using the Earthquake Loss Estimation Routine (ELER)

    NASA Astrophysics Data System (ADS)

    Corbane, Christina; Hancilar, Ufuk; Silva, Vitor; Ehrlich, Daniele; De Groeve, Tom

    2016-04-01

    One of the key objectives of the new EU civil protection mechanism is an enhanced understanding of the risks the EU is facing. Developing a European perspective may create significant opportunities for successfully combining resources for the common objective of preventing and mitigating shared risks. Risk assessment and mapping represent the first step in these preventive efforts. The EU is facing an increasing number of natural disasters, among which earthquakes are the second deadliest after extreme temperatures. A better shared understanding of where seismic risk lies in the EU helps to identify which regions are most at risk and where more detailed seismic risk assessments are needed. In that scope, seismic risk assessment models at a pan-European level have great potential for providing an overview of the expected economic and human losses using a homogeneous quantitative approach and harmonized datasets. This study strives to demonstrate the feasibility of performing a probabilistic seismic risk assessment at a pan-European level with an open-access methodology and open datasets available across the EU. It also aims to highlight the challenges and needs in terms of datasets, and the information gaps, for a consistent seismic risk assessment at the pan-European level. The study constitutes a "proof of concept" that can complement the information provided by Member States in their National Risk Assessments. Its main contribution lies in pooling open-access data from different sources into a homogeneous format, which could serve as baseline data for performing more in-depth risk assessments in Europe.

  12. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on the idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for the generation of automated earthquake alerts. These alerts could potentially help rapid-earthquake-response agencies and governments respond better and reduce earthquake fatalities. Fatality estimates are also useful for stimulating earthquake preparedness planning and disaster mitigation. The proposed model has several advantages compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
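
    The empirical model referred to above (Jaiswal et al., 2009) expresses the fatality rate as a two-parameter lognormal function of shaking intensity and applies it to the population exposed at each intensity level. The sketch below uses that published functional form with illustrative, uncalibrated parameter values.

    # PAGER-style empirical fatality estimate: lognormal fatality rate vs MMI,
    # summed over the exposed population. theta and beta are illustrative only.
    from math import log
    from scipy.stats import norm

    def fatality_rate(mmi, theta=14.0, beta=0.2):
        """Fraction of the exposed population killed at intensity mmi."""
        return norm.cdf(log(mmi / theta) / beta)

    def expected_fatalities(pop_by_mmi, theta=14.0, beta=0.2):
        return sum(fatality_rate(mmi, theta, beta) * pop
                   for mmi, pop in pop_by_mmi.items())

    # Example: population exposed per MMI level from a ShakeMap overlay.
    exposure = {6: 1.5e6, 7: 4.0e5, 8: 6.0e4, 9: 5.0e3}
    print(f"Expected fatalities: {expected_fatalities(exposure):,.0f}")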

  13. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
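
    To make the two-phase idea above concrete, the following is a minimal Python sketch, not the operational China Earthquake Administration implementation: the fatality rates, the grid, and the circular intensity-attenuation rule are invented placeholders, and only the pre-calculation/extraction workflow is illustrated. In the real method the pre-earthquake grids would come from gridded damage probability matrices and dasymetric population data rather than from a synthetic array.

      import numpy as np

      # --- Pre-earthquake phase: losses pre-calculated per grid cell and intensity ---
      rng = np.random.default_rng(0)
      population = rng.integers(0, 2000, size=(100, 100)).astype(float)   # people per cell (assumed)
      fatality_rate = {6: 1e-5, 7: 1e-4, 8: 1e-3, 9: 1e-2, 10: 3e-2}      # assumed rates per intensity
      pre_loss = {I: population * r for I, r in fatality_rate.items()}     # expected deaths per cell

      # --- Co-earthquake phase: build an intensity field and extract pre-calculated losses ---
      def intensity_field(epicenter, magnitude, shape, cell_km=1.0):
          """Very simple circular isoseismal model (placeholder attenuation rule)."""
          iy, ix = np.indices(shape)
          dist = np.hypot(iy - epicenter[0], ix - epicenter[1]) * cell_km
          return np.clip(np.round(1.5 * magnitude - 3.0 * np.log10(dist + 1.0)), 0, 10)

      field = intensity_field(epicenter=(50, 50), magnitude=7.0, shape=population.shape)
      total = sum(pre_loss[I][field == I].sum() for I in pre_loss)
      print(f"estimated fatalities: {total:.0f}")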

  14. A dasymetric data supported earthquake disaster loss quick assessment method for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, J.; An, J.; Nie, G.

    2015-02-01

    Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new two-phase earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store it in a 30'' × 30'' grid format, which has four stages: determining the earthquake loss calculation factor, gridding possible damage matrices, calculating building damage and calculating human losses. The dasymetric map approach makes this possible. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of disaster loss from the pre-calculated loss estimation data to obtain the final estimation results. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also gives the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.

  15. Extreme Earthquake Risk Estimation by Hybrid Modeling

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.

    2012-12-01

    The estimation of the hazard and of the economic consequences, i.e. the risk, associated with the occurrence of extreme-magnitude earthquakes in the neighborhood of urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan, event, represents a complex challenge, as it involves the propagation of seismic waves through large volumes of the Earth's crust, from unusually large seismic source ruptures up to the infrastructure location. The large number of casualties and the huge economic losses observed for those earthquakes, some of which have a frequency of occurrence of hundreds or thousands of years, call for the development of new paradigms and methodologies to generate better estimates both of the seismic hazard and of its consequences and, if possible, to estimate the probability distributions of the ground intensities and of the economic impacts (direct and indirect losses), in order to implement technological and economic policies that mitigate and reduce those consequences as much as possible. Here we propose a hybrid modelling approach that uses 3D seismic wave propagation (3DWP) and neural network (NN) modelling to estimate the seismic risk of extreme earthquakes. The 3DWP modelling is achieved by using a 3D finite-difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK, combined with empirical Green's function (EGF) techniques and NN algorithms. In particular, the 3DWP is used to generate broadband samples of the 3D wave propagation of plausible extreme-earthquake scenarios corresponding to synthetic seismic sources and to enlarge those samples by using feed-forward NNs. We present the results of the validation of the proposed hybrid modelling for Mw 8 subduction events, and show examples of its application for the estimation of the hazard and the economic consequences, for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican
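
    The sample-enlargement step can be pictured with a small feed-forward network that regresses an intensity measure on source and site features. The sketch below uses scikit-learn's MLPRegressor on synthetic data; the feature set, the generating relation and all coefficients are assumptions for illustration and stand in for the study's 3D wave-propagation outputs.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      mags = rng.uniform(7.5, 8.7, 300)          # magnitude
      dists = rng.uniform(50, 400, 300)          # epicentral distance (km)
      periods = rng.uniform(0.2, 3.0, 300)       # site period (s)
      X = np.column_stack([mags, dists, periods])
      # synthetic "simulated" intensity measure used as training target
      log_pga = 0.9 * mags - 1.8 * np.log10(dists) + 0.1 * periods + rng.normal(0, 0.2, 300)

      nn = make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0))
      nn.fit(X, log_pga)

      # predict intensity measures for unseen scenario/site combinations
      print(nn.predict(np.array([[8.5, 120.0, 1.0], [8.5, 250.0, 0.5]])))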

  16. ELER software - a new tool for urban earthquake loss assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.

    2010-12-01

    Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is also equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. SELENA v4.0 (Molina et al., 2008) and
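
    The spectral capacity-based step that converts a performance-point spectral displacement into damage-state probabilities can be sketched with lognormal fragility curves, as below. The damage-state medians and dispersions are invented for illustration and are not the ELER building-class parameters.

      from math import log
      from statistics import NormalDist

      # median spectral displacement (cm) and log-standard deviation per damage state (assumed)
      fragility = {"slight": (1.5, 0.7), "moderate": (3.0, 0.7),
                   "extensive": (6.0, 0.7), "complete": (12.0, 0.7)}

      def damage_state_probs(sd_cm):
          """Exceedance probabilities from lognormal fragilities -> discrete damage-state probabilities."""
          p_exceed = {ds: NormalDist().cdf(log(sd_cm / med) / beta)
                      for ds, (med, beta) in fragility.items()}
          states = list(fragility)
          probs = {"none": 1.0 - p_exceed[states[0]]}
          for i, ds in enumerate(states):
              nxt = p_exceed[states[i + 1]] if i + 1 < len(states) else 0.0
              probs[ds] = p_exceed[ds] - nxt
          return probs

      print(damage_state_probs(sd_cm=4.0))   # performance-point displacement of 4 cm (assumed)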

  17. The OPAL Project: Open source Procedure for Assessment of Loss using Global Earthquake Modelling software

    NASA Astrophysics Data System (ADS)

    Daniell, James

    2010-05-01

    This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure has been developed to provide a framework for optimisation of a Global Earthquake Modelling process through: 1) Overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost and technology); 2) Preliminary research, acquisition and familiarisation with all available ELE software packages; 3) Assessment of these 30+ software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4) Loss analysis for a deterministic earthquake (Mw7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment), a capacity spectrum based method HAZUS (HAZards United States) and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach) software which was adapted for use in order to compare the different processes needed for the production of damage, economic and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied to worldwide applications, given exposure data. Keywords: OPAL, displacement-based, DBELA, earthquake loss estimation, earthquake loss assessment, open source, HAZUS

  18. Rapid estimation of the economic consequences of global earthquakes

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2011-01-01

    The U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, operational since mid-2007, rapidly estimates the most affected locations and the population exposure at different levels of shaking intensity. The PAGER system has significantly improved the way aid agencies determine the scale of response needed in the aftermath of an earthquake. For example, the PAGER exposure estimates provided reasonably accurate assessments of the scale and spatial extent of the damage and losses following the 2008 Wenchuan earthquake (Mw 7.9) in China, the 2009 L'Aquila earthquake (Mw 6.3) in Italy, the 2010 Haiti earthquake (Mw 7.0), and the 2010 Chile earthquake (Mw 8.8). Nevertheless, some engineering and seismological expertise is often required to digest PAGER's exposure estimate and turn it into estimated fatalities and economic losses. This has been the focus of PAGER's most recent development. With the new loss-estimation component of the PAGER system it is now possible to produce rapid estimates of expected fatalities for global earthquakes (Jaiswal and others, 2009). While an estimate of earthquake fatalities is a fundamental indicator of potential human consequences in developing countries (for example, Iran, Pakistan, Haiti, Peru, and many others), economic consequences often drive the responses in much of the developed world (for example, New Zealand, the United States, and Chile), where the improved structural behavior of seismically resistant buildings significantly reduces earthquake casualties. Rapid availability of estimates of both fatalities and economic losses can be a valuable resource. The total time needed to determine the actual scope of an earthquake disaster and to respond effectively varies from country to country. It can take days or sometimes weeks before the damage and consequences of a disaster can be understood both socially and economically. The objective of the U.S. Geological Survey's PAGER system is

  19. Ten Years of Real-Time Earthquake Loss Alerts

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2013-12-01

    The most important parameters of an earthquake disaster, in order of priority, are: the number of fatalities, the number of injured, the mean damage as a function of settlement, and the expected intensity of shaking at critical facilities. The requirements for calculating these parameters in real time are: 1) availability of reliable earthquake source parameters within minutes; 2) capability of calculating expected intensities of strong ground shaking; 3) data sets on population distribution and the condition of the building stock as a function of settlement; 4) data on the locations of critical facilities; 5) verified methods of calculating damage and losses; 6) personnel available on a 24/7 basis to perform and review these calculations. There are three services available that distribute information about the likely consequences of earthquakes within about half an hour of the event. Two of these calculate losses; one gives only general information. Although much progress has been made during the last ten years in improving the data sets and the calculation methods, much remains to be done. The data sets are only first-order approximations and the methods bear refinement. Nevertheless, the quantitative loss estimates issued in real time after damaging earthquakes are generally correct in the sense that they allow distinguishing disastrous from inconsequential events.

  20. Strategies for rapid global earthquake impact estimation: the Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, D.J.

    2013-01-01

    This chapter summarizes the state-of-the-art for rapid earthquake impact estimation. It details the needs and challenges associated with quick estimation of earthquake losses following global earthquakes, and provides a brief literature review of various approaches that have been used in the past. With this background, the chapter introduces the operational earthquake loss estimation system developed by the U.S. Geological Survey (USGS) known as PAGER (for Prompt Assessment of Global Earthquakes for Response). It also details some of the ongoing developments of PAGER’s loss estimation models to better supplement the operational empirical models, and to produce value-added web content for a variety of PAGER users.

  1. Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software (OPAL)

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.

    2011-07-01

    This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure was created to provide a framework for optimisation of a Global Earthquake Modelling process through: 1. overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost, and technology); 2. preliminary research, acquisition, and familiarisation for available ELE software packages; 3. assessment of these software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4. loss analysis for a deterministic earthquake (Mw = 7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment, Crowley et al., 2006), a capacity spectrum based method HAZUS (HAZards United States, FEMA, USA, 2003) and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach, Lindholm et al., 2007) software which was adapted for use in order to compare the different processes needed for the production of damage, economic, and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied to worldwide applications, given exposure data.

  2. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J., Jr.; Petersen, M.D.

    2004-01-01

    The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes with similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
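
    A hedged sketch of the statistical core of this approach is shown below: fit a gamma distribution to loss ratios within a ground-motion bin and evaluate the probability of exceeding a loss level. The synthetic "claims" and the 5% threshold are placeholders; in the study, the fitted gamma parameters were in turn regressed on ShakeMap ground motion so that curves of conditional probability of loss given ground motion could be constructed.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      loss_ratios = rng.gamma(shape=0.8, scale=0.02, size=500)   # synthetic ZIP-code loss ratios

      # fit a two-parameter gamma distribution (location fixed at zero)
      a, loc, scale = stats.gamma.fit(loss_ratios, floc=0.0)

      # probability that the loss ratio exceeds 5% within this ground-motion bin
      p_exceed = stats.gamma.sf(0.05, a, loc=loc, scale=scale)
      print(f"fitted shape={a:.2f}, scale={scale:.3f}, P(loss ratio > 5%)={p_exceed:.3f}")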

  3. Factors influencing to earthquake caused economical losses on urban territories

    NASA Astrophysics Data System (ADS)

    Nurtaev, B.; Khakimov, S.

    2005-12-01

    This paper discusses the assessment of economic losses from earthquakes in the urban territories of Uzbekistan, taking into account the damage-forming factors that increase or reduce economic losses. Vulnerability factors for buildings and facilities were classified, and the most important ones were selected from a total of 50. The factors were ranked by level of impact and by their weight in the loss assessment. One group of damage-forming factors includes seismic hazard assessment and the design, construction and maintenance of buildings and facilities. The other group is formed by city-planning characteristics and includes the density of construction and population, the area of soft soils, the presence of liquefaction-susceptible soils, etc. Weight functions and interval values were assigned to all of these factors by group. Methodical recommendations for loss assessment taking the above factors into account were developed. This makes it possible to carry out preventive measures to protect vulnerable territories and to differentiate the cost assessment of each region according to the peculiarities of the territory and the damage value. Using the developed method, we ranked cities by risk level. This allowed us to establish ratings of the general vulnerability of urban territories and, on this basis, to make optimal decisions oriented toward loss mitigation and increased safety of the population. In addition, the technique can be used by insurance companies for zoning of territory, for developing effective land-use schemes, for rational town planning, and for economic evaluation of territory, and it can inform the various activities connected with seismic hazard estimation. Further improvement of the technique for rating cities by level of earthquake damage will increase the quality of construction and the rationality of building siting, and will act as an economic stimulus for increasing the seismic resistance of

  4. Development of fragility functions to estimate homelessness after an earthquake

    NASA Astrophysics Data System (ADS)

    Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    used to estimate homelessness as a function of information that is readily available immediately after an earthquake. These fragility functions could be used by relief agencies and governments to provide an initial assessment of the need for allocation of emergency shelter immediately after an earthquake. Daniell JE (2014) The development of socio-economic fragility functions for use in worldwide rapid earthquake loss estimation procedures, Ph.D. Thesis (in publishing), Karlsruhe, Germany. Daniell, J. E., Khazai, B., Wenzel, F., & Vervaeck, A. (2011). The CATDAT damaging earthquakes database. Natural Hazards and Earth System Science, 11(8), 2235-2251. doi:10.5194/nhess-11-2235-2011 Daniell, J.E., Wenzel, F. and Vervaeck, A. (2012). "The Normalisation of socio-economic losses from historic worldwide earthquakes from 1900 to 2012", 15th WCEE, Lisbon, Portugal, Paper No. 2027. Jaiswal, K., & Wald, D. (2010). An Empirical Model for Global Earthquake Fatality Estimation. Earthquake Spectra, 26(4), 1017-1037. doi:10.1193/1.3480331

  5. Creating a Global Building Inventory for Earthquake Loss Assessment and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2008-01-01

    Earthquakes have claimed approximately 8 million lives over the last 2,000 years (Dunbar, Lockridge and others, 1992), and fatality rates are likely to continue to rise with increased population and urbanization of global settlements, especially in developing countries. More than 75% of earthquake-related human casualties are caused by the collapse of buildings or structures (Coburn and Spence, 2002). It is disheartening to note that large fractions of the world's population still reside in informal, poorly constructed and non-engineered dwellings with high susceptibility to collapse during earthquakes. Moreover, with increasing urbanization half of the world's population now lives in urban areas (United Nations, 2001), and half of these urban centers are located in earthquake-prone regions (Bilham, 2004). The poor performance of most building stocks during earthquakes remains a primary societal concern. However, despite this dark history and bleaker future trends, there are no comprehensive global building inventories of sufficient quality and coverage to adequately address and characterize future earthquake losses. Such an inventory is vital both for earthquake loss mitigation and for earthquake disaster response purposes. While the latter purpose is the motivation of this work, we hope that the global building inventory database described herein will find widespread use for other mitigation efforts as well. For a real-time earthquake impact alert system, such as the U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) (Wald, Earle and others, 2006), we seek to rapidly evaluate potential casualties associated with earthquake ground shaking for any region of the world. The casualty estimation is based primarily on (1) rapid estimation of the ground shaking hazard, (2) aggregating the population exposure within different building types, and (3) estimating the casualties from the collapse of vulnerable buildings. Thus, the

  6. Social vulnerability analysis of earthquake risk using HAZUS-MH losses from a M7.8 scenario earthquake on the San Andreas fault

    NASA Astrophysics Data System (ADS)

    Noriega, G. R.; Grant Ludwig, L.

    2010-12-01

    Natural hazards research indicates that earthquake risk is not equitably distributed. Demographic differences are significant in determining the risks people encounter, whether and how they prepare for disasters, and how they fare when disasters occur. In this study, we analyze the distribution of economic and social losses in all 88 cities of Los Angeles County from the 2008 ShakeOut scenario earthquake. The ShakeOut scenario earthquake is a scientifically plausible M 7.8 earthquake on the San Andreas fault that was developed and applied for regional earthquake preparedness planning and risk mitigation, drawing on a compilation of collaborative studies and findings by the 2007 Working Group on California Earthquake Probabilities (WGCEP). The scenario involved 1) developing a realistic scenario earthquake using the best available and most recent earthquake research findings, 2) estimating physical damage, 3) estimating the social impact of the earthquake, and 4) identifying changes that will help prevent a catastrophe due to an earthquake. Estimated losses from this scenario earthquake include 1,800 deaths and $213 billion in economic losses. We use regression analysis to examine the relationship between potential city losses due to the ShakeOut scenario earthquake and the cities' demographic composition. The dependent variables are economic and social losses calculated with the HAZUS-MH methodology for the scenario earthquake. The independent variables (median household income, tenure, and race/ethnicity) have been identified as indicators of social vulnerability to natural disasters (Mileti, 1999; Cutter, 2006; Cutter & Finch, 2008). Preliminary Ordinary Least Squares (OLS) regression analysis of economic losses on race/ethnicity, income and tenure indicates that cities with lower Hispanic population are associated with lower economic losses. Cities with higher Hispanic population are associated with higher economic losses, though this relationship is
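
    The regression step can be pictured with the short sketch below, which fits an ordinary least squares model of (log) city losses on demographic covariates using statsmodels. All city-level data here are simulated placeholders, not the ShakeOut/HAZUS-MH outputs or census figures.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n_cities = 88
      pct_hispanic = rng.uniform(5, 95, n_cities)          # percent Hispanic population (assumed)
      median_income = rng.uniform(30, 120, n_cities)       # thousands of dollars (assumed)
      pct_renters = rng.uniform(20, 70, n_cities)          # tenure proxy (assumed)
      # synthetic log economic loss with an arbitrary generating relation
      log_loss = 2.0 + 0.01 * pct_hispanic - 0.005 * median_income + rng.normal(0, 0.3, n_cities)

      X = sm.add_constant(np.column_stack([pct_hispanic, median_income, pct_renters]))
      model = sm.OLS(log_loss, X).fit()
      print(model.summary())   # coefficients indicate association of each covariate with losses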

  7. Earthquakes trigger the loss of groundwater biodiversity.

    PubMed

    Galassi, Diana M P; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-01-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and "ecosystem engineers", we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.

  9. Earthquakes trigger the loss of groundwater biodiversity

    NASA Astrophysics Data System (ADS)

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; di Cioccio, Alessia; di Lorenzo, Tiziana; Petitta, Marco; di Carlo, Piero

    2014-09-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and ``ecosystem engineers'', we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.

  10. Earthquakes trigger the loss of groundwater biodiversity

    PubMed Central

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-01-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and “ecosystem engineers”, we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems. PMID:25182013

  11. Human losses and damage expected in future earthquakes in Faial Island - Azores applying the QLARM tool.

    NASA Astrophysics Data System (ADS)

    Fontiela, João.; Rosset, Philippe; Trendafiloski, Goran; Wyss, Max

    2010-05-01

    QLARM (http://qlarm.ethz.ch) is a second-generation tool to estimate building damage and human losses due to earthquakes, developed jointly by WAPMERR and the Swiss Seismological Service. In 2009 WAPMERR distributed 76 earthquake alerts in real time. The tool can be used to calculate expected human losses in future earthquakes in countries where it has been calibrated. In the last thirty years, the Azores islands were struck by several earthquakes, the following being the most important ones. The 1980 Mw 7.2 Terceira Island earthquake caused 61 deaths and hundreds of injuries, and buildings were heavily damaged. The 1998 Mw 6.1 Faial Island earthquake caused 8 deaths and a few hundred injuries, and buildings in some settlements were heavily damaged. Faial Island was also affected by the 1926 and 1958 earthquakes. The latter event occurred during an eruption and caused heavy damage to the building stock, but there were no fatalities and only a few injuries. To estimate human losses and building damage for likely future earthquakes in Faial and in the rest of the Azores, we need to calibrate QLARM and establish the following parameters: a) distribution of population by settlement; b) distribution of building stock and population into EMS-98 vulnerability classes; and c) attenuation function and soil amplification. Because of the special tectonic environment, we paid particular attention to the attenuation relation. Damage and human losses are obtained from 1) vulnerability models pertinent to EMS-98 vulnerability classes, 2) building collapse rates pertinent to Faial derived from the validation of past earthquakes that occurred on the island, and 3) casualty matrices pertinent to EMS-98 vulnerability classes.

  12. A Method for Estimation of Death Tolls in Disastrous Earthquake

    NASA Astrophysics Data System (ADS)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls caused by disastrous earthquakes are among the most important items of earthquake damage and loss. If we can precisely estimate the potential tolls and the distribution of fatalities in individual districts as soon as an earthquake occurs, this not only makes emergency programs and disaster management more effective but also supplies critical information for planning and managing the disaster and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motion, geological conditions, building types and usage habits, the distribution of population and socioeconomic conditions, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is currently the greatest in the world. Moreover, complete seismic data are readily available from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake has occurred. It therefore becomes possible to estimate death tolls caused by earthquakes in Taiwan based on this preliminary information. Firstly, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give the PGA Index for each individual seismic station, according to the mainshock data of the Chi-Chi earthquake. To supply the distribution of isoseismal intensity contours in every district, and to resolve the problem of districts that contain no seismic station, we apply the Kriging interpolation method and GIS software to the PGA Index and the geographic coordinates of the individual stations. The population density depends on
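
    The PGA Index and the interpolation of station values onto a district grid can be sketched as below. Ordinary kriging is approximated here with Gaussian-process regression from scikit-learn (a closely related formulation); the station locations and PGA values are random placeholders rather than Chi-Chi data.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(3)
      stations = rng.uniform(0, 100, size=(60, 2))        # station x, y coordinates in km (assumed)
      pga_3comp = rng.uniform(50, 400, size=(60, 3))      # three PGA components per station, in gal
      pga_index = pga_3comp.mean(axis=1)                  # PGA Index = arithmetic mean of the components

      # kriging-like spatial interpolation via Gaussian-process regression
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), alpha=1e-2, normalize_y=True)
      gp.fit(stations, pga_index)

      gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
      grid_pga = gp.predict(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
      print(grid_pga.shape)   # interpolated PGA field from which intensities per district can be read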

  13. The Enormous Challenge faced by China to Reduce Earthquake Losses

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Mooney, W. D.; Wang, B.

    2014-12-01

    In the past six years, several large earthquakes have occurred in continental China and have caused enormous economic losses and casualties. These earthquakes include the following: the 2008 Mw 7.9 Wenchuan, 2010 Mw 6.9 Yushu, 2013 Mw 6.6 Lushan, and 2013 Mw 5.9 Minxian events. On August 4, 2014, the Mw 6.1 earthquake struck Ludian in Yunnan province. Although it was a moderate-size earthquake, casualties reached at least 589 people. In fact, more than 50% of Chinese cities and more than 70% of large- to medium-size cities are located in areas where the seismic intensity may reach VII or higher. Collapsing buildings are the main cause of Chinese earthquake casualties; the secondary causes are induced geological disasters such as landslides and barrier lakes. Several enormous challenges must be overcome to reduce the hazards from earthquakes and secondary disasters. (1) Much of the infrastructure in China cannot meet the engineering standards for adequate seismic protection. In particular, some buildings are not strong enough to survive potential strong ground shaking, and some are not set back a safe distance from active faults. It will be very costly to reinforce or rebuild such buildings. (2) There is a lack of rigorous legislation on earthquake disaster protection. (3) It appears that both the government and citizens rely too much on earthquake prediction to avoid earthquake casualties. (4) Geologic conditions are very complicated and require additional study, especially in southwest China, and detailed surveys of potential geologic hazards, such as landslides, are still lacking. Although we still cannot predict earthquakes, it is possible to greatly reduce earthquake hazards. For example, some Chinese scientists have begun studies aimed at identifying active faults under large cities and proposing higher building standards. It will be very difficult to dramatically improve the quality and scope of earthquake disaster protection in

  14. Future Earth: Reducing Loss By Automating Response to Earthquake Shaking

    NASA Astrophysics Data System (ADS)

    Allen, R. M.

    2014-12-01

    Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet the same interconnected infrastructure also allows us to rapidly detect earthquakes as they begin and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Angelenos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts but also to collect data and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake detectors. As our development of the technology continues, we can anticipate ever more automated responses to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US. We will illustrate how smartphones can contribute to the system. Finally, we will review applications of the information to reduce future losses.

  15. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically-derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes, and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
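
    The exposure calculation at the heart of EXPO-CAT, overlaying a ShakeMap intensity grid on a co-registered population grid and totalling population per intensity bin, can be sketched as follows. The grids below are random placeholders, not Atlas ShakeMaps or the PAGER population database.

      import numpy as np

      rng = np.random.default_rng(4)
      mmi = rng.uniform(2.0, 9.5, size=(200, 200))         # ShakeMap-style MMI grid (assumed)
      population = rng.integers(0, 500, size=(200, 200))   # co-registered population grid (assumed)

      bins = np.arange(2, 11)                              # MMI II ... X
      exposure = {f"MMI {b}": int(population[(mmi >= b) & (mmi < b + 1)].sum())
                  for b in bins}
      print(exposure)   # population exposed per discrete shaking-intensity level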

  16. Application of the loss estimation tool QLARM in Algeria

    NASA Astrophysics Data System (ADS)

    Rosset, P.; Trendafiloski, G.; Yelles, K.; Semmane, F.; Wyss, M.

    2009-04-01

    During the last six years, WAPMERR has used Quakeloss for real-time loss estimation for more than 440 earthquakes worldwide. Loss reports, posted with an average delay of 30 minutes, include a map showing the average degree of damage in settlements near the epicenter, the total number of fatalities, the total number of injured, and a detailed list of casualties and damage rates in these settlements. After the M6.7 Boumerdes earthquake in 2003, we reported 1690-3660 fatalities. The official death toll was around 2270. Since the El Asnam earthquake, seismic events in Algeria have killed about 6,000 people, injured more than 20,000 and left more than 300,000 homeless. On average, one earthquake with the potential to kill people (M>5.4) happens every three years in Algeria. In the frame of a collaborative project between WAPMERR and CRAAG, we propose to calibrate our new loss estimation tool QLARM (qlarm.ethz.ch) and estimate human losses for likely future earthquakes in Algeria. The parameters needed for this calculation are the following: (1) ground motion relation and soil amplification factors; (2) distribution of building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98), as given in the PAGER database; and (3) population by settlement. Considering the resolution of the available data, we construct 1) point city models for cases where only summary data for the city are available and 2) discrete city models when data regarding city districts are available. Damage and losses are calculated using: (a) vulnerability models pertinent to EMS-98 vulnerability classes, previously validated against the existing ones in Algeria (Tipaza and Chlef); (b) building collapse models pertinent to Algeria as given in the World Housing Encyclopedia; and (c) casualty matrices pertinent to EMS-98 vulnerability classes assembled from HAZUS casualty rates. As a first trial, we simulated the 2003 Boumerdes earthquake to check the validity of the proposed

  17. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  18. Modelling the Epistemic Uncertainty in the Vulnerability Assessment Component of an Earthquake Loss Model

    NASA Astrophysics Data System (ADS)

    Crowley, H.; Modica, A.

    2009-04-01

    Loss estimates have been shown in various studies to be highly sensitive to the methodology employed, the seismicity and ground-motion models, the vulnerability functions, and assumed replacement costs (e.g. Crowley et al., 2005; Molina and Lindholm, 2005; Grossi, 2000). It is clear that future loss models should explicitly account for these epistemic uncertainties. Indeed, a cause of frequent concern in the insurance and reinsurance industries is precisely the fact that for certain regions and perils, available commercial catastrophe models often yield significantly different loss estimates. Of equal relevance to many users is the fact that updates of the models sometimes lead to very significant changes in the losses compared to the previous version of the software. In order to model the epistemic uncertainties that are inherent in loss models, a number of different approaches for the hazard, vulnerability, exposure and loss components should be clearly and transparently applied, with the shortcomings and benefits of each method clearly exposed by the developers, such that the end-users can begin to compare the results and the uncertainty in these results from different models. This paper looks at an application of a logic-tree type methodology to model the epistemic uncertainty in the vulnerability component of a loss model for Tunisia. Unlike other countries which have been subjected to damaging earthquakes, there has not been a significant effort to undertake vulnerability studies for the building stock in Tunisia. Hence, when presented with the need to produce a loss model for a country like Tunisia, a number of different approaches can and should be applied to model the vulnerability. These include empirical procedures which utilise observed damage data, and mechanics-based methods where both the structural characteristics and response of the buildings are analytically modelled. Some preliminary applications of the methodology are presented and discussed
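
    A minimal sketch of the logic-tree treatment of epistemic uncertainty in the vulnerability component follows: alternative vulnerability models are carried as weighted branches and their loss estimates are combined, with the spread across branches retained as a measure of epistemic uncertainty. The branch names, weights, and loss figures are invented for illustration.

      # weighted branches representing alternative vulnerability models (all values assumed)
      branches = [
          {"model": "empirical (observed damage)",  "weight": 0.4, "mean_loss_musd": 120.0},
          {"model": "analytical (mechanics-based)", "weight": 0.6, "mean_loss_musd": 95.0},
      ]

      assert abs(sum(b["weight"] for b in branches) - 1.0) < 1e-9   # branch weights must sum to 1

      combined = sum(b["weight"] * b["mean_loss_musd"] for b in branches)
      spread = (max(b["mean_loss_musd"] for b in branches)
                - min(b["mean_loss_musd"] for b in branches))
      print(f"weighted mean loss: {combined:.1f} M USD; branch-to-branch spread: {spread:.1f} M USD")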

  19. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  20. Seismic Risk Assessment and Loss Estimation for Tbilisi City

    NASA Astrophysics Data System (ADS)

    Tsereteli, Nino; Alania, Victor; Varazanashvili, Otar; Gugeshashvili, Tengiz; Arabidze, Vakhtang; Arevadze, Nika; Tsereteli, Emili; Gaphrindashvili, Giorgi; Gventcadze, Alexander; Goguadze, Nino; Vephkhvadze, Sophio

    2013-04-01

    The proper assessment of seismic risk is of crucial importance for protecting society and for the sustainable economic development of a city, as it is an essential part of seismic risk reduction. Estimating seismic risk and losses is a complicated task: there is always a deficiency of knowledge on the real seismic hazard, local site effects, the inventory of elements at risk, and infrastructure vulnerability, especially in developing countries. Recently, great efforts were made in the frame of the EMME (Earthquake Model for the Middle East Region) project, where work packages WP1, WP2, WP3 and WP4 addressed gaps related to seismic hazard assessment and vulnerability analysis. Finally, in the frame of work package WP5, "City Scenario", additional work in this direction was carried out, including detailed investigation of local site conditions and of the 3D geometry of the active fault beneath Tbilisi. For estimating economic losses, an algorithm was prepared taking into account the compiled inventory. The long-term usage of a building is complex and relates to its reliability and durability; here it is characterized through the concept of depreciation. Depreciation of an entire building is calculated by summing the products of the individual construction units' depreciation rates and the corresponding value of these units within the building. This method of calculation is based on the assumption that depreciation is proportional to the building's (construction's) useful life. We used this methodology to create a matrix that provides a way to evaluate the depreciation rates of buildings of different types and construction periods and to determine their corresponding values. Finally, losses were estimated for shaking with 10%, 5% and 2% exceedance probability in 50 years. Losses resulting from a scenario earthquake (an earthquake with the maximum possible magnitude) were also estimated.
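
    The depreciation rule quoted above amounts to a value-weighted sum over construction units, as in the short sketch below; the unit breakdown, value shares, and depreciation rates are illustrative assumptions, not the Tbilisi matrix.

      # value share and depreciation rate per construction unit (all numbers assumed)
      units = [
          {"unit": "foundation",      "value_share": 0.15, "depreciation_rate": 0.20},
          {"unit": "bearing walls",   "value_share": 0.40, "depreciation_rate": 0.35},
          {"unit": "floors and roof", "value_share": 0.25, "depreciation_rate": 0.30},
          {"unit": "finishes",        "value_share": 0.20, "depreciation_rate": 0.60},
      ]

      # building depreciation = sum of (unit depreciation rate x unit value share)
      building_depreciation = sum(u["value_share"] * u["depreciation_rate"] for u in units)
      print(f"building depreciation: {building_depreciation:.2%}")   # 36.50% with these illustrative numbers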

  1. RAINFALL-LOSS PARAMETER ESTIMATION FOR ILLINOIS.

    USGS Publications Warehouse

    Weiss, Linda S.; Ishii, Audrey

    1986-01-01

    The U.S. Geological Survey is currently conducting an investigation to estimate values of parameters for two rainfall-loss computation methods employed in a commonly used flood-hydrograph model. Estimates of six rainfall-loss parameters are required: four for the Exponential Loss-Rate method and two for the Initial and Uniform Loss-Rate method. Multiple regression analyses on calibrated data from 616 storms at 98 gaged basins are being used to develop parameter-estimating techniques for these six parameters at ungaged basins in Illinois. The parameter-estimating techniques are being verified using data from a total of 105 storms at 35 uncalibrated gaged basins.

  2. Building losses assessment for Lushan earthquake utilization multisource remote sensing data and GIS

    NASA Astrophysics Data System (ADS)

    Nie, Juan; Yang, Siquan; Fan, Yida; Wen, Qi; Xu, Feng; Li, Lingling

    2015-12-01

    On 20 April 2013, a catastrophic earthquake of magnitude 7.0 struck Lushan County, northwestern Sichuan Province, China. This event is known in China as the Lushan earthquake. The Lushan earthquake damaged many buildings, and the extent of building loss is one basis for emergency relief and reconstruction; thus, the building losses of the Lushan earthquake must be assessed. Remote sensing data and geographic information systems (GIS) can be employed to assess the building loss of the Lushan earthquake. This paper reports building loss assessment results for the Lushan earthquake disaster obtained using multisource remote sensing data and GIS. The assessment results indicated that 3.2% of buildings in the affected areas completely collapsed, and 12% and 12.5% of buildings were heavily damaged and slightly damaged, respectively. The completely collapsed, heavily damaged, and slightly damaged buildings were mainly located in Danling County, Hongya County, Lushan County, Mingshan County, Qionglai County, Tianquan County, and Yingjing County.

  3. Earthquake Loss Assessment for Post-2000 Buildings in Istanbul

    NASA Astrophysics Data System (ADS)

    Hancilar, Ufuk; Cakti, Eser; Sesetyan, Karin

    2016-04-01

    Current building inventory of Istanbul city, which was compiled by street surveys in 2008, consists of more than 1.2 million buildings. The inventory provides information on lateral-load carrying system, number of floors and construction year, where almost 200,000 buildings are reinforced concrete frame type structures built after 2000. These buildings are assumed to be designed based on the provisions of Turkish Earthquake Resistant Design Code (1998) and are tagged as high-code buildings. However, there are no empirical or analytical fragility functions associated with these types of buildings. In this study we perform a damage and economic loss assessment exercise focusing on the post-2000 building stock of Istanbul. Three M7.4 scenario earthquakes near the city represent the input ground motion. As for the fragility functions, those provided by Hancilar and Cakti (2015) for code complying reinforced concrete frames are used. The results are compared with the number of damaged buildings given in the loss assessment studies available in the literature wherein expert judgment based fragilities for post-2000 buildings were used.

  4. A Multidisciplinary Approach for Estimation of Seismic Losses: A Case Study in Turkey

    NASA Astrophysics Data System (ADS)

    Askan, A.; Erberik, M.; Un, E.

    2012-12-01

    Estimation of seismic losses, including physical, economic and social losses as well as casualties, concerns a wide range of authorities, from geophysical and earthquake engineers and physical and economic planners to insurance companies. Due to the inherent uncertainties involved in each component, a probabilistic framework is required to estimate seismic losses. This study proposes an integrated method for predicting the potential seismic loss for a selected urban region. The main components of the proposed loss model are the seismic hazard estimation tool, building vulnerability functions, and human and economic losses expressed as functions of the damage states of buildings. The input data for the risk calculations involve regional seismicity and building fragility information. The casualty model for a given damage level considers the occupancy type, the population of the building, the occupancy at the time of the earthquake, the number of occupants trapped in the collapse, the injury distribution at collapse, and post-collapse mortality. The economic loss module involves direct economic loss to buildings in terms of replacement, structural repair and non-structural repair costs, and contents losses. Finally, the proposed loss model combines the input components within a conditional probability approach, and the results are expressed in terms of expected loss. We calibrate the method with loss data from the 12 November 1999 Düzce earthquake and then predict losses for another Turkish city with high seismic hazard (Bursa).
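
    The conditional-probability combination can be sketched as a nested expectation over hazard levels and damage states, as below. The intensity weights, damage probability matrix, unit costs, and building count are placeholders used only to show how the hazard, vulnerability, and loss modules chain together.

      # expected loss = sum over hazard levels and damage states of
      #   P(hazard) * P(damage state | hazard) * consequence(damage state)
      hazard = {"VII": 0.10, "VIII": 0.04, "IX": 0.01}      # intensity weights (assumed)

      damage_given_hazard = {                               # P(damage state | intensity), assumed
          "VII":  {"slight": 0.30, "moderate": 0.10, "severe": 0.02, "collapse": 0.005},
          "VIII": {"slight": 0.35, "moderate": 0.25, "severe": 0.08, "collapse": 0.02},
          "IX":   {"slight": 0.30, "moderate": 0.35, "severe": 0.20, "collapse": 0.08},
      }

      cost_per_building = {"slight": 5e3, "moderate": 25e3, "severe": 80e3, "collapse": 150e3}
      n_buildings = 10_000

      expected_loss = sum(
          p_im * p_ds * cost_per_building[ds] * n_buildings
          for im, p_im in hazard.items()
          for ds, p_ds in damage_given_hazard[im].items()
      )
      print(f"expected loss: {expected_loss / 1e6:.1f} million")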

  5. Benefits of multidisciplinary collaboration for earthquake casualty estimation models: recent case studies

    NASA Astrophysics Data System (ADS)

    So, E.

    2010-12-01

    Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Increasing our understanding of what contributes to casualties in earthquakes involves coordinated data-gathering efforts among disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multivariate outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering complex physical pathways, loss models based purely on historic casualty data, or even worse, on rates derived from other countries, will be of very limited value. What’s more, as the world’s population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise, including engineering, public health and medicine. Research is needed to establish consistent and practical ways of collecting and modeling casualty data from earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities in the cities which most need it. Coupling the theories and findings from

  6. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U. S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits. [DOI: 10.1193/1.3480331]
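
    The abstract gives the model almost in closed form, so a minimal sketch is straightforward: the fatality rate at an intensity level is a two-parameter lognormal CDF, and total fatalities are the exposure-weighted sum over intensity levels. The theta and beta values below are illustrative only, not the published country-specific parameters, and the exposure is hypothetical.

      from math import log, sqrt, erf

      def fatality_rate(intensity, theta, beta):
          """Lognormal CDF of shaking intensity: fraction of exposed people killed."""
          return 0.5 * (1.0 + erf(log(intensity / theta) / (beta * sqrt(2.0))))

      def estimated_fatalities(exposure_by_intensity, theta, beta):
          """exposure_by_intensity: {MMI level: people exposed at that level}."""
          return sum(pop * fatality_rate(mmi, theta, beta)
                     for mmi, pop in exposure_by_intensity.items())

      exposure = {6: 500_000, 7: 200_000, 8: 50_000, 9: 5_000}   # hypothetical event
      print(round(estimated_fatalities(exposure, theta=14.0, beta=0.2)))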

  7. Rapid Ice Mass Loss: Does It Have an Influence on Earthquake Occurrence in Southern Alaska?

    NASA Technical Reports Server (NTRS)

    Sauber, Jeanne M.

    2008-01-01

    The glaciers of southern Alaska are extensive, and many of them have undergone gigatons of ice wastage on time scales on the order of the seismic cycle. Since the ice loss occurs directly above a shallow main thrust zone associated with subduction of the Pacific-Yakutat plate beneath continental Alaska, the region between the Malaspina and Bering Glaciers is an excellent test site for evaluating the importance of recent ice wastage on earthquake faulting potential. We demonstrate the influence of cumulative glacial mass loss following the 1899 Yakataga earthquake (M=8.1) by using a two-dimensional finite element model with a simple representation of ice fluctuations to calculate the incremental stresses and change in the fault stability margin (FSM) along the main thrust zone (MTZ) and on the surface. Along the MTZ, our results indicate a decrease in FSM between 1899 and the 1979 St. Elias earthquake (M=7.4) of 0.2 - 1.2 MPa over an 80 km region between the coast and the 1979 aftershock zone; at the surface, the estimated FSM was larger but more localized to the lower reaches of glacial ablation zones. The ice-induced stresses were large enough, in theory, to promote the occurrence of shallow thrust earthquakes. To empirically test the influence of short-term ice fluctuations on fault stability, we compared the seismic rate from a reference background time period (1988-1992) against other time periods (1993-2006) with variable ice or tectonic change characteristics. We found that the frequency of small tectonic events in the Icy Bay region increased in 2002-2006 relative to the background seismic rate. We hypothesize that this was due to a significant increase in the rate of ice wastage in 2002-2006 rather than to the M=7.9, 2002 Denali earthquake, located more than 100 km away.

  8. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.

  9. An Account of Preliminary Landslide Damage and Losses Resulting from the February 28, 2001, Nisqually, Washington, Earthquake

    USGS Publications Warehouse

    Highland, Lynn M.

    2003-01-01

    The February 28, 2001, Nisqually, Washington, earthquake (Mw = 6.8) damaged an area of the northwestern United States that previously experienced two major historical earthquakes, in 1949 and in 1965. Preliminary estimates of direct monetary losses from damage due to earthquake-induced landslides are approximately $34.3 million. However, this figure does not include costs from damage to the elevated portion of the Alaskan Way Viaduct, a major highway through downtown Seattle, Washington, that will be repaired or rebuilt, depending on the future decision of local and state authorities. There is much debate as to the cause of the damage to this viaduct, with evaluations ranging from earthquake shaking and liquefaction to lateral spreading, or a combination of these effects. If the viaduct is included in the costs, the losses increase to more than $500 million (if it is repaired) or to more than $1 billion (if it is replaced). The preliminary estimate of losses due to all causes of earthquake damage is approximately $2 billion, which includes temporary repairs to the Alaskan Way Viaduct. These preliminary dollar figures will no doubt increase when plans and decisions regarding the Viaduct are completed.

  10. Seismic velocity change after the 2011 Tohoku-Oki earthquake estimated from repeating earthquake data

    NASA Astrophysics Data System (ADS)

    Takagi, R.; Uchida, N.; Okada, T.; Hasegawa, A.

    2012-12-01

    We analyzed repeating earthquake data to estimate the velocity change in the overriding plate in NE Japan associated with the 2011 M9.0 Tohoku-Oki earthquake. Because repeating earthquakes rupture the same patch on the Pacific plate with the same source mechanism at different times, their waveform data are well suited to detecting temporal changes in subsurface structure. We used the direct part of the seismograms to estimate the location of the velocity change, because the time shift of the direct arrival reflects the velocity change along a single direct ray path, in contrast to the complex paths of coda waves. We analyzed repeating earthquake records from 2003 to December 2011. The travel-time shifts before and after the Tohoku-Oki earthquake were measured by the cross-spectral method in the frequency range of 1-10 Hz. One problem with using the direct part is the error in origin time. However, because the origin-time error is identical at all stations, we can estimate relative delays across many stations for a pair of repeating earthquakes and use them to map the spatial variation of the direct-wave time shift. After removing outliers, we subtracted the mean of the direct P-wave time shifts over all stations from the P- and S-wave time shifts at every station. From the results of many repeating-earthquake sequences, we recognize clear relative time delays of approximately 0.01 s for S waves in both the fore-arc and back-arc regions from Fukushima to Iwate prefectures. The P-wave travel-time delays are approximately three times smaller than those of the S waves. From a consideration of the observed travel-time shifts and ray paths, the velocity change appears to be distributed beneath the land area. One possible mechanism of the receiver-side velocity reduction is crack opening as a result of the static stress change due to coseismic slip on the plate boundary. The velocity change due to the static stress change can be
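
    The core quantity in this kind of study, the relative velocity change implied by a travel-time delay, reduces to dv/v ≈ -dt/t for a homogeneous change along the ray. The sketch below is not the authors' processing (which uses a cross-spectral method); it simply measures the delay between a repeating-event pair from the cross-correlation peak of the direct wave and converts it to dv/v, using a synthetic pulse pair and an assumed travel time.

      import numpy as np

      def direct_wave_delay(before, after, dt):
          """Delay (s) of `after` relative to `before` from the cross-correlation peak."""
          cc = np.correlate(after, before, mode="full")
          lag = np.argmax(cc) - (len(before) - 1)
          return lag * dt

      def velocity_change(delay, travel_time):
          """Relative velocity change implied by a travel-time delay (dv/v ~ -dt/t)."""
          return -delay / travel_time

      # Synthetic example: identical S pulse recorded 0.01 s later after the mainshock
      fs = 100.0
      t = np.arange(0, 10, 1.0 / fs)
      pulse = np.exp(-((t - 5.0) ** 2) / 0.02)
      pulse_delayed = np.exp(-((t - 5.01) ** 2) / 0.02)
      d = direct_wave_delay(pulse, pulse_delayed, 1.0 / fs)
      print(d, velocity_change(d, travel_time=30.0))   # ~0.01 s, ~ -3e-4 (0.03% slower)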

  11. How Good are our Source Parameter Estimates for Small Earthquakes?

    NASA Astrophysics Data System (ADS)

    Abercrombie, R. E.

    2002-12-01

    Measuring reliable and accurate source parameters for small earthquakes (M<3) is a long-term goal for seismologists. Small earthquakes are important as they bridge the gap between laboratory measurements of stick-slip sliding and large damaging earthquakes. They also provide insights into the nucleation process of unstable slip. Unfortunately, uncertainties in such parameters as the stress drop and radiated energy of small earthquakes are as large as an order of magnitude. This is a consequence of the high-frequency radiation (>100 Hz) needed to resolve the source process. High-frequency energy is severely attenuated and distorted along the ray path. The best records of small earthquakes are from deep (>1 km) boreholes and mines, where the waves are recorded before passing through the near-surface rocks. Abercrombie (1995) and Prejean & Ellsworth (2001) used such deep recordings to investigate source scaling and discovered that the radiated energy is a significantly smaller fraction of the total energy than for larger earthquakes. Richardson and Jordan (2002) obtained a similar result from seismograms recorded in deep mines. Ide and Beroza (2001) investigated the effect of limited recording bandwidth in such studies and found that there was evidence of selection bias. Recalculating the source parameters of earthquakes recorded in the Cajon Pass borehole, correcting for the limited bandwidth, does not remove the scale dependence. Ide et al. (2002) used empirical Green's function methods to improve source parameter estimates, and found that even deep borehole recording is not a guarantee of negligible site effects. Another problem is that the lack of multiple recordings of small earthquakes means that very simple source models have to be used to calculate source parameters. The rupture velocity must also be assumed. There are still significant differences (nearly a factor of 10 in stress drop) between the predictions of even the simple models commonly in use. Here I
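
    For context, the sketch below shows the standard single-corner-frequency calculation that underlies many of the small-earthquake stress-drop estimates discussed above: source radius from the corner frequency using a Brune-type constant, then static stress drop from moment and radius. The constant k, the shear velocity, and the input moment and corner frequency are assumptions chosen only for illustration.

      from math import log10

      def source_radius(corner_freq_hz, shear_vel_m_s=3500.0, k=0.37):
          """Circular source radius r = k * beta / fc (Brune-type model)."""
          return k * shear_vel_m_s / corner_freq_hz

      def stress_drop(moment_nm, radius_m):
          """Static stress drop for a circular crack: 7*M0 / (16*r^3), in Pa."""
          return 7.0 * moment_nm / (16.0 * radius_m ** 3)

      def moment_magnitude(moment_nm):
          """Mw from seismic moment (N*m)."""
          return (2.0 / 3.0) * log10(moment_nm) - 6.07

      m0 = 3.5e13          # assumed seismic moment, roughly an Mw 3 earthquake
      fc = 8.0             # assumed corner frequency (Hz)
      r = source_radius(fc)
      print(moment_magnitude(m0), r, stress_drop(m0, r) / 1e6)   # Mw, radius (m), MPa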

  12. Development of a Global Slope Dataset for Estimation of Landslide Occurrence Resulting from Earthquakes

    USGS Publications Warehouse

    Verdin, Kristine L.; Godt, Jonathan W.; Funk, Christopher C.; Pedreros, Diego; Worstell, Bruce; Verdin, James

    2007-01-01

    Landslides resulting from earthquakes can cause widespread loss of life and damage to critical infrastructure. The U.S. Geological Survey (USGS) has developed an alarm system, PAGER (Prompt Assessment of Global Earthquakes for Response), that aims to provide timely information to emergency relief organizations on the impact of earthquakes. Landslides are responsible for many of the damaging effects following large earthquakes in mountainous regions, and thus data defining the topographic relief and slope are critical to the PAGER system. A new global topographic dataset was developed to aid in rapidly estimating landslide potential following large earthquakes. We used the remotely-sensed elevation data collected as part of the Shuttle Radar Topography Mission (SRTM) to generate a slope dataset with nearly global coverage. Slopes from the SRTM data, computed at 3-arc-second resolution, were summarized at 30-arc-second resolution, along with statistics developed to describe the distribution of slope within each 30-arc-second pixel. Because there are many small areas lacking SRTM data and the northern limit of the SRTM mission was lat 60° N., statistical methods referencing other elevation data were used to fill the voids within the dataset and to extrapolate the data north of 60° N. The dataset will be used in the PAGER system to rapidly assess the susceptibility of areas to landsliding following large earthquakes.
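
    A minimal sketch of the aggregation step described above: fine-grid slope values are summarized into coarser cells by storing per-block statistics. The block size (10 x 10, standing in for 3-arc-second cells aggregated to 30 arc-seconds) and the chosen statistics are assumptions for demonstration, and the input slope grid is synthetic.

      import numpy as np

      def summarize_slope(fine_slope, block=10):
          """Return coarse grids of mean, max and 90th-percentile slope per block."""
          ny, nx = fine_slope.shape
          blocks = fine_slope[: ny // block * block, : nx // block * block]
          blocks = blocks.reshape(ny // block, block, nx // block, block).swapaxes(1, 2)
          flat = blocks.reshape(blocks.shape[0], blocks.shape[1], -1)
          return {"mean": flat.mean(axis=2),
                  "max": flat.max(axis=2),
                  "p90": np.percentile(flat, 90, axis=2)}

      # Fake fine-resolution slope grid (degrees), only to exercise the aggregation
      fine = np.abs(np.random.default_rng(0).normal(10, 5, size=(300, 300)))
      coarse = summarize_slope(fine)
      print(coarse["mean"].shape, coarse["p90"][0, 0])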

  13. Large Earthquakes in Developing Countries: Estimating and Reducing their Consequences

    NASA Astrophysics Data System (ADS)

    Tucker, B. E.

    2003-12-01

    Recent efforts to reduce the risk of earthquakes in developing countries have been diverse, earnest, and inadequate. The earthquake risk in developing countries is large and growing rapidly. It is largely ignored. Unless something is done - quickly - to reduce it, both developing and developed countries will suffer human and economic losses far greater than have been experienced in the past. GeoHazards International (GHI) is a nonprofit organization that has attempted to reduce the death and suffering caused by earthquakes in the world's most vulnerable communities, through preparedness, mitigation and prevention. Its approach has included raising awareness, strengthening local institutions and launching mitigation activities, particularly for schools. GHI and its partners around the world have achieved some success: thousands of school children are safer, hundreds of cities are aware of their risk, tens of cities have been assessed and advised, and some local organizations have been strengthened. But there is disturbing evidence that what is being done is insufficient. The problem outpaces the cure. A new program is now being considered that would attempt to improve earthquake-resistant construction of schools, internationally, by publicizing well-managed programs around the world that design, construct and maintain earthquake-resistant schools. While focused on schools, this program might have broader applications in the future.

  14. Rapid estimate of earthquake source duration: application to tsunami warning.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier

    2016-04-01

    We present a method for estimating the source duration of the fault rupture, based on the high-frequency envelope of teleseismic P waves, inspired by the original work of Ni et al. (2005). The main interest of this seismic parameter is to detect abnormally low-velocity ruptures, which are characteristic of so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated against two independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process that determines the source time function (Vallée et al., 2011). The estimated source duration is also compared with the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process, PDFM2 (Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the point of view of operational tsunami warning, numerical tsunami simulations depend strongly on the source estimation: the better the source estimate, the better the tsunami forecast. The source duration is not injected directly into the numerical tsunami simulations, because the kinematics of the source are presently ignored (Jamelot and Reymond, 2015). But for a tsunami earthquake occurring in the shallower part of the subduction zone, we must consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions decrease while the slip increases, like a 'compact' source (Okal and Hébert, 2007). Conversely, a rapid 'snappy' earthquake with poor tsunami excitation power is characterized by a higher rigidity modulus and produces weaker displacement and smaller source dimensions than a 'normal' earthquake. References: Clément, J

  15. Global earthquake casualties due to secondary effects: A quantitative analysis for improving rapid loss analyses

    USGS Publications Warehouse

    Marano, K.D.; Wald, D.J.; Allen, T.I.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER's overall assessment of earthquakes losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra-Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address potential for each hazard (Earle et al., Proceedings of the 14th World Conference of the Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability. © Springer Science+Business Media B.V. 2009.

  16. Global Earthquake Casualties due to Secondary Effects: A Quantitative Analysis for Improving PAGER Losses

    USGS Publications Warehouse

    Wald, David J.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey’s (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER’s overall assessment of earthquakes losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra–Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address potential for each hazard (Earle et al., Proceedings of the 14th World Conference of the Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability.

  17. Earthquakes

    USGS Publications Warehouse

    Shedlock, Kaye M.; Pakiser, Louis Charles

    1998-01-01

    One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible aftereffects. An earthquake is a sudden movement of the Earth, caused by the abrupt release of strain that has accumulated over a long time. For hundreds of millions of years, the forces of plate tectonics have shaped the Earth as the huge plates that form the Earth's surface slowly move over, under, and past each other. Sometimes the movement is gradual. At other times, the plates are locked together, unable to release the accumulating energy. When the accumulated energy grows strong enough, the plates break free. If the earthquake occurs in a populated area, it may cause many deaths and injuries and extensive property damage. Today we are challenging the assumption that earthquakes must present an uncontrollable and unpredictable hazard to life and property. Scientists have begun to estimate the locations and likelihoods of future damaging earthquakes. Sites of greatest hazard are being identified, and definite progress is being made in designing structures that will withstand the effects of earthquakes.

  18. Public Release of Estimated Impact-Based Earthquake Alerts - An Update to the U.S. Geological Survey PAGER System

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Jaiswal, K. S.; Marano, K.; Hearne, M.; Earle, P. S.; So, E.; Garcia, D.; Hayes, G. P.; Mathias, S.; Applegate, D.; Bausch, D.

    2010-12-01

    The U.S. Geological Survey (USGS) has begun publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses. These estimates should significantly enhance the utility of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system that has been providing estimated ShakeMaps and computing population exposures to specific shaking intensities since 2007. Quantifying earthquake impacts and communicating loss estimates (and their uncertainties) to the public has been the culmination of several important new and evolving components of the system. First, the operational PAGER system now relies on empirically-based loss models that account for estimated shaking hazard, population exposure, and employ country-specific fatality and economic loss functions derived using analyses of losses due to recent and past earthquakes. In some countries, our empirical loss models are informed in part by PAGER’s semi-empirical and analytical loss models, and building exposure and vulnerability data sets, all of which are being developed in parallel to the empirical approach. Second, human and economic loss information is now portrayed as a supplement to existing intensity/exposure content on both PAGER summary alert (available via cell phone/email) messages and web pages. Loss calculations also include estimates of the economic impact with respect to the country’s gross domestic product. Third, in order to facilitate rapid and appropriate earthquake responses based on our probable loss estimates, in early 2010 we proposed a four-level Earthquake Impact Scale (EIS). Instead of simply issuing median estimates for losses—which can be easily misunderstood and misused—this scale provides ranges of losses from which potential responders can gauge expected overall impact from strong shaking. EIS is based on two complementary criteria: the estimated cost of damage, which is most suitable for U
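
    The two-criterion alert logic described for the Earthquake Impact Scale can be sketched as follows: estimated fatalities and estimated economic losses each map to an alert level, and the more severe of the two governs. The threshold values below are illustrative assumptions only, not the official PAGER/EIS definitions.

      def alert_from_fatalities(fatalities):
          for level, limit in (("green", 1), ("yellow", 100), ("orange", 1000)):
              if fatalities < limit:
                  return level
          return "red"

      def alert_from_dollars(loss_usd):
          for level, limit in (("green", 1e6), ("yellow", 1e8), ("orange", 1e9)):
              if loss_usd < limit:
                  return level
          return "red"

      RANK = {"green": 0, "yellow": 1, "orange": 2, "red": 3}

      def earthquake_alert(fatalities, loss_usd):
          """Overall alert is the more severe of the fatality and economic alerts."""
          a, b = alert_from_fatalities(fatalities), alert_from_dollars(loss_usd)
          return a if RANK[a] >= RANK[b] else b

      print(earthquake_alert(fatalities=40, loss_usd=2.5e9))   # "red", driven by losses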

  19. Estimating earthquake location and magnitude from seismic intensity data

    USGS Publications Warehouse

    Bakun, W.H.; Wentworth, C.M.

    1997-01-01

    Analysis of Modified Mercalli intensity (MMI) observations for a training set of 22 California earthquakes suggests a strategy for bounding the epicentral region and moment magnitude M from MMI observations only. We define an intensity magnitude MI that is calibrated to be equal in the mean to M. MI = mean(Mi), where Mi = (MMIi + 3.29 + 0.0206 * Δi)/1.68 and Δi is the epicentral distance (km) of observation MMIi. The epicentral region is bounded by contours of rms[MI] = rms(MI - Mi) - rms0(MI - Mi), where rms is the root mean square, rms0(MI - Mi) is the minimum rms over a grid of assumed epicenters, and empirical site corrections and a distance weighting function are used. Empirical contour values for bounding the epicenter location and empirical bounds for M estimated from MI appropriate for different levels of confidence and different quantities of intensity observations are tabulated. The epicentral region bounds and MI obtained for an independent test set of western California earthquakes are consistent with the instrumental epicenters and moment magnitudes of these earthquakes. The analysis strategy is particularly appropriate for the evaluation of pre-1900 earthquakes for which the only available data are a sparse set of intensity observations.
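
    Because the abstract quotes the relations explicitly, they can be transcribed directly into a small sketch: each intensity observation yields a magnitude estimate Mi from its MMI value and epicentral distance, MI is their mean, and a grid search over trial epicenters minimizes the rms misfit. The site corrections and distance weighting used in the paper are omitted here, and the observation set and search grid are hypothetical.

      from math import hypot, sqrt

      def mi(mmi, dist_km):
          """Single-observation magnitude estimate from the quoted relation."""
          return (mmi + 3.29 + 0.0206 * dist_km) / 1.68

      def intensity_magnitude(observations, epicenter):
          """observations: list of (x_km, y_km, MMI); epicenter: (x_km, y_km)."""
          ms = [mi(m, hypot(x - epicenter[0], y - epicenter[1])) for x, y, m in observations]
          m_i = sum(ms) / len(ms)
          rms = sqrt(sum((m_i - v) ** 2 for v in ms) / len(ms))
          return m_i, rms

      obs = [(10, 0, 7), (0, 25, 6), (-40, 10, 5), (60, -30, 4)]   # hypothetical MMI data
      best = min(((x, y) for x in range(-50, 51, 10) for y in range(-50, 51, 10)),
                 key=lambda e: intensity_magnitude(obs, e)[1])
      print(best, intensity_magnitude(obs, best))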

  20. Time-varying loss forecast for an earthquake scenario in Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus; Zechar, Jeremy D.; Wiemer, Stefan

    2014-05-01

    When an unexpected earthquake occurs, people suddenly want advice on how to cope with the situation. The 2009 L'Aquila quake highlighted the significance of public communication and pushed the use of scientific methods to drive alternative risk mitigation strategies. For instance, van Stiphout et al. (2010) suggested a new approach for objective short-term evacuation decisions: probabilistic risk forecasting combined with cost-benefit analysis. In the present work, we apply this approach to an earthquake sequence that simulated a repeat of the 1356 Basel earthquake, one of the most damaging events in Central Europe. A recent development to benefit society in the case of an earthquake is probabilistic forecasting of aftershock occurrence, but seismic risk delivers a more direct expression of the socio-economic impact. To forecast seismic risk in the short term, we translate aftershock probabilities into time-varying seismic hazard and combine this with time-invariant loss estimation. Compared with van Stiphout et al. (2010), we use an advanced aftershock forecasting model and detailed settlement data, which allow spatially resolved forecasts and settlement-specific decision-making. We quantify the risk forecast probabilistically in terms of human loss. For instance, one minute after the M6.6 mainshock, the probability for an individual to die within the next 24 hours is 41 000 times higher than the long-term average, but the absolute value remains small, at 0.04%. The final cost-benefit analysis adds value beyond a pure statistical approach: it provides objective statements that may justify evacuations. To deliver supportive information in a simple form, we propose a warning approach in terms of alarm levels. Our results do not justify evacuations prior to the M6.6 mainshock, but they do in certain districts afterwards. The ability to forecast the short-term seismic risk at any time (and, with sufficient data, anywhere) is the first step of personal decision-making and raising risk

  1. Blood Loss Estimation Using Gauze Visual Analogue

    PubMed Central

    Ali Algadiem, Emran; Aleisa, Abdulmohsen Ali; Alsubaie, Huda Ibrahim; Buhlaiqah, Noora Radhi; Algadeeb, Jihad Bagir; Alsneini, Hussain Ali

    2016-01-01

    Background Estimating intraoperative blood loss can be a difficult task, especially when blood is mostly absorbed by gauze. In this study, we have provided an improved method for estimating blood absorbed by gauze. Objectives To develop a guide to estimate blood absorbed by surgical gauze. Materials and Methods A clinical experiment was conducted using aspirated blood and common surgical gauze to create a realistic amount of absorbed blood in the gauze. Different percentages of staining were photographed to create an analogue for the amount of blood absorbed by the gauze. Results A visual analogue scale was created to aid the estimation of blood absorbed by the gauze. The absorptive capacity of different gauze sizes was determined when the gauze was dripping with blood. The amount of reduction in absorption was also determined when the gauze was wetted with normal saline before use. Conclusions The use of a visual analogue may increase the accuracy of blood loss estimation and decrease the consequences related to over or underestimation of blood loss. PMID:27626017

  2. Blood Loss Estimation Using Gauze Visual Analogue

    PubMed Central

    Ali Algadiem, Emran; Aleisa, Abdulmohsen Ali; Alsubaie, Huda Ibrahim; Buhlaiqah, Noora Radhi; Algadeeb, Jihad Bagir; Alsneini, Hussain Ali

    2016-01-01

    Background Estimating intraoperative blood loss can be a difficult task, especially when blood is mostly absorbed by gauze. In this study, we have provided an improved method for estimating blood absorbed by gauze. Objectives To develop a guide to estimate blood absorbed by surgical gauze. Materials and Methods A clinical experiment was conducted using aspirated blood and common surgical gauze to create a realistic amount of absorbed blood in the gauze. Different percentages of staining were photographed to create an analogue for the amount of blood absorbed by the gauze. Results A visual analogue scale was created to aid the estimation of blood absorbed by the gauze. The absorptive capacity of different gauze sizes was determined when the gauze was dripping with blood. The amount of reduction in absorption was also determined when the gauze was wetted with normal saline before use. Conclusions The use of a visual analogue may increase the accuracy of blood loss estimation and decrease the consequences related to over or underestimation of blood loss.

  3. The ratio of injured to fatalities in earthquakes, estimated from intensity and building properties

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Trendafiloski, G.

    2009-04-01

    a city with poorly constructed buildings. The overall ratio for Bam was R=0.33, and for three districts it was R=0.2. In Baravat, the only other city in the epicentral area, located within about four kilometers of the epicenter, R=0.55. Our contention that R is a function of I is further supported by analyzing R(I) for earthquakes where R is known for several settlements. The uncertainties in input parameters like earthquake source properties and Fat are moderate; those in Inj are large. Nevertheless, our results are robust because the difference between R in the developed and developing world is enormous and the dependence on I is obvious. We conclude that R in most earthquakes results from a mixture of low values near the epicenter and high values farther away, where intensities decrease to VI. The range between settlements in one single earthquake can be approximately 0.2 < R < 100, due to varying distance and hence varying I. Further, R(developed) = 25 R(developing), approximately. We also simulated several past earthquakes in Algeria, Peru and Iran to compare the values of estimated R(I) resulting from the use of ATC-13 and HAZUS casualty matrices with observations. We evaluated these matrices because they are supposed to apply worldwide and they consider all damage states as possible causes of casualties. Our initial conclusion is that the latter matrices fit the observations better, in particular for the intensity range VII-IX. However, to improve the estimates for all intensity values, we propose that casualty matrices for estimating human losses due to earthquakes should account for differences in I and in the building quality in different parts of the world.

  4. Soil amplification maps for estimating earthquake ground motions in the Central US

    USGS Publications Warehouse

    Bauer, R.A.; Kiefer, J.; Hester, N.

    2001-01-01

    The State Geologists of the Central United States Earthquake Consortium (CUSEC) are developing maps to assist State and local emergency managers and community officials in evaluating the earthquake hazards for the CUSEC region. The state geological surveys have worked together to produce a series of maps that show seismic shaking potential for eleven 1 X 2 degree (scale 1:250 000, or 1 in. ≈ 3.9 miles) quadrangles that cover the high-risk area of the New Madrid Seismic Zone in eight states. Shear wave velocity values for the surficial materials were gathered and used to classify the soils according to their potential to amplify earthquake ground motions. Geologic base maps of surficial materials or 3-D material maps, either existing or produced for this project, were used in conjunction with shear wave velocities to classify the soils for the upper 15-30 m. These maps are available in an electronic form suitable for inclusion in the federal emergency management agency's earthquake loss estimation program (HAZUS). © 2001 Elsevier Science B.V. All rights reserved.

  5. Earthquake Lights and Estimates of Electric Fields and Currents

    NASA Astrophysics Data System (ADS)

    Nemtchinov, I. V.; Losseva, T. V.

    2003-12-01

    Luminous phenomena were observed during a number of earthquakes, e.g. the 1995 Kobe earthquake in Japan. Estimates based on eyewitness reports show that to produce lightning-type and corona-type discharges in the air, the electric charge delivered to the ground surface should be of the order of several Coulomb. We assume that this charge was formed and transported by a "mechanically produced current" flowing through the fault. The most difficult problem is why these charges survive in the highly conductive soil. The electric conductivity may drastically decrease due to heating and evaporation of water, but only in a thin central part of the fault, while the width of the luminous zone, and thus the width of the zone with high electric field strengths, is of the order of a hundred meters or even several kilometers. So we propose a mechanism of charge localization - the so-called skin effect. Spreading of the currents and charges is described by a system of diffusion-type electromagnetodynamic equations. There are various models of rupturing during earthquakes. In the wrinkle-like self-healing pulse model, the porosity and hydraulic and electric conductivity increase at the leading edge of the ruptured segment and decrease at the trailing edge. Assuming the electrokinetic mechanism as the most effective one, we find that the charged fluid flows from these wedge-like ends of the ruptured segment to its center. The two currents flowing in opposite directions produce magnetic fields with opposite rotation, and reconnection due to magnetic diffusion decreases magnetic signals at large distances from the source region. Co-seismic electric and magnetic signals obtained during the Kobe earthquake at two stations at distances of about 100 km from the epicenter do not contradict our estimates and 3D numerical simulations based on our model of electric currents, magnetic fields and mechanical processes of rupturing. We suggest that a similar current system may be formed even in the

  6. Earthquake Loss Assessment for the Evaluation of the Sovereign Risk and Financial Sustainability of Countries and Cities

    NASA Astrophysics Data System (ADS)

    Cardona, O. D.

    2013-05-01

    Recently, earthquakes have struck cities in both developing and developed countries, revealing significant knowledge gaps and the need to improve the quality of the input data and the assumptions of the risk models. The quake and tsunami in Japan (2011) and the disasters due to earthquakes in Haiti (2010), Chile (2010), New Zealand (2011) and Spain (2011), to mention only some unexpected impacts in different regions, have raised several concerns regarding hazard assessment as well as the uncertainties associated with estimating future losses. Understanding probable losses and reconstruction costs due to earthquakes creates powerful incentives for countries to develop planning options and tools to cope with sovereign risk, including allocating the sustained budgetary resources necessary to reduce those potential damages and safeguard development. Therefore, robust risk models are needed to assess future economic impacts, a country's fiscal responsibilities and governments' contingent liabilities, and to formulate, justify and implement risk reduction measures and optimal financial strategies of risk retention and transfer. Special attention should be paid to the understanding of risk metrics such as the Loss Exceedance Curve (empirical and analytical) and the Expected Annual Loss in the context of conjoint and cascading hazards.
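
    The two risk metrics named at the end of the abstract can be illustrated with a minimal sketch computed from a hypothetical stochastic event set, in which each event carries a mean loss and an annual rate of occurrence: the Expected Annual Loss is the rate-weighted sum of event losses, and the Loss Exceedance Curve gives the annual rate of exceeding any loss threshold. The event set below is invented for illustration.

      def expected_annual_loss(events):
          """events: list of (loss, annual_rate) pairs."""
          return sum(loss * rate for loss, rate in events)

      def exceedance_rate(events, threshold):
          """Annual rate of events whose loss is at least `threshold`."""
          return sum(rate for loss, rate in events if loss >= threshold)

      events = [(5e6, 0.10), (5e7, 0.02), (5e8, 0.004), (5e9, 0.0005)]   # illustrative
      print(expected_annual_loss(events))                                 # EAL per year
      print([(x, exceedance_rate(events, x)) for x in (1e7, 1e8, 1e9)])   # LEC points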

  7. The global historical and future economic loss and cost of earthquakes during the production of adaptive worldwide economic fragility functions

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    Over the past decade, the production of economic indices behind the CATDAT Damaging Earthquakes Database has allowed for the conversion of historical earthquake economic loss and cost events into today's terms using long-term spatio-temporal series of the consumer price index (CPI), construction costs, wage indices, and GDP from 1900-2013. As part of the doctoral thesis of Daniell (2014), databases and GIS layers at country and sub-country level have been produced for population, GDP per capita, and net and gross capital stock (depreciated and non-depreciated) using studies, census information and the perpetual inventory method. In addition, a detailed study has been undertaken to collect and reproduce as many historical isoseismal maps, macroseismic intensity results and reproductions of earthquakes as possible out of the 7208 damaging events in the CATDAT database from 1900 onwards. a) The isoseismal database and population bounds from 3000+ collected damaging events were compared with the output parameters of GDP and net and gross capital stock per intensity bound and administrative unit, creating a spatial join for analysis. b) The historical costs were divided into shaking/direct ground motion effects and secondary effects costs. The shaking costs were further divided into gross capital stock related and GDP related costs for each administrative unit and intensity bound couplet. c) Costs were then estimated based on the optimisation of the function in terms of costs vs. gross capital stock and costs vs. GDP via regression. Losses were estimated based on net capital stock, looking at the infrastructure age and value at the time of the event. This dataset was then used to develop an economic exposure for each historical earthquake in comparison with the loss recorded in the CATDAT Damaging Earthquakes Database. The production of economic fragility functions for each country was possible using a temporal regression based on the parameters of
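
    The conversion of historical losses into today's terms that this abstract describes rests on two simple scalings, sketched below under stated assumptions: inflate the nominal loss with a price index, and optionally rescale by the growth of the exposed capital stock. The index and capital-stock values in the example are placeholders, not CATDAT data.

      def to_todays_terms(loss_then, index_then, index_today):
          """Inflate a historical loss to current price levels with a price index."""
          return loss_then * index_today / index_then

      def normalise_for_exposure(loss_today_prices, capital_then, capital_today):
          """Additionally scale by the growth of the exposed capital stock."""
          return loss_today_prices * capital_today / capital_then

      loss_1976 = 2.0e9                                  # nominal loss in year-of-event USD
      inflated = to_todays_terms(loss_1976, index_then=56.9, index_today=233.0)
      print(inflated, normalise_for_exposure(inflated, capital_then=1.0, capital_today=3.2))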

  8. Estimation of earthquake effects associated with a great earthquake in the New Madrid seismic zone

    USGS Publications Warehouse

    Hopper, Margaret G.; Algermissen, Sylvester Theodore; Dobrovolny, Ernest E.

    1983-01-01

    Estimates have been made of the effects of a large Ms = 8.6, Io = XI earthquake hypothesized to occur anywhere in the New Madrid seismic zone. The estimates are based on the distributions of intensities associated with the earthquakes of 1811-12, 1843 and 1895, although the effects of other historical shocks are also considered. The resulting composite-type intensity map for a maximum intensity XI is believed to represent the upper level of shaking likely to occur. Specific intensity maps have been developed for six cities near the epicentral region, taking into account the most likely distribution of site response in each city. Intensities found are: IX for Carbondale, IL; VIII and IX for Evansville, IN; VI and VIII for Little Rock, AR; IX and X for Memphis, TN; VIII, IX, and X for Paducah, KY; and VIII and X for Poplar Bluff, MO. On a regional scale, intensities are found to attenuate from the New Madrid seismic zone most rapidly on the west and southwest sides of the zone, most slowly to the northwest along the Mississippi River, to the northeast along the Ohio River, and to the southeast toward Georgia and South Carolina. Intensities attenuate toward the north, east, and south in a more normal fashion. Known liquefaction effects are documented, but much more research is needed to define the liquefaction potential.

  9. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  10. Physics-based estimates of maximum magnitude of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin

    2016-04-01

    In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based relation between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures either to stop spontaneously (sub-critical ruptures) or to run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations that are representative of the induced-seismicity environment. We present physics-based predictions of Mmax on a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations but suggests a different scaling with injected volume than that of the empirical relation of McGarr (2014).

  11. Likely Human Losses in Future Earthquakes in Central Myanmar, Beyond the Northern end of the M9.3 Sumatra Rupture of 2004

    NASA Astrophysics Data System (ADS)

    Wyss, B. M.; Wyss, M.

    2007-12-01

    We estimate that the city of Rangoon and adjacent provinces (Rangoon, Rakhine, Ayeryarwady, Bago) represent an earthquake risk similar in severity to that of Istanbul and the Marmara Sea region. After the M9.3 Sumatra earthquake of December 2004, which ruptured to a point north of the Andaman Islands, the likelihood of additional ruptures in the direction of Myanmar and within Myanmar is increased. This assumption is especially plausible since M8.2 and M7.9 earthquakes in September 2007 extended the 2005 ruptures to the south. Given the dense population of the aforementioned provinces, and the fact that earthquakes of the M7.5 class have occurred there historically (in 1858, 1895 and three in 1930), it would not be surprising if similar-sized earthquakes were to occur in the coming decades. Considering that we predicted the extent of human losses in the M7.6 Kashmir earthquake of October 2005 approximately correctly six months before it occurred, it seems reasonable to attempt to estimate losses in future large to great earthquakes in central Myanmar and along its coast of the Bay of Bengal. We have calculated the expected number of fatalities for two classes of events: (1) M8 ruptures offshore, between the Andaman Islands and the Myanmar coast and along Myanmar's coast of the Bay of Bengal; (2) M7.5 repeats of the historic earthquakes that occurred in the aforementioned years. These calculations are only order-of-magnitude estimates because all necessary input parameters are poorly known. The population numbers, the condition of the building stock, the regional attenuation law, the local site amplification and of course the parameters of future earthquakes can only be estimated within wide ranges. For this reason, we give minimum and maximum estimates, both within approximate error limits. We conclude that the M8 earthquakes located offshore are expected to be less harmful than the M7.5 events on land: for M8 events offshore, the minimum number of fatalities is estimated

  12. Quantitative Estimates of the Numbers of Casualties to be Expected due to Major Earthquakes Near Megacities

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Wenzel, F.

    2004-12-01

    Defining casualties as the sum of fatalities and injured, we use their mean number, as calculated by QUAKELOSS (developed by the Extreme Situations Research Center, Moscow), as a measure of the extent of possible disasters due to earthquakes. Examples of cities we examined include Algiers, Cairo, Istanbul, Mumbai and Teheran, with populations ranging from about 3 to 20 million. With the assumption that the properties of the building stock have not changed since 1950, we find that the number of expected casualties will have increased about 5- to 10-fold by the year 2015. This increase is directly proportional to the increase of the population. For the assumed magnitude, we used M7 and M6.5 because shallow earthquakes in this range can occur in the seismogenic layer without rupturing the surface. This means they could occur anywhere in a seismically active area, not only along known faults. As a function of epicentral distance, the fraction of the population that becomes casualties decreases from about 6% at 20 km to 3% at 30 km and 0.5% at 50 km for an earthquake of M7. At 30 km distance, the assumed variation of the properties of the building stock from country to country gives rise to variations of 1% to 5% in the estimated percentage of the population that becomes casualties. As a function of earthquake size, the expected number of casualties drops by approximately an order of magnitude for an M6.5, compared to an M7, at 30 km distance. Because the computer code and database in QUAKELOSS are calibrated based on about 1000 earthquakes with fatalities, and verified by real-time loss estimates for about 60 cases, these results are probably of the correct order of magnitude. However, the results should not be taken as overly reliable, because (1) the probability calculations of the losses result in uncertainties of about a factor of two, (2) the method has been tested for medium-sized cities, not for megacities, and (3) many assumptions were made. Nevertheless, it is

  13. Rupture Process of the 1969 and 1975 Kurile Earthquakes Estimated from Tsunami Waveform Analyses

    NASA Astrophysics Data System (ADS)

    Ioki, Kei; Tanioka, Yuichiro

    2016-09-01

    The 1969 and 1975 great Kurile earthquakes occurred along the Kurile trench. Tsunamis generated by these earthquakes were observed at tide gauge stations around the coasts of the Okhotsk Sea and Pacific Ocean. To understand the rupture processes of the 1969 and 1975 earthquakes, the slip distributions of both events were estimated using a tsunami waveform inversion technique. The seismic moments estimated from the slip distributions of the 1969 and 1975 earthquakes were 1.1 × 10^21 Nm (Mw 8.0) and 0.6 × 10^21 Nm (Mw 7.8), respectively. The 1973 Nemuro-Oki earthquake occurred at the plate interface adjacent to that ruptured by the 1969 Kurile earthquake. The 1975 Shikotan earthquake occurred in a shallow region of the plate interface that was not ruptured by the 1969 Kurile earthquake. Further, as in the 1969-1975 sequence, it is possible that a great earthquake may occur in a shallow part of the plate interface a few years after a great earthquake in a deeper part of the same region along the trench.
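
    The seismic moments and magnitudes quoted above follow from an inverted slip distribution through two standard relations, sketched here with invented subfault values: the moment is rigidity times subfault area times slip, summed over subfaults, and Mw uses the usual moment-magnitude relation. The rigidity, subfault geometry and slips below are assumptions for illustration only.

      from math import log10

      def seismic_moment(subfault_slips_m, subfault_area_m2, rigidity_pa=4.0e10):
          """M0 = sum over subfaults of mu * A * slip, in N*m."""
          return sum(rigidity_pa * subfault_area_m2 * s for s in subfault_slips_m)

      def moment_magnitude(m0_nm):
          """Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
          return (2.0 / 3.0) * (log10(m0_nm) - 9.1)

      slips = [0.5, 1.2, 2.0, 1.5, 0.8, 0.3]           # slip (m) on six assumed subfaults
      area = 50e3 * 50e3                                # each subfault 50 km x 50 km
      m0 = seismic_moment(slips, area)
      print(m0, moment_magnitude(m0))                   # ~6.3e20 N*m, ~Mw 7.8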

  14. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  15. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  16. Global Earthquake and Volcanic Eruption Economic losses and costs from 1900-2014: 115 years of the CATDAT database - Trends, Normalisation and Visualisation

    NASA Astrophysics Data System (ADS)

    Daniell, James; Skapski, Jens-Udo; Vervaeck, Armand; Wenzel, Friedemann; Schaefer, Andreas

    2015-04-01

    Over the past 12 years, an in-depth database has been constructed for socio-economic losses from earthquakes and volcanoes. The effects of earthquakes and volcanic eruptions have been documented in many databases; however, many errors and incorrect details are often encountered. To combat this, the database was formed with socioeconomic checks of GDP, capital stock, population and other elements, as well as providing upper and lower bounds to each available event loss. The definition of economic losses within the CATDAT Damaging Earthquakes Database (Daniell et al., 2011a) has, as of v6.1, been revised to provide three options of natural disaster loss pricing (reconstruction cost, replacement cost and actual loss) in order to better define the impact of historical disasters. For volcanoes as for earthquakes, a reassessment has been undertaken of the historical net and gross capital stock and GDP at the time of each event, including the depreciated stock, in order to calculate the actual loss. A normalisation has then been undertaken using updated population, GDP and capital stock. The difference between depreciated and gross capital can be removed from the historical loss estimates, which were all calculated without taking depreciation of the building stock into account. The combination of 1900-2014 time series of net and gross capital stock, GDP and direct economic loss data, together with detailed studies of infrastructure age and existing damage surveys, has allowed the first estimate of this nature. The death tolls in earthquakes from 1900-2014 are presented in various forms, showing around 2.32 million deaths due to earthquakes (with a range of 2.18 to 2.63 million), around 59% of them due to masonry buildings and 28% due to secondary effects. For the volcanic eruption database, around 98,000 deaths are seen from 1900-2014, with a range of around 83,000 to 107,000. The application of VSL life costing from death and injury

  17. Errors in Expected Human Losses Due to Incorrect Seismic Hazard Estimates

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Nekrasova, A.; Kossobokov, V. G.

    2011-12-01

    The probability of strong ground motion is presented in seismic hazard maps, in which peak ground accelerations (PGA) with 10% probability of exceedance in 50 years are shown by color codes. It has become evident that these maps do not correctly give the seismic hazard. On the seismic hazard map of Japan, the epicenters of the recent large earthquakes are located in regions of relatively low hazard. The errors of the GSHAP maps have been measured by the difference between observed and expected intensities due to large earthquakes. Here, we estimate how the errors in seismic hazard estimates propagate into errors in estimating the potential fatalities and affected population. We calculated the numbers of fatalities that would have to be expected in the regions of the nine earthquakes of the last 10 years with more than 1,000 fatalities and relatively reliable fatality estimates, assuming a magnitude that generates, as a maximum intensity, the one given by the GSHAP maps. This value is the number of fatalities to be exceeded with a probability of 10% in 50 years. In most regions of devastating earthquakes, there are no instruments to measure ground accelerations. Therefore, we converted the PGA expected as a likely maximum based on the GSHAP maps to intensity. The magnitude of the earthquake that would cause the intensity expected by GSHAP as a likely maximum was calculated by M(GSHAP) = (I0 + 1.5)/1.5. The numbers of fatalities expected for earthquakes of magnitude M(GSHAP) were calculated using the loss-estimating program QLARM. We calibrated this tool for each case by calculating the theoretical damage and numbers of fatalities (Festim) for the disastrous test earthquakes, generating a match with the observed numbers of fatalities (Fobs = Festim) by adjusting the attenuation relationship within the bounds of commonly observed laws. Calculating the numbers of fatalities expected for the earthquakes with M(GSHAP) will thus yield results that
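
    The chain of conversions described above can be sketched as follows: a map PGA value is converted to a likely maximum intensity and then, via the quoted relation M(GSHAP) = (I0 + 1.5)/1.5, to the magnitude expected to produce that intensity. The PGA-to-intensity relation used below is a generic Wald-type conversion chosen only for illustration; it is not necessarily the one the authors used, and the input PGA is hypothetical.

      from math import log10

      def pga_to_mmi(pga_cm_s2):
          """Approximate instrumental intensity from PGA (roughly valid for MMI >= V)."""
          return 3.66 * log10(pga_cm_s2) - 1.66

      def gshap_magnitude(i0):
          """Magnitude whose maximum intensity equals i0, per the relation in the text."""
          return (i0 + 1.5) / 1.5

      pga = 0.24 * 981.0                    # hypothetical 0.24 g map value, in cm/s^2
      i0 = pga_to_mmi(pga)
      print(round(i0, 1), round(gshap_magnitude(i0), 1))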

  18. Estimate hydrocarbon losses during tank loading

    SciTech Connect

    Novacek, J.P.

    1996-05-01

    A very important parameter in estimating emissions from loading operations is the AP-42 S factor for petroleum loading losses. This factor accounts for the departure of a vapor-liquid system from equilibrium at a given atmospheric temperature. As such, it is critical to the design of pollution control equipment. If a vessel with a small vent to the atmosphere is half full of a liquid, and the ambient temperature remains relatively constant for an extended period of time, then eventually enough of the liquid in the container will vaporize to reach its vapor pressure at the ambient temperature. When this condition is reached, the liquid and vapor are in equilibrium, and the S factor equals one. If that same container is now being filled with liquid, then the vapor from the added liquid may not have a chance to reach equilibrium. When the vapor concentration is below its equilibrium concentration (i.e., the air still has some vapor-holding capacity), the S factor is less than one. An S factor greater than one indicates supersaturated conditions. For a top-splash loading arrangement, the S factor can be greater than one. The detailed explanation of this phenomenon is beyond the scope of this article, but in general supersaturated conditions result from the air-stripping effect and the increased liquid surface area produced by the splashing.
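
    For context, the S factor enters the standard AP-42 petroleum loading-loss relation, reproduced below from memory only as an illustration of its role; the S values in the example are assumed, and AP-42 Section 5.2 should be consulted for the authoritative form and tabulated factors.

      def loading_loss_lb_per_1000gal(s_factor, vapor_pressure_psia,
                                      mol_weight_lb_per_lbmol, temp_rankine):
          """L = 12.46 * S * P * M / T  (lb of vapor emitted per 1000 gal loaded)."""
          return 12.46 * s_factor * vapor_pressure_psia * mol_weight_lb_per_lbmol / temp_rankine

      # Gasoline-like example with assumed S values: splash fill vs. submerged fill
      print(loading_loss_lb_per_1000gal(1.45, 5.2, 66.0, 540.0))   # splash loading (S assumed)
      print(loading_loss_lb_per_1000gal(0.60, 5.2, 66.0, 540.0))   # submerged loading (S assumed)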

  19. A comparison of socio-economic loss analysis from the 2013 Haiyan Typhoon and Bohol Earthquake events in the Philippines in near real-time

    NASA Astrophysics Data System (ADS)

    Daniell, James; Mühr, Bernhard; Kunz-Plapp, Tina; Brink, Susan A.; Kunz, Michael; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    In the aftermath of a disaster, the extent of the socioeconomic loss (fatalities, homelessness and economic losses) is often not known, and it may take days before a reasonable estimate is available. Using the technique of socio-economic fragility functions (Daniell, 2014), developed through a regression of socio-economic indicators through time against historical empirical loss vs. intensity data, a first estimate can be established. With more information from the region as the disaster unfolds, a more detailed estimate can be provided via a calibration of the initial loss estimate parameters. In 2013, two main disasters hit the Philippines: the Bohol earthquake in October and the Haiyan typhoon in November. Although the two disasters were contrasting and hit different regions, the same generalised methodology was used for the initial rapid estimates and for updating the disaster loss estimate through time. The CEDIM Forensic Disaster Analysis Group of KIT and GFZ produced 6 reports for Bohol and 2 reports for Haiyan detailing various aspects of the disasters, from the losses to building damage, the socioeconomic profile, and also the social networking and disaster response. This study focusses on the loss analysis undertaken. The following technique was used: (1) a regression of historical earthquake and typhoon losses for the Philippines was examined using the CATDAT Damaging Earthquakes Database and various Philippine databases, respectively; (2) the historical intensity impact of the examined events was placed in a GIS environment in order to allow correlation with the population and capital stock database from 1900-2013 to create a loss function, and the modified human development index from 1900-2013 was also used to calibrate events through time; (3) the earthquake intensity and the wind speed intensity from the 2013 events were used, together with the 2013 capital stock and population, in order to calculate the number of fatalities (except in Haiyan), homeless and

  20. Estimation of strong ground motions from hypothetical earthquakes on the Cascadia subduction zone, Pacific Northwest

    USGS Publications Warehouse

    Heaton, T.H.; Hartzell, S.H.

    1989-01-01

    Strong ground motions are estimated for the Pacific Northwest assuming that large shallow earthquakes, similar to those experienced in southern Chile, southwestern Japan, and Colombia, may also occur on the Cascadia subduction zone. Fifty-six strong motion recordings for twenty-five subduction earthquakes of Ms ≥ 7.0 are used to estimate the response spectra that may result from earthquakes of Mw < 8 1/4. Large variations in observed ground motion levels are noted for a given site distance and earthquake magnitude. When compared with motions that have been observed in the western United States, large subduction zone earthquakes produce relatively large ground motions at surprisingly large distances. An earthquake similar to the 22 May 1960 Chilean earthquake (Mw 9.5) is the largest event that is considered to be plausible for the Cascadia subduction zone. This event has a moment which is two orders of magnitude larger than the largest earthquake for which we have strong motion records. The empirical Green's function technique is used to synthesize strong ground motions for such giant earthquakes. Observed teleseismic P-waveforms from giant earthquakes are also modeled using the empirical Green's function technique in order to constrain model parameters. The teleseismic modeling in the period range of 1.0 to 50 sec strongly suggests that fewer Green's functions should be randomly summed than is required to match the long-period moments of giant earthquakes. It appears that a large portion of the moment associated with giant earthquakes occurs at very long periods that are outside the frequency band of interest for strong ground motions. Nevertheless, the occurrence of a giant earthquake in the Pacific Northwest may produce quite strong shaking over a very large region. © 1989 Birkhäuser Verlag.
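
    The empirical Green's function approach described here scales and sums recordings of a smaller event to synthesize the motion of a much larger one. The sketch below is a deliberately minimal summation, assuming a toy small-event record and random rupture-propagation delays; it illustrates the bookkeeping only, not the specific summation scheme used by Heaton and Hartzell.

      import numpy as np

      def egf_sum(green_fn, n_sub, dt, rupture_duration, seed=0):
          # Schematic empirical Green's function summation: superpose n_sub copies
          # of a small-event record with random delays spread over the rupture
          # duration of the target event (delays and n_sub are illustrative choices).
          rng = np.random.default_rng(seed)
          delays = rng.uniform(0.0, rupture_duration, size=n_sub)
          n_out = len(green_fn) + int(rupture_duration / dt) + 1
          synth = np.zeros(n_out)
          for d in delays:
              i0 = int(round(d / dt))
              synth[i0:i0 + len(green_fn)] += green_fn
          return synth

      dt = 0.02
      t = np.arange(0, 20, dt)
      small_event = np.exp(-t / 2.0) * np.sin(2 * np.pi * 1.0 * t)   # toy record
      big_event = egf_sum(small_event, n_sub=100, dt=dt, rupture_duration=60.0)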

  1. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  2. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    NASA Astrophysics Data System (ADS)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, and similarities in temporal variation of seismic activity and seismic gaps can be examined; the time-variable earthquake hazard can also be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalence relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards over different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.

  3. Mathematical models for estimating earthquake casualties and damage cost through regression analysis using matrices

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Bautista, L. A.; Baccay, E. B.

    2014-04-01

    The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, number of injured persons, affected families and total cost of damage. Regression models were developed to quantify the direct damage from earthquakes to human beings and property, given the magnitude, intensity, depth of focus, location of epicentre and time duration. The researchers formulated the models through regression analysis using matrices, with α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines during the inclusive years 1968 to 2012. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology, and data on damage and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. The mathematical models made are as follows: This study will be of great value in emergency planning and in initiating and updating programs for earthquake hazard reduction in the Philippines, which is an earthquake-prone country.
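
    "Regression analysis using matrices" is the standard least-squares formulation beta = (X'X)^(-1) X'y. The sketch below fits such a model to hypothetical records of magnitude, focal depth, and casualties; the predictor set and data are placeholders, not the Philippine dataset or the models reported in the study.

      import numpy as np

      # Hypothetical design matrix: intercept, magnitude, focal depth (km).
      X = np.array([[1.0, 7.8, 33.0],
                    [1.0, 6.5, 10.0],
                    [1.0, 7.1, 25.0],
                    [1.0, 6.9, 40.0],
                    [1.0, 7.6, 15.0]])
      y = np.array([1200.0, 50.0, 300.0, 90.0, 800.0])   # hypothetical casualty counts

      # Normal equations, beta = (X^T X)^(-1) X^T y (np.linalg.lstsq is the
      # numerically safer route for larger, ill-conditioned problems).
      beta = np.linalg.solve(X.T @ X, X.T @ y)
      predicted = X @ beta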

  4. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the spatial distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product of the seismic source tensor with the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not treated statistically as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Quantifying the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate

  5. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid estimation of casualties after the event for humanitarian response. Both of these events resulted in surprisingly high numbers of deaths, casualties and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people with serious or moderate injuries and 100,000 people left homeless in this mountainous region of China. In such events relief efforts can benefit significantly from the availability of rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
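
    The semi-empirical structure described here (damage rates by building class, then fatality rates conditioned on damage) can be written as a sum over classes and damage states. The sketch below assumes hypothetical building-class shares, collapse probabilities, and lethality rates; the numbers are purely illustrative, not the calibrated CEQID parameters.

      # Expected fatalities = occupants * sum over classes of
      #   share(class) * P(damage | class, intensity) * lethality(class, damage)
      building_classes = {
          # hypothetical shares of occupants by construction type
          "adobe":               0.30,
          "unreinforced_brick":  0.45,
          "reinforced_concrete": 0.25,
      }
      # Hypothetical P(collapse | class) at the scenario intensity, and lethality given collapse.
      collapse_prob = {"adobe": 0.20, "unreinforced_brick": 0.10, "reinforced_concrete": 0.02}
      lethality     = {"adobe": 0.15, "unreinforced_brick": 0.10, "reinforced_concrete": 0.08}

      exposed_population = 500_000
      expected_fatalities = exposed_population * sum(
          share * collapse_prob[cls] * lethality[cls]
          for cls, share in building_classes.items()
      )
      print(round(expected_fatalities))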

  6. Improving Estimates of Coseismic Subsidence from southern Cascadia Subduction Zone Earthquakes at northern Humboldt Bay, California

    NASA Astrophysics Data System (ADS)

    Padgett, J. S.; Engelhart, S. E.; Hemphill-Haley, E.; Kelsey, H. M.; Witter, R. C.

    2015-12-01

    Geological estimates of subsidence from past earthquakes help to constrain Cascadia subduction zone (CSZ) earthquake rupture models. To improve subsidence estimates for past earthquakes along the southern CSZ, we apply transfer function analysis to microfossils from 3 intertidal marshes in northern Humboldt Bay, California, ~60 km north of the Mendocino Triple Junction. The transfer function method uses elevation-dependent intertidal foraminiferal and diatom assemblages to reconstruct relative sea-level (RSL) change indicated by shifts in microfossil assemblages. We interpret stratigraphic evidence associated with sudden shifts in microfossils to reflect sudden RSL rise due to subsidence during past CSZ earthquakes. Laterally extensive (>5 km) and sharp mud-over-peat contacts beneath marshes at Jacoby Creek, Mad River Slough, and McDaniel Slough demonstrate widespread earthquake subsidence in northern Humboldt Bay. C-14 ages of plant macrofossils taken from above and below three contacts that correlate across all three sites provide estimates of the times of subsidence at ~250 yr BP, ~1300 yr BP and ~1700 yr BP. Two further contacts observed at only two sites provide evidence for subsidence during possible CSZ earthquakes at ~900 yr BP and ~1100 yr BP. Our study contributes 20 AMS radiocarbon ages of identifiable plant macrofossils that improve estimates of the timing of past earthquakes along the southern CSZ. We anticipate that our results will provide more accurate and precise reconstructions of RSL change induced by southern CSZ earthquakes. Prior to our work, studies in northern Humboldt Bay provided subsidence estimates with vertical uncertainties >±0.5 m, too imprecise to adequately constrain earthquake rupture models. Our method, applied recently in coastal Oregon, has shown that subsidence during past CSZ earthquakes can be reconstructed with a precision of ±0.3 m and substantially improves constraints on rupture models used for seismic hazard

  7. Estimating surface faulting impacts from the shakeout scenario earthquake

    USGS Publications Warehouse

    Treiman, J.A.; Ponti, D.J.

    2011-01-01

    An earthquake scenario, based on a kinematic rupture model, has been prepared for a Mw 7.8 earthquake on the southern San Andreas Fault. The rupture distribution, in the context of other historic large earthquakes, is judged reasonable for the purposes of this scenario. This model is used as the basis for generating a surface rupture map and for assessing potential direct impacts on lifelines and other infrastructure. Modeling the surface rupture involves identifying fault traces on which to place the rupture, assigning slip values to the fault traces, and characterizing the specific displacements that would occur to each lifeline impacted by the rupture. Different approaches were required to address variable slip distribution in response to a variety of fault patterns. Our results, involving judgment and experience, represent one plausible outcome and are not predictive because of the variable nature of surface rupture. © 2011, Earthquake Engineering Research Institute.

  8. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined quiescence and activation forecasting models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes with magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan (the western, eastern, and northeastern Taiwan regions) using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually obtained for clustered events, such as events with foreshocks and events that occur within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values occur around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but they also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and the probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake, whereas the probability obtained from the activation model increases as the large earthquakes occur. These results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  9. Bayesian estimation of system reliability under asymmetric loss

    NASA Astrophysics Data System (ADS)

    Thompson, Ronald David

    This research is concerned with estimating the reliability of a k-out-of-p system when the lifetimes of its p components are iid, when subjective beliefs about the behavior of the system's individual components are available, and when losses corresponding to overestimation and underestimation errors can be approximated by a suitable family of asymmetric loss functions. Point estimates for such systems are discussed in the context of Bayes estimation with respect to loss functions. A set of properties is proposed as being minimal properties that all loss functions appropriate to reliability estimation might satisfy. Several families of asymmetric loss functions that satisfy these minimal properties are discussed, and their corresponding posterior Bayes estimators are derived. One of these families, squarex loss functions, is a generalization of linex loss functions. The concept of loss robustness is discussed in the context of parametric families of asymmetric loss functions. As an application, the reliability of O-rings critical to the 1986 catastrophic failure of the Space Shuttle Challenger is estimated. Point estimation of negative exponential stress-strength k-out-of-p systems with respect to reference priors is discussed in this context of asymmetric loss functions.
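
    Under the linex (linear-exponential) loss L(d, θ) = b[exp(a(d − θ)) − a(d − θ) − 1], the Bayes estimator has the well-known closed form d* = −(1/a) ln E[exp(−aθ) | data]. The sketch below evaluates that estimator by Monte Carlo from posterior draws of a reliability parameter; the Beta posterior used here is only a stand-in, not one of the system-reliability posteriors discussed in the thesis.

      import numpy as np

      def linex_bayes_estimate(posterior_draws, a):
          # Bayes estimator under linex loss: d* = -(1/a) * ln E[exp(-a*theta)].
          # a > 0 penalizes overestimation more heavily; a < 0 penalizes underestimation.
          return -np.log(np.mean(np.exp(-a * posterior_draws))) / a

      rng = np.random.default_rng(1)
      # Hypothetical posterior for a component reliability (stand-in, not the thesis model).
      draws = rng.beta(18, 3, size=100_000)

      print(round(float(np.mean(draws)), 3))                       # posterior mean (squared-error loss)
      print(round(float(linex_bayes_estimate(draws, a=5.0)), 3))   # pulled downward: overestimation costly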

  10. Uncertainty of earthquake losses due to model uncertainty of input ground motions in the Los Angeles area

    USGS Publications Warehouse

    Cao, T.; Petersen, M.D.

    2006-01-01

    In a recent study we used the Monte Carlo simulation method to evaluate the ground-motion uncertainty of the 2002 update of the California probabilistic seismic hazard model. The resulting ground-motion distribution is used in this article to evaluate the contribution of the hazard model to the uncertainty in earthquake loss ratio, the ratio of the expected loss to the total value of a structure. We use the Hazards U.S. (HAZUS) methodology for loss estimation because it is a widely used and publicly available risk model and intended for regional studies by public agencies and for use by governmental decision makers. We found that the loss ratio uncertainty depends not only on the ground-motion uncertainty but also on the mean ground-motion level. The ground-motion uncertainty, as measured by the coefficient of variation (COV), is amplified when converting to the loss ratio uncertainty because loss increases concavely with ground motion. By comparing the ground-motion uncertainty with the corresponding loss ratio uncertainty for the structural damage of light wood-frame buildings in Los Angeles area, we show that the COV of loss ratio is almost twice the COV of ground motion with a return period of 475 years around the San Andreas fault and other major faults in the area. The loss ratio for the 2475-year ground-motion maps is about a factor of three higher than for the 475-year maps. However, the uncertainties in ground motion and loss ratio for the longer return periods are lower than for the shorter return periods because the uncertainty parameters in the hazard logic tree are independent of the return period, but the mean ground motion increases with return period.
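
    The amplification of the coefficient of variation (COV) when ground-motion uncertainty is pushed through a nonlinear loss-ratio curve can be illustrated with a simple Monte Carlo experiment. The sketch below assumes a lognormal ground-motion distribution and a generic power-law loss-ratio curve whose local log-log slope is near 2; neither the curve nor the parameters are those of the HAZUS fragility model used in the study.

      import numpy as np

      def loss_ratio(pga_g, ref_pga=2.0, exponent=2.0):
          # Placeholder loss-ratio curve (not the HAZUS curves): a power law with
          # log-log slope ~2 over the shaking range sampled below, capped at 1.
          return np.minimum((pga_g / ref_pga) ** exponent, 1.0)

      rng = np.random.default_rng(0)
      median_pga, sigma_ln = 0.4, 0.3              # hypothetical hazard-model output (g)
      pga = rng.lognormal(np.log(median_pga), sigma_ln, size=200_000)
      loss = loss_ratio(pga)

      cov = lambda x: float(np.std(x) / np.mean(x))
      # With a log-log slope near 2 and moderate dispersion, the loss-ratio COV comes
      # out roughly twice the ground-motion COV, echoing the amplification noted above.
      print(round(cov(pga), 2), round(cov(loss), 2))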

  11. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  12. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; considering such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced a statistical model of the magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude where the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of the Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameters, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
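
    The detected magnitude-frequency model referred to here is the product of the Gutenberg-Richter exponential law and a detection-rate function, often taken as a cumulative normal; the parameter mu below is the magnitude at which 50% of events are detected. The sketch evaluates that density for illustrative parameter values, not those estimated from the JMA catalogue.

      import numpy as np
      from math import erf, sqrt

      def detection_rate(m, mu, sigma):
          # Cumulative-normal detection rate q(M); mu is the magnitude of 50% detection.
          return 0.5 * (1.0 + erf((m - mu) / (sigma * sqrt(2.0))))

      def detected_density(m, b, mu, sigma):
          # Unnormalized density of *detected* magnitudes:
          # Gutenberg-Richter term 10^(-b*M) times the detection rate q(M).
          return 10.0 ** (-b * m) * detection_rate(m, mu, sigma)

      # Illustrative values (placeholders, not estimates from the study).
      b, mu, sigma = 0.9, 1.2, 0.35
      mags = np.arange(0.0, 5.01, 0.1)
      dens = np.array([detected_density(m, b, mu, sigma) for m in mags])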

  13. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to the failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
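
    The Brownian passage-time density has the closed form f(t; μ, α) = sqrt(μ / (2π α² t³)) exp(−(t − μ)² / (2 μ α² t)), and the hazard rate is f(t)/(1 − F(t)). The sketch below evaluates both numerically for the generic aperiodicity α = 0.5 quoted in the abstract, integrating the density for the CDF; the mean recurrence time chosen is arbitrary.

      import numpy as np

      def bpt_pdf(t, mu, alpha):
          # Brownian passage-time (inverse Gaussian) density, mean mu, aperiodicity alpha.
          return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
                 np.exp(-(t - mu) ** 2 / (2.0 * mu * alpha**2 * t))

      mu, alpha = 25.0, 0.5                      # e.g., a 25-yr mean recurrence (arbitrary)
      t = np.linspace(0.01, 150.0, 20_000)
      pdf = bpt_pdf(t, mu, alpha)
      cdf = np.cumsum(pdf) * (t[1] - t[0])       # simple numerical CDF
      hazard = pdf / (1.0 - cdf)

      # For alpha = 0.5 the hazard levels off near 2/mu at large t, as stated in the
      # abstract; the printed value (hazard * mu at the end of the grid) is close to 2.
      print(round(float(hazard[-1]) * mu, 2))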

  14. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and the 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions, such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates that reproduce the observed displacements. The areas of highest modeled slip in the paleoearthquakes are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable to a wider range of applications.
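
    A genetic-algorithm slip estimator of this kind searches over candidate slip vectors, scoring each by the misfit between predicted and observed surface displacements through a linear forward model d = G s. The sketch below is a deliberately minimal genetic algorithm (truncation selection, single-point crossover, Gaussian mutation) run on a hypothetical Green's-function matrix and synthetic data; it illustrates the search loop only, not the published Genetic Algorithm Slip Estimator.

      import numpy as np

      rng = np.random.default_rng(0)
      n_patch, n_obs = 12, 20
      G = rng.uniform(0.0, 1.0, size=(n_obs, n_patch)) / n_patch   # hypothetical Green's functions
      true_slip = np.abs(rng.normal(2.0, 1.0, n_patch))
      d_obs = G @ true_slip + rng.normal(0.0, 0.01, n_obs)         # synthetic "coral" displacements

      def misfit(slip):
          return np.sum((G @ slip - d_obs) ** 2)

      # Minimal genetic algorithm: fittest half survives, children by crossover + mutation.
      pop = np.abs(rng.normal(2.0, 1.0, size=(60, n_patch)))
      for _ in range(300):
          order = np.argsort([misfit(ind) for ind in pop])
          parents = pop[order[: len(pop) // 2]]
          children = []
          for _ in range(len(pop) - len(parents)):
              a, b = parents[rng.integers(len(parents), size=2)]
              cut = rng.integers(1, n_patch)
              child = np.concatenate([a[:cut], b[cut:]]) + rng.normal(0.0, 0.05, n_patch)
              children.append(np.clip(child, 0.0, None))           # slip kept non-negative
          pop = np.vstack([parents, np.array(children)])

      best = pop[int(np.argmin([misfit(ind) for ind in pop]))]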

  15. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of earthquakes of different magnitudes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and uses best-estimate values, taken from literature sources, of the probability distribution of different magnitude earthquakes recurring on a fault. Our study aims to apply this model to the Taipei metropolitan area, with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years 13.54% and within 300 years 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probabilities of Taiwan residents experiencing heart disease or malignant neoplasm are 11.5% and 29%, respectively. The inference of this study is that the calculated risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is just as great as that of suffering from a heart attack or other health ailments.
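
    For a Poisson recurrence model with mean return period T, the probability of at least one event in an exposure time t is P = 1 − exp(−t/T). The sketch below evaluates that expression for the 20-, 79-, and 300-year windows mentioned above; note that the percentages quoted in the abstract come from the authors' revised model and will not necessarily match this simplest Poisson form.

      import math

      def poisson_occurrence_probability(exposure_years, return_period_years):
          # P(at least one event in t years) = 1 - exp(-t / T) for a Poisson process.
          return 1.0 - math.exp(-exposure_years / return_period_years)

      return_period = 543.0    # magnitude-7 return period quoted in the abstract (years)
      for window in (20, 79, 300):
          p = poisson_occurrence_probability(window, return_period)
          print(f"{window:3d} yr: {100 * p:5.2f} %")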

  16. Estimation of flood losses to agricultural crops using remote sensing

    NASA Astrophysics Data System (ADS)

    Tapia-Silva, Felipe-Omar; Itzerott, Sibylle; Foerster, Saskia; Kuhlmann, Bernd; Kreibich, Heidi

    2011-01-01

    The estimation of flood damage is an important component of risk-oriented flood design, risk mapping, financial appraisals and comparative risk analyses. However, research on flood loss modelling, especially in the agricultural sector, has not yet gained much attention. Agricultural losses strongly depend on the crops affected, which need to be predicted accurately. Therefore, three different methods to predict flood-affected crops using remote sensing and ancillary data were developed, applied and validated. These methods are: (a) a hierarchical classification based on standard curves of spectral response using satellite images, (b) disaggregation of crop statistics using a Monte Carlo simulation and probabilities of crops to be cultivated on specific soils and (c) analysis of crop rotation with data mining Net Bayesian Classifiers (NBC) using soil data and crop data derived from a multi-year satellite image analysis. A flood loss estimation model for crops was applied and validated in flood detention areas (polders) at the Havel River (Untere Havelniederung) in Germany. The polders were used for temporary storage of flood water during the extreme flood event in August 2002. The flood loss to crops during the extreme flood event in August 2002 was estimated based on the results of the three crop prediction methods. The loss estimates were then compared with official loss data for validation purposes. The analysis of crop rotation with NBC obtained the best result, with 66% of crops correctly classified. The accuracy of the other methods reached 34% with identification using Normalized Difference Vegetation Index (NDVI) standard curves and 19% using disaggregation of crop statistics. The results were confirmed by evaluating the loss estimation procedure, in which the damage model using affected crops estimated by NBC showed the smallest overall deviation (1%) when compared to the official losses. Remote sensing offers various possibilities for the improvement of
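
    The NDVI standard-curve method mentioned above rests on the usual index NDVI = (NIR − Red) / (NIR + Red), computed per pixel and tracked through the growing season. The sketch below computes NDVI for a toy two-band image; the band values and the note on curve matching are placeholders, not the study's classification scheme.

      import numpy as np

      def ndvi(nir, red, eps=1e-9):
          # Normalized Difference Vegetation Index, computed per pixel.
          return (nir - red) / (nir + red + eps)

      # Toy 2x2 reflectance "image" (placeholder values).
      nir = np.array([[0.45, 0.50], [0.20, 0.60]])
      red = np.array([[0.10, 0.08], [0.15, 0.05]])
      print(np.round(ndvi(nir, red), 2))

      # A standard-curve classifier would compare each pixel's NDVI time series against
      # reference seasonal curves per crop and assign the closest match (not shown here).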

  17. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  18. The effect of band loss on estimates of annual survival

    USGS Publications Warehouse

    Nelson, Louis J.; Anderson, David R.; Burnham, Kenneth P.

    1980-01-01

    Banding has proven to be a useful technique in the study of the population dynamics of avian species. However, band loss has long been recognized as a potential problem (Hickey, 1952; Ludwig, 1967). Recently, Brownie et al. (1978) presented 14 models based on an array of explicit assumptions for the analysis of band recovery data. Various estimation models (assumption sets) allowed survival and/or recovery rates to be (a) constant, (b) time-specific, or (c) time- and age-specific. Optimal inference methods were employed, and statistical tests of critical assumptions were developed and emphasized. The methods of Brownie et al. (1978), as with all previously published methods of which we are aware, assume no loss of bands during the study. However, some band loss is certain to occur, and this potentially biases the estimates of annual survival rates whatever the analysis method. Few empirical studies have estimated band loss rates (a notable exception is Ludwig, 1967); consequently, for almost all band recovery data, the exact rate of band loss is unknown. In this paper we investigate the bias in estimates of annual survival rates due to varying degrees of hypothesized band loss. Our main results are based on perhaps the most useful model, originally developed by Seber (1970), for estimation of annual survival rate. Inferences are made concerning the bias of estimated survival rates in other models because the structure of these estimators is similar.

  19. Coastal land loss and gain as potential earthquake trigger mechanism in SCRs

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2007-12-01

    In stable continental regions (SCRs), historic data show that earthquakes can be triggered by natural tectonic sources in the interior of the crust and also by sources stemming from the Earth's surface or shallow subsurface. Building on this framework, the following abstract discusses both as potential sources that might have triggered the 2007 ML4.2 Folkestone earthquake in Kent, England. Folkestone, located along the southeast coast of Kent in England, is a mature aseismic region. However, a shallow earthquake with a local magnitude of ML = 4.2 occurred on 28 April 2007 at 07:18 UTC about 1 km east of Folkestone (51.008° N, 1.206° E), between Dover and New Romney. The epicentral error is about ±5 km. While coastal land loss has major effects towards the southwest and the northeast of Folkestone, observations suggest that erosion and landsliding do not occur in the immediate Folkestone city area (<1 km). Furthermore, erosion removes rock material from the surface; this mass reduction decreases the gravitational stress component and would bring a fault away from failure, given a tectonic normal and strike-slip fault regime. In contrast, land gain by geoengineering (e.g., shingle accumulation) in the harbor of Folkestone dates back to 1806. The accumulated mass of sand and gravel amounted to 2.8·10⁹ kg (2.8 Mt) in 2007. This concentrated mass change, less than 1 km away from the epicenter of the mainshock, was able to change the tectonic stress in the strike-slip/normal stress regime. Since 1806, shear and normal stresses increased at most on oblique faults dipping 60±10°. The stresses reached values ranging between 1.0 kPa and 30.0 kPa at depths of up to 2 km, which are critical for triggering earthquakes. Furthermore, the ratio between holding and driving forces continuously decreased for 200 years. In conclusion, coastal engineering at the surface most likely dominates as the potential trigger mechanism for the 2007 ML4.2 Folkestone earthquake. It can be anticipated that
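
    The order of magnitude of the stress change from a concentrated surface load can be checked with the Boussinesq point-load solution, Δσz = 3 F z³ / (2π R⁵) with R² = r² + z², which reduces to 3F/(2π z²) directly beneath the load. The sketch below applies it to the 2.8·10⁹ kg shingle mass quoted in the abstract; treating the accumulation as a point load is a deliberate simplification, and the result is only a rough consistency check against the 1-30 kPa range stated there.

      import math

      def boussinesq_sigma_z(force_n, depth_m, radial_offset_m=0.0):
          # Vertical stress increase at depth from a vertical point load on a half-space:
          # sigma_z = 3 F z^3 / (2 pi R^5), with R^2 = r^2 + z^2.
          r2 = radial_offset_m**2 + depth_m**2
          return 3.0 * force_n * depth_m**3 / (2.0 * math.pi * r2**2.5)

      mass_kg = 2.8e9                 # accumulated shingle mass quoted in the abstract
      force = mass_kg * 9.81          # weight, N

      for depth in (1000.0, 1500.0, 2000.0):
          print(depth, round(boussinesq_sigma_z(force, depth) / 1e3, 1), "kPa")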

  20. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    SciTech Connect

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  1. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimation; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  2. USGS approach to real-time estimation of earthquake-triggered ground failure - Results of 2015 workshop

    USGS Publications Warehouse

    Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.

    2016-01-01

    The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.

  4. A Probabilistic Estimate of the Most Perceptible Earthquake Magnitudes in the NW Himalaya and Adjoining Regions

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Koravos, G. Ch.; Tsapanos, T. M.; Vougiouka, G. E.

    2015-02-01

    NW Himalaya and its neighboring region (25°-40°N and 65°-85°E) is one of the most seismically hazardous regions in the Indian subcontinent, a region that has historically experienced large to great damaging earthquakes. In the present study, the most perceptible earthquake magnitudes, Mp, are estimated for intensity I = VII, horizontal peak ground acceleration a = 300 cm/s² and horizontal peak ground velocity v = 10 cm/s in 28 seismogenic zones using the two earthquake recurrence models of Kijko and Sellevoll (Bulletin of the Seismological Society of America 82(1):120-134, 1992) and Gumbel's third asymptotic distribution of extremes (GIII). Both methods deal with maximum magnitudes. The earthquake perceptibility is calculated by combining earthquake recurrence models with ground motion attenuation relations at a particular level of intensity, acceleration and velocity. The estimated results reveal that the values of Mp for velocity v = 10 cm/s are higher than the corresponding values for intensity I = VII and acceleration a = 300 cm/s². It is also observed that the differences in perceptible magnitudes calculated by the Kijko-Sellevoll method and GIII statistics reach significantly high values, up to 0.7, 0.6 and 1.7 for intensity, acceleration and velocity, respectively, revealing the importance of earthquake recurrence model selection. The estimated most perceptible earthquake magnitudes, Mp, in the present study vary from MW 5.1 to 7.7 over the entire study area. Results for perceptible magnitudes are also represented in the form of spatial maps in 28 seismogenic zones for the aforementioned threshold levels of intensity, acceleration and velocity, estimated from the two recurrence models. The spatial maps show that the Quetta region of Pakistan, the Hindukush-Pamir Himalaya, the Caucasus mountain belt and the Himalayan frontal thrust belt (Kashmir-Kangra-Uttarkashi-Chamoli regions) exhibit higher values of the most perceptible earthquake magnitudes (M

  5. Stress transfer in earthquakes, hazard estimation and ensemble forecasting: Inferences from numerical simulations

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Rundle, Paul B.; Donnellan, Andrea; Li, P.; Klein, W.; Morein, Gleb; Turcotte, D. L.; Grant, Lisa

    2006-02-01

    "research-quality" forecasts that we discuss here. Finally, we provide a brief discussion of future problems and issues related to the development of ensemble earthquake hazard estimation and forecasting techniques.

  6. A discussion of the socio-economic losses and shelter impacts from the Van, Turkey Earthquakes of October and November 2011

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Kunz-Plapp, T.; Vervaeck, A.; Muehr, B.; Markus, M.

    2012-04-01

    The Van earthquake of 2011 hit at 10:41 GMT (13:41 local) on Sunday, October 23rd, 2011. It was a Mw7.1-7.3 event located at a depth of around 10 km, with the epicentre located directly between Ercis (pop. 75,000) and Van (pop. 370,000). Since then, the CEDIM Forensic Analysis Group (using a team of seismologists, engineers, sociologists and meteorologists) and www.earthquake-report.com have reported on and analysed the Van event. In addition, many damaging aftershocks occurring after the main event were analysed, including a major aftershock centered in Van-Edremit on November 9th, 2011, which caused substantial additional losses. The province of Van has around 1.035 million people as of the last census. The Van province is one of the poorest in Turkey and has much inequality between the rural and urban centers, with an average HDI (Human Development Index) around that of Bhutan or Congo. The earthquakes are estimated to have caused 604 deaths (23 October) and 40 deaths (9 November), mostly due to falling debris and house collapse. In addition, total economic losses are estimated at between 1 billion and 4 billion TRY (approx. 555 million to 2.2 billion USD). This represents around 17 to 66% of the provincial GDP of the Van Province (approx. 3.3 billion USD) as of 2011. From the CATDAT Damaging Earthquakes Database, major earthquakes such as this one have occurred before: in the year 1111, an earthquake of magnitude around 6.5-7 caused major damage; in the year 1646 or 1648, Van was again struck by a M6.7 quake killing around 2000 people; in 1881, a M6.3 earthquake near Van killed 95 people; and in 1941, a M5.9 earthquake affected Ercis and Van, killing between 190 and 430 people. The years 1945-1946 as well as 1972 again brought damaging and casualty-bearing earthquakes to the Van province. In 1976, the Van-Muradiye earthquake struck the border region with a M7, killing around 3840 people and leaving around 51,000 people homeless. Key immediate lessons from similar historic

  7. Using Modified Mercalli Intensities to estimate acceleration response spectra for the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Boatwright, J.; Bundock, H.; Seekins, L.C.

    2006-01-01

    We derive and test relations between the Modified Mercalli Intensity (MMI) and the pseudo-acceleration response spectra at 1.0 and 0.3 s, SA(1.0 s) and SA(0.3 s), in order to map response spectral ordinates for the 1906 San Francisco earthquake. Recent analyses of intensity have shown that MMI ≥ 6 correlates both with peak ground velocity and with response spectra for periods from 0.5 to 3.0 s. We use these recent results to derive a linear relation between MMI and log SA(1.0 s), and we refine this relation by comparing the SA(1.0 s) estimated from Boatwright and Bundock's (2005) MMI map for the 1906 earthquake to the SA(1.0 s) calculated from recordings of the 1989 Loma Prieta earthquake. South of San Jose, the intensity distributions for the 1906 and 1989 earthquakes are remarkably similar, despite the difference in magnitude and rupture extent between the two events. We use recent strong motion regressions to derive a relation between SA(1.0 s) and SA(0.3 s) for a M7.8 strike-slip earthquake that depends on soil type, acceleration level, and source distance. We test this relation by comparing SA(0.3 s) estimated for the 1906 earthquake to SA(0.3 s) calculated from recordings of both the 1989 Loma Prieta and 1994 Northridge earthquakes, as functions of distance from the fault. © 2006, Earthquake Engineering Research Institute.

  8. Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models

    NASA Astrophysics Data System (ADS)

    Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.

    2012-04-01

    The present paper aims to apply hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation associated with the actual stress field, which is not accessible to direct observation. The models generalize hidden Markov models by considering the hidden process to be a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well-catalogued, seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied to Greece and its surrounding lands, using a complete data sample of strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 to the present. The earthquakes are grouped according to their magnitudes, and the cases of two and three magnitude ranges, with a corresponding number of states, are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality, and the confidence interval for the proposed estimator is given. We assume the state spaces of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed, and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time to the first occurrence of a strong anticipated earthquake is estimated and its confidence interval is calculated.

  9. The importance of in-situ observations for rapid loss estimates in the Euro-Med region

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Mazet Roux, G.; Gilles, S.

    2009-04-01

    A major (M>7) earthquake occurring in a densely populated area will inevitably cause significant damage, and, generally speaking, the poorer the country the higher the number of fatalities. It was clear to any earthquake monitoring agency that the M7.8 Wenchuan earthquake in May 2008 was a disaster as soon as its magnitude and location had been estimated. However, loss estimation for moderate to strong earthquakes (M5 to M6) occurring close to an urban area is much trickier, because the losses result from the convolution of many parameters (location, magnitude, depth, directivity, seismic attenuation, site effects, building vulnerability, distribution of the population at the time of the event…) which are either affected by non-negligible uncertainties or poorly constrained, at least at a global scale. Consider just one of these parameters, the epicentral location: in this magnitude range, the characteristic size of the potentially damaged area is comparable to the typical epicentral location uncertainty obtained in real time, i.e., 10 to 15 km. It is then not possible to discriminate in real time between an earthquake located right below a town, which could cause significant damage, and a location 15 km away, whose impact would be much lower. Clearly, if the uncertainties affecting each of the parameters are properly taken into account, the resulting loss scenarios for such earthquakes will range from no impact to very significant impact, and the results will not be of much use. The way to reduce the uncertainties on the loss estimates in such cases is then to collect in-situ information on the local shaking level and/or on the actual damage at a number of localities. In areas of low seismic hazard, the cost of installing dense accelerometric networks is, in practice, too high, and the only remaining solution is to rapidly collect observations of the damage. That is what the EMSC has been developing for the last few years by involving the Citizen in

  10. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional Mo or regional Mw. Conceptually, this method combines an earthquake source model with an attenuation model and geometrical spreading that account for the propagation, so that regional amplitudes of any phase and frequency can be utilized. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined from sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional Mo to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller events. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find the magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields the more physically meaningful quantity of seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
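
    Once a seismic moment has been recovered from the corrected amplitudes, it is conventionally recast as a moment magnitude with the standard relation Mw = (2/3)(log10 M0 − 9.1) for M0 in N·m. The sketch below applies that conversion and averages station-level estimates; the amplitude-to-moment correction itself is method- and calibration-specific and is represented here only by a placeholder list of per-station moments.

      import math

      def moment_to_mw(m0_newton_meters):
          # Standard moment magnitude relation, M0 in N*m: Mw = (2/3) * (log10(M0) - 9.1)
          return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

      # Placeholder per-station moment estimates (N*m), e.g. from corrected regional amplitudes.
      station_moments = [3.2e17, 2.8e17, 4.1e17, 3.5e17]

      station_mw = [moment_to_mw(m0) for m0 in station_moments]
      network_mw = sum(station_mw) / len(station_mw)
      print([round(m, 2) for m in station_mw], round(network_mw, 2))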

  11. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  12. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.
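
    Once the Green's functions of the elastic dislocation model have been computed, an inversion of this kind reduces to a regularized linear least-squares problem, d = G m, with slip constrained to be non-negative. The sketch below illustrates that step only; the Green's function matrix, data values, and smoothing weight are synthetic placeholders, not the authors' 3-D Cascadia model.

        import numpy as np
        from scipy.optimize import nnls

        # d = G * m : subsidence observations d (m), slip on fault patches m (m).
        # G[i, j] = subsidence at site i per metre of slip on patch j, which would come
        # from an elastic dislocation code; here it is a random placeholder.
        rng = np.random.default_rng(0)
        n_sites, n_patches = 20, 12
        G = np.abs(rng.normal(0.02, 0.01, size=(n_sites, n_patches)))

        true_slip = np.zeros(n_patches)
        true_slip[[2, 3, 8, 9]] = [6.0, 9.0, 4.0, 7.0]          # two high-slip patches
        d_obs = G @ true_slip + rng.normal(0.0, 0.05, n_sites)   # plus microfossil noise

        # Tikhonov damping: append lambda * I (or a roughness operator) to G so the
        # inversion prefers smooth, modest slip where the data do not require more.
        lam = 0.05
        G_aug = np.vstack([G, lam * np.eye(n_patches)])
        d_aug = np.concatenate([d_obs, np.zeros(n_patches)])

        slip_est, _ = nnls(G_aug, d_aug)   # non-negative least squares keeps slip >= 0
        print("Recovered slip per patch (m):", np.round(slip_est, 1))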

  13. Loss Estimation Modeling Of Scenario Lahars From Mount Rainier, Washington State, Using HAZUS-MH

    NASA Astrophysics Data System (ADS)

    Walsh, T. J.; Cakir, R.

    2011-12-01

    We have adapted lahar hazard zones developed by Hoblitt and others (1998), converted to digital data by Schilling and others (2008), into the appropriate format for HAZUS-MH, FEMA's loss estimation model. We assume that structures engulfed by cohesive lahars will suffer complete loss and that structures affected by post-lahar flooding will be appropriately modeled by the HAZUS-MH flood model. Another approach investigated is to estimate the momentum of lahars, calculate a lateral force, and apply the earthquake model, substituting the lahar lateral force for PGA. Our initial model used the HAZUS default data, which include estimates of building type and value from census data. This model estimated a loss of about 12 billion dollars for a repeat lahar similar to the Electron Mudflow down the Puyallup River. Because HAZUS data are based on census tracts, this estimated damage includes everything in the census tract, even buildings outside the lahar hazard zone. To correct this, we acquired assessors' data from all of the affected counties and converted them into HAZUS format. We then clipped them to the boundaries of the lahar hazard zone to more precisely delineate those properties actually at risk in each scenario. This refined our initial loss estimate to about 6 billion dollars, excluding building content values. We are also investigating rebuilding the lahar hazard zones by applying Lahar-Z to a more accurate topographic grid derived from recent lidar data acquired from the Puget Sound Lidar Consortium and Mount Rainier National Park. Final results of these models for the major drainages of Mount Rainier will be posted to the Washington Interactive Geologic Map (http://www.dnr.wa.gov/ResearchScience/Topics/GeosciencesData/Pages/geology_portal.aspx).

  14. Strong earthquake motion estimates for three sites on the U.C. San Diego campus

    SciTech Connect

    Day, S; Doroudian, M; Elgamal, A; Gonzales, S; Heuze, F; Lai, T; Minster, B; Oglesby, D; Riemer, M; Vernon, F; Vucetic, M; Wagoner, J; Yang, Z

    2002-05-07

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill, sample, and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling

  15. Time-Reversal to Estimate Focal Depth for Local, Shallow Earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Pearce, F.; Lu, R.; Toksoz, N.

    2007-12-01

    Current approaches for focal depth estimation are typically based on travel times and result in large uncertainties, primarily due to poor data coverage and inaccurate travel time picks. We propose an alternative method based on an adaptation of time-reversed acoustics (TRA). In the context of TRA theory, the autocorrelation of an earthquake recording can be thought of as the convolution of the source autocorrelation function with the autocorrelation of the Green's function describing propagation between source and receiver. Furthermore, the signal-to-noise ratio (S/N) of stationary phases in the Green's function may be improved by stacking the autocorrelations from many receivers. In this study, we employ such an approach to estimate the focal depth of shallow earthquakes based on the time lag between the direct P phase and the pP converted phase, which is assumed to be stationary across the receiver array. Focal depth estimates are easily obtained by multiplying half the pP time lag by the average velocity above the earthquake. We apply this methodology to estimate focal depths for several local earthquakes in Southern California. Earthquake recordings were obtained from the Southern California Earthquake Center (SCEC) for events with accurate, independent estimates of focal depth shallower than about 15 km and local magnitudes between 4.0 and 6.0. We observe pP arrivals in the stacked autocorrelations that correspond to the focal depths listed in the SCEC catalog for earthquakes located throughout Southern California. The predictive capability of the method is limited by S/N, defined as the pP amplitude divided by the background noise level of the stacked correlation. By considering subsets of the Southern California array, we explore the sensitivity of the S/N to station density and location (i.e., epicentral distance and azimuth). We find S/N is generally better for subsets of receivers within regions with relatively simple geologic structure. We are currently developing an extension
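
    The final step described above, converting the lag between the direct P arrival and the surface-reflected pP arrival into a depth, is simple once the lag has been picked from the stacked autocorrelation. A minimal sketch, using a synthetic stacked autocorrelation and an assumed average P velocity, is given below.

        import numpy as np

        def depth_from_pP_lag(stacked_autocorr, dt, v_avg, min_lag_s=1.0):
            """Focal depth from the pP-P lag in a stacked autocorrelation.

            stacked_autocorr -- autocorrelation stacked over many receivers
            dt               -- sample interval (s)
            v_avg            -- average P velocity above the source (km/s), assumed known
            min_lag_s        -- ignore the zero-lag peak and very short lags
            """
            lags = np.arange(len(stacked_autocorr)) * dt
            search = lags >= min_lag_s
            peak_lag = lags[search][np.argmax(stacked_autocorr[search])]
            # pP travels up to the surface and back down: extra path ~ 2 * depth / v,
            # so depth ~ v * lag / 2 (near-vertical incidence assumed).
            return 0.5 * v_avg * peak_lag

        # Synthetic example: a pP peak 3.2 s after zero lag, 6.0 km/s average velocity.
        dt = 0.01
        ac = np.zeros(1000)
        ac[0] = 1.0
        ac[int(3.2 / dt)] = 0.4
        print(f"Estimated focal depth: {depth_from_pP_lag(ac, dt, v_avg=6.0):.1f} km")  # ~9.6 km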

  16. Strong Earthquake Motion Estimates for Three Sites on the U.C. Riverside Campus

    SciTech Connect

    Archuleta, R.; Elgamal, A.; Heuze, F.; Lai, T.; Lavalle, D.; Lawrence, B.; Liu, P.C.; Matesic, L.; Park, S.; Riemar, M.; Steidl, J.; Vucetic, M.; Wagoner, J.; Yang, Z.

    2000-11-01

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling, geophysical

  17. Toward reliable automated estimates of earthquake source properties from body wave spectra

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda

    2016-06-01

    We develop a two-stage methodology for automated estimation of earthquake source properties from body wave spectra. An automated picking algorithm is used to window and calculate spectra for both P and S phases. Empirical Green's functions are stacked to minimize nongeneric source effects such as directivity and are used to deconvolve the spectra of target earthquakes for analysis. In the first stage, window lengths and frequency ranges are defined automatically from the event magnitude and used to get preliminary estimates of the P and S corner frequencies of the target event. In the second stage, the preliminary corner frequencies are used to update various parameters to increase the amount of data and overall quality of the deconvolved spectral ratios (target event over stacked Empirical Green's function). The obtained spectral ratios are used to estimate the corner frequencies, strain/stress drops, radiated seismic energy, apparent stress, and the extent of directivity for both P and S waves. The technique is applied to data generated by five small to moderate earthquakes in southern California at hundreds of stations. Four of the five earthquakes are found to have significant directivity. The developed automated procedure is suitable for systematic processing of large seismic waveform data sets with no user involvement.
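
    A common way to extract corner frequencies from deconvolved spectral ratios of the kind described above is to fit the ratio of two omega-squared (Brune-type) source spectra, with the moment ratio and the two corner frequencies as free parameters. The sketch below is a generic illustration of that fit on synthetic data, not the authors' pipeline.

        import numpy as np
        from scipy.optimize import curve_fit

        def brune_ratio(f, moment_ratio, fc_target, fc_egf):
            """Ratio of two omega-squared (Brune) source spectra."""
            return moment_ratio * (1.0 + (f / fc_egf) ** 2) / (1.0 + (f / fc_target) ** 2)

        # Synthetic spectral ratio: target fc = 2 Hz, EGF fc = 15 Hz, moment ratio 300.
        f = np.logspace(-0.5, 1.6, 80)   # ~0.3 to 40 Hz
        noise = np.exp(np.random.default_rng(1).normal(0.0, 0.05, f.size))
        ratio_obs = brune_ratio(f, 300.0, 2.0, 15.0) * noise

        popt, _ = curve_fit(brune_ratio, f, ratio_obs, p0=[100.0, 1.0, 10.0],
                            bounds=([1.0, 0.1, 0.1], [1e6, 50.0, 50.0]))
        moment_ratio, fc_target, fc_egf = popt
        print(f"Moment ratio ~{moment_ratio:.0f}, target fc ~{fc_target:.2f} Hz, EGF fc ~{fc_egf:.1f} Hz")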

  18. A Hierarchical Bayesian Approach for Earthquake Location and Data Uncertainty Estimation in 3D Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Arroucau, P.; Custodio, S.

    2014-12-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Yet those uncertainties are not always known precisely, and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth and origin time) are typically inverted for using seismic phase arrival times, but quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters, but also the P- and S-wave arrival time uncertainties, are inverted for, hence allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver which has the ability to take interfaces into account, so direct, reflected and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.

  19. Strong Earthquake Motion Estimates for the UCSB Campus, and Related Response of the Engineering 1 Building

    SciTech Connect

    Archuleta, R.; Bonilla, F.; Doroudian, M.; Elgamal, A.; Hueze, F.

    2000-06-06

    This is the second report on the UC/CLC Campus Earthquake Program (CEP), concerning the estimation of exposure of the U.C. Santa Barbara campus to strong earthquake motions (Phase 2 study). The main results of Phase 1 are summarized in the current report. This document describes the studies which resulted in site-specific strong motion estimates for the Engineering I site, and discusses the potential impact of these motions on the building. The main elements of Phase 2 are: (1) determining that a M 6.8 earthquake on the North Channel-Pitas Point (NCPP) fault is the largest threat to the campus. Its recurrence interval is estimated at 350 to 525 years; (2) recording earthquakes from that fault on March 23, 1998 (M 3.2) and May 14, 1999 (M 3.2) at the new UCSB seismic station; (3) using these recordings as empirical Green's functions (EGF) in scenario earthquake simulations which provided strong motion estimates (seismic syntheses) at a depth of 74 m under the Engineering I site; 240 such simulations were performed, each with the same seismic moment, but giving a broad range of motions that were analyzed for their mean and standard deviation; (4) laboratory testing, at U.C. Berkeley and U.C. Los Angeles, of soil samples obtained from drilling at the UCSB station site, to determine their response to earthquake-type loading; (5) performing nonlinear soil dynamic calculations, using the soil properties determined in-situ and in the laboratory, to calculate the surface strong motions resulting from the seismic syntheses at depth; (6) comparing these CEP-generated strong motion estimates to acceleration spectra based on the application of state-of-practice methods - the IBC 2000 code, UBC 97 code and Probabilistic Seismic Hazard Analysis (PSHA), this comparison will be used to formulate design-basis spectra for future buildings and retrofits at UCSB; and (7) comparing the response of the Engineering I building to the CEP ground motion estimates and to the design

  20. Earthquake slip vectors and estimates of present-day plate motions

    NASA Technical Reports Server (NTRS)

    Demets, Charles

    1993-01-01

    Two alternative models for present-day global plate motions are derived from subsets of the NUVEL-1 data in order to investigate the degree to which earthquake slip vectors affect the NUVEL-1 model and to provide estimates of present-day plate velocities that are independent of earthquake slip vectors. The data set used to derive the first model excludes subduction zone slip vectors. The primary purpose of this model is to demonstrate that the 240 subduction zone slip vectors in the NUVEL-1 data set do not greatly affect the plate velocities predicted by NUVEL-1. A data set that excludes all of the 724 earthquake slip vectors used to derive NUVEL-1 is used to derive the second model. This model is suitable as a reference model for kinematic studies that require plate velocity estimates unaffected by earthquake slip vectors. The slip-dependent slip vector bias along transform faults is investigated using the second model, and evidence is sought for biases in slip directions along spreading centers.

  1. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
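
    Because Mw is reported to be proportional to 2 log Top, the estimator reduces to a one-parameter calibration: fit the intercept on events with known Mw, then apply it in real time. A minimal sketch, with a made-up calibration set, is shown below.

        import numpy as np

        # Calibration events: (Top in seconds, catalogue Mw). Values are illustrative.
        top_cal = np.array([3.0, 6.5, 14.0, 30.0, 60.0])
        mw_cal = np.array([5.1, 5.8, 6.4, 7.0, 7.6])

        # Model: Mw = a + 2*log10(Top); fit only the intercept a, slope fixed at 2.
        a = np.mean(mw_cal - 2.0 * np.log10(top_cal))

        def mw_from_top(top_seconds):
            """Magnitude from the onset-to-peak-high-frequency-amplitude time."""
            return a + 2.0 * np.log10(top_seconds)

        print(f"Intercept a = {a:.2f}")
        print(f"Top = 40 s -> M ~ {mw_from_top(40.0):.1f}")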

  2. Dose estimates in a loss of lead shielding truck accident.

    SciTech Connect

    Dennis, Matthew L.; Osborn, Douglas M.; Weiner, Ruth F.; Heames, Terence John

    2009-08-01

    The radiological transportation risk and consequence program RADTRAN has recently added an updated loss of lead shielding (LOS) model to its most recent version, RADTRAN 6.0. The LOS model was used to determine dose estimates to first responders during a spent nuclear fuel transportation accident. Results varied according to the type of accident scenario, percent of lead slump, distance to the shipment, and time spent in the area. This document presents a method of creating dose estimates for first responders using RADTRAN with potential accident scenarios. This may be of particular interest in the event of high-speed accidents or fires involving cask punctures.

  3. A phase coherence approach to estimating the spatial extent of earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, Jessica C.; Ampuero, Jean-Paul

    2016-04-01

    We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources---if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur on similar wavelengths as the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components of a single station, which see the same apparent source time functions.

  4. Probabilistic estimates of surface coseismic slip and afterslip for Hayward fault earthquakes

    USGS Publications Warehouse

    Aagaard, Brad T.; Lienkaemper, James J.; Schwartz, David P.

    2012-01-01

    We examine the partition of long‐term geologic slip on the Hayward fault into interseismic creep, coseismic slip, and afterslip. Using Monte Carlo simulations, we compute expected coseismic slip and afterslip at three alinement array sites for Hayward fault earthquakes with nominal moment magnitudes ranging from about 6.5 to 7.1. We consider how interseismic creep might affect the coseismic slip distribution as well as the variability in locations of large and small slip patches and the magnitude of an earthquake for a given rupture area. We calibrate the estimates to be consistent with the ratio of interseismic creep rate at the alinement array sites to the geologic slip rate for the Hayward fault. We find that the coseismic slip at the surface is expected to comprise only a small fraction of the long‐term geologic slip. The median values of coseismic slip are less than 0.2 m in nearly all cases as a result of the influence of interseismic creep and afterslip. However, afterslip makes a substantial contribution to the long‐term geologic slip and may be responsible for up to 0.5–1.5 m (median plus one standard deviation [S.D.]) of additional slip following an earthquake rupture. Thus, utility and transportation infrastructure could be severely impacted by afterslip in the hours and days following a large earthquake on the Hayward fault that generated little coseismic slip. Inherent spatial variability in earthquake slip combined with the uncertainty in how interseismic creep affects coseismic slip results in large uncertainties in these slip estimates.

  5. Estimation of organic carbon loss potential in north of Iran

    NASA Astrophysics Data System (ADS)

    Shahriari, A.; Khormali, F.; Kehl, M.; Welp, G.; Scholz, Ch.

    2009-04-01

    The development of sustainable agricultural systems requires techniques that accurately monitor changes in the amount, nature and breakdown rate of soil organic matter and can compare the rate of breakdown of different plant or animal residues under different management systems. In this research, the study area includes the southern alluvial and piedmont plains of the Gorgan River, extending east to west in Golestan province, Iran. Samples from 10 soil series were collected from the cultivation depth (0-30 cm). Permanganate-oxidizable carbon (POC), an index of labile soil carbon, was used to show the potential loss of soil organic carbon; this index indicates the maximum loss of OC in a given soil. The maximum loss of OC for each soil series was estimated from POC and bulk density (BD). The potential loss of OC was estimated at between 1253263 and 2410813 g/ha of carbon. Stable organic constituents in the soil include humic substances and other organic macromolecules that are intrinsically resistant to microbial attack, or that are physically protected by adsorption on mineral surfaces or entrapment within clay and mineral aggregates. However, the (Clay + Silt)/OC ratio had a significant negative (p < 0.001) correlation with POC content, confirming the preserving effect of fine particles.

  6. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with epicentral distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore.
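
    The Bakun and Wentworth procedure applied here amounts to a grid search over trial epicentres: each intensity assignment is converted to a site magnitude through the inverted attenuation model, the trial magnitude MI is their mean, and the misfit across the grid defines the preferred location and confidence regions. The sketch below is schematic, using a generic attenuation form MMI = c0 + c1*M - c2*log10(distance) with placeholder coefficients rather than the calibrated French regional models.

        import numpy as np

        # Placeholder attenuation coefficients (NOT the calibrated French models).
        C0, C1, C2 = 1.5, 1.0, 2.5

        def magnitude_from_intensity(mmi, dist_km):
            """Invert MMI = C0 + C1*M - C2*log10(dist) for M at one site."""
            return (mmi - C0 + C2 * np.log10(dist_km)) / C1

        def grid_search(sites, trial_lons, trial_lats):
            """Return the lowest-misfit (rms, MI, lon, lat) over a grid of trial epicentres.

            sites -- list of (lon, lat, mmi) intensity assignments
            """
            km_per_deg = 111.0
            results = []
            for lat0 in trial_lats:
                for lon0 in trial_lons:
                    d = np.array([km_per_deg * np.hypot((lon - lon0) * np.cos(np.radians(lat0)),
                                                        lat - lat0) for lon, lat, _ in sites])
                    d = np.maximum(d, 5.0)             # avoid log of tiny distances
                    m_site = np.array([magnitude_from_intensity(mmi, dk)
                                       for (_, _, mmi), dk in zip(sites, d)])
                    mi = m_site.mean()
                    rms = np.sqrt(np.mean((m_site - mi) ** 2))
                    results.append((rms, mi, lon0, lat0))
            return min(results)                        # lowest scatter = preferred solution

        sites = [(5.0, 44.0, 7.0), (5.4, 44.3, 6.0), (4.6, 43.8, 6.5), (5.2, 43.5, 5.5)]
        rms, mi, lon0, lat0 = grid_search(sites, np.linspace(4.5, 5.5, 21), np.linspace(43.5, 44.5, 21))
        print(f"Preferred epicentre ({lon0:.2f}E, {lat0:.2f}N), MI = {mi:.1f}, rms = {rms:.2f}")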

  7. Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Nyst, M.

    2013-12-01

    We present a Risk Management perspective on earthquake recurrence on mature faults and the ways that it can be modeled. The specificities of Risk Management relative to Probabilistic Seismic Hazard Assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence, the fact that losses at all return periods are needed (and not only at discrete values of the return period), and the set-up of financial models which sometimes require modeling realizations of the order in which events may occur (i.e., simulated event dates are important, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against properties of three probability density functions (PDFs) widely used to characterize the inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process, and we propose that, with the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is to use a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs with their weights given by the posterior distribution. Finally we propose to the community: 1. a general debate on how best to incorporate our knowledge (e.g., from geology, geomorphology) on plausible models and model parameters, while preserving the information on what we do not know; and 2. the creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, special fluid characteristics (pH, compressibility, pressure), fault geometry, and other relevant properties, so that we can monitor whether trends emerge in terms of which model dominates in which conditions.
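
    The Bayesian combination advocated here, an inter-event time distribution written as a weighted sum of candidate PDFs with weights given by the posterior model evidence, can be sketched with standard distributions. In the example below, lognormal, Weibull, and Brownian Passage Time (inverse Gaussian) densities stand in for three widely used families; the parameters and weights are placeholders, not values constrained for any New Zealand fault.

        import numpy as np
        from scipy import stats

        MEAN_RI = 300.0   # mean recurrence interval in years (placeholder)

        def bpt_pdf(t, mu, alpha):
            """Brownian Passage Time = inverse Gaussian with mean mu and aperiodicity alpha."""
            lam = mu / alpha**2
            return stats.invgauss.pdf(t, mu=mu / lam, scale=lam)

        def mixture_pdf(t, weights):
            """Posterior-weighted combination of three inter-event time models."""
            w_ln, w_wb, w_bpt = weights
            ln = stats.lognorm.pdf(t, s=0.5, scale=MEAN_RI * np.exp(-0.5**2 / 2))
            wb = stats.weibull_min.pdf(t, c=2.0, scale=MEAN_RI / 0.886)  # scale so mean ~ MEAN_RI
            bpt = bpt_pdf(t, MEAN_RI, 0.5)
            return w_ln * ln + w_wb * wb + w_bpt * bpt

        t = np.linspace(1.0, 2000.0, 4000)
        pdf = mixture_pdf(t, weights=(0.3, 0.3, 0.4))   # weights = posterior model evidence
        # Conditional probability of an event in the next 50 years, given 250 years elapsed:
        dt_grid = t[1] - t[0]
        survival = pdf[t > 250.0].sum() * dt_grid
        window = pdf[(t > 250.0) & (t <= 300.0)].sum() * dt_grid
        print(f"P(event in next 50 yr | 250 yr elapsed) ~ {window / survival:.2f}")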

  8. Estimation of source parameters and scaling relations for moderate size earthquakes in North-West Himalaya

    NASA Astrophysics Data System (ADS)

    Kumar, Vikas; Kumar, Dinesh; Chopra, Sumer

    2016-10-01

    The scaling relations and self-similarity of the earthquake process have been investigated by estimating the source parameters of 34 moderate size earthquakes (mb 3.4-5.8) that occurred in the NW Himalaya. The spectral analysis of body waves from 217 accelerograms recorded at 48 sites has been carried out in the present analysis. Brune's ω^-2 model has been adopted for this purpose. The average ratio of the P-wave corner frequency, fc(P), to the S-wave corner frequency, fc(S), has been found to be 1.39, with fc(P) > fc(S) for 90% of the events analyzed here. This implies a shift in the corner frequency, in agreement with many other similar studies for different regions. The static stress drop values for all the events analyzed here lie in the range 10-100 bars, with an average stress drop of the order of 43 ± 19 bars for the region. This suggests that the likely estimate of the dynamic stress drop, which is 2-3 times the static stress drop, is in the range of about 80-120 bars, indicating relatively high seismic hazard in the NW Himalaya, as high-frequency strong ground motions are governed by the stress drop. The estimated values of stress drop do not show significant variation with seismic moment over the range 5 × 10^14 to 2 × 10^17 N m. This observation, along with the cube root scaling of corner frequencies, suggests self-similarity of the moderate size earthquakes in the region. The scaling relation between seismic moment and corner frequency, M0 fc^3 = 3.47 × 10^16 N m/s^3, estimated in the present study can be utilized to estimate the source dimension given the seismic moment of an earthquake for hazard assessment. The present study puts constraints on the important parameters, stress drop and source dimension, required for the synthesis of strong ground motion from future expected earthquakes in the region. Therefore, the present study is useful for seismic hazard and risk related studies for the NW Himalaya.
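
    The quantities in this abstract are tied together by Brune's circular-source relations: source radius r = 2.34*beta/(2*pi*fc) and static stress drop = 7*M0/(16*r^3), which also imply the M0*fc^3 scaling quoted above. A small sketch, assuming a typical crustal shear-wave velocity, is given below.

        import numpy as np

        BETA = 3500.0   # shear-wave velocity near the source (m/s), assumed

        def brune_radius(fc_hz, beta=BETA):
            """Brune (1970) circular source radius from the S-wave corner frequency."""
            return 2.34 * beta / (2.0 * np.pi * fc_hz)

        def static_stress_drop(m0_nm, fc_hz, beta=BETA):
            """Static stress drop (Pa) for a circular crack: 7*M0 / (16*r^3)."""
            r = brune_radius(fc_hz, beta)
            return 7.0 * m0_nm / (16.0 * r ** 3)

        # Example: an event with M0 = 1e17 N*m and fc(S) = 0.7 Hz (illustrative values).
        m0, fc = 1.0e17, 0.7
        dsig = static_stress_drop(m0, fc)
        print(f"Source radius ~ {brune_radius(fc)/1e3:.2f} km")
        print(f"Static stress drop ~ {dsig/1e5:.0f} bar")   # 1 bar = 1e5 Pa
        # The same relations give the M0*fc^3 = const scaling quoted in the abstract,
        # since M0*fc^3 is proportional to stress drop when sources are self-similar.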

  9. Estimating signal loss in regularized GRACE gravity field solutions

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Wahr, J. M.

    2011-05-01

    Gravity field solutions produced using data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission are subject to errors that increase as a function of increasing spatial resolution. Two commonly used techniques to improve the signal-to-noise ratio in the gravity field solutions are post-processing, via spectral filters, and regularization, which occurs within the least-squares inversion process used to create the solutions. One advantage of post-processing methods is the ability to easily estimate the signal loss resulting from the application of the spectral filter by applying the filter to synthetic gravity field coefficients derived from models of mass variation. This is a critical step in the construction of an accurate error budget. Estimating the amount of signal loss due to regularization, however, requires the execution of the full gravity field determination process to create synthetic instrument data; this leads to a significant cost in computation and expertise relative to post-processing techniques, and inhibits the rapid development of optimal regularization weighting schemes. Thus, while a number of studies have quantified the effects of spectral filtering, signal modification in regularized GRACE gravity field solutions has not yet been estimated. In this study, we examine the effect of one regularization method. First, we demonstrate that regularization can in fact be performed as a post-processing step if the solution covariance matrix is available. Regularization then is applied as a post-processing step to unconstrained solutions from the Center for Space Research (CSR), using weights reported by the Centre National d'Etudes Spatiales/Groupe de Recherches de geodesie spatiale (CNES/GRGS). After regularization, the power spectra of the CSR solutions agree well with those of the CNES/GRGS solutions. Finally, regularization is performed on synthetic gravity field solutions derived from a land surface model, revealing that in
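
    The key observation, that regularization can be applied after the fact when the solution's normal matrix (inverse covariance, up to a variance factor) has been retained, follows from the least-squares identity x_reg = (N + lambda*R)^-1 N x_unconstrained with N = A^T A. The toy example below verifies that identity on a small synthetic problem; it is a sketch of the algebra, not of the GRACE processing chain.

        import numpy as np

        rng = np.random.default_rng(2)

        # Small synthetic linear problem A x = b (stand-in for a gravity field solution).
        n_obs, n_par = 200, 20
        A = rng.normal(size=(n_obs, n_par))
        x_true = rng.normal(size=n_par)
        b = A @ x_true + rng.normal(scale=0.5, size=n_obs)

        N = A.T @ A                            # normal matrix = inverse covariance (up to sigma^2)
        x_unc = np.linalg.solve(N, A.T @ b)    # unconstrained least-squares solution

        lam = 5.0
        R = np.eye(n_par)                      # simple damping; CNES/GRGS-style weights would go here

        # Regularization done "inside" the inversion ...
        x_reg_direct = np.linalg.solve(N + lam * R, A.T @ b)
        # ... and the same regularization applied as post-processing of x_unc:
        x_reg_post = np.linalg.solve(N + lam * R, N @ x_unc)

        print("Max difference between the two routes:",
              np.abs(x_reg_direct - x_reg_post).max())   # ~ machine precision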

  10. Rapid estimation of the moment magnitude of the 2011 off the Pacific coast of Tohoku earthquake from coseismic strain steps

    NASA Astrophysics Data System (ADS)

    Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.

    2012-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in Tokai, the Kii Peninsula and Shikoku by borehole strainmeters that were carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, which is estimated from only several minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic wave data. The moment magnitude can be estimated about 6 minutes after the origin time, and 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7. In contrast, based on the seismic waves, the preliminary magnitude that the Japan Meteorological Agency announced just after the earthquake occurred was 7.9. Coseismic strain steps are generally considered to be less reliable than seismic waves and GPS data. However, our results show that coseismic strain steps observed by borehole strainmeters that are carefully installed and monitored are reliable enough to determine the earthquake magnitude precisely and rapidly. In order to grasp the magnitude of a great earthquake earlier, several methods are being proposed to reduce earthquake disasters, including tsunami. Our simple method using strain steps is one strong candidate for rapid estimation of the magnitude of great earthquakes.

  11. Ground motion estimation in Delhi from postulated regional and local earthquakes

    NASA Astrophysics Data System (ADS)

    Mittal, Himanshu; Kumar, Ashok; Kamal

    2013-04-01

    Ground motions are estimated at 55 sites in Delhi, the capital of India, from four postulated earthquakes (three regional, Mw = 7.5, 8.0, and 8.5, and one local). The procedure consists of (1) synthesis of ground motion at a hard reference site (NDI) and (2) estimation of ground motion at other sites in the city via known transfer functions and application of random vibration theory. This work provides more extensive coverage than earlier studies (e.g., Singh et al., Bull Seism Soc Am 92:555-569, 2002; Bansal et al., J Seismol 13:89-105, 2009). The Indian code response spectra corresponding to Delhi (zone IV) are found to be conservative at hard soil sites for all postulated earthquakes but deficient for the Mw = 8.0 and 8.5 earthquakes at soft soil sites. Spectral acceleration maps at four different natural periods are strongly influenced by the shallow geological and soil conditions. Three pockets of high acceleration values are seen. These pockets seem to coincide with the contacts of (a) Aravalli quartzite and recent Yamuna alluvium (towards the east), (b) Aravalli quartzite and older Quaternary alluvium (towards the south), and (c) older Quaternary alluvium and recent Yamuna alluvium (towards the north).

  12. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria: one, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from systematic analysis of past earthquake impacts and associated response levels, are quite effective in communicating predicted impact and the response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lead to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake-resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
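
    The alert logic described above, with green/yellow/orange/red levels triggered by either estimated fatalities (1, 100, 1,000) or estimated economic losses ($1M, $100M, $1B), maps directly onto a small threshold function. A minimal sketch:

        def eis_alert(estimated_fatalities=None, estimated_loss_usd=None):
            """Earthquake Impact Scale alert level from either criterion.

            Thresholds follow the dual scheme described above: fatalities of 1, 100,
            and 1,000 or losses of $1M, $100M, and $1B for yellow, orange, and red.
            """
            def level(value, thresholds):
                colors = ["green", "yellow", "orange", "red"]
                n = sum(value >= t for t in thresholds)
                return colors[n]

            alerts = []
            if estimated_fatalities is not None:
                alerts.append(level(estimated_fatalities, [1, 100, 1000]))
            if estimated_loss_usd is not None:
                alerts.append(level(estimated_loss_usd, [1e6, 1e8, 1e9]))
            # Report the more severe of the two criteria when both are available.
            order = {"green": 0, "yellow": 1, "orange": 2, "red": 3}
            return max(alerts, key=order.get) if alerts else "unknown"

        print(eis_alert(estimated_fatalities=40))                             # yellow
        print(eis_alert(estimated_loss_usd=2.5e8))                            # orange
        print(eis_alert(estimated_fatalities=1500, estimated_loss_usd=5e7))   # red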

  13. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault conducted by Bay Area County Offices of Emergency Services; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  14. Coseismic landsliding estimates for an Alpine Fault earthquake and the consequences for erosion of the Southern Alps, New Zealand

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Davies, T. R. H.; Wilson, T. M.; Orchiston, C.

    2016-06-01

    Landsliding resulting from large earthquakes in mountainous terrain presents a substantial hazard and plays an important role in the evolution of mountain ranges. However, estimating the scale and effect of landsliding from an individual earthquake prior to its occurrence is difficult. This study presents first-order estimates of the scale and effects of coseismic landsliding resulting from a plate boundary earthquake in the South Island of New Zealand. We model an Mw 8.0 earthquake on the Alpine Fault, which has produced large (M 7.8-8.2) earthquakes every 329 ± 68 years over the last 8 ka, with the last earthquake ~300 years ago. We suggest that such an earthquake could produce ~50,000 ± 20,000 landslides at average densities of 2-9 landslides km^-2 in the area of most intense landsliding. Between 50% and 90% are expected to occur in a 7000 km^2 zone between the fault and the main divide of the Southern Alps. Total landslide volume is estimated to be 0.81 +0.87/-0.55 km^3. In major northern and southern river catchments, total landslide volume is equivalent to up to a century of present-day aseismic denudation measured from suspended sediment yields. This suggests that earthquakes occurring at century timescales are a major driver of erosion in these regions. In the central Southern Alps, coseismic denudation is equivalent to less than a decade of aseismic denudation, suggesting precipitation and uplift dominate denudation processes. Nevertheless, the estimated scale of coseismic landsliding is considered to be a substantial hazard throughout the entire Southern Alps and is likely to present a substantial issue for post-earthquake response and recovery.

  15. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Cramer, C. H.

    2011-12-01

    We develop an intensity-attenuation relation for the central and eastern United States (CEUS) and estimate the magnitudes of the 1811-1812 New Madrid, MO and 1886 Charleston, SC earthquakes. This relation incorporates an unprecedented number of intensity observations, uses a simple but sufficient form, and minimizes residuals of predicted and observed log epicentral distance rather than maximizing the likelihood of an observed intensity. We constrain the relation with the modified Mercalli intensity dataset of the National Oceanic and Atmospheric Administration along with the 'Did You Feel It?' dataset of the U.S. Geological Survey through April, 2011. We find that the new relation leads to lower magnitude estimates for the New Madrid earthquakes than many previous studies. Depending on the modified Mercalli intensity dataset used, the new relation results in estimates for the moment magnitudes of the December 16th, 1811, January 23rd, 1812, and February 7th, 1812 mainshocks and December 16th dawn aftershock of 6.6-6.9, 6.6-7.0, 6.9-7.3, and 6.4-6.8, respectively, with a magnitude uncertainty of ±0.4. We also estimate a magnitude of 6.7±0.3 for the 1886 Charleston, SC earthquake. We find a greater range of epistemic uncertainty when also accounting for multiple intensity-attenuation relations. The magnitude ranges for the December 16th, January 23rd, and February 7th mainshocks and December 16th dawn aftershock are 6.6-7.8, 6.6-7.6, 6.9-8.1, and 6.4-7.2, respectively. Relative to the 2008 national seismic hazard maps, our estimate of epistemic uncertainty increases the coefficient of variation of seismic hazard estimates by 46-60 percent for ground motions expected to be exceeded with a 2-percent probability in 50 years and by 39-48 percent for ground motions expected to be exceeded with a 10-percent probability in 50 years. The reason for the large epistemic uncertainty is due to the lack of large instrumental CEUS earthquakes, which are needed to determine the

  16. Development of Classification and Story Building Data for Accurate Earthquake Damage Estimation

    NASA Astrophysics Data System (ADS)

    Sakai, Yuki; Fukukawa, Noriko; Arai, Kensuke

    We investigated a method of developing building classification and story-count data from a census population database in order to estimate earthquake damage more accurately, especially in urban areas, presuming that there is a correlation between the numbers of non-wooden or high-rise buildings and the population. We formulated equations for estimating the numbers of wooden houses, low-to-mid-rise (1-9 story) non-wooden buildings, and high-rise (over 10 story) non-wooden buildings in each 1 km mesh from nighttime and daytime population databases, based on the building data we investigated and collected in 20 selected meshes in the Kanto area. We could accurately estimate the numbers of the three building classes using the formulated equations, but in some special cases, such as meshes dominated by apartment blocks, the estimated values differ considerably from the actual values.

  17. Microzonation of Seismic Hazards and Estimation of Human Fatality for Scenario Earthquakes in Chianan Area, Taiwan

    NASA Astrophysics Data System (ADS)

    Liu, K. S.; Chiang, C. L.; Ho, T. T.; Tsai, Y. B.

    2015-12-01

    In this study, we assess seismic hazards in the 57 administrative districts of the Chianan area, Taiwan, in the form of ShakeMaps, and estimate potential human fatalities from scenario earthquakes on the three Type I active faults in this area. Two regions show high MMI intensity, greater than IX, in the map of maximum ground motion. One is in the Chiayi area around Minsyong, Dalin and Meishan, due to the presence of the Meishan fault and large site amplification factors, which reach as high as 2.38 and 2.09 for PGA and PGV, respectively, in Minsyong. The other is in the Tainan area around Jiali, Madou, Siaying, Syuejia, Jiangjyun and Yanshuei, due to a disastrous earthquake that occurred near the border between Jiali and Madou with a magnitude of Mw 6.83 in 1862 and large site amplification factors, which reach as high as 2.89 and 2.97 for PGA and PGV, respectively, in Madou. In addition, the probabilities over 10-, 30-, and 50-year periods of seismic intensity exceeding MMI VIII in the above areas are greater than 45%, 80% and 95%, respectively. Moreover, from the distribution of probabilities, values greater than 95% over a 10-year period for seismic intensity corresponding to CWBI V and MMI VI are found in central and northern Chiayi and northern Tainan. Finally, from the estimation of human fatalities for scenario earthquakes on the three active faults in the Chianan area, it is noted that the number of fatalities increases rapidly for people above age 45. Compared to the 1946 Hsinhua earthquake, the number of fatalities estimated from the scenario earthquake on the Hsinhua active fault is significantly higher; however, this higher number is reasonable after considering the probable reasons. Hence, we urge local and central governments to pay special attention to seismic hazard mitigation in this highly urbanized area with a large number of old buildings.

  18. Maximum magnitude estimations of induced earthquakes at Paradox Valley, Colorado, from cumulative injection volume and geometry of seismicity clusters

    NASA Astrophysics Data System (ADS)

    Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.

    2015-01-01

    The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods
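
    Both lines of reasoning can be reduced to simple scaling formulas: a volume-based bound of the McGarr type, in which maximum seismic moment scales with shear modulus times injected volume, and a geometry-based bound from a circular (penny-shaped) rupture, M0 = (16/7)*stress drop*r^3. The sketch below uses these generic relations with placeholder values for shear modulus, stress drop, injected volume, and cluster radius; it is not the PVU analysis itself.

        import numpy as np

        SHEAR_MODULUS = 3.0e10   # Pa, typical crustal value (assumed)

        def mw_from_m0(m0_nm):
            """Hanks-Kanamori moment magnitude, seismic moment in N*m."""
            return (2.0 / 3.0) * (np.log10(m0_nm) - 9.1)

        def max_mw_from_volume(injected_volume_m3, g=SHEAR_MODULUS):
            """McGarr-style bound: maximum seismic moment ~ shear modulus x injected volume."""
            return mw_from_m0(g * injected_volume_m3)

        def max_mw_from_cluster_radius(radius_m, stress_drop_pa=3.0e6):
            """Penny-shaped crack filling the seismicity cluster: M0 = (16/7)*dsigma*r^3."""
            m0 = (16.0 / 7.0) * stress_drop_pa * radius_m ** 3
            return mw_from_m0(m0)

        # Illustrative numbers, not the actual PVU injection history or cluster geometry.
        print(f"Volume bound for 8e6 m^3 injected: Mw <= {max_mw_from_volume(8.0e6):.1f}")
        print(f"Geometry bound for a 2 km radius:  Mw <= {max_mw_from_cluster_radius(2000.0):.1f}")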

  19. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

    The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values were assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results that Lg attenuation in the Basin and Range province is comparable to that in California.

  20. Earthquake shaking hazard estimates and exposure changes in the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Petersen, Mark D.; Rukstales, Kenneth S.; Leith, William S.

    2015-01-01

    A large portion of the population of the United States lives in areas vulnerable to earthquake hazards. This investigation aims to quantify population and infrastructure exposure within the conterminous U.S. that are subjected to varying levels of earthquake ground motions by systematically analyzing the last four cycles of the U.S. Geological Survey's (USGS) National Seismic Hazard Models (published in 1996, 2002, 2008 and 2014). Using the 2013 LandScan data, we estimate the numbers of people who are exposed to potentially damaging ground motions (peak ground accelerations at or above 0.1g). At least 28 million (~9% of the total population) may experience 0.1g level of shaking at relatively frequent intervals (annual rate of 1 in 72 years or 50% probability of exceedance (PE) in 50 years), 57 million (~18% of the total population) may experience this level of shaking at moderately frequent intervals (annual rate of 1 in 475 years or 10% PE in 50 years), and 143 million (~46% of the total population) may experience such shaking at relatively infrequent intervals (annual rate of 1 in 2,475 years or 2% PE in 50 years). We also show that there is a significant number of critical infrastructure facilities located in high earthquake-hazard areas (Modified Mercalli Intensity ≥ VII with moderately frequent recurrence interval).
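
    The exposure calculation described, counting people on a population grid whose mapped ground motion meets or exceeds 0.1g at a given return period, is a straightforward masked sum once both datasets are on a common grid. A minimal sketch with synthetic arrays standing in for the LandScan counts and the hazard model:

        import numpy as np

        def exposed_population(population_grid, pga_grid, threshold_g=0.1):
            """Sum population in cells where mapped PGA meets or exceeds the threshold."""
            mask = pga_grid >= threshold_g
            return population_grid[mask].sum()

        # Synthetic co-registered grids standing in for LandScan counts and a hazard map.
        rng = np.random.default_rng(3)
        population = rng.integers(0, 5000, size=(100, 100))               # people per cell
        pga_2pc_50yr = rng.gamma(shape=2.0, scale=0.08, size=(100, 100))  # PGA in g

        total = int(population.sum())
        exposed = int(exposed_population(population, pga_2pc_50yr, threshold_g=0.1))
        print(f"{exposed:,} of {total:,} people (~{100 * exposed / total:.0f}%) exposed to >= 0.1 g")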

  1. Earthquake source scaling and self-similarity estimation from stacking P and S spectra

    NASA Astrophysics Data System (ADS)

    Prieto, GermáN. A.; Shearer, Peter M.; Vernon, Frank L.; Kilb, Debi

    2004-08-01

    We study the scaling relationships of source parameters and the self-similarity of earthquake spectra by analyzing a cluster of over 400 small earthquakes (ML = 0.5 to 3.4) recorded by the Anza seismic network in southern California. We compute P, S, and preevent noise spectra from each seismogram using a multitaper technique and approximate source and receiver terms by iteratively stacking the spectra. To estimate scaling relationships, we average the spectra in size bins based on their relative moment. We correct for attenuation by using the smallest moment bin as an empirical Green's function (EGF) for the stacked spectra in the larger moment bins. The shapes of the log spectra agree within their estimated uncertainties after shifting along the ω-3 line expected for self-similarity of the source spectra. We also estimate corner frequencies and radiated energy from the relative source spectra using a simple source model. The ratio between radiated seismic energy and seismic moment (proportional to apparent stress) is nearly constant with increasing moment over the magnitude range of our EGF-corrected data (ML = 1.8 to 3.4). Corner frequencies vary inversely as the cube root of moment, as expected from the observed self-similarity in the spectra. The ratio between P and S corner frequencies is observed to be 1.6 ± 0.2. We obtain values for absolute moment and energy by calibrating our results to local magnitudes for these earthquakes. This yields a S to P energy ratio of 9 ± 1.5 and a value of apparent stress of about 1 MPa.

  2. Estimation of Radiated Energy of Recent Great Earthquakes Using the Normal-mode Theory

    NASA Astrophysics Data System (ADS)

    Rivera, L. A.; Kanamori, H.

    2014-12-01

    Despite its fundamental importance in seismology, accurate estimation of radiated energy remains challenging. The interaction of the elastic field with the near-source structure, especially the free surface, makes the radiation field very complex. Here we address this problem using the normal-mode theory. Radiated energy estimations require a detailed finite source model for the spatial and temporal slip distribution. We use the slip models for recent great earthquakes provided by various investigators. We place a slip model in a spherically symmetric Earth (PREM), and compute the radiated energy by modal summation. For each mode, the volume integral of the energy density over the Earth's volume can be obtained analytically. The final expression involves a sum over the source patches nested in the modal summation itself. In practice we perform modal summation up to 80 mHz. We explore the effect of several factors such as the focal mechanism, the source depth, the source duration, the source directivity and the seismic moment. Not surprisingly, the source depth plays a key role. The effect can be very significant for events presenting large slip at shallow depths. Deep earthquakes and strike-slip earthquakes are essentially unaffected by the free surface. Similar to the situation in moment tensor determinations, shallow dipping reverse or normal focal mechanisms can be heavily affected. The preliminary estimates of the radiated energy for frequencies ≤ 80 mHz are: the 2004 Sumatra earthquake, 8.3×10^16 J (average for 2 rupture models); the 2010 Maule, 1.6×10^17 J (2); the 2011 Tohoku-oki, 1.1×10^17 J (5); the 2012 Sumatra, 2.4×10^17 J (2); the 1994 Bolivia, 4.1×10^15 J (1); the 2013 Okhotsk, 2.0×10^16 J (1); and the 2010 Mentawai, 2.9×10^14 J (1). To obtain the total radiated energy, the radiated energy for frequencies ≥ 80 mHz estimated with other methods (e.g., integration of velocity records) needs to be added.

  3. Model parameter estimation bias induced by earthquake magnitude cut-off

    NASA Astrophysics Data System (ADS)

    Harte, D. S.

    2016-02-01

    We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models. First, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude. Secondly, the assumption in the Gutenberg-Richter relationship that numbers of events increase exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.
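
    A toy illustration of the truncation effect described (a hypothetical simulation, not the paper's experiment): with an exponential (Gutenberg-Richter) magnitude catalogue, Aki's maximum-likelihood b-value is recovered only when the assumed completeness magnitude matches the actual truncation level:

        import numpy as np

        rng = np.random.default_rng(0)
        b_true, m_min = 1.0, 0.0
        beta = b_true * np.log(10.0)

        # Simulate an exponential (Gutenberg-Richter) magnitude catalogue above m_min
        mags = m_min + rng.exponential(1.0 / beta, size=100_000)

        def b_aki(m, mc):
            """Aki (1965) maximum-likelihood b-value for magnitudes >= assumed completeness mc."""
            m = m[m >= mc]
            return np.log10(np.e) / (m.mean() - mc)

        # Truncate the catalogue at Mc = 1.5 but (wrongly) assume completeness at 1.0:
        catalogue = mags[mags >= 1.5]
        print("correct Mc:", b_aki(catalogue, 1.5))   # close to 1.0
        print("wrong Mc:  ", b_aki(catalogue, 1.0))   # biased low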

  4. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto (214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  5. Twitter as Information Source for Rapid Damage Estimation after Major Earthquakes

    NASA Astrophysics Data System (ADS)

    Eggert, Silke; Fohringer, Joachim

    2014-05-01

    Natural disasters like earthquakes require a fast response from local authorities. Well trained rescue teams have to be available, equipment and technology have to be set up and ready, and information has to be directed to the right places so that headquarters can manage the operation precisely. The main goal is to reach the most affected areas in a minimum of time. But even with the best preparation for these cases, there will always be uncertainty about what really happened in the affected area. Modern geophysical sensor networks provide high quality data. These measurements, however, only map disjoint values at their respective locations for a limited number of parameters. Using observations of witnesses represents one approach to enhance measured values from sensors ("humans as sensors"). These observations are increasingly disseminated via social media platforms. These "social sensors" offer several advantages over common sensors, e.g. high mobility, high versatility of captured parameters as well as rapid distribution of information. Moreover, the amount of data offered by social media platforms is quite extensive. We analyze messages distributed via Twitter after major earthquakes to get rapid information on what eye-witnesses report from the epicentral area. We use this information to (a) quickly learn about damage and losses to support fast disaster response and to (b) densify geophysical networks in areas where there is sparse information to gain a more detailed insight on felt intensities. We present a case study from the Mw 7.1 Philippines (Bohol) earthquake that happened on Oct. 15 2013. We extract Twitter messages, so-called tweets, containing one or more specified keywords from the semantic field of "earthquake" and use them for further analysis. For the time frame of Oct. 15 to Oct. 18 we obtain a database of 50,000 tweets in total, of which 2,900 are geo-localized and 470 have a photo attached. Analyses for both national level and locally for

  6. Delayed Segment Rupture during Great Earthquake along the Nankai Trough - Estimation from Historical Documents and Tsunami Trace Heights of the 1707 Hoei Earthquake -

    NASA Astrophysics Data System (ADS)

    Imai, K.; Nishiyama, A.; Maeda, T.; Ishibe, T.; Satake, K.; Furumura, T.

    2010-12-01

    Along the Nankai Trough, recurrence of at least nine great (M~8) interplate earthquakes has been inferred from historical documents in Japan. Historical documents also record timing of ground shaking or arrival time of seismic waves. Iida (1985) and Usami (2003) compiled arrival times of seismic waves due to the 1707 Hoei earthquake and concluded that the Tokai, Tonankai and Nankai segments simultaneously ruptured or separately ruptured within 2 hours. We statistically estimate the origin time of the Hoei earthquake based only on reliable historical documents, including additional ones found after their studies, and suggest a possibility of delayed rupture on the earthquake segments. In addition, tsunami heights are computed on the basis of the result, and compared with tsunami trace heights of the 1707 Hoei earthquake. We selected primary and reliable historical documents from Kyushu to Tohoku region written within 30 years after the Hoei earthquake. The arrival times of the Hoei earthquake at each location were then retrieved from these documents. The arrival times were spatially averaged to estimate the origin time of the Hoei earthquake. The time count interval in 1707 is different from the current system: day and night were divided into 6 “koku”, hence each “koku” corresponds to about 2 hours. However, timing is often indicated with early, middle or late “koku”, making the temporal resolution as small as 40 minutes. Because the timing was measured from sunrise to sunset, the local time varies with longitude, hence a correction is applied to convert the local time to the standard time. Our likelihood estimate shows that the origin time of the Hoei earthquake was 13:47 with a standard error of 1.02 hours. In addition, we statistically tested whether the Hoei earthquake had simultaneously ruptured all the segments or separately ruptured, and evaluated the best segmentation using Akaike's Information Criterion (AIC; Akaike, 1974). The result
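
    A small numerical illustration of the longitude correction described (the longitudes and reported hours below are made up, not the study's data): local apparent time runs four minutes per degree of longitude ahead of a reference meridian to the east, so document-reported times at different sites must be shifted to a common meridian before averaging:

        # Convert times reported in local apparent time to a common reference meridian.
        REF_LON = 135.0   # reference meridian (deg E); illustrative choice

        def to_reference_time(local_hours, site_lon_deg):
            """Shift a local-apparent-time reading to the reference meridian.
            Four minutes of time per degree of longitude (Earth rotates 360 deg per 24 h)."""
            return local_hours - (site_lon_deg - REF_LON) * 4.0 / 60.0

        reports = [(13.9, 138.0), (13.6, 133.5), (14.1, 136.5)]   # (reported hour, longitude), hypothetical
        corrected = [to_reference_time(h, lon) for h, lon in reports]
        print(corrected, sum(corrected) / len(corrected))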

  7. Estimating conditional quantiles with the help of the pinball loss

    SciTech Connect

    Steinwart, Ingo

    2008-01-01

    Using the so-called pinball loss for estimating conditional quantiles is a well-known tool in both statistics and machine learning. So far, however, only little work has been done to quantify the efficiency of this tool for non-parametric (modified) empirical risk minimization approaches. The goal of this work is to fill this gap by establishing inequalities that describe how close approximate pinball risk minimizers are to the corresponding conditional quantile. These inequalities, which hold under mild assumptions on the data-generating distribution, are then used to establish so-called variance bounds which recently turned out to play an important role in the statistical analysis of (modified) empirical risk minimization approaches. To illustrate the use of the established inequalities, we then use them to establish an oracle inequality for support vector machines that use the pinball loss. Here, it turns out that we obtain learning rates which are optimal in a min-max sense under some standard assumptions on the regularity of the conditional quantile function.
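
    For concreteness, a minimal sketch of the pinball loss and of empirical risk minimization over constant predictors (synthetic data, not the estimators analyzed in the paper): the empirical pinball-risk minimizer should land near the empirical tau-quantile.

        import numpy as np

        def pinball_loss(y, q, tau):
            """Pinball loss L_tau(y, q) = max(tau*(y-q), (tau-1)*(y-q))."""
            r = y - q
            return np.maximum(tau * r, (tau - 1.0) * r)

        rng = np.random.default_rng(1)
        y = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
        tau = 0.9

        # The empirical pinball-risk minimizer over constants approximates the tau-quantile
        grid = np.linspace(0.0, 20.0, 2001)
        risk = [pinball_loss(y, q, tau).mean() for q in grid]
        q_hat = grid[int(np.argmin(risk))]
        print(q_hat, np.quantile(y, tau))   # the two values should be close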

  8. A Simplified Approach to Earthquake Risk in Mainland China

    NASA Astrophysics Data System (ADS)

    Chen, Qi-Fu; Mi, Hongliang; Huang, Jing

    2005-06-01

    There are limitations in conventional earthquake loss procedures if attempts are made to apply these to assess the social and economic impacts of recent disastrous earthquakes. This paper addresses the need to develop an applicable model for estimating the significant increases of earthquake loss in mainland China. Earthquake casualties were studied first; they are strongly related to earthquake strength, occurrence time (day or night) and the distribution of population in the affected area. Using data on earthquake casualties in mainland China from 1980 to 2000, we suggest a relationship between average losses of life and the magnitude of earthquakes. Combined with information on population density and earthquake occurrence times, we use these data to give a further relationship between the loss of life and factors like population density, intensity and occurrence time of the earthquake. The given relationships were tested against earthquakes that occurred from 2001 to 2003. This paper also explores the possibility of using a macroeconomic indicator, here GDP (Gross Domestic Product), to roughly estimate earthquake exposure in situations where no detailed insurance or similar inventories exist, thus bypassing some problems of the conventional method.
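
    As an illustration of the kind of empirical relationship described, one can regress the logarithm of life loss on magnitude; the data and resulting coefficients below are synthetic placeholders, not the fitted mainland China relationship:

        import numpy as np

        # Synthetic placeholder data (NOT the 1980-2000 mainland China catalogue)
        magnitudes = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
        deaths     = np.array([2.0, 6.0, 25.0, 80.0, 300.0, 1200.0, 5000.0])

        # Fit log10(deaths) = a + b * M by least squares
        b, a = np.polyfit(magnitudes, np.log10(deaths), 1)
        print(f"log10(L) = {a:.2f} + {b:.2f} * M")

        # Predicted average life loss for an M 6.8 event under this toy fit
        print(10 ** (a + b * 6.8))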

  9. Application of universal kriging for estimation of earthquake ground motion: Statistical significance of results

    SciTech Connect

    Carr, J.R.; Roberts, K.P.

    1989-02-01

    Universal kriging is compared with ordinary kriging for estimation of earthquake ground motion. Ordinary kriging is based on a stationary random function model; universal kriging is based on a nonstationary random function model representing first-order drift. Accuracy of universal kriging is compared with that for ordinary kriging; cross-validation is used as the basis for comparison. Hypothesis testing on these results shows that accuracy obtained using universal kriging is not significantly different from accuracy obtained using ordinary kriging. Tests based on normal distribution assumptions are applied to errors measured in the cross-validation procedure; t and F tests reveal no evidence to suggest universal and ordinary kriging are different for estimation of earthquake ground motion. Nonparametric hypothesis tests applied to these errors and jackknife statistics yield the same conclusion: universal and ordinary kriging are not significantly different for this application as determined by a cross-validation procedure. These results are based on application to four independent data sets (four different seismic events).

  10. Estimation of seismic source parameters for earthquakes in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Rhee, H.; Sheen, D.

    2013-12-01

    Recent seismicity in the Korean Peninsula is low, but there is the potential for more severe seismic activity. Historical records show that there were many damaging earthquakes around the Peninsula. Absence of instrumental records of damaging earthquakes hinders our efforts to understand seismotectonic characteristics in the Peninsula and predict seismic hazards. Therefore it is important to analyze instrumental records precisely to help improve our knowledge of seismicity in this region. Several studies on seismic source parameters in the Korean Peninsula were performed to find source parameters for a single event (Kim, 2001; Jo and Baag, 2007; Choi, 2009; Choi and Shim, 2009; Choi, 2010; Choi and Noh, 2010; Kim et al., 2010), to find relationships between source parameters (Kim and Kim, 2008; Shin and Kang, 2008) or to determine the input parameters for the stochastic strong ground motion simulation (Jo and Baag, 2001; Junn et al., 2002). In all previous studies, however, the source parameters were estimated only from small numbers of large earthquakes in this region. To understand the seismotectonic environment of a low-seismicity region, it is better to estimate source parameters using as much data as possible. In this study, therefore, we estimated seismic source parameters, such as the corner frequency, Brune stress drop and moment magnitude, from 503 events with ML≥1.6 that occurred in the southern part of the Korean Peninsula from 2001 to 2012. The data set consists of 2,834 S-wave trains on three-component seismograms recorded at broadband seismograph stations which have been operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. To calculate the seismic source parameters, we used the iterative method of Jo and Baag (2001) based on the methods of Snoke (1987) and Andrews (1986). In this method, the source parameters are estimated by using the integration of
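
    For orientation, a compact sketch of the standard Brune-model relations that underlie this type of analysis (the iterative Snoke/Andrews scheme itself is not reproduced; the shear-wave speed, moment and corner frequency below are placeholder values):

        import numpy as np

        def brune_parameters(M0, fc, beta=3500.0):
            """Brune (1970) source radius, stress drop, and moment magnitude.

            M0   : seismic moment in N*m
            fc   : corner frequency in Hz
            beta : shear-wave speed in m/s (assumed value)
            """
            r = 2.34 * beta / (2.0 * np.pi * fc)        # source radius, m
            stress_drop = 7.0 * M0 / (16.0 * r ** 3)    # Pa
            Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)     # moment magnitude (IASPEI convention)
            return r, stress_drop, Mw

        # Placeholder values for a small event
        r, dsigma, Mw = brune_parameters(M0=3.0e13, fc=8.0)
        print(f"radius ~{r:.0f} m, stress drop ~{dsigma/1e6:.2f} MPa, Mw ~{Mw:.2f}")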

  11. Seismic moment of the 1891 Nobi, Japan, earthquake estimated from historical seismograms

    NASA Astrophysics Data System (ADS)

    Fukuyama, E.; Muramatu, I.; Mikumo, T.

    2007-06-01

    The seismic moment of the 1891 Nobi, Japan, earthquake has been evaluated from the historical seismogram recorded at the Central Meteorological Observatory in Tokyo. For this purpose, synthetic seismograms from point and finite source models with various fault parameters have been calculated by a discrete wave-number method, incorporating the instrumental response of the Gray-Milne-Ewing seismograph, and then compared with the original records. Our estimate of the seismic moment (Mo) is 1.8 × 10^20 N m, corresponding to a moment magnitude (Mw) of 7.5. This is significantly smaller than the previous estimates from the distribution of damage, but is consistent with that inferred from geological field survey (Matsuda, 1974) of the surface faults.

  12. Uncertainty estimations for moment tensor inversions: the issue of the 2012 May 20 Emilia earthquake

    NASA Astrophysics Data System (ADS)

    Scognamiglio, Laura; Magnoni, Federica; Tinti, Elisa; Casarotti, Emanuele

    2016-08-01

    Seismic moment tensor is one of the most important source parameters defining the earthquake dimension and style of the activated fault. Geoscientists ordinarily use moment tensor catalogues; however, few attempts have been made to assess possible impacts of moment magnitude uncertainties upon their analysis. The 2012 May 20 Emilia main shock is a representative event since it is defined in the literature with a moment magnitude value (Mw) spanning between 5.63 and 6.12. A variability of ˜0.5 units in magnitude leads to a controversial knowledge of the real size of the event and reveals how the solutions could be poorly constrained. In this work, we investigate the stability of the moment tensor solution for this earthquake, studying the effect of five different 1-D velocity models, the number and the distribution of the stations used in the inversion procedure. We also introduce a 3-D velocity model to account for structural heterogeneity. We finally estimate the uncertainties associated with the computed focal planes and the obtained Mw. We conclude that our reliable source solutions provide a moment magnitude that ranges from 5.87, 1-D model, to 5.96, 3-D model, reducing the variability of the literature to ˜0.1. We stress that estimates of seismic moment from moment tensor solutions, as well as estimates of the other kinematic source parameters, should be accompanied by disclosed assumptions and explicit processing workflows. Finally, and probably more important, when a moment tensor solution is used for secondary analyses it has to be combined with the same main boundary conditions (e.g. wave-velocity propagation model) to avoid conflicting results.

  13. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways. PMID:22576139
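
    A toy fault-tree calculation (not the authors' component library or data-centre model) showing how OR and AND gates combine component failure probabilities into a facility failure probability, and how a primary/backup pair combines under an independence assumption:

        def or_gate(probs):
            """Failure if ANY independent input fails."""
            p_survive = 1.0
            for p in probs:
                p_survive *= (1.0 - p)
            return 1.0 - p_survive

        def and_gate(probs):
            """Failure only if ALL independent inputs fail (e.g. redundant units)."""
            p = 1.0
            for q in probs:
                p *= q
            return p

        # Hypothetical component failure probabilities at a given scenario shaking level
        power    = and_gate([0.10, 0.15])            # utility feed AND backup generator
        cooling  = or_gate([0.05, 0.02])             # chiller OR piping
        facility = or_gate([power, cooling, 0.01])   # any critical subsystem fails

        # Primary and backup facilities failing in the same earthquake
        # (independence is a simplifying assumption; correlated shaking would raise this)
        print(facility, and_gate([facility, facility]))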

  14. Re-estimated fault model of the 17th century great earthquake off Hokkaido using tsunami deposit data

    NASA Astrophysics Data System (ADS)

    Ioki, Kei; Tanioka, Yuichiro

    2016-01-01

    Paleotsunami research revealed that a great earthquake occurred off eastern Hokkaido, Japan and generated a large tsunami in the 17th century. Tsunami deposits from this event have been found far inland from the Pacific coast in eastern Hokkaido. A previous study estimated the fault model of the 17th century great earthquake by comparing locations of lowland tsunami deposits and computed tsunami inundation areas. Tsunami deposits were also traced at a high cliff near the coast, as high as 18 m above sea level. A recent paleotsunami study also traced tsunami deposits at other high cliffs along the Pacific coast. The fault model estimated in the previous study cannot explain the tsunami deposit data at high cliffs near the coast. In this study, we estimated the fault model of the 17th century great earthquake to explain both lowland widespread tsunami deposit areas and tsunami deposit data at high cliffs near the coast. We found that distributions of lowland tsunami deposits were mainly explained by a wide rupture area at the plate interface in the Tokachi-Oki and Nemuro-Oki segments. Tsunami deposits at high cliffs near the coast were mainly explained by a very large slip of 25 m at the shallow part of the plate interface near the trench in those segments. The total seismic moment of the 17th century great earthquake was calculated to be 1.7 × 10^22 Nm (Mw 8.8). The 2011 great Tohoku earthquake ruptured a large area off Tohoku, and a very large slip amount was found at the shallow part of the plate interface near the trench. The 17th century great earthquake had the same characteristics as the 2011 great Tohoku earthquake.

  15. Comparison between scaling law and nonparametric Bayesian estimate for the recurrence time of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2009-04-01

    According to the unified scaling theory the probability distribution function of the recurrence time T is a scaled version of a base function and the average value of T can be used as a scale parameter for the distribution. The base function must belong to the scale family of distributions: tested on different catalogues and for different scale levels, for Corral (2005) the (truncated) generalized gamma distribution is the best model, for German (2006) the Weibull distribution. The scaling approach should overcome the difficulty of estimating distribution functions over small areas but theoretical limitations and partial instability of the estimated distributions have been pointed out in the literature. Our aim is to analyze the recurrence time of strong earthquakes that occurred in the Italian territory. To satisfy the hypotheses of independence and identical distribution we have evaluated the times between events that occurred in each area of the Database of Individual Seismogenic Sources and then we have gathered them by eight tectonically coherent regions, each of them dominated by a well characterized geodynamic process. To address problems such as the paucity of data, the presence of outliers and uncertainty in the choice of the functional expression for the distribution of T, we have followed a nonparametric approach (Rotondi (2009)) in which: (a) the maximum flexibility is obtained by assuming that the probability distribution is a random function belonging to a large function space, distributed as a stochastic process; (b) the nonparametric estimation method is robust when the data contain outliers; (c) the Bayesian methodology allows different information sources to be exploited so that the model fit may be good even for scarce samples. We have compared the hazard rates evaluated through the parametric and nonparametric approach. References Corral A. (2005). Mixing of rescaled data and Bayesian inference for earthquake recurrence times, Nonlin. Proces. Geophys., 12, 89
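
    For comparison with the nonparametric approach, a minimal parametric counterpart (synthetic recurrence times, not the Italian catalogue) is a maximum-likelihood Weibull fit and its hazard rate:

        import numpy as np
        from scipy import stats

        # Synthetic recurrence times (years), standing in for inter-event times of one region
        t = stats.weibull_min.rvs(c=1.8, scale=300.0, size=40, random_state=2)

        # Maximum-likelihood Weibull fit (location fixed at zero)
        c_hat, loc, scale_hat = stats.weibull_min.fit(t, floc=0.0)

        # Hazard rate h(t) = pdf / survival function
        grid = np.linspace(10.0, 800.0, 5)
        pdf = stats.weibull_min.pdf(grid, c_hat, loc=0.0, scale=scale_hat)
        sf  = stats.weibull_min.sf(grid, c_hat, loc=0.0, scale=scale_hat)
        print(c_hat, scale_hat)
        print(pdf / sf)   # increasing hazard when c_hat > 1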

  16. The range split-spectrum method for ionosphere estimation applied to the 2008 Kyrgyzstan earthquake

    NASA Astrophysics Data System (ADS)

    Gomba, Giorgio; Eineder, Michael

    2015-04-01

    L-band remote sensing systems, like the future Tandem-L mission, are disrupted by the ionized upper part of the atmosphere called the ionosphere. The ionosphere is a region of the upper atmosphere composed of gases that are ionized by solar radiation. The extent of the effects induced on a SAR measurement depends on the electron density integrated along the radio-wave path and on its spatial variations. The main effect of the ionosphere on microwaves is to cause an additional delay, which introduces a phase difference between SAR measurements modifying the interferometric phase. The objectives of the Tandem-L mission are the systematic monitoring of dynamic Earth processes like Earth surface deformations, vegetation structure, ice and glacier changes and ocean surface currents. The scientific requirements regarding the mapping of surface deformation due to tectonic processes, earthquakes, volcanic cycles and anthropogenic factors demand deformation measurements; namely one, two or three dimensional displacement maps with resolutions of a few hundreds of meters and accuracies of centimeter to millimeter level. Ionospheric effects can make it impossible to produce deformation maps with such accuracy and must therefore be estimated and compensated. As an example of this process, the implementation of the range split-spectrum method proposed in [1,2] will be presented and applied to an example dataset. The 2008 Kyrgyzstan Earthquake of October 5 is imaged by an ALOS PALSAR interferogram; apart from the earthquake, many fringes due to strong ionospheric variations can also be seen. The compensated interferogram shows how the ionosphere-related fringes were successfully estimated and removed. [1] Rosen, P.A.; Hensley, S.; Chen, C., "Measurement and mitigation of the ionosphere in L-band Interferometric SAR data," Radar Conference, 2010 IEEE , vol., no., pp.1459,1463, 10-14 May 2010 [2] Brcic, R.; Parizzi, A.; Eineder, M.; Bamler, R.; Meyer, F., "Estimation and
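
    A schematic of the core split-spectrum combination (written in the form given in the cited references; the carrier and sub-band frequencies below are only indicative L-band values, not the actual processing parameters): the unwrapped phases of a low and a high range sub-band are combined to separate the dispersive (ionospheric) phase from the non-dispersive phase.

        import numpy as np

        def split_spectrum(phi_low, phi_high, f0, f_low, f_high):
            """Separate dispersive (ionospheric) and non-dispersive phase at the carrier f0
            from unwrapped interferometric phases of two range sub-bands."""
            phi_iono = (f_low * f_high) / (f0 * (f_high**2 - f_low**2)) * (
                phi_low * f_high - phi_high * f_low)
            phi_nondisp = f0 * (phi_high * f_high - phi_low * f_low) / (f_high**2 - f_low**2)
            return phi_iono, phi_nondisp

        # Indicative L-band numbers (assumptions for illustration only)
        f0, f_low, f_high = 1.27e9, 1.2630e9, 1.2770e9   # Hz
        phi_iono_true, phi_nd_true = 2.0, 5.0            # radians
        phi_low  = phi_nd_true * f_low / f0 + phi_iono_true * f0 / f_low
        phi_high = phi_nd_true * f_high / f0 + phi_iono_true * f0 / f_high

        print(split_spectrum(phi_low, phi_high, f0, f_low, f_high))   # ~ (2.0, 5.0)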

  17. A simple approach to estimate earthquake magnitude from the arrival time of the peak acceleration amplitude

    NASA Astrophysics Data System (ADS)

    Noda, S.; Yamamoto, S.

    2014-12-01

    In order for Earthquake Early Warning (EEW) to be effective, the rapid determination of magnitude (M) is important. At present, there are no methods which can accurately determine M even for extremely large events (ELE) for EEW, although a number of methods have been suggested. In order to solve the problem, we use a simple approach derived from the fact that the time difference (Top) from the onset of the body wave to the arrival time of the peak acceleration amplitude of the body wave scales with M. To test this approach, we use 15,172 accelerograms of regional earthquakes (most of them are M4-7 events) from the K-NET, as the first step. Top is defined by analyzing the S-wave in this step. The S-onsets are calculated by adding the theoretical S-P times to the P-onsets which are manually picked. As a result, it is confirmed that logTop has a high correlation with Mw, especially for the higher frequency band (> 2Hz). The RMS of residuals between Mw and M estimated in this step is less than 0.5. In the case of the 2011 Tohoku earthquake, M is estimated to be 9.01 at 150 seconds after the initiation of the event. To increase the number of the ELE data, we add the teleseismic high frequency P-wave records to the analysis, as the second step. According to the result of various back-projection analyses, we consider the teleseismic P-waves to contain information on the entire rupture process. The BHZ channel data of the Global Seismographic Network for 24 events are used in this step. 2-4Hz data from the stations in the epicentral distance range of 30-85 degrees are used following the method of Hara [2007]. All P-onsets are manually picked. Top obtained from the teleseismic data shows good correlation with Mw, complementing the one obtained from the regional data. We conclude that the proposed approach is quite useful for estimating reliable M for EEW, even for the ELE.

  18. Crustal parameters estimated from P-waves of earthquakes recorded at a small array

    USGS Publications Warehouse

    Murdock, J.N.; Steppe, J.A.

    1980-01-01

    The P-arrival times of local and regional earthquakes that are outside of a small network of seismometers can be used to interpret crustal parameters beneath the network by employing the time-term technique. Even when the estimate of the refractor velocity is poorly determined, useful estimates of the station time-terms can be made. The method is applied to a 20 km diameter network of eight seismic stations which was operated near Castaic, California, during the winter of 1972-73. The stations were located in sedimentary basins. Beneath the network, the sedimentary rocks of the basins are known to range from 1 to more than 4 km in thickness. Relative time-terms are estimated from P-waves assumed to be propagated by a refractor in the mid-crust, and again from P-waves propagated by a refractor in the upper basement. For the range of velocities reported by others, the two sets of time-terms are very similar. They suggest that both refractors dip to the southwest, and the geology also indicates that the basement dips in this direction. In addition, the P-wave velocity estimated for the refractor of mid-crustal depths, roughly 6.7 km/sec, agrees with values reported by others. Thus, even in this region of complicated geologic structure, the method appears to give realistic results. © 1980 Birkhäuser Verlag.
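
    A minimal least-squares version of the time-term decomposition described above (hypothetical travel times, not the Castaic data): each refracted arrival time is modeled as an event term plus a station time-term plus distance times refractor slowness, and only relative station time-terms are resolvable.

        import numpy as np

        rng = np.random.default_rng(3)
        n_eq, n_sta = 30, 8
        true_sta = rng.normal(0.0, 0.3, n_sta)          # station time-terms (s), e.g. sediment delays
        true_src = rng.normal(0.0, 0.5, n_eq)           # event terms (origin time + source delay)
        slowness = 1.0 / 6.7                            # s/km for a mid-crustal refractor
        dist = rng.uniform(30.0, 200.0, (n_eq, n_sta))  # epicentral distances (km)

        t = (true_src[:, None] + true_sta[None, :] + slowness * dist
             + rng.normal(0.0, 0.02, (n_eq, n_sta)))    # observed refracted arrival times

        # Design matrix: one column per event term, per station term, plus slowness
        G = np.zeros((n_eq * n_sta, n_eq + n_sta + 1))
        d = t.ravel()
        for i in range(n_eq):
            for j in range(n_sta):
                k = i * n_sta + j
                G[k, i] = 1.0
                G[k, n_eq + j] = 1.0
                G[k, -1] = dist[i, j]

        m, *_ = np.linalg.lstsq(G, d, rcond=None)       # minimum-norm least-squares solution
        sta_terms = m[n_eq:n_eq + n_sta]
        print("refractor velocity ~", 1.0 / m[-1], "km/s")
        # Only relative station time-terms are resolvable (constant trade-off with event terms)
        print(sta_terms - sta_terms.mean())
        print(true_sta - true_sta.mean())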

  19. PRECISION OF REAL-TIME ESTIMATION OF LIQUEFACTION POTENTIALS DURING THE 2011 OFF THE PACIFIC COAST OF TOHOKU EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Ishida, Eisuke; Suetomi, Iwao; Tsukamoto, Hiroyuki; Inomata, Wataru; Hamanaka, Ryo; Norito, Yuuki; Yasuda, Susumu

    The coastal area of Tokyo-wan (Tokyo Bay), which is far from the earthquake fault, was heavily liquefied during the 2011 off the Pacific coast of Tohoku earthquake. It is very important to predict the occurrence and degree of liquefaction, because liquefaction affects the safety of underground pipelines and facilities of roads and ports. The real-time disaster prevention system "SUPREME" was established and is used by the Tokyo Gas supply system in order to secure safety. The system collected SI values from 4,000 sensors and calculated the distribution of SI values, the liquefaction potential, and pipeline damage within about 20 minutes after the earthquake. In this paper, it is shown that the estimated liquefaction area corresponds very well to the actual liquefaction area, and the reason is that "SUPREME" uses very dense SPT data and SI sensors and its liquefaction estimation method considers the effect of the duration of earthquake ground motion.

  20. Estimation of earthquake source parameters by the inversion of waveform data: synthetic waveforms

    USGS Publications Warehouse

    Sipkin, S.A.

    1982-01-01

    Two methods are presented for the recovery of a time-dependent moment-tensor source from waveform data. One procedure utilizes multichannel signal-enhancement theory; in the other a multichannel vector-deconvolution approach, developed by Oldenburg (1982) and based on Backus-Gilbert inverse theory, is used. These methods have the advantage of being extremely flexible; both may be used either routinely or as research tools for studying particular earthquakes in detail. Both methods are also robust with respect to small errors in the Green's functions and may be used to refine estimates of source depth by minimizing the misfits to the data. The multichannel vector-deconvolution approach, although it requires more interaction, also allows a trade-off between resolution and accuracy, and complete statistics for the solution are obtained. The procedures have been tested using a number of synthetic body-wave data sets, including point and complex sources, with satisfactory results. © 1982.

  1. Estimation of co-seismic stress change of the 2008 Wenchuan Ms8.0 earthquake

    SciTech Connect

    Sun Dongsheng; Wang Hongcai; Ma Yinsheng; Zhou Chunjing

    2012-09-26

    In-situ stress change near the fault before and after a great earthquake is a key issue in the geosciences field. In this work, based on the 2008 Great Wenchuan earthquake fault slip dislocation model, the co-seismic stress tensor change due to the Wenchuan earthquake and the distribution functions around the Longmen Shan fault are given. Our calculated results are broadly consistent with the in-situ stress measurements made before and after the great Wenchuan earthquake. The quantitative assessment results provide a reference for the study of the mechanism of earthquakes.

  2. Estimation of coda wave attenuation for NW Himalayan region using local earthquakes

    NASA Astrophysics Data System (ADS)

    Kumar, Naresh; Parvez, Imtiyaz A.; Virk, H. S.

    2005-08-01

    The attenuation of seismic wave energy in the NW Himalayas has been estimated using local earthquakes. Most of the analyzed events are from the vicinity of the Main Boundary Thrust (MBT) and the Main Central Thrust (MCT), which are well-defined tectonic discontinuities in the Himalayas. The time-domain coda-decay method of a single back-scattering model is employed to calculate frequency dependent values of Coda Q (Qc). A total of 36 local earthquakes of magnitude range 2.1-4.8 have been used for Qc estimation at central frequencies 1.5, 3.0, 6.0, 9.0, 12.0 and 18.0 Hz through eight lapse time windows from 25 to 60 s starting at double the time of the primary S-wave from the origin time. The estimated average frequency-dependent quality factor gives the relation Qc = 158 f^1.05, while the average Qc values vary from 210 at 1.5 Hz to 2861 at 18 Hz central frequencies. The observed coda quality factor is strongly dependent on frequency, which indicates that the region is seismically and tectonically active with high heterogeneity. The variation of the quality factor Qc has been estimated at different lapse times to observe its effect with depth. The estimated average frequency dependent relations of Qc vary from 85 f^1.16 to 216 f^0.91 for 25 to 60 s lapse window lengths, respectively. For the 25 s lapse time window, the average Qc value of the region varies from 131 ± 36 at 1.5 Hz to 2298 ± 397 at 18 Hz, while for the 60 s lapse time window its variation is from 285 ± 95 at 1.5 Hz to 2868 ± 336 at 18 Hz of central frequency. The variation of Qc with frequency and lapse time shows that the upper crustal layers are seismically more active compared to the lower lithosphere. The decreasing value of the frequency parameter with increasing lapse time shows that the lithosphere acquires homogeneity with depth.
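
    The frequency-dependent relation quoted above is obtained by fitting a power law to per-frequency Qc estimates; a minimal sketch with placeholder values (only the 1.5 and 18 Hz values echo the abstract, the rest are made up) is:

        import numpy as np

        # Central frequencies (Hz) and placeholder Qc estimates at one lapse-time window
        f  = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 18.0])
        Qc = np.array([210.0, 430.0, 900.0, 1350.0, 1800.0, 2861.0])

        # Fit log10(Qc) = log10(Q0) + n * log10(f)
        n, logQ0 = np.polyfit(np.log10(f), np.log10(Qc), 1)
        print(f"Qc ~ {10**logQ0:.0f} * f^{n:.2f}")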

  3. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    USGS Publications Warehouse

    Aagaard, B.T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake, making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  4. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  5. BEAM LOSS ESTIMATES AND CONTROL FOR THE BNL NEUTRINO FACILITY.

    SciTech Connect

    WENG, W.-T.; LEE, Y.Y.; RAPARIA, D.; TSOUPAS, N.; BEEBE-WANG, J.; WEI, J.; ZHANG, S.Y.

    2005-05-16

    The requirement for low beam loss is very important both to protect the beam components, and to make hands-on maintenance possible. In this report, the design considerations for achieving high intensity and low loss will be presented. We start by specifying the beam loss limit at every physical process, followed by the proper design and parameters for realizing the required goals. The processes considered in this paper include the emittance growth in the linac, the H⁻ injection, the transition crossing, the coherent instabilities and the extraction losses.

  6. Stable isotope values in coastal sediment estimate subsidence near Girdwood during the 1964 great Alaska earthquake

    NASA Astrophysics Data System (ADS)

    Bender, A. M.; Witter, R. C.; Rogers, M.; Saenger, C. P.

    2013-12-01

    Subsidence during the Mw 9.2, 1964 great Alaska earthquake lowered Turnagain Arm near Girdwood, Alaska by ~1.5m and caused rapid relative sea-level (RSL) rise that shifted estuary mud flats inland over peat-forming wetlands. Sharp mud-over-peat contacts record these environmental shifts at sites along Turnagain Arm including Bird Point, 11km west of Girdwood. Transfer functions based on changes in intertidal microfossil populations across these contacts accurately estimate earthquake subsidence at Girdwood, but poor preservation of microfossils hampers this method at other sites in Alaska. We test a new method that employs compositions of stable carbon and nitrogen isotopes in intertidal sediments as proxies for elevation. Because marine sediment sources are expected to have higher δ13C and δ15N than terrestrial sources, we hypothesize that these values should decrease with elevation in modern intertidal sediment, and should also be more positive in estuarine mud above sharp contacts that record RSL rise than in peaty sediment below. We relate δ13C and δ15N values above and below the 1964 mud/peat contact to values in modern sediment of known elevation, and use these values qualitatively to indicate sediment source, and quantitatively to estimate the amount of RSL rise across the contact. To establish a site-specific sea level datum, we deployed a pressure transducer and compensatory barometer to record a 2-month tide series at Bird Point. We regressed the high tides from this series against corresponding NOAA verified high tides at Anchorage (~50km west of Bird Point) to calculate a high water datum within ±0.14 m standard error (SE). To test whether or not modern sediment isotope values decrease with elevation, we surveyed a 60-m-long modern transect, sampling surface sediment at ~0.10m vertical intervals. Results from this transect show a decrease of 4.64‰ in δ13C and 3.97‰ in δ15N between tide flat and upland sediment. To evaluate if δ13C and δ15N

  7. The energy radiated by the 26 December 2004 Sumatra-Andaman earthquake estimated from 10-minute P-wave windows

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2007-01-01

    The rupture process of the Mw 9.1 Sumatra-Andaman earthquake lasted for approximately 500 sec, nearly twice as long as the teleseismic time windows between the P and PP arrival times generally used to compute radiated energy. In order to measure the P waves radiated by the entire earthquake, we analyze records that extend from the P-wave to the S-wave arrival times from stations at distances Δ > 60°. These 8- to 10-min windows contain the PP, PPP, and ScP arrivals, along with other multiply reflected phases. To gauge the effect of including these additional phases, we form the spectral ratio of the source spectrum estimated from extended windows (between T_P and T_S) to the source spectrum estimated from normal windows (between T_P and T_PP). The extended windows are analyzed as though they contained only the P-pP-sP wave group. We analyze four smaller earthquakes that occurred in the vicinity of the Mw 9.1 mainshock, with similar depths and focal mechanisms. These smaller events range in magnitude from an Mw 6.0 aftershock of 9 January 2005 to the Mw 8.6 Nias earthquake that occurred to the south of the Sumatra-Andaman earthquake on 28 March 2005. We average the spectral ratios for these four events to obtain a frequency-dependent operator for the extended windows. We then correct the source spectrum estimated from the extended records of the 26 December 2004 mainshock to obtain a complete or corrected source spectrum for the entire rupture process (~600 sec) of the great Sumatra-Andaman earthquake. Our estimate of the total seismic energy radiated by this earthquake is 1.4 × 10^17 J. When we compare the corrected source spectrum for the entire earthquake to the source spectrum from the first ~250 sec of the rupture process (obtained from normal teleseismic windows), we find that the mainshock radiated much more seismic energy in the first half of the rupture process than in the second half, especially over the period range from 3 sec to 40 sec.

  8. Estimation over Unreliable Communication Links without Arrival Information of Packet Losses

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Djouadi, Seddik M.; Kuruganti, Teja; Nutaro, James J.; Drira, Anis

    2009-03-01

    This paper considers the estimation of a linear stochastic discrete-time system through an unreliable network. Packet losses from the sensor to the estimator are assumed to follow a Bernoulli distribution, while information about whether a packet was lost at any given time is assumed to be unavailable at the receiver. A new estimator is derived from the Kalman filter by using the probability of packet arrivals. It is shown that the new estimator provides a good estimate under unreliable communications with packet losses. The estimator is used successfully in a Linear Quadratic Gaussian (LQG) controller to stabilize an inverted pendulum system and is compared to the standard Kalman filter.
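
    One simple way to fold a Bernoulli arrival probability into a scalar Kalman filter when the receiver cannot tell whether a packet was lost is to treat the received value as a measurement with multiplicative Bernoulli noise; the sketch below illustrates that idea under assumed system parameters and a lost-packet-reads-zero convention, and is not the estimator derived in the paper:

        import numpy as np

        rng = np.random.default_rng(4)
        a, c = 0.95, 1.0     # scalar system x_{k+1} = a x_k + w,  y_k = c x_k + v
        q, r = 0.1, 0.5      # process / measurement noise variances
        lam = 0.8            # Bernoulli packet-arrival probability

        x, x_hat, P = 0.0, 0.0, 1.0
        for k in range(200):
            # true system and (possibly lost) measurement; a lost packet is read as 0 here,
            # and the estimator does NOT know whether the packet arrived
            x = a * x + rng.normal(0.0, np.sqrt(q))
            arrived = rng.random() < lam
            y_recv = c * x + rng.normal(0.0, np.sqrt(r)) if arrived else 0.0

            # prediction
            x_hat = a * x_hat
            P = a * P * a + q

            # measurement model with unknown Bernoulli arrival: E[y|x] = lam*c*x,
            # Var[y|x] ~ lam*r + lam*(1-lam)*(c*x)^2, approximated at x_hat
            h = lam * c
            R_eff = lam * r + lam * (1.0 - lam) * (c * x_hat) ** 2
            K = P * h / (h * P * h + R_eff)
            x_hat = x_hat + K * (y_recv - h * x_hat)
            P = (1.0 - K * h) * P

        print(x, x_hat)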

  9. Study of an image restoration method based on Poisson-maximum likelihood estimation method for earthquake ruin scene

    NASA Astrophysics Data System (ADS)

    Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong

    2014-09-01

    An image restoration method based on the Poisson maximum-likelihood estimation (PMLE) method for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, an automatic acceleration method is used to speed up the iterative process, and an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and the PSNR (peak signal-to-noise ratio) are used to validate the restoration effect of the method. The simulation results show that the number of iterations affects the PSNR of the processed image and the operation time, and that the method can restore images of earthquake ruin scenes effectively and is practical.
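
    In its basic form the Poisson maximum-likelihood iteration is the Richardson-Lucy multiplicative update; a one-dimensional sketch (without the acceleration step used in the paper) is:

        import numpy as np

        def richardson_lucy(blurred, psf, n_iter=50):
            """Basic Richardson-Lucy / Poisson-ML deconvolution (1-D, no acceleration)."""
            psf_flip = psf[::-1]
            estimate = np.full_like(blurred, blurred.mean())
            for _ in range(n_iter):
                reblurred = np.convolve(estimate, psf, mode="same")
                ratio = blurred / np.maximum(reblurred, 1e-12)
                estimate *= np.convolve(ratio, psf_flip, mode="same")
            return estimate

        # Toy example: blur a spike train with a boxcar PSF and add Poisson noise
        rng = np.random.default_rng(5)
        truth = np.zeros(200)
        truth[[50, 120, 121]] = [100.0, 60.0, 60.0]
        psf = np.ones(7) / 7.0
        observed = rng.poisson(np.convolve(truth, psf, mode="same")).astype(float)
        restored = richardson_lucy(observed, psf, n_iter=100)
        print(restored[[50, 120]])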

  10. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.

  11. Using safety inspection data to estimate shaking intensity for the 1994 Northridge earthquake

    USGS Publications Warehouse

    Thywissen, K.; Boatwright, J.

    1998-01-01

    We map the shaking intensity suffered in Los Angeles County during the 17 January 1994, Northridge earthquake using municipal safety inspection data. The intensity is estimated from the number of buildings given red, yellow, or green tags, aggregated by census tract. Census tracts contain from 200 to 4000 residential buildings and have an average area of 6 km2 but are as small as 2 and 1 km2 in the most densely populated areas of the San Fernando Valley and downtown Los Angeles, respectively. In comparison, the zip code areas on which standard MMI intensity estimates are based are six times larger, on average, than the census tracts. We group the buildings by age (before and after 1940 and 1976), by number of housing units (one, two to four, and five or more), and by construction type, and we normalize the tags by the total number of similar buildings in each census tract. We analyze the seven most abundant building categories. The fragilities (the fraction of buildings in each category tagged within each intensity level) for these seven building categories are adjusted so that the intensity estimates agree. We calibrate the shaking intensity to correspond with the modified Mercalli intensities (MMI) estimated and compiled by Dewey et al. (1995); the shapes of the resulting isoseismals are similar, although we underestimate the extent of the MMI = 6 and 7 areas. The fragility varies significantly between different building categories (by factors of 10 to 20) and building ages (by factors of 2 to 6). The post-1940 wood-frame multi-family (≥5 units) dwellings make up the most fragile building category, and the post-1940 wood-frame single-family dwellings make up the most resistant building category.

  12. Spatial and temporal variations of radiated seismic energy estimated for repeating earthquakes in northeastern Japan; implication for healing process

    NASA Astrophysics Data System (ADS)

    Ara, M.; Ide, S.; Uchida, N.

    2015-12-01

    Repeating earthquakes are shear slip on the plate interface, and helpful to monitor long-term deformation in subduction zones. Previous studies have measured the size of repeating earthquakes mainly using seismic moment, to calculate slip amount in each event. As another measure of event size, seismic energy may provide some information related to the frictional property on the plate interface. We estimated radiated seismic energy for 620 repeating earthquakes of MJMA from 2.5 to 5.9, detected by the method of Uchida and Matsuzawa [2013], in the Tohoku-Oki region. The study period is from 2001 to 2013, extending before and after the 2011 Tohoku-Oki earthquake of Mw 9, which is also accompanied with large afterslip [e.g., Ozawa et al., 2012]. The seismograms recorded by NIED Hi-net were used. We measured coda wave amplitude by the method of Mayeda et al. [2003] and estimated source spectra and radiated seismic energy by the method of Baltay et al. [2010] after slight modifications. The estimated scaled energy, the ratio between radiated seismic energy and seismic moment, shows a slight increase with seismic moment. The scaled energy increases with depth, while its temporal change before and after the Tohoku-Oki earthquake is not systematic. The scaled energy also increases with the inter-event time of repeating earthquakes. This might be explained by the difference of fault strength, proportional to the logarithm of time. In addition to this healing relation, scaling relationship between seismic moment and the inter-event time of repeating earthquake is well known [Nadeau and Johnson, 1998]. From these healing and scaling relationships, it is expected that scaled energy is proportional to the logarithm of seismic moment. This prediction is generally consistent with our observation, though the moment dependency is too small to be recognized as power or log. This healing-related scaling may be applicable to general earthquakes, and might be associated with the

  13. Effects of tag loss on direct estimates of population growth rate

    USGS Publications Warehouse

    Rotella, J.J.; Hines, J.E.

    2005-01-01

    The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).

  14. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
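
    The core of the impulse response decay method is a straight-line fit to the decaying log-envelope of a band-limited impulse response, with the decay rate in dB/s converted to a loss factor via eta = DR / (27.3 f); a minimal sketch on a synthetic exponential decay (assumed band centre frequency and loss factor) is:

        import numpy as np

        fs, f_band = 8192.0, 500.0          # sample rate and band centre frequency (Hz)
        eta_true = 0.02                     # loss factor used to synthesize the decay
        t = np.arange(0, 2.0, 1.0 / fs)

        # Synthetic band-limited impulse response envelope: exp(-eta*pi*f*t)
        envelope = np.exp(-eta_true * np.pi * f_band * t)
        level_db = 20.0 * np.log10(envelope)

        # Fit the initial decay (0 to -25 dB here) with a straight line to get dB/s
        mask = level_db > -25.0
        slope, _ = np.polyfit(t[mask], level_db[mask], 1)
        decay_rate = -slope                               # dB/s
        eta_est = decay_rate / (27.3 * f_band)
        print(eta_est)   # ~0.02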

  15. Strong near-trench locking and its temporal change in the rupture area of the 2011 Tohoku-oki earthquake estimated from cumulative slip and slip vectors of interplate earthquakes

    NASA Astrophysics Data System (ADS)

    Uchida, N.; Hasegawa, A.; Matsuzawa, T.

    2012-12-01

    The 2011 Mw 9.0 Tohoku-oki earthquake is characterized by large near-trench slip that excited disastrous Tsunami. It is of great importance to estimate the coupling state near the trench to understand temporal evolution of interplate coupling near the earthquake source as well as for the assessment of tsunami risk along the trench. However, the coupling states at the near trench areas far from the land are usually not well constrained. The cumulative offset of small repeating earthquakes reflects the in situ slip history on a fault and the slip vectors of interplate earthquakes reflect heterogeneous distribution of coupling on the plate boundary. In this study, we use the repeating earthquake and slip vector data to estimate spatio-temporal change in slip and coupling in and around the source area of the Tohoku-oki earthquake near the Japan trench. The repeating earthquake data for 27 years before the Tohoku-oki earthquake show absence of repeating earthquake groups in the large-coseismic-slip area and low and variable slip rates in the moderate-coseismic-slip region surrounding the large-slip. The absence of repeaters itself could have been explained by both models with very weak coupling and very strong coupling. However, the rotation of slip vectors of interplate earthquakes at the deeper extension of the large-coseismic-slip suggest the plate boundary was locked in the near-trench area before the earthquake, which is consistent with the estimation by Hasegawa et al. (2012) based on stress tensor analysis of the upper plate events near the trench axis. The repeating earthquake data, on the other hand, show small but distinct increases in the slip rate in the 3-5 years before the earthquake near the area of large coseismic slip suggesting preseismic unfastening of the locked area in the last stage of the earthquake cycle. After the Tohoku-oki earthquake, repeating earthquakes activity in the main rupture area disappeared almost completely and slip vectors of

  16. The tsunami source area of the 2003 Tokachi-oki earthquake estimated from tsunami travel times and its relationship to the 1952 Tokachi-oki earthquake

    USGS Publications Warehouse

    Hirata, K.; Tanioka, Y.; Satake, K.; Yamaki, S.; Geist, E.L.

    2004-01-01

    We estimate the tsunami source area of the 2003 Tokachi-oki earthquake (Mw 8.0) from observed tsunami travel times at 17 Japanese tide gauge stations. The estimated tsunami source area (~1.4 × 10^4 km2) coincides with the western half of the ocean-bottom deformation area (~2.52 × 10^4 km2) of the 1952 Tokachi-oki earthquake (Mw 8.1), previously inferred from tsunami waveform inversion. This suggests that the 2003 event ruptured only the western half of the 1952 rupture extent. Geographical distribution of the maximum tsunami heights in 2003 differs significantly from that of the 1952 tsunami, supporting this hypothesis. Analysis of first-peak tsunami travel times indicates that a major uplift of the ocean-bottom occurred approximately 30 km to the NNW of the mainshock epicenter, just above a major asperity inferred from seismic waveform inversion. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.

  17. Estimates of stress drop and crustal tectonic stress from the 27 February 2010 Maule, Chile, earthquake: Implications for fault strength

    USGS Publications Warehouse

    Luttrell, K.M.; Tong, X.; Sandwell, D.T.; Brooks, B.A.; Bevis, M.G.

    2011-01-01

    The great 27 February 2010 Mw 8.8 earthquake off the coast of southern Chile ruptured a ~600 km length of subduction zone. In this paper, we make two independent estimates of shear stress in the crust in the region of the Chile earthquake. First, we use a coseismic slip model constrained by geodetic observations from interferometric synthetic aperture radar (InSAR) and GPS to derive a spatially variable estimate of the change in static shear stress along the ruptured fault. Second, we use a static force balance model to constrain the crustal shear stress required to simultaneously support observed fore-arc topography and the stress orientation indicated by the earthquake focal mechanism. This includes the derivation of a semianalytic solution for the stress field exerted by surface and Moho topography loading the crust. We find that the deviatoric stress exerted by topography is minimized in the limit when the crust is considered an incompressible elastic solid, with a Poisson ratio of 0.5, and is independent of Young's modulus. This places a strict lower bound on the critical stress state maintained by the crust supporting plastically deformed accretionary wedge topography. We estimate that the coseismic shear stress change from the Maule event ranged from −6 MPa (stress increase) to 17 MPa (stress drop), with a maximum depth-averaged crustal shear-stress drop of 4 MPa. We separately estimate that the plate-driving forces acting in the region, regardless of their exact mechanism, must contribute at least 27 MPa trench-perpendicular compression and 15 MPa trench-parallel compression. This corresponds to a depth-averaged shear stress of at least 7 MPa. The comparable magnitude of these two independent shear stress estimates is consistent with the interpretation that the section of the megathrust fault ruptured in the Maule earthquake is weak, with the seismic cycle relieving much of the total sustained shear stress in the crust. Copyright 2011 by the American

  18. Uncertainty in Climatology-Based Estimates of Soil Water Infiltration Losses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Local climatology is often used to estimate infiltration losses at the field scale. The objective of this work was to assess the uncertainty associated with such estimates. We computed infiltration losses from the water budget of a soil layer from monitoring data on water flux values at the soil su...

  19. Optimized sensor location for estimating story-drift angle for tall buildings subject to earthquakes

    NASA Astrophysics Data System (ADS)

    Ozawa, Sayuki; Mita, Akira

    2016-04-01

    Structural Health Monitoring (SHM) is a technology that can quantitatively evaluate the extent of deterioration or damage in a building. Most SHM systems utilize only a few sensors, and the sensors are placed at equal intervals, including the roof. However, the placement of the sensors has not been verified. Therefore, in this study, the optimal location of the sensors is studied for estimating the inter-story drift angle, which is used in immediate diagnosis after an earthquake. This study proposes a practical optimal sensor location method after testing all possible sensor location combinations. The simulation results for all location patterns showed that placing a sensor on the roof is not always optimal. This result is practically useful, as it is difficult to place a sensor on the roof in most cases. The Modal Assurance Criterion (MAC) is one practical optimal sensor location method. We propose the Mass Modal Assurance Criterion (MAC*), which incorporates the mass matrix of the building into the MAC. Either the mass matrix or the stiffness matrix needs to be considered for the orthogonality of the mode vectors; the standard MAC does not consider this condition. The sensor locations determined by MAC* were superior to those of the previous method, MAC. In this study, important knowledge about sensor placement was provided for implementing SHM systems.

  20. Magnitude estimates of two large aftershocks of the 16 December 1811 New Madrid earthquake

    USGS Publications Warehouse

    Hough, S.E.; Martin, S.

    2002-01-01

    The three principal New Madrid mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for either the mainshocks or the aftershocks, available historical accounts do provide information that can be used to estimate magnitudes and locations for the large events. In this article we investigate two of the largest aftershocks: one near dawn following the first mainshock on 16 December 1811, and one near midday on 17 December 1811. We reinterpret original felt reports to obtain a set of 48 and 20 modified Mercalli intensity values of the two aftershocks, respectively. For the dawn aftershock, we infer a Mw of approximately 7.0 based on a comparison of its intensities with those of the smallest New Madrid mainshock. Based on a detailed account that appears to describe near-field ground motions, we further propose a new fault rupture scenario for the dawn aftershock. We suggest that the aftershock had a thrust mechanism and occurred on a southeastern limb of the Reelfoot fault. For the 17 December 1811 aftershock, we infer a Mw of approximately 6.1 ± 0.2. This value is determined using the method of Bakun et al. (2002), which is based on a new calibration of intensity versus distance for earthquakes in central and eastern North America. The location of this event is not well constrained, but the available accounts suggest an epicenter beyond the southern end of the New Madrid Seismic Zone.

  1. Source rupture processes of the 2016 Kumamoto, Japan, earthquakes estimated from strong-motion waveforms

    NASA Astrophysics Data System (ADS)

    Kubo, Hisahiko; Suzuki, Wataru; Aoi, Shin; Sekiguchi, Haruko

    2016-10-01

    The detailed source rupture process of the M 7.3 event (April 16, 2016, 01:25, JST) of the 2016 Kumamoto, Japan, earthquakes was derived from strong-motion waveforms using multiple-time-window linear waveform inversion. Based on the observations of surface ruptures, the spatial distribution of aftershocks, and the geodetic data, a realistic curved fault model was developed for source-process analysis of this event. The seismic moment and maximum slip were estimated as 5.5 × 10¹⁹ N m (Mw 7.1) and 3.8 m, respectively. The source model of the M 7.3 event had two significant ruptures. One rupture propagated toward the northeastern shallow region at 4 s after rupture initiation and continued with large slips to approximately 16 s. This rupture caused a large slip region 10-30 km northeast of the hypocenter that reached the caldera of Mt. Aso. Another rupture propagated toward the surface from the hypocenter at 2-6 s and then propagated toward the northeast along the near surface at 6-10 s. A comparison with the result of using a single fault plane model demonstrated that the use of the curved fault model led to improved waveform fit at the stations south of the fault. The source process of the M 6.5 event (April 14, 2016, 21:26, JST) was also estimated. In the source model obtained for the M 6.5 event, the seismic moment was 1.7 × 10¹⁸ N m (Mw 6.1), and the rupture with large slips propagated from the hypocenter to the surface along the north-northeast direction at 1-6 s. The results in this study are consistent with observations of the surface ruptures.

  2. Source parameters of the 2008 Bukavu-Cyangugu earthquake estimated from InSAR and teleseismic data

    NASA Astrophysics Data System (ADS)

    D'Oreye, Nicolas; González, Pablo J.; Shuler, Ashley; Oth, Adrien; Bagalwa, Louis; Ekström, Göran; Kavotha, Déogratias; Kervyn, François; Lucas, Celia; Lukaya, François; Osodundu, Etoy; Wauthier, Christelle; Fernández, José

    2011-02-01

    Earthquake source parameter determination is of great importance for hazard assessment, as well as for a variety of scientific studies concerning regional stress and strain release and volcano-tectonic interaction. This is especially true for poorly instrumented, densely populated regions such as encountered in Africa, where even the distribution of seismicity remains poorly documented. In this paper, we combine data from satellite radar interferometry (InSAR) and teleseismic waveforms to determine the source parameters of the Mw 5.9 earthquake that occurred on 2008 February 3 near the cities of Bukavu (DR Congo) and Cyangugu (Rwanda). This was the second largest earthquake ever to be recorded in the Kivu basin, a section of the western branch of the East African Rift (EAR). This earthquake is of particular interest due to its shallow depth and proximity to active volcanoes and Lake Kivu, which contains high concentrations of dissolved carbon dioxide and methane. The shallow depth and possible similarity with dyking events recognized in other parts of EAR suggested the potential association of the earthquake with a magmatic intrusion, emphasizing the necessity of accurate source parameter determination. In general, we find that estimates of fault plane geometry, depth and scalar moment are highly consistent between teleseismic and InSAR studies. Centroid-moment-tensor (CMT) solutions locate the earthquake near the southern part of Lake Kivu, while InSAR studies place it under the lake itself. CMT solutions characterize the event as a nearly pure double-couple, normal faulting earthquake occurring on a fault plane striking 350° and dipping 52° east, with a rake of -101°. This is consistent with locally mapped faults, as well as InSAR data, which place the earthquake on a fault striking 355° and dipping 55° east, with a rake of -98°. The depth of the earthquake was constrained by a joint analysis of teleseismic P and SH waves and the CMT data set, showing that

  3. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.

  4. Earthquake Analysis.

    ERIC Educational Resources Information Center

    Espinoza, Fernando

    2000-01-01

    Indicates the importance of the development of students' measurement and estimation skills. Analyzes earthquake data recorded at seismograph stations and explains how to read and modify the graphs. Presents an activity for student evaluation. (YDS)

  5. Real-Time Estimation of Earthquake Location, Magnitude and Rapid Shake map Computation for the Campania Region, Southern Italy

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Convertito, V.; de Matteis, R.; Iannaccone, G.; Lancieri, M.; Lomax, A.; Satriano, C.

    2005-12-01

    introducing an evolutionary strategy aimed at obtaining a progressively refined estimate of the maximum probability volume as time goes on. The real-time magnitude estimate will take advantage of the high spatial density of the network in the source region and the wide dynamic range of the installed instruments. Based on the offline analysis of high-quality strong-motion databases recorded in Italy and worldwide, several methods will be checked and validated, using different observed quantities (peak amplitude, dominant frequency, square velocity integral, etc.) measured on seismograms as a function of time. Following the ElarmS methodology (Allen, 2004), peak ground attenuation relations can be used to predict the distribution of maximum ground shaking, as updated estimates of earthquake location and magnitude become progressively available from the Early Warning system starting from the time of first P-wave detection. As measurements of peak ground quantities for the current earthquake become available from the network, these values are progressively used to adjust an "ad hoc" attenuation relation determined for the Campania region using the stochastic approach proposed by Boore (1993).

  6. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524
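
    As a back-of-the-envelope check of the recurrence arithmetic described above, the sketch below divides an assumed characteristic-event moment by the quoted accumulation rate of (2.7 ± 0.3) × 10¹⁷ N m/yr. The characteristic moment value is an illustrative assumption, not a number taken from the paper.

    ```python
    # Hedged sketch: recurrence interval = characteristic moment / accumulation rate.
    # The accumulation rate is quoted in the abstract; the characteristic-event
    # moment below is an illustrative assumption.
    moment_rate = 2.7e17        # N m / yr (from the abstract)
    moment_rate_err = 0.3e17    # N m / yr
    char_moment = 1.05e21       # N m, assumed moment of a 2008-type event

    interval = char_moment / moment_rate
    lo = char_moment / (moment_rate + moment_rate_err)
    hi = char_moment / (moment_rate - moment_rate_err)
    print(f"recurrence ~ {interval:.0f} yr (range {lo:.0f}-{hi:.0f} yr)")
    ```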

  7. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    PubMed Central

    Zhang, Shimin

    2013-01-01

    Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524

  8. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.

  9. A teleseismic study of the 2002 Denali fault, Alaska, earthquake and implications for rapid strong-motion estimation

    USGS Publications Warehouse

    Ji, C.; Helmberger, D.V.; Wald, D.J.

    2004-01-01

    Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, developed in successive phases, progressively improve the fit to the waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography and further (Phase III) by using the calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.

  10. Methodology for estimating crop loss from acidic deposition

    SciTech Connect

    Irving, P.M.

    1982-01-01

    Crop losses affect the production, availability and cost of food, and therefore have important economic, social, and political implications especially during this period of rapid world population growth. The fact that air-borne pollutants affect vegetative growth has been known for more than a century. Recently, the acidic deposition phenomenon has gained increasing attention, especially when implicated as a factor potentially responsible for crop yield losses. Experimental approaches utilized in traditional pollution effects research include: field surveys, sensitivity classification, dose-response studies, and regional-impact evaluation. Acid rain is a unique pollutant having special problems associated with researching its effects. For example, the description of dose for this pollutant should include rain chemistry (not just pH), rainfall rate, duration of event, total deposition, droplet size, etc. These parameters must also be considered when simulating rain in controlled studies. Due to the potential for interactions with biotic and abiotic entities, factorial research designs and multivariate analyses may be necessary for investigations of acid-rain impacts on crops. Results from well-planned mechanistic studies and dose-response experiments may be used to predict effects (both positive and negative), assess economic impacts, and establish tolerance thresholds for this form of pollution.

  11. Loss estimation and damage forecast using database provided

    NASA Astrophysics Data System (ADS)

    Pyrchenko, V.; Byrova, V.; Petrasov, A.

    2009-04-01

    A wide spectrum of natural hazards is observed across Russian territory. This makes it necessary to investigate numerous occurrences of dangerous natural processes and to study the mechanisms of their development and their interaction with each other (synergetic amplification or the emergence of new hazards) in order to forecast possible losses. Staff of the Laboratory of Geological Risk Analysis, IEG RAS, have created a database of natural hazard occurrences in Russian territory, containing information on 1310 cases recorded during 1991-2008. The wide range of sources used created certain difficulties in building the database and required the development of a new technique for unifying information received at different times. One element of this technique is a classification of the negative consequences of natural hazard occurrences, accounting for deaths, injuries, other victims, and direct economic damage. The database has made it possible to track the dynamics of natural hazards and the emergency situations (ES) they cause over the period considered, and to identify patterns in their development across Russian territory in time and space. It provides a basis for creating the theoretical and methodological foundations for forecasting possible losses, with a stated degree of probability, for Russia as a whole and for its separate regions, supporting adequate, prompt, and efficient pre-emptive decision making in the future.

  12. Pictorial estimation of blood loss in a birthing pool--an aide memoire.

    PubMed

    Goodman, Anushia

    2015-04-01

    The aim of this article is to share some photographic images to help midwives visually estimate blood loss at water births. PubMed, CINAHL and MEDLINE databases were searched for relevant research. There is little evidence to inform the practice of visually estimating blood loss in water, as discussed further on in the article. This article outlines a simulation where varying amounts of blood were poured into a birthing pool, captured by photo images. Photo images of key amounts like 150 ml, 300 ml and 450 ml can be useful visual markers when estimating blood loss at water births. The speed of spread across the pool may be a significant factor in assessing blood loss. The author recommends that midwives and educators embark on similar simulations to inform their skill in estimating blood loss at water births.

  13. Long-period earthquake simulations in the Wasatch Front, UT: misfit characterization and ground motion estimates

    USGS Publications Warehouse

    Moschetti, Morgan P.; Ramírez-Guzmán, Leonardo

    2011-01-01

    In this research we characterize the goodness-of-fit between observed and synthetic seismograms from three small magnitude (M3.6-4.5) earthquakes in the region using the Wasatch Front community velocity model (WCVM) in order to determine the ability of the WCVM to predict earthquake ground motions for scenario earthquake modeling efforts. We employ the goodness-of-fit algorithms and criteria of Olsen and Mayhew (2010). In focusing comparisons on the ground motion parameters that are of greatest importance in engineering seismology, we find that the synthetic seismograms calculated using the WCVM produce a fair fit to the observed ground motion records up to a frequency of 0.5 Hz for two of the modeled earthquakes and up to 0.1 Hz for one of the earthquakes. In addition to the reference seismic material model (WCVM), we carry out earthquake simulations using material models with perturbations to the regional seismic model and with perturbations to the deep sedimentary basins. Simple perturbations to the regional seismic velocity model and to the seismic velocities of the sedimentary basin result in small improvements in the observed misfit but do not indicate a significantly improved material model. Unresolved differences between the observed and synthetic seismograms are likely due to un-modeled heterogeneities and incorrect basin geometries in the WCVM. These differences suggest that ground motion prediction accuracy from deterministic modeling varies across the region and further efforts to improve the WCVM are needed.

  14. Combined UAVSAR and GPS Estimates of Fault Slip for the M 6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Donnellan, A.; Parker, J. W.; Hawkins, B.; Hensley, S.; Jones, C. E.; Owen, S. E.; Moore, A. W.; Wang, J.; Pierce, M. E.; Rundle, J. B.

    2014-12-01

    The South Napa to Santa Rosa area has been observed with NASA's UAVSAR since late 2009 as part of an experiment to monitor areas identified as having a high probability of an earthquake. The M 6.0 South Napa earthquake occurred on 24 August 2014. The area was flown 29 May 2014, preceding the earthquake, and again on 29 August 2014, five days after the earthquake. The UAVSAR results show slip on a single fault at the south end of the rupture near the epicenter of the event. The rupture branches out into multiple faults further north near the Napa area. A combined inversion of rapid GPS results and the unwrapped UAVSAR interferogram indicates nearly pure strike-slip motion. Using this assumption, the UAVSAR data show horizontal right-lateral slip across the fault of 19 cm at the south end of the rupture, increasing to 70 cm northward over a distance of 6.5 km. The joint inversion indicates slip of ~30 cm on a network of sub-parallel faults concentrated in a zone about 17 km long. The lower depths of the faults are 5-8.5 km. The eastern two sub-parallel faults break the surface, while three faults to the west are buried at depths ranging from 2-6 km, with deeper depths to the north and west. The geodetic moment release is equivalent to a M 6.1 event. Additional ruptures are observed in the interferogram, but the inversions suggest that they represent superficial slip that does not contribute to the overall moment release.

  15. Estimating Serious Decompression Sickness after Loss of Spacecraft Atmosphere

    NASA Technical Reports Server (NTRS)

    Gernhardt, Michael; Abercromby, Andrew F. J.

    2016-01-01

    INTRODUCTION: Pressure suits are worn inside spacecraft to protect crewmembers in the event of contamination or depressurization of the spacecraft cabin. Protection against serious (Type II) decompression sickness (DCS) in the event of an unplanned rapid cabin depressurization depends on providing adequate suit pressure to crewmembers because there is no opportunity for oxygen prebreathe. METHODS: A model was developed using literature reports from 41 altitude chamber tests totaling 3,256 decompressions (1,445 including exercise at altitude) with 282 cases of serious DCS. All data involved prebreathe durations < 30 min followed by ≤ 120 min exposures at 13.8 to 34.5 kPa (2 to 5 psia) in young men. A time-dependent index of decompression stress was calculated for the historical decompressions using an existing Tissue Bubble Dynamics Model. This index, in combination with physical activity level at altitude (resting vs. active), provided significant prediction of serious DCS in the dataset when used in a logistic regression model, which was then used to estimate serious DCS risk for a range of hypothetical suit pressures and decompression scenarios. RESULTS: The probability of one or more cases of serious DCS in a four-person crew was estimated as 0.73 assuming initial saturation at 1 atmosphere, no prebreathe, ascent to 24.1 kPa (3.5 psia) in 30 sec, and 120 min of activity at 3.5 psia. The estimated probability reduced to 0.36 and 0.16 for equivalent exposures at 31.0 and 40.0 kPa (4.5 and 5.8 psia), respectively. Extrapolation to exposures longer than 120 min suggests further increases in serious DCS risk. DISCUSSION: The need to operate critical spacecraft functions coupled with delayed access to hyperbaric treatment further increases the risk to crewmember safety if serious DCS symptoms are experienced following cabin depressurization. A suit pressure of 5.8 psia provides significantly greater protection to crewmembers than lower pressure alternatives. Lower
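
    The crew-level probabilities quoted above follow from combining an individual risk with the crew size; a minimal sketch of that step is given below, assuming independent and identical per-person risks. The per-person probabilities are back-calculated for illustration and are not outputs of the authors' logistic regression model.

    ```python
    # Hedged sketch: P(at least one serious DCS case) in a crew of n, assuming
    # independent, identical per-person risks. Per-person values are illustrative
    # back-calculations, not model outputs.
    def crew_risk(p_individual: float, crew_size: int = 4) -> float:
        return 1.0 - (1.0 - p_individual) ** crew_size

    for suit_psia, p_ind in [(3.5, 0.28), (4.5, 0.11), (5.8, 0.042)]:
        print(f"{suit_psia} psia: P(>=1 of 4) ~ {crew_risk(p_ind):.2f}")
    ```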

  16. Estimating the Circulation and Net Plasma Loss from Ionospheric Outflow

    NASA Astrophysics Data System (ADS)

    Haaland, S.; Engwall, E.; Eriksson, A. I.; Nilsson, H.; Foerster, M.; Lybekk, B.; Svenes, K.; Pedersen, A.

    2010-12-01

    An important source of magnetospheric plasma is outflow from the terrestrial ionosphere. Low-energy ions travel along the magnetic field lines, enter the magnetospheric lobes, and are convected towards the tail plasma sheet. Results from Cluster indicate that the field-aligned outflow velocity is sometimes much higher than the convection towards the central plasma sheet. A substantial amount of plasma therefore escapes downtail without ever reaching the central plasma sheet. In this work, we use Cluster measurements of the ionospheric outflow and lobe convection velocities combined with a model of the magnetic field in an attempt to quantify the plasma loss for various magnetospheric conditions. The results show that both the circulation of plasma and the tailward escape of ions increase significantly during disturbed magnetospheric conditions. For strong solar wind driving with a southward interplanetary magnetic field, also typically associated with high geomagnetic activity, most of the outflowing plasma is convected to the plasma sheet and recirculated. For periods with northward interplanetary magnetic field, the convection is nearly stagnant, whereas the outflow, although limited, still persists. During such conditions, the outflowing ions escape downtail and are lost into the solar wind.

  17. Equations for estimating horizontal response spectra and peak acceleration from western North American earthquakes: A summary of recent work

    USGS Publications Warehouse

    Boore, D.M.; Joyner, W.B.; Fumal, T.E.

    1997-01-01

    In this paper we summarize our recently-published work on estimating horizontal response spectra and peak acceleration for shallow earthquakes in western North America. Although none of the sets of coefficients given here for the equations are new, for the convenience of the reader and in keeping with the style of this special issue, we provide tables for estimating random horizontal-component peak acceleration and 5 percent damped pseudo-acceleration response spectra in terms of the natural, rather than common, logarithm of the ground-motion parameter. The equations give ground motion in terms of moment magnitude, distance, and site conditions for strike-slip, reverse-slip, or unspecified faulting mechanisms. Site conditions are represented by the shear velocity averaged over the upper 30 m, and recommended values of average shear velocity are given for typical rock and soil sites and for site categories used in the National Earthquake Hazards Reduction Program's recommended seismic code provisions. In addition, we stipulate more restrictive ranges of magnitude and distance for the use of our equations than in our previous publications. Finally, we provide tables of input parameters that include a few corrections to site classifications and earthquake magnitude (the corrections made a small enough difference in the ground-motion predictions that we chose not to change the coefficients of the prediction equations).
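
    The sketch below illustrates the general functional form used by equations of this kind (natural log of the ground-motion parameter as a function of moment magnitude, distance, and 30 m average shear velocity). The coefficient values are placeholders chosen for illustration only; they are not the published coefficients summarized in this paper.

    ```python
    # Hedged sketch of a generic GMPE of the form
    # ln Y = b1 + b2*(M-6) + b3*(M-6)**2 + b5*ln(r) + bv*ln(Vs30/Va),
    # with r = sqrt(r_jb**2 + h**2). All coefficients below are placeholders.
    import math

    def ln_ground_motion(mag, r_jb_km, vs30_ms,
                         b1=-0.3, b2=0.5, b3=0.0, b5=-0.8, bv=-0.4,
                         h_km=6.0, va_ms=1400.0):
        r = math.sqrt(r_jb_km ** 2 + h_km ** 2)
        return (b1 + b2 * (mag - 6.0) + b3 * (mag - 6.0) ** 2
                + b5 * math.log(r) + bv * math.log(vs30_ms / va_ms))

    # Example: M 6.5 event at 20 km on a soil site (Vs30 = 310 m/s).
    print(ln_ground_motion(6.5, 20.0, 310.0))
    ```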

  18. Workshop on continuing actions to reduce potential losses from future earthquakes in the Northeastern United States: proceedings of conference XXI

    SciTech Connect

    Hays, W.W.; Gori, P.L.

    1983-01-01

    This workshop was designed to define the earthquake threat in the eastern United States and to improve earthquake preparedness. Four major themes were addressed: (1) the nature of the earthquake threat in the northeast and what can be done to improve the state of preparedness; (2) increasing public awareness and concern for the earthquake hazard in the northeast; (3) improving the state of preparedness through scientific, engineering, and social science research; and (4) possible functions of one or more seismic safety organizations. Papers have been abstracted separately. (ACR)

  19. Revisiting borehole strain, typhoons, and slow earthquakes using quantitative estimates of precipitation-induced strain changes

    NASA Astrophysics Data System (ADS)

    Hsu, Ya-Ju; Chang, Yuan-Shu; Liu, Chi-Ching; Lee, Hsin-Ming; Linde, Alan T.; Sacks, Selwyn I.; Kitagawa, Genshio; Chen, Yue-Gau

    2015-06-01

    Taiwan experiences high deformation rates, particularly along its eastern margin where a shortening rate of about 30 mm/yr is experienced in the Longitudinal Valley and the Coastal Range. Four Sacks-Evertson borehole strainmeters have been installed in this area since 2003. Liu et al. (2009) proposed that a number of strain transient events, primarily coincident with low barometric pressure during passages of typhoons, were due to deep-triggered slow slip. Here we extend that investigation with a quantitative analysis of the strain responses to precipitation as well as barometric pressure and the Earth tides in order to isolate tectonic source effects. Estimates of the strain responses to barometric pressure and groundwater level changes for the different stations vary over the ranges -1 to -3 nanostrain/millibar (hPa) and -0.3 to -1.0 nanostrain/hPa, respectively, consistent with theoretical values derived using Hooke's law. Liu et al. (2009) noted that during some typhoons, including at least one with very heavy rainfall, the observed strain changes were consistent with only barometric forcing. By considering a more extensive data set, we now find that the strain response to rainfall is about -5.1 nanostrain/hPa. A larger strain response to rainfall compared to that to air pressure and water level may be associated with an additional strain from fluid pressure changes that take place due to infiltration of precipitation. Using a state-space model, we remove the strain response to rainfall, in addition to those due to air pressure changes and the Earth tides, and investigate whether corrected strain changes are related to environmental disturbances or tectonic-origin motions. The majority of strain changes attributed to slow earthquakes seem rather to be associated with environmental factors. However, some events show remaining strain changes after all corrections. These events include strain polarity changes during passages of typhoons (a characteristic that is

  20. Defeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the Century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  1. An Optimum Model to Estimate Path Losses for 400 MHz Band Land Mobile Radio

    NASA Astrophysics Data System (ADS)

    Miyashita, Michifumi; Terada, Takashi; Serizawa, Yoshizumi

    It is difficult to estimate path loss for land mobile radio using a single path loss model, such as the diffraction model or the Okumura model alone, when the mobile radio is used over a widespread area. Furthermore, high accuracy of the path loss estimation is needed when the radio system is digitized, because degradation of CNR due to interference deteriorates communications. In this paper, conventional path loss models, i.e. the diffraction model, the Okumura model and the two-ray model, were evaluated against 400 MHz land mobile radio field measurements, and a method of improving path loss estimation by using each of these conventional models selectively was proposed. The ratio of errors between -10 dB and +10 dB for the method applying the correction factors derived from our field measurements was 71.41%, while the ratios for the conventional diffraction and Okumura models without any correction factors were 26.71% and 49.42%, respectively.
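
    The accuracy metric quoted above (the share of prediction errors falling between -10 dB and +10 dB) and the two-ray model can be sketched as follows. The two-ray expression is the standard far-field ground-reflection approximation, not necessarily the exact variant used in the paper, and the distances, antenna heights, and measurement values are placeholders.

    ```python
    # Hedged sketch: far-field two-ray path loss and the within +/-10 dB error ratio.
    # All numeric inputs below are illustrative placeholders.
    import numpy as np

    def two_ray_path_loss_db(d_m, h_tx_m, h_rx_m):
        # PL = 40*log10(d) - 20*log10(h_tx * h_rx)  (far-field approximation)
        return 40.0 * np.log10(d_m) - 20.0 * np.log10(h_tx_m * h_rx_m)

    d = np.array([500.0, 1000.0, 2000.0, 5000.0])        # distance, m
    measured_db = np.array([80.0, 99.0, 105.0, 130.0])   # illustrative measurements

    predicted_db = two_ray_path_loss_db(d, h_tx_m=30.0, h_rx_m=1.5)
    error_db = predicted_db - measured_db
    print(f"within +/-10 dB: {np.mean(np.abs(error_db) <= 10.0):.1%}")
    ```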

  2. Earthquake related VLF activity and Electron Precipitation as a Major Agent of the Inner Radiation Belt Losses

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Georgios C.; Sidiropoulos, Nikolaos; Barlas, Georgios

    2015-04-01

    Radiation belt electron precipitation (RBEP) into the topside ionosphere is a phenomenon that has been known for several decades. However, the inner radiation belt source and loss mechanisms, including RBEP, are still not well understood. Here we present the results of a systematic study of RBEP observations, as obtained from the DEMETER satellite and the series of POES satellites, in comparison with variations of seismic activity. We found that a type of RBEP burst lasting for ~1-3 min presents special characteristics in the inner region of the inner radiation belt before large (M >~7, or even M >~5) earthquakes (EQs), for instance: characteristic (a) flux-time profiles, (b) energy spectra, (c) electron flux temporal evolution, (d) spatial distributions, (e) broadband VLF activity some days before an EQ, and (f) cessation a few hours before the EQ occurrence above the epicenter. In this study we present results from both case and statistical studies which provide significant evidence that, among EQs, lightning, and Earth-based transmitters, strong seismic activity during a substorm makes the main contribution to the long-lasting (~1-3 min) RBEP events at middle latitudes.

  3. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  4. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  5. Estimation of furrow irrigation sediment loss using an artificial neural network

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The area irrigated by furrow irrigation in the U.S. has been steadily decreasing but still represents about 20% of the total irrigated area in the U.S. Furrow irrigation sediment loss is a major water quality issue and a method for estimating sediment loss is needed to quantify the environmental imp...

  6. Coseismic Subsidence in the 1700 Great Cascadia Earthquake: Coastal Geological Estimates Versus the Predictions of Elastic Dislocation Models

    NASA Astrophysics Data System (ADS)

    Leonard, L. J.; Hyndman, R. D.; Mazzotti, S.

    2002-12-01

    Coastal estuaries from N. California to central Vancouver Island preserve evidence of the subsidence that has occurred in Holocene megathrust earthquakes at the Cascadia subduction zone (CSZ). Seismic hazard assessments in Cascadia are primarily based on the rupture area of 3-D dislocation models constrained by geodetic data. It is important to test the model by comparing predicted coseismic subsidence with that estimated in coastal marsh studies. Coseismic subsidence causes the burial of soils that are preserved as peat layers in the tidal-marsh stratigraphy. The most recent (1700) event is commonly marked by a peat layer overlain by intertidal mud, often with an intervening sand layer inferred as a tsunami deposit. Estimates of the amount of coseismic subsidence are made using two methods. (1) Contrasts in lithology, macrofossil content, and microfossil assemblages allow elevation changes to be deduced via modern marsh calibrations. (2) Measurements of the subsurface depth of the buried soil, corrected for eustatic sea level rise and interseismic uplift (assessed using a geodetically-constrained elastic dislocation model), provide independent estimates. Further corrections may include postglacial rebound and local tectonics. An elastic dislocation model is used to predict the expected coseismic subsidence, for a magnitude 9 earthquake (assuming 16 m uniform rupture), at the locations of geological subsidence estimates for the 1700 event. From preliminary comparisons, the correlation is remarkably good, corroborating the dislocation model rupture. The model produces a similar N-S trend of coastal subsidence, and for parts of the margin, e.g. N. Oregon and S. Washington, subsidence of similar magnitude (+/- ~ 0.25 m). A significant discrepancy (up to ~ 1.0 m) exists elsewhere, e.g. N. California, S. Oregon, and central Vancouver Island. The discrepancy may arise from measurement uncertainty, uncertainty in the elastic model, the assumption of elastic rather than

  7. Combining MODIS and Landsat imagery to estimate and map boreal forest cover loss

    USGS Publications Warehouse

    Potapov, P.; Hansen, M.C.; Stehman, S.V.; Loveland, T.R.; Pittman, K.

    2008-01-01

    Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to its unique role in global timber stock, carbon sequestration and deposition, and high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% to 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss. © 2008 Elsevier Inc.
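
    A minimal sketch of the stratified estimator underlying the biome-wide loss percentage and its standard error is given below. The stratum weights, sample sizes, and per-block loss fractions are placeholders, not the study's MODIS/Landsat data, and the finite-population correction is omitted.

    ```python
    # Hedged sketch: stratified estimate of a loss proportion and its standard error.
    # All stratum weights and sampled block values are illustrative placeholders.
    import numpy as np

    strata = {
        # name: (stratum weight = share of biome area, sampled per-block loss fractions)
        "high":   (0.10, np.array([0.09, 0.12, 0.07, 0.11])),
        "medium": (0.25, np.array([0.03, 0.02, 0.04, 0.01])),
        "low":    (0.65, np.array([0.002, 0.0, 0.001, 0.003])),
    }

    est = sum(w * y.mean() for w, y in strata.values())
    var = sum(w ** 2 * y.var(ddof=1) / len(y) for w, y in strata.values())
    print(f"estimated loss proportion: {est:.4f} (SE {np.sqrt(var):.4f})")
    ```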

  8. Impact-based earthquake alerts with the U.S. Geological Survey's PAGER system: what's next?

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Garcia, D.; So, E.; Hearne, M.

    2012-01-01

    In September 2010, the USGS began publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses with its Prompt Assessment of Global Earthquakes for Response (PAGER) system. These estimates significantly enhanced the utility of the USGS PAGER system which had been, since 2006, providing estimated population exposures to specific shaking intensities. Quantifying earthquake impacts and communicating estimated losses (and their uncertainties) to the public, the media, humanitarian, and response communities required a new protocol—necessitating the development of an Earthquake Impact Scale—described herein and now deployed with the PAGER system. After two years of PAGER-based impact alerting, we now review operations, hazard calculations, loss models, alerting protocols, and our success rate for recent (2010-2011) events. This review prompts analyses of the strengths, limitations, opportunities, and pressures, allowing clearer definition of future research and development priorities for the PAGER system.

  9. Fuzzy Discrimination Analysis Method for Earthquake Energy K-Class Estimation with respect to Local Magnitude Scale

    NASA Astrophysics Data System (ADS)

    Mumladze, T.; Gachechiladze, J.

    2014-12-01

    The purpose of the present study is to establish a relation between earthquake energy K-class (the relative energy characteristic), defined as the logarithm of the seismic wave energy E in joules obtained from analog station data, and the local (Richter) magnitude ML obtained from digital seismograms. Because these data contain uncertainties, the effective tools of fuzzy discrimination analysis are suggested for subjective estimates. Application of fuzzy analysis methods is an innovative approach to solving the complicated problem of constructing a uniform energy scale through the whole earthquake catalogue; it also avoids many of the data collection problems associated with probabilistic approaches, and it can handle incomplete information, partial inconsistency and fuzzy descriptions of data in a natural way. Another important task is to obtain a frequency-magnitude relation based on the K parameter, calculate the Gutenberg-Richter parameters (a, b) and examine seismic activity in Georgia. Earthquake data files are used for the periods 1985-1990 and 2004-2009, for the area φ = 41°-43.5°, λ = 41°-47°.
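
    As one concrete ingredient of the frequency-magnitude analysis mentioned above, the sketch below shows the standard maximum-likelihood b-value estimator (Aki, 1965, with the usual binning correction). The magnitude list is illustrative and is not the Georgian catalogue used in the study.

    ```python
    # Hedged sketch: maximum-likelihood Gutenberg-Richter b-value (Aki/Utsu form).
    # The sample magnitudes are illustrative placeholders.
    import math

    def b_value(mags, m_min, dm=0.1):
        mags = [m for m in mags if m >= m_min]
        mean_m = sum(mags) / len(mags)
        return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

    sample_mags = [4.1, 4.3, 4.0, 5.2, 4.6, 4.0, 4.8, 4.2, 4.4, 4.1]
    print(f"b ~ {b_value(sample_mags, m_min=4.0):.2f}")
    ```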

  10. Coseismic Fault Slip of the September 16, 2015 Mw 8.3 Illapel, Chile Earthquake Estimated from InSAR Data

    NASA Astrophysics Data System (ADS)

    Zhang, Yingfeng; Zhang, Guohong; Hetland, Eric A.; Shan, Xinjian; Wen, Shaoyan; Zuo, Ronghu

    2016-04-01

    The complete surface deformation of the 2015 Mw 8.3 Illapel, Chile earthquake is obtained using SAR interferograms from descending and ascending Sentinel-1 orbits. We find that the Illapel event is predominantly thrust, as expected for an earthquake on the interface between the Nazca and South America plates, with a slight right-lateral strike-slip component. The maximum thrust slip and right-lateral strike slip reach 8.3 and 1.5 m, respectively, both located at a depth of 8 km, northwest of the epicenter. The total estimated seismic moment is 3.28 × 10²¹ N m, corresponding to a moment magnitude Mw 8.27. In our model, the rupture breaks all the way up to the seafloor at the trench, which is consistent with the destructive tsunami following the earthquake. We also find the slip distribution correlates closely with previous estimates of interseismic locking distribution. We argue that positive Coulomb stress changes caused by the Illapel earthquake may favor earthquakes on the extensional faults in this area. Finally, based on our inferred coseismic slip model and Coulomb stress calculation, we envision that the subduction interface that last slipped in the 1922 Mw 8.4 Vallenar earthquake might be near the end of its seismic quiescence, and the earthquake potential in this region warrants urgent attention.
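
    The moment-to-magnitude step quoted above (3.28 × 10²¹ N m giving Mw ≈ 8.27) can be reproduced with the standard Hanks-Kanamori relation; the constant 9.1 used below is a common SI-unit convention and may differ slightly from the one the authors used.

    ```python
    # Hedged sketch: Mw = (2/3) * (log10(M0) - 9.1), with M0 in newton meters.
    import math

    def moment_magnitude(m0_nm: float) -> float:
        return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

    print(f"Mw = {moment_magnitude(3.28e21):.2f}")  # ~8.28, close to the quoted 8.27
    ```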

  11. GPS estimates of microplate motions, northern Caribbean: evidence for a Hispaniola microplate and implications for earthquake hazard

    NASA Astrophysics Data System (ADS)

    Benford, B.; DeMets, C.; Calais, E.

    2012-09-01

    We use elastic block modelling of 126 GPS site velocities from Jamaica, Hispaniola, Puerto Rico and other islands in the northern Caribbean to test for the existence of a Hispaniola microplate and estimate angular velocities for the Gônave, Hispaniola, Puerto Rico-Virgin Islands and two smaller microplates relative to each other and the Caribbean and North America plates. A model in which the Gônave microplate spans the whole plate boundary between the Cayman spreading centre and Mona Passage west of Puerto Rico is rejected at a high confidence level. The data instead require an independently moving Hispaniola microplate between the Mona Passage and a likely diffuse boundary within or offshore from western Hispaniola. Our updated angular velocities predict 6.8 ± 1.0 mm yr⁻¹ of left-lateral slip along the seismically hazardous Enriquillo-Plantain Garden fault zone of southwest Hispaniola, 9.8 ± 2.0 mm yr⁻¹ of slip along the Septentrional fault of northern Hispaniola and ~14-15 mm yr⁻¹ of left-lateral slip along the Oriente fault south of Cuba. They also predict 5.7 ± 1 mm yr⁻¹ of fault-normal motion in the vicinity of the Enriquillo-Plantain Garden fault zone, faster than previously estimated and possibly accommodated by folds and faults in the Enriquillo-Plantain Garden fault zone borderlands. Our new and a previous estimate of Gônave-Caribbean plate motion suggest that enough elastic strain accumulates to generate one to two Mw ~7 earthquakes per century along the Enriquillo-Plantain Garden and nearby faults of southwest Hispaniola. That the 2010 M= 7.0 Haiti earthquake ended a 240-yr-long period of seismic quiescence in this region raises concerns that it could mark the onset of a new earthquake sequence that will relieve elastic strain that has accumulated since the late 18th century.

  12. Tag loss can bias Jolly-Seber capture-recapture estimates

    USGS Publications Warehouse

    McDonald, T.L.; Amstrup, Steven C.; Manly, B.F.J.

    2003-01-01

    We identified cases where the Jolly-Seber estimator of population size is biased under tag loss and tag-induced mortality by examining the mathematical arguments and performing computer simulations. We found that, except under certain tag-loss models and high sample sizes, the population size estimators (uncorrected for tag loss) are severely biased high when tag loss or tag-induced mortality occurs. Our findings verify that this misconception about effects of tag loss and tag-induced mortality could have serious consequences for field biologists interested in population size. Reiterating common sense, we encourage those engaged in capture-recapture studies to be careful and humane when handling animals during tagging, to use tags with high retention rates, to double-tag animals when possible, and to strive for the highest capture probabilities possible.
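
    The mechanism described above can be illustrated with a much simpler two-sample (Lincoln-Petersen) simulation: tag loss reduces the number of recognized recaptures, which inflates the abundance estimate. This is a deliberate simplification of the Jolly-Seber setting, with made-up parameter values.

    ```python
    # Hedged sketch: Lincoln-Petersen estimate biased high when tags are lost.
    # Population size, sample sizes, and retention rate are illustrative.
    import random

    random.seed(0)
    N_true = 1000
    n1, n2 = 200, 200
    tag_retention = 0.8   # probability a tag is still readable at recapture

    marked = set(random.sample(range(N_true), n1))
    second = random.sample(range(N_true), n2)
    recaptures = sum(1 for a in second
                     if a in marked and random.random() < tag_retention)

    n_hat = n1 * n2 / recaptures
    print(f"true N = {N_true}, estimated N ~ {n_hat:.0f} (biased high under tag loss)")
    ```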

  13. Slip distribution of the 2014 Mw = 8.1 Pisagua, northern Chile, earthquake sequence estimated from coseismic fore-arc surface cracks

    NASA Astrophysics Data System (ADS)

    Loveless, John P.; Scott, Chelsea P.; Allmendinger, Richard W.; González, Gabriel

    2016-10-01

    The 2014 Mw = 8.1 Iquique (Pisagua), Chile, earthquake sequence ruptured a segment of the Nazca-South America subduction zone that last hosted a great earthquake in 1877. The sequence opened >3700 surface cracks in the fore arc of decameter-scale length and millimeter-to centimeter-scale aperture. We use the strikes of measured cracks, inferred to be perpendicular to coseismically applied tension, to estimate the slip distribution of the main shock and largest aftershock. The slip estimates are compatible with those based on seismic, geodetic, and tsunami data, indicating that geologic observations can also place quantitative constraints on rupture properties. The earthquake sequence ruptured between two asperities inferred from a regional-scale distribution of surface cracks, interpreted to represent a modal or most common rupture scenario for the northern Chile subduction zone. We suggest that past events, including the 1877 earthquake, broke the 2014 Pisagua source area together with adjacent sections in a throughgoing rupture.

  14. Napa Earthquake impact on water systems

    NASA Astrophysics Data System (ADS)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California, on August 24 at 3 a.m. local time, with a magnitude of 6.0. The earthquake was the largest in the SF Bay Area since the 1989 Loma Prieta earthquake. Economic loss topped $1 billion. Wine makers were cleaning up and estimating the damage to tourism; around 15,000 cases of cabernet poured into the garden at the Hess Collection. Earthquakes can potentially raise water pollution risks and cause a water crisis. California has suffered water shortages in recent years, and this work could help show how to prevent groundwater and surface-water pollution from earthquakes. This research gives a clear view of the drinking water system in California and of pollution in river systems, as well as an estimation of earthquake impacts on water supply. The Sacramento-San Joaquin River delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water export, and saltwater intrusion has reduced freshwater outflows. Strong shaking from a nearby earthquake can cause liquefaction of saturated, loose, sandy soils and could potentially damage major delta levee systems near Napa. The Napa earthquake is a wake-up call for Southern California: a similar event could potentially damage the freshwater supply system.

  15. Reassessment of liquefaction potential and estimation of earthquake- induced settlements at Paducah Gaseous Diffusion Plant, Paducah, Kentucky. Final report

    SciTech Connect

    Sykora, D.W.; Yule, D.E.

    1996-04-01

    This report documents a reassessment of liquefaction potential and estimation of earthquake-induced settlements for the U.S. Department of Energy (DOE), Paducah Gaseous Diffusion Plant (PGDP), located southwest of Paducah, KY. The U.S. Army Engineer Waterways Experiment Station (WES) was authorized to conduct this study from FY91 to FY94 by the DOE, Oak Ridge Operations (ORO), Oak Ridge, TN, through Inter- Agency Agreement (IAG) No. DE-AI05-91OR21971. The study was conducted under the Gaseous Diffusion Plant Safety Analysis Report (GDP SAR) Program.

  16. Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions

    NASA Astrophysics Data System (ADS)

    White, Randall; McCausland, Wendy

    2016-01-01

    We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity including: swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and large non-double couple component to focal mechanisms. Most importantly we show that the intruded magma volume can be simply estimated from the cumulative seismic moment of the VT seismicity from: Log10 V = 0.77 Log ΣMoment - 5.32, with volume, V, in cubic meters and seismic moment in Newton meters. Because the cumulative seismic moment can be approximated from the size of just the few largest events, and is quite insensitive to precise locations
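
    The volume-moment relation quoted above can be applied directly; the sketch below simply evaluates log10(V) = 0.77 log10(ΣMoment) - 5.32. The example cumulative moment is an illustrative assumption, not a value from the paper.

    ```python
    # Hedged sketch of the empirical relation quoted above:
    # log10(V) = 0.77 * log10(cumulative seismic moment) - 5.32,
    # V in cubic meters, moment in newton meters. Example moment is illustrative.
    import math

    def intruded_volume_m3(cumulative_moment_nm: float) -> float:
        return 10 ** (0.77 * math.log10(cumulative_moment_nm) - 5.32)

    print(f"volume for 1e15 N m of VT moment: {intruded_volume_m3(1e15):.2e} m^3")
    ```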

  17. Estimating Phosphorus Loss at the Whole-Farm Scale with User-Friendly Models

    NASA Astrophysics Data System (ADS)

    Vadas, P.; Powell, M.; Brink, G.; Busch, D.; Good, L.

    2014-12-01

    Phosphorus (P) loss from agricultural fields and delivery to surface waters persists as a water quality impairment issue. For dairy farms, P can be lost from cropland, pastures, barnyards, and open-air cattle lots; and all these sources must be evaluated to determine which ones are a priority for P loss remediation. We used interview surveys to document land use, cattle herd characteristics, and manure management for four grazing-based dairy farms in Wisconsin, USA. We then used the APLE and Snap-Plus models to estimate annual P loss from all areas on these farms and determine their relative contribution to whole-farm P loss. At the whole-farm level, average annual P loss (kg ha-1) from grazing-based dairy farms was low (0.6 to 1.8 kg ha-1), generally because a significant portion of land was in permanently vegetated pastures or hay and had low erosion. However, there were areas on the farms that represented sources of significant P loss. For cropland, the greatest P loss was from areas with exposed soil, typically for corn production, and especially on steeper sloping land. The farm areas with the greatest P loss had concentrated animal housing, including barnyards, and over-wintering and young-stock lots. These areas can represent from about 5% to almost 30% of total farm P loss, depending on lot management and P loss from other land uses. Our project builds on research to show that producer surveys can provide reliable management information to assess whole-farm P loss. It also shows that we can use models like RUSLE2, Snap-Plus, and APLE to rapidly, reliably, and quantitatively estimate P loss in runoff from all areas on a dairy farm and identify areas in greatest need of alternative management to reduce P loss.
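
    The whole-farm figure reported above is essentially an area-weighted aggregation of per-source losses. A minimal bookkeeping sketch in Python; the source areas and per-hectare loss rates are invented for illustration and would, in practice, come from models such as APLE and Snap-Plus:

```python
# Hypothetical per-source annual P losses (kg/ha) and areas (ha).
sources = {
    "pasture":       {"area_ha": 60.0, "p_loss_kg_per_ha": 0.4},
    "corn_cropland": {"area_ha": 25.0, "p_loss_kg_per_ha": 2.5},
    "barnyard":      {"area_ha": 1.0,  "p_loss_kg_per_ha": 30.0},
    "winter_lot":    {"area_ha": 2.0,  "p_loss_kg_per_ha": 15.0},
}

total_loss = sum(s["area_ha"] * s["p_loss_kg_per_ha"] for s in sources.values())
total_area = sum(s["area_ha"] for s in sources.values())

print(f"Whole-farm P loss: {total_loss:.1f} kg/yr "
      f"({total_loss / total_area:.2f} kg/ha/yr)")
for name, s in sources.items():
    share = 100 * s["area_ha"] * s["p_loss_kg_per_ha"] / total_loss
    print(f"  {name:14s} {share:5.1f}% of farm total")
```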

  18. Estimation of ground motion for Bhuj (26 January 2001; Mw 7.6) and for future earthquakes in India

    USGS Publications Warehouse

    Singh, S.K.; Bansal, B.K.; Bhattacharya, S.N.; Pacheco, J.F.; Dattatrayam, R.S.; Ordaz, M.; Suresh, G.; Hough, S.E.

    2003-01-01

    Only five moderate and large earthquakes (Mw ≥ 5.7) in India - three in the Indian shield region and two in the Himalayan arc region - have given rise to multiple strong ground-motion recordings. Near-source data are available for only two of these events. The Bhuj earthquake (Mw 7.6), which occurred in the shield region, gave rise to useful recordings at distances exceeding 550 km. Because of the scarcity of the data, we use the stochastic method to estimate ground motions. We assume that (1) S waves dominate at R < 100 km and Lg waves at R ≥ 100 km, (2) Q = 508f^0.48 is valid for the Indian shield as well as the Himalayan arc region, (3) the effective duration is given by fc^-1 + 0.05R, where fc is the corner frequency and R is the hypocentral distance in kilometers, and (4) the acceleration spectra are sharply cut off beyond 35 Hz. We use two finite-source stochastic models. One is an approximate model that reduces to the ω²-source model at distances greater than about twice the source dimension. This model has the advantage that the ground motion is controlled by the familiar stress parameter, Δσ. In the other finite-source model, which is more reliable for near-source ground-motion estimation, the high-frequency radiation is controlled by the strength factor, sfact, a quantity that is physically related to the maximum slip rate on the fault. We estimate the Δσ needed to fit the observed Amax and Vmax data of each earthquake (which are mostly in the far field). The corresponding sfact is obtained by requiring that the predicted curves from the two models match each other in the far field up to a distance of about 500 km. The results show: (1) The Δσ that explains Amax data for shield events may be a function of depth, increasing from ~50 bars at 10 km to ~400 bars at 36 km. The corresponding sfact values range from 1.0 to 2.0. The Δσ values for the two Himalayan arc events are 75 and 150 bars (sfact = 1.0 and 1.4). (2) The Δσ required to explain Vmax data
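
    The attenuation and duration assumptions listed above are simple to evaluate. A minimal sketch in Python of Q(f) = 508 f^0.48 and the effective duration 1/fc + 0.05R; the frequencies, corner frequency, and distances chosen below are arbitrary examples:

```python
def quality_factor(f_hz: float) -> float:
    """Frequency-dependent Q assumed for the Indian shield and Himalayan arc."""
    return 508.0 * f_hz ** 0.48

def effective_duration_s(fc_hz: float, r_km: float) -> float:
    """Effective duration = 1/fc + 0.05 * R (R = hypocentral distance, km)."""
    return 1.0 / fc_hz + 0.05 * r_km

for f in (0.5, 1.0, 5.0):
    print(f"Q({f} Hz) = {quality_factor(f):.0f}")
for r in (50.0, 200.0, 550.0):
    print(f"duration at R = {r:.0f} km, fc = 0.1 Hz: {effective_duration_s(0.1, r):.1f} s")
```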

  19. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated using field methods and a database of coseismic effects created as a GIS MapInfo application, with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effects from the epicenter and from the causative fault (Lunina et al., 2014). The estimated limit distance to the fault for the largest event (Ms = 8.1) is 130 km, about 3.5 times shorter than the limit distance to the epicenter, which is 450 km. Liquefaction also becomes less frequent with increasing distance from the fault: 93% of cases occur within 40 km of the causative fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows that the distances are within 8 km, and 69% of all cases are within 1 km. As a result, predictive models have been created for locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested that relate the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height, and intensity index of clastic dikes) to Ms and to local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The results provide a basis for modeling the distribution of this geohazard for prediction purposes and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author is grateful to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing laboratory facilities for this research, and to the Russian Science Foundation for financial support (Grant 14-17-00007).
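
    Regional magnitude-distance relationships of the kind described above are commonly fitted as log-linear regressions of maximum liquefaction distance on Ms. A minimal sketch of such a fit with numpy; the data pairs are invented for illustration and are not the catalogue used in this study:

```python
import numpy as np

# Hypothetical (Ms, maximum liquefaction distance to the fault in km) pairs.
ms   = np.array([5.5, 6.0, 6.5, 7.0, 7.5, 8.1])
dmax = np.array([3.0, 8.0, 15.0, 35.0, 70.0, 130.0])

# Fit log10(Dmax) = a + b * Ms by least squares.
b, a = np.polyfit(ms, np.log10(dmax), 1)
print(f"log10(Dmax) = {a:.2f} + {b:.2f} * Ms")
print(f"Predicted Dmax for Ms 7.0: {10 ** (a + b * 7.0):.0f} km")
```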

  20. Estimating high frequency energy radiation of large earthquakes by image deconvolution back-projection

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim

    2016-09-01

    High frequency energy radiation of large earthquakes is a key to evaluating shaking damage and is an important source characteristic for understanding rupture dynamics. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP) to retrieve high frequency energy radiation of seismic sources by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response for a point source. The array response that spreads energy both in space and time is evaluated by using data of a smaller reference earthquake that can be assumed to be a point source. The synthetic test of the method shows that the spatial and temporal resolution of the source is much better than that for the conventional back-projection method. We applied this new method to the 2001 Mw 7.8 Kunlun earthquake using data recorded by Hi-net in Japan. The new method resolves a sharp image of the high frequency energy radiation with a significant portion of supershear rupture.
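
    The core idea of IDBP, deconvolving the observed back-projection image by the array response of a point source through a linear inversion, can be illustrated in one dimension. A toy sketch with a non-negative least-squares inversion; the response and energy distribution are synthetic, and this is not the authors' implementation:

```python
import numpy as np
from scipy.optimize import nnls

# Toy array response of a point source (energy smeared over neighbouring bins).
response = np.array([0.1, 0.5, 1.0, 0.5, 0.1])
response /= response.sum()

# True (unknown) radiated-energy distribution along the rupture.
true_energy = np.zeros(40)
true_energy[[8, 20, 27]] = [1.0, 2.0, 0.8]

# "Observed" back-projection image = convolution of the truth with the response.
observed = np.convolve(true_energy, response, mode="same")

# Build the convolution operator and invert with a non-negativity constraint.
n = true_energy.size
A = np.column_stack([
    np.convolve(np.eye(n)[:, j], response, mode="same") for j in range(n)
])
recovered, _ = nnls(A, observed)
print("peak locations recovered:", np.nonzero(recovered > 0.1)[0])
```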

  1. Postpartum blood loss: visual estimation versus objective quantification with a novel birthing drape

    PubMed Central

    Lertbunnaphong, Tripop; Lapthanapat, Numporn; Leetheeragul, Jarunee; Hakularb, Pussara; Ownon, Amporn

    2016-01-01

    INTRODUCTION Immediate postpartum haemorrhage (PPH) is the most common cause of maternal mortality worldwide. Most recommendations focus on its prevention and management. Visual estimation of blood loss is widely used for the early detection of PPH, but the most appropriate method remains unclear. This study aimed to compare the efficacy of visual estimation and objective measurement using a sterile under-buttock drape, to determine the volume of postpartum blood loss. METHODS This study evaluated patients aged ≥ 18 years with low-risk term pregnancies, who delivered vaginally. Immediately after delivery, a birth attendant inserted the drape under the patient’s buttocks. Postpartum blood loss was measured by visual estimation and then compared with objective measurement using the drape. All participants received standard intra- and postpartum care. RESULTS In total, 286 patients with term pregnancies were enrolled. There was a significant difference in postpartum blood loss between visual estimation and objective measurement using the under-buttock drape (178.6 ± 133.1 mL vs. 259.0 ± 174.9 mL; p < 0.0001). Regarding accuracy at 100 mL discrete categories of postpartum blood loss, visual estimation was found to be inaccurate, resulting in underestimation, with low correspondence (27.6%) and poor agreement (Cohen’s kappa coefficient 0.07; p < 0.05), compared with objective measurement using the drape. Two-thirds of cases of immediate PPH (65.4%) were misdiagnosed using visual estimation. CONCLUSION Visual estimation is not optimal for measurement of postpartum blood loss in PPH. This method should be withdrawn from standard obstetric practice and replaced with objective measurement using the sterile under-buttock drape. PMID:27353510

  2. The use of streambed temperatures to estimate transmission losses on an experimental channel.

    SciTech Connect

    Ramon C. Naranjo; Michael H. Young; Richard Niswonger; Julianne J. Miller; Richard H. French

    2001-10-18

    Quantifying channel transmission losses in arid environments is important for a variety of reasons, from engineering design of flood control structures to evaluating recharge. To quantify the losses in an alluvial channel, an experiment was performed on a 2-km reach of an alluvial fan located on the Nevada Test Site. The channel was subjected to three separate flow events. Transmission losses were estimated using standard discharge monitoring and a subsurface temperature modeling approach. Four stations were equipped to continuously monitor stage, temperature, and water content. Streambed temperatures measured at 0, 30, 50, and 100 cm depths were used to calibrate VS2DH, a two-dimensional, variably saturated flow model. Average losses based on the difference in flow between stations indicate that 21 percent, 27 percent, and 53 percent of the flow was lost downgradient of the source. Results from the temperature monitoring identified locations with large thermal gradients, suggesting conduction-dominated heat transfer in streambed sediments where caliche-cemented surfaces were present. Transmission losses at the lowermost segment corresponded to the smallest thermal gradient, suggesting advection-dominated heat transfer. Losses predicted by VS2DH are within an order of magnitude of the losses estimated from discharge measurements. The differences in losses result from the spatial extent to which the modeling results are applied and from lateral subsurface flow.

  3. Estimation of Damaged Areas due to the 2010 Chile Earthquake and Tsunami Using SAR Imagery of Alos/palsar

    NASA Astrophysics Data System (ADS)

    Made, Pertiwi Jaya Ni; Miura, Fusanori; Besse Rimba, A.

    2016-06-01

    Large-scale earthquakes and tsunamis affect thousands of people and cause serious damage worldwide every year. Quick observation of disaster damage is extremely important for planning effective rescue operations. In the past, acquiring damage information was limited to field surveys or aerial photographs. In the last decade, space-borne images have been used in many disaster studies, such as tsunami damage detection. In this study, SAR data from ALOS/PALSAR satellite images were used to estimate tsunami damage in the form of inundation areas in Talcahuano, the area near the epicentre of the 2010 Chile earthquake. The image processing consisted of three stages, i.e. pre-processing, analysis processing, and post-processing, and was conducted using multi-temporal images acquired before and after the disaster. In the analysis processing, inundation areas were extracted through masking. This consisted of water masking using a high-resolution optical image from ALOS/AVNIR-2 and elevation masking based on the inundation height using a DEM from ASTER-GDEM. The resulting inundation area was 8.77 km2, a good result that corresponds well to the inundation map of Talcahuano. Future studies in other areas are needed to strengthen the estimation method.
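
    The masking stage described above amounts to combining a change-detection threshold with a permanent-water mask and an elevation (inundation-height) mask. A minimal sketch with numpy; the array names, thresholds, and pixel size are assumptions for illustration only:

```python
import numpy as np

# Hypothetical rasters co-registered on the same grid.
backscatter_change = np.random.rand(100, 100)         # change between pre/post SAR images
permanent_water    = np.random.rand(100, 100) < 0.1   # from an optical (AVNIR-2-like) image
elevation_m        = np.random.rand(100, 100) * 30    # from a DEM (ASTER-GDEM-like)

inundation_height_m = 10.0   # assumed local run-up limit
change_threshold    = 0.7    # assumed backscatter-change threshold

# Candidate inundated pixels: strong change, not permanent water, below run-up height.
inundated = (backscatter_change > change_threshold) \
            & ~permanent_water \
            & (elevation_m < inundation_height_m)

pixel_area_km2 = 0.0001  # e.g. 10 m x 10 m pixels
print(f"Estimated inundated area: {inundated.sum() * pixel_area_km2:.2f} km^2")
```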

  4. Real time earthquake information and tsunami estimation system for Indonesia, Philippines and Central-South American regions

    NASA Astrophysics Data System (ADS)

    Pulido Hernandez, N. E.; Inazu, D.; Saito, T.; Senda, J.; Fukuyama, E.; Kumagai, H.

    2015-12-01

    Southeast Asia and the Central-South American regions are among the most seismically active regions in the world. To contribute to the understanding of earthquake source processes, the National Research Institute for Earth Science and Disaster Prevention (NIED) has maintained the International Seismic Network (ISN) since 2007. Continuous seismic waveforms from 294 broadband seismic stations in Indonesia, the Philippines, and the Central-South America regions are received in real time at NIED and used for automatic location of seismic events. Using these data we perform automatic and manual estimation of moment tensors of seismic events (Mw > 4.5) using the SWIFT program developed at NIED. We simulate the propagation of local tsunamis in these regions using a tsunami simulation code and visualization system developed at NIED, combined with CMT parameters estimated by SWIFT. The goals of the system are to provide rapid and reliable earthquake and tsunami information, in particular for large seismic events, and to produce a database of earthquake source parameters and tsunami simulations for research. The system uses the hypocenter location and magnitude of earthquakes automatically determined at NIED by the SeisComP3 system (GFZ) from the continuous seismic waveforms in the region, performs the automated calculation of moment tensors by SWIFT, and then carries out the automatic simulation and visualization of tsunami. The system generates maps of maximum tsunami heights within the target regions and along the coasts and displays them with the fault model parameters used for the tsunami simulations. Tsunami calculations are performed for all events with available automatic SWIFT/CMT solutions. Tsunami calculations are re-computed using SWIFT manual solutions for events with Mw > 5.5 and centroid depths shallower than 100 km. Revised maximum tsunami heights as well as animations of tsunami propagation are also calculated and displayed for the two double couple solutions by SWIFT

  5. Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model

    USGS Publications Warehouse

    Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David

    2012-01-01

    The Atlantic horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast that has, in recent years, come under new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters, such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting or complete loss of tags could be occurring. As is apparent from the assumptions of a Jolly-Seber model, loss of tags can bias estimates and lead to underestimation of survival. Given that uncertainty, as a first step toward an unbiased estimate of adult survival, we estimated the rate of tag loss. Using data from a double-tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model that allows the loss of each tag to be estimated separately and simultaneously.

  6. Simultaneous estimation of b-values and detection rates of earthquakes for the application to aftershock probability forecasting

    NASA Astrophysics Data System (ADS)

    Katsura, K.; Ogata, Y.

    2004-12-01

    Reasenberg and Jones [Science, 1989, 1994] proposed aftershock probability forecasting based on the joint distribution [Utsu, J. Fac. Sci. Hokkaido Univ., 1970] of the modified Omori formula for aftershock decay and the Gutenberg-Richter law of magnitude frequency, where the respective parameters are estimated by the maximum likelihood method [Ogata, J. Phys. Earth, 1983; Utsu, Geophys. Bull. Hokkaido Univ., 1965; Aki, Bull. Earthq. Res. Inst., 1965]. The public forecast has been implemented by the responsible agencies in California and Japan. However, a considerable difficulty in the above procedure is that, due to the contamination of arriving seismic waves, the detection rate of aftershocks is extremely low during the period immediately after the main shock, say, during the first day, when the forecast is most critical for the public in the affected area. Therefore, for forecasting a probability during such a period, they adopt a generic model with a set of standard parameter values for California or Japan. For an effective and realistic estimation, I propose to utilize the statistical model introduced by Ogata and Katsura [Geophys. J. Int., 1993] for the simultaneous estimation of the b-value of the Gutenberg-Richter law together with the detection rate (probability) of earthquakes in each magnitude band from the data of all detected events, where both parameters are allowed to change in time. Thus, by using all detected aftershocks from the beginning of the period, we can estimate the underlying modified Omori rate of both detected and undetected events and their b-value changes, taking the time-varying missing rates of events into account. A similar computation is applied to the ETAS model for complex aftershock activity or regional seismicity where substantial missing events are expected immediately after a large aftershock or another strong earthquake in the vicinity. Demonstrations of the present procedure will be shown for recent examples
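
    The Ogata-Katsura idea can be sketched as a maximum-likelihood fit in which detected magnitudes follow a Gutenberg-Richter exponential thinned by a cumulative-normal detection probability, so that p(M) ∝ exp(-βM) Φ((M-μ)/σ). A minimal, time-independent sketch with scipy on synthetic data; the full method lets β, μ, and σ vary in time, which is not attempted here:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic catalogue: G-R magnitudes thinned by a cumulative-normal detection probability.
beta_true, mu_true, sigma_true = np.log(10) * 1.0, 2.0, 0.3   # b = 1.0
m_all = rng.exponential(1.0 / beta_true, size=200_000)         # magnitudes above 0
detected = m_all[rng.random(m_all.size) < norm.cdf(m_all, mu_true, sigma_true)]

def negloglik(params, m):
    """Negative log-likelihood of detected magnitudes for the density
    p(M) = beta * exp(beta*mu - beta^2*sigma^2/2) * exp(-beta*M) * Phi((M-mu)/sigma)."""
    beta, mu, sigma = params
    if beta <= 0 or sigma <= 0:
        return np.inf
    logp = (np.log(beta) + beta * mu - 0.5 * beta**2 * sigma**2
            - beta * m + norm.logcdf(m, mu, sigma))
    return -logp.sum()

res = minimize(negloglik, x0=[2.0, 1.5, 0.5], args=(detected,), method="Nelder-Mead")
beta_hat, mu_hat, sigma_hat = res.x
print(f"b = {beta_hat / np.log(10):.2f}, mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
```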

  7. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimate the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.

  8. Method for estimating spatially variable seepage loss and hydraulic conductivity in intermittent and ephemeral streams

    USGS Publications Warehouse

    Niswonger, R.G.; Prudic, D.E.; Fogg, G.E.; Stonestrom, D.A.; Buckland, E.M.

    2008-01-01

    A method is presented for estimating seepage loss and streambed hydraulic conductivity along intermittent and ephemeral streams using streamflow front velocities in initially dry channels. The method uses the kinematic wave equation for routing streamflow in channels coupled to Philip's equation for infiltration. The coupled model considers variations in seepage loss both across and along the channel. Water redistribution in the unsaturated zone is also represented in the model. Sensitivity of the streamflow front velocity to parameters used for calculating seepage loss and for routing streamflow shows that the streambed hydraulic conductivity has the greatest sensitivity for moderate to large seepage loss rates. Channel roughness, geometry, and slope are most important for low seepage loss rates; however, streambed hydraulic conductivity is still important for values greater than 0.008 m/d. Two example applications are presented to demonstrate the utility of the method. Copyright 2008 by the American Geophysical Union.
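
    Philip's two-term infiltration equation, which the method couples to kinematic-wave routing, is straightforward to evaluate on its own. A minimal sketch in Python; the sorptivity and hydraulic conductivity values are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def philip_cumulative_infiltration_m(t_hours, sorptivity_m_per_sqrt_hr, k_sat_m_per_hr):
    """Philip's two-term equation: I(t) = S * sqrt(t) + K * t."""
    t = np.asarray(t_hours, dtype=float)
    return sorptivity_m_per_sqrt_hr * np.sqrt(t) + k_sat_m_per_hr * t

# Illustrative parameters for a sandy streambed (assumed values).
S, K = 0.02, 0.01            # m/hr^0.5 and m/hr
t = np.array([0.5, 1.0, 2.0, 6.0])
depth = philip_cumulative_infiltration_m(t, S, K)
for ti, di in zip(t, depth):
    print(f"t = {ti:4.1f} hr  cumulative seepage depth = {di * 1000:5.1f} mm")
```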

  9. Estimation of insurance-related losses resulting from coastal flooding in France

    NASA Astrophysics Data System (ADS)

    Naulin, J. P.; Moncoulon, D.; Le Roy, S.; Pedreros, R.; Idier, D.; Oliveros, C.

    2016-01-01

    A model has been developed in order to estimate insurance-related losses caused by coastal flooding in France. The deterministic part of the model aims at identifying the potentially flood-impacted sectors and the subsequent insured losses a few days after the occurrence of a storm surge event on any part of the French coast. This deterministic component is a combination of three models: a hazard model, a vulnerability model, and a damage model. The first model uses the PREVIMER system to estimate the water level resulting from the simultaneous occurrence of a high tide and a surge caused by a meteorological event along the coast. A storage-cell flood model propagates these water levels over the land and thus determines the probable inundated areas. The vulnerability model, for its part, is derived from the insurance schedules and claims database, combining information such as risk type, class of business, and insured values. The outcome of the vulnerability and hazard models are then combined with the damage model to estimate the event damage and potential insured losses. This system shows satisfactory results in the estimation of the magnitude of the known losses related to the flood caused by the Xynthia storm. However, it also appears very sensitive to the water height estimated during the flood period, conditioned by the junction between seawater levels and coastal topography, the accuracy for which is still limited by the amount of information in the system.
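
    For each insured risk, the chain of hazard, vulnerability, and damage models reduces to looking up a damage ratio for the modelled water depth and applying it to the insured value. A minimal sketch in Python; the depth-damage curve and the portfolio are invented for illustration:

```python
import numpy as np

# Assumed depth-damage curve: fraction of insured value lost vs. water depth (m).
depth_points  = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
damage_ratios = np.array([0.0, 0.15, 0.35, 0.60, 0.80])

def damage_ratio(depth_m):
    return np.interp(depth_m, depth_points, damage_ratios)

# Hypothetical portfolio: modelled flood depth at each risk and its insured value.
portfolio = [
    {"risk": "dwelling A", "depth_m": 0.3, "insured_value": 250_000},
    {"risk": "dwelling B", "depth_m": 1.2, "insured_value": 180_000},
    {"risk": "shop C",     "depth_m": 0.0, "insured_value": 400_000},
]

total = sum(damage_ratio(p["depth_m"]) * p["insured_value"] for p in portfolio)
print(f"Estimated insured loss for the event: {total:,.0f} EUR")
```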

  10. Estimation of insurance related losses resulting from coastal flooding in France

    NASA Astrophysics Data System (ADS)

    Naulin, J. P.; Moncoulon, D.; Le Roy, S.; Pedreros, R.; Idier, D.; Oliveros, C.

    2015-04-01

    A model has been developed in order to estimate insurance-related losses caused by coastal flooding in France. The deterministic part of the model aims at identifying the potentially flood-impacted sectors and the subsequent insured losses a few days after the occurrence of a storm surge event on any part of the French coast. This deterministic component is a combination of three models: a hazard model, a vulnerability model and a damage model. The first model uses the PREVIMER system to estimate the water level along the coast. A storage-cell flood model propagates these water levels over the land and thus determines the probable inundated areas. The vulnerability model, for its part, is derived from the insurance schedules and claims database, combining information such as risk type, class of business and insured values. The outcomes of the vulnerability and hazard models are then combined with the damage model to estimate the event damage and potential insured losses. This system shows satisfactory results in the estimation of the magnitude of the known losses related to the flood caused by the Xynthia storm. However, it also appears very sensitive to the water height estimated during the flood period, which is conditioned by the junction between sea water levels and coastal topography, for which the accuracy of the system is still limited.

  11. An Atlas of ShakeMaps for Selected Global Earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  12. Uncertainty in sample estimates and the implicit loss function for soil information.

    NASA Astrophysics Data System (ADS)

    Lark, Murray

    2015-04-01

    One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
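
    One way to read the implicit loss function: for a chosen sample size to be rational, the marginal cost of one more sample must be balanced by the value of the error reduction it buys, and the loss coefficient that achieves that balance is the implied loss. A minimal numerical sketch under simple assumed models (linear sampling cost, standard error falling as 1/sqrt(n)); all numbers are illustrative and this is not the author's formulation:

```python
cost_per_sample = 25.0   # assumed cost per soil sample
sigma = 10.0             # assumed population standard deviation of the soil property

def marginal_cost(n):
    # d/dn of C(n) = cost_per_sample * n
    return cost_per_sample

def marginal_error(n):
    # d/dn of se(n) = sigma / sqrt(n)
    return -0.5 * sigma * n ** (-1.5)

for n in (10, 25, 50, 100):
    # Implied loss per unit of standard error that makes choosing n rational.
    implied_loss = -marginal_cost(n) / marginal_error(n)
    print(f"n = {n:3d}: implied loss = {implied_loss:8.0f} per unit standard error")
```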

  13. Estimation of Age Using Alveolar Bone Loss: Forensic and Anthropological Applications.

    PubMed

    Ruquet, Michel; Saliba-Serre, Bérengère; Tardivo, Delphine; Foti, Bruno

    2015-09-01

    The objective of this study was to utilize a new odontological methodological approach, based on radiographic assessment, for age estimation. The study comprised 397 participants aged between 9 and 87 years. A clinical examination and a radiographic assessment of alveolar bone loss were performed. Direct measures of alveolar bone level were recorded using CT scans. A medical examination report was attached to the investigation file. Because of the link between alveolar bone loss and age, a model was proposed to enable simple, reliable, and quick age estimation. This work adds new arguments for age estimation. The study aimed to develop a simple, standardized, and reproducible technique for age estimation of adults in present-day populations in forensic medicine and in ancient populations in funerary anthropology.

  14. A smartphone application for earthquakes that matter!

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many felt earthquakes as possible, regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the US Geological Survey, in which the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking they have just felt. Altogether, we estimate that the number of detected felt earthquakes is around 1,000 per year, compared with the 35,000 earthquakes annually reported by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service @LastQuake. We will present the identification process for the earthquakes that matter, the smartphone application itself (to be released in May), and its future evolutions.

  15. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb > 5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  16. Landslides in Colorado, USA--Impacts and loss estimation for 2010

    USGS Publications Warehouse

    Highland, Lynn M.

    2012-01-01

    The focus of this study is to investigate landslides and consequent losses which affected Colorado in the year 2010. By obtaining landslide reports from a variety of sources, this report will demonstrate the feasibility of creating a profile of landslides and their effects on communities. A short overview of the current status of landslide-loss studies for the United States is introduced, followed by a compilation of landslide occurrence and associated losses and impacts which affected Colorado for the year 2010. Direct costs are summarized in descriptive and tabular form, and where possible, indirect costs are also noted or estimated. Total direct costs of landslides in Colorado for the year 2010 were approximately $9,149,335.00 (2010 U.S. dollars). (Since not all data for damages and costs were obtained, this figure realistically could be considerably higher.) Indirect costs were noted where available but are not totaled due to the fact that most indirect costs were not obtainable for various reasons outlined later in this report. Casualty data are considered as being within the scope of loss evaluation, and are reported in Appendix 1, but are not assigned dollar losses. More details on the source material for loss data not found in the reference section are reported in Appendix 2, and Appendix 3 summarizes notes on landslide-loss investigations in general and lessons learned during the process of loss-data collection.

  17. Identification and Estimation of Postseismic Deformation: Implications for Plate Motion Models, Models of the Earthquake Cycle, and Terrestrial Reference Frame Definition

    NASA Astrophysics Data System (ADS)

    Kedar, S.; Bock, Y.; Moore, A. W.; Argus, D. F.; Fang, P.; Liu, Z.; Haase, J. S.; Su, L.; Owen, S. E.; Goldberg, D.; Squibb, M. B.; Geng, J.

    2015-12-01

    Postseismic deformation indicates a viscoelastic response of the lithosphere. It is critical, then, to identify and estimate the extent of postseismic deformation in both space and time, not only for its inherent information on crustal rheology and earthquake physics, but also because it must be considered in plate motion models that are derived geodetically from "steady-state" interseismic velocities, in models of the earthquake cycle that provide interseismic strain accumulation and earthquake probability forecasts, and in the terrestrial reference frame definition that is the basis for space geodetic positioning. As part of the Solid Earth Science ESDR System (SESES) project under a NASA MEaSUREs grant, JPL and SIO estimate combined daily position time series for over 1800 GNSS stations, both globally and at plate boundaries, independently using the GIPSY and GAMIT software packages, but with a consistent set of a priori epoch-date coordinates and metadata. The longest time series began in 1992, and many of them contain postseismic signals. For example, about 90 of the more than 400 global GNSS stations that define the ITRF have experienced one or more major earthquakes, and 36 have had multiple earthquakes; as expected, most plate boundary stations have as well. We quantify the spatial (distance from rupture) and temporal (decay time) extent of postseismic deformation. We examine parametric models (logarithmic, exponential) and a physical model (rate- and state-dependent friction) to fit the time series. Using a PCA analysis, we determine whether or not a particular earthquake can be uniformly fit by a single underlying postseismic process; otherwise we fit individual stations. Then we investigate whether the estimated time series velocities can be directly used as input to plate motion models, rather than arbitrarily removing the apparent postseismic portion of a time series and/or eliminating stations closest to earthquake epicenters.
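
    The parametric (logarithmic or exponential) decay models mentioned above can be fitted to a post-earthquake position time series by nonlinear least squares. A minimal sketch with scipy on synthetic data, not the SESES series; the model form and parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def log_decay(t, offset, amp, tau):
    """Postseismic displacement: offset + amp * log(1 + t / tau)."""
    return offset + amp * np.log1p(t / tau)

# Synthetic daily positions (mm) for 3 years after an earthquake, with noise.
t_days = np.arange(1, 3 * 365)
truth = log_decay(t_days, 5.0, 20.0, 30.0)
obs = truth + rng.normal(0.0, 2.0, size=t_days.size)

popt, pcov = curve_fit(log_decay, t_days, obs, p0=[0.0, 10.0, 10.0])
offset, amp, tau = popt
print(f"offset = {offset:.1f} mm, amplitude = {amp:.1f} mm, tau = {tau:.1f} days")
```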

  18. Deep Heterogeneous Structure and Earthquake Generating Properties in the Yamasaki Fault Zone Estimated from Dense Seismic Observation

    NASA Astrophysics Data System (ADS)

    Nishigami, K.; Shibutani, T.; Katao, H.; Yamaguchi, S.; Mamada, Y.

    2011-12-01

    The Yamasaki fault zone is a left-lateral, strike-slip active fault with a total length of about 80 km in southwest Japan. We deployed a dense seismic observation network, composed of 32 stations with an average spacing of 5-10 km, around the Yamasaki fault zone. We have been estimating detailed fault structure, such as fault dip and shape, segmentation, and the possible locations of asperities and rupture initiation points, as well as the generating properties of earthquakes in the fault zone, through analyses of accurate hypocenter distribution, focal mechanisms, 3-D velocity tomography, coda wave inversion, and other waveform analyses. We also deployed a linear seismic array across the fault, composed of 20 stations with about 20 m spacing, in order to delineate the fault-zone structure in more detail using seismic waves trapped inside the low-velocity fault zone. We also estimated the resistivity structure at shallow depth in the fault zone by AMT (audio-frequency magnetotelluric) and MT surveys. In the scattering analysis of coda waves, we used the waveform data of the dense temporary stations from 2008 to 2010 and also of the routine stations in 2002 and 2003. Fig. 1 shows an example of the result, the 3-D distribution of relative scattering coefficients estimated around the Yamasaki fault zone. In this analysis, 2,391 waveforms recorded at 60 stations for 121 earthquakes were used. The result shows that microseismicity is high and the scattering coefficient is relatively large in the upper crust along the entire fault zone. The distribution of strong scatterers suggests that the Ohara and Hijima faults, the segments in the northwestern part of the Yamasaki fault zone, have an almost vertical fault plane from the surface to a depth of about 15 km. We will construct a fault structure model and discuss its relation to seismic activity in the Yamasaki fault zone. We used seismic network data operated by universities, NIED, AIST, and JMA. This study is carried out as a part of the

  19. Deep Structure and Earthquake Generating Properties in the Yamasaki Fault Zone, Southwest Japan, Estimated from Dense Seismic Observation

    NASA Astrophysics Data System (ADS)

    Nishigami, K.; Shibutani, T.; Katao, H.; Yamaguchi, S.; Mamada, Y.

    2012-12-01

    The Yamasaki fault zone is a left-lateral, strike-slip active fault with a total length of about 80 km in southwest Japan. We deployed a dense seismic observation network, composed of 32 stations with an average spacing of 5-10 km, around the Yamasaki fault zone. We have been estimating detailed fault structure, such as fault dip and shape, segmentation, and the possible locations of asperities and rupture initiation points, as well as the generating properties of earthquakes in and around the fault zone, through analyses of accurate hypocenter distribution, focal mechanisms, 3-D velocity tomography, coda wave inversion, and other waveform analyses. We also deployed a linear seismic array across the fault, composed of 20 stations with about 20 m spacing, in order to delineate the fault-zone structure in more detail using seismic waves trapped inside the low-velocity fault zone. We also estimated the detailed resistivity structure at shallow depth in the fault zone by AMT (audio-frequency magnetotelluric) surveys. In the scattering analysis of seismic coda waves, we used the waveform data of the dense temporary stations from 2008 to 2010 and also of the routine stations in 2002 and 2003, and estimated the 3-D distribution of relative scattering coefficients around the Yamasaki fault zone. In this analysis, 3,033 waveforms recorded at 60 stations for 136 earthquakes were used. The result shows that microseismicity is high and the scattering coefficient is relatively large in the upper crust along the entire fault zone. The distribution of strong scatterers suggests that the Ohara and Hijima faults, the segments in the northwestern part of the Yamasaki fault zone, have an almost vertical fault plane from the surface to a depth of about 15 km. We will construct a fault structure model and discuss its relation to seismic activity in the Yamasaki fault zone. We used seismic network data operated by universities, NIED, AIST, and JMA. This study has been carried out as a part of the

  20. The size of earthquakes

    USGS Publications Warehouse

    Kanamori, H.

    1980-01-01

    How we should measure the size of an earthquake has historically been a very important, as well as a very difficult, seismological problem. For example, figure 1 shows the loss of life caused by earthquakes in recent times and clearly demonstrates that 1976 was the worst year for earthquake casualties in the 20th century. However, the damage caused by an earthquake is due not only to its physical size but also to other factors such as where and when it occurs; thus, figure 1 is not necessarily an accurate measure of the "size" of earthquakes in 1976. The point is that the physical process underlying an earthquake is highly complex; we therefore cannot express every detail of an earthquake by a single straightforward parameter. Indeed, it would be very convenient if we could find a single number that represents the overall physical size of an earthquake. This was in fact the concept behind the Richter magnitude scale introduced in 1935.

  1. Nitrogen losses from dairy manure estimated through nitrogen mass balance and chemical markers

    USGS Publications Warehouse

    Hristov, Alexander N.; Zaman, S.; Vander Pol, M.; Ndegwa, P.; Campbell, L.; Silva, S.

    2009-01-01

    Ammonia is an important air and water pollutant, but the spatial variation in its concentrations presents technical difficulties in accurate determination of ammonia emissions from animal feeding operations. The objectives of this study were to investigate the relationship between ammonia volatilization and δ15N of dairy manure and the feasibility of estimating ammonia losses from a dairy facility using chemical markers. In Exp. 1, the N/P ratio in manure decreased by 30% in 14 d as cumulative ammonia losses increased exponentially. The δ15N of manure increased throughout the course of the experiment and the δ15N of emitted ammonia increased (p < 0.001) quadratically from -31‰ to -15‰. The relationship between cumulative ammonia losses and δ15N of manure was highly significant (p < 0.001; r2 = 0.76). In Exp. 2, using a mass balance approach, approximately half of the N excreted by dairy cows (Bos taurus) could not be accounted for in 24 h. Using N/P and N/K ratios in fresh and 24-h manure, an estimated 0.55 and 0.34 (respectively) of the N excreted with feces and urine could not be accounted for. This study demonstrated that chemical markers (P, K) can be successfully used to estimate ammonia losses from cattle manure. The relationship between manure δ15N and cumulative ammonia loss may also be useful for estimating ammonia losses. Although promising, the latter approach needs to be further studied and verified in various experimental conditions and in the field. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
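
    The marker approach rests on the fact that P (or K) is conserved in the manure while N is volatilized, so the drop in the N/P ratio gives the fraction of N lost. A minimal sketch in Python; the ratio values are invented for illustration, not taken from the study:

```python
def fraction_n_lost(np_ratio_initial: float, np_ratio_final: float) -> float:
    """Fraction of manure N lost, assuming the marker (P) is conserved:
    N_final / N_initial = (N/P)_final / (N/P)_initial."""
    return 1.0 - np_ratio_final / np_ratio_initial

# Illustrative N/P ratios in fresh and aged manure (hypothetical values).
fresh, aged = 5.0, 3.5
print(f"Estimated N loss: {100 * fraction_n_lost(fresh, aged):.0f}% of excreted N")
```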

  2. Estimated crop yield losses due to surface ozone exposure and economic damage in India.

    PubMed

    Debaje, S B

    2014-06-01

    In this study, we estimate yield losses and economic damage for two major crops (winter wheat and rabi rice) due to surface ozone (O3) exposure, using hourly O3 concentrations for the period 2002-2007 in India. Crop yield losses are estimated according to two indices of O3 exposure established by field studies: the 7-h seasonal daytime (0900-1600 hours) mean measured O3 concentration (M7), and AOT40, the accumulated exposure to O3 concentrations over a threshold of 40 parts per billion by volume during daylight hours (0700-1800 hours). Our results indicate relative yield losses of 5-11% (6-30%) for winter wheat and 3-6% (9-16%) for rabi rice using the M7 (AOT40) index, relative to mean annual production of 81 million metric tons (Mt) of winter wheat and 12 Mt of rabi rice over the period 2002-2007. The estimated mean crop production loss (CPL) for winter wheat is 9 to 29 Mt, corresponding to an economic loss of 1,222 to 4,091 million US$ annually. Similarly, the mean CPL for rabi rice is 0.64 to 2.1 Mt, worth 86-276 million US$. Our calculated winter wheat and rabi rice losses agree well with previous results, providing further evidence that large crop yield losses are occurring in India at current O3 concentrations and that further elevated O3 concentrations in the future may pose a threat to food security.
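
    Both exposure indices are straightforward to compute from an hourly O3 series. A minimal sketch in Python; the synthetic diurnal series below merely stands in for monitoring data:

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(24 * 90)          # a 90-day growing season, hourly
hour_of_day = hours % 24
o3_ppb = 35 + 25 * np.sin((hour_of_day - 6) / 24 * 2 * np.pi) + rng.normal(0, 5, hours.size)

# M7: mean O3 over the 7-h daytime window (0900-1600 hours).
m7_mask = (hour_of_day >= 9) & (hour_of_day < 16)
m7 = o3_ppb[m7_mask].mean()

# AOT40: accumulated exposure over 40 ppb during daylight hours (0700-1800 hours).
daylight = (hour_of_day >= 7) & (hour_of_day < 18)
aot40_ppb_h = np.clip(o3_ppb[daylight] - 40.0, 0.0, None).sum()

print(f"M7 = {m7:.1f} ppb, AOT40 = {aot40_ppb_h / 1000:.1f} ppm.h")
```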

  3. Proceedings of Conference XVIII: a workshop on "Continuing actions to reduce losses from earthquakes in the Mississippi Valley area," 24-26 May, 1982, St. Louis, Missouri

    USGS Publications Warehouse

    Gori, Paula L.; Hays, Walter W.; Kitzmiller, Carla

    1983-01-01

    payoff and the lowest cost and effort requirements. These action plans, which identify steps that can be undertaken immediately to reduce losses from earthquakes in each of the seven States in the Mississippi Valley area, are contained in this report. The draft 5-year plan for the Central United States, prepared in the Knoxville workshop, was the starting point of the small-group discussions in the St. Louis workshop which led to the action plans contained in this report. For completeness, the draft 5-year plan for the Central United States is reproduced as Appendix B.

  4. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently founded by the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on its members' decade-long experience in earthquake monitoring systems and seismic data analysis, whose major goal is to transform the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software that provides an elegant solution for managing and analysing seismic data and for creating automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the November 23, 1980, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and then the software performs phase association and event binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear, exploration algorithm. Then, the software automatically provides three different magnitude estimates. First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude

  6. Development of an online tool for tsunami inundation simulation and tsunami loss estimation

    NASA Astrophysics Data System (ADS)

    Srivihok, P.; Honda, K.; Ruangrassamee, A.; Muangsin, V.; Naparat, P.; Foytong, P.; Promdumrong, N.; Aphimaeteethomrong, P.; Intavee, A.; Layug, J. E.; Kosin, T.

    2014-05-01

    The devastating impacts of the 2004 Indian Ocean tsunami highlighted the need for an effective end-to-end tsunami early warning system in the region, one that connects the scientific components of warning with the preparedness of institutions and communities to respond to an emergency. Essential to preparedness planning is knowledge of tsunami risks. In this study, the development of an online tool named "INSPIRE" for tsunami inundation simulation and tsunami loss estimation is presented. The tool is designed to accommodate various accuracy levels of tsunami exposure data, supporting users in undertaking preliminary tsunami risk assessments from existing data and progressively improving them as more detailed and accurate datasets become available. A sampling survey technique is introduced to improve local vulnerability data at lower cost and with less manpower. The performance of the proposed methodology and the INSPIRE tool was tested against datasets for the Kamala and Patong municipalities, Phuket province, Thailand. The building type ratios estimated from the sampling survey show satisfactory agreement with the actual building data at the test sites. Sub-area classification by land use can improve the accuracy of the building type ratio estimation. For the resulting loss estimation, exposure data generated from a detailed field survey give results that agree with the actual building damage recorded for the 2004 Indian Ocean tsunami. However, lower-accuracy exposure data derived from sampling surveys and remote sensing can still provide a comparative overview of estimated loss.

  7. A quadratic energy minimization framework for signal loss estimation from arbitrarily sampled ultrasound data.

    PubMed

    Hennersperger, Christoph; Mateus, Diana; Baust, Maximilian; Navab, Nassir

    2014-01-01

    We present a flexible and general framework to iteratively solve quadratic energy problems on a non-uniform grid, targeted at ultrasound imaging. We model the input samples as the nodes of an irregular directed graph and define energies according to the application by setting weights on the edges. To solve the energy, we derive an effective optimization scheme which avoids both the explicit computation of a linear system and the compounding of the input data onto a regular grid. The framework is validated in the context of 3D ultrasound signal loss estimation, with the goal of providing an uncertainty estimate for each 3D data sample. Qualitative and quantitative results for 5 subjects and two target regions, namely US of the bone and of the carotid artery, show the benefits of our approach, yielding continuous loss estimates. PMID:25485401

  8. Preventing land loss in coastal Louisiana: estimates of WTP and WTA.

    PubMed

    Petrolia, Daniel R; Kim, Tae-Goun

    2011-03-01

    A dichotomous-choice contingent-valuation survey was conducted in the State of Louisiana (USA) to estimate compensating surplus (CS) and equivalent surplus (ES) welfare measures for the prevention of future coastal wetland losses in Louisiana. Valuations were elicited using both willingness-to-pay (WTP) and willingness-to-accept-compensation (WTA) payment vehicles. The mean CS (WTP) estimate, based on a probit model with a Box-Cox specification on income, was $825 per household annually, and the mean ES (WTA) was estimated at $4444 per household annually. Regression results indicate that the major factors influencing support for land-loss prevention were income (positive, WTP model only), perceived hurricane protection benefits (positive), environmental and recreation protection (positive), distrust of government (negative), age (positive, WTA model only), and race (positive for whites).

  9. Perspectives on earthquake hazards in the New Madrid seismic zone, Missouri

    USGS Publications Warehouse

    Thenhaus, P.C.

    1990-01-01

    A sequence of three great earthquakes struck the Central United States during the winter of 1811-1812 in the area of New Madrid, Missouri. They are considered to be the greatest earthquakes in the conterminous U.S. because they were felt and caused damage at far greater distances than any other earthquakes in U.S. history. The large population currently living within the damage area of these earthquakes means that widespread destruction and loss of life would be likely if the sequence were repeated. In contrast to California, where earthquakes are felt frequently, the damaging earthquakes that have occurred in the Eastern U.S. - in 1755 (Cape Ann, Mass.), 1811-12 (New Madrid, Mo.), 1886 (Charleston, S.C.), and 1897 (Giles County, Va.) - are generally regarded as only historical phenomena (fig. 1). The social memory of these earthquakes no longer exists. A fundamental problem in the Eastern U.S., therefore, is that the earthquake hazard is not generally considered today in land-use and civic planning. This article offers perspectives on the earthquake hazard of the New Madrid seismic zone through discussions of the geology of the Mississippi Embayment, the historical earthquakes that have occurred there, the earthquake risk, and the "tools" that geoscientists have to study the region. The so-called earthquake hazard is defined by the characterization of the physical attributes of the geological structures that cause earthquakes, the estimation of the recurrence times of the earthquakes, their potential size, and the expected ground motions. The term "earthquake risk," on the other hand, refers to aspects of the expected damage to manmade structures and to lifelines as a result of the earthquake hazard.

  10. Annual South American Forest Loss Estimates (1989-2011) Based on Passive Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    van Marle, M.; van der Werf, G.; de Jeu, R.; Liu, Y.

    2014-12-01

    Vegetation dynamics, such as forest loss, are an important factor in global climate, but long-term and consistent information on these dynamics on continental scales is lacking. We have quantified large-scale forest loss over the 90s and 00s in the tropical biomes of South America using a passive-microwave satellite-based vegetation product. Our forest loss estimates are based on remotely sensed vegetation optical depth (VOD), which is an indicator of vegetation water content simultaneously retrieved with soil moisture. The advantage of low-frequency microwave remote sensing is that aerosols and clouds do not affect the observations. Furthermore, the longer wavelengths of passive microwaves penetrate deeper into vegetation than other products derived from optical and thermal sensors. This has the consequence that both woody parts of vegetation and leaves can be observed. The merged VOD product of AMSR-E and SSM/I observations, which covers over 23 years of daily observations, is used. We used this data stream and an outlier detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Qualitatively, our results compared favorably to the newly developed Global Forest Change (GFC) maps based on Landsat data (r2=0.96), and this allowed us to convert the VOD outlier count to forest loss. Our results are spatially explicit with a 0.25-degree resolution and an annual time step, and we will present our estimates at the country level. The added benefit of our results compared to GFC is the longer time period. The results indicate a relatively steady increase in forest loss in Brazil from 1989 until 2003, followed by two high forest loss years and a declining trend afterwards. This contrasts with other South American countries such as Bolivia and Peru, where forest losses increased in almost the whole 00s in comparison with the 90s.
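
    A minimal sketch of the calibration step mentioned above, assuming annual counts of negative VOD outliers are linearly related to Landsat-based loss areas; the arrays are synthetic and the regression constants are not from the study:

```python
# Illustrative calibration: VOD outlier counts are regressed against GFC loss
# areas over the overlap period, then the fit converts pre-GFC counts to area.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(2001, 2012)
vod_outlier_count = rng.integers(50, 500, size=years.size).astype(float)  # per country, per year
gfc_loss_km2 = 12.0 * vod_outlier_count + rng.normal(0, 300, size=years.size)

# Linear calibration against the GFC overlap period (2001 onwards)
slope, intercept, r, _, _ = stats.linregress(vod_outlier_count, gfc_loss_km2)
print(f"r^2 = {r**2:.2f}")

# Apply the calibration to the pre-GFC part of the record (e.g. 1989-2000 counts)
pre_gfc_counts = rng.integers(50, 500, size=12).astype(float)
forest_loss_km2 = slope * pre_gfc_counts + intercept
print(forest_loss_km2.round(0))
```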

  11. Estimate of tsunami source using optimized unit sources and including dispersion effects during tsunami propagation: The 2012 Haida Gwaii earthquake

    NASA Astrophysics Data System (ADS)

    Gusman, Aditya Riadi; Mulia, Iyan Eka; Satake, Kenji; Watada, Shingo; Heidarzadeh, Mohammad; Sheehan, Anne F.

    2016-09-01

    We apply a genetic algorithm to find the optimized unit sources, using dispersive tsunami synthetics, to estimate the tsunami source of the 2012 Haida Gwaii earthquake. The optimal number and distribution of unit sources give a sea surface elevation similar to that from our previous slip distribution on a fault using tsunami data, but different from that obtained using seismic data. The difference is possibly due to submarine mass failure in the source region. Dispersion effects during tsunami propagation reduce the maximum amplitudes by up to 20% relative to a conventional linear long-wave propagation model. Dispersion effects also increase tsunami travel time by approximately 1 min per 1300 km on average. The dispersion effects on amplitudes depend on the azimuth from the tsunami source, reflecting the directivity of the tsunami source, while the effects on travel times depend only on the distance from the source.

  12. Annual South American forest loss estimates based on passive microwave remote sensing (1990-2010)

    NASA Astrophysics Data System (ADS)

    van Marle, M. J. E.; van der Werf, G. R.; de Jeu, R. A. M.; Liu, Y. Y.

    2016-02-01

    Consistent forest loss estimates are important to understand the role of forest loss and deforestation in the global carbon cycle, for biodiversity studies, and to estimate the mitigation potential of reducing deforestation. To date, most studies have relied on optical satellite data and new efforts have greatly improved our quantitative knowledge on forest dynamics. However, most of these studies yield results for only a relatively short time period or are limited to certain countries. We have quantified large-scale forest loss over a 21-year period (1990-2010) in the tropical biomes of South America using remotely sensed vegetation optical depth (VOD). This passive microwave satellite-based indicator of vegetation water content and vegetation density has a much coarser spatial resolution than optical data but its temporal resolution is higher and VOD is not impacted by aerosols and cloud cover. We used the merged VOD product of the Advanced Microwave Scanning Radiometer (AMSR-E) and Special Sensor Microwave Imager (SSM/I) observations, and developed a change detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Our results compared reasonably well with the newly developed Landsat-based Global Forest Change (GFC) maps, available for the 2001 onwards period (r2 = 0.90 when comparing annual country-level estimates). This allowed us to convert our identified changes in VOD to forest loss area and compute these from 1990 onwards. We also compared these calibrated results to PRODES (r2 = 0.60 when comparing annual state-level estimates). We found that South American forest loss exhibited substantial interannual variability without a clear trend during the 1990s, but increased from 2000 until 2004. After 2004, forest loss decreased again, except for two smaller peaks in 2007 and 2010. For a large part, these trends were driven by changes in Brazil, which was responsible for 56 % of the total South American forest loss area over our study

  13. The radiated seismic energy and apparent stress of interplate and intraplate earthquakes at subduction zone environments; implications for seismic hazard estimation

    USGS Publications Warehouse

    Choy, George L.; Boatwright, John L.; Kirby, Stephen H.

    2001-01-01

    The radiated seismic energies (ES) of 980 shallow subduction-zone earthquakes with magnitudes ≥ 5.8 are used to examine global patterns of energy release and apparent stress. In contrast to traditional methods which have relied upon empirical formulas, these energies are computed through direct spectral analysis of broadband seismic waveforms. Energy gives a physically different measure of earthquake size than moment. Moment, being derived from the low-frequency asymptote of the displacement spectra, is related to the final static displacement. Thus, moment is crucial to the long-term tectonic implication of an earthquake. In contrast, energy, being derived from the velocity power spectra, is more a measure of seismic potential for damage to anthropogenic structures. There is considerable scatter in the plot of ES-M0 for worldwide earthquakes. For any given M0, the ES can vary by as much as an order of magnitude about the mean regression line. The global variation between ES and M0, while large, is not random. When subsets of ES-M0 are plotted as a function of seismic region, tectonic setting and faulting type, the scatter in data is often substantially reduced. There are two profound implications for the estimation of seismic and tsunamic hazard. First, it is now feasible to characterize the apparent stress for particular regions. Second, a given M0 does not have a unique ES. This means that M0 alone is not sufficient to describe all aspects of an earthquake. In particular, we have found examples of interplate thrust-faulting earthquakes and intraslab normal-faulting earthquakes occurring in the same epicentral region with vastly different macroseismic effects. Despite the gross macroseismic disparities, the Mw's in these examples were identical. However, the Me's (energy magnitudes) successfully distinguished the earthquakes that were more damaging.
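
    For orientation, the two quantities at the core of this record can be written down directly; the sketch below uses the conventional relations tau_a = mu*Es/M0 and Me = (2/3)*log10(Es) - 2.9 (Es in N*m), with round illustrative numbers rather than catalogue values:

```python
# Back-of-envelope illustration of apparent stress and energy magnitude.
import math

mu = 3.0e10        # shear modulus, Pa (typical crustal value)
M0 = 1.0e20        # seismic moment, N*m (roughly Mw 7.3)
Es = 5.0e15        # radiated seismic energy, N*m (illustrative)

tau_a = mu * Es / M0                              # apparent stress
Me = (2.0 / 3.0) * math.log10(Es) - 2.9           # energy magnitude (Choy & Boatwright convention)
Mw = (2.0 / 3.0) * math.log10(M0) - 6.03          # Hanks-Kanamori, M0 in N*m

print(f"apparent stress ~ {tau_a/1e6:.1f} MPa, Me = {Me:.2f}, Mw = {Mw:.2f}")
```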

  14. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time-domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the

  15. Earthquake Hazard and Risk Assessment for Turkey

    NASA Astrophysics Data System (ADS)

    Betul Demircioglu, Mine; Sesetyan, Karin; Erdik, Mustafa

    2010-05-01

    Using a GIS environment to present the results, seismic risk analysis is considered a helpful tool to support decision making for planning and prioritizing seismic retrofit intervention programs at large scale. The main ingredients of seismic risk analysis are seismic hazard, a regional inventory of buildings, and vulnerability analysis. In this study, the assessment of the national earthquake hazard based on the NGA ground motion prediction models and a comparison of the results with previous models have been carried out. An evaluation of seismic risk based on probabilistic intensity ground motion prediction for Turkey has also been investigated. Following the macroseismic approach of Giovinazzi and Lagomarsino (2005), two alternative vulnerability models have been used to estimate building damage. The vulnerability and ductility indices for Turkey have been taken from the study of Giovinazzi (2005). These two vulnerability models have been compared with the observed earthquake damage database, and good agreement between the curves has been observed. In addition to building damage, casualty estimates based on three different methods, for each return period and for each vulnerability model, are presented to evaluate the earthquake loss. Using three different models of building replacement costs, the average annual loss (AAL) and probable maximum loss ratio (PMLR) due to regional earthquake hazard have been provided to form a basis for the improvement of the parametric insurance model and the determination of premium rates for the compulsory earthquake insurance in Turkey.
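
    The AAL and PML quantities referred to above can be illustrated with a toy event loss table; the rates, losses, and 475-year choice below are placeholders, not values from the Turkish model:

```python
# AAL from a Poisson event set: expected loss per year = sum(rate_i * loss_i).
# PML ratio: loss at a chosen return period, expressed as a fraction of value.
import numpy as np

total_value = 1.0e9                                     # portfolio replacement cost
annual_rates = np.array([0.10, 0.02, 0.004, 0.001])     # event frequencies (1/yr)
event_losses = np.array([5e6, 5e7, 2e8, 5e8])           # loss per event

aal = np.sum(annual_rates * event_losses)

target_rate = 1.0 / 475.0                               # ~475-year return period
pml = event_losses[np.argmin(np.abs(annual_rates - target_rate))]

print(f"AAL = {aal/1e6:.1f} M, AAL ratio = {aal/total_value:.4%}, PML ratio ~ {pml/total_value:.1%}")
```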

  16. Calorie Estimation in Adults Differing in Body Weight Class and Weight Loss Status

    PubMed Central

    Brown, Ruth E; Canning, Karissa L; Fung, Michael; Jiandani, Dishay; Riddell, Michael C; Macpherson, Alison K; Kuk, Jennifer L

    2016-01-01

    Purpose Ability to accurately estimate calories is important for weight management, yet few studies have investigated whether individuals can accurately estimate calories during exercise, or in a meal. The objective of this study was to determine if accuracy of estimation of moderate or vigorous exercise energy expenditure and calories in food is associated with body weight class or weight loss status. Methods Fifty-eight adults who were either normal weight (NW) or overweight (OW), and either attempting (WL) or not attempting weight loss (noWL), exercised on a treadmill at a moderate (60% HRmax) and a vigorous intensity (75% HRmax) for 25 minutes. Subsequently, participants estimated the number of calories they expended through exercise, and created a meal that they believed to be calorically equivalent to the exercise energy expenditure. Results The mean difference between estimated and measured calories in exercise and food did not differ within or between groups following moderate exercise. Following vigorous exercise, OW-noWL overestimated energy expenditure by 72%, and overestimated the calories in their food by 37% (P<0.05). OW-noWL also significantly overestimated exercise energy expenditure compared to all other groups (P<0.05), and significantly overestimated calories in food compared to both WL groups (P<0.05). However, among all groups there was a considerable range of over and underestimation (−280 kcal to +702 kcal), as reflected by the large and statistically significant absolute error in calorie estimation of exercise and food. Conclusion There was a wide range of under and overestimation of calories during exercise and in a meal. Error in calorie estimation may be greater in overweight adults who are not attempting weight loss. PMID:26469988

  17. Handbook for the estimation of microwave propagation effects: Link calculations for earth-space paths (path loss and noise estimation)

    NASA Technical Reports Server (NTRS)

    Crane, R. K.; Blood, D. W.

    1979-01-01

    A single model is proposed as a standard of comparison for other models when dealing with rain attenuation problems in system design and experimentation. Refinements to the Global Rain Prediction Model are incorporated. Path loss and noise estimation procedures, as the basic input to systems design for earth-to-space microwave links operating at frequencies from 1 to 300 GHz, are provided. Topics covered include gaseous absorption, attenuation by rain, ionospheric and tropospheric scintillation, low elevation angle effects, radome attenuation, diversity schemes, link calculation, and receiver noise emission by atmospheric gases, rain, and antenna contributions.
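
    A minimal earth-space link-budget sketch in the spirit of the handbook is shown below: the total path loss is the free-space loss plus excess attenuation terms that would be taken from the handbook's gas and rain models (the attenuation values here are placeholders):

```python
# Free-space path loss plus excess attenuations; gas and rain values are
# placeholders to be read from the model's tables/maps, not computed here.
import math

def free_space_loss_db(freq_ghz, dist_km):
    """FSPL = 20*log10(4*pi*d*f/c), expressed in dB."""
    return 20.0 * math.log10(4.0 * math.pi * dist_km * 1e3 * freq_ghz * 1e9 / 3.0e8)

freq_ghz = 20.0
slant_range_km = 38_000.0           # roughly a GEO slant range
gas_absorption_db = 0.7             # from the gaseous-absorption model (placeholder)
rain_attenuation_db = 6.5           # from the rain model at the chosen exceedance (placeholder)

total_path_loss_db = free_space_loss_db(freq_ghz, slant_range_km) + gas_absorption_db + rain_attenuation_db
print(f"FSPL = {free_space_loss_db(freq_ghz, slant_range_km):.1f} dB, total = {total_path_loss_db:.1f} dB")
```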

  18. Annual South American forest loss estimates based on passive microwave remote sensing (1990-2010)

    NASA Astrophysics Data System (ADS)

    van Marle, M. J. E.; van der Werf, G. R.; de Jeu, R. A. M.; Liu, Y. Y.

    2015-07-01

    Consistent forest loss estimates are important to understand the role of forest loss and deforestation in the global carbon cycle, for biodiversity studies, and to estimate the mitigation potential of reducing deforestation. To date, most studies have relied on optical satellite data and new efforts have greatly improved our quantitative knowledge on forest dynamics. However, most of these studies yield results for only a relatively short time period or are limited to certain countries. We have quantified large-scale forest losses over a 21-year period (1990-2010) in the tropical biomes of South America using remotely sensed vegetation optical depth (VOD). This passive microwave satellite-based indicator of vegetation water content and vegetation density has a much coarser spatial resolution than optical data, but its temporal resolution is higher and VOD is not impacted by aerosols and cloud cover. We used the merged VOD product of the Advanced Microwave Scanning Radiometer (AMSR-E) and Special Sensor Microwave Imager (SSM/I) observations, and developed a change detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Our results compared favorably to the newly developed Global Forest Change (GFC) maps based on Landsat data and available for the 2001 onwards period (r2 = 0.90 when comparing annual country-level estimates), which allowed us to convert our results to forest loss area and compute these from 1990 onwards. We found that South American forest loss exhibited substantial interannual variability without a clear trend during the 1990s, but increased from 2000 until 2004. After 2004, forest loss decreased again, except for two smaller peaks in 2007 and 2010. For a large part, these trends were driven by changes in Brazil, which was responsible for 56 % of the total South American forest loss over our study period according to our results. One of the key findings of our study is that while forest losses decreased in Brazil after 2005

  19. Estimating the mitigation of anthropogenic loss of phosphorus in New Zealand grassland catchments.

    PubMed

    McDowell, R W

    2014-01-15

    Managing phosphorus in catchments is central to improving surface water quality, but knowing how much can be mitigated from agricultural land, and at what cost relative to a natural baseline (or reference condition), is difficult to assess. The difference between median concentrations now and under reference was defined as the anthropogenic loss, while the manageable loss was defined as the median P concentration possible without costing more than 10% of farm profitability (measured as earnings before interest and tax, EBIT). Nineteen strategies to mitigate P loss were ranked according to cost (low, medium, high, very high). Using the average dairy and drystock farms in 14 grassland catchments as test cases, the potential to mitigate P loss from land to water was then modelled for different strategies, beginning with strategies within the lowest cost category from best to least effective, before applying a strategy from a more expensive category. The anthropogenic contribution to stream median FRP and TP concentrations was estimated as 44 and 69%, respectively. However, applying up to three strategies per farm theoretically enabled mitigation of FRP and TP losses sufficient for aesthetic and trout fishery values to be met, at a cost of <1% of EBIT for drystock farms and <6% of EBIT for dairy farms. This shows that defining and acting upon the manageable loss in grassland catchments (with few point sources) has potential to achieve a water quality outcome within an ecological target at little cost. PMID:23579204

  20. Urbanization and agricultural land loss in India: comparing satellite estimates with census data.

    PubMed

    Pandey, Bhartendu; Seto, Karen C

    2015-01-15

    We examine the impacts of urbanization on agricultural land loss in India from 2001 to 2010. We combined a hierarchical classification approach with econometric time series analysis to reconstruct land-cover change histories using time series MODIS 250 m VI images composited at 16-day intervals and night time lights (NTL) data. We compared estimates of agricultural land loss using satellite data with agricultural census data. Our analysis highlights six key results. First, agricultural land loss is occurring around smaller cities more than around bigger cities. Second, from 2001 to 2010, each state lost less than 1% of its total geographical area due to conversion of agricultural land to urban use. Third, the northeastern states experienced the least amount of agricultural land loss. Fourth, agricultural land loss is largely in states and districts which have a larger number of operational or approved SEZs. Fifth, urban conversion of agricultural land is concentrated in a few districts and states with high rates of economic growth. Sixth, agricultural land loss is predominantly in states with higher agricultural land suitability compared to other states. Although the total area of agricultural land lost to urban expansion has been relatively low, our results show that since 2006, the amount of agricultural land converted has been increasing steadily. Given that the preponderance of India's urban population growth has yet to occur, the results suggest an increase in the conversion of agricultural land going into the future.

  1. Estimating the mitigation of anthropogenic loss of phosphorus in New Zealand grassland catchments.

    PubMed

    McDowell, R W

    2014-01-15

    Managing phosphorus in catchments is central to improving surface water quality, but knowing how much can be mitigated from agricultural land, and at what cost relative to a natural baseline (or reference condition), is difficult to assess. The difference between median concentrations now and under reference was defined as the anthropogenic loss, while the manageable loss was defined as the median P concentration possible without costing more than 10% of farm profitability (measured as earnings before interest and tax, EBIT). Nineteen strategies to mitigate P loss were ranked according to cost (low, medium, high, very high). Using the average dairy and drystock farms in 14 grassland catchments as test cases, the potential to mitigate P loss from land to water was then modelled for different strategies, beginning with strategies within the lowest cost category from best to least effective, before applying a strategy from a more expensive category. The anthropogenic contribution to stream median FRP and TP concentrations was estimated as 44 and 69%, respectively. However, applying up to three strategies per farm theoretically enabled mitigation of FRP and TP losses sufficient for aesthetic and trout fishery values to be met, at a cost of <1% of EBIT for drystock farms and <6% of EBIT for dairy farms. This shows that defining and acting upon the manageable loss in grassland catchments (with few point sources) has potential to achieve a water quality outcome within an ecological target at little cost.

  2. Neglect of bandwidth of Odontocetes echo location clicks biases propagation loss and single hydrophone population estimates.

    PubMed

    Ainslie, Michael A

    2013-11-01

    Passive acoustic monitoring with a single hydrophone has been suggested as a cost-effective method to monitor population density of echolocating marine mammals, by estimating the distance at which the hydrophone is able to intercept the echolocation clicks and distinguish these from the background. To avoid a bias in the estimated population density, this method relies on an unbiased estimate of the detection range and therefore of the propagation loss (PL). When applying this method, it is common practice to estimate PL at the center frequency of a broadband echolocation click and to assume this narrowband PL applies also to the broadband click. For a typical situation this narrowband approximation overestimates PL, underestimates the detection range and consequently overestimates the population density by an amount that for fixed center frequency increases with increasing pulse bandwidth and sonar figure of merit.

  3. Estimating earthquake-rupture rates on a fault or fault system

    USGS Publications Warehouse

    Field, E.H.; Page, M.T.

    2011-01-01

    Previous approaches used to determine the rates of different earthquakes on a fault have made assumptions regarding segmentation, have been difficult to document and reproduce, and have lacked the ability to satisfy all available data constraints. We present a relatively objective and reproducible inverse methodology for determining the rate of different ruptures on a fault or fault system. The data used in the inversion include slip rate, event rate, and other constraints such as an optional a priori magnitude-frequency distribution. We demonstrate our methodology by solving for the long-term rate of ruptures on the southern San Andreas fault. Our results imply that a Gutenberg-Richter distribution is consistent with the data available for this fault; however, more work is needed to test the robustness of this assertion. More importantly, the methodology is extensible to an entire fault system (thereby including multifault ruptures) and can be used to quantify the relative benefits of collecting additional paleoseismic data at different sites.
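
    A toy version of this inverse problem, assuming a small set of candidate ruptures and section slip-rate targets (all numbers invented), can be solved with non-negative least squares:

```python
# Each fault section's slip rate = sum over ruptures of (slip in that rupture)
# times (rupture rate); rates must be non-negative.
import numpy as np
from scipy.optimize import nnls

# 3 fault sections, 4 candidate ruptures (columns): slip (m) on each section
slip_per_rupture = np.array([
    [1.0, 0.0, 1.5, 2.0],
    [1.0, 1.2, 1.5, 2.0],
    [0.0, 1.2, 0.0, 2.0],
])
section_slip_rate = np.array([0.010, 0.012, 0.006])   # m/yr targets

rates, residual = nnls(slip_per_rupture, section_slip_rate)
print("rupture rates (1/yr):", rates.round(5), " misfit:", round(residual, 5))
```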

  4. Damping loss factor estimation of two-dimensional orthotropic structures from a displacement field measurement

    NASA Astrophysics Data System (ADS)

    Cherif, Raef; Chazot, Jean-Daniel; Atalla, Noureddine

    2015-11-01

    This paper presents a damping loss factor estimation method for two-dimensional orthotropic structures. The method is based on a scanning laser vibrometer measurement. The dispersion curves of the studied structures are first estimated at several chosen angles of propagation with a spatial Fourier transform. Next the global damping loss factor is evaluated with the proposed inverse wave method. The method is first tested using numerical results obtained from a finite element model. The accuracy of the proposed method is then experimentally investigated on an isotropic aluminium panel and two orthotropic sandwich composite panels with a honeycomb core. The results are finally compared and validated over a large frequency band with classical methods such as the half-power bandwidth method (3 dB method), the decay rate method and the steady-state power input method. The present method offers the possibility of structural characterization with a simple measurement scan.
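
    One of the reference techniques named above, the half-power bandwidth (3 dB) method, is easy to sketch on a synthetic single-mode frequency response; the structural-damping FRF below is an illustrative assumption, not the measured panel data:

```python
# Half-power bandwidth method: eta = delta_f / f_peak, with delta_f the width of
# the resonance peak at 1/sqrt(2) of its maximum amplitude.
import numpy as np

f = np.linspace(50.0, 150.0, 5000)           # Hz
fn, eta_true = 100.0, 0.02                   # resonance frequency and loss factor
H = 1.0 / np.sqrt((1 - (f / fn) ** 2) ** 2 + eta_true ** 2)   # structural-damping SDOF FRF

peak = np.argmax(H)
half_power = H[peak] / np.sqrt(2.0)
above = np.where(H >= half_power)[0]
delta_f = f[above[-1]] - f[above[0]]

print(f"estimated eta = {delta_f / f[peak]:.4f} (true {eta_true})")
```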

  5. Estimating formation properties from early-time recovery in wells subject to turbulent head losses

    USGS Publications Warehouse

    Shapiro, A.M.; Oki, D.S.; Greene, E.A.

    1998-01-01

    A mathematical model is developed to interpret the early-time recovering water level following the termination of pumping in wells subject to turbulent head losses. The model assumes that turbulent head losses dissipate immediately when pumping ends. In wells subject to both borehole storage and turbulent head losses, the early-time recovery exhibits a slope equal to 1/2 on log-log plots of the recovery versus time. This half-slope response should not be confused with the half-slope response associated with a linear flow regime during aquifer tests. The presence of a borehole skin due to formation damage or stimulation around the pumped well alters the early-time recovery in wells subject to turbulent head losses and gives the appearance of borehole storage, where the recovery exhibits a unit slope on log-log plots of recovery versus time. Type curves can be used to estimate the formation storativity from the early-time recovery data. In wells that are suspected of having formation damage or stimulation, the type curves can be used to estimate the 'effective' radius of the pumped well, if an estimate of the formation storativity is available from observation wells or other information. Type curves for a homogeneous and isotropic dual-porosity aquifer are developed and applied to estimate formation properties and the effect of formation stimulation from a single-well test conducted in the Madison limestone near Rapid City, South Dakota.
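
    The log-log slope diagnostic described above can be checked numerically; the synthetic recovery series below is an assumption used only to show the fit:

```python
# Fit the early-time log-log slope: ~1/2 suggests borehole storage plus turbulent
# head loss, ~1 suggests pure borehole-storage / skin behaviour.
import numpy as np

t = np.logspace(-3, -1, 30)              # days since pumping stopped
recovery = 4.2 * np.sqrt(t)              # synthetic half-slope response (metres)

slope, intercept = np.polyfit(np.log10(t), np.log10(recovery), 1)
print(f"early-time log-log slope = {slope:.2f}")   # ~0.5 here
```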

  6. Estimation of the Iron Loss in Deep-Sea Permanent Magnet Motors considering Seawater Compressive Stress

    PubMed Central

    Wei, Yanyu; Zou, Jibin; Li, Jianjun; Qi, Wenjuan; Li, Yong

    2014-01-01

    A deep-sea permanent magnet motor equipped with a fluid-compensated pressure-tolerant system is compressed by high-pressure fluid both outside and inside. The induced stress distribution in the stator core is significantly different from that in a land-type motor. Its effect on the magnetic properties of the stator core is important for deep-sea motor designers but seldom reported. In this paper, the stress distribution in the stator core, accounting for the seawater compressive stress, is calculated by the 2D finite element method (FEM). The effect of compressive stress on the magnetic properties of electrical steel sheet, that is, permeability, BH curves, and BW curves, is also measured. Then, based on the measured magnetic properties and the calculated stress distribution, the stator iron loss is estimated by a coupled stress-electromagnetics FEM. Finally, the estimation is verified by experiment. Both the calculated and measured results show that stator iron loss increases markedly with the seawater compressive stress. PMID:25177717

  7. Estimation of the iron loss in deep-sea permanent magnet motors considering seawater compressive stress.

    PubMed

    Xu, Yongxiang; Wei, Yanyu; Zou, Jibin; Li, Jianjun; Qi, Wenjuan; Li, Yong

    2014-01-01

    A deep-sea permanent magnet motor equipped with a fluid-compensated pressure-tolerant system is compressed by high-pressure fluid both outside and inside. The induced stress distribution in the stator core is significantly different from that in a land-type motor. Its effect on the magnetic properties of the stator core is important for deep-sea motor designers but seldom reported. In this paper, the stress distribution in the stator core, accounting for the seawater compressive stress, is calculated by the 2D finite element method (FEM). The effect of compressive stress on the magnetic properties of electrical steel sheet, that is, permeability, BH curves, and BW curves, is also measured. Then, based on the measured magnetic properties and the calculated stress distribution, the stator iron loss is estimated by a coupled stress-electromagnetics FEM. Finally, the estimation is verified by experiment. Both the calculated and measured results show that stator iron loss increases markedly with the seawater compressive stress.

  8. Systems, methods and computer readable media for estimating capacity loss in rechargeable electrochemical cells

    DOEpatents

    Gering, Kevin L.

    2013-06-18

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples charge characteristics of the electrochemical cell. The computing system periodically determines cell information from the charge characteristics of the electrochemical cell. The computing system also periodically adds a first degradation characteristic from the cell information to a first sigmoid expression, periodically adds a second degradation characteristic from the cell information to a second sigmoid expression and combines the first sigmoid expression and the second sigmoid expression to develop or augment a multiple sigmoid model (MSM) of the electrochemical cell. The MSM may be used to estimate a capacity loss of the electrochemical cell at a desired point in time and analyze other characteristics of the electrochemical cell. The first and second degradation characteristics may be loss of active host sites and loss of free lithium for Li-ion cells.
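
    A hedged sketch of what a two-sigmoid capacity-fade fit might look like is given below; the functional form follows the summary above, but the parameter names, values, and data are illustrative, not from the patent:

```python
# Two-sigmoid capacity-fade model: each degradation mechanism (e.g. loss of
# active host sites, loss of free lithium) contributes one sigmoid, and their sum
# is fit to periodic capacity measurements, then extrapolated in time.
import numpy as np
from scipy.optimize import curve_fit

def multiple_sigmoid(t, a1, k1, t1, a2, k2, t2):
    """Capacity loss fraction as the sum of two sigmoids in time t (e.g. cycles)."""
    s1 = a1 / (1.0 + np.exp(-k1 * (t - t1)))
    s2 = a2 / (1.0 + np.exp(-k2 * (t - t2)))
    return s1 + s2

cycles = np.linspace(0, 1000, 60)
measured_loss = multiple_sigmoid(cycles, 0.08, 0.01, 300, 0.15, 0.006, 800)
measured_loss += np.random.default_rng(2).normal(0, 0.003, cycles.size)

p0 = [0.1, 0.01, 250, 0.1, 0.005, 700]
popt, _ = curve_fit(multiple_sigmoid, cycles, measured_loss, p0=p0, maxfev=20000)

# Extrapolate the fitted model to estimate capacity loss at a future point in time
print(f"predicted loss at 1500 cycles: {multiple_sigmoid(1500, *popt):.3f}")
```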

  9. Combining double difference and amplitude ratio approaches for Q estimates at the NW Bohemia earthquake swarm region

    NASA Astrophysics Data System (ADS)

    Kriegerowski, Marius; Cesca, Simone; Krüger, Frank; Dahm, Torsten; Horálek, Josef

    2016-04-01

    Aside from the propagation velocity of seismic waves, their attenuation can provide a direct measure of rock properties in the sampled subspace. We present a new attenuation tomography approach exploiting relative amplitude spectral ratios of earthquake pairs. We focus our investigation on North West Bohemia, a region characterized by intense earthquake swarm activity in a confined source region. The inter-event distances are small compared to the epicentral distances to the receivers, meeting a fundamental requirement of the method. Because the event locations are similar, the ray paths are also very similar. Consequently, the relative spectral ratio is affected mostly by rock properties along the path of the vector distance and is thus representative of the focal region. In order to exclude effects of the seismic source spectra, only the high-frequency content beyond the corner frequency is taken into consideration. This requires high-quality records with high sampling rates. Future improvements in that respect can be expected from the ICDP proposal "Eger rift", which includes plans to install borehole monitoring in the investigated region. 1D and 3D synthetic tests show the feasibility of the presented method. Furthermore, we demonstrate the influence of perturbations in source locations and travel time estimates on the determination of Q. Errors in Q scale linearly with errors in the differential travel times. These sources of errors can be attributed to the complex velocity structure of the investigated region. A critical aspect is the signal-to-noise ratio, which imposes a strong limitation and emphasizes the demand for high-quality recordings. Hence, the presented method is expected to benefit from borehole installations. Since we focus our analysis on the NW Bohemia case study example, a synthetic earthquake catalog incorporating source characteristics deduced from preceding moment tensor inversions coupled with a realistic velocity model provides us with a realistic
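
    The basic relation behind a spectral-ratio attenuation estimate can be sketched as follows: above the corner frequencies, ln[A1(f)/A2(f)] = const - pi*f*dt/Q, so Q follows from the slope of the log amplitude ratio versus frequency. The values below are synthetic, not from the NW Bohemia data:

```python
# Q from the frequency dependence of a log spectral ratio; dt is the differential
# travel time along the extra path length between the paired events.
import numpy as np

f = np.linspace(20.0, 60.0, 200)      # Hz, above the corner frequencies
dt = 0.05                              # differential travel time (s)
Q_true = 150.0
log_ratio = 0.3 - np.pi * f * dt / Q_true + np.random.default_rng(3).normal(0, 0.01, f.size)

slope, _ = np.polyfit(f, log_ratio, 1)
Q_est = -np.pi * dt / slope
print(f"Q estimate = {Q_est:.0f}")
```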

  10. Estimating the rate of retinal ganglion cell loss to detect glaucoma progression: An observational cohort study.

    PubMed

    Hirooka, Kazuyuki; Izumibata, Saeko; Ukegawa, Kaori; Nitta, Eri; Tsujikawa, Akitaka

    2016-07-01

    This study aimed to evaluate the relationship between glaucoma progression and estimates of the retinal ganglion cells (RGCs) obtained by combining structural and functional measurements in patients with glaucoma. In the present observational cohort study, we examined 116 eyes of 62 glaucoma patients. Using Cirrus optical coherence tomography (OCT), a minimum of 5 serial retinal nerve fiber layer (RNFL) measurements were performed in all eyes. There was a 3-year separation between the first and last measurements. Visual field (VF) testing was performed on the same day as the RNFL imaging using the Swedish Interactive Threshold Algorithm Standard 30-2 program of the Humphrey Field Analyzer. Estimates of the RGC counts were obtained from standard automated perimetry (SAP) and OCT, with a weighted average then used to determine a final estimate of the number of RGCs for each eye. Linear regression was used to calculate the rate of the RGC loss, and trend analysis was used to evaluate both serial RNFL thicknesses and VF progression. Use of the average RNFL thickness parameter of OCT led to detection of progression in 14 of 116 eyes examined, whereas the mean deviation slope detected progression in 31 eyes. When the rates of RGC loss were used, progression was detected in 41 of the 116 eyes, with a mean rate of RGC loss of -28,260 ± 8110 cells/year. Estimation of the rate of RGC loss by combining structural and functional measurements resulted in better detection of glaucoma progression compared to either OCT or SAP. PMID:27472691

  11. Bayesian Tsunami-Waveform Inversion and Tsunami-Source Uncertainty Estimation for the 2011 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dettmer, J.; Hossen, M. J.; Cummins, P. R.

    2014-12-01

    This paper develops a Bayesian inversion to infer spatio-temporal parameters of the tsunami source (sea surface) due to megathrust earthquakes. To date, tsunami-source parameter uncertainties are poorly studied. In particular, the effects of parametrization choices (e.g., discretisation, finite rupture velocity, dispersion) on uncertainties have not been quantified. This approach is based on a trans-dimensional self-parametrization of the sea surface, avoids regularization, and provides rigorous uncertainty estimation that accounts for model-selection ambiguity associated with the source discretisation. The sea surface is parametrized using self-adapting irregular grids which match the local resolving power of the data and provide parsimonious solutions for complex source characteristics. Finite and spatially variable rupture velocity fields are addressed by obtaining causal delay times from the Eikonal equation. Data are considered from ocean-bottom pressure and coastal wave gauges. Data predictions are based on Green-function libraries computed from ocean-basin scale tsunami models for cases that include/exclude dispersion effects. Green functions are computed for elementary waves of Gaussian shape and grid spacing which is below the resolution of the data. The inversion is applied to tsunami waveforms from the great Mw=9.0 2011 Tohoku-Oki (Japan) earthquake. Posterior results show a strongly elongated tsunami source along the Japan trench, as obtained in previous studies. However, we find that the tsunami data is fit with a source that is generally simpler than obtained in other studies, with a maximum amplitude less than 5 m. In addition, the data are sensitive to the spatial variability of rupture velocity and require a kinematic source model to obtain satisfactory fits which is consistent with other work employing linear multiple time-window parametrizations.

  12. Loss estimation of debris flow events in mountain areas - An integrated tool for local authorities

    NASA Astrophysics Data System (ADS)

    Papathoma-Koehle, M.; Zischg, A.; Fuchs, S.; Keiler, M.; Glade, T.

    2012-04-01

    Torrents prone to debris flows regularly cause extensive destruction of the built environment, loss of livestock and agricultural land, and loss of life in mountain areas. Climate change may increase the frequency and intensity of such events. On the other hand, extensive development of mountain areas is expected to change the spatial pattern of exposed elements at risk and their vulnerability. Consequently, the costs of debris flow events are likely to increase in the coming years. Local authorities responsible for disaster risk reduction are in need of tools that may enable them to assess the future consequences of debris flow events, in particular with respect to the vulnerability of elements at risk. An integrated tool for loss estimation is presented here which is based on a newly developed vulnerability curve and which is applied in test sites in the Province of South Tyrol, Italy. The tool has a dual function: 1) continuous updating of the database regarding damages and process intensities that will eventually improve the existing vulnerability curve and 2) loss estimation of future events and hypothetical events or built environment scenarios by using the existing curve. The tool integrates the vulnerability curve together with new user-friendly forms of damage documentation. The integrated tool presented here can be used by local authorities not only for the recording of damage caused by debris flows and the allocation of compensation to the owners of damaged buildings but also for land use planning, cost benefit analysis of structural protection measures and emergency planning.

  13. Period-dependent source rupture behavior of the 2011 Tohoku earthquake estimated by multi period-band Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.

    2014-12-01

    Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained with different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from multi-period-band waveform data using a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal rupture behavior of this event in more detail, we introduce a new fault surface model with a finer sub-fault size and estimate the source models in multiple period bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this period band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each period band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 x 16 km^2. The estimated source models in the multiple period bands show the following source image: (1) First deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short-period (10-25 s) seismic waves. (2) Shallow rupture off Miyagi at 45-90 s toward up-dip with long duration, radiating long-period (50-100 s) seismic waves. (3) Second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer-period seismic waves than the first deep rupture. (4) Deep

  14. Single-station estimates of the seismic moment of the 1960 Chilean and 1964 Alaskan earthquakes, using the mantle magnitude Mm

    NASA Astrophysics Data System (ADS)

    Okal, Emile A.; Talandier, Jacques

    1991-05-01

    Measurements are taken of the mantle magnitude Mm, developed and introduced in previous papers, in the case of the 1960 Chilean and 1964 Alaskan earthquakes, by far the largest events ever recorded instrumentally. We show that the Mm algorithm recovers the seismic moment of these gigantic earthquakes with an accuracy (typically 0.2 to 0.3 units of magnitude, or a factor of 1.5 to 2 on the seismic moment) comparable to that achieved on modern, digital, datasets. In particular, this study proves that the mantle magnitude Mm does not saturate for large events, as do standard magnitude scales, but rather keeps growing with seismic moment, even for the very largest earthquakes. We further prove that the algorithm can be applied in unfavorable experimental conditions, such as instruments with poor response at mantle periods, seismograms clipped due to limited recording dynamics, or even on microbarograph records of air-coupled Rayleigh waves. In addition, we show that it is feasible to use acoustic-gravity air waves generated by those very largest earthquakes, to obtain an estimate of the seismic moment of the event along the general philosophy of the magnitude concept: a single-station measurement ignoring the details of the earthquake's focal mechanism and exact depth.

  15. Source parameters of the 2014 Mw 6.1 South Napa earthquake estimated from the Sentinel 1A, COSMO-SkyMed and GPS data

    NASA Astrophysics Data System (ADS)

    Guangcai, Feng; Zhiwei, Li; Xinjian, Shan; Bing, Xu; Yanan, Du

    2015-08-01

    Using the combination of two InSAR data sets and one GPS data set, we present a detailed source model of the 2014 Mw 6.1 South Napa earthquake, the biggest tremor to hit the San Francisco Bay Area since the 1989 Mw 6.9 Loma Prieta earthquake. The InSAR data are from the Sentinel-1A (S1A) and COSMO-SkyMed (CS) satellites, and the GPS data are provided by the Nevada Geodetic Laboratory. We first obtain the complete coseismic deformation fields of this event and estimate the InSAR data errors, and then use the S1A data to construct the fault geometry: one main fault and two short parallel sub-faults that had not been identified by field investigation. As expected, the geometry is in good agreement with the aftershock distribution. By inverting the InSAR and GPS data, we derive a three-segment slip and rake model. Our model indicates that this event was a right-lateral strike-slip earthquake with a slight reverse component on the West Napa Fault, as we estimated. The fault is ~30 km long and more than 80% of the seismic moment was released at the center of the fault segment, where the slip reached its maximum (up to 1 m). We also find that our geodetic moment is 2.07 × 10^18 Nm, corresponding to Mw 6.18, larger than the USGS (Mw 6.0) and GCMT (Mw 6.1) estimates. This difference may partly be explained by our InSAR data including about one week of postseismic deformation and aftershocks. The results also demonstrate the high SNR and capability of the newly launched Sentinel-1A for earthquake studies. Furthermore, this study suggests that this earthquake has the potential to trigger nearby faults, especially the Green Valley fault, where Coulomb stress was imparted by the 2014 South Napa earthquake.
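
    The moment-to-magnitude conversion quoted above can be checked with the standard Hanks-Kanamori relation, assuming that is the convention used: Mw = (2/3)*log10(M0) - 6.03 with M0 in N*m.

```python
# Arithmetic check of the geodetic moment magnitude reported above.
import math

M0 = 2.07e18   # geodetic moment from the joint InSAR + GPS inversion, N*m
Mw = (2.0 / 3.0) * math.log10(M0) - 6.03
print(f"Mw = {Mw:.2f}")   # ~6.18, consistent with the value reported above
```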

  16. Statistical estimation of transmission loss from geoacoustic inversion using a towed array.

    PubMed

    Goh, Yong Han; Gerstoft, Peter; Hodgkiss, William S; Huang, Chen-Fen

    2007-11-01

    Geoacoustic inversion estimates environmental parameters from measured acoustic fields (e.g., received on a towed array). The inversion results have some uncertainty due to noise in the data and modeling errors. Based on the posterior probability density of environmental parameters obtained from inversion, a statistical estimation of transmission loss (TL) can be performed and a credibility level envelope or uncertainty band for the TL generated. This uncertainty band accounts for the inherent variability of the environment not usually contained in sonar performance prediction model inputs. The approach follows [Gerstoft et al. IEEE J. Ocean. Eng. 31, 299-307 (2006)] and is demonstrated with data obtained from the MAPEX2000 experiment conducted by the NATO Undersea Research Center using a towed array and a moored source in the Mediterranean Sea in November 2000. Based on the geoacoustic inversion results, the TL and its variability are estimated and compared with the measured TL.

  17. Uncertainty of canal seepage losses estimated using flowing water balance with acoustic Doppler devices

    NASA Astrophysics Data System (ADS)

    Martin, Chad A.; Gates, Timothy K.

    2014-09-01

    Seepage losses from unlined irrigation canals amount to a large fraction of the total volume of water diverted for agricultural use, posing problems to both water conservation and water quality. Quantifying these losses and identifying areas where they are most prominent are crucial for determining the severity of seepage-related complications and for assessing the potential benefits of seepage reduction technologies and materials. A relatively easy and inexpensive way to estimate losses over an extensive segment of a canal is the flowing water balance, or inflow-outflow, method. Such estimates, however, have long been considered fraught with ambiguity due both to measurement error and to spatial and temporal variability. This paper presents a water balance analysis that evaluates uncertainty in 60 tests on two typical earthen irrigation canals. Monte Carlo simulation is used to account for a number of different sources of uncertainty. Issues of errors in acoustic Doppler flow measurement, in water level readings, and in evaporation estimates are considered. Storage change and canal wetted perimeter area, affected by variability in the canal prism, as well as lagged vs. simultaneous measurements of discharge at the inflow and outflow ends also are addressed. Mean estimated seepage loss rates for the tested canal reaches ranged from about -0.005 (gain) to 0.110 m^3 s^-1 per hectare of canal wetted perimeter (or -0.043 to 0.95 m d^-1) with estimated probability distributions revealing substantial uncertainty. Across the tests, the average coefficient of variation was about 240% and the average 90th inter-percentile range was 0.143 m^3 s^-1 per hectare (1.24 m d^-1). Sensitivity analysis indicates that while the predominant influence on seepage uncertainty is error in measured discharge at the upstream and downstream ends of the canal test reach, the magnitude and uncertainty of storage change due to unsteady flow also is a significant influence. Recommendations are
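
    A minimal Monte Carlo version of the flowing water balance described above is sketched here; the error magnitudes and reach properties are assumptions for illustration, not the study's values:

```python
# Seepage per unit wetted area = (Qin - Qout - storage change - evaporation) / area,
# with measurement errors sampled from assumed distributions.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

q_in = rng.normal(2.00, 0.05, n)              # upstream discharge, m^3/s (ADCP error as std dev)
q_out = rng.normal(1.85, 0.05, n)             # downstream discharge, m^3/s
storage_change = rng.normal(0.01, 0.02, n)    # m^3/s equivalent over the reach
evaporation = rng.normal(0.005, 0.001, n)     # m^3/s equivalent
wetted_area_ha = rng.normal(1.2, 0.05, n)     # hectares of wetted perimeter

seepage = (q_in - q_out - storage_change - evaporation) / wetted_area_ha  # m^3/s per ha
lo, med, hi = np.percentile(seepage, [5, 50, 95])
print(f"seepage median {med:.3f} m^3/s/ha, 90% interval [{lo:.3f}, {hi:.3f}]")
```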

  18. The impact of uncertain precipitation data on insurance loss estimates using a flood catastrophe model

    NASA Astrophysics Data System (ADS)

    Sampson, C. C.; Fewtrell, T. J.; O'Loughlin, F.; Pappenberger, F.; Bates, P. B.; Freer, J. E.; Cloke, H. L.

    2014-06-01

    Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components, a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (The Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and

  19. The impact of uncertain precipitation data on insurance loss estimates using a Flood Catastrophe Model

    NASA Astrophysics Data System (ADS)

    Sampson, C. C.; Fewtrell, T. J.; O'Loughlin, F.; Pappenberger, F.; Bates, P. B.; Freer, J. E.; Cloke, H. L.

    2014-01-01

    Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and re-insurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than commercial products. The model consists of four components, a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge corrected rainfall radar, meteorological re-analysis data (ERA-Interim) and a satellite rainfall product (CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find these loss estimates to be highly sensitive to uncertainties propagated from the driving observational datasets, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.

  20. Mediation analysis to estimate direct and indirect milk losses due to clinical mastitis in dairy cattle.

    PubMed

    Detilleux, J; Kastelic, J P; Barkema, H W

    2015-03-01

    Milk losses associated with mastitis can be attributed to either effects of pathogens per se (i.e., direct losses) or effects of the immune response triggered by intramammary infection (indirect losses). The distinction is important in terms of mastitis prevention and treatment. Regardless, the number of pathogens is often unknown (particularly in field studies), making it difficult to estimate direct losses, whereas indirect losses can be approximated by measuring the association between increased somatic cell count (SCC) and milk production. An alternative is to perform a mediation analysis in which changes in milk yield are allocated into their direct and indirect components. We applied this method on data for clinical mastitis, milk and SCC test-day recordings, results of bacteriological cultures (Escherichia coli, Staphylococcus aureus, Streptococcus uberis, coagulase-negative staphylococci, Streptococcus dysgalactiae, and streptococci other than Strep. dysgalactiae and Strep. uberis), and cow characteristics. Following a diagnosis of clinical mastitis, the cow was treated and changes (increase or decrease) in milk production before and after a diagnosis were interpreted counterfactually. On a daily basis, indirect changes, mediated by SCC increase, were significantly different from zero for all bacterial species, with a milk yield decrease (ranging among species from 4 to 33g and mediated by an increase of 1000 SCC/mL/day) before and a daily milk increase (ranging among species from 2 to 12g and mediated by a decrease of 1000 SCC/mL/day) after detection. Direct changes, not mediated by SCC, were only different from zero for coagulase-negative staphylococci before diagnosis (72g per day). We concluded that mixed structural equation models were useful to estimate direct and indirect effects of the presence of clinical mastitis on milk yield.
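
    A stylised product-of-coefficients mediation sketch (a simplification of the mixed structural equation models actually used; variable names and data are simulated) shows how direct and indirect effects can be separated:

```python
# Indirect effect of clinical mastitis on milk yield = (effect of mastitis on
# log SCC) * (effect of log SCC on yield); direct effect = mastitis coefficient
# in the outcome model that also includes the mediator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
mastitis = rng.binomial(1, 0.1, n)                               # clinical case on the test day
log_scc = 4.5 + 1.2 * mastitis + rng.normal(0, 0.5, n)           # mediator model
milk_kg = 28 - 0.5 * mastitis - 2.0 * (log_scc - 4.5) + rng.normal(0, 2, n)

a = sm.OLS(log_scc, sm.add_constant(mastitis)).fit().params[1]                    # mastitis -> SCC
out = sm.OLS(milk_kg, sm.add_constant(np.column_stack([mastitis, log_scc]))).fit()
direct, b = out.params[1], out.params[2]                                          # mastitis -> milk | SCC

print(f"direct effect {direct:.2f} kg/day, indirect (via SCC) {a * b:.2f} kg/day")
```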

  1. Event specific simultaneous estimates of loss, diffusion, and acceleration for MeV electrons

    NASA Astrophysics Data System (ADS)

    Schiller, Q.; Li, X.; Tu, W.; Ali, A.; Godinez, H. C.

    2015-12-01

    The most significant unknown in outer radiation belt electron dynamics is the relative contribution of loss, transport, and acceleration processes inside the inner magnetosphere. Disentangling each individual process is critical to improve the understanding of radiation belt dynamics, but determining any single component is difficult due to sparse measurements of a large observation space. However, in the current era, an unprecedented number of spacecraft are taking measurements, and they are sampling different regions of the inner magnetosphere. With today's observations, system dynamics can begin to be unraveled. In this work, we focus on in-situ measurements during a single outer belt enhancement event, which occurred on January 13-14, 2013. We use Van Allen Probe measurements of ULF wave activity to determine radial transport rates. We use Colorado Student Space Weather Experiment observations to model electron lifetimes from atmospheric precipitation caused by pitch-angle diffusion. To estimate the source rate, we use a data assimilative model. The Kalman filter method we use estimates the full radial phase space density profile, as well as the amplitude, location, and radial extent of a Gaussian-shaped source region. The estimates are made by minimizing the residuals between a simple 1D radial diffusion model and Van Allen Probe phase space density observations for mu=750 MeV/G and K=0.11 G^(1/2)R_E. The model also quantifies electrons lost to the outer boundary, providing direct comparison between losses to the inner and outer boundaries. This work produces simultaneous, quantitative estimates of loss, transport, and acceleration mechanisms and the relative contribution from each.
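
    A very small forward-model sketch of the 1D radial diffusion equation used in such Kalman-filter frameworks is given below; the diffusion coefficient, loss lifetime, grid, and boundary treatment are placeholder assumptions, not the event-specific estimates discussed above:

```python
# Explicit finite-difference step of df/dt = L^2 d/dL (D_LL / L^2 df/dL) - f/tau.
import numpy as np

L = np.linspace(3.0, 6.5, 36)
dL = L[1] - L[0]
f = np.exp(-(L - 4.5) ** 2 / 0.5)             # initial phase space density (arbitrary units)
D_LL = 1e-3 * (L / 4.0) ** 6                   # diffusion coefficient, 1/day (placeholder scaling)
tau = 5.0                                      # loss lifetime, days (placeholder)
dt = 0.001                                     # days (kept small for explicit stability)

for _ in range(int(1.0 / dt)):                 # integrate one day
    flux = D_LL / L**2 * np.gradient(f, dL)    # (D_LL / L^2) * df/dL
    dfdt = L**2 * np.gradient(flux, dL) - f / tau
    f = f + dt * dfdt
    f[0], f[-1] = 0.0, f[-2]                   # simple fixed inner / open outer boundaries

print(f"peak PSD after one day: {f.max():.3f}")
```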

  2. Routine estimate of focal depths for moderate and small earthquakes by modelling regional depth phase sPmP in eastern Canada

    NASA Astrophysics Data System (ADS)

    Ma, S.; Peci, V.; Adams, J.; McCormack, D.

    2003-04-01

    Earthquake focal depths are critical parameters for basic seismological research, seismotectonic study, seismic hazard assessment, and event discrimination. Focal depths for most earthquakes with Mw >= 4.5 can be estimated from teleseismic arrival times of P, pP and sP. For smaller earthquakes, focal depths can be estimated from Pg and Sg arrival times recorded at close stations. However, for most earthquakes in eastern Canada, teleseismic signals are too weak and seismograph spacing too sparse for depth estimation. The regional phase sPmP is very sensitive to focal depth, is generally well developed at epicentral distances greater than 100 km, and is clearly recorded at many stations in eastern Canada for earthquakes with mN >= 2.8. We developed a procedure to estimate focal depth routinely with sPmP. We select vertical waveforms recorded at distances from about 100 to 300 km (using Geotool and SAC2000), generate synthetic waveforms (using the reflectivity method) for a typical focal mechanism and for a suitable range of depths, and choose the depth at which the synthetic best matches the selected waveform. The software is easy to operate. For routine work an experienced operator can obtain a focal depth by waveform modelling within 10 minutes after the waveform is selected, or in a couple of minutes obtain a rough focal depth from sPmP and Pg or PmP arrival times without waveform modelling. We have confirmed our sPmP modelling results by two comparisons: (1) to depths
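
    A hedged sketch of the depth-selection step is shown below: given an observed vertical-component waveform and a suite of synthetics computed for trial depths, pick the depth whose synthetic best matches the data. The synthetics here are fabricated stand-ins (the abstract's reflectivity-method synthetics are assumed to be generated elsewhere), and the zero-lag correlation measure is an illustrative choice, not necessarily the authors' misfit criterion.

    ```python
    import numpy as np

    def best_depth(observed, synthetics_by_depth):
        """Return the trial depth whose synthetic maximizes zero-lag normalized correlation."""
        def ncc(a, b):
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float(np.dot(a, b)) / len(a)
        scores = {z: ncc(observed, syn) for z, syn in synthetics_by_depth.items()}
        return max(scores, key=scores.get), scores

    # Illustration with fabricated traces: the sPmP-P delay grows with depth, so the
    # arrival-time pattern (here a shifted wavelet) discriminates between trial depths.
    t = np.linspace(0, 20, 2000)
    obs = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * (t - 10.0) ** 2)
    synths = {z: np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * (t - (6.0 + 0.4 * z)) ** 2)
              for z in range(2, 16, 2)}   # trial depths 2-14 km
    zbest, _ = best_depth(obs, synths)
    print("best-fitting depth:", zbest, "km")
    ```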

  3. Spatio-temporal evolution of the postseismic slip associated with the 2005 Miyagi-Oki earthquake (M7.2) estimated from geodetic and seismological data

    NASA Astrophysics Data System (ADS)

    Iinuma, T.; Miura, S.; Uchida, N.; Sato, M.; Saito, H.; Ishikawa, T.; Hino, R.; Matsuzawa, T.

    2010-12-01

    On August 16, 2005, a M7.2 earthquake occurred along the plate boundary off Miyagi Prefecture, northeastern Japan, where the Pacific plate is subducting beneath the overriding continental plate at a rate of about 80 mm/yr. There are at least three asperities that were ruptured during the 1978 Miyagi-Oki earthquake (M7.4) there, and one or two of them were reruptured during the 2005 earthquake. We estimated the spatio-temporal evolution of the postseismic slip associated with the 2005 Miyagi-Oki earthquake using continuous land GPS and campaign ocean bottom GPS/acoustic observation data in order to investigate whether or not the strain accumulation process at the unruptured asperities was affected by the 2005 event. Daily site coordinates were estimated using a PPP (Precise Point Positioning) strategy of GIPSY-OASISII Software based on the GPS data observed at continuous sites operated by GSI (Geospatial Information Authority of Japan) and Tohoku University. Data from January 2004 to December 2007 have been analyzed. The linear trends with annual and semi-annual variations for the period from January 1, 2004 to August 15, 2005, and co-seismic displacements due to the main shock were estimated by least-squares fitting and subtracted from the original onshore GPS time series. As for the ocean bottom displacement data, we calculated secular velocities at the offshore site locations from an interplate coupling model based on the linear trends of land GPS sites and subtracted them from the observed time series. We regarded the detrended time series as representing the deformation due to the afterslip of the 2005 earthquake and inverted them to obtain the spatiotemporal slip distribution on the plate boundary using a time-dependent inversion analysis. We applied an inversion method devised by Yagi and Kikuchi (2003) to estimate the evolution of the fault slip in both space and time. We also estimated spatiotemporal evolution of the aseismic slip based on the activities
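
    The detrending step described above (fitting an offset, a linear trend, and annual plus semi-annual sinusoids and subtracting them) can be sketched as a small least-squares problem; the daily series below is synthetic and the coefficients are placeholders, not the study's GPS data.

    ```python
    import numpy as np

    t = np.arange(0, 4 * 365) / 365.25                 # time in years
    rng = np.random.default_rng(1)
    series = (5.0 + 12.0 * t + 3.0 * np.sin(2 * np.pi * t) +
              1.0 * np.cos(4 * np.pi * t) + rng.normal(0, 0.8, t.size))  # mm

    # Design matrix: offset, trend, annual and semi-annual terms
    G = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                         np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
    coef, *_ = np.linalg.lstsq(G, series, rcond=None)
    detrended = series - G @ coef                      # residual (e.g., postseismic) signal
    print("trend estimate: %.2f mm/yr" % coef[1])
    ```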

  4. Estimating the spontaneous mutation rate of loss of sex in the human pathogenic fungus Cryptococcus neoformans.

    PubMed Central

    Xu, Jianping

    2002-01-01

    Few events have evolutionary consequences as pervasive as changes in reproductive behavior. Among those changes, the loss of the ability to undergo sexual reproduction is probably the most profound. However, little is known about the rate of loss of sex. Here I describe an experimental system using the fungus Cryptococcus neoformans and provide the first empirical estimate of the spontaneous mutation rate of loss of sex in fungi. Two critical steps in sexual reproduction in C. neoformans were examined: mating and filamentation. Mating, the fusion of cells of opposite sexes, is a universal first step in eukaryotic sexual reproduction. In contrast, filamentation, a prerequisite process preceding meiosis and sexual spore development, is restricted to C. neoformans and a few other fungal species. After approximately 600 mitotic divisions under favorable asexual growth conditions, mean abilities for mating and filamentation decreased significantly, by >67 and 24%, respectively. Similarly, though not statistically significant, the mean vegetative growth rates also decreased, and among the mutation accumulation lines the vegetative growth rates were negatively correlated with mating ability. The estimated mutation rates to decreases in mating ability and filamentation were in excess of 0.0172 and 0.0036, respectively. The results show that C. neoformans can be a highly attractive model for analyses of reproductive system evolution in fungi. PMID:12454063

  5. Kinematic source parameter estimation for the 1995 Mw 7.2 Gulf of Aqaba Earthquake by using InSAR and teleseismic data in a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Bathke, Hannes; Feng, Guangcai; Heimann, Sebastian; Nikkhoo, Mehdi; Zielke, Olaf; Jónsson, Sigurjon; Mai, Martin

    2016-04-01

    The 1995 Mw 7.2 Gulf of Aqaba earthquake was primarily a left-lateral strike-slip earthquake, occurring on the Dead Sea transform fault at the western border of the Arabian plate. The tectonic setting within the trans-tensional Gulf of Aqaba is complex, consisting of several en echelon transform faults and pull-apart basins. Several studies have been published, focusing on this earthquake using either InSAR or teleseismic (P and SH waves) data. However, the published finite-fault rupture models of the earthquake differ significantly. For example, it still remains unclear whether the Aqaba fault, the Aragonese fault or the Arnona fault ruptured in the event. It is also possible that several segments were activated. The main problem with past studies is that either InSAR or teleseismic data were used, but not both. Teleseismic data alone are unable to locate the event well, while the InSAR data are limited in the near field due to the earthquake's offshore location. In addition, the source fault is roughly north-south oriented and InSAR has limited sensitivity to north-south displacements. Here we improve on previous studies by using InSAR and teleseismic data jointly to constrain the source model. In addition, we use InSAR data from two additional tracks that have not been used before, which provides a more complete displacement field of the earthquake. Furthermore, in addition to the fault model parameters themselves, we also estimate the parameter uncertainties, which were not reported in previous studies. Based on these uncertainties we estimate a model-prediction covariance matrix in addition to the data covariance matrix that we then use in Bayesian inference sampling to solve for the static slip-distribution on the fault. By doing so, we avoid using a Laplacian smoothing operator, which is often subjective and may pose an unphysical constraint to the problem. Our results show that fault slip on only the Aragonese fault can satisfactorily explain the InSAR data

  6. Modal analysis of thin cylindrical shells with cardboard liners and estimation of loss factors

    NASA Astrophysics Data System (ADS)

    Koruk, Hasan; Dreyer, Jason T.; Singh, Rajendra

    2014-04-01

    Cardboard liners are often installed within automotive drive shafts to reduce radiated noise over a certain frequency range. However, the precise mechanisms that yield this noise attenuation are not well understood. To address this gap, a thin shell (under free boundaries) with different cardboard liner thicknesses is examined using analytical, computational and experimental methods. First, an experimental procedure is introduced to determine the modal behavior of a cylindrical shell with a cardboard liner. Then, acoustic and vibration frequency response functions are measured in an acoustic free field, and the natural frequencies and loss factors of the structures are determined. The adverse effects caused by closely spaced modes during the identification of modal loss factors are minimized, and variations in the measured natural frequencies and loss factors are explored. The material properties of a cardboard liner are also determined using an elastic plate treated with a thin liner. Finally, the natural frequencies and modal loss factors of a cylindrical shell with cardboard liners are estimated using analytical and computational methods, and the sources of the damping mechanisms are identified. The proposed procedure can be effectively used to model a damped cylindrical shell (with a cardboard liner) to predict its vibro-acoustic response.
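
    One common way to extract a modal loss factor from a measured frequency response function (FRF) is the half-power (-3 dB) bandwidth method, eta ≈ (f2 - f1)/fn; the sketch below applies it to a synthetic single-mode FRF. This is an illustrative assumption about the identification step, not necessarily the paper's procedure (which also handles closely spaced modes).

    ```python
    import numpy as np

    def loss_factor_half_power(freq, amp):
        """Estimate eta from a single, well-separated FRF peak via the half-power bandwidth."""
        i_pk = int(np.argmax(amp))
        half = amp[i_pk] / np.sqrt(2.0)
        # nearest crossings of the half-power level on each side of the peak
        left = np.where(amp[:i_pk] < half)[0][-1]
        right = i_pk + np.where(amp[i_pk:] < half)[0][0]
        f1 = np.interp(half, amp[left:left + 2], freq[left:left + 2])
        f2 = np.interp(half, amp[right - 1:right + 1][::-1], freq[right - 1:right + 1][::-1])
        return (f2 - f1) / freq[i_pk]

    # Synthetic single-degree-of-freedom FRF with a known loss factor of 0.05
    fn, eta = 400.0, 0.05
    f = np.linspace(300, 500, 4000)
    amp = 1.0 / np.sqrt((1 - (f / fn) ** 2) ** 2 + (eta * f / fn) ** 2)
    print("estimated loss factor:", round(loss_factor_half_power(f, amp), 3))
    ```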

  7. Estimating Earthquake Hazards in the San Pedro Shelf Region, Southern California

    NASA Astrophysics Data System (ADS)

    Baher, S.; Fuis, G.; Normark, W. R.; Sliter, R.

    2003-12-01

    The San Pedro Shelf (SPS) region of the inner California Borderland offshore southern California poses a significant seismic hazard to the contiguous Los Angeles area, as a consequence of late Cenozoic compressional reactivation of mid-Cenozoic extensional faults. The extent of the hazard, however, is poorly understood because of the complexity of fault geometries and uncertainties in earthquake locations. The major faults in the region include the Palos Verdes, THUMS Huntington Beach and the Newport-Inglewood fault zones. We report here the analysis and interpretation of wide-angle seismic-reflection and refraction data recorded as part of the Los Angeles Region Seismic Experiment line 1 (LARSE 1), multichannel seismic (MCS) reflection data obtained by the USGS (1998-2000), and industry borehole stratigraphy. The onshore-offshore velocity model, which is based on forward modeling of the refracted P-wave arrival times, is used to depth migrate the LARSE 1 section. Borehole stratigraphy allows correlation of the onshore and offshore velocity models because state regulations prevent collection of deep-penetration acoustic data nearshore (within 3 mi.). Our refraction study extends the tomographic inversion of LARSE 1 data by ten Brink et al. (2000). They found high velocities (> 6 km/sec) at about 3.5 km depth from the Catalina Fault (CF) to the SPS. We find these velocities shallower (around 2 km depth) beneath the Catalina Ridge (CR) and SPS, but at depths of 2.5-3.0 km elsewhere in the study region. This change in velocity structure can provide additional constraints on the tectonic processes of this region. The structural horizons observed in the LARSE 1 reflection data are tied to adjacent MCS lines. We find localized folding and faulting at depth (~2 km) southwest of the CR and on the SPS slope. Quasi-laminar beds, possibly of pelagic origin, follow the contours of earlier folded (wavelength ~1 km) and faulted Cenozoic sedimentary and volcanic rocks. Depth to

  8. An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1994-01-01

    Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between the actual and isentropic processes) and specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods that applies classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses, based on the steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). A preliminary verification of REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with results from these more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons suggest that REMEL may offer an alternative to traditional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.

  9. Completeness of the fossil record: Estimating losses due to small body size

    NASA Astrophysics Data System (ADS)

    Cooper, Roger A.; Maxwell, Phillip A.; Crampton, James S.; Beu, Alan G.; Jones, Craig M.; Marshall, Bruce A.

    2006-04-01

    Size bias in the fossil record limits its use for interpreting patterns of past biodiversity and ecological change. Using comparative size frequency distributions of exceptionally good regional records of New Zealand Holocene and Cenozoic Mollusca in museum archive collections, we derive first-order estimates of the magnitude of the bias against small body size and the effect of this bias on completeness of the fossil record. Our database of 3907 fossil species represents an original living pool of 9086 species, from which ˜36% have been removed by size culling, 27% from the smallest size class (<5 mm). In contrast, non-size-related losses compose only 21% of the total. In soft rocks, the loss of small taxa can be reduced by nearly 50% through the employment of exhaustive collection and preparation techniques.

  10. Prediction of earthquake-triggered landslide event sizes

    NASA Astrophysics Data System (ADS)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

    Seismically induced landslides are a major environmental effect of earthquakes and may contribute significantly to related losses. Moreover, in paleoseismology, landslide event sizes are an important proxy for estimating the intensity and magnitude of past earthquakes, thus allowing seismic hazard assessment to be improved over longer time scales. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake has occurred. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970), the combination and relative weight of the factors was calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked. One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important

  11. Regional Estimates of Drought-Induced Tree Canopy Loss across Texas

    NASA Astrophysics Data System (ADS)

    Schwantes, A.; Swenson, J. J.; González-Roglich, M.; Johnson, D. M.; Domec, J. C.; Jackson, R. B.

    2015-12-01

    The severe drought of 2011 killed millions of trees across the state of Texas. Drought-induced tree mortality can have significant impacts on carbon cycling, regional biophysics, and community composition. We quantified canopy cover loss across the state using remotely sensed imagery from before and after the drought at multiple scales. First, we classified ~200 orthophotos (1-m spatial resolution) from the National Agriculture Imagery Program using a supervised maximum likelihood classification. The area of canopy cover loss in these classifications was highly correlated (R2 = 0.8) with ground estimates of canopy cover loss measured in 74 plots across 15 different sites in Texas. These 1-m orthophoto classifications were then used to calibrate and validate coarser scale (30-m) Landsat imagery to create wall-to-wall tree canopy cover loss maps across the state of Texas. We quantified the percent dead and live canopy within each Landsat pixel to create continuous maps of dead and live tree cover, using two approaches: (1) a zero-inflated beta distribution model and (2) a random forest algorithm. Widespread canopy loss occurred across all the major natural systems of Texas, with the Edwards Plateau region most affected. In this region, on average, 10% of the forested area was lost due to the 2011 drought. We also identified climatic thresholds that controlled the spatial distribution of tree canopy loss across the state. Surprisingly, however, there were many local hot spots of canopy loss, suggesting that climatic factors alone cannot explain the spatial patterns of canopy loss; other factors related to soil, landscape, management, and stand density likely also played a role. As extreme droughts are predicted to become more frequent with climate change, it will become important to define methods that can detect the associated drought-induced tree mortality across large regions. These maps could then be used (1) to quantify impacts to carbon cycling and regional
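
    A hedged sketch of approach (2) above: train a random forest to predict the dead-canopy fraction of a coarse pixel from its spectral bands, with labels derived from fine-scale classified orthophotos. The band values, the synthetic "truth", and the model settings below are placeholders, not the study's actual predictors.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 2000
    bands = rng.uniform(0, 1, size=(n, 6))                 # fake Landsat reflectances
    frac_dead = np.clip(0.6 * bands[:, 3] - 0.3 * bands[:, 1]
                        + rng.normal(0, 0.05, n), 0, 1)    # fake labels from orthophoto classifications

    X_tr, X_te, y_tr, y_te = train_test_split(bands, frac_dead, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X_tr, y_tr)
    print("R^2 on held-out pixels:", round(rf.score(X_te, y_te), 2))
    ```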

  12. Estimated ground motion from the 1994 Northridge, California, earthquake at the site of interstate 10 and La Cienega Boulevard bridge collapse, West Los Angeles, California

    USGS Publications Warehouse

    Boore, D.M.; Gibbs, J.F.; Joyner, W.B.; Tinsley, J.C.; Ponti, D.J.

    2003-01-01

    We have estimated ground motions at the site of a bridge collapse during the 1994 Northridge, California, earthquake. The estimated motions are based on correcting motions recorded during the mainshock 2.3 km from the collapse site for the relative site response of the two sites. Shear-wave slownesses and damping based on analysis of borehole measurements at the two sites were used in the site response analysis. We estimate that the motions at the collapse site were probably larger, by factors ranging from 1.2 to 1.6, than at the site at which the ground motion was recorded, for periods less than about 1 sec.

  13. On Assessment and Estimation of Potential Losses due to Land Subsidence in Urban Areas of Indonesia

    NASA Astrophysics Data System (ADS)

    Abidin, Hasanuddin Z.; Andreas, Heri; Gumilar, Irwan; Sidiq, Teguh P.

    2016-04-01

    subsidence are also related to one another, so the accurate quantification of the potential losses caused by land subsidence in urban areas is not an easy task to accomplish. The direct losses are easier to estimate than the indirect losses. For example, the direct losses due to land subsidence in Bandung were estimated to be at least 180 million USD, but the indirect losses are still unknown.

  14. Estimation of Broadband Ground Motion at Ocean-bottom Strong-motion Stations for the 2003 Tokachi-oki Earthquake

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Takenaka, H.; Hirata, K.; Watanabe, T.

    2004-12-01

    The 2003 Tokachi-oki earthquake (MJMA 8.0) occurred on September 25, 2003 (UT). In this study, we reproduce the broadband ground motion from the earthquake using near-field strong-motion records (accelerograms) at three ocean-bottom stations (KOB1, KOB2 and KOB3) on the sea floor off Kushiro, Hokkaido, installed by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). The distances and directions from the epicenter to KOB1, KOB2 and KOB3 are 28 km east-southeast, 83 km east, and 80 km east-northeast, respectively. The three-component (x, y, z) strong-motion observation system, enclosed within a cylindrical pressure housing, can record ground motion over a broad frequency range extending down to DC. The x component is parallel to the axis of the cylinder, which is almost horizontal. Since it is suspected that the strong-motion observation systems themselves moved during the main shock, a simple time-integration of the original acceleration yields erroneous velocity and displacement ground motion. We therefore apply the following processing to the data: we assume that the motion of each strong-motion seismometer can be represented by (1) rotation around the cylinder axis (i.e., roll), (2) tilting of the cylinder (i.e., pitch), and (3) parallel motion. To estimate rotation and tilting, we first apply a median filter to the original records. After compensating for these movements, the rotated records are integrated to obtain velocity records. Next, we follow the baseline correction method of Boore (2001) and obtain the ground motion using the amount of submarine upheaval estimated from the two seabed tsunami sensors near KOB1 and KOB3 by Hirata and Baba (2004). By this approach we have successfully obtained broadband velocity and displacement ground motion, including DC components. The maximum horizontal (vector resultant) and vertical velocities at KOB1 and KOB3 are estimated to be approximately 160 cm/s and 40 cm/s, and 130 cm/s and 20 cm/s, respectively, while the corresponding maximum
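
    A minimal sketch of the final processing stages described above is given below: remove a simple pre-event baseline from an accelerogram and integrate twice to velocity and displacement. The tilt/rotation compensation and the Boore (2001)-style correction tied to tsunami-sensor uplift are specific to the study and are not reproduced; the synthetic record and baseline offset are assumptions.

    ```python
    import numpy as np

    dt = 0.01                                   # sample interval, s (assumed)
    t = np.arange(0, 60, dt)
    rng = np.random.default_rng(3)
    acc = (np.exp(-((t - 20) / 5) ** 2) * np.sin(2 * np.pi * 1.0 * t) * 100
           + 0.5 + rng.normal(0, 1.0, t.size))  # cm/s^2, with a constant baseline error

    acc_corr = acc - np.mean(acc[t < 10])       # pre-event window defines the baseline
    vel = np.cumsum(acc_corr) * dt              # cm/s
    disp = np.cumsum(vel) * dt                  # cm
    print("peak velocity: %.1f cm/s" % np.max(np.abs(vel)))
    ```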

  15. Use of plume mapping data to estimate chlorinated solvent mass loss

    USGS Publications Warehouse

    Barbaro, J.R.; Neupane, P.P.

    2006-01-01

    Results from a plume mapping study from November 2000 through February 2001 in the sand-and-gravel surficial aquifer at Dover Air Force Base, Delaware, were used to assess the occurrence and extent of chlorinated solvent mass loss by calculating mass fluxes across two transverse cross sections and by observing changes in concentration ratios and mole fractions along a longitudinal cross section through the core of the plume. The plume mapping investigation was conducted to determine the spatial distribution of chlorinated solvents migrating from former waste disposal sites. Vertical contaminant concentration profiles were obtained with a direct-push drill rig and multilevel piezometers. These samples were supplemented with additional ground water samples collected with a minipiezometer from the bed of a perennial stream downgradient of the source areas. Results from the field program show that the plume, consisting mainly of tetrachloroethylene (PCE), trichloroethene (TCE), and cis-1,2-dichloroethene (cis-1,2-DCE), was approximately 670 m in length and 120 m in width, extended across much of the 9- to 18-m thickness of the surficial aquifer, and discharged to the stream in some areas. The analyses of the plume mapping data show that losses of the parent compounds, PCE and TCE, were negligible downgradient of the source. In contrast, losses of cis-1,2-DCE, a daughter compound, were observed in this plume. These losses very likely resulted from biodegradation, but the specific reaction mechanism could not be identified. This study demonstrates that plume mapping data can be used to estimate the occurrence and extent of chlorinated solvent mass loss from biodegradation and assess the effectiveness of natural attenuation as a remedial measure.
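
    The mass-flux comparison across transverse cross sections mentioned above amounts to summing concentration times Darcy flux times cell area over the sampled cells of a transect; the sketch below shows the bookkeeping with placeholder numbers, not the Dover AFB data.

    ```python
    import numpy as np

    # One transect discretized into cells (e.g., multilevel piezometer intervals)
    conc_ug_L = np.array([120.0, 450.0, 300.0, 80.0, 10.0])   # PCE concentration, ug/L
    darcy_q_m_d = np.array([0.05, 0.06, 0.06, 0.05, 0.04])    # specific discharge, m/day
    cell_area_m2 = np.array([40.0, 40.0, 40.0, 40.0, 40.0])   # width x thickness per cell

    # 1 ug/L = 1 mg/m^3, so ug/L * m/day * m^2 -> mg/day
    flux_mg_day = np.sum(conc_ug_L * darcy_q_m_d * cell_area_m2)
    print("PCE mass flux across transect: %.1f g/day" % (flux_mg_day / 1000.0))
    ```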

  16. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of a standard observer for discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for performance evaluation of estimation techniques operating over wireless networks under realistic radio channel conditions.
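
    As an illustrative sketch (not the paper's observer design), the loop below runs a discrete-time estimator for a toy nonlinear system where each measurement is lost with probability p (Bernoulli dropouts); on a loss, only the prediction step runs. The toy dynamics, noise levels, and gain are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p_loss, L_gain = 0.3, 0.5
    x_true, x_hat = 1.0, 0.0

    def f(x):                  # toy nonlinear dynamics
        return 0.9 * np.sin(x) + 0.1 * x

    for k in range(100):
        x_true = f(x_true) + rng.normal(0, 0.01)
        y = x_true + rng.normal(0, 0.05)
        received = rng.random() > p_loss       # packet arrives with probability 1 - p_loss
        x_pred = f(x_hat)
        # correction only when the measurement is received
        x_hat = x_pred + (L_gain * (y - x_pred) if received else 0.0)

    print("final estimation error:", abs(x_true - x_hat))
    ```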

  17. The Influence of Atmospheric Modeling Errors on GRACE Estimates of Mass Loss in Greenland and Antarctica

    NASA Astrophysics Data System (ADS)

    Hardy, R. A.; Nerem, R. S.; Wiese, D. N.

    2015-12-01

    The Gravity Recovery and Climate Experiment (GRACE) has produced robust estimates of the contributions of the Greenland and Antarctic ice sheets to sea level rise. A limiting factor in these estimates is the background model (AOD1B) used to remove the atmospheric contribution to the gravity signal. We test the accuracy of this background model against in situ pressure measurements in Greenland and Antarctica and find significant evidence of drift in the model relative to the instruments. Furthermore, we find that the ECMWF Reanalysis (ERA) Interim product better agrees with the in situ data over Greenland and Antarctica. Relative to ERA, biases in atmospheric pressure mask additional trends over both ice sheets and a significant acceleration in mass loss over Antarctica. Agreement with in situ measurements affirms the viability of ERA-Interim for correcting Level 2 GRACE products over these regions.

  18. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  19. Bayesian Estimation of 3D Non-planar Fault Geometry and Slip: An application to the 2011 Megathrust (Mw 9.1) Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón

    2016-04-01

    Earthquake faults are generally considered planar (or of other simple geometry) in earthquake source parameter estimations. However, simplistic fault geometries likely result in biases in estimated slip distributions and increased fault slip uncertainties. In the case of large subduction zone earthquakes, these biases and uncertainties propagate into tsunami waveform modeling and other calculations related to postseismic studies, Coulomb failure stresses, etc. In this research, we parameterize the 3D non-planar fault geometry of the 2011 Tohoku-Oki earthquake (Mw 9.1) and estimate these geometrical parameters along with fault slip parameters from onland and offshore GPS using Bayesian inference. The non-planar fault is formed using several 3rd-degree polynomials in the along-strike (X-Y plane) and along-dip (X-Z plane) directions that are tied together using a triangular mesh. The coefficients of these polynomials constitute the fault geometrical parameters. We use the trench and the locations of past seismicity as a priori information to constrain the fault geometrical parameters, and the Laplacian to characterize the fault slip smoothness. Hyper-parameters associated with these a priori constraints are estimated empirically, and the posterior probability distribution of the model (fault geometry and slip) parameters is sampled using an adaptive Metropolis-Hastings algorithm. The across-strike uncertainty in the fault geometry (effectively the local fault location) around high-slip patches increases from 6 km at 10 km depth to about 35 km at 50 km depth, whereas around low-slip patches the uncertainties are larger (from 7 km to 70 km). Uncertainties in reverse slip are found to be higher at high-slip patches than at low-slip patches. In addition, there appears to be high correlation between adjacent patches of high slip. Our results demonstrate that we can constrain complex non-planar fault geometry together with fault slip from GPS data using past seismicity as a priori
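
    For readers unfamiliar with the sampler mentioned above, here is a generic random-walk Metropolis sketch on a toy two-parameter problem. The study's actual implementation (adaptive proposals, fault-geometry polynomials, empirical hyper-parameters) is far richer; the toy likelihood, priors, and step sizes below are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(2.0, 0.5, 50)                     # toy "observations"

    def log_post(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        # Gaussian likelihood plus weak Gaussian priors (up to a constant)
        return (-0.5 * np.sum((data - mu) ** 2) / sigma**2
                - data.size * log_sigma - 0.5 * (mu**2 + log_sigma**2) / 100.0)

    theta = np.array([0.0, 0.0])
    step = np.array([0.1, 0.1])
    samples, lp = [], log_post(theta)
    for it in range(5000):
        prop = theta + step * rng.normal(size=2)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:         # Metropolis acceptance
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    samples = np.array(samples[1000:])                  # drop burn-in
    print("posterior mean:", samples.mean(axis=0))
    ```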

  20. New constraints on the rupture process of the 1999 August 17 Izmit earthquake deduced from estimates of stress glut rate moments

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.

    2004-12-01

    This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake, in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among the source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake; (2) the apparent rupture velocity decreases on this segment.

  1. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisition. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or are water of seas and lakes. The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends to the west past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul.

  2. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies’ Functions

    PubMed Central

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-01

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole. PMID:26805869

  3. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies' Functions.

    PubMed

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-22

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies' functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident's origin and other indirect losses. In the valuation of damage to people's life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water's recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole.

  4. Photogrammetrically Derived Estimates of Glacier Mass Loss in the Upper Susitna Drainage Basin, Alaska Range, Alaska

    NASA Astrophysics Data System (ADS)

    Wolken, G. J.; Whorton, E.; Murphy, N.

    2014-12-01

    Glaciers in Alaska are currently experiencing some of the highest rates of mass loss on Earth, with mass wastage rates accelerating during the last several decades. Glaciers, and other components of the hydrologic cycle, are expected to continue to change in response to anticipated future atmospheric warming, thus, affecting the quantity and timing of river runoff. This study uses sequential digital elevation model (DEM) analysis to estimate the mass loss of glaciers in the upper Susitna drainage basin, Alaska Range, for the purpose of validating model simulations of past runoff changes. We use mainly stereo optical airborne and satellite data for several epochs between 1949 and 2014, and employ traditional stereo-photogrammetric and structure from motion processing techniques to derive DEMs of the upper Susitna basin glaciers. This work aims to improve the record of glacier change in the central Alaska Range, and serves as a critical validation dataset for a hydrological model that simulates the potential effects of future glacier mass loss on changes in river runoff over the lifespan of the proposed Susitna-Watana Hydroelectric Project.
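
    The sequential-DEM approach described above reduces to differencing two co-registered elevation models, converting the volume change to mass with an assumed density, and expressing it as a rate over the epoch. The sketch below uses synthetic grids and a 850 kg/m^3 density conversion as illustrative assumptions, not the study's data or values.

    ```python
    import numpy as np

    cell_area_m2 = 10.0 * 10.0                     # DEM cell size (assumed 10 m)
    rng = np.random.default_rng(7)
    dem_1949 = 1500.0 + rng.normal(0, 5, (200, 200))
    dem_2014 = dem_1949 - 35.0 + rng.normal(0, 5, (200, 200))   # synthetic thinning

    dh = dem_2014 - dem_1949                       # elevation change, m
    vol_change_m3 = dh.sum() * cell_area_m2
    mass_change_Gt = vol_change_m3 * 850.0 / 1e12  # kg -> Gt, assumed density 850 kg/m^3
    rate = mass_change_Gt / (2014 - 1949)
    print(f"mass change: {mass_change_Gt:.4f} Gt ({rate*1000:.2f} Mt/yr)")
    ```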

  5. Sound absorption coefficient in situ: an alternative for estimating soil loss factors.

    PubMed

    Freire, Rosane; Meletti de Abreu, Marco Henrique; Okada, Rafael Yuri; Soares, Paulo Fernando; Granhen Tavares, Célia Regina

    2015-01-01

    The relationship between the sound absorption coefficient and factors of the Universal Soil Loss Equation (USLE) was determined in a section of the Maringá Stream basin, Paraná State, by using erosion plots. In the field, four erosion plots were built on a reduced scale, with dimensions of 2.0 × 12.5 m. With respect to plot coverage, one was kept with bare soil and the others contained forage grass (Brachiaria), corn and wheat crops, respectively. Planting was performed without any type of conservation practice in an area with a 9% slope. A sedimentation tank was placed at the end of each plot to collect the material transported. For the acoustic system, pink noise was used in the measurement of the proposed monitoring, for collecting information on incident and reflected sound pressure levels. In general, obtained values of soil loss confirmed that 94.3% of material exported to the basin water came from the bare soil plot, 2.8% from the corn plot, 1.8% from the wheat plot, and 1.1% from the forage grass plot. With respect to the acoustic monitoring, results indicated that at 16 kHz erosion plot coverage type had a significant influence on the sound absorption coefficient. High correlation coefficients were found in estimations of the A and C factors of the USLE, confirming that the acoustic technique is feasible for the determination of soil loss directly in the field.

  6. Estimated Lifetime Medical and Work-Loss Costs of Fatal Injuries--United States, 2013.

    PubMed

    Florence, Curtis; Simon, Thomas; Haegerich, Tamara; Luo, Feijun; Zhou, Chao

    2015-10-01

    Injury-associated deaths have substantial economic consequences. In 2013, unintentional injury was the fourth leading cause of death, suicide was the tenth, and homicide was the sixteenth; these three causes accounted for approximately 187,000 deaths in the United States. To assess the economic impact of fatal injuries, CDC analyzed death data from the National Vital Statistics System for 2013, along with cost of injury data using the Web-Based Injury Statistics Query and Reporting System. This report updates a previous study that analyzed death data from the year 2000, and employs recently revised methodology for determining the costs of injury outcomes, which uses the most current economic data and incorporates improvements for estimating medical costs associated with injury. Number of deaths, crude and age-specific death rates, and total lifetime work-loss costs and medical costs were calculated for fatal injuries by sex, age group, intent (intentional versus unintentional), and mechanism of injury. During 2013, the rate of fatal injury was 61.0 per 100,000 population, with combined medical and work-loss costs exceeding $214 billion. Costs from fatal injuries represent approximately one third of the total $671 billion medical and work-loss costs associated with all injuries in 2013. The magnitude of the economic burden associated with injury-associated deaths underscores the need for effective prevention.

  7. Estimation of earthquake source parameters of the June 22, 2002 Changoureh-Avaj event, NW Iran, using aftershocks distribution and far-field data

    NASA Astrophysics Data System (ADS)

    Sadeghi, H.; Suzuki, S.; Hosseini, S. K.; Fujii, Y.; Fatemi Aghda, S. M.

    2003-04-01

    The Changoureh-Avaj earthquake occurred on 22 June 2002, about 225 km west of Tehran, the capital of Iran. Many houses of adobe construction in about 50 villages collapsed, causing the deaths of 230 people and injuries to more than 1400. This earthquake is of great importance not only for seismological interest, but also for understanding fault activity around Tehran, a city of about 7 million people. The present work estimates the earthquake source parameters from far-field body waves of the main shock and the fault geometry inferred from the aftershock distribution. The aftershock distribution images a thrust fault surface dipping 26 degrees to the south from 0 to 15 km depth, with a dip direction of S10W. Besides the main fault, a conjugate sub-fault could also be imaged, which places the main shock hypocenter at about 8 km depth. Based on this fault geometry and using data from the broadband stations of the Khorasan Earthquake Network (Ferdowsi University of Mashhad), at epicentral distances of about 750 to 1000 km, we study the source parameters. The far-field body waves and displacement spectral analysis, assuming a rectangular fault, yield a fault length of 28 km, a stress drop of 0.4 MPa (4 bars), an average dislocation of 12 cm and a seismic moment of 3.1E18 Nm. The rake is determined to be between 90 and 105 degrees.
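
    A quick consistency check on the quoted source parameters can be made with the standard relation M0 = mu * A * D for a rectangular fault and the Hanks-Kanamori moment-magnitude formula. Length, slip, and M0 are taken from the abstract; the rigidity is an assumed typical crustal value, and the down-dip width is inferred rather than quoted.

    ```python
    import numpy as np

    mu = 3.0e10               # shear modulus, Pa (assumed typical crustal value)
    length_m = 28e3           # fault length (abstract)
    slip_m = 0.12             # average dislocation (abstract)
    M0 = 3.1e18               # seismic moment, N m (abstract)

    width_m = M0 / (mu * length_m * slip_m)          # implied down-dip width
    Mw = 2.0 / 3.0 * (np.log10(M0) - 9.1)            # moment magnitude (Hanks & Kanamori)
    print(f"implied fault width ~ {width_m/1e3:.0f} km, Mw ~ {Mw:.1f}")
    # ~31 km of down-dip width is roughly consistent with a 26-degree-dipping plane
    # spanning 0-15 km depth (15 km / sin(26 deg) ~ 34 km).
    ```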

  8. Comparison of ground motions estimated from prediction equations and from observed damage during the M = 4.6 1983 Liège earthquake (Belgium)

    NASA Astrophysics Data System (ADS)

    García Moreno, D.; Camelbeeck, T.

    2013-08-01

    On 8 November 1983 an earthquake of magnitude 4.6 damaged more than 16 000 buildings in the region of Liège (Belgium). The extraordinary damage produced by this earthquake, considering its moderate magnitude, is extremely well documented, giving the opportunity to compare the consequences of a recent moderate earthquake in a typical old city of Western Europe with scenarios obtained by combining strong ground motions and vulnerability modelling. The present study compares 0.3 s spectral accelerations estimated from ground motion prediction equations typically used in Western Europe with those obtained locally by applying the statistical distribution of damaged masonry buildings to two fragility curves, one derived from the HAZUS programme of FEMA (FEMA, 1999) and another developed for high-vulnerability buildings by Lang and Bachmann (2004), and to a method proposed by Faccioli et al. (1999) relating the seismic vulnerability of buildings to the damage and ground motions. The results of this comparison reveal good agreement between the maximum spectral accelerations calculated from these vulnerability and fragility curves and those predicted from the ground motion prediction equations, suggesting peak ground accelerations for the epicentral area of the 1983 earthquake of 0.13-0.20 g (g: gravitational acceleration).
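
    A hedged sketch of how such a fragility curve is used: the probability of reaching a damage state at a given ground-motion level is commonly modeled as a lognormal CDF, and the ground motion consistent with an observed damaged fraction can be read off by inverting it. The median and dispersion below are placeholders, not the parameters of the HAZUS or Lang and Bachmann curves.

    ```python
    import numpy as np
    from scipy.stats import norm

    median_sa, beta = 0.35, 0.6      # median 0.3 s spectral acceleration (g) and dispersion (assumed)

    def p_damage(sa):
        """P(damage state reached | Sa), lognormal fragility."""
        return norm.cdf(np.log(sa / median_sa) / beta)

    def sa_from_damage_fraction(frac):
        """Invert the fragility curve: Sa implied by an observed damaged fraction."""
        return median_sa * np.exp(beta * norm.ppf(frac))

    print(p_damage(0.2))                      # probability of damage at Sa = 0.2 g
    print(sa_from_damage_fraction(0.25))      # Sa implied by 25% of buildings damaged
    ```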

  9. The USGS Earthquake Scenario Project

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Petersen, M. D.; Wald, L. A.; Frankel, A. D.; Quitoriano, V. R.; Lin, K.; Luco, N.; Mathias, S.; Bausch, D.

    2009-12-01

    The U.S. Geological Survey’s (USGS) Earthquake Hazards Program (EHP) is producing a comprehensive suite of earthquake scenarios for planning, mitigation, loss estimation, and scientific investigations. The Earthquake Scenario Project (ESP), though lacking clairvoyance, is a forward-looking project, estimating earthquake hazard and loss outcomes as they may occur one day. For each scenario event, fundamental input includes i) the magnitude and specified fault mechanism and dimensions, ii) regional Vs30 shear velocity values for site amplification, and iii) event metadata. A grid of standard ShakeMap ground motion parameters (PGA, PGV, and three spectral response periods) is then produced using the well-defined, regionally-specific approach developed by the USGS National Seismic Hazard Mapping Project (NSHMP), including recent advances in empirical ground motion predictions (e.g., the NGA relations). The framework also allows for numerical (3D) ground motion computations for specific, detailed scenario analyses. Unlike NSHMP ground motions, for ESP scenarios, local rock and soil site conditions and commensurate shaking amplifications are applied based on detailed Vs30 maps where available or based on topographic slope as a proxy. The scenario event set is comprised primarily by selection from the NSHMP events, though custom events are also allowed based on coordination of the ESP team with regional coordinators, seismic hazard experts, seismic network operators, and response coordinators. The event set will be harmonized with existing and future scenario earthquake events produced regionally or by other researchers. The event list includes approximately 200 earthquakes in CA, 100 in NV, dozens in each of NM, UT, WY, and a smaller number in other regions. Systematic output will include all standard ShakeMap products, including HAZUS input, GIS, KML, and XML files used for visualization, loss estimation, ShakeCast, PAGER, and for other systems. All products will be

  10. Demand surge following earthquakes

    USGS Publications Warehouse

    Olsen, Anna H.

    2012-01-01

    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  11. Estimating nitrogen losses in furrow irrigated soil amended by compost using HYDRUS-2D model

    NASA Astrophysics Data System (ADS)

    Iqbal, Shahid; Guber, Andrey; Zaman Khan, Haroon; Ullah, Ehsan

    2014-05-01

    Furrow irrigation commonly results in high nitrogen (N) losses from the soil profile via deep infiltration. Estimation of such losses and their reduction is not a trivial task because furrow irrigation creates a highly nonuniform distribution of soil water that leads to preferential water and N fluxes in the soil profile. Direct measurements of such fluxes are impractical. The objective of this study was to assess the applicability of the HYDRUS-2D model for estimating the nitrogen balance in manure-amended soil under furrow irrigation. Field experiments were conducted in a sandy loam soil amended with poultry manure compost (PMC) and pressmud compost (PrMC) fertilizers. The PMC and PrMC contained 2.5% and 0.9% N and were applied at 5 rates: 2, 4, 6, 8 and 10 ton/ha. Plots were irrigated starting from the 26th day after planting using furrows with a 1:1 ridge-to-furrow aspect ratio. Irrigation depths were 7.5 cm and the time interval between irrigations varied from 8 to 15 days. Results of the field experiments showed that approximately the same corn yield was obtained with considerably higher N application rates using PMC than using PrMC as a fertilizer. The HYDRUS-2D model was implemented to evaluate N fluxes in soil amended with PMC and PrMC fertilizers. Nitrogen exchange between two pools of organic N (compost and soil) and two pools of mineral N (soil NH4-N and soil NO3-N) was modeled using mineralization and nitrification reactions. Sources of mineral N losses from the soil profile included denitrification, root N uptake and leaching with deep infiltration of water. HYDRUS-2D simulations showed that the observed increases in N root water uptake and corn yields associated with compost application could not be explained by the amount of N added to the soil profile with the compost. Predicted N uptake by roots significantly underestimated the field data. Good agreement between simulated and field-estimated values of N root uptake was achieved when the rate of organic N mineralization was increased

  12. Development of an Earthquake Impact Scale

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Marano, K. D.; Jaiswal, K. S.

    2009-12-01

    With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, domestic (U.S.) and international earthquake responders are reconsidering their automatic alert and activation levels as well as their response procedures. To help facilitate rapid and proportionate earthquake response, we propose and describe an Earthquake Impact Scale (EIS) founded on two alerting criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is more appropriate for most global events. Simple thresholds, derived from the systematic analysis of past earthquake impact and response levels, turn out to be quite effective in communicating the predicted impact and response level of an event, characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses exceeding $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness dominate in countries where vernacular building practices typically lend themselves to high collapse and casualty rates, and it is these impacts that set prioritization for international response. In contrast, it is often financial and overall societal impacts that trigger the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and associated fatalities. Any newly devised alert protocols, whether financial or casualty based, must be intuitive and consistent with established lexicons and procedures. In this analysis, we make an attempt
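
    A hedged sketch of the dual alerting logic described above: map estimated fatalities and estimated economic losses to color alert levels and take the more severe of the two. Thresholds follow the values quoted in the abstract; this is an illustration, not the operational PAGER implementation.

    ```python
    def alert_level(fatalities, losses_usd):
        """Return the EIS-style color alert implied by the larger of the two criteria."""
        levels = ["green", "yellow", "orange", "red"]
        def rank(value, thresholds):
            return sum(value >= t for t in thresholds)
        fatality_rank = rank(fatalities, (1, 100, 1000))
        loss_rank = rank(losses_usd, (1e6, 1e8, 1e9))
        return levels[max(fatality_rank, loss_rank)]

    print(alert_level(fatalities=12, losses_usd=5e8))   # -> "orange"
    ```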

  13. Estimating annual soil carbon loss in agricultural peatland soils using a nitrogen budget approach.

    PubMed

    Kirk, Emilie R; van Kessel, Chris; Horwath, William R; Linquist, Bruce A

    2015-01-01

    Around the world, peatland degradation and soil subsidence is occurring where these soils have been converted to agriculture. Since initial drainage in the mid-1800s, continuous farming of such soils in the California Sacramento-San Joaquin Delta (the Delta) has led to subsidence of up to 8 meters in places, primarily due to soil organic matter (SOM) oxidation and physical compaction. Rice (Oryza sativa) production has been proposed as an alternative cropping system to limit SOM oxidation. Preliminary research on these soils revealed high N uptake by rice in N fertilizer omission plots, which we hypothesized was the result of SOM oxidation releasing N. Testing this hypothesis, we developed a novel N budgeting approach to assess annual soil C and N loss based on plant N uptake and fallow season N mineralization. Through field experiments examining N dynamics during growing season and winter fallow periods, a complete annual N budget was developed. Soil C loss was calculated from SOM-N mineralization using the soil C:N ratio. Surface water and crop residue were negligible in the total N uptake budget (3 - 4 % combined). Shallow groundwater contributed 24 - 33 %, likely representing subsurface SOM-N mineralization. Assuming 6 and 25 kg N ha-1 from atmospheric deposition and biological N2 fixation, respectively, our results suggest 77 - 81 % of plant N uptake (129 - 149 kg N ha-1) was supplied by SOM mineralization. Considering a range of N uptake efficiency from 50 - 70 %, estimated net C loss ranged from 1149 - 2473 kg C ha-1. These findings suggest that rice systems, as currently managed, reduce the rate of C loss from organic delta soils relative to other agricultural practices.
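
    A hedged numerical sketch of the budgeting logic follows: nitrogen supplied by SOM mineralization is inferred as plant N uptake minus the other inputs, scaled by an uptake efficiency, and converted to carbon loss with the soil C:N ratio. The values are taken from, or chosen to be consistent with, the ranges quoted above; the C:N ratio is an assumed illustrative value.

    ```python
    plant_n_uptake = 140.0        # kg N/ha (within the 129-149 range quoted)
    deposition_n = 6.0            # kg N/ha, atmospheric deposition (quoted assumption)
    fixation_n = 25.0             # kg N/ha, biological N2 fixation (quoted assumption)
    uptake_efficiency = 0.6       # fraction of mineralized N actually taken up (50-70% range)
    soil_c_to_n = 12.0            # soil C:N ratio (assumed)

    som_n_taken_up = plant_n_uptake - deposition_n - fixation_n     # N supplied by SOM, ~109 kg N/ha
    som_n_mineralized = som_n_taken_up / uptake_efficiency          # gross SOM-N release
    c_loss = som_n_mineralized * soil_c_to_n
    print(f"estimated net C loss: {c_loss:.0f} kg C/ha")            # falls within the quoted 1149-2473 range
    ```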

  14. Estimating annual soil carbon loss in agricultural peatland soils using a nitrogen budget approach.

    PubMed

    Kirk, Emilie R; van Kessel, Chris; Horwath, William R; Linquist, Bruce A

    2015-01-01

    Around the world, peatland degradation and soil subsidence is occurring where these soils have been converted to agriculture. Since initial drainage in the mid-1800s, continuous farming of such soils in the California Sacramento-San Joaquin Delta (the Delta) has led to subsidence of up to 8 meters in places, primarily due to soil organic matter (SOM) oxidation and physical compaction. Rice (Oryza sativa) production has been proposed as an alternative cropping system to limit SOM oxidation. Preliminary research on these soils revealed high N uptake by rice in N fertilizer omission plots, which we hypothesized was the result of SOM oxidation releasing N. Testing this hypothesis, we developed a novel N budgeting approach to assess annual soil C and N loss based on plant N uptake and fallow season N mineralization. Through field experiments examining N dynamics during growing season and winter fallow periods, a complete annual N budget was developed. Soil C loss was calculated from SOM-N mineralization using the soil C:N ratio. Surface water and crop residue were negligible in the total N uptake budget (3 - 4 % combined). Shallow groundwater contributed 24 - 33 %, likely representing subsurface SOM-N mineralization. Assuming 6 and 25 kg N ha-1 from atmospheric deposition and biological N2 fixation, respectively, our results suggest 77 - 81 % of plant N uptake (129 - 149 kg N ha-1) was supplied by SOM mineralization. Considering a range of N uptake efficiency from 50 - 70 %, estimated net C loss ranged from 1149 - 2473 kg C ha-1. These findings suggest that rice systems, as currently managed, reduce the rate of C loss from organic delta soils relative to other agricultural practices. PMID:25822494

  15. Estimating Annual Soil Carbon Loss in Agricultural Peatland Soils Using a Nitrogen Budget Approach

    PubMed Central

    Kirk, Emilie R.; van Kessel, Chris; Horwath, William R.; Linquist, Bruce A.

    2015-01-01

    Around the world, peatland degradation and soil subsidence are occurring where these soils have been converted to agriculture. Since initial drainage in the mid-1800s, continuous farming of such soils in the California Sacramento-San Joaquin Delta (the Delta) has led to subsidence of up to 8 meters in places, primarily due to soil organic matter (SOM) oxidation and physical compaction. Rice (Oryza sativa) production has been proposed as an alternative cropping system to limit SOM oxidation. Preliminary research on these soils revealed high N uptake by rice in N fertilizer omission plots, which we hypothesized was the result of SOM oxidation releasing N. Testing this hypothesis, we developed a novel N budgeting approach to assess annual soil C and N loss based on plant N uptake and fallow season N mineralization. Through field experiments examining N dynamics during the growing season and winter fallow periods, a complete annual N budget was developed. Soil C loss was calculated from SOM-N mineralization using the soil C:N ratio. Surface water and crop residue were negligible in the total N uptake budget (3 – 4 % combined). Shallow groundwater contributed 24 – 33 %, likely representing subsurface SOM-N mineralization. Assuming 6 and 25 kg N ha-1 from atmospheric deposition and biological N2 fixation, respectively, our results suggest 77 – 81 % of plant N uptake (129 – 149 kg N ha-1) was supplied by SOM mineralization. Considering a range of N uptake efficiency from 50 – 70 %, estimated net C loss ranged from 1149 – 2473 kg C ha-1. These findings suggest that rice systems, as currently managed, reduce the rate of C loss from organic delta soils relative to other agricultural practices. PMID:25822494
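
    The C-loss arithmetic implied by the abstract can be written out in a few lines. The sketch below is an illustration only: the function name is invented, the soil C:N ratio is not reported in the abstract and is set to a placeholder, and the exact bookkeeping (dividing SOM-derived plant N by an uptake efficiency before applying the C:N ratio) is our reading of the approach, not the authors' code.

        # Minimal sketch of the N-budget route to annual soil C loss (Python).
        def soil_c_loss(plant_n_uptake, frac_from_som, uptake_efficiency, soil_cn_ratio):
            """Return estimated soil C loss (kg C/ha/yr) from SOM-derived N supply."""
            n_from_som_in_plant = plant_n_uptake * frac_from_som            # kg N/ha taken up that came from SOM
            som_n_mineralized = n_from_som_in_plant / uptake_efficiency     # gross SOM-N mineralized
            return som_n_mineralized * soil_cn_ratio                        # convert N loss to C loss via C:N

        # Illustrative bounds using the abstract's uptake and efficiency ranges and an assumed C:N of 12
        low  = soil_c_loss(plant_n_uptake=129, frac_from_som=0.77, uptake_efficiency=0.70, soil_cn_ratio=12)
        high = soil_c_loss(plant_n_uptake=149, frac_from_som=0.81, uptake_efficiency=0.50, soil_cn_ratio=12)
        print(round(low), round(high))   # same order of magnitude as the reported 1149 - 2473 kg C/ha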

  16. A new pan-tropical estimate of carbon loss in natural and managed forests in 2000-2012

    NASA Astrophysics Data System (ADS)

    Tyukavina, A.; Baccini, A.; Hansen, M.; Potapov, P.; Stehman, S. V.; Houghton, R. A.; Krylov, A.; Turubanova, S.; Goetz, S. J.

    2015-12-01

    Clearing of tropical forests, which includes semi-permanent conversion of forests to other land uses (deforestation) and more temporary forest disturbances, is a significant source of carbon emissions. The previous estimates of tropical forest carbon loss vary among studies due to the differences in definitions, methodologies and data inputs. The best currently available satellite-derived datasets, such as a 30-m forest cover loss map by Hansen et al. (2013), may be used to produce methodologically consistent carbon loss estimates for the entire tropical region, but forest cover loss area derived from maps is biased due to classification errors. In this study we produced an unbiased estimate of forest cover loss area from a validation sample, as suggested by good practice recommendations. Stratified random sampling was implemented with forest carbon stock strata defined based on Landsat-derived tree canopy cover, height, intactness (Potapov et al., 2008) and forest cover loss (Hansen et al., 2013). The largest difference between the sample-based and Hansen et al. (2013) forest loss area estimates occurred in humid tropical Africa. This result supports the earlier finding (Tyukavina et al., 2013) that Landsat-based forest cover loss maps may significantly underestimate loss area in regions with small-scale forest dynamics while performing well in regions with large industrial forest clearing, such as Brazil and Indonesia (where differences between sample-based and map estimates were within 10%). To produce final carbon loss estimates, sample-based forest loss area estimates for each stratum were related to GLAS-lidar derived forest biomass (Baccini et al., 2012). Our sample-based results distinguish gross losses of aboveground carbon from natural forests (0.59 PgC/yr), which include primary, mature secondary forests and natural woodlands, and from managed forests (0.43 PgC/yr), which include plantations, agroforestry systems and areas of subsistence agriculture
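
    The stratified, sample-based bookkeeping described above can be sketched compactly: unbiased per-stratum loss-area proportions from the validation sample are expanded by stratum area and multiplied by stratum-mean carbon density. Everything below is a hypothetical illustration, not the study's strata, values, or code; the simple error propagation assumes independent strata.

        import numpy as np

        # stratum: (annual loss proportion, SE of that proportion, stratum area in ha, mean AGC in Mg C/ha)
        strata = {
            "dense forest": (0.005, 0.001, 5.0e7, 140.0),
            "secondary":    (0.012, 0.003, 2.0e7,  90.0),
            "woodland":     (0.004, 0.002, 3.0e7,  45.0),
        }

        loss_area = {k: p * a for k, (p, se, a, agc) in strata.items()}        # ha lost per year
        carbon_loss = sum(loss_area[k] * strata[k][3] for k in strata)         # Mg C per year
        se_carbon = np.sqrt(sum((se * a * agc) ** 2 for p, se, a, agc in strata.values()))

        print(f"gross AGC loss: {carbon_loss / 1e6:.1f} +/- {se_carbon / 1e6:.1f} Tg C/yr")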

  17. Earthquake Scenario for the City of Gyumri Including Seismic Hazard and Risk Assessment

    NASA Astrophysics Data System (ADS)

    Babayan, H.; Karakhanyan, A.; Arakelyan, A.; Avanesyan, M.; Durgaryan, R.; Babayan, S.; Gevorgyan, M.; Hovhannisyan, G.

    2012-12-01

    The city of Gyumri, situated in the north of Armenia, falls in the zone of the active Akhouryan Fault, and during the 20th century it suffered twice from catastrophic earthquakes. The Mw=6.2 earthquake in 1926 and the Spitak earthquake with a magnitude of 6.9 in 1988 killed more than 20,000 people in total. Therefore, current seismic hazard and risk assessment for the city is of great importance. It is also very important to establish how well the lessons of the Spitak earthquake have been learned for this largest city in the Spitak earthquake disaster zone, what the real level of seismic risk is now, and what losses the city could expect if a similar or stronger earthquake occurred today. For this purpose, the most probable earthquake scenarios have been developed by means of a comprehensive assessment of seismic hazard and risk. The study reproduced the actual pattern of effects caused by the Spitak earthquake, in terms of losses and damage to diverse types of buildings, and thus enabled correct selection of the parameter values required to estimate the vulnerability of structures and to test the ELER software package (designed for estimation of losses and damages, developed in the framework of the GEM-EMME Project). The work was carried out in the following sequence of steps: probabilistic and deterministic assessment of seismic hazard for the Gyumri city region; choice of earthquake scenarios (based on disaggregation and the seismotectonic model); and risk estimation for each selected earthquake scenario. In the framework of this study, different seismic hazard parameters such as peak ground acceleration and spectral acceleration were investigated and mapped, and a soil model for the city was developed. Subsequently, these maps were used as the basic inputs to assess the expected life, building, and lifeline losses. The presented work was realized with the financial support of UNDP. The results of the Project will serve as the basis for national and local

  18. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-01-01

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment. PMID:24190595
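
    A bare-bones sketch of the stepping idea described above: a prior outcome is multiplied by a ratio of Weibull terms evaluated at successive times, with shape and scale fixed at the change point. The function name, parameter values, and the choice of Weibull densities for the ratio are assumptions for illustration, not the published SPRE implementation.

        import numpy as np
        from scipy.stats import weibull_min

        def spre_forecast(y_change, t_change, horizon, shape, scale, step=1.0):
            """Step an outcome forward from the change point using a ratio of Weibull terms."""
            times = np.arange(t_change, t_change + horizon + step, step)
            y = [y_change]
            for t_prev, t_next in zip(times[:-1], times[1:]):
                ratio = (weibull_min.pdf(t_next, shape, scale=scale)
                         / weibull_min.pdf(t_prev, shape, scale=scale))
                y.append(y[-1] * ratio)
            return times, np.array(y)

        # e.g. projecting weight (kg) forward from a change point detected at week 6 of treatment
        weeks, weight = spre_forecast(y_change=95.0, t_change=6, horizon=20, shape=1.2, scale=30.0)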

  19. Estimating landslide losses - preliminary results of a seven-State pilot project

    USGS Publications Warehouse

    Highland, Lynn M.

    2006-01-01

    reliable information on economic losses associated with landslides. Each State survey examined the availability, distribution, and inherent uncertainties of economic loss data in their study areas. Their results provide the basis for identifying the most fruitful methods of collecting landslide loss data nationally, using methods that are consistent and provide common goals. These results can enhance and establish the future directions of scientific investigation priorities by convincingly documenting landslide risks and consequences that are universal throughout the 50 States. This report is organized as follows: A general summary of the pilot project history, goals, and preliminary conclusions from the Lincoln, Neb. workshop are presented first. Internet links are then provided for each State report, which appear on the internet in PDF format and which have been placed at the end of this open-file report. A reference section follows the reports, and, lastly, an Appendix of categories of landslide loss and sources of loss information is included for the reader's information. Please note: The Oregon Geological Survey has also submitted a preliminary report on indirect loss estimation methodology, which is also linked with the others. Each State report is unique and presented in the form in which it was submitted, having been independently peer reviewed by each respective State survey. As such, no universal 'style' or format has been adopted as there have been no decisions on which inventory methods will be recommended to the 50 states, as of this writing. The reports are presented here as information for decision makers, and for the record; although several reports provide recommendations on inventory methods that could be adopted nationwide, currently no decisions have been made on adopting a uniform methodology for the States.

  20. Should Coulomb stress change calculations be used to forecast aftershocks and to influence earthquake probability estimates? (Invited)

    NASA Astrophysics Data System (ADS)

    Parsons, T.

    2009-12-01

    After a large earthquake, our concern immediately moves to the likelihood that another large shock could be triggered, threatening an already weakened building stock. A key question is whether it is best to map out Coulomb stress change calculations shortly after mainshocks to potentially highlight the most likely aftershock locations, or whether it is more prudent to wait until the best information is available. It has been shown repeatedly that spatial aftershock patterns can be matched with Coulomb stress change calculations a year or more after mainshocks. However, with the onset of rapid source slip model determinations, the method has produced encouraging results, such as the M=8.7 earthquake that was forecast using stress change calculations from the 2004 great Sumatra earthquake by McCloskey et al. [2005]. Here, I look back at two additional prospective calculations published shortly after the 2005 M=7.6 Kashmir and 2008 M=8.0 Wenchuan earthquakes. With the benefit of 1.5-4 years of additional seismicity, it is possible to assess the performance of rapid Coulomb stress change calculations. In the second part of the talk, within the context of the ongoing Working Group on California Earthquake Probabilities (WGCEP) assessments, uncertainties associated with time-dependent probability calculations are convolved with uncertainties inherent to Coulomb stress change calculations to assess the strength of signal necessary for a physics-based calculation to merit consideration in a formal earthquake forecast. Conclusions are as follows: (1) subsequent aftershock occurrence shows that prospective static stress change calculations for both the Kashmir and Wenchuan examples failed to adequately predict the spatial post-mainshock earthquake distributions. (2) For a San Andreas fault example with relatively well-understood recurrence, a static stress change on the order of 30 to 40 times the annual stressing rate would be required to cause a significant (90%) perturbation to the
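
    For reference, the quantity being mapped in such studies is the Coulomb failure stress change resolved onto a receiver fault. A minimal sketch of the standard definition follows (not the author's code; the effective friction value is a typical assumption):

        def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
            """Coulomb failure stress change (MPa): d_shear is the shear stress change in the
            slip direction, d_normal the normal stress change (unclamping positive), and
            mu_eff an assumed effective friction coefficient."""
            return d_shear + mu_eff * d_normal

        # A change of roughly +0.1 MPa is often cited as a plausible triggering level; the abstract's
        # point is that a robust probability perturbation may require changes tens of times larger
        # than the annual stressing rate.
        print(coulomb_stress_change(d_shear=0.05, d_normal=0.12))   # 0.098 MPa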

  1. Magnetic Resonance Measurement of Turbulent Kinetic Energy for the Estimation of Irreversible Pressure Loss in Aortic Stenosis

    PubMed Central

    Dyverfeldt, Petter; Hope, Michael D.; Tseng, Elaine E.; Saloner, David

    2013-01-01

    OBJECTIVES The authors sought to measure the turbulent kinetic energy (TKE) in the ascending aorta of patients with aortic stenosis and to assess its relationship to irreversible pressure loss. BACKGROUND Irreversible pressure loss caused by energy dissipation in post-stenotic flow is an important determinant of the hemodynamic significance of aortic stenosis. The simplified Bernoulli equation used to estimate pressure gradients often misclassifies the ventricular overload caused by aortic stenosis. The current gold standard for estimation of irreversible pressure loss is catheterization, but this method is rarely used due to its invasiveness. Post-stenotic pressure loss is largely caused by dissipation of turbulent kinetic energy into heat. Recent developments in magnetic resonance flow imaging permit noninvasive estimation of TKE. METHODS The study was approved by the local ethics review board and all subjects gave written informed consent. Three-dimensional cine magnetic resonance flow imaging was used to measure TKE in 18 subjects (4 normal volunteers, 14 patients with aortic stenosis with and without dilation). For each subject, the peak total TKE in the ascending aorta was compared with a pressure loss index. The pressure loss index was based on a previously validated theory relating pressure loss to measures obtainable by echocardiography. RESULTS The total TKE did not appear to be related to global flow patterns visualized based on magnetic resonance–measured velocity fields. The TKE was significantly higher in patients with aortic stenosis than in normal volunteers (p < 0.001). The peak total TKE in the ascending aorta was strongly correlated to index pressure loss (R2 = 0.91). CONCLUSIONS Peak total TKE in the ascending aorta correlated strongly with irreversible pressure loss estimated by a well-established method. Direct measurement of TKE by magnetic resonance flow imaging may, with further validation, be used to estimate irreversible pressure loss
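
    The quantity measured here follows the standard definition of turbulent kinetic energy from intravoxel velocity standard deviations. The sketch below is a generic illustration of that computation, not the study's processing chain; the array shapes, blood density value, and masking step are assumptions.

        import numpy as np

        def tke_density(sigma_u, sigma_v, sigma_w, rho=1060.0):
            """TKE per unit volume (J/m3) from per-component velocity std devs (m/s); rho in kg/m3."""
            return 0.5 * rho * (sigma_u**2 + sigma_v**2 + sigma_w**2)

        def total_tke(tke, voxel_volume_m3, mask):
            """Total TKE (J) summed over a masked region, e.g. the ascending aorta at one time frame."""
            return np.sum(tke[mask]) * voxel_volume_m3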

  2. National-scale estimation of gross forest aboveground carbon loss: a case study of the Democratic Republic of the Congo

    NASA Astrophysics Data System (ADS)

    Tyukavina, A.; Stehman, S. V.; Potapov, P. V.; Turubanova, S. A.; Baccini, A.; Goetz, S. J.; Laporte, N. T.; Houghton, R. A.; Hansen, M. C.

    2013-12-01

    Recent advances in remote sensing enable the mapping and monitoring of carbon stocks without relying on extensive in situ measurements. The Democratic Republic of the Congo (DRC) is among the countries where national forest inventories (NFI) are either non-existent or out of date. Here we demonstrate a method for estimating national-scale gross forest aboveground carbon (AGC) loss and associated uncertainties using remotely sensed-derived forest cover loss and biomass carbon density data. Lidar data were used as a surrogate for NFI plot measurements to estimate carbon stocks and AGC loss based on forest type and activity data derived using time-series multispectral imagery. Specifically, DRC forest type and loss from the FACET (Forêts d’Afrique Centrale Evaluées par Télédétection) product, created using Landsat data, were related to carbon data derived from the Geoscience Laser Altimeter System (GLAS). Validation data for FACET forest area loss were created at a 30-m spatial resolution and compared to the 60-m spatial resolution FACET map. We produced two gross AGC loss estimates for the DRC for the last decade (2000-2010): a map-scale estimate (53.3 ± 9.8 Tg C yr-1) accounting for whole-pixel classification errors in the 60-m resolution FACET forest cover change product, and a sub-grid estimate (72.1 ± 12.7 Tg C yr-1) that took into account 60-m cells that experienced partial forest loss. Our sub-grid forest cover and AGC loss estimates, which included smaller-scale forest disturbances, exceed published assessments. Results raise the issue of scale in forest cover change mapping and validation, and subsequent impacts on remotely sensed carbon stock change estimation, particularly for smallholder dominated systems such as the DRC.

  3. Source estimate and tsunami forecast from far-field deep-ocean tsunami waveforms—The 27 February 2010 Mw 8.8 Maule earthquake

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Masahiro; Watada, Shingo; Fujii, Yushiro; Satake, Kenji

    2016-01-01

    We inverted the 2010 Maule earthquake tsunami waveforms recorded at DART (Deep-ocean Assessment and Reporting of Tsunamis) stations in the Pacific Ocean by taking into account the effects of the seawater compressibility, elasticity of the solid Earth, and gravitational potential change. These effects slow down the tsunami speed and consequently move the slip in the offshore or updip direction, consistent with the slip distribution obtained by a joint inversion of DART, tide gauge, GPS, and coastal geodetic data. Separate inversions of only near-field DART data and only far-field DART data produce similar slip distributions. The former demonstrates that accurate tsunami arrival times and waveforms of trans-Pacific tsunamis can be forecast in real time. The latter indicates that if the tsunami source area is as large as that of the 2010 Maule earthquake, the tsunami source can be accurately estimated from the far-field deep-ocean tsunami records without near-field data.

  4. As estimation of the climatic effects of stratospheric ozone losses during the 1980s

    SciTech Connect

    MacKay, R.M.; Ko, M.K.W.; Yang, Yajaing

    1997-04-01

    In order to study the potential climatic effects of the ozone hole more directly and to assess the validity of previous lower resolution model results, the latest high spatial resolution version of the Atmospheric and Environmental Research, Inc., seasonal radiative dynamical climate model is used to simulate the climatic effects of ozone changes relative to the other greenhouse gases. The steady-state climatic effect of a sustained decrease in lower stratospheric ozone, similar in magnitude to the observed 1979-90 decrease, is estimated by comparing three steady-state climate simulations: (I) 1979 greenhouse gas concentrations and 1979 ozone, (II) 1990 greenhouse gas concentrations with 1979 ozone, and (III) 1990 greenhouse gas concentrations with 1990 ozone. The simulated increase in surface air temperature resulting from nonozone greenhouse gases is 0.272 K. When changes in lower stratospheric ozone are included, the greenhouse warming is 0.165 K, which is approximately 39% lower than when ozone is fixed at the 1979 concentrations. Ozone perturbations at high latitudes result in a cooling of the surface-troposphere system that is greater (by a factor of 2.8) than that estimated from the change in radiative forcing resulting from ozone depletion and the model's 2 x CO2 climate sensitivity. The results suggest that changes in meridional heat transport from low to high latitudes combined with the decrease in the infrared opacity of the lower stratosphere are very important in determining the steady-state response to high latitude ozone losses. The 39% compensation in greenhouse warming resulting from lower stratospheric ozone losses is also larger than the 28% compensation simulated previously by the lower resolution model. The higher resolution model is able to resolve the high latitude features of the assumed ozone perturbation, which are important in determining the overall climate sensitivity to these perturbations. 39 refs., 11 figs., 4 tabs.

  5. An Estimation of the Climatic Effects of Stratospheric Ozone Losses during the 1980s. Appendix K

    NASA Technical Reports Server (NTRS)

    MacKay, Robert M.; Ko, Malcolm K. W.; Shia, Run-Lie; Yang, Yajaing; Zhou, Shuntai; Molnar, Gyula

    1997-01-01

    In order to study the potential climatic effects of the ozone hole more directly and to assess the validity of previous lower resolution model results, the latest high spatial resolution version of the Atmospheric and Environmental Research, Inc., seasonal radiative dynamical climate model is used to simulate the climatic effects of ozone changes relative to the other greenhouse gases. The steady-state climatic effect of a sustained decrease in lower stratospheric ozone, similar in magnitude to the observed 1979-90 decrease, is estimated by comparing three steady-state climate simulations: (I) 1979 greenhouse gas concentrations and 1979 ozone, (II) 1990 greenhouse gas concentrations with 1979 ozone, and (III) 1990 greenhouse gas concentrations with 1990 ozone. The simulated increase in surface air temperature resulting from nonozone greenhouse gases is 0.272 K. When changes in lower stratospheric ozone are included, the greenhouse warming is 0.165 K, which is approximately 39% lower than when ozone is fixed at the 1979 concentrations. Ozone perturbations at high latitudes result in a cooling of the surface-troposphere system that is greater (by a factor of 2.8) than that estimated from the change in radiative forcing resulting from ozone depletion and the model's 2 x CO2 climate sensitivity. The results suggest that changes in meridional heat transport from low to high latitudes combined with the decrease in the infrared opacity of the lower stratosphere are very important in determining the steady-state response to high latitude ozone losses. The 39% compensation in greenhouse warming resulting from lower stratospheric ozone losses is also larger than the 28% compensation simulated previously by the lower resolution model. The higher resolution model is able to resolve the high latitude features of the assumed ozone perturbation, which are important in determining the overall climate sensitivity to these perturbations.
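
    The quoted compensation figure follows directly from the two warming numbers given in the abstract, as the short check below shows.

        warming_fixed_ozone  = 0.272   # K, 1990 greenhouse gases with 1979 ozone
        warming_with_o3_loss = 0.165   # K, 1990 greenhouse gases with 1990 ozone
        compensation = (warming_fixed_ozone - warming_with_o3_loss) / warming_fixed_ozone
        print(f"{compensation:.0%}")   # approximately 39%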

  6. Earthquake Risk Assessment and Risk Transfer

    NASA Astrophysics Data System (ADS)

    Liechti, D.; Zbinden, A.; Rüttener, E.

    Research on risk assessment of natural catastrophes is very important for estimating their economic and social impact. The loss potentials of such disasters (e.g. earthquakes and storms) for property owners, insurance and nationwide economies are driven by the hazard, the damageability (vulnerability) of buildings and infrastructures and depend on the ability to transfer these losses to different parties. In addition, the geographic distribution of the exposed values, the uncertainty of building vulnerability and the individual deductible are main factors determining the size of a loss. The deductible is the key element that steers the distribution of losses between insured and insurer. Therefore the risk analysis concentrates on the deductible and the vulnerability of insured buildings and maps their variations to allow efficient decisions. Considering stochastic event sets, the corresponding event losses can be modelled as expected loss grades of a Beta probability density function. Based on the deductible and the standard deviation of expected loss grades, the loss for the insured and for the insurer can be quantified. In addition, the varying deductible impact on different geographic regions can be described. This analysis has been carried out for earthquake insurance portfolios with various building types and different deductibles. Besides quantifying loss distributions between insured and insurer based on uncertainty assumptions and deductible consideration, mapping yields ideas to optimise the risk transfer process and can be used for developing risk mitigation strategies.
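
    The split of an event loss between insured and insurer, given a Beta-distributed loss grade and a proportional deductible, can be sketched as below. The parameter values are illustrative placeholders, not figures from the abstract.

        from scipy.stats import beta
        from scipy.integrate import quad

        def split_expected_loss(a, b, deductible, insured_value):
            """Expected retained (insured) and ceded (insurer) loss for a Beta(a, b) loss grade."""
            ceded = quad(lambda x: (x - deductible) * beta.pdf(x, a, b), deductible, 1.0)[0]
            retained = quad(lambda x: min(x, deductible) * beta.pdf(x, a, b), 0.0, 1.0)[0]
            return retained * insured_value, ceded * insured_value

        retained, ceded = split_expected_loss(a=0.5, b=8.0, deductible=0.10, insured_value=1_000_000)
        print(round(retained), round(ceded))   # higher deductibles shift expected loss toward the insured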

  7. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and approximately one-half of all policies in highly earthquake prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the building losses incurred in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, the losses would not be indemnified but would be calculated directly on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  8. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, to compare their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.

  9. Estimation of postseismic deformation parameters from continuous GPS data in northern Sumatra after the 2004 Sumatra-Andaman earthquake

    NASA Astrophysics Data System (ADS)

    Anugrah, Bimar; Meilano, Irwan; Gunawan, Endra; Efendi, Joni

    2015-12-01

    Continuous global positioning system (GPS) stations in northern Sumatra detected the signal of the ongoing physical process of postseismic deformation after the M9.2 2004 Sumatra-Andaman earthquake. We analyze the characteristics of postseismic deformation of the 2004 earthquake based on GPS networks operated by BIG, together with the AGNeSS and SuGAr networks located in northern Sumatra. We use simple analytical logarithmic and exponential functions to evaluate the postseismic deformation parameters of the 2004 earthquake. We find that GPS data in northern Sumatra during the 2005-2012 time period are fit better by the logarithmic function, with τlog of 104.2 ± 0.1, than by the exponential function. Our result clearly indicates that other physical mechanisms of postseismic deformation should be taken into account, rather than the single physical mechanism of afterslip only.
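
    The two analytical forms compared in the abstract are commonly written as a*ln(1 + t/τ) and a*(1 - exp(-t/τ)). The sketch below fits both to synthetic displacements only to illustrate the comparison; it is not the authors' processing, and the numbers are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def log_model(t, a, tau):
            return a * np.log(1.0 + t / tau)

        def exp_model(t, a, tau):
            return a * (1.0 - np.exp(-t / tau))

        t = np.linspace(1, 2500, 200)                                          # days after the mainshock
        obs = log_model(t, 120.0, 104.2) + np.random.normal(0, 2.0, t.size)    # synthetic displacement, mm

        p_log, _ = curve_fit(log_model, t, obs, p0=(100.0, 100.0))
        p_exp, _ = curve_fit(exp_model, t, obs, p0=(100.0, 500.0))
        rms = lambda model, p: np.sqrt(np.mean((obs - model(t, *p)) ** 2))
        print(rms(log_model, p_log), rms(exp_model, p_exp))                    # the logarithmic fit is closer here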

  10. Scaling relationship between corner frequencies and seismic moments of ultra micro earthquakes estimated with coda-wave spectral ratio -the Mponeng mine in South Africa

    NASA Astrophysics Data System (ADS)

    Wada, N.; Kawakata, H.; Murakami, O.; Doi, I.; Yoshimitsu, N.; Nakatani, M.; Yabe, Y.; Naoi, M. M.; Miyakawa, K.; Miyake, H.; Ide, S.; Igarashi, T.; Morema, G.; Pinder, E.; Ogasawara, H.

    2011-12-01

    The scaling relationship between corner frequencies, fc, and seismic moments, Mo, is an important clue to understanding seismic source characteristics. Aki (1967) showed that Mo is proportional to fc^-3 for large earthquakes (cubic law). Iio (1986) claimed breakdown of the cubic law between fc and Mo for smaller earthquakes (Mw < 2), and Gibowicz et al. (1991) also showed the breakdown for ultra micro and small earthquakes (Mw < -2). However, it has been reported that the cubic law holds even for micro earthquakes (-1 < Mw < 4) by using high quality data observed at a deep borehole (Abercrombie, 1995; Ogasawara et al., 2001; Hiramatsu et al., 2002; Yamada et al., 2007). In order to clarify the scaling relationship for smaller earthquakes (Mw < -1), we analyzed ultra micro earthquakes using very high sampling records (48 kHz) of borehole seismometers installed within hard rock at the Mponeng mine in South Africa. We used four three-component accelerometers that have a flat response up to 25 kHz. They were installed 10 to 30 meters apart from each other at a depth of 3,300 meters. During the period from 2008/10/14 to 2008/10/30 (17 days), 8,927 events were recorded. We estimated fc and Mo for 60 events (-3 < Mw < -1) within 200 meters from the seismometers. Assuming Brune's source model, we estimated fc and Mo from spectral ratios. Common practice is to use direct waves from adjacent events. However, there were only 5 event pairs less than 20 meters apart with an Mw difference over one. In addition, the observation array is very small (radius less than 30 m), which means that effects of directivity and radiation pattern on direct waves are similar at all stations. Hence, we used spectral ratios of coda waves, since these effects are averaged and will be effectively reduced (Mayeda et al., 2007; Somei et al., 2010). Coda analysis was attempted only for the 20 relatively large events (we call "coda events" hereafter) that have coda energy
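
    The spectral-ratio estimation mentioned above is usually based on the Brune omega-square model, in which the ratio of two event spectra depends on the moment ratio and the two corner frequencies. The sketch below fits that ratio to synthetic data; the model choice follows the abstract, but the values and noise are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def brune_ratio(f, moment_ratio, fc_large, fc_small):
            """Spectral ratio of a larger to a smaller event under the Brune omega-square model."""
            return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_large) ** 2)

        f = np.logspace(1, 4.3, 100)                                            # 10 Hz to ~20 kHz
        obs = brune_ratio(f, 300.0, 800.0, 6000.0) * np.exp(np.random.normal(0, 0.1, f.size))

        (moment_ratio, fc_large, fc_small), _ = curve_fit(brune_ratio, f, obs, p0=(100.0, 500.0, 5000.0))
        # The moment ratio gives the magnitude difference via dMw = (2/3) * log10(moment_ratio),
        # and fc_large, fc_small are the corner frequency estimates for the event pair.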

  11. Exploring the uncertainty range of co-seismic stress drop estimations of large earthquakes using finite fault inversions

    NASA Astrophysics Data System (ADS)

    Adams, Mareike; Twardzik, Cedric; Ji, Chen

    2016-10-01

    A new finite fault inversion strategy is developed to explore the uncertainty range for the energy based average co-seismic stress drop (Δτ_E) of large earthquakes. For a given earthquake, we conduct a modified finite fault inversion to find a solution that not only matches seismic and geodetic data but also has a Δτ_E matching a specified value. We do the inversions for a wide range of stress drops. These results produce a trade-off curve between the misfit to the observations and Δτ_E, which allows one to define the range of Δτ_E that will produce an acceptable misfit. The study of the 2014 Rat Islands Mw 7.9 earthquake reveals an unexpected result: when using only teleseismic waveforms as data, the lower bound of Δτ_E (5-10 MPa) for this earthquake is successfully constrained. However, the same dataset exhibits no sensitivity to its upper bound of Δτ_E because there is limited resolution to the fine scale roughness of fault slip. Given that the spatial resolution of all seismic or geodetic data is limited, we can speculate that the upper bound of Δτ_E cannot be constrained with them. This has consequences for the earthquake energy budget. Failing to constrain the upper bound of Δτ_E leads to the conclusions that 1) the seismic radiation efficiency determined from the inverted model might be significantly overestimated; 2) the upper bound of the average fracture energy EG cannot be constrained by seismic or geodetic data. Thus, caution must be taken when investigating the characteristics of large earthquakes using the energy budget approach. Finally, searching for the lower bound of Δτ_E can be used as an energy-based smoothing scheme during finite fault inversions.

  12. Near Field Deformation of the Mw 6.0 24 August, 2014 South Napa Earthquake Estimated by Airborne Light Detection and Ranging (LiDAR) Change Detection Techniques

    NASA Astrophysics Data System (ADS)

    Lyda, A. W.; Zhang, X.; Glennie, C. L.; Hudnut, K. W.; Brooks, B. A.

    2015-12-01

    We examine surface deformation caused by the Mw 6.0 24 August, 2014 South Napa Earthquake using high-resolution pre- and post-event airborne LiDAR (Light Detection and Ranging) observations. Temporally spaced LiDAR surveys taken before and after an earthquake can provide decimeter-level, 3D near-field estimates of deformation. These near-field deformation estimates can help constrain fault slip and rheology of shallow seismogenic zones. We compare and contrast estimates of deformation obtained from pre- and post-event LiDAR data sets of the 2014 South Napa Earthquake using two change detection techniques, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm has been and still is the primary technique for acquiring three-dimensional deformations from airborne LiDAR data sets. It conducts a rigid registration of pre-event data points to post-event data points by iteratively matching data points with the smallest Euclidean distances between data sets. PIV is a technique derived from fluid mechanics that measures the displacement of a particle between two images separated by a known time. LiDAR points act as the particles within the point cloud images so that their movement represents the horizontal deformation of the surface. The results from these change detection techniques are presented and further analyzed for differences between the techniques, the effects of temporal spacing between LiDAR collections, and the use of permanent LiDAR scatterers to constrain deformation estimates. The airborne LiDAR results will also be compared with far-field deformations from space-based geodetic techniques (InSAR and GNSS) and field observations of surface displacement.
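
    One point-to-point ICP iteration (nearest-neighbour matching followed by a best-fit rigid transform) can be written in a few lines; applied to a local patch, the recovered translation approximates its 3D displacement between surveys. This is a generic sketch under those assumptions, not the processing chain used in the study.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target):
            """One ICP iteration: match each pre-event point to its closest post-event point,
            then solve for the rigid rotation R and translation t (Kabsch/SVD)."""
            tree = cKDTree(target)
            _, idx = tree.query(source)
            matched = target[idx]
            mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
            H = (source - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:           # guard against reflection solutions
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            return R, t                        # applied patch-by-patch, t approximates local displacement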

  13. Probabilistic Methodology for Estimation of Number and Economic Loss (Cost) of Future Landslides in the San Francisco Bay Region, California

    USGS Publications Warehouse

    Crovelli, Robert A.; Coe, Jeffrey A.

    2008-01-01

    The Probabilistic Landslide Assessment Cost Estimation System (PLACES) presented in this report estimates the number and economic loss (cost) of landslides during a specified future time in individual areas, and then calculates the sum of those estimates. The analytic probabilistic methodology is based upon conditional probability theory and laws of expectation and variance. The probabilistic methodology is expressed in the form of a Microsoft Excel computer spreadsheet program. Using historical records, the PLACES spreadsheet is used to estimate the number of future damaging landslides and total damage, as economic loss, from future landslides caused by rainstorms in 10 counties of the San Francisco Bay region in California. Estimates are made for any future 5-year period of time. The estimated total number of future damaging landslides for the entire 10-county region during any future 5-year period of time is about 330. Santa Cruz County has the highest estimated number of damaging landslides (about 90), whereas Napa, San Francisco, and Solano Counties have the lowest estimated number of damaging landslides (5-6 each). Estimated direct costs from future damaging landslides for the entire 10-county region for any future 5-year period are about US $76 million (year 2000 dollars). San Mateo County has the highest estimated costs ($16.62 million), and Solano County has the lowest estimated costs (about $0.90 million). Estimated direct costs are also subdivided into public and private costs.
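
    The aggregation underlying such a spreadsheet can be illustrated with the standard expectation and variance rules for a random sum (number of landslides times cost per landslide). The county values below are placeholders, not the report's inputs.

        # Per county: E[count], Var[count], E[cost per slide, $M], Var[cost per slide, $M^2]
        counties = {
            "County A": (90.0, 90.0, 0.18, 0.02),
            "County B": (60.0, 60.0, 0.28, 0.05),
            "County C": ( 5.0,  5.0, 0.18, 0.02),
        }

        total_count = sum(n for n, vn, c, vc in counties.values())
        total_cost = sum(n * c for n, vn, c, vc in counties.values())
        # Variance of a random sum: Var = E[N] * Var[C] + Var[N] * E[C]^2 (costs independent of N)
        var_cost = sum(n * vc + vn * c ** 2 for n, vn, c, vc in counties.values())
        print(total_count, round(total_cost, 1), round(var_cost ** 0.5, 1))   # count, $M, std dev in $M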

  14. Soil-Earthquake Interactions in Buyukada/ Prinkipo (Istanbul)

    NASA Astrophysics Data System (ADS)

    Ozcep, Ferhat; Karabulut, Savas; Caglak, Faruk; Ozel, Oguz

    2014-05-01

    Buyukada ("Large Isle"), the largest of the nine islands comprising the Princes' Islands in the Marmara Sea close to Istanbul, covers an area of 5.46 km2. The main factor controlling the earthquake hazard for Istanbul is a complex fault system, the North Anatolian Fault zone, in the Marmara Sea region. Recent geophysical studies have shown that this hazard is mainly associated with two active seismogenic areas: the Central Marmara Basin and the Adalar Fault zone, located about 15-30 km south-west and south of Istanbul. Earthquake ground motion affects structures through the state of the soils. There are several historical buildings on Büyükada, such as the Ayia Yorgi Church and Monastery dating back to the sixth century, the Ayios Dimitrios Church, the Hamidiye Mosque built by Abdul Hamid II, and the Greek Orphanage, a huge wooden building. The combined characteristics of the soils and buildings under earthquake shaking can cause damage and loss. One of the most important factors in reducing earthquake risk in urban areas is therefore to estimate the ground motion level, taking the interaction with the soils into account. Geologically, Buyukada consists of Paleozoic units and alluvial deposits. The site response of the alluvial deposits on Buyukada is also important for the behavior during an earthquake. A geophysical survey was carried out in the study area to estimate the behavior of the soils, providing dominant period data (microtremor measurements) and shear wave velocity data (MASW and MAM measurements). These soil geophysical results are input, together with the earthquake motion at bedrock sites, to estimate Büyükada's earthquake ground motion through the interaction of the ground motion with the soils. In earthquake-soil interaction, spectral acceleration is an important criterion. In this study, spectral accelerations are also estimated for the ground motion level in the Princes' Islands by using several

  15. Soil loss estimation and prioritization of sub-watersheds of Kali River basin, Karnataka, India, using RUSLE and GIS.

    PubMed

    Markose, Vipin Joseph; Jayappa, K S

    2016-04-01

    Most of the mountainous regions in the tropical humid climatic zone experience severe soil loss due to natural factors. In the absence of measured data, modeling techniques play a crucial role in the quantitative estimation of soil loss in such regions. The objective of this research work is to estimate soil loss and prioritize the sub-watersheds of the Kali River basin using the Revised Universal Soil Loss Equation (RUSLE) model. Various thematic layers of RUSLE factors such as rainfall erosivity (R), soil erodibility (K), topographic factor (LS), crop management factor (C), and support practice factor (P) have been prepared by using multiple spatial and non-spatial data sets. These layers were integrated in a geographic information system (GIS) environment to estimate the soil loss. The results show that ∼42 % of the study area falls under low erosion risk and only 6.97 % of the area suffers from very high erosion risk. Based on the rate of soil loss, 165 sub-watersheds have been prioritized into four categories: very high, high, moderate, and low erosion risk. Anthropogenic activities such as deforestation, construction of dams, and rapid urbanization are the main reasons for the high rate of soil loss in the study area. The soil erosion rate and prioritization maps help in the implementation of a proper watershed management plan for the river basin. PMID:26969157

  16. Soil loss estimation and prioritization of sub-watersheds of Kali River basin, Karnataka, India, using RUSLE and GIS.

    PubMed

    Markose, Vipin Joseph; Jayappa, K S

    2016-04-01

    Most of the mountainous regions in the tropical humid climatic zone experience severe soil loss due to natural factors. In the absence of measured data, modeling techniques play a crucial role in the quantitative estimation of soil loss in such regions. The objective of this research work is to estimate soil loss and prioritize the sub-watersheds of the Kali River basin using the Revised Universal Soil Loss Equation (RUSLE) model. Various thematic layers of RUSLE factors such as rainfall erosivity (R), soil erodibility (K), topographic factor (LS), crop management factor (C), and support practice factor (P) have been prepared by using multiple spatial and non-spatial data sets. These layers were integrated in a geographic information system (GIS) environment to estimate the soil loss. The results show that ∼42 % of the study area falls under low erosion risk and only 6.97 % of the area suffers from very high erosion risk. Based on the rate of soil loss, 165 sub-watersheds have been prioritized into four categories: very high, high, moderate, and low erosion risk. Anthropogenic activities such as deforestation, construction of dams, and rapid urbanization are the main reasons for the high rate of soil loss in the study area. The soil erosion rate and prioritization maps help in the implementation of a proper watershed management plan for the river basin.
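
    The RUSLE calculation itself is a per-cell product of the five factor rasters, A = R * K * LS * C * P. A minimal raster sketch is below; the factor values are placeholders, not those derived for the Kali River basin.

        import numpy as np

        R  = np.full((4, 4), 7500.0)                  # rainfall erosivity (MJ mm / ha h yr)
        K  = np.full((4, 4), 0.25)                    # soil erodibility (t ha h / ha MJ mm)
        LS = np.random.uniform(0.5, 12.0, (4, 4))     # slope length-steepness factor
        C  = np.random.uniform(0.05, 0.6, (4, 4))     # crop management factor
        P  = np.full((4, 4), 1.0)                     # support practice factor

        A = R * K * LS * C * P                        # annual soil loss per cell (t/ha/yr)
        print(A.round(1))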

  17. Estimating the loss of C, N and microbial biomass from Biological Soil Crusts under simulated rainfall

    NASA Astrophysics Data System (ADS)

    Gommeaux, M.; Malam Issa, O.; Bouchet, T.; Valentin, C.; Rajot, J.-L.; Bertrand, I.; Alavoine, G.; Desprats, J.-F.; Cerdan, O.; Fatondji, D.

    2012-04-01

    Most areas where biological soil crusts (BSC) develop undergo a climate with heavy but sparse rainfall events. The hydrological response of the BSC, namely the amount of runoff, is highly variable. Rainfall simulation experiments were conducted in Sadoré, south-western Niger. The aim was to estimate the influence of the BSC coverage on the quantity and quality of water, particles and solutes exported during simulated rainfall events. Ten 1 m2 plots were selected based on their varying degree of BSC cover (4-89%) and type of underlying physical crust (structural or erosion crusts). The plots are located on similar sandy soil with moderate slope (3-6%). The experiments consisted of two rainfall events, spaced at a 22-hour interval: 60 mm/h for 20 min, and 120 mm/h for 10 min. During each experiment, detached particles and runoff water were collected and filtered in the laboratory. C and N contents were determined in both water and sediment samples. These analyses were completed by measurements of phospholipid fatty acids and chlorophyll a contents in sediments and BSC samples collected before and after the rainfall. Mineral N and microbial biomass carbon of BSC samples were also analysed. The results confirmed that BSC reduce the loss of particles and exert a protective effect on soils with regard to particle detachment by raindrops. However, there is no general relationship between the BSC coverage and the loss of C and N due to runoff. On the contrary, the C and N content in the sediments is negatively correlated with their mass. The type of physical crust on which the BSC develop also has to be taken into account. These results will contribute to the region-wide modeling of the role of BSC in biogeochemical cycles.

  18. Cascading uncertainties in flood inundation models to uncertain estimates of damage and loss

    NASA Astrophysics Data System (ADS)

    Fewtrell, Timothy; Michel, Gero; Ntelekos, Alexandros; Bates, Paul

    2010-05-01

    The complexity of flood processes, particularly in urban environments, and the difficulties of collecting data during flood events, present significant and particular challenges to modellers, especially when considering large geographic areas. As a result, the modelling process incorporates a number of areas of uncertainty during model conceptualisation, construction and evaluation. There is a wealth of literature detailing the relative magnitudes of uncertainties in numerical flood input data (e.g. boundary conditions, model resolution and friction specification) for a wide variety of flood inundation scenarios (e.g. fluvial inundation and surface water flooding). Indeed, recent UK-funded projects (e.g. FREE) have explicitly examined the effect of cascading uncertainties in ensembles of GCM output through rainfall-runoff models to hydraulic flood inundation models. However, there has been little work examining the effect of cascading uncertainties in flood hazard ensembles to estimates of damage and loss, the quantity of interest when assessing flood risk. Furthermore, vulnerability is possibly the largest area of uncertainty for (re-)insurers as in-depth and reliable knowledge of portfolios is difficult to obtain. Insurance industry CAT models attempt to represent a credible range of flood events over large geographic areas and as such examining all sources of uncertainty is not computationally tractable. However, the insurance industry is also marked by a trend towards an increasing need to understand the variability in flood loss estimates derived from these CAT models. In order to assess the relative importance of uncertainties in flood inundation models and depth/damage curves, hypothetical 1-in-100 and 1-in-200 year return period flood events are propagated through the Greenwich embayment in London, UK. Errors resulting from topographic smoothing, friction specification and inflow boundary conditions are cascaded to form an ensemble of flood levels and

  19. A multiple-approach radiometric age estimate for the Rotoiti and Earthquake Flat eruptions, New Zealand, with implications for the MIS 4/3 boundary

    USGS Publications Warehouse

    Wilson, C.J.N.; Rhoades, D.A.; Lanphere, M.A.; Calvert, A.T.; Houghton, B.F.; Weaver, S.D.; Cole, J.W.

    2007-01-01

    Pyroclastic fall deposits of the paired Rotoiti and Earthquake Flat eruptions from the Taupo Volcanic Zone (New Zealand) combine to form a widespread isochronous horizon over much of northern New Zealand and the southwest Pacific. This horizon is important for correlating climatic and environmental changes during the Last Glacial period, but has been the subject of numerous disparate age estimates between 35.1±2.8 and 71±6 ka (all errors are 1 s.d.), obtained by a variety of techniques. A potassium-argon (K-Ar) age of 64±4 ka was previously determined on bracketing lavas at Mayor Island volcano, offshore from the Taupo Volcanic Zone. We present a new, more-precise 40Ar/39Ar age determination on a lava flow on Mayor Island, that shortly post-dates the Rotoiti/Earthquake Flat fall deposits, of 58.5±1.1 ka. This value, coupled with existing ages from underlying lavas, yield a new estimate for the age of the combined eruptions of 61.0±1.4 ka, which is consistent with U-Th disequilibrium model-age data for zircons from the Rotoiti deposits. Direct 40Ar/39Ar age determinations of plagioclase and biotite from the Rotoiti and Earthquake Flat eruption products yield variable values between 49.6±2.8 and 125.3±10.0 ka, with the scatter attributed to low radiogenic Ar yields, and/or alteration, and/or inheritance of xenocrystic material with inherited Ar. Rotoiti/Earthquake Flat fall deposits occur in New Zealand in association with palynological indicators of mild climate, attributed to Marine Isotope Stage (MIS) 3 and thus used to suggest an age that is post-59 ka. The natures of the criteria used to define the MIS 4/3 boundary in the Northern and Southern hemispheres, however, imply that the new 61 ka age for the Rotoiti/Earthquake Flat eruption deposits will provide the inverse, namely, a more accurate isochronous marker for correlating diverse changes across the MIS 4/3 boundary in the southwest Pacific. © 2007 Elsevier Ltd. All rights reserved.

  20. Use Of Scenario Ground Motion Maps In Earthquake Engineering

    NASA Astrophysics Data System (ADS)

    Somerville, P. G.

    2001-12-01

    Design ground motions are defined probabilistically in building codes used in the United States. However, ground motion maps of scenario earthquakes have some important applications. One is the development of emergency response plans by government agencies. Another is modeling the response of lifeline systems, which depends on the geographical distribution of shaking and of the elements of the lifeline system. A third is the estimation of maximum loss for a portfolio of structures that are owned or insured by a single organization. In all of these cases, the required seismic hazard information relates to the occurrence of a single event. In all of these applications, it may be important to know the likelihood of occurrence of the earthquake event and of the ensuing ground motions. This kind of information is provided by probabilistic seismic hazard analysis (PSHA), which considers the effects of a large number of earthquake scenarios that involve earthquakes of different magnitudes occurring on different seismic sources. Because the PSHA involves a large number of earthquake scenarios, it is usually impractical to use the detailed earthquake source and ground motion models that are used to generate the ground motions of a single earthquake scenario. Instead, simple earthquake source and ground motion models are used. By deaggregating the probabilistic seismic hazard, it is possible to identify the magnitude-distance combinations that dominate the seismic hazard at a specified annual frequency of occurrence. These magnitude-distance combinations can then be used to identify the most relevant earthquake scenarios. However, differences in the level of detail in the earthquake source and ground motion models used by PSHA and scenario calculations will in general lead to discrepancy between the ground motions of the scenario earthquake and the ground motions of an equivalent earthquake scenario as represented in the PSHA, potentially leading to misidentification of the
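
    Deaggregation of the probabilistic hazard, as described above, amounts to computing each magnitude-distance scenario's share of the total exceedance rate for a target ground motion. The sketch below uses a generic lognormal ground-motion placeholder with made-up coefficients, purely to show the bookkeeping:

        import numpy as np
        from scipy.stats import norm

        def ln_sa_median(m, r_km):
            """Placeholder ground-motion relation: ln of median Sa in g (illustrative coefficients)."""
            return -4.0 + 0.9 * m - 1.2 * np.log(r_km + 10.0)

        def scenario_exceedance_rates(scenarios, target_g, sigma_ln=0.6):
            rates = []
            for m, r_km, annual_rate in scenarios:
                p_exceed = 1.0 - norm.cdf(np.log(target_g), loc=ln_sa_median(m, r_km), scale=sigma_ln)
                rates.append(annual_rate * p_exceed)
            return np.array(rates)

        scenarios = [(6.5, 15.0, 0.01), (7.0, 40.0, 0.004), (7.8, 80.0, 0.001)]   # (M, R km, rate/yr)
        rates = scenario_exceedance_rates(scenarios, target_g=0.3)
        print(rates / rates.sum())   # deaggregation weights: the dominant (M, R) pair defines the scenario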

  1. Estimating fish exploitation and aquatic habitat loss across diffuse inland recreational fisheries.

    PubMed

    de Kerckhove, Derrick Tupper; Minns, Charles Kenneth; Chu, Cindy

    2015-01-01

    The current state of many freshwater fish stocks worldwide is largely unknown but suspected to be vulnerable to exploitation from recreational fisheries and habitat degradation. Both these factors, combined with complex ecological dynamics and the diffuse nature of inland fisheries could lead to an invisible collapse: the drastic decline in fish stocks without great public or management awareness. In this study we provide a method to address the pervasive knowledge gaps in regional rates of exploitation and habitat degradation, and demonstrate its use in one of North America's largest and most diffuse recreational freshwater fisheries (Ontario, Canada). We estimated that (1) fish stocks were highly exploited and in apparent danger of collapse in management zones close to large population centres, and (2) fish habitat was under a low but constant threat of degradation at rates comparable to deforestation in Ontario and throughout Canada. These findings confirm some commonly held, but difficult to quantify, beliefs in inland fisheries management but also provide some further insights including (1) large anthropogenic projects greater than one hectare could contribute much more to fish habitat loss on an area basis than the cumulative effect of smaller projects within one year, (2) hooking mortality from catch-and-release fisheries is likely a greater source of mortality than the harvest itself, and (3) in most northern management zones over 50% of the fisheries resources are not yet accessible to anglers. While this model primarily provides a framework to prioritize management decisions and further targeted stock assessments, we note that our regional estimates of fisheries productivity and exploitation were similar to broadscale monitoring efforts by the Province of Ontario. We discuss the policy implications from our results and extending the model to other jurisdictions and countries. PMID:25875790

  2. Estimating Fish Exploitation and Aquatic Habitat Loss across Diffuse Inland Recreational Fisheries

    PubMed Central

    de Kerckhove, Derrick Tupper; Minns, Charles Kenneth; Chu, Cindy

    2015-01-01

    The current state of many freshwater fish stocks worldwide is largely unknown but suspected to be vulnerable to exploitation from recreational fisheries and habitat degradation. Both these factors, combined with complex ecological dynamics and the diffuse nature of inland fisheries could lead to an invisible collapse: the drastic decline in fish stocks without great public or management awareness. In this study we provide a method to address the pervasive knowledge gaps in regional rates of exploitation and habitat degradation, and demonstrate its use in one of North America’s largest and most diffuse recreational freshwater fisheries (Ontario, Canada). We estimated that 1) fish stocks were highly exploited and in apparent danger of collapse in management zones close to large population centres, and 2) fish habitat was under a low but constant threat of degradation at rates comparable to deforestation in Ontario and throughout Canada. These findings confirm some commonly held, but difficult to quantify, beliefs in inland fisheries management but also provide some further insights including 1) large anthropogenic projects greater than one hectare could contribute much more to fish habitat loss on an area basis than the cumulative effect of smaller projects within one year, 2) hooking mortality from catch-and-release fisheries is likely a greater source of mortality than the harvest itself, and 3) in most northern management zones over 50% of the fisheries resources are not yet accessible to anglers. While this model primarily provides a framework to prioritize management decisions and further targeted stock assessments, we note that our regional estimates of fisheries productivity and exploitation were similar to broadscale monitoring efforts by the Province of Ontario. We discuss the policy implications from our results and extending the model to other jurisdictions and countries. PMID:25875790

  3. Estimating fish exploitation and aquatic habitat loss across diffuse inland recreational fisheries.

    PubMed

    de Kerckhove, Derrick Tupper; Minns, Charles Kenneth; Chu, Cindy

    2015-01-01

    The current state of many freshwater fish stocks worldwide is largely unknown but suspected to be vulnerable to exploitation from recreational fisheries and habitat degradation. Both these factors, combined with complex ecological dynamics and the diffuse nature of inland fisheries could lead to an invisible collapse: the drastic decline in fish stocks without great public or management awareness. In this study we provide a method to address the pervasive knowledge gaps in regional rates of exploitation and habitat degradation, and demonstrate its use in one of North America's largest and most diffuse recreational freshwater fisheries (Ontario, Canada). We estimated that (1) fish stocks were highly exploited and in apparent danger of collapse in management zones close to large population centres, and (2) fish habitat was under a low but constant threat of degradation at rates comparable to deforestation in Ontario and throughout Canada. These findings confirm some commonly held, but difficult to quantify, beliefs in inland fisheries management but also provide some further insights including (1) large anthropogenic projects greater than one hectare could contribute much more to fish habitat loss on an area basis than the cumulative effect of smaller projects within one year, (2) hooking mortality from catch-and-release fisheries is likely a greater source of mortality than the harvest itself, and (3) in most northern management zones over 50% of the fisheries resources are not yet accessible to anglers. While this model primarily provides a framework to prioritize management decisions and further targeted stock assessments, we note that our regional estimates of fisheries productivity and exploitation were similar to broadscale monitoring efforts by the Province of Ontario. We discuss the policy implications from our results and extending the model to other jurisdictions and countries.

  4. Variability of ozone loss during Arctic winter (1991 to 2000) estimated from UARS Microwave Limb Sounder measurements

    NASA Technical Reports Server (NTRS)

    Manney, G.; Froidevaux, F.; Santee, M. L.; Livesey, N. J.; Sabutis, J. L.; Waters, J. W.

    2002-01-01

    A comprehensive analysis of version 5 Upper Atmosphere Research Satellite (UARS) Microwave Limb Sounder (MLS) ozone data using a Lagrangian Transport (LT) model provides estimates of chemical ozone depletion for the 1991-1992 through 1997-1998 Arctic winters. These new estimates give a consistent, three-dimensional picture of ozone loss during seven Arctic winters; previous Arctic ozone loss estimates from MLS were based on various earlier data versions and were done only for late winter and only for a subset of the years observed by MLS. We find large interannual variability in the amount, timing, and patterns of ozone depletion and in the degree to which chemical loss is masked by dynamical processes.

  5. Nitrogen Loss Estimation Worksheet (NLEW): an agricultural nitrogen loading reduction tracking tool.

    PubMed

    Osmond, D L; Xu, L; Ranells, N N; Hodges, S C; Hansard, R; Pratt, S H

    2001-11-09

    The Neuse River Basin in North Carolina was regulated in 1998, requiring that all pollution sources (point and nonpoint) reduce nitrogen (N) loading into the Neuse Estuary by 30%. Point source N loading has already been reduced by approximately 35%. The diffuse nature of nonpoint source pollution, and its spatial and temporal variability, makes it a more difficult problem to treat. Agriculture is believed to contribute over 50% of the total N load to the river. In order to reduce these N inputs, best management practices (BMPs) are necessary to control the delivery of N from agricultural activities to water resources and to prevent impacts to the physical and biological integrity of surface and ground water. To provide greater flexibility to the agricultural community beyond standard BMPs (nutrient management, riparian buffers, and water-control structures), an agricultural N accounting tool, called the Nitrogen Loss Estimation Worksheet (NLEW), was developed to track N reductions due to BMP implementation. NLEW uses a modified N-balance equation that accounts for some N inputs as well as N reductions from nutrient management and other BMPs. It works at both the field and county scales. The tool has been used by counties to evaluate different N reduction strategies for achieving the 30% targeted reduction.
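    The worksheet's equations are not given in the record; purely as an illustration of BMP-based N-reduction accounting, the sketch below applies a simple hypothetical N balance in which a loss fraction of the applied N is reduced multiplicatively by assumed BMP efficiencies. The coefficients and function names are placeholders, not the actual NLEW formulation.

    ```python
    # Illustrative (hypothetical) field-level nitrogen accounting, loosely in the
    # spirit of a BMP-tracking worksheet; coefficients are placeholders, not NLEW's.

    def field_n_loss(n_inputs_kg_ha, loss_fraction, bmp_efficiencies):
        """Estimate N loss (kg/ha) after applying BMP reductions multiplicatively."""
        loss = n_inputs_kg_ha * loss_fraction
        for eff in bmp_efficiencies:          # e.g. 0.30 = a 30% reduction
            loss *= (1.0 - eff)
        return loss

    # Example: 150 kg N/ha applied, 40% of it assumed lost without BMPs,
    # then nutrient management (20%) and a riparian buffer (30%) installed.
    before = field_n_loss(150, 0.40, [])
    after = field_n_loss(150, 0.40, [0.20, 0.30])
    print(f"baseline {before:.1f} kg/ha, with BMPs {after:.1f} kg/ha "
          f"({100 * (1 - after / before):.0f}% reduction)")
    ```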

  6. Comparison of heating losses and macro thermogravimetric analysis procedures for estimating unburned carbon in combustion residues

    SciTech Connect

    Stuart C. Burris; Dong Li; John T. Riley

    2005-08-01

    One of the most important indices for evaluating the combustion efficiencies of boilers, as well as the commercial value of the produced fly ash, is the unburned carbon in fly ash. The most common method currently used by combustion engineers to estimate the amount of unburned carbon in fly ash is to equate it to the dry loss on ignition (LOI) value. There seems to be no reported systematic study linking LOI values with the true carbon content of ashes and combustion residues. In this study, the LOI values for 35 combustion residues were determined at 500, 750, and 950°C, using a macro thermogravimetric analyzer. The carbon contents of the combustion residues and the residues from the LOI determinations were then measured. For the samples in this study, it was determined that temperatures of >790°C should be used to achieve complete carbon burnoff. For low-percentage-carbon combustion residues, there is very poor agreement between the unburned carbon contents and the LOI values. This is especially true if the samples are exposed to the atmosphere for extended periods of time, because the combustion residues readily absorb moisture and acidic gases. For high-percentage-carbon combustion residues, there is good agreement between the unburned carbon and the LOI values, especially if the residues are relatively fresh.
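    For readers unfamiliar with the LOI metric itself, the short sketch below computes dry loss on ignition from before/after ignition masses and contrasts it with an independently measured carbon content; the sample values are invented for illustration and are not data from the study.

    ```python
    # Minimal sketch: dry loss on ignition (LOI) vs. measured unburned carbon.
    # Values are invented for illustration, not data from the study.

    def loss_on_ignition(dry_mass_g, ignited_mass_g):
        """LOI as a percentage of the dry sample mass."""
        return 100.0 * (dry_mass_g - ignited_mass_g) / dry_mass_g

    samples = [
        # (label, dry mass g, ignited mass g, measured carbon %)
        ("low-C residue", 1.000, 0.975, 1.1),
        ("high-C residue", 1.000, 0.820, 17.2),
    ]
    for name, m0, m1, c_meas in samples:
        loi = loss_on_ignition(m0, m1)
        print(f"{name}: LOI = {loi:.1f}%, measured C = {c_meas:.1f}%, "
              f"difference = {loi - c_meas:+.1f} points")
    ```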

  7. A new tool for estimating phosphorus loss from cattle barnyards and outdoor lots

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorus (P) loss from agriculture can compromise quality of receiving water bodies. For cattle farms, P can be lost from cropland, pastures, and outdoor animal lots. We developed a new model that predicts annual runoff, total solids loss, and total and dissolved P loss from cattle lots. The model...

  8. Estimating the magnitude of prediction uncertainties for field-scale P loss models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...

  9. Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...

  10. Estimating the tsunami source model from the deposits: Testing the reliability of the method following the 2011 off the Pacific coast of Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Hashimoto, K.; Goto, K.; Sugawara, D.; Abe, T.; Imamura, F.

    2012-12-01

    Information on the magnitude of past tsunamis is essential for evaluating the risks from low-frequency, large-scale earthquakes and tsunamis. Numerical simulation techniques are used to reproduce propagation and inundation by modern tsunamis and to estimate the nature of the wave source, as represented by the earthquake magnitude and focal mechanism. The simulated results are validated against various kinds of tsunami records. In the same way, numerical simulations of historical tsunamis are based on the available historical records of the event. However, historical records are sometimes too sparse, and too abstract, to quantify run-up heights and inundation distances. To date, tsunami deposits have been widely used as physical evidence to supplement the historical accounts. Insights into the heights and inundation distances, as well as the waveforms, are derived from tsunami deposits. The fault parameters of the past earthquake, such as strike, dip, and slip, are determined under the assumption that the distribution of tsunami deposits closely represents the actual inundation distance. An important question arises here: can these fault parameters be determined accurately from the distribution of tsunami deposits? The answer can be derived by applying the methodology to modern examples. The 2011 off the Pacific coast of Tohoku Earthquake Tsunami deposited an enormous volume of sediment on the coastal plains. It is reported that the focal mechanism and the magnitude of the 2011 event were unusual; based on instrumental observation data, the maximum fault slip is estimated at about 30 m [JMA, 2011] or even 56 m [Geographical Information Authority of Japan, 2011], the latter derived from seafloor and on-land geodetic observations from numerous GPS stations. This is considerably beyond the empirical relationship between slip amount and earthquake magnitude. In the

  11. Modeling earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Charpentier, Arthur; Durand, Marilou

    2015-07-01

    In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where the parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence and to estimate the probability of occurrence of several earthquakes within a year or a decade.
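    The abstract specifies the generative structure (Pareto magnitudes with a waiting-time-dependent tail index, Gamma or Weibull waiting times with magnitude-dependent parameters) but not the fitted link functions. The sketch below simulates such a conditional dynamic with arbitrary, hypothetical links purely to show the alternating structure; it is not the authors' fitted model.

    ```python
    # Sketch of a conditional earthquake-dynamics simulator in the spirit of the
    # abstract: Pareto-distributed magnitudes whose tail index depends on the
    # previous waiting time, and Gamma-distributed waiting times whose scale
    # depends on the previous magnitude. All link functions and constants are
    # hypothetical placeholders, not the fitted model of the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    M_MIN = 4.0  # lower magnitude cutoff of the Pareto tail (hypothetical)

    def draw_magnitude(prev_wait_days):
        alpha = 2.0 + 0.005 * prev_wait_days                  # hypothetical tail-index link
        return M_MIN * (1.0 - rng.random()) ** (-1.0 / alpha)  # inverse-CDF Pareto sample

    def draw_waiting_time(prev_magnitude):
        shape = 1.2                                            # hypothetical Gamma shape
        scale = 90.0 * np.exp(0.5 * (prev_magnitude - M_MIN))  # days
        return rng.gamma(shape, scale)

    def simulate(years=10.0, n_runs=2000, m_target=6.0):
        """Mean number of M >= m_target events per run within the horizon."""
        horizon, counts = years * 365.25, []
        for _ in range(n_runs):
            t, mag, n = 0.0, M_MIN, 0
            while True:
                wait = draw_waiting_time(mag)   # days until the next event
                t += wait
                if t >= horizon:
                    break
                mag = draw_magnitude(wait)      # magnitude of the event at time t
                if mag >= m_target:
                    n += 1
            counts.append(n)
        return np.mean(counts)

    print(f"mean number of M>=6 events per decade ~ {simulate():.1f}")
    ```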

  12. Mass wasting triggered by the 5 March 1987 Ecuador earthquakes

    USGS Publications Warehouse

    Schuster, R.L.; Nieto, A.S.; O'Rourke, T. D.; Crespo, E.; Plaza-Nieto, G.

    1996-01-01

    On 5 March 1987, two earthquakes (Ms=6.1 and Ms=6.9) occurred about 25 km north of Reventador Volcano, along the eastern slopes of the Andes Mountains in northeastern Ecuador. Although the shaking damaged structures in towns and villages near the epicentral area, the economic and social losses directly due to earthquake shaking were small compared to the effects of catastrophic earthquake-triggered mass wasting and flooding. About 600 mm of rain fell in the region in the month preceding the earthquakes; thus, the surficial soils had high moisture contents. Slope failures commonly started as thin slides, which rapidly turned into fluid debris avalanches and debris flows. The surficial soils and thick vegetation covering them flowed down the slopes into minor tributaries and then were carried into major rivers. Rock and earth slides, debris avalanches, debris and mud flows, and resulting floods destroyed about 40 km of the Trans-Ecuadorian oil pipeline and the only highway from Quito to Ecuador's northeastern rain forests and oil fields. Estimates of total volume of earthquake-induced mass wastage ranged from 75-110 million m3. Economic losses were about US$ 1 billion. Nearly all of the approximately 1000 deaths from the earthquakes were a consequence of mass wasting and/ or flooding.

  13. Earthquakes; March-April 1975

    USGS Publications Warehouse

    Person, W.J.

    1975-01-01

    There were no major earthquakes (magnitude 7.0-7.9) in March or April; however, there were earthquake fatalities in Chile, Iran, and Venezuela, and approximately 35 earthquake-related injuries were reported around the world. In the United States, a magnitude 6.0 earthquake struck the Idaho-Utah border region. Damage was estimated at about a million dollars. The shock was felt over a wide area and was the largest to hit the continental United States since the San Fernando earthquake of February 1971.

  14. Teleseismic waveform analysis of deep-focus earthquake for the preliminary estimation of crustal structure of the northern part of Korea

    NASA Astrophysics Data System (ADS)

    Cho, H.; Shin, J.

    2010-12-01

    Crustal structures in several areas of the northern part of Korea are estimated using the long-period teleseismic depth phase pP and the Moho underside-reflected phase pMP generated by deep-focus earthquakes. The waveform analysis is performed by comparing recordings with synthetics of these phases computed using a hybrid reflectivity method: a WKBJ approximation for propagation in the vertically inhomogeneous mantle and the Haskell propagator matrix in the layered crust and upper mantle. The pMP phase is a precursor to the surface-reflected pP phase and its amplitude is relatively small. Analysis of the vertical components of P, pP, and pMP provides an estimate of the structure on the source side. Deep-focus earthquakes occurring in the border area of North Korea, China, and Russia are well suited for this study, and seismograms recorded at GSN stations in Southeast Asia provide clear identification of the pMP and pP phases. The preliminary analysis employs a deep-focus (580 km) earthquake of magnitude 6.3 mb whose epicenter is located in the border region between eastern Russia and northeastern China. Seismograms bandpass filtered at 0.01-0.2 Hz clearly exhibit pMP and pP phases recorded at four GSN stations (BTDF, PSI, COCO, and DGAR). Shin and Baag (2000) suggested approximate crustal thicknesses for the region between northern Korea and northeastern China. The crustal thickness appears to vary from 25 to 35 km, which is compatible with the preliminary analysis.

  15. Source Mechanism of May 30, 2015 Bonin Islands, Japan Deep Earthquake (Mw7.8) Estimated by Broadband Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Nakamura, T.; Miyoshi, T.

    2015-12-01

    The May 30, 2015 Bonin Islands, Japan earthquake (Mw 7.8, depth 679.9 km, GCMT) was one of the deepest earthquakes ever recorded. We apply the waveform inversion technique (Kikuchi & Kanamori, 1991) to obtain the slip distribution on the source fault of this earthquake in the same manner as our previous work (Nakamura et al., 2010). We use 60 broadband seismograms from IRIS GSN seismic stations at epicentral distances between 30 and 90 degrees. The broadband original data are integrated to ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. We use the velocity structure model IASP91 to calculate the wavefield near the source and stations. We assume a square fault with a side length of 50 km. We obtain source rupture models for both nodal planes, with high dip angle (74 degrees) and low dip angle (26 degrees), and compare the synthetic seismograms with the observations to determine which source rupture model explains the observations better. We calculate broadband synthetic seismograms for these source propagation models using the spectral-element method (Komatitsch & Tromp, 2001). We use the new Earth Simulator system at JAMSTEC to compute the synthetic seismograms. The simulations are performed on 7,776 processors, which require 1,944 nodes of the Earth Simulator. On this number of nodes, a simulation of 50 minutes of wave propagation accurate at periods of 3.8 seconds and longer requires about 5 hours of CPU time. Comparisons of the synthetic waveforms with the observations at teleseismic stations show that the arrival time of the pP wave calculated for a depth of 679 km matches the observations well, which demonstrates that the earthquake indeed occurred below the 660 km discontinuity. In our present forward simulations, the source rupture model on the low-dip-angle fault plane is more likely to explain the observations.

  16. Aura's Microwave Limb Sounder Estimates of Ozone Loss, 2004/2005 Arctic Winter

    NASA Technical Reports Server (NTRS)

    2005-01-01

    These data maps from Aura's Microwave Limb Sounder depict levels of hydrogen chloride (top), chlorine monoxide (center), and ozone (bottom) at an altitude of approximately 19 kilometers (490,000 feet) on selected days during the 2004-05 Arctic winter. White contours demark the boundary of the winter polar vortex.

    The maps from December 23, 2004, illustrate vortex conditions shortly before significant chemical ozone destruction began. By January 23, 2005, chlorine is substantially converted from the 'safe' form of hydrogen chloride, which is depleted throughout the vortex, to the 'unsafe' form of chlorine monoxide, which is enhanced in the portions of the region that receive sunlight at that time of year. Ozone increased over the month as a result of dynamical effects, and chemical ozone destruction is just beginning at this time. A brief period of intense cold a few days later promotes further chlorine activation and consequent changes in hydrogen chloride and chlorine monoxide levels on January 27, 2005. Peak chlorine monoxide enhancement occurs in early February.

    By February 24, 2005, chlorine deactivation is well underway, with chlorine monoxide abundances dropping and hydrogen chloride abundances rising. Almost all chlorine monoxide has been quenched by March 10, 2005. The fact that hydrogen chloride has not fully rebounded to December abundances suggests that some of that chemical was recovered into another chlorine reservoir species.

    Ozone maps for January 27, 2005, through March 10, 2005, show indications of mixing of air from outside the polar vortex into it. Such occurrences throughout this winter, especially in late February and early March, complicate analyses, and detailed calculations are required to rigorously disentangle chemical and dynamical effects and accurately diagnose chemical ozone destruction.

    Based on various analyses of Microwave Limb Sounder data, we estimate that maximum local ozone loss of approximately 2 parts

  17. Influence of Agropastoral System Components on Mountain Grassland Vulnerability Estimated by Connectivity Loss.

    PubMed

    Gartzia, Maite; Fillat, Federico; Pérez-Cabello, Fernando; Alados, Concepción L

    2016-01-01

    Over the last decades, global changes have altered the structure and properties of natural and semi-natural mountain grasslands. Those changes have contributed to grassland loss mainly through colonization by woody species at low elevations, and increases in biomass and greenness at high elevations. Nevertheless, the interactions between agropastoral components; i.e., ecological (grassland, environmental, and geolocation properties), social, and economic components, and their effects on the grasslands are still poorly understood. We estimated the vulnerability of dense grasslands in the Central Pyrenees, Spain, based on the connectivity loss (CL) among grassland patches that has occurred between the 1980s and the 2000s, as a result of i) an increase in biomass and greenness (CL-IBG), ii) woody encroachment (CL-WE), or iii) a decrease in biomass and greenness (CL-DBG). The environmental and grassland components of the agropastoral system were associated with the three processes, especially CL-IBG and CL-WE, in relation with the succession of vegetation toward climax communities, fostered by land abandonment and exacerbated by climate warming. CL-IBG occurred in pasture units that had a high proportion of dense grasslands and low current livestock pressure. CL-WE was most strongly associated with pasture units that had a high proportion of woody habitat and a large reduction in sheep and goat pressure between the 1930s and the 2000s. The economic component was correlated with the CL-WE and the CL-DBG; specifically, expensive pastures were the most productive and could maintain the highest rates of livestock grazing, which slowed down woody encroachment, but caused grassland degradation and DBG. In addition, CL-DBG was associated with geolocation of grasslands, mainly because livestock tend to graze closer to passable roads and buildings, where they cause grassland degradation. To properly manage the grasslands, an integrated management plan must be developed that

  19. Influence of Agropastoral System Components on Mountain Grassland Vulnerability Estimated by Connectivity Loss

    PubMed Central

    Fillat, Federico; Pérez-Cabello, Fernando; Alados, Concepción L.

    2016-01-01

    Over the last decades, global changes have altered the structure and properties of natural and semi-natural mountain grasslands. Those changes have contributed to grassland loss mainly through colonization by woody species at low elevations, and increases in biomass and greenness at high elevations. Nevertheless, the interactions between agropastoral components; i.e., ecological (grassland, environmental, and geolocation properties), social, and economic components, and their effects on the grasslands are still poorly understood. We estimated the vulnerability of dense grasslands in the Central Pyrenees, Spain, based on the connectivity loss (CL) among grassland patches that has occurred between the 1980s and the 2000s, as a result of i) an increase in biomass and greenness (CL-IBG), ii) woody encroachment (CL-WE), or iii) a decrease in biomass and greenness (CL-DBG). The environmental and grassland components of the agropastoral system were associated with the three processes, especially CL-IBG and CL-WE, in relation with the succession of vegetation toward climax communities, fostered by land abandonment and exacerbated by climate warming. CL-IBG occurred in pasture units that had a high proportion of dense grasslands and low current livestock pressure. CL-WE was most strongly associated with pasture units that had a high proportion of woody habitat and a large reduction in sheep and goat pressure between the 1930s and the 2000s. The economic component was correlated with the CL-WE and the CL-DBG; specifically, expensive pastures were the most productive and could maintain the highest rates of livestock grazing, which slowed down woody encroachment, but caused grassland degradation and DBG. In addition, CL-DBG was associated with geolocation of grasslands, mainly because livestock tend to graze closer to passable roads and buildings, where they cause grassland degradation. To properly manage the grasslands, an integrated management plan must be developed that

  20. Testing the use of bulk organic δ13C, δ15N, and Corg:Ntot ratios to estimate subsidence during the 1964 great Alaska earthquake

    USGS Publications Warehouse

    Bender, Adrian M; Witter, Robert C.; Rogers, Matthew

    2015-01-01

    During the Mw 9.2 1964 great Alaska earthquake, Turnagain Arm near Girdwood, Alaska subsided 1.7 ± 0.1 m based on pre- and postearthquake leveling. The coseismic subsidence in 1964 caused equivalent sudden relative sea-level (RSL) rise that is stratigraphically preserved as mud-over-peat contacts where intertidal silt buried peaty marsh surfaces. Changes in intertidal microfossil assemblages across these contacts have been used to estimate subsidence in 1964 by applying quantitative microfossil transfer functions to reconstruct corresponding RSL rise. Here, we review the use of organic stable C and N isotope values and Corg:Ntot ratios as alternative proxies for reconstructing coseismic RSL changes, and report independent estimates of subsidence in 1964 by using δ13C values from intertidal sediment to assess RSL change caused by the earthquake. We observe that surface sediment δ13C values systematically decrease by ∼4‰ over the ∼2.5 m increase in elevation along three 60- to 100-m-long transects extending from intertidal mud flat to upland environments. We use a straightforward linear regression to quantify the relationship between modern sediment δ13C values and elevation (n = 84, R2 = 0.56). The linear regression provides a slope–intercept equation used to reconstruct the paleoelevation of the site before and after the earthquake based on δ13C values in sandy silt above and herbaceous peat below the 1964 contact. The regression standard error (average = ±0.59‰) reflects the modern isotopic variability at sites of similar surface elevation, and is equivalent to an uncertainty of ±0.4 m elevation with respect to Mean Higher High Water. To reduce potential errors in paleoelevation and subsidence estimates, we analyzed multiple sediment δ13C values in nine cores on a shore-perpendicular transect at Bird Point. Our method estimates 1.3 ± 0.4 m of coseismic RSL rise across the 1964 contact by taking the arithmetic mean of the
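    The workflow described here (regress modern surface-sediment δ13C on elevation, invert the relationship for samples above and below the 1964 contact, and difference the two paleoelevations) can be summarized in a short sketch. The calibration points and sample values below are invented placeholders; only the structure of the calculation follows the abstract.

    ```python
    # Sketch of the delta13C-to-paleoelevation workflow described in the abstract:
    # fit a modern delta13C vs. elevation regression, invert it for samples above
    # and below the 1964 contact, and difference the results to get subsidence.
    # All numeric values here are invented placeholders, not the study's data.
    import numpy as np

    # Hypothetical modern calibration set: elevation (m, rel. MHHW) and delta13C (permil)
    elev = np.array([-0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
    d13c = np.array([-17.0, -18.0, -18.5, -19.5, -20.5, -21.0])

    slope, intercept = np.polyfit(d13c, elev, 1)   # elevation as a function of delta13C

    def paleo_elevation(d13c_value):
        return slope * d13c_value + intercept

    pre_quake = paleo_elevation(-20.8)    # herbaceous peat below the 1964 contact
    post_quake = paleo_elevation(-18.2)   # sandy silt above the contact
    print(f"estimated coseismic RSL rise ~ {pre_quake - post_quake:.2f} m")
    ```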

  1. Re-estimation of glacier mass loss in Greenland from GRACE with correction of land-ocean leakage effects

    NASA Astrophysics Data System (ADS)

    Jin, Shuanggen; Zou, Fang

    2015-12-01

    The Gravity Recovery and Climate Experiment (GRACE) satellites can estimate the high-precision time-varying gravity field and changes of the Earth's surface mass, which have been widely used in studies of the water cycle and glacier mass balance. However, one of the larger errors in GRACE measurements, the land-ocean leakage effect, restricts high-precision retrieval of ocean mass and terrestrial water storage variations along the coasts, and particularly the estimation of mass loss in Greenland. The land-ocean leakage effect along the coasts of Greenland contaminates the mass loss signals with significant signal attenuation. In this paper, the glacier mass loss in Greenland from GRACE is re-estimated with correction of land-ocean leakage effects using forward gravity modeling. The loss of the Greenland ice sheet is -102.8 ± 9.01 Gt/a without removing the leakage effect, but -183.0 ± 19.91 Gt/a after removing it, for September 2003 to March 2008, in good agreement with the ICESat result of -184.8 ± 28.2 Gt/a. From January 2003 to December 2013, the total Greenland ice-sheet loss is -261.54 ± 6.12 Gt/a from GRACE measurements after removing the leakage effect, a correction of 42.4%, and two-thirds of the total glacier melting in Greenland over the past 11 years occurred in southern Greenland. The secular leakage effect on the glacier melting estimate is mainly located in the coastal areas, where larger glacier signals are significantly attenuated by leaking out into the ocean. Furthermore, the leakage signals also have remarkable effects on the seasonal and acceleration components of glacier mass loss in Greenland. A more significant acceleration of glacier mass loss in Greenland, -26.19 Gt/a², is found after correcting for leakage effects.

  2. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only, element-level system identification and input estimation technique for the simultaneous identification of modal parameters, input excitation time history, and structural features at the element level from earthquake-induced structural response signals. The method, named the Full Dynamic Compound Inverse Method (FDCIM), relaxes strong assumptions of earlier element-level techniques by working with a two-stage iterative algorithm. At each stage, a statistical averaging technique, a modification process, and a parameter projection strategy are jointly adopted to achieve stronger convergence of the identified estimates. The proposed method works deterministically and is developed entirely in state-space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, including noise-corrupted cases. The achieved results provide a necessary condition for demonstrating the effectiveness of the proposed identification method.

  3. Geodetic model of the 2015 April 25 Mw 7.8 Gorkha Nepal Earthquake and Mw 7.3 aftershock estimated from InSAR and GPS data

    NASA Astrophysics Data System (ADS)

    Feng, Guangcai; Li, Zhiwei; Shan, Xinjian; Zhang, Lei; Zhang, Guohong; Zhu, Jianjun

    2015-11-01

    We map the complete surface deformation of the 2015 Mw 7.8 Gorkha, Nepal earthquake and its Mw 7.3 aftershock with images from two parallel ALOS2 descending ScanSAR paths and two ascending Stripmap paths. The coseismic fault-slip model from a combined inversion of InSAR and GPS data reveals that this event was reverse-fault motion with a slight right-lateral strike-slip component. The maximum thrust-slip and right-lateral strike-slip values are 5.7 and 1.2 m, respectively, located at a depth of 7-15 km, southeast of the epicentre. The total seismic moment of 7.55 × 10^20 N m, corresponding to a moment magnitude Mw 7.89, is similar to the seismological estimates. Fault slip in both the main shock and the largest aftershock is absent from the upper thrust shallower than 7 km, indicating that there is a locked lower edge of the Himalayan Main Frontal Thrust and that a future seismic disaster is not unexpected in this area. We also find that the energy released in this earthquake is much less than the accumulated moment deficit over the past seven centuries estimated in previous studies, so the region surrounding Kathmandu is still under the threat of seismic hazards.

  4. Estimating the probability of occurrence of earthquakes (M>6) in the Western part of the Corinth rift using fault-based and classical seismotectonic approaches.

    NASA Astrophysics Data System (ADS)

    Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène

    2014-05-01

    The Corinth rift, Greece, is one of the regions with the highest strain rates in the Euro-Mediterranean area and as such has long been identified as a site of major importance for earthquake studies in Europe (20 years of research by the Corinth Rift Laboratory and 4 years of in-depth studies by the ANR-SISCOR project). This enhanced knowledge, acquired in particular in the western part of the Gulf of Corinth, an area of about 50 by 40 km between the city of Patras to the west and the city of Aigion to the east, provides an excellent opportunity to compare the fault-based and classical seismotectonic approaches currently used in seismic hazard assessment studies. A homogeneous earthquake catalogue was first constructed for the Greek territory based on two existing earthquake catalogues available for Greece (National Observatory of Athens and Thessaloniki). In spite of numerous documented damaging earthquakes, only a limited number of macroseismic intensity data points is available in the existing databases for the damaging earthquakes affecting the west Corinth rift region. A re-interpretation of the macroseismic intensity field for numerous events was thus conducted, following an in-depth analysis of existing and newly found documentation (for details see Rovida et al., EGU2014-6346). In parallel, the construction of a comprehensive database of all relevant geological, geodetic and geophysical information (available in the literature and recently collected within the ANR-SISCOR project) made it possible to propose rupture geometries for the different fault systems identified in the study region. The combination of the new earthquake parameters and the newly defined fault geometries, together with the existing published paleoseismic data, supported a suite of rupture scenarios including the activation of multiple fault segments. The methodology used to achieve this goal consisted in setting up a logic tree that reflected the opinion of all the members of the ANR

  5. Estimation of heat loss from a cylindrical cavity receiver based on simultaneous energy and exergy analyses

    NASA Astrophysics Data System (ADS)

    Madadi, Vahid; Tavakoli, Touraj; Rahimi, Amir

    2015-03-01

    This study undertakes an experimental and theoretical investigation of heat losses from a cylindrical cavity receiver employed in a solar parabolic dish collector. Simultaneous energy and exergy equations are used for a thermal performance analysis of the system. The effects of wind speed and wind direction on convection loss have also been investigated, as have the effects of operational parameters, such as heat transfer fluid mass flow rate and wind speed, and structural parameters, such as receiver geometry and inclination. The portion of radiative heat loss is less than 10%. An empirical, simplified correlation for estimating the dimensionless convective heat transfer coefficient in terms of the Re number and the average receiver wall temperature is proposed. This correlation is applicable for a wind speed range of 0.1 to 10 m/s. Moreover, the proposed correlation for the Nu number is validated using experimental data obtained through experiments carried out with a conical receiver with two aperture diameters. The coefficient of determination R^2 and the normalized root

  6. Recent wetland land loss due to hurricanes: improved estimates based upon multiple source images

    USGS Publications Warehouse

    Kranenburg, Christine J.; Palaseanu-Lovejoy, Monica; Barras, John A.; Brock, John C.; Wang, Ping; Rosati, Julie D.; Roberts, Tiffany M.

    2011-01-01

    The objective of this study was to provide a moderate resolution 30-m fractional water map of the Chenier Plain for 2003, 2006 and 2009 by using information contained in high-resolution satellite imagery of a subset of the study area. Indices and transforms pertaining to vegetation and water were created using the high-resolution imagery, and a threshold was applied to obtain a categorical land/water map. The high-resolution data was used to train a decision-tree classifier to estimate percent water in a lower resolution (Landsat) image. Two new water indices based on the tasseled cap transformation were proposed for IKONOS imagery in wetland environments and more than 700 input parameter combinations were considered for each Landsat image classified. Final selection and thresholding of the resulting percent water maps involved over 5,000 unambiguous classified random points using corresponding 1-m resolution aerial photographs, and a statistical optimization procedure to determine the threshold at which the maximum Kappa coefficient occurs. Each selected dataset has a Kappa coefficient, percent correctly classified (PCC) water, land and total greater than 90%. An accuracy assessment using 1,000 independent random points was performed. Using the validation points, the PCC values decreased to around 90%. The time series change analysis indicated that due to Hurricane Rita, the study area lost 6.5% of marsh area, and transient changes were less than 3% for either land or water. Hurricane Ike resulted in an additional 8% land loss, although not enough time has passed to discriminate between persistent and transient changes.

  7. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    NASA Astrophysics Data System (ADS)

    Bird, P.

    2010-12-01

    The hope expressed in the title question above can be contradicted in 5 ways, listed below. To summarize, an earthquake rupture can be larger than anticipated either because the fault system has not been fully mapped, or because the rupture is not limited to the pre-existing fault network. 1. Geologic mapping of faults is always incomplete due to four limitations: (a) Map-scale limitation: Faults below a certain (scale-dependent) apparent offset are omitted; (b) Field-time limitation: The most obvious fault(s) get(s) the most attention; (c) Outcrop limitation: You can't map what you can't see; and (d) Lithologic-contrast limitation: Intra-formation faults can be tough to map, so they are often assumed to be minor and omitted. If mapping is incomplete, fault traces may be longer and/or better-connected than we realize. 2. Fault trace “lengths” are unreliable guides to maximum magnitude. Fault networks have multiply-branching, quasi-fractal shapes, so fault “length” may be meaningless. Naming conventions for main strands are unclear, and rarely reviewed. Gaps due to Quaternary alluvial cover may not reflect deeper seismogenic structure. Mapped kinks and other “segment boundary asperities” may be only shallow structures. Also, some recent earthquakes have jumped and linked “separate” faults (Landers, California 1992; Denali, Alaska, 2002) [Wesnousky, 2006; Black, 2008]. 3. Distributed faulting (“eventually occurring everywhere”) is predicted by several simple theories: (a) Viscoelastic stress redistribution in plate/microplate interiors concentrates deviatoric stress upward until they fail by faulting; (b) Unstable triple-junctions (e.g., between 3 strike-slip faults) in 2-D plate theory require new faults to form; and (c) Faults which appear to end (on a geologic map) imply distributed permanent deformation. This means that all fault networks evolve and that even a perfect fault map would be incomplete for future ruptures. 4. A recent attempt

  8. Frictional Heat Generation and Slip Duration Estimated From Micro-fault in an Exhumed Accretionary Complex and Their Relations to the Scaling Law for Slow Earthquakes

    NASA Astrophysics Data System (ADS)

    Hashimoto, Y.; Morita, K.; Okubo, M.; Hamada, Y.; Lin, W.; Hirose, T.; Kitamura, M.

    2015-12-01

    Fault motion has been estimated from the diffusion pattern of frictional heating recorded in geology (e.g., Fulton et al., 2012). The same record in a deeper subduction plate interface can be observed from micro-faults in an exhumed accretionary complex. In this study, we focused on a micro-fault within the Cretaceous Shimanto Belt, SW Japan, to estimate fault motion from the frictional-heating diffusion pattern. A carbonaceous material concentrated layer (CMCL) about 2 m thick is observed in the study area. Some micro-faults cut the CMCL. The thickness of the studied fault is about 3.7 mm. Injection veins and dilatant fractures were observed in thin sections, suggesting that high fluid pressure existed. Samples 10 cm long were collected to measure the distribution of vitrinite reflectance (Ro) as a function of distance from the center of the micro-fault. The Ro of the host rock was ~1.0%. A diffusion pattern was detected, with Ro decreasing from ~1.2% to ~1.1%. The characteristic diffusion distance is ~4-~9 cm. We conducted a grid search to find the optimal frictional heat generation per unit area (Q, the product of friction coefficient, normal stress, and slip velocity) and slip duration (t) to fit the diffusion pattern. Thermal diffusivity (0.98 × 10^-8 m^2/s) and thermal conductivity (2.0 W/mK) were measured. As a result, Q of 2000-2500 J/m^2 and t of 63,000-126,000 s were estimated. Moment magnitudes (M0) of slow earthquakes (slow EQs) follow a scaling law with slip duration, and its dimension is different from that for normal earthquakes (normal EQs) (Ide et al., 2007). The slip duration estimated in this study (~10^4-~10^5 s) is consistent with an M0 of 4-5 and does not fit the scaling law for normal EQs. Heat generation inverted from an M0 of 4-5 corresponds to ~10^8-~10^11 J, which is consistent with a rupture area of 10^5-10^8 m^2 in this study. The comparisons of heat generation and slip duration between geological measurements and geophysical remote observations give us an estimation of rupture area, M0, and
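    The grid-search step described above can be illustrated with a toy inversion. The sketch below uses the constant-flux half-space conduction solution as a stand-in forward model, generates a synthetic temperature-rise profile from an assumed (Q, t) pair, and recovers it by brute-force search; the reflectance-to-temperature conversion and all numerical targets are placeholders rather than the study's actual procedure.

    ```python
    # Sketch of the grid-search idea: fit a frictional-heating parameter Q (heat
    # per unit area) and slip duration t to a temperature-rise profile across a
    # fault, using the constant-flux half-space conduction solution as a stand-in
    # forward model. The conversion from vitrinite reflectance to temperature and
    # the numbers below are placeholders; this is not the study's actual inversion.
    import numpy as np
    from scipy.special import erfc

    KAPPA = 0.98e-8   # thermal diffusivity, m^2/s (value quoted in the record)
    K_COND = 2.0      # thermal conductivity, W/(m K) (value quoted in the record)

    def temp_rise(x, Q, t):
        """Temperature rise at distance x from the fault at the end of heating."""
        q0 = Q / t                           # average heat flux during slip, W/m^2
        s = 2.0 * np.sqrt(KAPPA * t)
        return (q0 / K_COND) * (s / np.sqrt(np.pi) * np.exp(-(x / s) ** 2)
                                - x * erfc(x / s))

    # Synthetic "observed" profile from a known pair, to show the search works.
    x_obs = np.linspace(0.0, 0.10, 21)                   # 0-10 cm from the fault
    dT_obs = temp_rise(x_obs, Q=2250.0, t=9.0e4)

    best = None
    for Q in np.linspace(1000.0, 4000.0, 61):            # J/m^2
        for t in np.logspace(4, 6, 61):                  # s
            misfit = np.sum((temp_rise(x_obs, Q, t) - dT_obs) ** 2)
            if best is None or misfit < best[0]:
                best = (misfit, Q, t)
    print(f"recovered Q ~ {best[1]:.0f} J/m^2, t ~ {best[2]:.0f} s")
    ```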

  9. The effect of blood pressure calibrations and transcranial Doppler signal loss on transfer function estimates of cerebral autoregulation

    PubMed Central

    Deegan, Brian M.; Serrador, Jorge M.; Nakagawa, Kazuma; Jones, Edward; Sorond, Farzaneh A.; ÓLaighin, Gearóid

    2015-01-01

    There are methodological concerns with combined use of transcranial Doppler (TCD) and Finapres to measure dynamic cerebral autoregulation. The Finapres calibration mechanism (“physiocal”) causes interruptions to blood pressure recordings. Also, TCD is subject to signal loss due to probe movement. We assessed the effects of “physiocals” and TCD signal loss on transfer function estimates in recordings of 45 healthy subjects. We added artificial “physiocals” and removed sections of TCD signal from 5 min Finapres and TCD recordings. We also compared transfer function results from 5 min time series with time series as short as 1 min. Accurate transfer function estimates can be achieved in the 0.03–0.07 Hz band using beat-by-beat data with linear interpolation, while data loss is less than 10 s. At frequencies between 0.07 and 0.5 Hz, transfer function estimates become unreliable with 5 s of data loss every 50 s. 2 s data loss only affects frequency bands above 0.15 Hz. Finally, accurate transfer function assessment of autoregulatory function can be achieved from time series as short as 1 min, although gain and coherence tend to be overestimated at higher frequencies. PMID:21239208
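    For readers unfamiliar with the underlying analysis, transfer-function estimates of dynamic autoregulation are typically computed from the auto- and cross-spectra of beat-to-beat blood pressure and flow velocity. The sketch below shows a generic gain/phase/coherence calculation with scipy on synthetic, evenly resampled signals; it is not the authors' processing pipeline, and all signal parameters are invented.

    ```python
    # Generic transfer-function sketch (gain, phase, coherence) between arterial
    # blood pressure (ABP) and cerebral blood flow velocity (CBFV), of the kind
    # discussed in the record. Signals and parameters are synthetic/invented.
    import numpy as np
    from scipy.signal import welch, csd, coherence

    fs = 4.0                         # Hz, assumed resampling rate of beat data
    t = np.arange(0, 300, 1 / fs)    # 5 min of data
    rng = np.random.default_rng(1)
    abp = 90 + 5 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 1, t.size)
    cbfv = 60 + 2 * np.sin(2 * np.pi * 0.05 * t + 0.6) + rng.normal(0, 1, t.size)

    nper = int(100 * fs)             # 100 s windows, 50% overlap by default
    f, p_aa = welch(abp, fs, nperseg=nper)
    _, p_ab = csd(abp, cbfv, fs, nperseg=nper)
    _, coh = coherence(abp, cbfv, fs, nperseg=nper)

    gain = np.abs(p_ab) / p_aa
    phase = np.angle(p_ab, deg=True)

    band = (f >= 0.03) & (f <= 0.07)   # the 0.03-0.07 Hz band discussed in the record
    print(f"band-average gain {gain[band].mean():.2f}, "
          f"phase {phase[band].mean():.0f} deg, coherence {coh[band].mean():.2f}")
    ```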

  10. Gross margin losses due to Salmonella Dublin infection in Danish dairy cattle herds estimated by simulation modelling.

    PubMed

    Nielsen, T D; Kudahl, A B; Østergaard, S; Nielsen, L R

    2013-08-01

    Salmonella Dublin affects production and animal health in cattle herds. The objective of this study was to quantify the gross margin (GM) losses following introduction and spread of S. Dublin within dairy herds. The GM losses were estimated using an age-structured stochastic, mechanistic and dynamic simulation model. The model incorporated six age groups (neonatal, pre-weaned calves, weaned calves, growing heifers, breeding heifers and cows) and five infection stages (susceptible, acutely infected, carrier, super shedder and resistant). The effects of introducing one S. Dublin infectious heifer were estimated through 1000 simulation iterations for 12 scenarios. These 12 scenarios were combinations of three herd sizes (85, 200 and 400 cows) and four management levels (very good, good, poor and very poor). Input parameters for effects of S. Dublin on production and animal health were based on literature and calibrations to mimic real life observations. Mean annual GMs per cow stall were compared between herds experiencing within-herd spread of S. Dublin and non-infected reference herds over a 10-year period. The estimated GM losses were largest in the first year after infection, and increased with poorer management and herd size, e.g. average annual GM losses were estimated to 49 euros per stall for the first year after infection, and to 8 euros per stall annually averaged over the 10 years after herd infection for a 200 cow stall herd with very good management. In contrast, a 200 cow stall herd with very poor management lost on average 326 euros per stall during the first year, and 188 euros per stall annually averaged over the 10-year period following introduction of infection. The GM losses arose from both direct losses such as reduced milk yield, dead animals, treatment costs and abortions as well as indirect losses such as reduced income from sold heifers and calves, and lower milk yield of replacement animals. Through sensitivity analyses it was found that the

  11. Transepidermal water loss in newborn infants. I. Relation to ambient humidity and site of measurement and estimation of total transepidermal water loss.

    PubMed

    Hammarlund, K; Nilsson, G E; Oberg, P A; Sedin, G

    1977-09-01

    Insensible water loss (IWL) is an important factor in the thermoregulation and water balance of the newborn infant. A method for direct measurement of the rate of evaporation from the skin surface has been developed. The method, which is based on determination of the vapour pressure gradient close to the skin surface, allows free evaporation. From measurements performed on 19 newborns placed in incubators, a linear relation was found between the evaporation rate (ER) and the humidity of the environment at a constant ambient temperature. A 40% lower ER was recorded at a high relative humidity (60%) than at a low one (20%) in the incubator. In measurements at different sites on the body, a high ER was observed on the face and the peripheral parts of the extremities, while the ER at other sites was relatively low. By determining the ER for different parts of the body and calculating the areas of the corresponding surfaces, the total cutaneous insensible water loss for the infant in question could be obtained. The transepidermal water loss (TEWL) for the whole body surface area was calculated to be 8.1 g/m^2 h. On the basis of the measurements performed, it was found that the total cutaneous insensible water loss can be estimated with a reasonable degree of accuracy by recording the ER at only three easily accessible measurement points.
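    The area-weighting step described in the last sentences can be written out in a few lines; the evaporation rates and body-surface fractions below are invented for illustration and are not the study's measurements.

    ```python
    # Sketch of the area-weighting step: whole-body transepidermal water loss
    # (TEWL) as the area-weighted mean of site-specific evaporation rates.
    # The rates and surface-area fractions below are invented placeholders.
    sites = [
        # (site, evaporation rate g/(m^2 h), fraction of body surface area)
        ("face", 16.0, 0.07),
        ("hands/feet", 14.0, 0.10),
        ("trunk", 6.5, 0.45),
        ("proximal limbs", 7.0, 0.38),
    ]
    total = sum(er * frac for _, er, frac in sites) / sum(frac for _, _, frac in sites)
    print(f"whole-body TEWL ~ {total:.1f} g/(m^2 h)")
    ```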

  12. Estimation of offsets in GPS time-series and application to the detection of earthquake deformation in the far-field

    NASA Astrophysics Data System (ADS)

    Montillet, J.-P.; Williams, S. D. P.; Koulali, A.; McClusky, S. C.

    2015-02-01

    Extracting geophysical signals from Global Positioning System (GPS) coordinate time-series is a well-established practice that has led to great insights into how the Earth deforms. Often small discontinuities are found in such time-series and are traceable to either broad-scale deformation (i.e. earthquakes) or discontinuities due to equipment changes and/or failures. Estimating these offsets accurately enables the identification of coseismic deformation estimates in the former case, and the removal of unwanted signals in the latter case which then allows tectonic rates to be estimated more accurately. We develop a method to estimate accurately discontinuities in time series of GPS positions at specified epochs, based on a so-called `offset series'. The offset series are obtained by varying the amount of GPS data before and after an event while estimating the offset. Two methods, a mean and a weighted mean method, are then investigated to produce the estimated discontinuity from the offset series. The mean method estimates coseismic offsets without making assumptions about geophysical processes that may be present in the data (i.e. tectonic rate, seasonal variations), whereas the weighted mean method includes estimating coseismic offsets with a model of these processes. We investigate which approach is the most appropriate given certain lengths of available data and noise within the time-series themselves. For the Sumatra-Andaman event, with 4.5 yr of pre-event data, we show that between 2 and 3 yr of post-event data are required to produce accurate offset estimates with the weighted mean method. With less data, the mean method should be used, but the uncertainties of the estimated discontinuity are larger.
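    A much simplified version of the "offset series" idea, using the mean method on synthetic daily positions with a crude pre-event rate correction, is sketched below; the window lengths, noise level, and rate handling are illustrative assumptions, not the estimator developed in the paper.

    ```python
    # Simplified sketch of an "offset series" at a known epoch t0: for a range of
    # window lengths, difference the mean position before and after the event
    # (with a crude pre-event rate correction), then average the offsets.
    # Synthetic data and choices below are illustrative, not the paper's method.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(0.0, 8.0, 1 / 365.25)        # 8 yr of daily positions (years)
    t0 = 4.5                                    # event epoch
    true_offset = 12.0                          # mm
    pos = 2.0 * t + true_offset * (t >= t0) + rng.normal(0, 1.5, t.size)  # mm

    offsets = []
    for w in np.arange(0.5, 3.01, 0.25):        # window lengths in years
        pre = (t >= t0 - w) & (t < t0)
        post = (t >= t0) & (t < t0 + w)
        rate = np.polyfit(t[pre], pos[pre], 1)[0]      # mm/yr from pre-event fit
        offsets.append(pos[post].mean() - pos[pre].mean() - rate * w)
    offsets = np.array(offsets)
    print(f"offset series mean = {offsets.mean():.1f} mm (true {true_offset} mm)")
    ```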

  13. The integration of stress, strain, and seismogenic fault data: towards more robust estimates of the earthquake potential in Italy and its surroundings

    NASA Astrophysics Data System (ADS)

    Caporali, Alessandro; Braitenberg, Carla; Burrato, Pierfrancesco; Carafa, Michele; Di Giovambattista, Rita; Gentili, Stefania; Mariucci, Maria Teresa; Montone, Paola; Morsut, Federico; Nicolini, Luca; Pivetta, Tommaso; Roselli, Pamela; Rossi, Giuliana; Valensise, Gian Luca; Vigano, Alfio

    2016-04-01

    Italy is an earthquake-prone country with a long tradition in observational seismology. For many years, the country's unique historical earthquake record has revealed fundamental properties of Italian seismicity and has been used to determine earthquake rates. Paleoseismological studies conducted over the past 20 years have shown that the length of this record - 5 to 8 centuries, depending on areas - is just a fraction of the typical recurrence interval of Italian faults - consistently larger than a millennium. Hence, so far the earthquake potential may have been significantly over- or under-estimated. Based on a clear perception of these circumstances, over the past two decades large networks and datasets describing independent aspects of the seismic cycle have been developed. INGV, OGS, some universities and local administrations have built networks that globally include nearly 500 permanent GPS/GNSS sites, routinely used to compute accurate horizontal velocity gradients reflecting the accumulation of tectonic strain. INGV developed the Italian present-day stress map, which includes over 700 datapoints based on geophysical in-situ measurements and fault plane solutions, and the Database of Individual Seismogenic Sources (DISS), a unique compilation featuring nearly 300 three-dimensional seismogenic faults over the entire nation. INGV also updates and maintains the Catalogo Parametrico dei Terremoti Italiani (CPTI) and the instrumental earthquake database ISIDe, whereas OGS operates its own seismic catalogue for northeastern Italy. We present preliminary results on the use of this wealth of homogeneously collected and updated observations of stress and strain as a source of loading/unloading of the faults listed in the DISS database. We use the geodetic strain rate - after converting it to stress rate in conjunction with the geophysical stress data of the Stress Map - to compute the Coulomb Failure Function on all fault planes described by the DISS database. This

  14. Characteristics of postseismic deformation following the 2003 Tokachi-oki earthquake and estimation of the viscoelastic structure in Hokkaido, northern Japan

    NASA Astrophysics Data System (ADS)

    Itoh, Yuji; Nishimura, Takuya

    2016-09-01

    Postseismic deformation of the 2003 Tokachi-oki earthquake (Mw 8.0) has been observed by GNSS. We analyzed the deformation observed in Hokkaido in the 2nd to the 7th year following the 2003 Tokachi-oki earthquake and examined the contribution of two major mechanisms (i.e., afterslip and viscoelastic relaxation) to the observed postseismic deformation by fitting it with a model consisting of afterslip and viscoelastic relaxation. The thickness of the lithosphere, the viscosity of the asthenosphere, and the time decay constant of afterslip were estimated to be 50 km, 2.0 × 10^19 Pa s, and 0.110 year, respectively, which are concordant with those estimated for the Tohoku region in previous studies. The revealed characteristics of postseismic deformation are as follows. At most of the stations used, afterslip played the dominant role in the 2nd year and was still sustained near the coseismic area even in the 7th year. However, the calculated velocity due to viscoelastic relaxation was comparable to that due to afterslip at the stations in northern Hokkaido after the 5th year. Because the calculated velocity due to viscoelastic relaxation was landward near the coseismic slip area, afterslip near the coseismic slip area will be biased low if viscoelastic relaxation is ignored. A systematic spatial pattern in the residuals of the afterslip-only model highlights the importance of viscoelastic relaxation for explaining the observation data. We also examined the effect of viscoelastic relaxation due to afterslip on the parameter estimation and found that it was too small to affect the estimated structure parameters.

  15. Hurricane Loss Estimation Models: Opportunities for Improving the State of the Art.

    NASA Astrophysics Data System (ADS)

    Watson, Charles C., Jr.; Johnson, Mark E.

    2004-11-01

    The results of hurricane loss models are used regularly for multibillion dollar decisions in the insurance and financial services industries. These models are proprietary, and this “black box” nature hinders analysis. The proprietary models produce a wide range of results, often producing loss costs that differ by a ratio of three to one or more. In a study for the state of North Carolina, 324 combinations of loss models were analyzed, based on a combination of nine wind models, four surface friction models, and nine damage models drawn from the published literature in insurance, engineering, and meteorology. These combinations were tested against reported losses from Hurricanes Hugo and Andrew as reported by a major insurance company, as well as storm total losses for additional storms. Annual loss costs were then computed using these 324 combinations of models for both North Carolina and Florida, and compared with publicly available proprietary model results in Florida. The wide range of resulting loss costs for open, scientifically defensible models that perform well against observed losses mirrors the wide range of loss costs computed by the proprietary models currently in use. This outcome may be discouraging for governmental and corporate decision makers relying on this data for policy and investment guidance (due to the high variability across model results), but it also provides guidance for the efforts of future investigations to improve loss models. Although hurricane loss models are true multidisciplinary efforts, involving meteorology, engineering, statistics, and actuarial sciences, the field of meteorology offers the most promising opportunities for improvement of the state of the art.
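    The combinatorial design of the study, i.e., scoring every wind/friction/damage model combination against reported losses, can be summarized structurally as follows; the component "models" in the sketch are trivial stand-in multipliers, not the published models that were actually evaluated (9 wind × 4 friction × 9 damage = 324 combinations in the study).

    ```python
    # Skeleton of the model-combination experiment: every (wind, friction, damage)
    # combination is scored against a reported storm loss. The component "models"
    # here are stand-in multipliers, not the published models the study evaluated.
    from itertools import product

    wind_models = {"wind_A": 1.00, "wind_B": 1.10, "wind_C": 0.92}
    friction_models = {"fric_A": 1.00, "fric_B": 0.95}
    damage_models = {"dmg_A": 1.00, "dmg_B": 1.25, "dmg_C": 0.80}

    reported_loss = 1.0e9   # USD, hypothetical storm total
    baseline_loss = 0.9e9   # USD, hypothetical baseline estimate

    results = []
    for (wn, wf), (fn, ff), (dn, df) in product(
            wind_models.items(), friction_models.items(), damage_models.items()):
        modeled = baseline_loss * wf * ff * df            # stand-in loss calculation
        error = abs(modeled - reported_loss) / reported_loss
        results.append((error, wn, fn, dn))

    best = min(results)
    print(f"{len(results)} combinations; best: {best[1]}/{best[2]}/{best[3]} "
          f"(relative error {best[0]:.1%})")
    ```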

  16. Spatial and temporal estimation of soil loss for the sustainable management of a wet semi-arid watershed cluster.

    PubMed

    Rejani, R; Rao, K V; Osman, M; Srinivasa Rao, Ch; Reddy, K Sammi; Chary, G R; Pushpanjali; Samuel, Josily

    2016-03-01

    The ungauged wet semi-arid watershed cluster, Seethagondi, lies in the Adilabad district of Telangana in India and is prone to severe erosion and water scarcity. Runoff and soil loss data at the watershed, catchment, and field level are necessary for planning soil and water conservation interventions. In this study, an attempt was made to develop a spatial soil loss estimation model for the Seethagondi cluster using RUSLE coupled with ArcGIS, and the model was used to estimate the soil loss spatially and temporally. The daily Aphrodite rainfall data for the period from 1951 to 2007 were used; the annual rainfall varied from 508 to 1351 mm, with a mean annual rainfall of 950 mm and a mean erosivity of 6789 MJ mm ha(-1) h(-1) year(-1). Considerable variation in land use and land cover, especially in crop land and fallow land, was observed between normal and drought years, and corresponding variation in the erosivity, C factor, and soil loss was also noted. The mean value of the C factor derived from NDVI for crop land was 0.42 in normal years and 0.22 in drought years. The topography is undulating, the major portion of the cluster has slopes of less than 10°, and 85.3% of the cluster has soil loss below 20 t ha(-1) year(-1). The soil loss from crop land varied from 2.9-3.6 t ha(-1) year(-1) in low rainfall years to 31.8-34.7 t ha(-1) year(-1) in high rainfall years, with a mean annual soil loss of 12.2 t ha(-1) year(-1). The soil loss from crop land was higher in the month of August, with an annual soil loss of 13.1 and 2.9 t ha(-1) year(-1) in normal and drought years, respectively. Based on the soil loss in a normal year, the interventions recommended for 85.3% of the watershed area include agronomic measures such as contour cultivation, graded bunds, strip cropping, mixed cropping, crop rotations, mulching, summer plowing, vegetative bunds, an agri-horticultural system, and management practices such as broad bed furrows, raised and sunken beds, and harvesting available water
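    For context, RUSLE computes mean annual soil loss as the product of five factors, A = R × K × LS × C × P. The short calculation below shows the arithmetic using the record's quoted erosivity and cropland C factor together with otherwise invented placeholder factors; it does not reproduce the cluster-specific inputs of the study.

    ```python
    # RUSLE mean annual soil loss: A = R * K * LS * C * P.
    # R and C are taken from values quoted in the record; K, LS, and P are
    # hypothetical placeholders, not the Seethagondi inputs.
    R = 6789.0   # rainfall erosivity, MJ mm ha^-1 h^-1 yr^-1 (record's mean value)
    K = 0.010    # soil erodibility, t ha h ha^-1 MJ^-1 mm^-1 (hypothetical)
    LS = 0.5     # slope length-steepness factor (hypothetical)
    C = 0.42     # cover-management factor for cropland in a normal year (record)
    P = 0.8      # support-practice factor (hypothetical)

    A = R * K * LS * C * P
    print(f"estimated soil loss: {A:.1f} t/ha/yr")   # ~11.4 t/ha/yr with these inputs
    ```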

  17. Long-term psychological outcome for non-treatment-seeking earthquake survivors in Turkey.

    PubMed

    Salcioglu, Ebru; Basoglu, Metin; Livanou, Maria

    2003-03-01

    This study examined the incidence of posttraumatic stress disorder (PTSD) and depression in 586 earthquake survivors living in prefabricated housing sites a mean of 20 months after the 1999 earthquake in Turkey. The estimated rates of PTSD and major depression were 39% and 18%, respectively. More severe PTSD symptoms related to greater fear during the earthquake, female gender, older age, participation in rescue work, having been trapped under rubble, and personal history of psychiatric illness. More severe depression symptoms related to older age, loss of close ones, single marital status, past psychiatric illness, previous trauma experience, female gender, and family history of psychiatric illness. These findings suggest that catastrophic earthquakes have long-term psychological consequences, particularly for survivors with high levels of trauma exposure. These findings lend further support to the need for long-term mental health care policies for earthquake survivors. Outreach service delivery programs are needed to access non-treatment-seeking survivors with chronic PTSD. PMID:12637841

  18. Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...

  19. Applying the Land Use Portfolio Model to Estimate Natural-Hazard Loss and Risk - A Hypothetical Demonstration for Ventura County, California

    USGS Publications Warehouse

    Dinitz, Laura B.

    2008-01-01

    HAZUS-MH currently performs analyses for earthquakes, floods, and hurricane wind. HAZUS-MH loss estimates, however, do not account for some uncertainties associated with the specific natural-hazard scenarios, such as the likelihood of occurrence within a particular time horizon or the effectiveness of alternative risk-reduction options. Because of the uncertainties involved, it is challenging to make informative decisions about how to cost-effectively reduce risk from natural-hazard events. Risk analysis is one approach that decision-makers can use to evaluate alternative risk-reduction choices when outcomes are unknown. The Land Use Portfolio Model (LUPM), developed by the U.S. Geological Survey (USGS), is a geospatial scenario-based tool that incorporates hazard-event uncertainties to support risk analysis. The LUPM offers an approach to estimate and compare risks and returns from investments in risk-reduction measures. This paper describes and demonstrates a hypothetical application of the LUPM for Ventura County, California, and examines the challenges involved in developing decision tools that provide quantitative methods to estimate losses and analyze risk from natural hazards.
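
    The LUPM itself is not reproduced here, but the generic scenario-based comparison such a tool supports looks roughly like the sketch below: expected loss over a planning horizon with and without a mitigation investment, under an assumed event probability. Every number (asset value, probability, damage ratios, mitigation cost) is hypothetical.

      # Generic scenario-based risk comparison: expected loss over a time horizon
      # with and without a mitigation investment. All values are hypothetical.
      def expected_loss(asset_value, damage_ratio, annual_event_prob, years):
          # Probability of at least one event in the horizon (independent years).
          p_event = 1.0 - (1.0 - annual_event_prob) ** years
          return p_event * damage_ratio * asset_value

      asset_value = 500e6      # exposed building stock, USD (hypothetical)
      p_annual    = 0.02       # annual chance of the hazard scenario (hypothetical)
      horizon     = 30         # years

      baseline  = expected_loss(asset_value, damage_ratio=0.30, annual_event_prob=p_annual, years=horizon)
      mitigated = expected_loss(asset_value, damage_ratio=0.12, annual_event_prob=p_annual, years=horizon)
      mitigation_cost = 20e6

      print(f"expected loss, no action : ${baseline / 1e6:6.1f} M")
      print(f"expected loss, mitigated : ${mitigated / 1e6:6.1f} M")
      print(f"net benefit of mitigation: ${(baseline - mitigated - mitigation_cost) / 1e6:6.1f} M")

    A decision-maker would weigh the avoided expected loss against the up-front mitigation cost, which is the kind of trade-off the paper's risk analysis frames.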

  1. Quantitative estimation of farmland soil loss by wind-erosion using improved particle-size distribution comparison method (IPSDC)

    NASA Astrophysics Data System (ADS)

    Rende, Wang; Zhongling, Guo; Chunping, Chang; Dengpan, Xiao; Hongjun, Jiang

    2015-12-01

    The rapid and accurate estimation of soil loss by wind erosion still remains a challenge. This study presents an improved scheme for estimating the soil loss by wind erosion of farmland. The method estimates the soil loss by wind erosion based on a comparison of the relative contents of erodible and non-erodible particles between the surface and sub-surface layers of the farmland ploughed layer after wind erosion. It is based on the features that the soil particle-size distribution of the sampling soil layer (approximately 2 cm) is relatively uniform, and that on the surface layer, wind erosion causes the relative numbers of erodible and non-erodible particles to decrease and increase, respectively. Estimations were performed using this method for the wind erosion periods (WEP) from Oct. 2012 to May 2013 and from Oct. 2013 to April 2014, and for a large wind-erosion event (WEE) on May 3, 2014, in the Bashang area of Hebei Province. The results showed that the average soil loss of farmland by wind erosion from Oct. 2012 to May 2013 was 2852.14 g/m2 with an average depth of 0.21 cm, while the soil loss by wind from Oct. 2013 to April 2014 was 1199.17 g/m2 with a mean depth of 0.08 cm. During the severe WEE on May 3, 2014, the average soil loss of farmland by wind erosion was 1299.19 g/m2 with an average depth of 0.10 cm. The soil loss by wind erosion of ploughed and raked fields (PRF) was approximately twice as large as that of oat-stubble fields (OSF). The improved method of particle-size distribution comparison (IPSDC) has several advantages. It can calculate not only the wind erosion amount but also the wind deposition amount. Slight changes in the sampling thickness and in the particle diameter range of the non-erodible particles will not obviously influence the results. Furthermore, the method is convenient, rapid, and simple to implement. It is suitable for estimating the soil loss or deposition by wind erosion of farmland with flat surfaces and high
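
    The abstract does not reproduce the IPSDC equations, so the sketch below only illustrates the underlying mass-balance idea: non-erodible particles are assumed to stay in place, so their enrichment in the sampled surface layer relative to the sub-surface layer records how much erodible material was removed. The layer thickness, bulk density, and particle fractions are hypothetical, not the paper's measurements.

      def wind_erosion_loss(f_sub, f_surf, layer_cm=2.0, bulk_density=1.4):
          """Return (soil loss in g/m^2, eroded depth in cm).

          f_sub  -- non-erodible mass fraction in the undisturbed sub-surface layer
          f_surf -- non-erodible mass fraction in the surface layer after erosion
          """
          enrichment = f_surf / f_sub               # >1 means erosion, <1 means deposition
          eroded_depth_cm = layer_cm * (enrichment - 1.0)
          loss_g_per_m2 = bulk_density * eroded_depth_cm * 1.0e4   # 1 m^2 = 1e4 cm^2
          return loss_g_per_m2, eroded_depth_cm

      loss, depth = wind_erosion_loss(f_sub=0.40, f_surf=0.44)     # hypothetical fractions
      print(f"soil loss ~ {loss:.0f} g/m^2, eroded depth ~ {depth:.2f} cm")

    A 10% enrichment over a 2 cm sampling layer returns roughly 2800 g/m2 and 0.2 cm, the same order as the 2012-2013 figures quoted above; a negative result would indicate net deposition, which the IPSDC is also designed to capture.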

  2. Remote sensing as a tool for watershed-wide estimation of net solar radiation and water loss to the atmosphere

    NASA Technical Reports Server (NTRS)

    Khorram, S.; Thomas, R. W.

    1976-01-01

    Results are presented for a study intended to develop a general remote sensing-aided, cost-effective procedure to estimate watershed-wide water loss to the atmosphere via evapotranspiration and to estimate net solar radiation over the watershed. Evapotranspiration estimation employs a basic two-stage, two-phase sample of three information resolution levels. Net solar radiation is taken as one of the variables at each level of evapotranspiration modeling. The input information for models requiring spatial information will be provided by Landsat digital data, environmental satellite data, ground meteorological data, ground sample unit information, and topographic data. The outputs of the sampling-estimation/data bank system will be in-place maps of evapotranspiration on a data resolution element basis, watershed-wide evapotranspiration isopleths, and estimates of watershed and subbasin total evapotranspiration with associated statistical confidence bounds. The methodology developed is being tested primarily on the Spanish Creek Watershed, Plumas County, California.

  3. Earthquake Risk Mitigation in the Tokyo Metropolitan area

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Sakai, S.; Kasahara, K.; Nakagawa, S.; Nanjo, K.; Panayotopoulos, Y.; Tsuruoka, H.

    2010-12-01

    Seismic disaster risk mitigation in urban areas constitutes a challenge that requires collaboration among the scientific, engineering, and social-science fields. Examples of collaborative efforts include research on detailed plate structure with identification of all significant faults, developing dense seismic networks; strong ground motion prediction, which uses information on near-surface seismic site effects and fault models; earthquake-resistant and earthquake-proof structures; and cross-discipline infrastructure for effective risk mitigation just after catastrophic events. The risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern because this plate has caused past mega-thrust earthquakes, such as the 1703 Genroku earthquake (magnitude M8.0) and the 1923 Kanto earthquake (M7.9), which had 105,000 fatalities. An M7 or greater (M7+) earthquake in this area at present has a high potential to produce devastating loss of life and property, with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that such an M7+ earthquake would cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) of economic loss. The Earthquake Research Committee of Japan evaluates the probability of this earthquake occurring within 30 years at 70%. In order to mitigate disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area (2007-2011) was launched in collaboration with scientists, engineers, and social scientists at institutions nationwide. The results obtained in the respective fields will be integrated until project termination to improve information on the strategy assessment for seismic risk mitigation in the Tokyo metropolitan area. In this talk, we give an outline of our project as an example of collaborative research on earthquake risk mitigation. Discussion is extended to our effort in progress and

  4. Estimation of parasitic losses in a proposed mesoscale resonant engine: Experiment and model

    NASA Astrophysics Data System (ADS)

    Preetham, B. S.; Anderson, M.; Richards, C.

    2014-02-01

    A resonant engine in which the piston-cylinder assembly is replaced by a flexible cavity is realized at the mesoscale using flexible metal bellows to demonstrate the feasibility of the concept. A four stroke motoring technique is developed and measurements are performed to determine parasitic losses. A non-linear lumped parameter model is developed to evaluate the engine performance. Experimentally, the heat transfer and friction effects are separated by varying the engine speed and operating frequency. The engine energy flow diagram showing the energy distribution among various parasitic elements reveals that the friction loss in the bellows is smaller than the sliding friction loss in a typical piston-cylinder assembly.

  5. One-Step Targeted Minimum Loss-based Estimation Based on Universal Least Favorable One-Dimensional Submodels

    PubMed Central

    van der Laan, Mark; Gruber, Susan

    2016-01-01

    Consider a study in which one observes n independent and identically distributed random variables whose probability distribution is known to be an element of a particular statistical model, and one is concerned with estimation of a particular real valued pathwise differentiable target parameter of this data probability distribution. The targeted maximum likelihood estimator (TMLE) is an asymptotically efficient substitution estimator obtained by constructing a so called least favorable parametric submodel through an initial estimator with score, at zero fluctuation of the initial estimator, that spans the efficient influence curve, and iteratively maximizing the corresponding parametric likelihood till no more updates occur, at which point the updated initial estimator solves the so called efficient influence curve equation. In this article we construct a one-dimensional universal least favorable submodel for which the TMLE only takes one step, and thereby requires minimal extra data fitting to achieve its goal of solving the efficient influence curve equation. We generalize these to universal least favorable submodels through the relevant part of the data distribution as required for targeted minimum loss-based estimation. Finally, remarkably, given a multidimensional target parameter, we develop a universal canonical one-dimensional submodel such that the one-step TMLE, only maximizing the log-likelihood over a univariate parameter, solves the multivariate efficient influence curve equation. This allows us to construct a one-step TMLE based on a one-dimensional parametric submodel through the initial estimator, that solves any multivariate desired set of estimating equations. PMID:27227728

  6. Simplified Loss Estimation of Splice to Photonic Crystal Fiber using New Model

    NASA Astrophysics Data System (ADS)

    Karak, Anup; Kundu, Dipankar; Sarkar, Somenath

    2016-06-01

    For a range of fiber parameters and wavelengths, the splice losses between a photonic crystal fiber and a single-mode fiber are calculated using our simplified and effective model of the photonic crystal fiber, following a recently developed elaborate method. Since transverse offset and angular mismatch are the factors that contribute most seriously to splice losses between two optical fibers, these losses are also studied for the same pair of fibers using our formulation. The results match the rigorous ones closely and are consistent with earlier empirical results. Moreover, our formulation can be developed from a theoretical framework over the entire range of opto-geometrical parameters of the photonic crystal fiber within the single-mode region, instead of using deeply involved full-vectorial methods. This simple, user-friendly approach to computing splice loss should find wide use among experimentalists and system users.
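
    The authors' simplified PCF model is not given in the abstract; as a point of reference, the standard Gaussian mode-field (Marcuse-type) approximations for the three contributions discussed are sketched below. The mode-field radii, offset, tilt, and fibre index are hypothetical example values.

      import math

      def mismatch_loss_db(w1, w2):
          """Loss from unequal mode-field radii w1, w2 (same units)."""
          return -20.0 * math.log10(2.0 * w1 * w2 / (w1**2 + w2**2))

      def offset_loss_db(d, w):
          """Loss from transverse offset d for (near-)equal mode-field radius w."""
          return 4.343 * (d / w) ** 2

      def angular_loss_db(theta_rad, w, wavelength, n=1.45):
          """Loss from angular misalignment theta (radians); n is the fibre index."""
          return 4.343 * (math.pi * n * theta_rad * w / wavelength) ** 2

      w_smf, w_pcf = 5.2e-6, 4.0e-6        # mode-field radii in metres (hypothetical)
      lam = 1.55e-6                        # wavelength, m
      print(f"mismatch : {mismatch_loss_db(w_smf, w_pcf):.2f} dB")
      print(f"offset   : {offset_loss_db(1.0e-6, 4.6e-6):.2f} dB  (1 um offset)")
      print(f"angular  : {angular_loss_db(math.radians(1.0), 4.6e-6, lam):.2f} dB (1 deg tilt)")

    For small perturbations the mismatch, offset, and tilt terms are usually summed in decibels.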

  7. Wildlife Loss Estimates and Summary of Previous Mitigation Related to Hydroelectric Projects in Montana, Volume Three, Hungry Horse Project.

    SciTech Connect

    Casey, Daniel

    1984-10-01

    This assessment addresses the impacts to wildlife populations and wildlife habitats from the Hungry Horse Dam project on the South Fork of the Flathead River, and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate the wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impacts to target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were identified.

  8. Estimating bias from loss to follow-up in a prospective cohort study of bicycle crash injuries

    PubMed Central

    Tin Tin, Sandar; Woodward, Alistair; Ameratunga, Shanthi

    2014-01-01

    Background Loss to follow-up, if related to exposures, confounders and outcomes of interest, may bias association estimates. We estimated the magnitude and direction of such bias in a prospective cohort study of crash injury among cyclists. Methods The Taupo Bicycle Study involved 2590 adult cyclists recruited from New Zealand's largest cycling event in 2006 and followed over a median period of 4.6 years through linkage to four administrative databases. We resurveyed the participants in 2009 and excluded three participants who died prior to the resurvey. We compared baseline characteristics and crash outcomes of the baseline (2006) and follow-up (those who responded in 2009) cohorts by ratios of relative frequencies and estimated potential bias from loss to follow-up on seven exposure-outcome associations of interest by ratios of HRs. Results Of the 2587 cyclists in the baseline cohort, 1526 (60%) responded to the follow-up survey. The responders were older, more educated and more socioeconomically advantaged. They were more experienced cyclists who often rode in a bunch, off-road or in the dark, but were less likely to engage in other risky cycling behaviours. Additionally, they experienced bicycle crashes more frequently during follow-up. The selection bias ranged between −10% and +9% for selected associations. Conclusions Loss to follow-up was differential by demographic, cycling and behavioural risk characteristics as well as crash outcomes, but did not substantially bias association estimates of primary research interest. PMID:24336816

  9. Extinction cascades partially estimate herbivore losses in a complete Lepidoptera--plant food web.

    PubMed

    Pearse, Ian S; Altermatt, Florian

    2013-08-01

    The loss of species from an ecological community can have cascading effects leading to the extinction of other species. Specialist herbivores are highly diverse and may be particularly susceptible to extinction due to host plant loss. We used a bipartite food web of 900 Lepidoptera (butterfly and moth) herbivores and 2403 plant species from Central Europe to simulate the cascading effect of plant extinctions on Lepidoptera extinctions. Realistic extinction sequences of plants, incorporating red-list status, range size, and native status, altered subsequent Lepidoptera extinctions. We compared simulated Lepidoptera extinctions to the number of actual regional Lepidoptera extinctions and found that all predicted scenarios underestimated total observed extinctions but accurately predicted observed extinctions attributed to host loss (n = 8, 14%). Likely, many regional Lepidoptera extinctions occurred for reasons other than loss of host plant alone, such as climate change and habitat loss. Ecological networks can be useful in assessing a component of extinction risk to herbivores based on host loss, but further factors may be equally important.
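
    A toy version of the cascade simulation makes the mechanism explicit: plants are removed in a specified order, and a herbivore goes extinct once none of its hosts remain. The miniature web and extinction sequence below are invented for illustration, not drawn from the 900-species dataset.

      def cascade(host_web, plant_extinction_order):
          """host_web maps herbivore -> set of host plants; returns herbivores lost."""
          remaining_plants = set().union(*host_web.values())
          extinct_herbivores = []
          for plant in plant_extinction_order:
              remaining_plants.discard(plant)
              for herbivore, hosts in host_web.items():
                  if herbivore not in extinct_herbivores and not (hosts & remaining_plants):
                      extinct_herbivores.append(herbivore)   # all hosts gone
          return extinct_herbivores

      web = {
          "moth_A":      {"oak"},                 # monophagous: most exposed to host loss
          "moth_B":      {"oak", "birch"},
          "butterfly_C": {"nettle", "thistle"},
      }
      print(cascade(web, plant_extinction_order=["oak", "nettle", "thistle"]))
      # -> ['moth_A', 'butterfly_C']; moth_B survives on birch

    Running realistic extinction orders (by red-list status, range size, or native status) over the full bipartite web is what yields the predicted secondary-extinction counts that the study compares with the observed regional record.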

  10. Great East Japan Earthquake Tsunami

    NASA Astrophysics Data System (ADS)

    Iijima, Y.; Minoura, K.; Hirano, S.; Yamada, T.

    2011-12-01

    supercritical flows, resulting in the loss of landward seawall slopes. Such erosion was also observed on the landward side of footpaths between rice fields. The Sendai plain was subjected to coseismic subsidence just after the main shock of the earthquake. Seawater inundation resulting from tsunami run-up lasted two months. The historical document Sandai-jitsuroku, which gives a detailed history of all of Japan, describes the Jogan earthquake and subsequent tsunami that attacked the Sendai plain in AD 869. The document describes the prolonged period of flooding, and it is suggested that co-seismic subsidence of the plain took place. The inundation area of the Jogan tsunami, estimated from the distribution of tsunami deposits, mostly overlaps with that of the 3.11 tsunami. Considering the close similarity of the two seismic shocks, we interpret the Great East Japan Earthquake Tsunami as a recurrence of the Jogan Earthquake Tsunami.

  11. Estimated tooth loss based on number of present teeth in Japanese adults using national surveys of dental disease.

    PubMed

    Yoshino, Koichi; Ishizuka, Yoichi; Fukai, Kakuhiro; Takiguchi, Toru; Sugihara, Naoki

    2015-01-01

    Oral health instruction for adults should take into account the potential effect of tooth loss, as this has been suggested to predict further tooth loss. Therefore, the purpose of this study was to determine whether further tooth loss could be predicted from the number of present teeth (PT). We employed the same method as in our previous study, this time using two national surveys of dental disease, which were deemed to represent a generational cohort. Percentiles were estimated using the cumulative frequency distribution of PT from the two surveys. The first was a survey of 704 participants aged 50-59 years conducted in 2005, and the second was a survey of 747 participants aged 56-65 years conducted in 2011. The 1st to 100th percentiles of the number of PT were calculated for both age groups. Using these percentiles and a generational cohort analysis based on the two surveys, the number of teeth lost per year could be calculated. The distribution of number of teeth lost generated a convex curve. Peak tooth loss occurred at around 12-14 PT, with 0.54 teeth being lost per year. The percentage of teeth lost (per number of PT) increased as number of PT decreased. The results confirmed that tooth loss promotes further tooth loss. These data should be made available for use in adult oral health education.
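
    The generational-cohort calculation can be illustrated as follows: match percentiles of the present-teeth (PT) distribution across the two surveys six years apart and convert the difference at each percentile into teeth lost per year. The two samples below are synthetic stand-ins, not the national survey data.

      import numpy as np

      rng = np.random.default_rng(0)
      pt_2005 = np.clip(rng.normal(25, 5, 704), 0, 28)   # ages 50-59 in 2005 (synthetic)
      pt_2011 = np.clip(rng.normal(22, 6, 747), 0, 28)   # ages 56-65 in 2011 (synthetic)

      percentiles = np.arange(1, 101)
      q_2005 = np.percentile(pt_2005, percentiles)
      q_2011 = np.percentile(pt_2011, percentiles)

      teeth_lost_per_year = (q_2005 - q_2011) / 6.0       # same percentile = same cohort slice
      for p in (10, 50, 90):
          print(f"{p}th percentile: {q_2005[p - 1]:.1f} -> {q_2011[p - 1]:.1f} PT, "
                f"{teeth_lost_per_year[p - 1]:.2f} teeth lost/yr")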

  12. Structural Constraints and Earthquake Recurrence Estimates for the West Tahoe-Dollar Point Fault, Lake Tahoe Basin, California

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Driscoll, N. W.; Kent, G.; Brothers, D. S.; Baskin, R. L.; Babcock, J. M.; Noble, P. J.; Karlin, R. E.

    2011-12-01

    Previous work in the Lake Tahoe Basin (LTB), California, identified the West Tahoe-Dollar Point Fault (WTDPF) as the most hazardous fault in the region. Onshore and offshore geophysical mapping delineated three segments of the WTDPF extending along the western margin of the LTB. The rupture patterns between the three WTDPF segments remain poorly understood. Fallen Leaf Lake (FLL), Cascade Lake, and Emerald Bay are three sub-basins of the LTB, located south of Lake Tahoe, that provide an opportunity to image primary earthquake deformation along the WTDPF and associated landslide deposits. We present results from recent (June 2011) high-resolution seismic CHIRP surveys in FLL and Cascade Lake, as well as complete multibeam swath bathymetry coverage of FLL. Radiocarbon dates obtained from the new piston cores acquired in FLL provide age constraints on the older FLL slide deposits and build on and complement previous work that dated the most recent event (MRE) in Fallen Leaf Lake at ~4.1-4.5 k.y. BP. The CHIRP data beneath FLL image slide deposits that appear to correlate with contemporaneous slide deposits in Emerald Bay and Lake Tahoe. A major slide imaged in FLL CHIRP data is slightly younger than the Tsoyowata ash (7950-7730 cal yrs BP) identified in sediment cores and appears synchronous with a major Lake Tahoe slide deposit (7890-7190 cal yrs BP). The equivalent age of these slides suggests the penultimate earthquake on the WTDPF may have triggered them. If correct, we postulate a recurrence interval of ~3-4 k.y. These results suggest the FLL segment of the WTDPF is near its seismic recurrence cycle. Additionally, CHIRP profiles acquired in Cascade Lake image the WTDPF for the first time in this sub-basin, which is located near the transition zone between the FLL and Rubicon Point Sections of the WTDPF. We observe two fault-strands trending N45°W across southern Cascade Lake for ~450 m. The strands produce scarps of ~5 m and ~2.7 m, respectively, on the lake

  13. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    than do the blockquake and Parkfield data. This provided opportunities for discussing the difference between Poisson and normal distributions, how those differences affect our estimation of future earthquake probabilities, the importance of both the mean and the standard deviation in predicting future behavior from a sequence of events, and how conditional probability is used to help seismologists predict future earthquakes given a known or theoretical distribution of past earthquakes.
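
    The conditional-probability point above can be illustrated with a short calculation: given t years since the last event, the probability of an event within the next interval is (F(t+dt) - F(t)) / (1 - F(t)). Under an exponential (Poisson) model this is constant, while under a normal recurrence model it grows as the elapsed time approaches the mean interval. The mean and standard deviation below are illustrative, not Parkfield estimates.

      import math

      def norm_cdf(x, mu, sigma):
          return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

      def expon_cdf(x, mean):
          return 1.0 - math.exp(-x / mean)

      def conditional(cdf, t, dt):
          """P(T <= t+dt | T > t) = (F(t+dt) - F(t)) / (1 - F(t))."""
          return (cdf(t + dt) - cdf(t)) / (1.0 - cdf(t))

      mean_interval, sigma = 25.0, 8.0     # years; illustrative values only
      for t in (5.0, 20.0, 30.0):
          p_pois = conditional(lambda x: expon_cdf(x, mean_interval), t, 10.0)
          p_norm = conditional(lambda x: norm_cdf(x, mean_interval, sigma), t, 10.0)
          print(f"{t:4.0f} yr elapsed: P(next 10 yr) Poisson = {p_pois:.2f}, normal = {p_norm:.2f}")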

  14. Estimating the Frequency of Horizontal Gene Transfer Using Phylogenetic Models of Gene Gain and Loss.

    PubMed

    Zamani-Dahaj, Seyed Alireza; Okasha, Mohamed; Kosakowski, Jakub; Higgs, Paul G

    2016-07-01

    We analyze patterns of gene presence and absence in a maximum likelihood framework with rate parameters for gene gain and loss. Standard methods allow independent gains and losses in different parts of a tree. While losses of the same gene are likely to be frequent, multiple gains need to be considered carefully. A gene gain could occur by horizontal transfer or by origin of a gene within the lineage being studied. If a gene is gained more than once, then at least one of these gains must be a horizontal transfer. A key parameter is the ratio of gain to loss rates, a/v. We consider the limiting case known as the infinitely many genes model, where a/v tends to zero and a gene cannot be gained more than once. The infinitely many genes model is used as a null model in comparison to models that allow multiple gains. Using genome data from cyanobacteria and archaea, it is found that the likelihood is significantly improved by allowing for multiple gains, but the average a/v is very small. The fraction of genes whose presence/absence pattern is best explained by multiple gains is only 15% in the cyanobacteria and 20% and 39% in two data sets of archaea. The distribution of rates of gene loss is very broad, which explains why many genes follow a treelike pattern of vertical inheritance, despite the presence of a significant minority of genes that undergo horizontal transfer.

  15. Accuracy of telemetry signal power loss in a filter as an estimate for telemetry degradation

    NASA Technical Reports Server (NTRS)

    Koerner, M. A.

    1989-01-01

    When telemetry data is transmitted through a communication link, some degradation in telemetry performance occurs as a result of the imperfect frequency response of the channel. The term telemetry degradation as used here is the increase in received signal power required to offset this filtering. The usual approach to assessing this degradation is to assume that it is equal to the signal power loss in the filtering, which is easily calculated. However, this approach neglects the effects of the nonlinear phase response of the filter, the effect of any reduction of the receiving system noise due to the filter, and intersymbol interference. Here, an exact calculation of the telemetry degradation, which includes all of the above effects, is compared with the signal power loss calculation for RF filtering of NRZ data on a carrier. The signal power loss calculation is found to be a reasonable approximation when the filter follows the point at which the receiving system noise is introduced, especially if the signal power loss is less than 0.5 dB. The signal power loss approximation is less valid when the receiving system noise is not filtered.
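
    The "signal power loss in the filter" approximation discussed above is simply the fraction of the transmitted signal spectrum passed by the filter. The sketch below evaluates it for random NRZ data and a one-pole low-pass filter; the filter shape and the bandwidth-to-symbol-rate ratios are illustrative choices, and the phase, noise-bandwidth, and intersymbol-interference effects included in the exact calculation are deliberately ignored here.

      import numpy as np

      def nrz_psd(f, T):
          """Baseband power spectral density of random NRZ data with symbol period T."""
          return T * np.sinc(f * T) ** 2      # np.sinc(x) = sin(pi*x)/(pi*x)

      def power_loss_db(bandwidth, symbol_rate, f_max_factor=20, n=20001):
          T = 1.0 / symbol_rate
          f = np.linspace(-f_max_factor * symbol_rate, f_max_factor * symbol_rate, n)
          h2 = 1.0 / (1.0 + (f / bandwidth) ** 2)        # |H(f)|^2 of a one-pole low-pass
          s = nrz_psd(f, T)
          passed_fraction = np.sum(h2 * s) / np.sum(s)   # uniform grid, so df cancels
          return -10.0 * np.log10(passed_fraction)

      for bw in (1.0, 2.0, 5.0):              # bandwidth in multiples of the symbol rate
          print(f"3 dB bandwidth = {bw:.0f} x symbol rate -> "
                f"signal power loss ~ {power_loss_db(bw, 1.0):.2f} dB")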

  16. A new macroseismic intensity prediction equation and magnitude estimates of the 1811-1812 New Madrid and 1886 Charleston, South Carolina, earthquakes

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Cramer, C. H.

    2013-12-01

    We develop an intensity prediction equation (IPE) for the Central and Eastern United States, explore differences between modified Mercalli intensities (MMI) and community internet intensities (CII) and the propensity for reporting, and estimate the moment magnitudes of the 1811-1812 New Madrid, MO, and 1886 Charleston, SC, earthquakes. We constrain the study with North American census data, the National Oceanic and Atmospheric Administration MMI dataset (responses between 1924 and 1985), and the USGS 'Did You Feel It?' CII dataset (responses between June 2000 and August 2012). The combined intensity dataset has more than 500,000 felt reports for 517 earthquakes with magnitudes between 2.5 and 7.2. The IPE has the basic form MMI = c1 + c2*M + c3*exp(λ) + c4*λ, where M is moment magnitude and λ is the mean log hypocentral distance. Previous IPEs use a limited dataset of MMI, do not differentiate between MMI and CII data in the CEUS, and do not account for spatial variations in population. These factors can have an impact at all magnitudes, especially the last factor at large magnitudes and small intensities, where the population drops to zero in the Atlantic Ocean and Gulf of Mexico. We assume that the reports of a given intensity have hypocentral distances that are log-normally distributed, the distribution of which is modulated by population and the propensity for individuals to report their experience. We do not account for variations in stress drop, regional variations in Q, or distance-dependent geometrical spreading. We simulate the distribution of reports of a given intensity accounting for population and use a grid search method to solve for the fraction of the population that reports the intensity, the standard deviation of the log-normal distribution, and the mean log hypocentral distance, which appears in the above equation. We find that lower intensities, both CII and MMI, are less likely to be reported than greater intensities. Further, there are strong spatial
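
    The stated functional form can be evaluated directly. The coefficients below are placeholders chosen only to produce a plausible decay of intensity with distance (the paper's fitted values are not given in the abstract), and the logarithm is assumed to be base 10.

      import math

      def mmi(magnitude, hypocentral_distance_km,
              c1=3.0, c2=1.3, c3=-0.002, c4=-2.0):     # hypothetical coefficients
          lam = math.log10(hypocentral_distance_km)    # mean log hypocentral distance
          return c1 + c2 * magnitude + c3 * math.exp(lam) + c4 * lam

      for r in (10, 50, 200, 600):
          print(f"M7.0 at {r:4d} km: predicted MMI ~ {mmi(7.0, r):.1f}")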

  17. Earthquake hazards: a national threat

    USGS Publications Warehouse

    ,

    2006-01-01

    Earthquakes are one of the most costly natural hazards faced by the Nation, posing a significant risk to 75 million Americans in 39 States. The risks that earthquakes pose to society, including death, injury, and economic loss, can be greatly reduced by (1) better planning, construction, and mitigation practices before earthquakes happen, and (2) providing critical and timely information to improve response after they occur. As part of the multi-agency National Earthquake Hazards Reduction Program, the U.S. Geological Survey (USGS) has the lead Federal responsibility to provide notification of earthquakes in order to enhance public safety and to reduce losses through effective forecasts based on the best possible scientific information.

  18. Estimating losses in heat networks coated with modern liquid crystal thermal insulation

    NASA Astrophysics Data System (ADS)

    Ilyin, R. A.

    2015-07-01

    One of the pressing issues in heat network operation in Russia is the loss of thermal energy during its transfer to consumers. According to experts, losses in heat networks reach 35-50%. In this work, some properties of thermo-insulating materials currently in use are described. The innovative TLM Ceramic liquid-crystal thermal insulation is presented through its claimed technical and economic advantages and field-performance data, together with the doubts of experts about its declared properties. On-site measurement data are presented for a section of the Astrakhan Severnaya heat and power plant hot-water system covered with a 2-mm-thick liquid-crystal thermal insulation layer. Specific heat losses from the hot-water system surface were determined, and the arguments for the inexpediency of applying TLM Ceramic liquid-crystal thermal insulation in heat-and-power engineering are discussed.
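
    Field measurements like those described above are typically compared against a series thermal-resistance estimate of the heat loss per metre of insulated pipe. The sketch below uses the standard cylindrical-conduction formula with hypothetical pipe geometry, conductivities, fluid and ambient temperatures, and an assumed outer convection coefficient, including a 2-mm coating layer.

      import math

      def pipe_heat_loss_per_m(t_fluid, t_ambient, layers, h_outer=10.0):
          """layers: list of (inner_radius_m, outer_radius_m, conductivity_W_per_mK)."""
          resistance = sum(math.log(r_out / r_in) / (2.0 * math.pi * k)
                           for r_in, r_out, k in layers)
          r_surface = layers[-1][1]
          resistance += 1.0 / (2.0 * math.pi * r_surface * h_outer)   # outer convection
          return (t_fluid - t_ambient) / resistance                   # W per metre of pipe

      layers = [
          (0.050, 0.054, 50.0),     # steel pipe wall (hypothetical geometry)
          (0.054, 0.056, 0.20),     # 2-mm coating; conductivity is an assumed value
      ]
      print(f"heat loss ~ {pipe_heat_loss_per_m(90.0, 5.0, layers):.0f} W/m")

    With only a 2-mm coating, the outer-surface convection resistance dominates and the computed loss stays high, which is in line with the experts' doubts about thin coatings mentioned above.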

  19. Estimating Earthquake Magnitude from the Kentucky Bend Scarp in the New Madrid Seismic Zone Using Field Geomorphic Mapping and High-Resolution LiDAR Topography

    NASA Astrophysics Data System (ADS)

    Kelson, K. I.; Kirkendall, W. G.

    2014-12-01

    Recent suggestions that the 1811-1812 earthquakes in the New Madrid Seismic Zone (NMSZ) ranged from M6.8-7.0 versus M8.0 have implications for seismic hazard estimation in the central US. We more accurately identify the location of the NW-striking, NE-facing Kentucky Bend scarp along the northern Reelfoot fault, which is spatially associated with the Lake County uplift, contemporary seismicity, and changes in the Mississippi River from the February 1812 earthquake. We use 1m-resolution LiDAR hillshades and slope surfaces, aerial photography, soil surveys, and field geomorphic mapping to estimate the location, pattern, and amount of late Holocene coseismic surface deformation. We define eight late Holocene to historic fluvial deposits, and delineate younger alluvia that are progressively inset into older deposits on the upthrown, western side of the fault. Some younger, clayey deposits indicate past ponding against the scarp, perhaps following surface deformational events. The Reelfoot fault is represented by sinuous breaks-in-slope cutting across these fluvial deposits, locally coinciding with shallow faults identified via seismic reflection data (Woolery et al., 1999). The deformation pattern is consistent with NE-directed reverse faulting along single or multiple SW-dipping fault planes, and the complex pattern of fluvial deposition appears partially controlled by intermittent uplift. Six localities contain scarps across correlative deposits and allow evaluation of cumulative surface deformation from LiDAR-derived topographic profiles. Displacements range from 3.4±0.2 m, to 2.2±0.2 m, 1.4±0.3 m, and 0.6±0.1 m across four progressively younger surfaces. The spatial distribution of the profiles argues against the differences being a result of along-strike uplift variability. We attribute the lesser displacements of progressively younger deposits to recurrent surface deformation, but do not yet interpret these initial data with respect to possible earthquake

  20. Microwave continuum measurements and estimates of mass loss rates for cool giants and supergiants

    NASA Technical Reports Server (NTRS)

    Drake, S. A.; Linsky, J. L.

    1986-01-01

    Results are presented from a sensitive 6-cm radio continuum survey, conducted with the NRAO VLA, of 39 of the nearest single cool giants and supergiants of G0-M5 spectral types. The survey was conducted to obtain accurate measurements of the ionized-gas mass loss rates for a representative sample of such stars, in order to furnish constraints on, and a better understanding of, the total mass loss rates. The inferred angular diameters for the cool giant sources are noted to be twice as large as the photospheric angular diameters, implying that these stars are surrounded by extended chromospheres containing warm, partially ionized gas.

  1. DXA, bioelectrical impedance, ultrasonography and biometry for the estimation of fat and lean mass in cats during weight loss

    PubMed Central

    2012-01-01

    Background Few equations have been developed in veterinary medicine, compared to human medicine, to predict body composition. The present study was done to evaluate the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA) and ultrasonography (US) in cats, proposing equations to estimate fat (FM) and lean (LM) body mass, with dual energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) in a weight loss program were used. DXA, BIO, BIA and US were performed in the obese state (T0; obese animals), after 10% of weight loss (T1) and after 20% of weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA and US. The best models chosen were evaluated by simple regression analysis, and the means predicted vs. those determined by DXA were compared to verify the accuracy of the equations. Results The independent variables determined by BIO, BIA and US that best correlated (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance) and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, p value and r2, 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted FM and one predicted LM in cats. Conclusions The equations with two variables are better to use because they are effective and offer an alternative method for estimating body composition in the clinical routine. For estimating lean mass, equations using body weight combined with biometric measures can be proposed. For estimating fat mass, equations using body weight combined with bioimpedance analysis can be proposed. PMID:22781317
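
    The kind of two-variable prediction equation the study proposes can be sketched as an ordinary least-squares fit of DXA fat mass on body weight and the ultrasound fat-layer thickness, followed by a check of predicted against reference values. The data below are synthetic, not the study's 16 cats, and the fitted coefficients carry no veterinary meaning.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 48                                           # e.g. 16 cats x 3 time points (synthetic)
      bw   = rng.uniform(3.5, 7.5, n)                  # body weight, kg
      sflt = rng.uniform(0.2, 0.8, n)                  # subcutaneous fat layer thickness, cm
      fm_dxa = 0.45 * bw + 1.8 * sflt - 1.2 + rng.normal(0, 0.15, n)   # "reference" fat mass, kg

      X = np.column_stack([np.ones(n), bw, sflt])
      coef, *_ = np.linalg.lstsq(X, fm_dxa, rcond=None)   # ordinary least squares
      fm_pred = X @ coef

      r2 = 1.0 - np.sum((fm_dxa - fm_pred) ** 2) / np.sum((fm_dxa - fm_dxa.mean()) ** 2)
      print("FM (kg) =", " + ".join(f"{c:.2f}*{v}" for c, v in zip(coef, ["1", "BW", "SFLT"])))
      print(f"r^2 against the reference values: {r2:.2f}")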

  2. Estimating tempo and mode of Y chromosome turnover: explaining Y chromosome loss with the fragile Y hypothesis.

    PubMed

    Blackmon, Heath; Demuth, Jeffery P

    2014-06-01

    Chromosomal sex determination is phylogenetically widespread, having arisen independently in many lineages. Decades of theoretical work provide predictions about sex chromosome differentiation that are well supported by observations in both XY and ZW systems. However, the phylogenetic scope of previous work gives us a limited understanding of the pace of sex chromosome gain and loss and why Y or W chromosomes are more often lost in some lineages than others, creating XO or ZO systems. To gain phylogenetic breadth we therefore assembled a database of 4724 beetle species' karyotypes and found substantial variation in sex chromosome systems. We used the data to estimate rates of Y chromosome gain and loss across a phylogeny of 1126 taxa estimated from seven genes. Contrary to our initial expectations, we find that highly degenerated Y chromosomes of many members of the suborder Polyphaga are rarely lost, and that cases of Y chromosome loss are strongly associated with chiasmatic segregation during male meiosis. We propose the "fragile Y" hypothesis, that recurrent selection to reduce recombination between the X and Y chromosome leads to the evolution of a small pseudoautosomal region (PAR), which, in taxa that require XY chiasmata for proper segregation during meiosis, increases the probability of aneuploid gamete production, with Y chromosome loss. This hypothesis predicts that taxa that evolve achiasmatic segregation during male meiosis will rarely lose the Y chromosome. We discuss data from mammals, which are consistent with our prediction.

  3. Estimates of the prevalence of anomalous signal losses in the Yellow Sea derived from acoustic and oceanographic computer model simulations

    NASA Astrophysics Data System (ADS)

    Chin-Bing, Stanley A.; King, David B.; Warn-Varnas, Alex C.; Lamb, Kevin G.; Hawkins, James A.; Teixeira, Marvi

    2002-05-01

    The results from collocated oceanographic and acoustic simulations in a region of the Yellow Sea near the Shandong peninsula have been presented [Chin-Bing et al., J. Acoust. Soc. Am. 108, 2577 (2000)]. In that work, the tidal flow near the peninsula was used to initialize a 2.5-dimensional ocean model [K. G. Lamb, J. Geophys. Res. 99, 843-864 (1994)] that subsequently generated internal solitary waves (solitons). The validity of these soliton simulations was established by matching satellite imagery taken over the region. Acoustic propagation simulations through this soliton field produced results similar to the anomalous signal loss measured by Zhou, Zhang, and Rogers [J. Acoust. Soc. Am. 90, 2042-2054 (1991)]. Analysis of the acoustic interactions with the solitons also confirmed the hypothesis that the loss mechanism involved acoustic mode coupling. Recently we have attempted to estimate the prevalence of these anomalous signal losses in this region. These estimates were made from simulating acoustic effects over an 80-hour space-time evolution of soliton packets. Examples will be presented that suggest the conditions necessary for anomalous signal loss may be more prevalent than previously thought. [Work supported by ONR/NRL and by a High Performance Computing DoD grant.]

  4. Estimating Tempo and Mode of Y Chromosome Turnover: Explaining Y Chromosome Loss With the Fragile Y Hypothesis

    PubMed Central

    Blackmon, Heath; Demuth, Jeffery P.

    2014-01-01

    Chromosomal sex determination is phylogenetically widespread, having arisen independently in many lineages. Decades of theoretical work provide predictions about sex chromosome differentiation that are well supported by observations in both XY and ZW systems. However, the phylogenetic scope of previous work gives us a limited understanding of the pace of sex chromosome gain and loss and why Y or W chromosomes are more often lost in some lineages than others, creating XO or ZO systems. To gain phylogenetic breadth we therefore assembled a database of 4724 beetle species’ karyotypes and found substantial variation in sex chromosome systems. We used the data to estimate rates of Y chromosome gain and loss across a phylogeny of 1126 taxa estimated from seven genes. Contrary to our initial expectations, we find that highly degenerated Y chromosomes of many members of the suborder Polyphaga are rarely lost, and that cases of Y chromosome loss are strongly associated with chiasmatic segregation during male meiosis. We propose the “fragile Y” hypothesis, that recurrent selection to reduce recombination between the X and Y chromosome leads to the evolution of a small pseudoautosomal region (PAR), which, in taxa that require XY chiasmata for proper segregation during meiosis, increases the probability of aneuploid gamete production, with Y chromosome loss. This hypothesis predicts that taxa that evolve achiasmatic segregation during male meiosis will rarely lose the Y chromosome. We discuss data from mammals, which are consistent with our prediction. PMID:24939995

  5. Sensitivity and uncertainty analysis for the annual P loss estimator (APLE) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...

  6. Wind storm loss estimations in the Canton of Vaud (Western Switzerland)

    NASA Astrophysics Data System (ADS)

    Etienne, C.; Beniston, M.

    2012-12-01

    A storm loss model that was first developed for Germany is applied to the much smaller geographic area of the canton of Vaud, in Western Switzerland. Twenty-four major wind storms that struck the region during the period 1990-2010 are analysed, and the outputs are compared to loss observations provided by an insurance company. Model inputs include population data and daily maximum wind speeds from weather stations. These measured wind speeds are regionalised over the canton of Vaud following different methods, using either basic interpolation techniques from Geographic Information Systems (GIS) or an existing extreme wind speed map of Switzerland whose values are used as thresholds. A third method considers the wind power, integrating wind speeds temporally over storm duration to calculate losses. Outputs show that the model leads to similar results for all methods, with Pearson's correlation and Spearman's rank coefficients of roughly 0.7. Bootstrap techniques are applied to test the model's robustness. Impacts of population growth and possible changes in storminess under climate change are also examined for this region, highlighting large shifts in economic losses related to small increases in input wind speeds.
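
    Storm loss models of the German lineage referred to above are commonly built on a population-weighted, cubed exceedance of each station's local high-percentile wind speed (the Klawa-and-Ulbrich form). The sketch below implements that index; the station climatologies, population weights, storm wind speeds, and the calibration factor are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(2)
      stations = {
          # name: (population weight, synthetic climatology of daily-max wind speed in m/s)
          "Lausanne": (140_000, rng.weibull(2.0, 5000) * 8.0),
          "Payerne":  (10_000,  rng.weibull(2.0, 5000) * 9.0),
          "Aigle":    (10_000,  rng.weibull(2.0, 5000) * 7.5),
      }

      def storm_loss_index(storm_winds, calibration=1.0):
          """storm_winds: station name -> daily-max wind speed (m/s) during the storm."""
          index = 0.0
          for name, (pop, climatology) in stations.items():
              v98 = np.percentile(climatology, 98)             # local 98th-percentile wind
              excess = max(0.0, storm_winds[name] / v98 - 1.0)  # relative exceedance
              index += pop * excess ** 3                        # cubed, population-weighted
          return calibration * index

      lothar_like = {"Lausanne": 35.0, "Payerne": 38.0, "Aigle": 30.0}
      print(f"loss index: {storm_loss_index(lothar_like):.1f}")

    In such models the index is then calibrated against insured losses, which is where observations like the insurer's records enter.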

  7. Turkish Compulsory Earthquake Insurance (TCIP)

    NASA Astrophysics Data System (ADS)

    Erdik, M.; Durukal, E.; Sesetyan, K.

    2009-04-01

    Through a World Bank project, the government-sponsored Turkish Catastrophe Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (226 mostly small earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in south-east Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about US$90 for a 100-square-meter reinforced concrete building in the most hazardous zone, with a 2% deductible. The earthquake-engineering-related shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings) with only a 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation, and the sum insured) needs to be overhauled. Especially for large cities, models can be developed in which the expected earthquake performance of a housing unit (and consequently its insurance premium) can be assessed on the basis of the location of the unit (microzoned earthquake hazard) and its basic structural attributes (earthquake vulnerability relationships). With such an approach, in the future the TCIP can contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.
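
    The premium figures quoted above amount to simple tariff arithmetic: a flat rate applied to the sum insured, with the 2% deductible subtracted from each claim. The replacement cost per square metre used below is a hypothetical assumption chosen only to reproduce the roughly US$90 premium.

      def tcip_premium(area_m2, cost_per_m2, rate=0.0013, cap=90_000):
          """Return (sum insured, annual premium) under a flat tariff rate and coverage cap."""
          sum_insured = min(area_m2 * cost_per_m2, cap)
          return sum_insured, rate * sum_insured

      def tcip_payout(damage, sum_insured, deductible_rate=0.02):
          """Claim payout after applying the percentage deductible to the sum insured."""
          deductible = deductible_rate * sum_insured
          return max(0.0, min(damage, sum_insured) - deductible)

      sum_insured, premium = tcip_premium(100, 700)     # 100 m2 at ~700 USD/m2 (assumed)
      print(f"sum insured ${sum_insured:,.0f}, annual premium ~ ${premium:.0f}")
      print(f"payout on a $20,000 loss: ${tcip_payout(20_000, sum_insured):,.0f}")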

  8. An approach to estimating radiological risk of offsite release from a design basis earthquake for the Process Experimental Pilot Plant (PREPP)

    SciTech Connect

    Lucero, V.; Meale, B.M.; Reny, D.A.; Brown, A.N.

    1990-09-01

    In compliance with Department of Energy (DOE) Order 6430.1A, a seismic analysis was performed on DOE's Process Experimental Pilot Plant (PREPP), a facility for processing low-level and transuranic (TRU) waste. Because no hazard curves were available fo