Science.gov

Sample records for earthquake loss estimation

  1. Earthquake Loss Estimation Uncertainties

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander

    2013-04-01

    The paper addresses the reliability of loss assessment following strong earthquakes performed by worldwide systems operating in emergency mode. Timely and correct action just after an event can yield significant benefits in saving lives; in this case, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and humanitarian assistance. Such rough information can be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of a disaster. Uncertainties on the parameters used in the estimation process are numerous and large: knowledge about the physical phenomena and uncertainties on the parameters used to describe them; the overall adequacy of modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of shaking; etc. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws that are poorly known (if not beyond the reach of engineers for a large portion of the building stock); while a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. How the population inside the buildings at the time of shaking is affected by the physical damage to those buildings is, by far, not precisely known. The paper analyzes the influence of uncertainties in the determination of strong event parameters by alert seismological surveys and of the simulation models used at all stages, from estimating shaking intensity

  2. Loss estimation of Mamberamo earthquake

    NASA Astrophysics Data System (ADS)

    Damanik, R.; Sedayo, H.

    2016-05-01

    The tectonics of Papua are dominated by the oblique collision of the Pacific plate along the north side of the island. Very high relative plate motion (about 120 mm/year) between the Pacific and Papua-Australian plates gives this region a very high earthquake production rate, about twice that of Sumatra, the western margin of Indonesia. Most of the seismicity beneath the island of New Guinea is clustered near the Huon Peninsula, the Mamberamo region, and the Bird's Neck. At 04:41 local time (GMT+9) on July 28th, 2015, a large earthquake of Mw = 7.0 occurred on the West Mamberamo Fault System. The focal mechanisms are dominated by northwest-trending thrust solutions. A GMPE and ATC vulnerability curves were used to estimate the distribution of damage. The mean estimated loss caused by this earthquake is IDR 78.6 billion. We estimate that insured losses will be only a small portion of the total, due to deductibles.

  3. Estimating economic losses from earthquakes using an empirical approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2013-01-01

    We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
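
    To make the scaling concrete, the sketch below shows the general shape of such an empirical model in Python. It is a toy illustration under stated assumptions, not the published PAGER implementation: the lognormal loss-ratio curve mirrors the form PAGER uses for its empirical fatality model, and the function name and all parameter values are invented for the example.

      import numpy as np
      from scipy.stats import norm

      def economic_loss_estimate(exposed_pop, mmi_levels, gdp_per_capita,
                                 alpha, theta, beta):
          """Toy empirical economic-loss model in the spirit of the approach
          described above (illustrative parameters, not the calibrated ones).

          exposed_pop[i] -- population exposed to shaking level mmi_levels[i]
          gdp_per_capita -- annual per-capita GDP of the country (US$)
          alpha          -- country-specific multiplier turning GDP into
                            economic exposure (wealth not captured by GDP)
          theta, beta    -- median and spread of the lognormal loss-ratio curve
          """
          exposed_pop = np.asarray(exposed_pop, dtype=float)
          mmi = np.asarray(mmi_levels, dtype=float)
          exposure = exposed_pop * gdp_per_capita * alpha
          loss_ratio = norm.cdf(np.log(mmi / theta) / beta)
          return float(np.sum(exposure * loss_ratio))

      # Hypothetical exposure per MMI bin VI..IX:
      loss = economic_loss_estimate(
          exposed_pop=[2_000_000, 800_000, 150_000, 20_000],
          mmi_levels=[6, 7, 8, 9],
          gdp_per_capita=5_000, alpha=3.0, theta=9.5, beta=0.3)
      print(f"estimated direct economic loss: ${loss / 1e9:.1f} billion")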

  4. Status of developing Earthquake Loss Estimation in Korea Using HAZUS

    NASA Astrophysics Data System (ADS)

    Kang, S. Y.; Kim, K. H.

    2015-12-01

    HAZUS, a tool for estimating losses due to natural hazards, has been used in Korea. In the early development of an earthquake loss estimation system in Korea, a ShakeMap for a magnitude 6.7 scenario earthquake in southeastern Korea, prepared by the USGS, was used. The attenuation relation proposed by Boore et al. (1997) was assumed to simulate the decay of strong ground motion with distance. During this initial stage, details of local site characteristics and attenuation relations were not properly accounted for. Later, the attenuation relations proposed by Sadigh et al. (1997) for site classes B, C, and D were reviewed and applied to the Korean Peninsula. Loss estimates were improved using these attenuation relations and the deterministic methods available in HAZUS. Most recently, a site classification map has been derived using geologic and geomorphologic data, which are readily available from the geologic and topographic maps of Korea. Loss estimates using the site classification map differ from earlier ones. For example, the earthquake loss estimate using the ShakeMap overestimates housing damage: 43% of houses are estimated to experience moderate or severe damage in the ShakeMap-based results, compared with 23% in those using the site classification map. The number of people seeking emergency shelter also differs from previous estimates. The revised estimates are considered more realistic, since the ground motions ensuing from earthquakes are better represented. In the next application, landslide, liquefaction, and fault information are planned to be implemented in HAZUS. The result is expected to better represent losses in an emergency situation and thus help in planning disaster response and hazard mitigation.

  5. Estimating annualized earthquake losses for the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Bausch, Douglas; Chen, Rui; Bouabid, Jawhar; Seligson, Hope

    2015-01-01

    We make use of the most recent National Seismic Hazard Maps (the 2008 and 2014 cycles), updated census data on population, and economic exposure estimates of general building stock to quantify annualized earthquake loss (AEL) for the conterminous United States. The AEL analyses were performed using the Federal Emergency Management Agency's (FEMA) Hazus software, which facilitated a systematic comparison of the influence of the 2014 National Seismic Hazard Maps in terms of annualized loss estimates in different parts of the country. The losses from an individual earthquake could easily exceed many tens of billions of dollars, and the long-term averaged value of losses from all earthquakes within the conterminous U.S. has been estimated to be a few billion dollars per year. This study estimated nationwide losses to be approximately $4.5 billion per year (in 2012$), roughly 80% of which can be attributed to the States of California, Oregon and Washington. We document the change in estimated AELs arising solely from the change in the assumed hazard map. The change from the 2008 map to the 2014 map results in a 10 to 20% reduction in AELs for the highly seismic States of the Western United States, whereas the reduction is even more significant for the Central and Eastern United States.
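
    As a minimal illustration of what "annualized" means here (not the Hazus computation itself, which convolves probabilistic hazard with damage functions), the sketch below frequency-weights an invented set of scenario losses:

      import numpy as np

      def annualized_loss(losses, annual_rates):
          """Annualized loss as the frequency-weighted sum of scenario
          losses; equivalently, the area under the loss exceedance curve."""
          return float(np.dot(losses, annual_rates))

      # Hypothetical scenarios: (loss in US$, mean annual occurrence rate)
      losses = np.array([50e9, 10e9, 1e9, 0.1e9])
      rates = np.array([1 / 2500, 1 / 500, 1 / 50, 1 / 10])
      print(f"AEL = ${annualized_loss(losses, rates) / 1e9:.3f} billion/yr")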

  6. Global Building Inventory for Earthquake Loss Estimation and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David; Porter, Keith

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature.

  7. Rapid estimation of earthquake loss based on instrumental seismic intensity: design and realization

    NASA Astrophysics Data System (ADS)

    Huang, Hongsheng; Chen, Lin; Zhu, Gengqing; Wang, Lin; Lin, Yanzhao; Wang, Huishan

    2013-11-01

    Thanks to our ability to acquire large volumes of real-time earthquake observation data, coupled with increased computer performance, near-real-time instrumental seismic intensity can be obtained from ground motion data observed by instruments together with appropriate spatial interpolation methods. By combining vulnerability results from earthquake disaster research with earthquake disaster assessment models, we can estimate the losses caused by devastating earthquakes, in an attempt to provide more reliable information for earthquake emergency response and decision support. This paper analyzes the latest progress on methods of rapid earthquake loss estimation in China and abroad. A new method that uses rapid reporting of instrumental seismic intensity to estimate earthquake loss is proposed, and the relevant software is developed. Finally, a case study using the ML 4.9 earthquake that occurred in Shunchang County, Fujian Province, on March 13, 2007 is given as an example of the proposed method.
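
    The interpolation step can be illustrated with a simple inverse-distance-weighting sketch; the abstract does not specify which scheme the authors use, so this is just one plausible choice, with made-up station values:

      import numpy as np

      def idw_intensity(station_xy, station_int, grid_xy, power=2.0):
          """Inverse-distance-weighted interpolation of instrumental
          intensities from stations to grid points."""
          s = np.asarray(station_xy, float)    # (n, 2) station coords, km
          v = np.asarray(station_int, float)   # (n,) observed intensities
          g = np.asarray(grid_xy, float)       # (m, 2) target grid points
          d = np.linalg.norm(g[:, None, :] - s[None, :, :], axis=2)
          d = np.maximum(d, 1e-6)              # guard against zero distance
          w = 1.0 / d**power
          return (w @ v) / w.sum(axis=1)

      stations = [(0, 0), (10, 0), (0, 10)]
      intensities = [7.8, 6.9, 7.1]
      print(idw_intensity(stations, intensities, [(2, 2), (8, 8)]))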

  8. Comparing population exposure to multiple Washington earthquake scenarios for prioritizing loss estimation studies

    USGS Publications Warehouse

    Wood, Nathan J.; Ratliff, Jamie L.; Schelling, John; Weaver, Craig S.

    2014-01-01

    Scenario-based, loss-estimation studies are useful for gauging potential societal impacts from earthquakes but can be challenging to undertake in areas with multiple scenarios and jurisdictions. We present a geospatial approach using various population data for comparing earthquake scenarios and jurisdictions to help emergency managers prioritize where to focus limited resources on data development and loss-estimation studies. Using 20 earthquake scenarios developed for the State of Washington (USA), we demonstrate how a population-exposure analysis across multiple jurisdictions based on Modified Mercalli Intensity (MMI) classes helps emergency managers understand and communicate where potential loss of life may be concentrated and where impacts may be more related to quality of life. Results indicate that certain well-known scenarios may directly impact the greatest number of people, whereas other, potentially lesser-known, scenarios impact fewer people but consequences could be more severe. The use of economic data to profile each jurisdiction’s workforce in earthquake hazard zones also provides additional insight on at-risk populations. This approach can serve as a first step in understanding societal impacts of earthquakes and helping practitioners to efficiently use their limited risk-reduction resources.
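
    At its core, such a comparison reduces to tabulating population by scenario, jurisdiction and MMI class. A small pandas sketch with invented numbers (not the study's data) shows the idea:

      import pandas as pd

      # One row per (scenario, county, MMI class) with the resident
      # population in that shaking class; values are hypothetical.
      df = pd.DataFrame({
          "scenario": ["Seattle M7.2"] * 4 + ["Cascadia M9.0"] * 4,
          "county": ["King", "King", "Pierce", "Pierce"] * 2,
          "mmi": ["VII", "VIII", "VII", "VIII"] * 2,
          "pop": [800_000, 350_000, 400_000, 90_000,
                  900_000, 120_000, 500_000, 60_000],
      })

      # Population in potentially life-threatening shaking (MMI >= VIII)
      # per scenario and jurisdiction -- a simple prioritization metric.
      severe = (df[df["mmi"].isin(["VIII", "IX", "X"])]
                .groupby(["scenario", "county"])["pop"].sum()
                .sort_values(ascending=False))
      print(severe)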

  9. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    USGS Publications Warehouse

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  10. A new Tool for Estimating Losses due to Earthquakes: QUAKELOSS2

    NASA Astrophysics Data System (ADS)

    Kaestli, P.; Wyss, M.; Bonjour, C.; Wiemer, S.; Wyss, B. M.

    2007-12-01

    WAPMERR and the Swiss Seismological Service are developing new software for estimating the mean damage to buildings and the numbers of injured and fatalities due to earthquakes worldwide. The focus for applications is real-time estimates of losses after earthquakes in countries without dense seismograph networks, with results that are easy for relief agencies to digest. The standard version of the software therefore addresses losses by settlement, by subdivisions of settlements, and for important pieces of infrastructure. However, a generic design, an open source policy and well-defined interfaces will allow the software to work on any gridded or discrete building stock data, to perform Monte Carlo simulations for error assessment, and to plug in more elaborate source models than simple point and line sources, and thus to compute realistic loss scenarios as well as probabilistic risk maps. It will provide interfaces to ShakeMap and PAGER, such that innovations developed for the latter programs may be used in QUAKELOSS2, and vice versa. A client-server design will provide a front-end web interface through which the user may directly manage servers, as well as allowing the software to be run in one's own laboratory. The input-output and mapping features will be designed to allow the user to run QUAKELOSS2 remotely with basic functions, as well as in a laboratory setting including a full-featured GIS setup for additional analysis. In many cases the input data (earthquake parameters as well as population and building stock data) are poorly known for developing countries. Calibration of the loss estimates, using past earthquakes that caused damage and WAPMERR's four years' experience of estimating losses, will help to produce approximately correct results in countries with strong earthquake activity. A worldwide standard dataset on population and building stock will be provided as open source together with the software. The dataset will be improved successively, based on input from satellite images

  11. Loss estimates for a Puente Hills blind-thrust earthquake in Los Angeles, California

    USGS Publications Warehouse

    Field, E.H.; Seligson, H.A.; Gupta, N.; Gupta, V.; Jordan, T.H.; Campbell, K.W.

    2005-01-01

    Based on OpenSHA and HAZUS-MH, we present loss estimates for an earthquake rupture on the recently identified Puente Hills blind-thrust fault beneath Los Angeles. Given a range of possible magnitudes and ground motion models, and presuming a full fault rupture, we estimate the total economic loss to be between $82 and $252 billion. This range is not only considerably higher than a previous estimate of $69 billion, but also implies the event would be the costliest disaster in U.S. history. The analysis has also provided the following predictions: 3,000-18,000 fatalities, 142,000-735,000 displaced households, 42,000-211,000 in need of short-term public shelter, and 30,000-99,000 tons of debris generated. Finally, we show that the choice of ground motion model can be more influential than the earthquake magnitude, and that reducing this epistemic uncertainty (e.g., via model improvement and/or rejection) could reduce the uncertainty of the loss estimates by up to a factor of two. We note that a full Puente Hills fault rupture is a rare event (once every ~3,000 years), and that other seismic sources pose significant risk as well. © 2005, Earthquake Engineering Research Institute.

  12. Regional earthquake loss estimation in the Autonomous Province of Bolzano - South Tyrol (Italy)

    NASA Astrophysics Data System (ADS)

    Huttenlau, Matthias; Winter, Benjamin

    2013-04-01

    Besides storm events, geophysical events cause a majority of natural hazard losses on a global scale. However, in alpine regions with a moderate earthquake risk potential, like the study area, and a correspondingly weak imprint on the collective memory, this source of risk is often neglected in contrast to gravitational and hydrological hazard processes. In this context, the comparative analysis of potential disasters and emergencies on a national level in Switzerland (Katarisk study) has shown that earthquakes are the most serious source of risk in general. In order to estimate the potential losses of earthquake events for different return periods and the loss dimensions of extreme events, the following study was conducted in the Autonomous Province of Bolzano - South Tyrol (Italy). The applied methodology follows the generally accepted risk concept based on the risk components hazard, elements at risk and vulnerability, whereby risk is not defined holistically (direct, indirect, tangible and intangible) but with the risk category of losses on buildings and inventory as a general risk proxy. The hazard analysis is based on a regional macroseismic scenario approach. The settlement centre of each of the 116 communities is defined as a potential epicentre. For each epicentre, four epicentral scenarios (return periods of 98, 475, 975 and 2475 years) are calculated based on the simple but approved and generally accepted attenuation law of Sponheuer (1960). The relevant input parameters for the epicentral scenarios are (i) the macroseismic intensity and (ii) the focal depth. The macroseismic intensities are based on a probabilistic seismic hazard analysis (PSHA) of the Italian earthquake catalogue at the community level (Dipartimento della Protezione Civile). The relevant focal depths are taken as a mean within a defined buffer from the focal depths of the harmonized earthquake catalogues of Italy and Switzerland as well as
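
    For orientation, a commonly cited Kövesligethy/Sponheuer form of this intensity attenuation law is sketched below; the abstract does not give the exact coefficients used in the study, so the absorption value here is purely illustrative.

      import numpy as np

      def intensity_sponheuer(i0, depth_km, epic_dist_km, alpha=0.001):
          """I = I0 - 3*log10(r/h) - 3*alpha*(r - h)*log10(e), with
          hypocentral distance r = sqrt(d^2 + h^2), focal depth h and
          absorption coefficient alpha (1/km)."""
          h = depth_km
          r = np.sqrt(np.asarray(epic_dist_km, float) ** 2 + h**2)
          return i0 - 3 * np.log10(r / h) - 3 * alpha * (r - h) * np.log10(np.e)

      # Epicentral intensity VIII, focal depth 8 km:
      print(intensity_sponheuer(8.0, 8.0, [0, 5, 10, 20, 50]))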

  13. Ways to increase the reliability of earthquake loss estimations in emergency mode

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander

    2016-04-01

    The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many other countries show that the authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can result in significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of the rough and rapid information provided by "global systems" (i.e. systems operated without regard to where the earthquake occurred) in emergency mode depends strongly on many factors related to the input data and the simulation models used in such systems. The paper analyses the contributions of different factors to the total "error" of fatality estimation in emergency mode. Examples of four strong events in Nepal, Italy and China lead to the conclusion that the reliability of loss estimates is influenced first of all by the uncertainties in the determination of event parameters (coordinates, magnitude, source depth); this group of factors ranks highest, with a degree of influence on the reliability of loss estimates of about 50%. Second place is taken by the group of factors responsible for macroseismic field simulation, with a degree of influence of about 30%. Last place is taken by the group of factors describing the built environment distribution and regional vulnerability functions, which contributes about 20% of the error in loss estimation. Ways to minimize the influence of these factors on the reliability of loss assessment in near real time are proposed. The first is to rate seismological surveys for different zones in an attempt to decrease uncertainties in the input earthquake parameters determined in emergency mode. The second is to "calibrate" the "global systems" drawing advantage

  14. Estimating earthquake potential

    USGS Publications Warehouse

    Page, R.A.

    1980-01-01

    The hazards to life and property from earthquakes can be minimized in three ways. First, structures can be designed and built to resist the effects of earthquakes. Second, the location of structures and human activities can be chosen to avoid or to limit the use of areas known to be subject to serious earthquake hazards. Third, preparations for an earthquake in response to a prediction or warning can reduce the loss of life and damage to property as well as promote a rapid recovery from the disaster. The success of the first two strategies, earthquake engineering and land use planning, depends on being able to reliably estimate the earthquake potential. The key considerations in defining the potential of a region are the location, size, and character of future earthquakes and frequency of their occurrence. Both historic seismicity of the region and the geologic record are considered in evaluating earthquake potential. 

  15. Estimation of damage and human losses due to earthquakes worldwide - QLARM strategy and experience

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Rosset, P.; Wyss, M.; Wiemer, S.; Bonjour, C.; Cua, G.

    2009-04-01

    Within the framework of the IMPROVE project, we are constructing our second-generation loss estimation tool QLARM (earthQuake Loss Assessment for Response and Mitigation). At the same time, we are upgrading the input data to be used in real-time and scenario modes. The software and databases will be open to all scientific users. The estimates include: (1) the total number of fatalities and injured; (2) casualties by settlement; (3) the percentage of buildings in five damage grades in each settlement; (4) a map showing mean damage by settlement; and (5) the functionality of large medical facilities. We present here our strategy and progress so far in constructing and calibrating the new tool. The QLARM worldwide database of elements at risk consists of point and discrete city models with the following parameters: (1) soil amplification factors; (2) the distribution of building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98); (3) the most recent population numbers by settlement or district; (4) information regarding medical facilities, where available. We calculate the seismic demand in terms of (a) macroseismic (seismic intensity) or (b) instrumental (PGA) parameters. Attenuation relationships predicting both parameters will be used for different regions worldwide, considering the tectonic regime and wave propagation characteristics. We estimate damage and losses using: (i) vulnerability models pertinent to EMS-98 vulnerability classes; (ii) building collapse rates pertinent to different regions worldwide; and (iii) casualty matrices pertinent to EMS-98 vulnerability classes. We also provide approximate estimates of the functionality of large medical facilities, considering their structural and non-structural damage and the loss of function of medical equipment and installations. We calibrate the QLARM database and the loss estimation tool using macroseismic observations and information regarding damage and human losses from past earthquakes.

  16. A simulation of Earthquake Loss Estimation in Southeastern Korea using HAZUS and the local site classification Map

    NASA Astrophysics Data System (ADS)

    Kang, S.; Kim, K.

    2013-12-01

    Regionally varying seismic hazards can be estimated using an earthquake loss estimation system (e.g. HAZUS-MH). Estimates for actual earthquakes help federal and local authorities develop rapid, effective recovery measures; estimates for scenario earthquakes help in designing a comprehensive earthquake hazard mitigation plan. Local site characteristics influence the ground motion. Although direct measurements are desirable for constructing a site-amplification map, such data are expensive and time-consuming to collect. We therefore derived a site classification map of the southern Korean Peninsula using geologic and geomorphologic data, which are readily available for the entire region. Class B sites (mainly rock) are predominant in the area, although localized areas of softer soils are found along major rivers and seashores. The site classification map was compared with independent site classification studies to confirm that it effectively represents the local site amplification behavior during an earthquake. We then estimated the losses due to a magnitude 6.7 scenario earthquake in Gyeongju, southeastern Korea, with and without the site classification map. Significant differences in loss estimates were observed. The loss without the site classification map decreased smoothly with increasing epicentral distance, while the loss with the site classification map varied from region to region, reflecting both epicentral distance and local site effects. The major cause of the large loss expected in Gyeongju is the short epicentral distance. Pohang Nam-Gu is located farther from the earthquake source region; nonetheless, the loss estimates in that more remote city are as large as those in Gyeongju and are attributed to the site effect of the soft soil found widely in the area.
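
    The mechanism behind these differences is that the same rock-level motion is amplified differently by each site class. The factors below are rough NEHRP-style short-period values for illustration only, not the ones used in the study:

      # Illustrative short-period amplification factors by site class.
      SHORT_PERIOD_AMP = {"B": 1.0, "C": 1.2, "D": 1.6, "E": 2.5}

      def surface_pga(rock_pga_g, site_class):
          """Scale rock PGA by a site-class amplification factor."""
          return rock_pga_g * SHORT_PERIOD_AMP[site_class]

      for sc in "BCDE":
          print(sc, f"{surface_pga(0.20, sc):.2f} g")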

  17. Integration of Near-fault Earthquake Ground Motion Simulations in Damage and Loss Estimation Procedures

    NASA Astrophysics Data System (ADS)

    Faccioli, E.; Lagomarsino, S.; Demartinos, K.; Smerzini, C.; Stuppazzini, M.; Vanini, M.; Villani, M.; Smolka, A.; Allmann, A.

    2010-05-01

    In this contribution we investigate the advantages and limitations of integrating standard damage and loss estimation procedures with synthetic data from large-scale 3D numerical simulations, capable of reproducing the coupling of near-fault conditions, including the focal mechanism of the source and directivity effects, with complex geological configurations such as deep alluvial basins or irregular topographic profiles. In fact, the largest portion of damage and losses during a major earthquake occurs in near-field conditions, where earthquake ground motion is typically poorly constrained by standard attenuation relationships, which may not rest on a sufficiently detailed description of the seismic source or of the local geological conditions. As a case study we use a scenario earthquake of Mw 6.4 occurring at the town of Sulmona, Italy, along the active Mount Morrone fault. The area, located only 40 km south of L'Aquila, was selected in the frame of the Italian Project S2 (DPC-INGV 2007-2009) thanks to the amount of geological and seismological information available, which allowed us, on one hand, to perform near-fault 3D earthquake ground motion simulations and, on the other, a reliable quantification of the potential damage thanks to accurate data characterizing the building stock. The 3D simulations were carried out with a high-performance spectral element tool, GeoELSE (http://geoelse.stru.polimi.it), designed for linear, non-linear viscoelastic and viscoplastic wave propagation analyses in large-scale earth models, including the seismic source, the propagation path, the local near-surface geology and, if needed, the interaction with man-made structures. The parallel implementation of the GeoELSE code ensures a reasonable computing time to resolve tens of millions of degrees of freedom up to 2.5 Hz. Damage and loss evaluations based on the results of the numerical simulations are compared with

  18. Comparison of Loss Estimates for Greater Victoria, British Columbia, from Scenario Earthquakes using HAZUS - Implications for Risk, Response and Recovery

    NASA Astrophysics Data System (ADS)

    Zaleski, M. P.; Clague, J. J.

    2012-12-01

    Victoria, British Columbia, lies near the Cascadia subduction zone, where three distinct classes of earthquakes contribute to local seismic risk. The largest-magnitude events are subduction-interface earthquakes, which generate widespread shaking across the Pacific Northwest region from British Columbia to northern California. Interface-earthquake risk is mitigated somewhat by the low frequency of events and the distance from the source to populated areas. The largest contribution to the probabilistic hazard is from strong deep-focus earthquakes within the down-going Juan de Fuca slab. Intraslab quakes are frequent, but attenuation from depth results in smaller ground motions. The highest-loss scenarios are associated with major earthquakes on shallow west- to northwest-trending crustal faults that extend across Puget Sound and the southern Strait of Georgia. These faults are a result of compression in the North American plate associated with oblique subduction of the Juan de Fuca slab beneath southwestern British Columbia and northwestern Washington. Our understanding of frequency-magnitude relations for individual shallow-crustal faults is hampered by a widespread cover of Pleistocene glacial deposits, thus the risk is difficult to estimate. We have prepared shake maps for several scenario earthquakes that take into account local geologic conditions. We compare strong ground motions from local crustal fault sources with Cascadia plate-boundary, intraslab and probabilistic building code ground motions. Hazard maps from scenario events are combined with models of the built environment within the HAZUS platform to generate loss estimates. The results may be used to identify vulnerabilities, focus advance mitigation efforts, and guide response and recovery planning.

  19. Impact of Uncertainty on Loss Estimates for a Repeat of the 1908 Messina-Reggio Calabria Earthquake in Southern Italy

    SciTech Connect

    Franco, Guillermo; Shen-Tu, Bing Ming; Bazzurro, Paolo; Goretti, Agostino; Valensise, Gianluca

    2008-07-08

    Increasing sophistication in the insurance and reinsurance market is stimulating the move towards catastrophe models that offer a greater degree of flexibility in the definition of model parameters and model assumptions. This study explores the impact of uncertainty in the input parameters on the loss estimates by departing from the exclusive use of mean values to establish the earthquake event mechanism, the ground motion fields, or the damageability of the building stock. The potential losses due to a repeat of the 1908 Messina-Reggio Calabria event are calculated using different plausible alternatives found in the literature, encompassing 12 event scenarios, 2 different ground motion prediction equations, and 16 combinations of damage functions for the building stock, for a total of 384 loss scenarios. These results form the basis for a sensitivity analysis of the different assumptions on the loss estimates, allowing the model user to gauge the impact of the uncertainty in the input parameters and the potential spread of the model results. For the event under scrutiny, average losses would amount today to about 9,000 to 10,000 million euros (roughly 9-10 billion euros). The uncertainty in the model parameters is reflected in the high coefficient of variation of this loss, approximately 45%. The choice of ground motion prediction equations and of vulnerability functions for the building stock contributes the most to the uncertainty in the loss estimates. This indicates that the application of non-local-specific information has a great impact on the spread of potential catastrophic losses. In order to close this uncertainty gap, more exhaustive documentation practices in insurance portfolios will have to go hand in hand with greater flexibility in the model input parameters.
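
    The combinatorial bookkeeping (12 × 2 × 16 = 384 runs) and the resulting coefficient of variation can be sketched as follows, with a random stub standing in for a full catastrophe-model evaluation:

      import itertools
      import random
      import statistics

      random.seed(1)

      def run_loss(event, gmpe, damage_set):
          # Placeholder for a full model run (hazard -> damage -> loss);
          # draws a lognormal loss in euros with a median near 9 billion.
          return random.lognormvariate(22.9, 0.45)

      events, gmpes, damage_sets = range(12), range(2), range(16)
      losses = [run_loss(*c)
                for c in itertools.product(events, gmpes, damage_sets)]
      mean = statistics.fmean(losses)
      cv = statistics.stdev(losses) / mean
      print(f"runs={len(losses)}  mean={mean / 1e9:.1f} bn EUR  CV={cv:.0%}")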

  20. Observed and estimated economic losses in Guadeloupe (French Antilles) after Les Saintes Earthquake (2004). Application to risk comparison

    NASA Astrophysics Data System (ADS)

    Monfort, Daniel; Reveillère, Arnaud; Lecacheux, Sophie; Muller, Héloise; Grisanti, Ludovic; Baills, Audrey; Bertil, Didier; Sedan, Olivier; Tinard, Pierre

    2013-04-01

    The main objective of this work is to compare the potential direct economic losses from two different hazards in Guadeloupe (French Antilles), earthquakes and storm surges, for different return periods. In order to validate some of the hypotheses concerning building typologies and their insured values, estimated economic losses are compared with real loss data from an actual event. In 2004 an Mw 6.3 earthquake struck Les Saintes, a small archipelago in the south of Guadeloupe. The heaviest intensities were VIII in the municipalities of Les Saintes, decreasing from VII to IV in the other municipalities of Guadeloupe. CCR, the French reinsurance organization, provided the total insured economic losses estimated per municipality (as of 2011) and the insurance penetration ratio, that is, the ratio of insured to exposed elements per municipality. Other information about damaged structures is quite irregular across the archipelago, the only reliable observation being the macroseismic intensity per municipality (from a field survey by BCSF). These data at Guadeloupe's scale were compared with the results of a retrospective damage scenario for this earthquake, built from vulnerability data for current buildings and the mean economic value of each building type, and taking into account local amplification effects on earthquake ground motion. In general the results are quite similar, but with some significant differences. The scenario results correlate with the spatial attenuation of earthquake intensity: the heaviest economic losses are concentrated in the municipalities exposed to considerable, damaging intensities (VII to VIII). On the other hand, the CCR data show that heavy economic damage occurred not only in the most impacted cities but also in the municipalities of the archipelago that are most important in terms of economic activity

  21. Planning a Preliminary program for Earthquake Loss Estimation and Emergency Operation by Three-dimensional Structural Model of Active Faults

    NASA Astrophysics Data System (ADS)

    Ke, M. C.

    2015-12-01

    Large earthquakes often cause serious economic losses and many deaths. Because the magnitude, time and location of earthquakes still cannot be predicted, pre-disaster risk modeling and post-disaster operations are essential for reducing earthquake damage. To understand the disaster risk of earthquakes, earthquake simulation techniques are usually used to build earthquake scenarios, with point sources, fault line sources and fault plane sources being the models most often used as the seismic sources of those scenarios. The assessments made with these different models for risk assessment and emergency operations work well, but their accuracy can still be improved. This program brings together experts and scholars from Taiwan University, National Central University, and National Cheng Kung University and uses historical earthquake records, geological data and geophysical data to build three-dimensional structural planes of active faults at depth. The purpose is to replace projected fault planes with subsurface fault planes that are closer to reality. The accuracy of earthquake prevention analyses can be improved by this database, and the three-dimensional data can then be applied to different stages of disaster prevention. Before a disaster, the results of earthquake risk analysis obtained with the three-dimensional fault plane data are closer to the real damage. During a disaster, the three-dimensional fault plane data can help to infer the aftershock distribution and the areas of serious damage. The program used 14 geological profiles to build the three-dimensional data of the Hsinchu and Hsincheng faults in 2015. Other active faults will be completed in 2018 and actually applied to earthquake disaster prevention.

  22. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH Zurich, is capable of incorporating regional variability and the sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations; 2. estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (shake mapping); 3. incorporating strong ground motion and other empirical macroseismic data to improve the shake map; 4. estimating the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of an inventory of the human-built environment (loss mapping). Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macroeconomic loss quantifiers) in urban areas. The basic shake mapping is similar to the Level 0 and Level 1 analysis; however, options are available for more sophisticated treatment of site response through externally entered data and improvement of the shake map through incorporation

  23. Too generous to a fault? Is reliable earthquake safety a lost art? Errors in expected human losses due to incorrect seismic hazard estimates

    NASA Astrophysics Data System (ADS)

    Bela, James

    2014-11-01

    "One is well advised, when traveling to a new territory, to take a good map and then to check the map with the actual territory during the journey." In just such a reality check, Global Seismic Hazard Assessment Program (GSHAP) maps (prepared using PSHA) portrayed a "low seismic hazard," which was then also assumed to be the "risk to which the populations were exposed." But time-after-time-after-time the actual earthquakes that occurred were not only "surprises" (many times larger than those implied on the maps), but they were often near the maximum potential size (Maximum Credible Earthquake or MCE) that geologically could occur. Given these "errors in expected human losses due to incorrect seismic hazard estimates" revealed globally in these past performances of the GSHAP maps (> 700,000 deaths 2001-2011), we need to ask not only: "Is reliable earthquake safety a lost art?" but also: "Who and what were the `Raiders of the Lost Art?' "

  24. A new method for the production of social fragility functions and the result of its use in worldwide fatality loss estimation for earthquakes

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    A review of over 200 fatality models for earthquake loss estimation, produced by various authors over the past 50 years, has identified the key parameters that influence fatality estimation in each of these models. These are often very specific and cannot be readily adapted globally. In the author's doctoral dissertation, a new method is used for the regression of fatalities against intensity, using loss functions based not only on fatalities but also on population models and other socioeconomic parameters constructed through time for every country worldwide for the period 1900-2013. A calibration of the functions was undertaken for 1900-2008, and each individual quake from 2009-2013 was analysed in real time, in conjunction with www.earthquake-report.com. Using the CATDAT Damaging Earthquakes Database, which contains socioeconomic loss information for 7208 damaging earthquake events from 1900-2013, including the disaggregation of secondary effects, fatality estimates for over 2035 events have been re-examined for 1900-2013. In addition, 99 of these events have detailed data for individual cities and towns, or have been reconstructed to give a death rate as a percentage of population. Many historical isoseismal maps and macroseismic intensity datapoint surveys collected globally have been digitised and modelled, covering around 1353 of these 2035 fatal events, to include an estimate of the population, occupancy and socioeconomic climate at the time of the event in each intensity bracket. In addition, 1651 events that caused damage but no fatalities have been examined in the same way. Socioeconomic and engineering indices such as HDI and building vulnerability have been produced at country and state/province level, leading to a dataset that allows regressions not only on a static view of risk but also allowing for the change in the socioeconomic climate between earthquake events. This means that a year 1920 event in a country will not simply be

  25. Rapid exposure and loss estimates for the May 12, 2008 Mw 7.9 Wenchuan earthquake provided by the U.S. Geological Survey's PAGER system

    USGS Publications Warehouse

    Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.

    2008-01-01

    One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by governments, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manual, revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.

  26. Trends in global earthquake loss

    NASA Astrophysics Data System (ADS)

    Arnst, Isabel; Wenzel, Friedemann; Daniell, James

    2016-04-01

    Based on the CATDAT damage and loss database, we analyse global trends of earthquake losses (in current values) and fatalities for the period 1900-2015 from a statistical perspective. For this time period the data are complete for magnitudes above 6. First, we study the basic statistics of the losses and find that losses below US$10 billion approximately satisfy a power law with an exponent of 1.7 for the cumulative distribution. Higher loss values are modelled with the Generalized Pareto Distribution (GPD). The 'transition' between power law and GPD is determined with the mean excess function. We split the data set into pre-1955 and post-1955 loss data, as the exposure differs significantly between these periods due to population growth. The annual average loss (AAL) for direct damage from events below US$10 billion differs by a factor of 6, whereas incorporating the extreme loss events increases the AAL from US$25 billion/yr to US$30 billion/yr. Annual average deaths (AAD) show little (30%) difference for events below 6,000 fatalities, and AAD values of 19,000 and 26,000 deaths per year if extreme values are incorporated. With data on the global Gross Domestic Product (GDP), which reflects annual expenditures (consumption, investment, government spending), and on capital stock, we relate losses to the economic capacity of societies and find that GDP (in real terms) grows much faster than losses, so that the latter play a decreasing role given the growing prosperity of mankind. This reasoning does not necessarily apply on a regional scale. The main conclusions of the analysis are that (a) a correct projection of historic loss values to present-day US$ values is critical; (b) extreme value analysis is mandatory; (c) growing exposure is reflected in the AAL and AAD results for the pre- and post-1955 periods; and (d) scaling loss values with global GDP data indicates that the relative size of losses, from a global perspective, decreases rapidly over time.
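
    The tail-fitting machinery (a mean excess plot to locate the threshold, then a GPD fit to the exceedances) can be sketched with scipy; the sample below is synthetic and merely stands in for the CATDAT loss values:

      import numpy as np
      from scipy.stats import genpareto

      def mean_excess(losses, thresholds):
          """Mean excess e(u) = E[X - u | X > u]; an approximately linear
          stretch of e(u) suggests where GPD tail behaviour begins."""
          x = np.asarray(losses, float)
          return np.array([x[x > u].mean() - u for u in thresholds])

      def fit_tail(losses, u):
          """Fit a Generalized Pareto Distribution to exceedances over u."""
          x = np.asarray(losses, float)
          exceed = x[x > u] - u
          shape, _, scale = genpareto.fit(exceed, floc=0.0)
          return shape, scale

      rng = np.random.default_rng(0)
      sample = rng.pareto(1.7, 2000) * 1e9   # synthetic heavy-tailed losses
      print(mean_excess(sample, [2e9, 5e9, 10e9]))
      print(fit_tail(sample, u=10e9))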

  27. Origin of Human Losses due to the Emilia Romagna, Italy, M5.9 Earthquake of 20 May 2012 and their Estimate in Real Time

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2012-12-01

    Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues concerning the influence of error sources. If final observations and real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake in the Emilia Romagna region on May 20th, the real-time epicenter estimates of the GFZ and the USGS differed from the final location from the INGV by 6 and 9 km, respectively. Fatalities estimated within an hour of the earthquake by the loss estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the final epicenter released by INGV had been used. These four numbers are all small and do not differ statistically. Thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of INGV reported intensities of I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that the calculated values in the 17 instances with significant differences were too high on average by one intensity unit. By assuming higher-than-average attenuation, within the standard bounds for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher-than-average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to the 7 reported. Thus, adjusting the attenuation in this case decreased the discrepancy between estimated and reported deaths by approximately a factor of two. The source of the fatalities is

  28. Pan-European Seismic Risk Assessment: A proof of concept using the Earthquake Loss Estimation Routine (ELER)

    NASA Astrophysics Data System (ADS)

    Corbane, Christina; Hancilar, Ufuk; Silva, Vitor; Ehrlich, Daniele; De Groeve, Tom

    2016-04-01

    One of the key objectives of the new EU civil protection mechanism is an enhanced understanding of the risks the EU is facing. Developing a European perspective may create significant opportunities to successfully combine resources for the common objective of preventing and mitigating shared risks. Risk assessments and mapping represent the first step in these preventive efforts. The EU is facing an increasing number of natural disasters. Among them, earthquakes are the second deadliest after extreme temperatures. A better shared understanding of where seismic risk lies in the EU is useful to identify which regions are most at risk and where more detailed seismic risk assessments are needed. In that scope, seismic risk assessment models at a pan-European level have great potential for providing an overview of the expected economic and human losses using a homogeneous quantitative approach and harmonized datasets. This study strives to demonstrate the feasibility of performing a probabilistic seismic risk assessment at a pan-European level with an open access methodology, using open datasets available across the EU. It also aims to highlight the challenges, the needs in terms of datasets, and the information gaps for a consistent seismic risk assessment at the pan-European level. The study constitutes a "proof of concept" that can complement the information provided by Member States in their National Risk Assessments. Its main contribution lies in pooling open-access data from different sources in a homogeneous format, which could serve as baseline data for performing more in-depth risk assessments in Europe.

  29. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.

  30. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving the speed and accuracy of earthquake disaster loss estimation is one of the key factors in effective earthquake response and rescue. The presentation of exposure data using a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' gridded exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. The method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake losses associated with different seismic intensities and store them in a 30'' × 30'' grid format, in several stages: determining the earthquake loss calculation factors, gridding damage probability matrices, calculating building damage and calculating human losses. In the co-earthquake phase, losses are then estimated in two stages: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field, and then using the seismic intensity field to extract loss statistics from the pre-calculated estimation data. Thus, the final loss estimates are obtained. The method is validated against four actual earthquakes that occurred in China. It not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which is effective in aiding earthquake emergency response and rescue. Additionally, the pre-calculated earthquake loss estimation data for China can serve for disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
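
    A stripped-down sketch of the two-phase idea (building losses only; the paper's damage probability matrices and casualty stages are omitted, and all grid values are invented):

      import numpy as np

      INTENSITIES = [6, 7, 8, 9, 10]

      def precompute_cell_losses(bldg_value_grid, damage_ratio):
          """Phase 1 (pre-earthquake): loss per grid cell for every
          candidate intensity, from a mean damage ratio per intensity."""
          return {i: bldg_value_grid * damage_ratio[i] for i in INTENSITIES}

      def co_seismic_loss(cell_losses, intensity_field):
          """Phase 2 (co-earthquake): sum the pre-computed losses over
          the cells at each intensity of the theoretical isoseismal field."""
          return sum(cell_losses[i][intensity_field == i].sum()
                     for i in INTENSITIES)

      value = np.full((100, 100), 1e6)       # building value per cell
      ratio = {6: 0.002, 7: 0.01, 8: 0.05, 9: 0.15, 10: 0.35}
      table = precompute_cell_losses(value, ratio)

      field = np.full((100, 100), 6)         # toy isoseismal field
      field[40:60, 40:60] = 8                # stronger shaking near source
      print(f"estimated loss: {co_seismic_loss(table, field) / 1e6:.1f} M")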

  31. ELER software - a new tool for urban earthquake loss assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.

    2010-12-01

    Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is also equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. SELENA v4.0 (Molina et al., 2008) and

  13. Extreme Earthquake Risk Estimation by Hybrid Modeling

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.

    2012-12-01

    The estimation of the hazard and the economic consequences, i.e., the risk, associated with the occurrence of extreme magnitude earthquakes in the neighborhood of urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan, event, represents a complex challenge, as it involves the propagation of seismic waves in large volumes of the earth crust, from unusually large seismic source ruptures up to the infrastructure location. The large number of casualties and huge economic losses observed for those earthquakes, some of which have a frequency of occurrence of hundreds or thousands of years, call for the development of new paradigms and methodologies in order to generate better estimates, both of the seismic hazard and of its consequences, and if possible, to estimate the probability distributions of their ground intensities and of their economic impacts (direct and indirect losses), in order to implement technological and economic policies to mitigate and reduce, as much as possible, those consequences. Here we propose a hybrid modeling approach which uses 3D seismic wave propagation (3DWP) and neural network (NN) modeling in order to estimate the seismic risk of extreme earthquakes. The 3DWP modeling is achieved by using a 3D finite difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK, combined with empirical Green function (EGF) techniques and NN algorithms. In particular, the 3DWP is used to generate broadband samples of the 3D wave propagation of plausible extreme-earthquake scenarios corresponding to synthetic seismic sources, and to enlarge those samples by using feed-forward NN. We present the results of the validation of the proposed hybrid modeling for Mw 8 subduction events, and show examples of its application for the estimation of the hazard and the economic consequences, for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican

  14. The OPAL Project: Open source Procedure for Assessment of Loss using Global Earthquake Modelling software

    NASA Astrophysics Data System (ADS)

    Daniell, James

    2010-05-01

    This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure has been developed to provide a framework for optimisation of a Global Earthquake Modelling process through: 1) Overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost and technology); 2) Preliminary research, acquisition and familiarisation with all available ELE software packages; 3) Assessment of these 30+ software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4) Loss analysis for a deterministic earthquake (Mw7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment), a capacity-spectrum-based method, HAZUS (HAZards United States), and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach) software, which was adapted in order to compare the different processes needed for the production of damage, economic and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied worldwide, given exposure data. Keywords: OPAL, displacement-based, DBELA, earthquake loss estimation, earthquake loss assessment, open source, HAZUS

  15. Losses from the Northridge earthquake: disruption to high-technology industries in the Los Angeles Basin.

    PubMed

    Suarez-Villa, L; Walrod, W

    1999-03-01

    This study explores the relationship between industrial location geography, metropolitan patterns and earthquake disasters. Production losses from the 1994 Northridge earthquake to the Los Angeles Basin's most important high-technology industrial sector are evaluated in the context of that area's polycentric metropolitan form. Locations for each one of the Los Angeles Basin's 1,126 advanced electronics manufacturing establishments were identified and mapped, providing an indication of the patterns and clusters of the industry. An extensive survey of those establishments gathered information on disruptions from the Northridge earthquake. Production losses were then estimated, based on the sampled plants' lost workdays and the earthquake's distance-decay effects. A conservative estimate of total production losses to establishments in seven four-digit SIC advanced electronics industrial groups placed their value at US$220.4 million. Based on this estimate of losses, it is concluded that the Northridge earthquake's economic losses were much higher than initially anticipated. PMID:10204286

  16. Ten Years of Real-Time Earthquake Loss Alerts

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2013-12-01

    In order of priority, the most important parameters of an earthquake disaster are: the number of fatalities, the number of injured, the mean damage as a function of settlement, and the expected intensity of shaking at critical facilities. The requirements to calculate these parameters in real time are: 1) availability of reliable earthquake source parameters within minutes; 2) capability of calculating expected intensities of strong ground shaking; 3) data sets on population distribution and the condition of the building stock as a function of settlement; 4) data on the locations of critical facilities; 5) verified methods of calculating damage and losses; and 6) personnel available on a 24/7 basis to perform and review these calculations. There are three services available that distribute information about the likely consequences of earthquakes within about half an hour of the event. Two of these calculate losses; one gives only general information. Although much progress has been made during the last ten years in improving the data sets and the calculation methods, much remains to be done. The data sets are only first-order approximations and the methods bear refinement. Nevertheless, the quantitative loss estimates after damaging earthquakes in real time are generally correct in the sense that they allow distinguishing disastrous from inconsequential events.

  17. Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software (OPAL)

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.

    2011-07-01

    This paper provides a comparison between Earthquake Loss Estimation (ELE) software packages and their application using an "Open Source Procedure for Assessment of Loss using Global Earthquake Modelling software" (OPAL). The OPAL procedure was created to provide a framework for optimisation of a Global Earthquake Modelling process through: 1. overview of current and new components of earthquake loss assessment (vulnerability, hazard, exposure, specific cost, and technology); 2. preliminary research, acquisition, and familiarisation with available ELE software packages; 3. assessment of these software packages in order to identify the advantages and disadvantages of the ELE methods used; and 4. loss analysis for a deterministic earthquake (Mw = 7.2) for the Zeytinburnu district, Istanbul, Turkey, by applying 3 software packages (2 new and 1 existing): a modified displacement-based method based on DBELA (Displacement Based Earthquake Loss Assessment, Crowley et al., 2006), a capacity-spectrum-based method, HAZUS (HAZards United States, FEMA, USA, 2003), and the Norwegian HAZUS-based SELENA (SEismic Loss EstimatioN using a logic tree Approach, Lindholm et al., 2007) software, which was adapted in order to compare the different processes needed for the production of damage, economic, and social loss estimates. The modified DBELA procedure was found to be more computationally expensive, yet had less variability, indicating the need for multi-tier approaches to global earthquake loss estimation. Similar systems planning and ELE software produced through the OPAL procedure can be applied worldwide, given exposure data.

  18. Quantitative assessment of earthquake damages: approximate economic loss

    NASA Astrophysics Data System (ADS)

    Badal, J.; Vazquez-Prada, M.; Gonzalez, A.; Samardzhieva, E.

    2003-04-01

    Prognostic estimates of the approximate direct economic cost of earthquake damage are made following a suitable methodology of wide-ranging applicability. For an advance evaluation of the economic cost derived from the damage, we take into account the local social wealth as a function of the gross domestic product of the country. We use a GIS-based tool, taking advantage of the possibilities of such a system for the treatment of spatially distributed data. The work is performed on the basis of the relationship between macroseismic intensity and earthquake economic loss as a percentage of the wealth. We have implemented interactive software that permits the information to be displayed efficiently on screen and the performance of our method to be evaluated rapidly and visually. Such an approach to earthquake casualties and damage is carried out for sites near important urban concentrations located in a seismically active zone of Spain, thus contributing to easier decision-making in contemporary earthquake engineering, emergency preparedness planning, and seismic risk prevention.
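
    A hedged sketch of the core relation described above: local wealth is approximated from population and per-capita GDP, and the direct loss is a percentage of that wealth that grows with macroseismic intensity. The percentage table and the wealth factor are invented placeholders, not the calibrated relation of the paper.

        # Illustrative intensity-to-loss mapping (placeholder values).
        LOSS_PERCENT = {7: 0.02, 8: 0.08, 9: 0.25, 10: 0.60}   # fraction of wealth lost

        def direct_economic_loss(population, gdp_per_capita, intensity, wealth_factor=3.0):
            """Approximate direct loss for one site.

            wealth_factor scales annual GDP per capita to accumulated local
            wealth; it is an assumption made here for illustration.
            """
            wealth = population * gdp_per_capita * wealth_factor
            return wealth * LOSS_PERCENT.get(intensity, 0.0)

        # Example: a town of 50,000 people shaken at intensity VIII
        print(direct_economic_loss(50_000, 25_000, 8))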

  19. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J., Jr.; Petersen, M.D.

    2004-01-01

    The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes with similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
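
    The modeling chain described here (fit gamma distributions to ZIP-code loss data, regress their parameters on ground motion, then evaluate conditional loss probabilities) can be sketched as follows on synthetic data; the fitting choices are illustrative and differ in detail from the paper.

        # Gamma-distribution loss model on synthetic data (illustrative only).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        pga = np.linspace(0.1, 0.8, 8)                       # one PGA value per ZIP code (g)
        samples = [stats.gamma.rvs(1.2, scale=0.05 * g / 0.3, size=500, random_state=rng)
                   for g in pga]                             # synthetic per-policy loss ratios

        fits = [stats.gamma.fit(s, floc=0) for s in samples] # (shape, loc, scale) per ZIP code
        shape = np.mean([f[0] for f in fits])                # pooled shape parameter
        b, a = np.polyfit(pga, np.log([f[2] for f in fits]), 1)   # regress log(scale) on PGA

        def prob_loss_exceeds(x, g):
            """Conditional probability that the loss ratio exceeds x given PGA g."""
            return stats.gamma.sf(x, shape, scale=np.exp(a + b * g))

        print(prob_loss_exceeds(0.10, 0.5))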

  20. Using Socioeconomic Data to Calibrate Loss Estimates

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2013-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
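
    A minimal sketch of a loss regression of this kind, assuming a log-linear dependence of loss on ground motion and socioeconomic predictors; the predictor set, the synthetic calibration data, and all function names are illustrative, not the authors' calibrated model.

        # Log-linear rapid loss regression on synthetic events (illustrative only).
        import numpy as np

        def fit_loss_model(pga, gdp_pc, exposed, loss):
            """Least-squares fit of log(loss) on log predictors."""
            X = np.column_stack([np.ones_like(pga), np.log(pga),
                                 np.log(gdp_pc), np.log(exposed)])
            beta, *_ = np.linalg.lstsq(X, np.log(loss), rcond=None)
            return beta

        def predict_loss(beta, pga, gdp_pc, exposed):
            x = np.array([1.0, np.log(pga), np.log(gdp_pc), np.log(exposed)])
            return float(np.exp(x @ beta))

        # Synthetic calibration events: PGA (g), GDP per capita, exposed population, loss
        pga     = np.array([0.2, 0.3, 0.5, 0.7])
        gdp_pc  = np.array([5e3, 2e4, 1e4, 4e4])
        exposed = np.array([1e6, 5e5, 2e6, 8e5])
        loss    = np.array([2e8, 9e8, 4e9, 2e10])
        beta = fit_loss_model(pga, gdp_pc, exposed, loss)
        print(predict_loss(beta, 0.4, 1.5e4, 1e6))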

  1. Real-Time Loss Estimation Using Hazus and Shakemap Data

    NASA Astrophysics Data System (ADS)

    Kircher, C. A.

    2003-12-01

    This paper describes real-time damage and loss estimation using the HAZUS earthquake loss estimation technology and ShakeMap data, and provides an example comparison of predicted and observed losses for the 1994 Northridge earthquake. HAZUS [NIBS, 1999; Kircher et al., 1997a, 1997b; Whitman et al., 1997] is the standardized earthquake loss estimation methodology developed by the National Institute of Building Sciences (NIBS) for the United States Federal Emergency Management Agency (FEMA). HAZUS was originally developed to assist emergency response planners and to "provide local, state and regional officials with the tools necessary to plan and stimulate efforts to reduce risk from earthquakes and to prepare for emergency response and recovery from an earthquake." HAZUS can also be used to make regional estimates of damage and loss following an earthquake using ground-motion (ShakeMap) data provided by the United States Geological Survey (USGS) as part of TriNet in Southern California [Wald et al., 1999] or by other regional strong-motion instrumentation networks.

  2. Development of fragility functions to estimate homelessness after an earthquake

    NASA Astrophysics Data System (ADS)

    Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    used to estimate homelessness as a function of information that is readily available immediately after an earthquake. These fragility functions could be used by relief agencies and governments to provide an initial assessment of the need for allocation of emergency shelter immediately after an earthquake. Daniell JE (2014) The development of socio-economic fragility functions for use in worldwide rapid earthquake loss estimation procedures, Ph.D. Thesis (in publishing), Karlsruhe, Germany. Daniell, J. E., Khazai, B., Wenzel, F., & Vervaeck, A. (2011). The CATDAT damaging earthquakes database. Natural Hazards and Earth System Science, 11(8), 2235-2251. doi:10.5194/nhess-11-2235-2011 Daniell, J.E., Wenzel, F. and Vervaeck, A. (2012). "The Normalisation of socio-economic losses from historic worldwide earthquakes from 1900 to 2012", 15th WCEE, Lisbon, Portugal, Paper No. 2027. Jaiswal, K., & Wald, D. (2010). An Empirical Model for Global Earthquake Fatality Estimation. Earthquake Spectra, 26(4), 1017-1037. doi:10.1193/1.3480331
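
    A hedged sketch of a socio-economic fragility function in the sense used above: the fraction of the exposed population left homeless expressed as a lognormal CDF of shaking intensity, one of the inputs readily available soon after an event. The functional form and the parameter values are placeholder assumptions, not the fitted functions of Daniell (2014).

        # Placeholder homelessness fragility function (illustrative only).
        from math import log, sqrt, erf

        def homeless_fraction(intensity, theta=8.5, beta=0.4):
            """Lognormal CDF: fraction left homeless at a given macroseismic intensity."""
            return 0.5 * (1.0 + erf(log(intensity / theta) / (beta * sqrt(2.0))))

        # Rough shelter-demand estimate for 100,000 people exposed at intensity IX
        print(round(homeless_fraction(9.0) * 100_000))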

  3. Creating a Global Building Inventory for Earthquake Loss Assessment and Risk Management

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.

    2008-01-01

    Earthquakes have claimed approximately 8 million lives over the last 2,000 years (Dunbar, Lockridge and others, 1992), and fatality rates are likely to continue to rise with increased population and urbanization, especially in developing countries. More than 75% of earthquake-related human casualties are caused by the collapse of buildings or structures (Coburn and Spence, 2002). It is disheartening to note that large fractions of the world's population still reside in informal, poorly constructed and non-engineered dwellings that are highly susceptible to collapse during earthquakes. Moreover, with increasing urbanization, half of the world's population now lives in urban areas (United Nations, 2001), and half of these urban centers are located in earthquake-prone regions (Bilham, 2004). The poor performance of most building stocks during earthquakes remains a primary societal concern. However, despite this dark history and bleaker future trends, there are no comprehensive global building inventories of sufficient quality and coverage to adequately address and characterize future earthquake losses. Such an inventory is vital both for earthquake loss mitigation and for earthquake disaster response purposes. While the latter purpose is the motivation of this work, we hope that the global building inventory database described herein will find widespread use for other mitigation efforts as well. For a real-time earthquake impact alert system, such as the U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) (Wald, Earle and others, 2006), we seek to rapidly evaluate potential casualties associated with earthquake ground shaking for any region of the world. The casualty estimation is based primarily on (1) rapid estimation of the ground shaking hazard, (2) aggregating the population exposure within different building types, and (3) estimating the casualties from the collapse of vulnerable buildings. Thus, the

  4. Earthquakes trigger the loss of groundwater biodiversity.

    PubMed

    Galassi, Diana M P; Lombardo, Paola; Fiasca, Barbara; Di Cioccio, Alessia; Di Lorenzo, Tiziana; Petitta, Marco; Di Carlo, Piero

    2014-01-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and "ecosystem engineers", we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems. PMID:25182013

  5. Social vulnerability analysis of earthquake risk using HAZUS-MH losses from a M7.8 scenario earthquake on the San Andreas fault

    NASA Astrophysics Data System (ADS)

    Noriega, G. R.; Grant Ludwig, L.

    2010-12-01

    Natural hazards research indicates that earthquake risk is not equitably distributed. Demographic differences are significant in determining the risks people encounter, whether and how they prepare for disasters, and how they fare when disasters occur. In this study, we analyze the distribution of economic and social losses in all 88 cities of Los Angeles County from the 2008 ShakeOut scenario earthquake. The ShakeOut scenario earthquake is a scientifically plausible M 7.8 earthquake on the San Andreas fault that was developed and applied for regional earthquake preparedness planning and risk mitigation from a compilation of collaborative studies and findings by the 2007 Working Group on California Earthquake Probabilities (WGCEP). The scenario involved 1) developing a realistic scenario earthquake using the best available and most recent earthquake research findings, 2) estimating physical damage, 3) estimating the social impact of the earthquake, and 4) identifying changes that will help to prevent a catastrophe due to an earthquake. Estimated losses from this scenario earthquake include 1,800 deaths and $213 billion in economic losses. We use regression analysis to examine the relationship between potential city losses due to the ShakeOut scenario earthquake and the cities' demographic composition. The dependent variables are the economic and social losses calculated with the HAZUS-MH methodology for the scenario earthquake. The independent variables (median household income, tenure, and race/ethnicity) have been identified as indicators of social vulnerability to natural disasters (Mileti, 1999; Cutter, 2006; Cutter & Finch, 2008). Preliminary Ordinary Least Squares (OLS) regression analysis of economic losses on race/ethnicity, income, and tenure indicates that cities with a lower Hispanic population are associated with lower economic losses, and cities with a higher Hispanic population with higher economic losses, though this relationship is

  6. The Enormous Challenge faced by China to Reduce Earthquake Losses

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Mooney, W. D.; Wang, B.

    2014-12-01

    In the past six years, several big earthquakes have occurred in continental China and caused enormous economic losses and casualties. These earthquakes include the 2008 Mw=7.9 Wenchuan, 2010 Mw=6.9 Yushu, 2013 Mw=6.6 Lushan, and 2013 Mw=5.9 Minxian events. On August 4, 2014, an Mw=6.1 earthquake struck Ludian in Yunnan province; although it was a moderate-size earthquake, the casualties reached at least 589 people. In fact, more than 50% of Chinese cities and more than 70% of large- to medium-size cities are located in areas where the seismic intensity may reach VII or higher. Collapsing buildings are the main cause of Chinese earthquake casualties; the secondary causes are induced geological disasters such as landslides and barrier lakes. Several enormous challenges must be overcome to reduce hazards from earthquakes and secondary disasters. (1) Much of the infrastructure in China cannot meet the engineering standard for adequate seismic protection. In particular, some buildings are not strong enough to survive the potential strong ground shaking, and some are not set back a safe distance from active faults; it will be very costly to reinforce or rebuild such buildings. (2) There is a lack of rigorous legislation on earthquake disaster protection. (3) It appears that both the government and citizens rely too much on earthquake prediction to avoid earthquake casualties. (4) Geologic conditions are very complicated and in need of additional study, especially in southwest China, where detailed surveys of potential geologic hazards, such as landslides, are still lacking. Although we still cannot predict earthquakes, it is possible to greatly reduce earthquake hazards. For example, some Chinese scientists have begun studies aimed at identifying active faults under large cities and proposing higher building standards. It will be very difficult work to improve the quality and scope of earthquake disaster protection dramatically in

  7. Rapid earthquake hazard and loss assessment for Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru

    2010-10-01

    The near-real-time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 Project entitled "Network of Research Infrastructures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real-time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships.

  8. Future Earth: Reducing Loss By Automating Response to Earthquake Shaking

    NASA Astrophysics Data System (ADS)

    Allen, R. M.

    2014-12-01

    Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet the same interconnected infrastructure also allows us to rapidly detect earthquakes as they begin and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Angelenos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts but also to collect data and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake detectors. As our development of the technology continues, we can anticipate ever more automated responses to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US, illustrate how smartphones can contribute to the system, and review applications of the information to reduce future losses.

  9. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes; and (2) the influence of time of day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time of day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global

  10. Estimation of earthquake risk curves of physical building damage

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias; Janouschkowetz, Silke; Fischer, Thomas; Simon, Christian

    2014-05-01

    In this study, a new approach to quantifying seismic risk is presented. The earthquake risk curves for the number of buildings with a defined physical damage state are estimated for South Africa. We define the physical damage states according to the current European macroseismic intensity scale (EMS-98). The advantage of this kind of risk curve is that its plausibility can be checked more easily than that of other types: the earthquake risk curve for physical building damage can be compared with historical damage and the corresponding empirical return periods. The number of damaged buildings from historical events is generally explored and documented in more detail than the corresponding monetary losses, which are also influenced by changing economic conditions such as inflation and price hikes. Furthermore, the monetary risk curve can be derived from the developed risk curve of physical building damage. The earthquake risk curve can also be used for the validation of underlying sub-models such as the hazard and vulnerability modules.
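
    The derivation step mentioned above (obtaining a monetary risk curve from the physical one) amounts to pricing each EMS-98 damage grade; a minimal sketch, with invented costs and counts:

        # Pricing a physical-damage risk curve into a monetary one (illustrative only).
        MEAN_COST = {1: 2e3, 2: 1e4, 3: 4e4, 4: 1.2e5, 5: 2.0e5}   # cost per building, grade 1-5

        def monetary_risk_curve(physical_curve):
            """physical_curve: list of (annual exceedance frequency, {grade: n_buildings})."""
            return [(freq, sum(MEAN_COST[g] * n for g, n in counts.items()))
                    for freq, counts in physical_curve]

        curve = monetary_risk_curve([(0.01,  {1: 500,  2: 200,  3: 60,  4: 10,  5: 2}),
                                     (0.002, {1: 3000, 2: 1500, 3: 500, 4: 120, 5: 30})])
        print(curve)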

  11. Application of the loss estimation tool QLARM in Algeria

    NASA Astrophysics Data System (ADS)

    Rosset, P.; Trendafiloski, G.; Yelles, K.; Semmane, F.; Wyss, M.

    2009-04-01

    During the last six years, WAPMERR has used Quakeloss for real-time loss estimation for more than 440 earthquakes worldwide. Loss reports, posted with an average delay of 30 minutes, include a map showing the average degree of damage in settlements near the epicenter, the total number of fatalities, the total number of injured, and a detailed list of casualties and damage rates in these settlements. After the M6.7 Boumerdes earthquake in 2003, we reported 1690-3660 fatalities; the official death toll was around 2270. Since the El Asnam earthquake, seismic events in Algeria have killed about 6,000 people, injured more than 20,000, and left more than 300,000 homeless. On average, one earthquake with the potential to kill people (M>5.4) happens every three years in Algeria. In the framework of a collaborative project between WAPMERR and CRAAG, we propose to calibrate our new loss estimation tool QLARM (qlarm.ethz.ch) and estimate human losses for future likely earthquakes in Algeria. The parameters needed for this calculation are the following: (1) a ground motion relation and soil amplification factors; (2) the distribution of the building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98), as given in the PAGER database; and (3) population by settlement. Considering the resolution of the available data, we construct 1) point city models for cases where only summary data for the city are available and 2) discrete city models when data regarding city districts are available. Damage and losses are calculated using: (a) vulnerability models pertinent to EMS-98 vulnerability classes, previously validated against the existing ones in Algeria (Tipaza and Chlef); (b) building collapse models pertinent to Algeria as given in the World Housing Encyclopedia; and (c) casualty matrices pertinent to EMS-98 vulnerability classes assembled from HAZUS casualty rates. As a first trial, we simulated the 2003 Boumerdes earthquake to check the validity of the proposed

  12. Modelling the Epistemic Uncertainty in the Vulnerability Assessment Component of an Earthquake Loss Model

    NASA Astrophysics Data System (ADS)

    Crowley, H.; Modica, A.

    2009-04-01

    Loss estimates have been shown in various studies to be highly sensitive to the methodology employed, the seismicity and ground-motion models, the vulnerability functions, and assumed replacement costs (e.g. Crowley et al., 2005; Molina and Lindholm, 2005; Grossi, 2000). It is clear that future loss models should explicitly account for these epistemic uncertainties. Indeed, a cause of frequent concern in the insurance and reinsurance industries is precisely the fact that for certain regions and perils, available commercial catastrophe models often yield significantly different loss estimates. Of equal relevance to many users is the fact that updates of the models sometimes lead to very significant changes in the losses compared to the previous version of the software. In order to model the epistemic uncertainties that are inherent in loss models, a number of different approaches for the hazard, vulnerability, exposure and loss components should be clearly and transparently applied, with the shortcomings and benefits of each method clearly exposed by the developers, such that the end-users can begin to compare the results and the uncertainty in these results from different models. This paper looks at an application of a logic-tree-type methodology to model the epistemic uncertainty in the vulnerability component of a loss model for Tunisia. Unlike other countries which have been subjected to damaging earthquakes, there has not been a significant effort to undertake vulnerability studies for the building stock in Tunisia. Hence, when presented with the need to produce a loss model for a country like Tunisia, a number of different approaches can and should be applied to model the vulnerability. These include empirical procedures which utilise observed damage data, and mechanics-based methods where both the structural characteristics and response of the buildings are analytically modelled. Some preliminary applications of the methodology are presented and discussed.
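
    A minimal sketch of the logic-tree treatment of vulnerability uncertainty discussed here: alternative vulnerability models enter as weighted branches, and the result is reported with its per-branch spread rather than as a single number. The weights and the placeholder damage laws are illustrative assumptions.

        # Logic-tree combination of vulnerability branches (illustrative only).
        def logic_tree_loss(branches, intensity):
            """branches: list of (weight, vulnerability_fn); weights sum to 1.

            Returns the weighted mean loss ratio and the per-branch values,
            so the spread across branches (epistemic uncertainty) stays visible.
            """
            results = [(w, fn(intensity)) for w, fn in branches]
            mean = sum(w * r for w, r in results)
            return mean, results

        empirical  = lambda i: 0.020 * (i - 5) ** 2      # placeholder damage-vs-intensity laws
        analytical = lambda i: 0.015 * (i - 5) ** 2.2
        print(logic_tree_loss([(0.4, empirical), (0.6, analytical)], 8.0))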

  13. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  14. Proceedings: Earthquake Ground-Motion Estimation in Eastern North America

    SciTech Connect

    1988-08-01

    Experts in seismology and earthquake engineering convened to evaluate state-of-the-art methods for estimating ground motion from earthquakes in eastern North America. Workshop results presented here will help focus research priorities in ground-motion studies to provide more realistic design standards for critical facilities.

  15. Precise estimation of repeating earthquake moment: Example from Parkfield, California

    USGS Publications Warehouse

    Rubinstein, J.L.; Ellsworth, W.L.

    2010-01-01

    We offer a new method for estimating the relative size of repeating earthquakes using the singular value decomposition (SVD). This method takes advantage of the highly coherent waveforms of repeating earthquakes and arrives at far more precise and accurate descriptions of earthquake size than standard catalog techniques allow. We demonstrate that uncertainty in relative moment estimates is reduced from ±75% for standard coda-duration techniques employed by the network to an uncertainty of ±6.6% when the SVD method is used. This implies that a single-station estimate of moment using the SVD method has far less uncertainty than the whole-network estimates of moment based on coda duration. The SVD method offers a significant improvement in our ability to describe the size of repeating earthquakes and thus an opportunity to better understand how they accommodate slip as a function of time.
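
    The idea can be sketched as follows: stack the aligned, highly coherent waveforms of a repeating-earthquake family into a matrix; the first singular vector captures the common waveform, and each event's coefficient on it scales with relative moment. This synthetic example is illustrative, not the authors' processing.

        # SVD-based relative moment estimation on synthetic waveforms (illustrative only).
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 200)
        wavelet = np.sin(2 * np.pi * 5 * t) * np.exp(-5 * t)   # shared source wavelet
        true_amp = np.array([1.0, 1.5, 0.8, 1.2])              # true relative sizes
        X = np.outer(true_amp, wavelet) + 0.01 * rng.standard_normal((4, 200))

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        rel_moment = U[:, 0] * s[0]        # projection of each event on the first mode
        rel_moment /= rel_moment[0]        # normalize relative to the first event
        print(rel_moment)                  # ~ [1.0, 1.5, 0.8, 1.2]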

  16. A Model For Rapid Estimation of Economic Loss

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2012-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  17. Seismic Risk Assessment and Loss Estimation for Tbilisi City

    NASA Astrophysics Data System (ADS)

    Tsereteli, Nino; Alania, Victor; Varazanashvili, Otar; Gugeshashvili, Tengiz; Arabidze, Vakhtang; Arevadze, Nika; Tsereteli, Emili; Gaphrindashvili, Giorgi; Gventcadze, Alexander; Goguadze, Nino; Vephkhvadze, Sophio

    2013-04-01

    The proper assessment of seismic risk is of crucial importance for protecting society and for sustainable urban economic development, as it is an essential part of seismic hazard reduction. Estimating seismic risk and losses is a complicated task: there is always a deficiency of knowledge about the real seismic hazard, local site effects, the inventory of elements at risk, and infrastructure vulnerability, especially in developing countries. Lately, great efforts have been made in the framework of the EMME (Earthquake Model for the Middle East Region) project, whose work packages WP1-WP4 filled gaps related to seismic hazard assessment and vulnerability analysis. Finally, in the framework of work package WP5, "City Scenario", additional work in this direction was carried out, including detailed investigation of local site conditions and of the 3D geometry of the active fault beneath Tbilisi. For estimating economic losses, an algorithm was prepared that takes the compiled inventory into account. The long-term usage of a building is very complex; it relates to the building's reliability and durability, which are captured by the concept of depreciation. The depreciation of an entire building is calculated by summing the products of the individual construction units' depreciation rates and the corresponding value of those units within the building. This method of calculation is based on the assumption that depreciation is proportional to the construction's useful life. We used this methodology to create a matrix that provides a way to evaluate the depreciation rates of buildings of different types and construction periods and to determine their corresponding value. Finally, losses were estimated for shaking with 10%, 5%, and 2% exceedance probability in 50 years, as well as for a scenario earthquake with the maximum possible magnitude.
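
    The depreciation bookkeeping described above can be sketched as a value-weighted sum over construction units, with each unit's depreciation proportional to its elapsed useful life; the unit names, value shares, and lifetimes below are invented placeholders, not the matrix of the study.

        # Value-weighted building depreciation (illustrative only).
        UNITS = {                 # unit: (share of building value, useful life in years)
            "structure": (0.55, 100),
            "roof":      (0.10, 40),
            "finishes":  (0.20, 30),
            "services":  (0.15, 25),
        }

        def building_depreciation(age):
            """Value-weighted depreciation rate of the whole building at a given age."""
            return sum(share * min(age / life, 1.0) for share, life in UNITS.values())

        print(f"{building_depreciation(35):.2f}")   # e.g. a 35-year-old building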

  18. Earthquake Loss Assessment for Post-2000 Buildings in Istanbul

    NASA Astrophysics Data System (ADS)

    Hancilar, Ufuk; Cakti, Eser; Sesetyan, Karin

    2016-04-01

    The current building inventory of Istanbul, compiled by street surveys in 2008, consists of more than 1.2 million buildings. The inventory provides information on the lateral load-carrying system, the number of floors, and the construction year; almost 200,000 buildings are reinforced concrete frame structures built after 2000. These buildings are assumed to be designed based on the provisions of the Turkish Earthquake Resistant Design Code (1998) and are tagged as high-code buildings. However, there are no empirical or analytical fragility functions associated with these types of buildings. In this study we perform a damage and economic loss assessment exercise focusing on the post-2000 building stock of Istanbul. Three M7.4 scenario earthquakes near the city represent the input ground motion. As for the fragility functions, those provided by Hancilar and Cakti (2015) for code-complying reinforced concrete frames are used. The results are compared with the number of damaged buildings given in the loss assessment studies available in the literature, wherein expert-judgment-based fragilities for post-2000 buildings were used.

  1. Building losses assessment for the Lushan earthquake utilizing multisource remote sensing data and GIS

    NASA Astrophysics Data System (ADS)

    Nie, Juan; Yang, Siquan; Fan, Yida; Wen, Qi; Xu, Feng; Li, Lingling

    2015-12-01

    On 20 April 2013, a catastrophic earthquake of magnitude 7.0 struck Lushan County, northwestern Sichuan Province, China; in China this event is named the Lushan earthquake. The Lushan earthquake damaged many buildings, and information on building losses is one basis for emergency relief and reconstruction, so the building losses of the Lushan earthquake must be assessed. Remote sensing data and geographic information systems (GIS) can be employed to assess such losses. This paper reports building loss assessment results for the Lushan earthquake disaster obtained using multisource remote sensing data and GIS. The assessment results indicated that 3.2% of buildings in the affected areas completely collapsed, while 12% and 12.5% of buildings were heavily and slightly damaged, respectively. The completely collapsed, heavily damaged, and slightly damaged buildings were mainly located in Danling, Hongya, Lushan, Mingshan, Qionglai, Tianquan, and Yingjing Counties.

  2. Estimates of radiated energy from global shallow subduction zone earthquakes

    NASA Astrophysics Data System (ADS)

    Bilek, S. L.; Lay, T.; Ruff, L.

    2002-12-01

    Previous studies used seismic energy-to-moment ratios for datasets of large earthquakes as a useful discriminant for tsunami earthquakes. We extend this idea of a "slowness" discriminant to a large dataset of subduction zone underthrusting earthquakes. We estimate the energy release of these shallow earthquakes using a large dataset of source time functions, containing source time functions for 418 shallow (< 70 km depth) earthquakes ranging from Mw 5.5 to 8.0 from 14 circum-Pacific subduction zones. Also included are tsunami earthquakes for which source time functions are available. We calculate energy using two methods: substitution of a simplified triangle and integration of the original source time function. In the first method, we use a triangle substitution of peak moment and duration to find a minimum estimate of energy. The other method incorporates more of the source time function information and can be influenced by source time function complexity. We examine patterns in source time function complexity with respect to the energy estimates. For comparison with other earthquake parameters, it is useful to remove the effect of seismic moment from the energy estimates. We use the seismic energy-to-moment ratio (E/Mo) to highlight variations with depth, moment, and subduction zone. There is significant scatter in this ratio using both methods of energy calculation. We observe a slight increase in E/Mo with increasing Mw. There is not much variation in E/Mo with depth in the entire dataset. However, a slight increase in E/Mo with depth is apparent in a few subduction zones such as Alaska, Central America, and Peru. An average E/Mo of 5×10^-6 roughly characterizes this shallow earthquake dataset, although with a factor of 10 scatter. This value is within about a factor of 2 of the E/Mo ratios determined by Choy and Boatwright (1995). Tsunami earthquakes suggest an average E/Mo of 2×10^-7, significantly lower than the average for the shallow

  3. Application of linear statistical models of earthquake magnitude versus fault length in estimating maximum expectable earthquakes

    USGS Publications Warehouse

    Mark, Robert K.

    1977-01-01

    Correlation or linear regression estimates of earthquake magnitude from data on historical magnitude and length of surface rupture should be based upon the correct regression. For example, the regression of magnitude on the logarithm of the length of surface rupture L can be used to estimate magnitude, but the regression of log L on magnitude cannot. Regression estimates are most probable values, and estimates of maximum values require consideration of one-sided confidence limits.
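
    A short numerical illustration of this point, on synthetic data: the regression of M on log L is the one to use when predicting magnitude from rupture length, while inverting the regression of log L on M yields a different (and, for this purpose, incorrect) line because of the scatter.

        # Regression direction matters when scatter is present (illustrative only).
        import numpy as np

        rng = np.random.default_rng(2)
        logL = rng.uniform(0.5, 2.5, 200)                    # log10 rupture length (km)
        M = 5.0 + 1.0 * logL + rng.normal(0, 0.25, 200)      # synthetic magnitudes

        b1, a1 = np.polyfit(logL, M, 1)     # correct for prediction: M on log L
        b2, a2 = np.polyfit(M, logL, 1)     # other direction: log L on M
        print("M on logL :", a1, b1)                  # intercept, slope
        print("inverted  :", -a2 / b2, 1 / b2)        # steeper slope due to scatter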

  4. A Multidisciplinary Approach for Estimation of Seismic Losses: A Case Study in Turkey

    NASA Astrophysics Data System (ADS)

    Askan, A.; Erberik, M.; Un, E.

    2012-12-01

    Estimation of seismic losses, including physical, economic, and social losses as well as casualties, concerns a wide range of authorities, from geophysicists and earthquake engineers to physical and economic planners and insurance companies. Due to the inherent uncertainties involved in each component, a probabilistic framework is required to estimate seismic losses. This study aims to propose an integrated method for predicting the potential seismic loss for a selected urban region. The main components of the proposed loss model are the seismic hazard estimation tool, building vulnerability functions, and human and economic losses as functions of the damage states of buildings. The input data for the risk calculations comprise regional seismicity and building fragility information. The casualty model for a given damage level considers the occupancy type, the population of the building, the occupancy at the time of earthquake occurrence, the number of occupants trapped in the collapse, the injury distribution at collapse, and mortality post-collapse. The economic loss module covers direct economic losses to buildings in terms of replacement, structural repair, and non-structural repair costs, and contents losses. Finally, the proposed loss model combines the input components within a conditional probability approach. The results are expressed in terms of expected loss. We calibrate the method with loss data from the 12 November 1999 Düzce earthquake and then predict losses for another city in Turkey (Bursa) with high seismic hazard.
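
    A minimal sketch of the conditional-probability combination described: the expected loss is the sum, over intensities and damage states, of the hazard probability times the conditional damage probability times the cost of that damage state. All numbers are invented placeholders.

        # Expected loss via total probability over hazard and damage states (illustrative only).
        def expected_loss(hazard, fragility, cost):
            """hazard:    {intensity: annual probability}
            fragility: {intensity: {damage_state: probability}}
            cost:      {damage_state: monetary loss}
            """
            return sum(p_i * sum(p_ds * cost[ds] for ds, p_ds in fragility[i].items())
                       for i, p_i in hazard.items())

        hazard    = {7: 0.02, 8: 0.008, 9: 0.002}
        fragility = {7: {"slight": 0.30, "moderate": 0.10, "collapse": 0.01},
                     8: {"slight": 0.40, "moderate": 0.25, "collapse": 0.05},
                     9: {"slight": 0.30, "moderate": 0.40, "collapse": 0.15}}
        cost      = {"slight": 5e3, "moderate": 4e4, "collapse": 2e5}
        print(expected_loss(hazard, fragility, cost))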

  5. Rapid Ice Mass Loss: Does It Have an Influence on Earthquake Occurrence in Southern Alaska?

    NASA Technical Reports Server (NTRS)

    Sauber, Jeanne M.

    2008-01-01

    The glaciers of southern Alaska are extensive, and many of them have undergone gigatons of ice wastage on time scales on the order of the seismic cycle. Since the ice loss occurs directly above a shallow main thrust zone associated with subduction of the Pacific-Yakutat plate beneath continental Alaska, the region between the Malaspina and Bering Glaciers is an excellent test site for evaluating the importance of recent ice wastage on earthquake faulting potential. We demonstrate the influence of cumulative glacial mass loss following the 1899 Yakataga earthquake (M=8.1) by using a two-dimensional finite element model with a simple representation of ice fluctuations to calculate the incremental stresses and the change in the fault stability margin (FSM) along the main thrust zone (MTZ) and at the surface. Along the MTZ, our results indicate a decrease in FSM between 1899 and the 1979 St. Elias earthquake (M=7.4) of 0.2-1.2 MPa over an 80 km region between the coast and the 1979 aftershock zone; at the surface, the estimated FSM change was larger but more localized to the lower reaches of glacial ablation zones. The ice-induced stresses were large enough, in theory, to promote the occurrence of shallow thrust earthquakes. To empirically test the influence of short-term ice fluctuations on fault stability, we compared the seismic rate from a reference background time period (1988-1992) against other time periods (1993-2006) with variable ice or tectonic change characteristics. We found that the frequency of small tectonic events in the Icy Bay region increased in 2002-2006 relative to the background seismic rate. We hypothesize that this was due to a significant increase in the rate of ice wastage in 2002-2006 rather than to the M=7.9 2002 Denali earthquake, located more than 100 km away.

  6. Probabilistic assessment of decoupling loss-of-coolant accident and earthquake in nuclear power plant design

    SciTech Connect

    Lu, S.C.; Harris, D.O.

    1981-01-01

    This paper describes a research project conducted at Lawrence Livermore National Laboratory to establish a technical basis for reassessing the requirement of combining large loss-of-coolant-accident (LOCA) and earthquake loads in nuclear power plant design. A large LOCA is defined herein as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). A systematic probabilistic approach has been employed to estimate the probability of a large LOCA directly and indirectly induced by earthquakes. The probability of a LOCA directly induced by earthquakes was assessed by a numerical simulation of pipe rupture in a reactor coolant system. The simulation employed a deterministic fracture mechanics model that dictates the fatigue growth of pre-existing cracks in the pipe. The simulation accounts for the stochastic nature of input elements such as the initial crack size distribution, the crack occurrence rate, crack and leak detection probabilities as functions of crack size, plant transient occurrence rates, the seismic hazard, stress histories, and crack growth model parameters. Effects on the final results due to variation and uncertainty of the input elements were assessed by a limited sensitivity study. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10^-12). The probability of a leak was found to be several orders of magnitude greater than that of a complete break.
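
    A heavily simplified Monte Carlo sketch in the spirit of the simulation described: sample initial crack depths, grow them with a Paris-law fatigue model, allow for detection, and count runs in which a crack penetrates the wall. All distributions and constants are invented placeholders chosen so that this toy example yields a small nonzero rate; they are not the LLNL model, whose result is on the order of 10^-12.

        # Toy Monte Carlo of fatigue crack growth to through-wall break (illustrative only).
        import random, math

        def simulate_break(years=40, wall=0.07, C=1e-9, m=3.0, cycles_per_year=50,
                           dK_scale=200.0, p_detect=0.2):
            a = random.lognormvariate(math.log(0.003), 0.5)   # initial crack depth (m)
            for _ in range(years):
                if random.random() < p_detect * (a / wall):   # crack/leak detected, repaired
                    return False
                dK = dK_scale * math.sqrt(a)                  # crude stress-intensity range
                a += C * dK**m * cycles_per_year              # Paris-law growth per year
                if a >= wall:
                    return True                               # through-wall break
            return False

        n = 100_000
        print(sum(simulate_break() for _ in range(n)) / n)    # crude break probability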

  7. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits. [DOI: 10.1193/1.3480331]
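
    A minimal sketch of this fatality model, assuming illustrative values of the two lognormal parameters (theta, beta) rather than the published country-specific ones:

```python
# PAGER-style empirical fatality model: the fatality rate nu(S) is a
# two-parameter lognormal CDF of shaking intensity S, and total deaths
# are the exposure-weighted sum over intensity bins. Theta and beta
# below are illustrative, not the published country parameters.
from math import log, sqrt, erf

def fatality_rate(intensity, theta=14.0, beta=0.17):
    """nu(S) = Phi(ln(S/theta) / beta), a lognormal CDF in intensity."""
    return 0.5 * (1.0 + erf(log(intensity / theta) / (beta * sqrt(2.0))))

def estimate_fatalities(exposure_by_intensity, theta=14.0, beta=0.17):
    """Sum population exposed at each intensity level times the rate there."""
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in exposure_by_intensity.items())

# Hypothetical exposure: people per shaking-intensity bin from a ShakeMap.
exposure = {6: 250_000, 7: 120_000, 8: 40_000, 9: 8_000}
print(f"Estimated fatalities: {estimate_fatalities(exposure):,.0f}")
```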

  8. An empirical evolutionary magnitude estimation for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Da-Yi

    2016-04-01

    For an earthquake early warning (EEW) system, accurately estimating earthquake magnitude during the early nucleation stage of an earthquake is a difficult task because only a few stations have been triggered and the recorded seismic waveforms are short. One feasible way to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the waveform after the P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the whole rupture of the causative fault. Instead of adopting amplitude content in a fixed-length time window, which may underestimate the magnitude of large events, we propose a fast, robust and unsaturated approach to estimate earthquake magnitudes. In this new method, the EEW system initially gives a bottom-bound magnitude in a time window of a few seconds and then updates the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for adopting amplitudes: one is a pure P-wave time window (PTW); the other is a whole-wave time window after the P-wave arrival (WTW). Peak displacement amplitudes in the vertical component were adopted from 1- to 10-s PTW and WTW, respectively. Linear regression analyses were implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude using earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation. In addition, large uncertainties exist in the 1-s time window. Therefore, for magnitude estimation we suggest that the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTW.
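
    The regression-and-inversion step can be sketched as follows; the records are synthetic placeholders rather than the Taiwan dataset, so the fitted coefficients are illustrative only.

```python
# Fit log10(Pd) = a*M + b*log10(R) + c on a set of records, then invert
# for magnitude from observed peak displacement and hypocentral distance.
import numpy as np

# (peak displacement Pd [cm], hypocentral distance R [km], catalog Mw)
records = np.array([
    (0.12, 40.0, 5.8), (0.45, 25.0, 6.3), (0.03, 90.0, 5.6),
    (1.10, 30.0, 6.9), (0.08, 60.0, 5.9), (0.30, 50.0, 6.5),
])
pd_, r_, mw = records.T

# Least-squares fit of log10(Pd) = a*Mw + b*log10(R) + c
A = np.column_stack([mw, np.log10(r_), np.ones_like(mw)])
(a, b, c), *_ = np.linalg.lstsq(A, np.log10(pd_), rcond=None)

def magnitude_from_pd(pd_obs, r_obs):
    """Invert the regression for magnitude; as the window grows and Pd
    grows, the estimate updates, giving the 'bottom-bound' behaviour."""
    return (np.log10(pd_obs) - b * np.log10(r_obs) - c) / a

print(f"M estimate for Pd=0.5 cm at R=35 km: {magnitude_from_pd(0.5, 35.0):.1f}")
```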

  9. Estimating pore fluid pressures during the Youngstown, Ohio earthquakes

    NASA Astrophysics Data System (ADS)

    Hsieh, P. A.

    2014-12-01

    Several months after fluid injection began in December 2010 at the Northstar 1 well in Youngstown, Ohio, low-magnitude earthquakes were detected in the Youngstown area, where no prior earthquakes had been detected. Concerns that the injection might have triggered the earthquakes led to the shutdown of the well in December 2011. Earthquake relocation analysis by Kim (2013, J. Geophys. Res., v. 118, p. 3506-3518) showed that, from March 2011 to January 2012, 12 earthquakes with moment magnitudes of 1.8 to 3.9 occurred at depths of 3.5 to 4 km in the Precambrian basement along a previously unmapped vertical fault. The 2.8-km-deep Northstar 1 well, which penetrated the top 60 m of the basement, appeared to have been drilled into the same fault. The earthquakes occurred at lateral distances of 0 to 1 km from the well. The present study aims to estimate the fluid pressure increase due to injection. The groundwater flow model MODFLOW is used to simulate fluid pressure propagation from the well injection interval into the basement fault and two permeable sandstone layers above the basement. The basement rock away from the fault is assumed impermeable. Reservoir properties (permeability and compressibility) of the fault and sandstone layers are estimated by calibrating the model to match the injection history and wellhead pressure recorded daily during the operational period. Although the available data are not sufficient to uniquely determine reservoir properties, it is possible to determine reasonable ranges. Simulated fluid pressure increases at the locations and times of the earthquakes range from less than 0.01 MPa to about 1 MPa. Pressure measurements in the well after shut-in might enhance the estimation of reservoir properties. Such data could also improve the estimation of the pore fluid pressure increase due to injection.
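
    As a much simpler stand-in for the calibrated MODFLOW model, the radial diffusion of injection pressure can be sketched with the Theis solution; the injection rate, transmissivity and storativity below are assumptions for illustration, not calibrated Youngstown values.

```python
# Theis solution for pore-pressure rise around a constant-rate injection
# well, as a radial analogue of pressure propagation into a conductive
# fault. Parameter values are illustrative placeholders.
import numpy as np
from scipy.special import exp1  # Theis well function W(u) = E1(u)

RHO_W, G = 1000.0, 9.81

def pressure_increase(r, t, Q=2.0e-3, T=1.0e-5, S=1.0e-5):
    """Pore-pressure rise [Pa] at radius r [m] and time t [s] for
    injection rate Q [m^3/s], transmissivity T [m^2/s], storativity S."""
    u = r**2 * S / (4.0 * T * t)
    head = Q / (4.0 * np.pi * T) * exp1(u)   # injection: drawdown with sign flipped
    return RHO_W * G * head

one_year = 365.25 * 86400.0
for r_km in (0.2, 0.5, 1.0):
    dp = pressure_increase(r_km * 1e3, one_year)
    print(f"dP at {r_km:.1f} km after 1 yr: {dp / 1e6:.3f} MPa")
```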

  10. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
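
    The Gutenberg-Richter end of this comparison can be reproduced with the generic Reasenberg-Jones clustering rate; with commonly quoted generic California parameters (illustrative here, not necessarily the exact values used in the paper), the three-day probability comes out near the 0.0009 figure quoted above.

```python
# Reasenberg-Jones clustering calculation: the rate of triggered events
# of magnitude >= M following a magnitude Mf event is
#   lambda(t, M) = 10**(a + b*(Mf - M)) / (t + c)**p,
# and P = 1 - exp(-integral of lambda over the forecast window).
# Defaults are generic-California-like values, used here illustratively.
from math import exp, log

def clustering_probability(mf, m_target, t1_days, t2_days,
                           a=-1.67, b=0.91, c=0.05, p=1.08):
    """P(at least one event >= m_target in [t1, t2] after an mf event)."""
    rate_scale = 10.0 ** (a + b * (mf - m_target))
    if abs(p - 1.0) < 1e-9:
        integral = rate_scale * (log(t2_days + c) - log(t1_days + c))
    else:
        integral = rate_scale * ((t2_days + c)**(1 - p)
                                 - (t1_days + c)**(1 - p)) / (1 - p)
    return 1.0 - exp(-integral)

# Probability of an M >= 7 event within 3 days of an M 4.8 event:
print(f"P(M>=7, 3 days) = {clustering_probability(4.8, 7.0, 0.0, 3.0):.5f}")
```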

  11. An Account of Preliminary Landslide Damage and Losses Resulting from the February 28, 2001, Nisqually, Washington, Earthquake

    USGS Publications Warehouse

    Highland, Lynn M.

    2003-01-01

    The February 28, 2001, Nisqually, Washington, earthquake (Mw = 6.8) damaged an area of the northwestern United States that previously experienced two major historical earthquakes, in 1949 and in 1965. Preliminary estimates of direct monetary losses from damage due to earthquake-induced landslides are approximately $34.3 million. However, this figure does not include costs from damage to the elevated portion of the Alaskan Way Viaduct, a major highway through downtown Seattle, Washington, that will be repaired or rebuilt, depending on the future decision of local and state authorities. There is much debate as to the cause of the damage to this viaduct, with evaluations ranging from earthquake shaking and liquefaction to lateral spreading to a combination of these effects. If the viaduct is included in the costs, the losses increase to more than $500 million (if it is repaired) or more than $1 billion (if it is replaced). A preliminary estimate of losses due to all causes of earthquake damage is approximately $2 billion, which includes temporary repairs to the Alaskan Way Viaduct. These preliminary dollar figures will no doubt increase when plans and decisions regarding the viaduct are completed.

  12. Monitoring road losses for Lushan 7.0 earthquake disaster utilization multisource remote sensing images

    NASA Astrophysics Data System (ADS)

    Huang, He; Yang, Siquan; Li, Suju; He, Haixia; Liu, Ming; Xu, Feng; Lin, Yueguan

    2015-12-01

    Earthquakes are among the major natural disasters in the world. At 8:02 local time on 20 April 2013, a catastrophic earthquake with surface wave magnitude Ms 7.0 occurred in Sichuan province, China. The epicenter was located in the administrative region of Lushan County, and the event was named the Lushan earthquake. The Lushan earthquake caused heavy casualties and property losses in Sichuan province. After the earthquake, various emergency relief supplies had to be transported to the affected areas, and the transportation network is the basis for transporting and allocating them. Thus, the road losses from the Lushan earthquake had to be monitored. This paper reports road-loss monitoring results for the Lushan earthquake disaster derived from multisource remote sensing images. The results indicate that 166 m of national roads, 3,707 m of provincial roads, 3,396 m of county roads, 7,254 m of township roads, and 3,943 m of village roads were damaged during the Lushan earthquake disaster. The damaged roads were mainly located in Lushan County, Baoxing County, Tianquan County, Yucheng County, Mingshan County, and Qionglai County. The results can also be used as a decision-making information source by the disaster management government in China.

  13. Development of a Global Slope Dataset for Estimation of Landslide Occurrence Resulting from Earthquakes

    USGS Publications Warehouse

    Verdin, Kristine L.; Godt, Jonathan W.; Funk, Christopher C.; Pedreros, Diego; Worstell, Bruce; Verdin, James

    2007-01-01

    Landslides resulting from earthquakes can cause widespread loss of life and damage to critical infrastructure. The U.S. Geological Survey (USGS) has developed an alarm system, PAGER (Prompt Assessment of Global Earthquakes for Response), that aims to provide timely information to emergency relief organizations on the impact of earthquakes. Landslides are responsible for many of the damaging effects following large earthquakes in mountainous regions, and thus data defining the topographic relief and slope are critical to the PAGER system. A new global topographic dataset was developed to aid in rapidly estimating landslide potential following large earthquakes. We used the remotely-sensed elevation data collected as part of the Shuttle Radar Topography Mission (SRTM) to generate a slope dataset with nearly global coverage. Slopes from the SRTM data, computed at 3-arc-second resolution, were summarized at 30-arc-second resolution, along with statistics developed to describe the distribution of slope within each 30-arc-second pixel. Because there are many small areas lacking SRTM data and the northern limit of the SRTM mission was lat 60°N., statistical methods referencing other elevation data were used to fill the voids within the dataset and to extrapolate the data north of 60°N. The dataset will be used in the PAGER system to rapidly assess the susceptibility of areas to landsliding following large earthquakes.
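
    The aggregation step can be sketched as follows, with a random array standing in for an SRTM tile: slope is computed at 3-arc-second resolution and then summarized with distribution statistics in 30-arc-second blocks.

```python
# Compute slope from a 3-arc-second DEM tile with finite differences,
# then summarize within 30-arc-second (10x10 cell) blocks. The random
# DEM stands in for SRTM data.
import numpy as np

CELL = 90.0                       # ~3 arc-seconds in metres at the equator
dem = np.random.default_rng(0).normal(500, 50, (300, 300)).cumsum(axis=0)

dz_dy, dz_dx = np.gradient(dem, CELL)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Aggregate 10x10 blocks of 3" cells into one 30" cell with statistics.
blocks = slope_deg.reshape(30, 10, 30, 10).swapaxes(1, 2).reshape(30, 30, 100)
summary = {
    "mean": blocks.mean(axis=2),
    "max":  blocks.max(axis=2),
    "p90":  np.percentile(blocks, 90, axis=2),  # distribution descriptor
}
print({k: round(float(v[0, 0]), 1) for k, v in summary.items()})
```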

  14. Associations between economic loss, financial strain and the psychological status of Wenchuan earthquake survivors.

    PubMed

    Huang, Yunong; Wong, Hung; Tan, Ngoh Tiong

    2015-10-01

    This study examines the effects of economic loss on the life satisfaction and mental health of Wenchuan earthquake survivors. Economic loss is measured by earthquake impacts on the income and houses of the survivors. The correlation analysis shows that earthquake impact on income is significantly correlated with life satisfaction and depression. The regression analyses indicate that earthquake impact on income is indirectly associated with life satisfaction and depression through its effect on financial strain. The research highlights the importance of coping strategies in maintaining a balance between economic status and living demands for disaster survivors. PMID:25754768

  15. Estimation of the magnitudes and epicenters of Philippine historical earthquakes

    NASA Astrophysics Data System (ADS)

    Bautista, Maria Leonila P.; Oike, Kazuo

    2000-02-01

    The magnitudes and epicenters of Philippine earthquakes from 1589 to 1895 are estimated based on the review, evaluation and interpretation of historical accounts and descriptions. The first step involves the determination of magnitude-felt area relations for the Philippines for use in the magnitude estimation. Data used were the earthquake reports of 86 recent, shallow events with well-described effects and known magnitude values. Intensities are assigned according to the modified Mercalli intensity scale of I to XII. The areas enclosed by intensities III to IX [A(III) to A(IX)] are measured and related to magnitude values. The most robust relations are found for magnitudes relating to A(VI), A(VII), A(VIII) and A(IX). Historical earthquake data are obtained from primary sources in libraries in the Philippines and Spain. Most of these accounts were made by Spanish priests and officials stationed in the Philippines during the 16th to 19th centuries. More than 3000 events are catalogued, interpreted and their intensities determined by considering the possible effects of local site conditions, the type of construction and the number and locations of existing towns to assess completeness of reporting. Of these events, 485 earthquakes with the largest number of accounts or with at least a minimum report of damage are selected. The historical epicenters are estimated based on the resulting generalized isoseismal maps, augmented by information on recent seismicity and the location of known tectonic structures. Their magnitudes are estimated by using the previously determined magnitude-felt area equations for recent events. Although historical epicenters are mostly found to lie on known tectonic structures, a few are found to lie along structures that have shown little activity during the instrumented period. A comparison of the magnitude distributions of historical and recent events showed that only the period 1850 to 1900 may be considered well-reported in terms of
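
    A sketch of the magnitude-felt-area calibration, using invented calibration pairs in place of the 86 instrumental events:

```python
# Regress magnitude on log10 of the area enclosed by a given isoseismal
# (here A(VII)) for instrumental events, then apply the relation to a
# historical event. The calibration pairs below are made up.
import numpy as np

# (area enclosed by MMI VII [km^2], instrumental magnitude)
calib = np.array([(1.2e3, 5.9), (4.5e3, 6.4), (9.0e3, 6.8),
                  (2.1e4, 7.2), (6.0e4, 7.7)])
log_a, mags = np.log10(calib[:, 0]), calib[:, 1]

slope, intercept = np.polyfit(log_a, mags, 1)   # M = slope*log10(A) + intercept
print(f"M = {slope:.2f} log10 A(VII) + {intercept:.2f}")

# Apply to a historical event whose MMI VII isoseismal encloses 8000 km^2:
print(f"Estimated magnitude: {slope * np.log10(8.0e3) + intercept:.1f}")
```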

  16. Global Earthquake Casualties due to Secondary Effects: A Quantitative Analysis for Improving PAGER Losses

    USGS Publications Warehouse

    Wald, David J.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey’s (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER’s overall assessment of earthquake losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra–Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address the potential for each hazard (Earle et al., Proceedings of the 14th World Conference of the Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability.

  17. Global earthquake casualties due to secondary effects: A quantitative analysis for improving rapid loss analyses

    USGS Publications Warehouse

    Marano, K.D.; Wald, D.J.; Allen, T.I.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER's overall assessment of earthquake losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra-Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address the potential for each hazard (Earle et al., Proceedings of the 14th World Conference of the Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability. © Springer Science+Business Media B.V. 2009.

  18. Rapid estimate of earthquake source duration: application to tsunami warning.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier

    2016-04-01

    We present a method for estimating the source duration of the fault rupture, based on the high-frequency envelope of teleseismic P waves, inspired by the original work of Ni et al. (2005). The main interest of this seismic parameter is to detect abnormally low rupture velocities that are characteristic of the so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated against two other independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process that determines the source time function (Vallée et al., 2011). The estimated source duration is also confronted with the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (named PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the point of view of operational tsunami warning, numerical simulations of tsunami depend deeply on the source estimation: the better the source estimate, the better the tsunami forecast. The source duration is not directly injected into the numerical simulations of tsunami, because the kinematics of the source are presently ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake that occurs in the shallower part of a subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions will be decreased while the slip is increased, like a 'compact' source (Okal and Hébert, 2007). Conversely, a rapid 'snappy' earthquake that has poor tsunami excitation power will be characterized by a higher rigidity modulus and will produce weaker displacement and smaller source dimensions than a 'normal' earthquake. References: Clément, J
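
    The envelope-duration measurement can be sketched as follows on a synthetic trace; the frequency band, smoothing window and threshold are illustrative choices, not the paper's settings.

```python
# Band-pass the teleseismic P wave, take the analytic-signal envelope,
# and measure how long it stays above a fraction of its peak. The
# synthetic trace stands in for real data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 20.0                                  # samples per second
t = np.arange(0, 200, 1.0 / FS)
rupture = (t > 30) & (t < 95)              # 65 s synthetic source duration
trace = np.random.default_rng(1).normal(size=t.size) * (0.05 + rupture)

b, a = butter(4, [1.0, 4.0], btype="band", fs=FS)   # high-frequency band
envelope = np.abs(hilbert(filtfilt(b, a, trace)))
smooth = np.convolve(envelope, np.ones(100) / 100, mode="same")

above = smooth > 0.25 * smooth.max()       # threshold: 25% of peak envelope
duration = above.sum() / FS
print(f"Estimated source duration: {duration:.0f} s")
```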

  19. Blood loss estimation in epistaxis scenarios.

    PubMed

    Beer, H L; Duvvi, S; Webb, C J; Tandon, S

    2005-01-01

    Thirty-two members of staff from the Ear, Nose and Throat Department at Warrington General Hospital were asked to estimate blood loss in commonly encountered epistaxis scenarios. Results showed that once the measured volume was above 100 ml, visual estimation became grossly inaccurate. Comparison of medical and non-medical staff showed under-estimation was more marked in the non-medical group. Comparison of doctors versus nurses showed no difference in estimation, and no difference was found between grades of staff. PMID:15807956

  20. Centralized web-based loss estimation tool: INLET for disaster response

    NASA Astrophysics Data System (ADS)

    Huyck, C. K.; Chung, H.-C.; Cho, S.; Mio, M. Z.; Ghosh, S.; Eguchi, R. T.; Mehrotra, S.

    2006-03-01

    In the years following the 1994 Northridge earthquake, many researchers in the earthquake community focused on the development of GIS-based loss estimation tools such as HAZUS. Because these highly customizable programs have many users, diverging results after an event can be problematic. Online Internet Map Servers (IMS) offer a centralized system where data, model updates and results cascade to all users. INLET (Internet-based Loss Estimation Tool) is the first online real-time loss estimation system available to the emergency management and response community within Southern California. In the event of a significant earthquake, Perl scripts written to respond to USGS ShakeCast notifications call INLET routines that use USGS ShakeMaps to estimate losses within minutes after an event. INLET incorporates extensive publicly available GIS databases and uses damage functions simplified from FEMA's HAZUS software. INLET currently estimates building damage, transportation impacts, and casualties. The online model simulates the effects of earthquakes, in the context of the larger RESCUE project, in order to test the integration of IT in evacuation routing. The simulation tool provides a "testbed" environment for researchers to model the effect that disaster awareness and route familiarity can have on traffic congestion and evacuation time.
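
    The ShakeMap-to-damage step of such a pipeline can be sketched with a lognormal fragility curve per building type; the medians, dispersions and inventory below are invented placeholders in the spirit of, but not copied from, HAZUS-style damage functions.

```python
# Look up ground motion for each building inventory cell and apply a
# lognormal fragility curve for the probability of extensive damage.
from math import log, sqrt, erf

FRAGILITY = {            # building type -> (median PGA [g], log-std beta)
    "wood_frame":      (0.60, 0.55),
    "unreinf_masonry": (0.25, 0.60),
    "steel_frame":     (0.50, 0.50),
}

def p_extensive(pga_g, btype):
    """P(extensive damage | PGA) as a lognormal CDF in PGA."""
    med, beta = FRAGILITY[btype]
    return 0.5 * (1.0 + erf(log(pga_g / med) / (beta * sqrt(2.0))))

# Inventory cells: (building type, count, ShakeMap PGA at the cell)
inventory = [("wood_frame", 1200, 0.35), ("unreinf_masonry", 300, 0.35),
             ("steel_frame", 80, 0.42)]
damaged = sum(n * p_extensive(pga, bt) for bt, n, pga in inventory)
print(f"Expected buildings with extensive damage: {damaged:.0f}")
```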

  1. Time-varying loss forecast for an earthquake scenario in Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus; Zechar, Jeremy D.; Wiemer, Stefan

    2014-05-01

    When an unexpected earthquake occurs, people suddenly want advice on how to cope with the situation. The 2009 L'Aquila earthquake highlighted the significance of public communication and pushed the use of scientific methods to drive alternative risk mitigation strategies. For instance, van Stiphout et al. (2010) suggested a new approach for objective short-term evacuation decisions: probabilistic risk forecasting combined with cost-benefit analysis. In the present work, we apply this approach to an earthquake sequence that simulated a repeat of the 1356 Basel earthquake, one of the most damaging events in Central Europe. A recent development to benefit society in case of an earthquake is the probabilistic forecasting of aftershock occurrence, but seismic risk delivers a more direct expression of the socio-economic impact. To forecast the seismic risk in the short term, we translate aftershock probabilities into time-varying seismic hazard and combine this with time-invariant loss estimation. Compared with van Stiphout et al. (2010), we use an advanced aftershock forecasting model and detailed settlement data that allow spatial forecasts and settlement-specific decision-making. We quantify the risk forecast probabilistically in terms of human loss. For instance, one minute after the M6.6 mainshock, the probability for an individual to die within the next 24 hours is 41,000 times higher than the long-term average, but the absolute value remains small, at 0.04%. The final cost-benefit analysis adds value beyond a pure statistical approach: it provides objective statements that may justify evacuations. To deliver supportive information in a simple form, we propose a warning approach in terms of alarm levels. Our results do not justify evacuations prior to the M6.6 mainshock, but they do in certain districts afterwards. The ability to forecast the short-term seismic risk at any time, and with sufficient data anywhere, is the first step of personal decision-making and raising risk
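
    The cost-benefit step reduces to a one-line decision rule; the value of a statistical life and the evacuation cost below are illustrative assumptions, while the 24-hour death probabilities echo the figures quoted above.

```python
# Evacuation is justified when the probability of death times the value
# of a statistical life exceeds the per-person cost of evacuating.
def evacuate(p_death_24h, cost_per_person, value_of_life=3.0e6):
    """Return True when expected life-loss cost exceeds evacuation cost."""
    return p_death_24h * value_of_life > cost_per_person

# 0.04% 24-hour individual death risk, as quoted for the worst districts
# one minute after the M6.6 mainshock:
print(evacuate(p_death_24h=4e-4, cost_per_person=100.0))   # True -> evacuate
# Long-term background risk (41,000 times lower):
print(evacuate(p_death_24h=4e-4 / 41_000, cost_per_person=100.0))  # False
```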

  2. Blood Loss Estimation Using Gauze Visual Analogue

    PubMed Central

    Ali Algadiem, Emran; Aleisa, Abdulmohsen Ali; Alsubaie, Huda Ibrahim; Buhlaiqah, Noora Radhi; Algadeeb, Jihad Bagir; Alsneini, Hussain Ali

    2016-01-01

    Background Estimating intraoperative blood loss can be a difficult task, especially when blood is mostly absorbed by gauze. In this study, we have provided an improved method for estimating blood absorbed by gauze. Objectives To develop a guide to estimate blood absorbed by surgical gauze. Materials and Methods A clinical experiment was conducted using aspirated blood and common surgical gauze to create a realistic amount of absorbed blood in the gauze. Different percentages of staining were photographed to create an analogue for the amount of blood absorbed by the gauze. Results A visual analogue scale was created to aid the estimation of blood absorbed by the gauze. The absorptive capacity of different gauze sizes was determined when the gauze was dripping with blood. The amount of reduction in absorption was also determined when the gauze was wetted with normal saline before use. Conclusions The use of a visual analogue may increase the accuracy of blood loss estimation and decrease the consequences related to over or underestimation of blood loss. PMID:27626017

  3. The ratio of injured to fatalities in earthquakes, estimated from intensity and building properties

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Trendafiloski, G.

    2009-04-01

    a city with poorly constructed buildings. The overall ratio for Bam was R=0.33 and for three districts it was R=0.2. In the only other city in the epicentral area, Baravat, located within about four kilometers of the epicenter, R=0.55. Our contention that R is a function of I is further supported by analyzing R(I) for earthquakes where R is known for several settlements. The uncertainties in input parameters like earthquake source properties and Fat are moderate; those in Inj are large. Nevertheless, our results are robust because the difference between R in the developed and developing world is enormous and the dependence on I is obvious. We conclude that R in most earthquakes results from a mixture of low values near the epicenter and high values farther away where intensities decrease to VI. The range between settlements in one single earthquake can be approximately 0.2 < R < 100, due to varying distance and hence varying I. Further, R(developed) = 25 R(developing), approximately. We also simulated several past earthquakes in Algeria, Peru and Iran to compare the values of estimated R(I) resulting from the use of ATC-13 and HAZUS casualty matrices with observations. We evaluated these matrices because they are supposed to apply worldwide and they consider all damage states as possible causes of casualties. Our initial conclusion is that the latter matrices fit the observations better, in particular in the intensity range VII-IX. However, to improve the estimates for all intensity values, we propose that casualty matrices for estimating human losses due to earthquakes should account for differences in I and in building quality in different parts of the world.

  4. Real Time Seismic Loss Estimation in Italy

    NASA Astrophysics Data System (ADS)

    Goretti, A.; Sabetta, F.

    2009-04-01

    For more than 15 years the Seismic Risk Office has been able to perform a real-time evaluation of the potential earthquake loss in any part of Italy. Once the epicentre and the magnitude of the earthquake are made available by the National Institute for Geophysics and Volcanology, the model, based on Italian Geographic Information Systems, evaluates the extent of the damaged area and the consequences for the built environment. In recent years the model has been significantly improved with new methodologies able to condition the uncertainties using observations coming from the field during the first days after the event. However, it is recognized that the main challenges in loss analysis are related to the input data more than to the methodologies. Unlike the urban scenario, where missing data can be collected with sufficient accuracy, a country-wide analysis requires the use of existing databases, often collected for purposes other than seismic scenario evaluation, and hence in some ways lacking completeness and homogeneity. Soil properties, the building inventory and the population distribution are the main input data that must be known for every site in the whole Italian territory. To this end, the National Census on Population and Dwellings has provided information on the residential building types and the population that lives in those building types. Critical buildings, such as hospitals, fire brigade stations and schools, are not included in the inventory, since the national plan for seismic risk assessment of critical buildings is still under way. The choice of a proper ground motion parameter, its attenuation with distance and the building type fragility are important ingredients of the model as well. The presentation will focus on the above-mentioned issues, highlighting the different data sets used and their accuracy, and comparing the model, input data and results when geographical areas of different extent are considered: from the urban scenarios

  5. Ground motions estimates for a cascadia earthquake from liquefaction evidence

    USGS Publications Warehouse

    Dickenson, S.E.; Obermeier, S.F.

    1998-01-01

    Paleoseismic studies conducted in the coastal regions of the Pacific Northwest in the past decade have revealed evidence of crustal downdropping and subsequent tsunami inundation, attributable to a large earthquake along the Cascadia subduction zone that occurred approximately 300 years ago, most likely in AD 1700. In order to characterize the severity of ground motions from this earthquake, we report on the results of a field search for seismically induced liquefaction features. The search was made chiefly along the coastal portions of several river valleys in Washington and along rivers on the central Oregon coast, as well as on islands in the Columbia River of Oregon and Washington. In this paper we focus only on the results of the Columbia River investigation. Numerous liquefaction features were found in some regions, but not in others. The regional distribution of liquefaction features is evaluated as a function of geologic and geotechnical factors at each site in order to estimate the intensity of ground shaking.

  6. Estimating the confidence of earthquake damage scenarios: examples from a logic tree approach

    NASA Astrophysics Data System (ADS)

    Molina, S.; Lindholm, C. D.

    2007-07-01

    Earthquake loss estimation is now becoming an important tool in mitigation planning, where the loss modeling is usually based on a parameterized mathematical representation of the damage problem. In parallel with the development and improvement of such models, the question of sensitivity to parameters that carry uncertainties becomes increasingly important. To this end we have applied the capacity spectrum method (CSM) as described in FEMA's HAZUS-MH (Multi-hazard Loss Estimation Methodology, Earthquake Model, Advanced Engineering Building Module; Federal Emergency Management Agency, 2003) and investigated the effects of selected parameters. The results demonstrate that loss scenarios may easily vary by as much as a factor of two because of simple parameter variations. Of particular importance for the uncertainty is the construction quality of the structure. These results represent a warning against simple acceptance of unbounded damage scenarios and strongly support the development of computational methods in which parameter uncertainties are propagated through the computations to facilitate confidence bounds for the damage scenarios.
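
    Propagating logic-tree parameter uncertainty to confidence bounds can be sketched as follows; the branch weights and loss multipliers are invented, whereas in the study each branch would be a full capacity-spectrum loss computation.

```python
# Enumerate all logic-tree branches, weight each outcome by the product
# of its branch weights, and read off a weighted mean and 5-95% bounds.
import itertools

# Each parameter gets branches of (weight, multiplier on baseline loss).
construction_quality = [(0.3, 0.6), (0.5, 1.0), (0.2, 2.0)]  # dominant factor
ground_motion        = [(0.4, 0.8), (0.6, 1.2)]
capacity_curve       = [(0.5, 0.9), (0.5, 1.1)]

baseline_loss = 100.0   # damaged buildings, arbitrary units
outcomes = []
for branches in itertools.product(construction_quality, ground_motion, capacity_curve):
    weight, loss = 1.0, baseline_loss
    for w, multiplier in branches:
        weight *= w
        loss *= multiplier
    outcomes.append((loss, weight))

outcomes.sort()
mean = sum(l * w for l, w in outcomes)
cum, bounds = 0.0, {}
for loss, w in outcomes:               # weighted 5% / 95% bounds
    cum += w
    if cum >= 0.05 and "lo" not in bounds: bounds["lo"] = loss
    if cum >= 0.95 and "hi" not in bounds: bounds["hi"] = loss
print(f"mean={mean:.0f}, 5-95% bounds=[{bounds['lo']:.0f}, {bounds['hi']:.0f}]")
```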

  7. Locating earthquakes with surface waves and centroid moment tensor estimation

    NASA Astrophysics Data System (ADS)

    Wei, Shengji; Zhan, Zhongwen; Tan, Ying; Ni, Sidao; Helmberger, Don

    2012-04-01

    Traditionally, P-wave arrival times have been used to locate regional earthquakes. In contrast, the travel times of surface waves depend on source excitation, and the source parameters and depth must be determined independently. Thus surface wave path delays need to be known before such data can be used for location. These delays can be estimated from previous earthquakes using the cut-and-paste technique, from ambient seismic noise tomography, and from 3D models. Taking the Chino Hills event as an example, we show consistency of path corrections for (>10 s) Love and Rayleigh waves to within about 1 s obtained from these methods. We then use these empirically derived delay maps to determine centroid locations of 138 Southern California moderate-sized (3.5 < Mw < 5.7) earthquakes using surface waves alone. It appears that these methods are capable of locating the main zone of rupture to within a few (~3) km relative to Southern California Seismic Network locations, using five stations that are well distributed in azimuth. We also address the timing accuracy required to resolve non-double-couple source parameters, which trades off with location: less than 1 km of location error is required for a 10% Compensated Linear Vector Dipole resolution.

  8. Earthquake Loss Assessment for the Evaluation of the Sovereign Risk and Financial Sustainability of Countries and Cities

    NASA Astrophysics Data System (ADS)

    Cardona, O. D.

    2013-05-01

    Recently, earthquakes have struck cities in both developing and developed countries, revealing significant knowledge gaps and the need to improve the quality of input data and of the assumptions of the risk models. The earthquake and tsunami in Japan (2011) and the earthquake disasters in Haiti (2010), Chile (2010), New Zealand (2011) and Spain (2011), to mention only some unexpected impacts in different regions, have left several concerns regarding hazard assessment and the uncertainties associated with the estimation of future losses. Understanding probable losses and reconstruction costs due to earthquakes creates powerful incentives for countries to develop planning options and tools to cope with sovereign risk, including allocating the sustained budgetary resources necessary to reduce those potential damages and safeguard development. Therefore, robust risk models are needed to assess the future economic impacts, the country's fiscal responsibilities and the contingent liabilities for governments, and to formulate, justify and implement risk reduction measures and optimal financial strategies of risk retention and transfer. Special attention should be paid to understanding risk metrics such as the Loss Exceedance Curve (empirical and analytical) and the Expected Annual Loss in the context of conjoint and cascading hazards.
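
    The two risk metrics named here can be illustrated on an invented stochastic event set: the Loss Exceedance Curve plots annual exceedance frequency against loss, and the Expected Annual Loss is its integral.

```python
# Build a loss exceedance curve (LEC) from an event set and compute the
# expected annual loss (EAL). The event set is invented for illustration.
import numpy as np

# Hypothetical stochastic event set: (annual frequency, loss in $M)
events = [(0.10, 50.0), (0.02, 400.0), (0.005, 2_000.0), (0.001, 8_000.0)]

losses = np.array(sorted({loss for _, loss in events}))
exceed_freq = np.array([sum(f for f, l in events if l >= x) for x in losses])
print("LEC points (loss $M, lambda/yr):",
      [(float(l), float(f)) for l, f in zip(losses, exceed_freq)])

# EAL = sum of frequency * loss over events, equivalently the area
# under the exceedance curve.
eal = sum(f * l for f, l in events)
print(f"Expected annual loss: ${eal:.1f}M")
```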

  9. Soil amplification maps for estimating earthquake ground motions in the Central US

    USGS Publications Warehouse

    Bauer, R.A.; Kiefer, J.; Hester, N.

    2001-01-01

    The State Geologists of the Central United States Earthquake Consortium (CUSEC) are developing maps to assist State and local emergency managers and community officials in evaluating the earthquake hazards for the CUSEC region. The state geological surveys have worked together to produce a series of maps that show seismic shaking potential for eleven 1 X 2 degree (scale 1:250 000 or 1 in. ??? 3.9 miles) quadrangles that cover the high-risk area of the New Madrid Seismic Zone in eight states. Shear wave velocity values for the surficial materials were gathered and used to classify the soils according to their potential to amplify earthquake ground motions. Geologic base maps of surficial materials or 3-D material maps, either existing or produced for this project, were used in conjunction with shear wave velocities to classify the soils for the upper 15-30 m. These maps are available in an electronic form suitable for inclusion in the federal emergency management agency's earthquake loss estimation program (HAZUS). ?? 2001 Elsevier Science B.V. All rights reserved.

  10. The global historical and future economic loss and cost of earthquakes during the production of adaptive worldwide economic fragility functions

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    Over the past decade, the production of economic indices behind the CATDAT Damaging Earthquakes Database has allowed for the conversion of historical earthquake economic loss and cost events into today's terms using long-term spatio-temporal series of the consumer price index (CPI), construction costs, wage indices, and GDP from 1900-2013. As part of the doctoral thesis of Daniell (2014), databases and GIS layers at country and sub-country level have been produced for population, GDP per capita, and net and gross capital stock (depreciated and non-depreciated) using studies, census information and the perpetual inventory method. In addition, a detailed study has been undertaken to collect and reproduce as many historical isoseismal maps, macroseismic intensity results and reproductions of earthquakes as possible out of the 7208 damaging events in the CATDAT database from 1900 onwards. a) The isoseismal database and population bounds from 3000+ collected damaging events were compared with the output parameters of GDP and net and gross capital stock per intensity bound and administrative unit, creating a spatial join for analysis. b) The historical costs were divided into shaking/direct ground motion effects and secondary effects costs. The shaking costs were further divided into gross-capital-stock-related and GDP-related costs for each administrative unit and intensity bound couplet. c) Costs were then estimated by regression, optimising the functions of costs vs. gross capital stock and costs vs. GDP. Losses were estimated based on net capital stock, looking at the infrastructure age and value at the time of the event. This dataset was then used to develop an economic exposure for each historical earthquake in comparison with the loss recorded in the CATDAT Damaging Earthquakes Database. The production of economic fragility functions for each country was possible using a temporal regression based on the parameters of
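
    The indexation step described above reduces to simple ratio adjustments; the index values below are placeholders, not CATDAT series.

```python
# Convert a historical nominal loss into today's terms with a price
# index, and separately normalize by the growth in exposed capital stock.
def to_todays_terms(nominal_loss, cpi_event_year, cpi_today):
    """Pure price adjustment (inflation only)."""
    return nominal_loss * cpi_today / cpi_event_year

def normalize_by_exposure(nominal_loss, capital_event_year, capital_today):
    """What the event would cost against today's exposed capital stock."""
    return nominal_loss * capital_today / capital_event_year

loss_1976 = 100.0   # $M nominal, hypothetical event
print(f"Inflation-adjusted:  ${to_todays_terms(loss_1976, 56.9, 310.0):,.0f}M")
print(f"Exposure-normalized: ${normalize_by_exposure(loss_1976, 40.0, 900.0):,.0f}M")
```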

  11. Estimating the Threat of Tsunamigenic Earthquakes and Earthquake Induced-Landslide Tsunami in the Caribbean

    NASA Astrophysics Data System (ADS)

    McCann, W. R.

    2007-05-01

    more likely to produce slow earthquakes. Subduction of rough seafloor may activate thrust faults within the accretionary prism above the main decollement, causing indentation of the prism toe. Later reactivation of a dormant decollement would enhance the possibility of slow earthquakes. Subduction of significant seafloor relief and corresponding indentation of the accretionary prism toe would then be another parameter to estimate the likelihood of slow earthquakes. Using these criteria, several regions of the Northeastern Caribbean stand out as more likely sources for slow earthquakes.

  12. Estimation of earthquake effects associated with a great earthquake in the New Madrid seismic zone

    USGS Publications Warehouse

    Hopper, Margaret G.; Algermissen, Sylvester Theodore; Dobrovolny, Ernest E.

    1983-01-01

    Estimates have been made of the effects of a large Ms = 8.6, Io = XI earthquake hypothesized to occur anywhere in the New Madrid seismic zone. The estimates are based on the distributions of intensities associated with the earthquakes of 1811-12, 1843 and 1895, although the effects of other historical shocks are also considered. The resulting composite-type intensity map for a maximum intensity XI is believed to represent the upper level of shaking likely to occur. Specific intensity maps have been developed for six cities near the epicentral region, taking into account the most likely distribution of site response in each city. Intensities found are: IX for Carbondale, IL; VIII and IX for Evansville, IN; VI and VIII for Little Rock, AR; IX and X for Memphis, TN; VIII, IX, and X for Paducah, KY; and VIII and X for Poplar Bluff, MO. On a regional scale, intensities are found to attenuate from the New Madrid seismic zone most rapidly on the west and southwest sides of the zone, and most slowly to the northwest along the Mississippi River, to the northeast along the Ohio River, and to the southeast toward Georgia and South Carolina. Intensities attenuate toward the north, east, and south in a more normal fashion. Known liquefaction effects are documented, but much more research is needed to define the liquefaction potential.

  13. Combining earthquakes and GPS data to estimate the probability of future earthquakes with magnitude Mw ≥ 6.0

    NASA Astrophysics Data System (ADS)

    Chen, K.-P.; Tsai, Y.-B.; Chang, W.-Y.

    2013-10-01

    Results of Wyss et al. (2000) indicate that future mainshocks can be expected along zones characterized by low b values. In this study we combine Benioff strain with global positioning system (GPS) data to estimate the probability of future Mw ≥ 6.0 earthquakes for a grid covering Taiwan. An approach similar to the maximum likelihood method was used to estimate the Gutenberg-Richter parameters a and b. The two parameters were then used to estimate the probability of future earthquakes of Mw ≥ 6.0 for each of the 391 grid cells (grid interval = 0.1°) covering Taiwan. The method shows a high probability of earthquakes in western Taiwan along a zone that extends from Taichung southward to Nantou, Chiayi, Tainan and Kaohsiung. In eastern Taiwan, a high-probability zone also extends from Ilan southward to Hualian and Taitung. These zones are characterized by high earthquake entropy, high maximum shear strain rates, and paths of low b values. A relation between entropy and maximum shear strain rate is also obtained; it indicates that the maximum shear strain rate is about 4.0 times the entropy. The results of this study should be of interest to city planners, especially those concerned with earthquake preparedness, and to earthquake insurers drawing up basic premiums.
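
    The Gutenberg-Richter estimation for a single grid cell can be sketched with the Aki (1965) maximum-likelihood b-value and a Poisson probability of exceedance; the catalog below is synthetic.

```python
# Aki (1965) maximum-likelihood b-value for one grid cell, annual
# a-value from the catalog rate, and Poisson P(at least one Mw >= 6.0).
import numpy as np

mags = np.array([4.1, 4.3, 4.0, 4.7, 5.2, 4.4, 4.9, 4.2, 5.6, 4.5])
mc = 4.0                      # completeness magnitude
years = 20.0                  # catalog duration for this cell

b = np.log10(np.e) / (mags.mean() - (mc - 0.05))   # 0.05 = half bin width
a = np.log10(mags.size / years) + b * mc           # annual a-value at Mc

rate_m6 = 10.0 ** (a - b * 6.0)                    # annual rate of Mw >= 6
p_30yr = 1.0 - np.exp(-rate_m6 * 30.0)
print(f"b = {b:.2f}, annual rate(Mw>=6) = {rate_m6:.4f}, P(30 yr) = {p_30yr:.2f}")
```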

  14. Earthquake!

    ERIC Educational Resources Information Center

    Markle, Sandra

    1987-01-01

    A learning unit about earthquakes includes activities for primary grade students, including making inferences and defining operationally. Task cards are included for independent study on earthquake maps and earthquake measuring. (CB)

  15. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a ...

  16. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  17. Physics-based estimates of maximum magnitude of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin

    2016-04-01

    In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based relation between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations that are representative of the induced seismicity environment. We present physics-based predictions of Mmax for a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach provides results that are consistent with observations but suggests a different scaling with injected volume than the empirical relation of McGarr (2014).
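
    The empirical benchmark referred to in the last sentence can be computed directly; this sketch shows only the McGarr (2014) bound M0_max = G dV (with the Hanks-Kanamori moment-magnitude conversion), not the paper's physics-based scaling.

```python
# McGarr (2014) bound: maximum seismic moment limited by injected
# volume, M0_max = G * dV, with G the shear modulus.
from math import log10

def mmax_mcgarr(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Moment magnitude bound from injected volume (McGarr, 2014)."""
    m0_max = shear_modulus_pa * injected_volume_m3   # N*m
    return (log10(m0_max) - 9.1) / 1.5               # Hanks & Kanamori (1979)

for vol in (1e4, 1e5, 1e6):   # m^3 injected
    print(f"dV = {vol:.0e} m^3 -> Mmax = {mmax_mcgarr(vol):.1f}")
```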

  18. Strong Ground Motion Estimation During the Kutch, India Earthquake

    NASA Astrophysics Data System (ADS)

    Iyengar, R. N.; Kanth, S. T. G. Raghu

    2006-01-01

    In the absence of strong motion records, ground motion during the 26 January 2001 Kutch, India earthquake has been estimated by analytical methods. A contour map of peak ground acceleration (PGA) values in the near-source region is provided. These results are validated by comparing them with spectral response recorder data and field observations. It is found that very near the epicenter, PGA would have exceeded 0.6 g. A set of three aftershock records has been used as empirical Green's functions to simulate the ground acceleration time history and the 5% damped response spectrum at Bhuj City. It is found that at Bhuj, PGA would have been 0.31-0.37 g. It is demonstrated that source mechanism models can be effectively used to understand the spatial variability of large-scale ground movements near urban areas due to the rupture of active faults.

  19. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  20. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  1. Likely Human Losses in Future Earthquakes in Central Myanmar, Beyond the Northern end of the M9.3 Sumatra Rupture of 2004

    NASA Astrophysics Data System (ADS)

    Wyss, B. M.; Wyss, M.

    2007-12-01

    We estimate that the city of Rangoon and adjacent provinces (Rangoon, Rakhine, Ayeryarwady, Bago) represent an earthquake risk similar in severity to that of Istanbul and the Marmara Sea region. After the M9.3 Sumatra earthquake of December 2004, which ruptured to a point north of the Andaman Islands, the likelihood of additional ruptures in the direction of Myanmar and within Myanmar is increased. This assumption is especially plausible since M8.2 and M7.9 earthquakes in September 2007 extended the 2005 ruptures to the south. Given the dense population of the aforementioned provinces, and the fact that earthquakes of the M7.5 class have occurred there historically (in 1858, 1895 and three in 1930), it would not be surprising if similar-sized earthquakes occurred in the coming decades. Considering that we predicted the extent of human losses in the M7.6 Kashmir earthquake of October 2005 approximately correctly six months before it occurred, it seems reasonable to attempt to estimate losses in future large to great earthquakes in central Myanmar and along its coast of the Bay of Bengal. We have calculated the expected number of fatalities for two classes of events: (1) M8 ruptures offshore (between the Andaman Islands and the Myanmar coast, and along Myanmar's coast of the Bay of Bengal); (2) M7.5 repeats of the historic earthquakes that occurred in the aforementioned years. These calculations are only order-of-magnitude estimates because all necessary input parameters are poorly known. The population numbers, the condition of the building stock, the regional attenuation law, the local site amplification and of course the parameters of future earthquakes can only be estimated within wide ranges. For this reason, we give minimum and maximum estimates, both within approximate error limits. We conclude that the M8 earthquakes located offshore are expected to be less harmful than the M7.5 events on land: for M8 events offshore, the minimum number of fatalities is estimated

  2. Quantitative Estimates of the Numbers of Casualties to be Expected due to Major Earthquakes Near Megacities

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Wenzel, F.

    2004-12-01

    Defining casualties as the sum of fatalities plus injured, we use their mean number, as calculated by QUAKELOSS (developed by the Extreme Situations Research Center, Moscow), as a measure of the extent of possible disasters due to earthquakes. Examples of cities we examined include Algiers, Cairo, Istanbul, Mumbai and Teheran, with populations ranging from about 3 to 20 million. With the assumption that the properties of the building stock have not changed since 1950, we find that the number of expected casualties will have increased about 5- to 10-fold by the year 2015. This increase is directly proportional to the increase of the population. For the assumed magnitudes, we used M7 and M6.5 because shallow earthquakes in this range can occur in the seismogenic layer without rupturing the surface; this means they could occur anywhere in a seismically active area, not only along known faults. As a function of epicentral distance, the casualties as a fraction of the population decrease from about 6% at 20 km, to 3% at 30 km and 0.5% at 50 km, for an earthquake of M7. At 30 km distance, the assumed variation of the properties of the building stock from country to country gives rise to variations of 1% to 5% in the estimate of the percentage of the population that becomes casualties. As a function of earthquake size, the expected number of casualties drops by approximately an order of magnitude for an M6.5, compared to an M7, at 30 km distance. Because the computer code and database in QUAKELOSS are calibrated based on about 1000 earthquakes with fatalities, and verified by real-time loss estimates for about 60 cases, these results are probably of the correct order of magnitude. However, the results should not be taken as overly reliable, because (1) the probability calculations of the losses result in uncertainties of about a factor of two, (2) the method has been tested for medium-size cities, not for megacities, and (3) many assumptions were made. Nevertheless, it is

  3. Global Earthquake and Volcanic Eruption Economic losses and costs from 1900-2014: 115 years of the CATDAT database - Trends, Normalisation and Visualisation

    NASA Astrophysics Data System (ADS)

    Daniell, James; Skapski, Jens-Udo; Vervaeck, Armand; Wenzel, Friedemann; Schaefer, Andreas

    2015-04-01

    Over the past 12 years, an in-depth database has been constructed for socio-economic losses from earthquakes and volcanoes. The effects of earthquakes and volcanic eruptions have been documented in many databases; however, many errors and incorrect details are often encountered. To combat this, the database was formed with socioeconomic checks of GDP, capital stock, population and other elements, as well as providing upper and lower bounds to each available event loss. The definition of economic losses within the CATDAT Damaging Earthquakes Database (Daniell et al., 2011a) has been redefined as of v6.1 to provide three options of natural disaster loss pricing (reconstruction cost, replacement cost and actual loss) in order to better define the impact of historical disasters. For volcanoes, as for earthquakes, a reassessment has been undertaken looking at the historical net and gross capital stock and GDP at the time of the event, including the depreciated stock, in order to calculate the actual loss. A normalisation has then been undertaken using updated population, GDP and capital stock. The difference between depreciated and gross capital can be removed from the historical loss estimates, which have all been calculated without taking depreciation of the building stock into account. The culmination of time series from 1900-2014 of net and gross capital stock, GDP and direct economic loss data, together with detailed studies of infrastructure age and existing damage surveys, has allowed the first estimate of this nature. The death tolls in earthquakes from 1900-2014 are presented in various forms, showing around 2.32 million deaths due to earthquakes (with a range of 2.18 to 2.63 million), around 59% of them due to masonry buildings and 28% due to secondary effects. For the volcanic eruption database, a death toll of around 98,000, with a range from around 83,000 to 107,000, is seen from 1900-2014. The application of VSL life costing from death and injury

  4. A comparison of socio-economic loss analysis from the 2013 Haiyan Typhoon and Bohol Earthquake events in the Philippines in near real-time

    NASA Astrophysics Data System (ADS)

    Daniell, James; Mühr, Bernhard; Kunz-Plapp, Tina; Brink, Susan A.; Kunz, Michael; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    In the aftermath of a disaster, the extent of the socioeconomic loss (fatalities, homelessness and economic losses) is often not known, and it may take days before a reasonable estimate is available. Using the technique of socio-economic fragility functions developed by Daniell (2014), based on a regression of socio-economic indicators through time against historical empirical loss vs. intensity data, a first estimate can be established. With more information from the region as the disaster unfolds, a more detailed estimate can be provided via a calibration of the initial loss estimate parameters. In 2013, two main disasters hit the Philippines: the Bohol earthquake in October and Typhoon Haiyan in November. Although the two disasters were contrasting and hit different regions, the same generalised methodology was used for the initial rapid estimates and then for updating the disaster loss estimate through time. The CEDIM Forensic Disaster Analysis Group of KIT and GFZ produced six reports for Bohol and two reports for Haiyan detailing various aspects of the disasters, from the losses to building damage, the socioeconomic profile, and also the social networking and disaster response. This study focusses on the loss analysis undertaken. The following technique was used: 1. A regression of historical earthquake and typhoon losses for the Philippines was examined using the CATDAT Damaging Earthquakes Database and various Philippine databases, respectively. 2. The historical intensity impact of the examined events was placed in a GIS environment in order to allow correlation with the population and capital stock databases from 1900-2013 to create a loss function. The modified human development index from 1900-2013 was also used to calibrate events through time. 3. The earthquake intensity and the wind speed intensity from the 2013 events, together with the 2013 capital stock and population, were used to calculate the number of fatalities (except for Haiyan), homeless and

  5. Earthquake recurrence rate estimates for eastern Washington and the Hanford Site

    SciTech Connect

    Rohay, A.C.

    1989-08-01

    The historical and instrumental records of earthquakes were used to estimate earthquake recurrence rates for input to a new seismic hazard analysis at the Hanford Site in eastern Washington. Two areas were evaluated: the eastern Washington region and the smaller Yakima Fold Belt, in which the Hanford Site is located. The completeness of a catalog of earthquakes was evaluated for earthquakes with Modified Mercalli Intensity (MMI) IV through VII. Only one MMI VII earthquake was reported in the last 100 years in eastern Washington. The reporting of MMI VI earthquakes appears to be complete for the last 80 years, and the reporting of MMI V earthquakes appears to be complete for the last 65 years. However, MMI IV earthquakes are consistently under-reported. For a limited set of earthquakes, both MMI and magnitude (ML) have been reported. A plot of these data indicated that the Gutenberg-Richter relationship could be used to estimate earthquake magnitudes from intensities. A recurrence curve for the historical earthquake data was calculated using the maximum likelihood method, including corrections for the width of the magnitude conversion. The slope of the recurrence curve (i.e., b-value) was found to be -1.15. Another catalog, one that listed instrumentally detected earthquakes from 1969 to the present, was used to supplement the historical earthquake data. Magnitudes were determined using a coda-length method (Mc) that had been approximately calibrated to local magnitude ML. For earthquakes whose Mc was between 3 and 5, the b-value ranged from -1.07 to -1.12. 12 refs., 9 figs., 9 tabs.
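
    Maximum-likelihood b-value estimation of the kind referenced above is commonly done with the Aki (1965) estimator, with a correction term when magnitudes are binned. A minimal sketch, using a synthetic Gutenberg-Richter catalog rather than the Hanford data:

        import numpy as np

        def b_value_mle(mags, m_complete, bin_width=0.0):
            # Aki (1965) maximum-likelihood b-value; bin_width applies the
            # usual correction when magnitudes are reported in discrete bins.
            m = np.asarray(mags)
            m = m[m >= m_complete]
            return np.log10(np.e) / (m.mean() - (m_complete - bin_width / 2.0))

        # Synthetic Gutenberg-Richter catalog with a true b-value of 1.15
        rng = np.random.default_rng(0)
        mags = 3.0 + rng.exponential(1.0 / (1.15 * np.log(10)), size=2000)
        print(b_value_mle(mags, m_complete=3.0))  # close to 1.15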

  6. Errors in Expected Human Losses Due to Incorrect Seismic Hazard Estimates

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Nekrasova, A.; Kossobokov, V. G.

    2011-12-01

    The probability of strong ground motion is presented in seismic hazard maps, in which peak ground accelerations (PGA) with 10% probability of exceedance in 50 years are shown by color codes. It has become evident that these maps do not correctly give the seismic hazard: on the seismic hazard map of Japan, the epicenters of recent large earthquakes are located in regions of relatively low mapped hazard. The errors of the GSHAP maps have been measured by the difference between observed and expected intensities due to large earthquakes. Here, we estimate how the errors in seismic hazard estimates propagate into errors in estimating the potential fatalities and affected population. We calculated the numbers of fatalities that would have to be expected in the regions of the nine earthquakes with more than 1,000 fatalities during the last 10 years with relatively reliable fatality estimates, assuming a magnitude that generates, as a maximum intensity, the one given by the GSHAP maps. This value is the number of fatalities to be exceeded with a probability of 10% during 50 years. In most regions of devastating earthquakes, there are no instruments to measure ground accelerations. Therefore, we converted the PGA expected as a likely maximum based on the GSHAP maps to intensity. The magnitude of the earthquake that would cause the intensity expected by GSHAP as a likely maximum was calculated by M(GSHAP) = (I0 + 1.5)/1.5. The numbers of fatalities expected based on earthquakes with M(GSHAP) were calculated using the loss estimating program QLARM. We calibrated this tool for each case by calculating the theoretical damage and numbers of fatalities (Festim) for the disastrous test earthquakes, generating a match with the observed numbers of fatalities (Fobs = Festim) by adjusting the attenuation relationship within the bounds of commonly observed laws. Calculating the numbers of fatalities expected for the earthquakes with M(GSHAP) will thus yield results that
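
    The intensity-to-magnitude conversion quoted in the abstract is simple enough to state directly; a minimal sketch of that relation alone (the surrounding QLARM calibration workflow is not reproduced):

        def m_gshap(i0):
            # Magnitude whose likely maximum intensity I0 matches the
            # GSHAP-implied value, per the relation quoted in the abstract.
            return (i0 + 1.5) / 1.5

        # Example: a GSHAP PGA converted to a likely maximum intensity of VIII
        print(round(m_gshap(8.0), 1))  # ~6.3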

  7. Estimating the extent of stress influence by using earthquake triggering groundwater level variations in Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Shih-Jung; Hsu, Kuo-Chin; Lai, Wen-Chi; Wang, Chein-Lee

    2015-11-01

    Groundwater level variations associated with earthquake events may reveal useful information. This study estimates the extent of stress influence, defined as the distance over which an earthquake can induce a step change of the groundwater level, using earthquake-triggered groundwater level variations in Taiwan. Groundwater variations were first characterized based on the dynamics of groundwater level changes dominantly triggered by earthquakes. The step-change data in co-seismic groundwater level variations were used to analyze the extent of stress influence for earthquakes. From the data analysis, the maximum extent of stress influence is 250 km around Taiwan. A two-dimensional approach was adopted to develop two models for estimating the maximum extent of stress influence for earthquakes. From the developed models, the extent of stress influence is proportional to the earthquake magnitude and inversely proportional to the groundwater level change. The model equations can be used to calculate the influence radius of stress from an earthquake by using the observed change of groundwater level and the earthquake magnitude. The models were applied to estimate the area of anomalous stress, defined as the possible area where strain energy is accumulated, using the cross-areas method. The results show that the estimated areas of anomalous stress are close to, but do not exactly coincide with, the epicenter; complex geological structures and material heterogeneity and anisotropy may explain this disagreement. More data collection and model refinement can improve the proposed model. This study shows the potential of using groundwater level variations to capture seismic information. The proposed concept of the extent of stress influence can be used to estimate earthquake effects in hydraulic engineering, mining engineering, carbon dioxide sequestration, and other applications. This study provides a concept for estimating the possible areas of anomalous stress for a forthcoming earthquake.

  8. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  9. Multicomponent seismic loss estimation on the North Anatolian Fault Zone (Turkey)

    NASA Astrophysics Data System (ADS)

    karimzadeh Naghshineh, S.; Askan, A.; Erberik, M. A.; Yakut, A.

    2015-12-01

    Seismic loss estimation is essential for incorporating the seismic risk of structures into an efficient decision-making framework. Evaluation of seismic damage to structures requires a multidisciplinary approach including earthquake source characterization, seismological prediction of earthquake-induced ground motions, prediction of structural responses under ground shaking, and finally estimation of the induced damage to structures. As the study region, Erzincan, a city in eastern Turkey, is selected; it is located at the junction of three active strike-slip faults: the North Anatolian Fault, the Northeast Anatolian Fault and the Ovacik Fault. The Erzincan city center lies in a pull-apart basin underlain by soft sediments and has experienced devastating earthquakes, such as the 27 December 1939 (Ms=8.0) and the 13 March 1992 (Mw=6.6) events, resulting in extensive physical and economic losses. These losses are attributed not only to the high seismicity of the area but also to the seismic vulnerability of the built environment. This study focuses on seismic damage estimation for Erzincan using both regional seismicity and local building information. For this purpose, first, ground motion records are selected from a set of scenario events simulated with the stochastic finite-fault methodology using regional seismicity parameters. Then, the existing building stock is classified into specified groups represented by equivalent single-degree-of-freedom systems. Through these models, the inelastic dynamic structural responses are investigated with nonlinear time history analyses. To assess the potential seismic damage in the study area, fragility curves for the classified structural types are derived. Finally, the estimated damage is compared with the damage observed during the 1992 Erzincan earthquake. The results show a reasonable match, indicating the efficiency of the ground motion simulations and building analyses.
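
    Fragility curves of the kind derived here are conventionally expressed as lognormal cumulative distributions of a ground motion parameter. A minimal sketch, with illustrative median and dispersion values that are assumptions, not the study's fitted parameters:

        import numpy as np
        from scipy.stats import norm

        def fragility(pga, median, beta):
            # P(reaching or exceeding a damage state | PGA), lognormal form
            return norm.cdf(np.log(pga / median) / beta)

        # Hypothetical parameters for a mid-rise RC frame class (illustrative)
        for pga in (0.1, 0.2, 0.4, 0.8):  # PGA in g
            print(pga, round(fragility(pga, median=0.35, beta=0.6), 3))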

  10. Estimation of strong ground motions from hypothetical earthquakes on the Cascadia subduction zone, Pacific Northwest

    USGS Publications Warehouse

    Heaton, T.H.; Hartzell, S.H.

    1989-01-01

    Strong ground motions are estimated for the Pacific Northwest assuming that large shallow earthquakes, similar to those experienced in southern Chile, southwestern Japan, and Colombia, may also occur on the Cascadia subduction zone. Fifty-six strong motion recordings from twenty-five subduction earthquakes of Ms ≥ 7.0 are used to estimate the response spectra that may result from earthquakes of Mw ≤ 8 1/4. Large variations in observed ground motion levels are noted for a given site distance and earthquake magnitude. When compared with motions that have been observed in the western United States, large subduction zone earthquakes produce relatively large ground motions at surprisingly large distances. An earthquake similar to the 22 May 1960 Chilean earthquake (Mw 9.5) is the largest event that is considered to be plausible for the Cascadia subduction zone. This event has a moment which is two orders of magnitude larger than the largest earthquake for which we have strong motion records. The empirical Green's function technique is used to synthesize strong ground motions for such giant earthquakes. Observed teleseismic P-waveforms from giant earthquakes are also modeled using the empirical Green's function technique in order to constrain model parameters. The teleseismic modeling in the period range of 1.0 to 50 sec strongly suggests that fewer Green's functions should be randomly summed than is required to match the long-period moments of giant earthquakes. It appears that a large portion of the moment associated with giant earthquakes occurs at very long periods that are outside the frequency band of interest for strong ground motions. Nevertheless, the occurrence of a giant earthquake in the Pacific Northwest may produce quite strong shaking over a very large region. © 1989 Birkhäuser Verlag.

  11. Mathematical models for estimating earthquake casualties and damage cost through regression analysis using matrices

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Bautista, L. A.; Baccay, E. B.

    2014-04-01

    The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, number of injured persons, affected families and total cost of damage. To quantify the direct damage from earthquakes to human beings and properties, given the magnitude, intensity, depth of focus, location of epicentre and time duration, regression models were made. The researchers formulated the models through regression analysis using matrices, with α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines in the inclusive years 1968 to 2012. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology, and data on damage and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. The resulting models are intended to be of great value in emergency planning and in initiating and updating programs for earthquake hazard reduction in the Philippines, which is an earthquake-prone country.
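
    The matrix form of regression referenced in the title is the standard least-squares solution of the normal equations. A minimal sketch with invented predictor values (magnitude, intensity, depth) and casualty counts, not the study's data:

        import numpy as np

        # Illustrative per-event predictors: magnitude, intensity, depth (km)
        X = np.array([[7.8, 8.0, 33.0],
                      [6.9, 7.0, 25.0],
                      [7.1, 8.0, 15.0],
                      [6.5, 6.0, 40.0],
                      [7.6, 8.0, 30.0]])
        y = np.array([3200.0, 120.0, 980.0, 15.0, 2400.0])  # e.g. deaths

        # Least-squares coefficients in matrix form: solves the normal
        # equations (X'X) beta = X'y, with an intercept column prepended.
        Xd = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        print(np.round(beta, 2))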

  12. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid casualty estimation after an event for humanitarian response. Both of these events resulted in surprisingly high numbers of deaths, injuries and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people suffering serious or moderate injuries and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
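
    The semi-empirical chain (building stock, damage rates per class, fatality rates per class) can be illustrated in a few lines. All counts, collapse rates and lethality ratios below are invented placeholders, not CEQID-derived values:

        # Fatalities ~ sum over building classes of
        #   count * occupants * P(collapse | shaking) * lethality at collapse.
        # All numbers are invented placeholders for one intensity level.
        building_stock = {  # class: (count, mean occupants, lethality)
            "adobe":    (5000, 4.0, 0.12),
            "rc_frame": (2000, 6.0, 0.05),
            "timber":   (3000, 3.5, 0.01),
        }
        collapse_rate = {"adobe": 0.20, "rc_frame": 0.05, "timber": 0.01}

        fatalities = sum(n * occ * collapse_rate[c] * leth
                         for c, (n, occ, leth) in building_stock.items())
        print(round(fatalities))  # expected deaths for this scenario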

  13. Improving Estimates of Coseismic Subsidence from southern Cascadia Subduction Zone Earthquakes at northern Humboldt Bay, California

    NASA Astrophysics Data System (ADS)

    Padgett, J. S.; Engelhart, S. E.; Hemphill-Haley, E.; Kelsey, H. M.; Witter, R. C.

    2015-12-01

    Geological estimates of subsidence from past earthquakes help to constrain Cascadia subduction zone (CSZ) earthquake rupture models. To improve subsidence estimates for past earthquakes along the southern CSZ, we apply transfer function analysis to microfossils from three intertidal marshes in northern Humboldt Bay, California, ~60 km north of the Mendocino Triple Junction. The transfer function method uses elevation-dependent intertidal foraminiferal and diatom assemblages to reconstruct relative sea-level (RSL) change indicated by shifts in microfossil assemblages. We interpret stratigraphic evidence associated with sudden shifts in microfossils to reflect sudden RSL rise due to subsidence during past CSZ earthquakes. Laterally extensive (>5 km) and sharp mud-over-peat contacts beneath marshes at Jacoby Creek, Mad River Slough, and McDaniel Slough demonstrate widespread earthquake subsidence in northern Humboldt Bay. C-14 ages of plant macrofossils taken from above and below three contacts that correlate across all three sites provide estimates of the times of subsidence at ~250 yr BP, ~1300 yr BP and ~1700 yr BP. Two further contacts observed at only two sites provide evidence for subsidence during possible CSZ earthquakes at ~900 yr BP and ~1100 yr BP. Our study contributes 20 AMS radiocarbon ages of identifiable plant macrofossils that improve estimates of the timing of past earthquakes along the southern CSZ. We anticipate that our results will provide more accurate and precise reconstructions of RSL change induced by southern CSZ earthquakes. Prior to our work, studies in northern Humboldt Bay provided subsidence estimates with vertical uncertainties >±0.5 m, too imprecise to adequately constrain earthquake rupture models. Our method, applied recently in coastal Oregon, has shown that subsidence during past CSZ earthquakes can be reconstructed with a precision of ±0.3 m and substantially improves constraints on rupture models used for seismic hazard

  14. Uncertainty of earthquake losses due to model uncertainty of input ground motions in the Los Angeles area

    USGS Publications Warehouse

    Cao, T.; Petersen, M.D.

    2006-01-01

    In a recent study we used the Monte Carlo simulation method to evaluate the ground-motion uncertainty of the 2002 update of the California probabilistic seismic hazard model. The resulting ground-motion distribution is used in this article to evaluate the contribution of the hazard model to the uncertainty in the earthquake loss ratio, the ratio of the expected loss to the total value of a structure. We use the Hazards U.S. (HAZUS) methodology for loss estimation because it is a widely used and publicly available risk model intended for regional studies by public agencies and for use by governmental decision makers. We found that the loss ratio uncertainty depends not only on the ground-motion uncertainty but also on the mean ground-motion level. The ground-motion uncertainty, as measured by the coefficient of variation (COV), is amplified when converting to the loss ratio uncertainty because loss increases concavely with ground motion. By comparing the ground-motion uncertainty with the corresponding loss ratio uncertainty for structural damage to light wood-frame buildings in the Los Angeles area, we show that the COV of the loss ratio is almost twice the COV of ground motion with a return period of 475 years around the San Andreas fault and other major faults in the area. The loss ratio for the 2475-year ground-motion maps is about a factor of three higher than for the 475-year maps. However, the uncertainties in ground motion and loss ratio for the longer return periods are lower than for the shorter return periods, because the uncertainty parameters in the hazard logic tree are independent of the return period while the mean ground motion increases with return period.
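
    Monte Carlo propagation of ground-motion uncertainty into loss-ratio uncertainty can be sketched as follows; the lognormal sigma and the damage curve are illustrative assumptions, not the study's hazard model or the HAZUS functions:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        mean_pga, sigma_ln = 0.4, 0.3   # illustrative mean PGA (g), log-std
        pga = rng.lognormal(np.log(mean_pga), sigma_ln, 100_000)

        # Illustrative nonlinear damage curve mapping PGA to a loss ratio
        loss = norm.cdf(np.log(pga / 0.6) / 0.5)

        cov = lambda x: x.std() / x.mean()
        print(f"ground-motion COV: {cov(pga):.2f}")
        print(f"loss-ratio COV:    {cov(loss):.2f}")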

  15. Estimating surface faulting impacts from the shakeout scenario earthquake

    USGS Publications Warehouse

    Treiman, J.A.; Ponti, D.J.

    2011-01-01

    An earthquake scenario, based on a kinematic rupture model, has been prepared for a Mw 7.8 earthquake on the southern San Andreas Fault. The rupture distribution, in the context of other historic large earthquakes, is judged reasonable for the purposes of this scenario. This model is used as the basis for generating a surface rupture map and for assessing potential direct impacts on lifelines and other infrastructure. Modeling the surface rupture involves identifying fault traces on which to place the rupture, assigning slip values to the fault traces, and characterizing the specific displacements that would occur to each lifeline impacted by the rupture. Different approaches were required to address variable slip distribution in response to a variety of fault patterns. Our results, involving judgment and experience, represent one plausible outcome and are not predictive because of the variable nature of surface rupture. © 2011, Earthquake Engineering Research Institute.

  16. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes with magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan (the western, eastern, and northeastern regions) using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually obtained for clustered events, such as events with foreshocks and sequences of events occurring within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are obtained around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but they also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and the probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake, while the probability obtained from the activation model increases as the large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  17. Bayesian estimation of system reliability under asymmetric loss

    NASA Astrophysics Data System (ADS)

    Thompson, Ronald David

    This research is concerned with estimating the reliability of a k-out-of-p system when the lifetimes of its p components are iid, when subjective beliefs about the behavior of the system's individual components are available, and when losses corresponding to overestimation and underestimation errors can be approximated by a suitable family of asymmetric loss functions. Point estimates for such systems are discussed in the context of Bayes estimation with respect to loss functions. A set of properties is proposed as being minimal properties that all loss functions appropriate to reliability estimation might satisfy. Several families of asymmetric loss functions that satisfy these minimal properties are discussed, and their corresponding posterior Bayes estimators are derived. One of these families, squarex loss functions, is a generalization of linex loss functions. The concept of loss robustness is discussed in the context of parametric families of asymmetric loss functions. As an application, the reliability of O-rings critical to the 1986 catastrophic failure of the Space Shuttle Challenger is estimated. Point estimation of negative exponential stress-strength k-out-of-p systems with respect to reference priors is discussed in this context of asymmetric loss functions.
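
    For the linex family mentioned above, the Bayes point estimate has a closed form in terms of the posterior moment generating function. A minimal sketch, using posterior samples from an illustrative Beta posterior rather than the dissertation's models:

        import numpy as np

        def bayes_linex(theta_samples, a):
            # Bayes point estimate under linex loss
            # L(d, theta) = exp(a*(d - theta)) - a*(d - theta) - 1:
            # the optimum is -(1/a) * log E[exp(-a * theta)].
            return -np.log(np.mean(np.exp(-a * theta_samples))) / a

        # Posterior samples for a component reliability (illustrative)
        rng = np.random.default_rng(2)
        theta = rng.beta(9.0, 1.0, size=50_000)

        print(bayes_linex(theta, a=5.0))  # overestimation penalized more
        print(theta.mean())               # symmetric (squared-error) estimate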

  18. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
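
    The BPT density, and the conditional probability of an event in a coming interval given the elapsed quiet time, can be computed directly. A minimal sketch with Parkfield-like but illustrative numbers:

        import numpy as np
        from scipy import integrate

        def bpt_pdf(t, mu, alpha):
            # Brownian passage time density, mean mu, aperiodicity alpha
            return (np.sqrt(mu / (2 * np.pi * alpha**2 * t**3))
                    * np.exp(-((t - mu) ** 2) / (2 * mu * alpha**2 * t)))

        def cond_prob(t_elapsed, dt, mu, alpha):
            # P(event in next dt years | quiet for t_elapsed years)
            num, _ = integrate.quad(bpt_pdf, t_elapsed, t_elapsed + dt,
                                    args=(mu, alpha))
            den, _ = integrate.quad(bpt_pdf, t_elapsed, np.inf,
                                    args=(mu, alpha))
            return num / den

        # Illustrative values: mean recurrence 25 yr, alpha = 0.5
        print(cond_prob(t_elapsed=20.0, dt=1.0, mu=25.0, alpha=0.5))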

  19. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

    The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China, occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity is up to VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we first analyzed the dynamic source process, calculated the source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion with associated fault rupture properties at Boxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motions, we described the intensity distribution of the Lushan earthquake field. The simulated maximum intensity value is IX, and the region of intensity VII and above covers almost 16,000 km2, consistent with the observed intensities published online by the China Earthquake Administration (CEA) on April 25. The estimation methods based on empirical relationships and the numerical modeling developed in this study have useful applications in strong ground motion prediction and intensity estimation for earthquake rescue purposes. Keywords: Lushan Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity

  20. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions, such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquakes are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.
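
    The core loop of a genetic-algorithm slip estimator (selection, crossover, and mutation against a misfit-based fitness) can be sketched on a toy linear forward problem. The Green's functions and slip values below are random stand-ins, and this is not the authors' Genetic Algorithm Slip Estimator code:

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy linear forward problem: 6 fault patches, 10 "observations"
        G = rng.random((10, 6))                  # stand-in Green's functions
        d_obs = G @ np.array([0., 2., 5., 3., 1., 0.])

        def fitness(slip):
            return -np.linalg.norm(G @ slip - d_obs)

        pop = rng.uniform(0.0, 6.0, (100, 6))    # initial random slip models
        for _ in range(300):
            order = np.argsort([fitness(p) for p in pop])[::-1]
            parents = pop[order[:20]]            # keep the 20 fittest models
            a = parents[rng.integers(0, 20, 80)]
            b = parents[rng.integers(0, 20, 80)]
            mask = rng.random((80, 6)) < 0.5     # uniform crossover
            kids = np.where(mask, a, b) + rng.normal(0.0, 0.1, (80, 6))
            pop = np.vstack([parents, np.clip(kids, 0.0, None)])

        print(np.round(pop[np.argmax([fitness(p) for p in pop])], 1))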

  1. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  2. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of earthquakes of different magnitudes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes best-estimate values of the probability distribution of different magnitude earthquakes recurring on a fault, taken from literature sources. Our study aims to apply this model to the Taipei metropolitan area, with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77%, and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake: the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years 13.54%, and within 300 years 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probability of Taiwan residents experiencing heart disease or malignant neoplasm is 11.5% and 29%, respectively. The inference of this study is that the calculated lifetime risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake is as great as the risk of suffering from a heart attack or other health ailments.
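
    For comparison, the plain Poisson occurrence probability for a given return period is a one-liner; note that the paper's revised model folds in additional recurrence-distribution information, so its quoted figures differ from these:

        import math

        def poisson_prob(t_years, return_period):
            # P(at least one event in t) = 1 - exp(-t / T), Poisson process
            return 1.0 - math.exp(-t_years / return_period)

        for t in (20, 79, 300):
            print(t, f"{poisson_prob(t, 543.0):.2%}")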

  3. Coastal land loss and gain as potential earthquake trigger mechanism in SCRs

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2007-12-01

    In stable continental regions (SCRs), historic data show that earthquakes can be triggered by natural tectonic sources in the interior of the crust and also by sources stemming from the Earth's surface or subsurface. Building on this framework, this abstract discusses both as potential sources that might have triggered the 2007 ML 4.2 Folkestone earthquake in Kent, England. Folkestone, located along the southeast coast of Kent, is a mature aseismic region. However, a shallow earthquake with a local magnitude of ML = 4.2 occurred on 28 April 2007 at 07:18 UTC about 1 km east of Folkestone (51.008° N, 1.206° E), between Dover and New Romney. The epicentral error is about ±5 km. While coastal land loss has major effects to the southwest and northeast of Folkestone, research observations suggest that erosion and landsliding do not occur in the immediate Folkestone city area (<1 km). Furthermore, erosion removes rock material from the surface; this mass reduction decreases the gravitational stress component and would move a fault away from failure, given a tectonic normal and strike-slip fault regime. In contrast, land gain by geoengineering (e.g., shingle accumulation) in the harbor of Folkestone dates back to 1806. The accumulated mass of sand and gravel amounted to 2.8×10⁹ kg (2.8 Mt) in 2007. This concentrated mass change, less than 1 km away from the epicenter of the mainshock, was able to change the tectonic stress in the strike-slip/normal stress regime. Since 1806, shear and normal stresses have increased most on oblique faults dipping 60±10°. The stresses reached values ranging between 1.0 kPa and 30.0 kPa at depths of up to 2 km, which are critical for triggering earthquakes. Furthermore, the ratio between holding and driving forces continuously decreased for 200 years. In conclusion, coastal engineering at the surface most likely dominates as the potential trigger mechanism for the 2007 ML 4.2 Folkestone earthquake. It can be anticipated that

  4. Estimating locations and magnitudes of earthquakes in eastern North America from Modified Mercalli intensities

    USGS Publications Warehouse

    Bakun, W.H.; Johnston, A.C.; Hopper, M.G.

    2003-01-01

    We use 28 calibration events (3.7 ≤ M ≤ 7.3) from Texas to the Grand Banks, Newfoundland, to develop a Modified Mercalli intensity (MMI) model and associated site corrections for estimating source parameters of historical earthquakes in eastern North America. The model, MMI = 1.41 + 1.68 M - 0.00345 Δ - 2.08 log(Δ), where Δ is the distance in kilometers from the epicenter and M is moment magnitude, provides unbiased estimates of M and its uncertainty and, if site corrections are used, of source location. The model can be used for the analysis of historical earthquakes with only a few MMI assignments. We use this model, MMI site corrections, and Bakun and Wentworth's (1997) technique to estimate M and the epicenter for three important historical earthquakes. The intensity magnitude MI is 6.1 for the 18 November 1755 earthquake near Cape Ann, Massachusetts; 6.0 for the 5 January 1843 earthquake near Marked Tree, Arkansas; and 6.0 for the 31 October 1895 earthquake. The 1895 event probably occurred in southern Illinois, about 100 km north of the site of significant ground failure effects near Charleston, Missouri.
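
    The quoted intensity-attenuation model is directly usable for predicting MMI at a site; a minimal sketch of the forward relation only (the inverse problem of estimating M and location from many MMI assignments is not reproduced):

        from math import log10

        def mmi_predicted(m, delta_km):
            # MMI = 1.41 + 1.68*M - 0.00345*delta - 2.08*log10(delta)
            return 1.41 + 1.68 * m - 0.00345 * delta_km - 2.08 * log10(delta_km)

        # Example: predicted MMI 100 km from an M 6.1 event (no site correction)
        print(round(mmi_predicted(6.1, 100.0), 1))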

  5. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    SciTech Connect

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  6. ShakeMap Atlas 2.0: an improved suite of recent historical earthquake ShakeMaps for global hazard analyses and loss model calibration

    USGS Publications Warehouse

    Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.

    2012-01-01

    We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, which is an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimations; (2) an updated earthquake source catalogue that includes regional locations and finite fault models; (3) a refined strategy to select prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.

  7. USGS approach to real-time estimation of earthquake-triggered ground failure - Results of 2015 workshop

    USGS Publications Warehouse

    Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.

    2016-01-01

    The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.

  8. A discussion of the socio-economic losses and shelter impacts from the Van, Turkey Earthquakes of October and November 2011

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Kunz-Plapp, T.; Vervaeck, A.; Muehr, B.; Markus, M.

    2012-04-01

    The Van earthquake hit at 10:41 GMT (13:41 local) on Sunday, October 23rd, 2011. It was a Mw 7.1-7.3 event located at a depth of around 10 km, with the epicentre directly between Ercis (pop. 75,000) and Van (pop. 370,000). Since then, the CEDIM Forensic Analysis Group (a team of seismologists, engineers, sociologists and meteorologists) and www.earthquake-report.com have reported on and analysed the Van event. In addition, many damaging aftershocks occurring after the main event were analysed, including a major aftershock centered on Van-Edremit on November 9th, 2011, which caused much additional loss. The province of Van has around 1.035 million people as of the last census. It is one of the poorest provinces in Turkey, with much inequality between the rural and urban centers and an average HDI (Human Development Index) around that of Bhutan or Congo. The earthquakes are estimated to have caused 604 deaths (23 October) and 40 deaths (9 November), mostly due to falling debris and house collapse. In addition, between 1 billion and 4 billion TRY (approx. 555 million to 2.2 billion USD) is estimated in total economic losses. This represents around 17 to 66% of the provincial GDP of Van Province (approx. 3.3 billion USD as of 2011). From the CATDAT Damaging Earthquakes Database, major earthquakes such as this one have occurred before: in the year 1111, an earthquake of magnitude around 6.5-7 caused major damage; in 1646 or 1648, Van was again struck by a M6.7 quake killing around 2,000 people; in 1881, a M6.3 earthquake near Van killed 95 people; in 1941, a M5.9 earthquake affected Ercis and Van, killing between 190 and 430 people; and 1945-1946 as well as 1972 again brought damaging and casualty-bearing earthquakes to the Van province. In 1976, the Van-Muradiye earthquake struck the border region with a M7, killing around 3,840 people and leaving around 51,000 people homeless. Key immediate lessons from similar historic

  9. Estimation of the occurrence rate of strong earthquakes based on hidden semi-Markov models

    NASA Astrophysics Data System (ADS)

    Votsi, I.; Limnios, N.; Tsaklidis, G.; Papadimitriou, E.

    2012-04-01

    The present paper applies hidden semi-Markov models (HSMMs) in an attempt to reveal key features of earthquake generation associated with the actual stress field, which is not accessible to direct observation. These models generalize hidden Markov models by allowing the hidden process to form a semi-Markov chain. Considering that the states of the models correspond to levels of the actual stress field, the stress field level at the occurrence time of each strong event is revealed. The dataset concerns a well-catalogued, seismically active region incorporating a variety of tectonic styles. More specifically, the models are applied to Greece and its surrounding lands, using a complete data sample of strong (M ≥ 6.5) earthquakes that occurred in the study area from 1845 to the present. The earthquakes are grouped according to their magnitudes, and the cases of two and three magnitude ranges, with a corresponding number of states, are examined. The parameters of the HSMMs are estimated and their confidence intervals are calculated based on their asymptotic behavior. The rate of earthquake occurrence is introduced through the proposed HSMMs and its maximum likelihood estimator is calculated. The asymptotic properties of the estimator are studied, including uniform strong consistency and asymptotic normality, and the confidence interval for the proposed estimator is given. We assume the state spaces of both the observable and the hidden process to be finite, the hidden Markov chain to be homogeneous and stationary, and the observations to be conditionally independent. The hidden states at the occurrence time of each strong event are revealed, and the rate of occurrence of an anticipated earthquake is estimated on the basis of the proposed HSMMs. Moreover, the mean time to the first occurrence of a strong anticipated earthquake is estimated and its confidence interval is calculated.
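
    As a simplified stand-in for the semi-Markov machinery, a two-state Gaussian hidden Markov model over inter-event times conveys the idea of hidden stress-level states. This sketch assumes the hmmlearn package and synthetic data, and it deliberately drops the semi-Markov sojourn-time generality that is the paper's actual contribution:

        import numpy as np
        from hmmlearn import hmm

        # Synthetic inter-event times (years) for two stress regimes, as a
        # stand-in for a real strong-earthquake catalog.
        rng = np.random.default_rng(6)
        times = np.concatenate([rng.exponential(5.0, 40),    # "high stress"
                                rng.exponential(15.0, 40)])  # "low stress"
        X = times.reshape(-1, 1)

        # Two hidden states standing for two stress-field levels.
        model = hmm.GaussianHMM(n_components=2, n_iter=200, random_state=0)
        model.fit(X)
        states = model.predict(X)

        # Occurrence rate per hidden state ~ inverse mean inter-event time.
        for s in range(2):
            print(s, float(1.0 / X[states == s].mean()))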

  10. Using Modified Mercalli Intensities to estimate acceleration response spectra for the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Boatwright, J.; Bundock, H.; Seekins, L.C.

    2006-01-01

    We derive and test relations between the Modified Mercalli Intensity (MMI) and the pseudo-acceleration response spectra at 1.0 and 0.3 s - SA(1.0 s) and SA(0.3 s) - in order to map response spectral ordinates for the 1906 San Francisco earthquake. Recent analyses of intensity have shown that MMI ≥ 6 correlates both with peak ground velocity and with response spectra for periods from 0.5 to 3.0 s. We use these recent results to derive a linear relation between MMI and log SA(1.0 s), and we refine this relation by comparing the SA(1.0 s) estimated from Boatwright and Bundock's (2005) MMI map for the 1906 earthquake to the SA(1.0 s) calculated from recordings of the 1989 Loma Prieta earthquake. South of San Jose, the intensity distributions for the 1906 and 1989 earthquakes are remarkably similar, despite the difference in magnitude and rupture extent between the two events. We use recent strong motion regressions to derive a relation between SA(1.0 s) and SA(0.3 s) for a M7.8 strike-slip earthquake that depends on soil type, acceleration level, and source distance. We test this relation by comparing SA(0.3 s) estimated for the 1906 earthquake to SA(0.3 s) calculated from recordings of both the 1989 Loma Prieta and 1994 Northridge earthquakes, as functions of distance from the fault. © 2006, Earthquake Engineering Research Institute.

  11. Estimating convective energy losses from solar central receivers

    SciTech Connect

    Siebers, D L; Kraabel, J S

    1984-04-01

    This report outlines a method for estimating the total convective energy loss from a receiver of a solar central receiver power plant. Two types of receivers are considered in detail: a cylindrical, external-type receiver and a cavity-type receiver. The method is intended to provide the designer with a tool for estimating the total convective energy loss that is based on current knowledge of convective heat transfer from receivers to the environment and that is adaptable to new information as it becomes available. The current knowledge consists of information from two recent large-scale experiments, as well as information already in the literature. Also outlined is a method for estimating the uncertainty in the convective loss estimates. Sample estimations of the total convective energy loss and the uncertainties in those convective energy loss estimates for the external receiver of the 10 MWe Solar Thermal Central Receiver Plant (Barstow, California) and the cavity receiver of the International Energy Agency Small Solar Power Systems Project (Almeria, Spain) are included in the appendices.

  12. Estimating the Probability of Earthquake-Induced Landslides

    NASA Astrophysics Data System (ADS)

    McRae, M. E.; Christman, M. C.; Soller, D. R.; Sutter, J. F.

    2001-12-01

    The development of a regionally applicable, predictive model for earthquake-triggered landslides is needed to improve mitigation decisions at the community level. The distribution of landslides triggered by the 1994 Northridge earthquake in the Oat Mountain and Simi Valley quadrangles of southern California provided an inventory of failures against which to evaluate the significance of a variety of physical variables in probabilistic models of static slope stability. Through a cooperative project, the California Division of Mines and Geology provided 10-meter resolution data on elevation, slope angle, coincidence of bedding plane and topographic slope, distribution of pre-Northridge landslides, internal friction angle and cohesive strength of individual geologic units. Hydrologic factors were not evaluated since failures in the study area were dominated by shallow, disrupted landslides in dry materials. Previous studies indicate that 10-meter digital elevation data is required to properly characterize the short, steep slopes on which many earthquake-induced landslides occur. However, to explore the robustness of the model at different spatial resolutions, models were developed at the 10, 50, and 100-meter resolution using classification and regression tree (CART) analysis and logistic regression techniques. Multiple resampling algorithms were tested for each variable in order to observe how resampling affects the statistical properties of each grid, and how relationships between variables within the model change with increasing resolution. Various transformations of the independent variables were used to see which had the strongest relationship with the probability of failure. These transformations were based on deterministic relationships in the factor of safety equation. Preliminary results were similar for all spatial scales. Topographic variables dominate the predictive capability of the models. The distribution of prior landslides and the coincidence of slope
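
    Logistic regression of failure probability on terrain variables, one of the techniques named above, can be sketched as follows; the coefficients and synthetic data are invented, not the Northridge inventory:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n = 5000
        slope = rng.uniform(0.0, 60.0, n)   # slope angle (degrees)
        prior = rng.integers(0, 2, n)       # prior landslide present (0/1)

        # Synthetic truth: steeper slopes and prior failures raise P(failure)
        logit = -6.0 + 0.12 * slope + 1.5 * prior
        failed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        X = np.column_stack([slope, prior])
        model = LogisticRegression(max_iter=1000).fit(X, failed)

        # P(failure) for a 35-degree cell with a mapped prior landslide
        print(model.predict_proba([[35.0, 1.0]])[0, 1])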

  13. LOSS ESTIMATE FOR ITER ECH TRANSMISSION LINE INCLUDING MULTIMODE PROPAGATION

    SciTech Connect

    Shapiro, Michael; Bigelow, Tim S; Caughman, John B; Rasmussen, David A

    2010-01-01

    The ITER electron cyclotron heating (ECH) transmission lines (TLs) are 63.5-mm-diam corrugated waveguides that will each carry 1 MW of power at 170 GHz. The TL is defined here as the corrugated waveguide system connecting the gyrotron mirror optics unit (MOU) to the entrance of the ECH launcher, and it includes miter bends and other corrugated waveguide components. The losses on the ITER TL have been calculated for four possible cases corresponding to HE(11) mode purity at the input of the TL of 100, 97, 90, and 80%. The losses due to coupling, ohmic, and mode conversion loss are evaluated in detail using a numerical code and analytical approaches. Estimates of the calorimetric loss on the line show that the output power is reduced by about 5 ± 1% because of ohmic loss in each of the four cases. Estimates of the mode conversion loss show that the fraction of output power in the HE(11) mode is ~3% smaller than the fraction of input power in the HE(11) mode. High output mode purity therefore can be achieved only with significantly higher input mode purity. Combining both ohmic and mode conversion loss, the efficiency of the TL from the gyrotron MOU to the ECH launcher can be roughly estimated in theory as 92% times the fraction of input power in the HE(11) mode.

  14. The importance of in-situ observations for rapid loss estimates in the Euro-Med region

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Mazet Roux, G.; Gilles, S.

    2009-04-01

    A major (M>7) earthquake occurring in a densely populated area will inevitably cause significant damage, and, generally speaking, the poorer the country, the higher the number of fatalities. It was clear to any earthquake monitoring agency that the M7.8 Wenchuan earthquake in May 2008 was a disaster as soon as its magnitude and location had been estimated. However, the loss estimate for a moderate to strong earthquake (M5 to M6) occurring close to an urban area is much trickier, because the losses result from the convolution of many parameters (location, magnitude, depth, directivity, seismic attenuation, site effects, building vulnerability, distribution of the population at the time of the event…) which are either affected by non-negligible uncertainties or poorly constrained, at least at a global scale. Consider just one of these parameters, the epicentral location: in this magnitude range, the characteristic size of the potentially damaged area is comparable to the typical epicentral location uncertainty obtained in real time, i.e. 10 to 15 km. It is then not possible to discriminate in real time between an earthquake located right below a town, which could cause significant damage, and a location 15 km away, whose impact would be much lower. Clearly, even if the uncertainties affecting each of the parameters are properly taken into account, the resulting loss scenarios for such earthquakes will range from no impact to very significant impact, and the results will not be of much use. The way to reduce the uncertainties on the loss estimates in such cases is to collect in-situ information on the local shaking level and/or on the actual damage at a number of localities. In areas of low seismic hazard, the cost of installing dense accelerometric networks is, in practice, too high, and the only remaining solution is to rapidly collect observations of the damage. That is what the EMSC has been developing for the last few years by involving the Citizen in

  15. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    SciTech Connect

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional Mo or regional Mw. Conceptually, this method uses an earthquake source model along with an attenuation model and geometrical spreading which accounts for the propagation to utilize regional phase amplitudes of any phase and frequency. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined with sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine in detail several events to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional Mo to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller events. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find magnitude estimates to be more robust than typical magnitudes and existing regional methods and might be tuned further to improve upon them. The method yields a more meaningful quantity of seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
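
    Once amplitudes have been corrected to a seismic moment estimate, the conversion to Mw uses the standard Hanks and Kanamori (1979) relation; the per-phase moments below are invented placeholders, not values from the paper:

        from math import log10

        def mw_from_moment(m0):
            # Hanks & Kanamori (1979): Mw = (2/3)*(log10(M0) - 9.1), M0 in N*m
            return (2.0 / 3.0) * (log10(m0) - 9.1)

        # Average per-phase moment estimates first, then convert once,
        # as the method advocates (illustrative values, N*m).
        moments = [3.2e17, 2.8e17, 3.6e17]
        print(round(mw_from_moment(sum(moments) / len(moments)), 2))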

  16. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  17. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  18. Modified Mercalli Intensity for scenario earthquakes in Evansville, Indiana

    USGS Publications Warehouse

    Cramer, Chris; Haase, Jennifer; Boyd, Oliver

    2012-01-01

    Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the fact that Evansville is close to the Wabash Valley and New Madrid seismic zones, there is concern about the hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake. Earthquake-hazard maps provide one way of conveying such estimates of strong ground shaking and will help the region prepare for future earthquakes and reduce earthquake-caused losses.

  19. Rapid Estimation of Macroseismic Intensity for On-site Earthquake Early Warning in Italy from Early Radiated Energy

    NASA Astrophysics Data System (ADS)

    Emolo, A.; Zollo, A.; Brondi, P.; Picozzi, M.; Mucciarelli, M.

    2015-12-01

    Earthquake Early Warning Systems (EEWS) are effective tools for risk mitigation in active seismic regions. Recently, a feasibility study of a nation-wide earthquake early warning system was conducted for Italy using the RAN network and the EEW software platform PRESTo. This work showed that reliable estimates of magnitude and epicentral location would be available within 3-4 seconds after the first P-wave arrival. On the other hand, given the RAN's density, a regional EEWS approach would result in a blind zone (BZ) of 25-30 km on average. A BZ of this size would provide lead times greater than zero only for events of magnitude larger than 6.5. Considering that in Italy smaller events are also capable of generating great losses in both human and economic terms, as dramatically experienced during the recent 2009 L'Aquila (ML 5.9) and 2012 Emilia (ML 5.9) earthquakes, it has become urgent to develop and test on-site approaches. The present study focuses on the development of a new on-site EEW methodology for estimating the macroseismic intensity at a target site or area. In this analysis we used a few thousand accelerometric traces recorded by the RAN for the largest earthquakes (ML>4) that occurred in Italy in the period 1997-2013. The work centers on the integral EW parameter Squared Velocity Integral (IV2) and its capability to predict the peak ground velocity (PGV) and the Housner Intensity (IH); from these we parameterized a new relation between IV2 and macroseismic intensity. To assess the performance of the developed on-site EEW relation, we used data from the largest events that occurred in Italy in the last 6 years recorded by the Osservatorio Sismico delle Strutture, as well as recordings of moderate earthquakes reported in the INGV strong-motion data. The results show that the macroseismic intensity values predicted by IV2 and those estimated from PGV and IH are in good agreement.
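    A sketch of the integral parameter itself, under simple assumptions (rectangle-rule integration, a fixed 3-s P window, and placeholder regression constants A and B; the study calibrates the IV2-intensity relation through PGV and Housner intensity):

      import numpy as np

      def iv2(accel, dt, window_s=3.0):
          """IV2: integral of velocity squared over the early P-wave window."""
          a = np.asarray(accel[: int(window_s / dt)], dtype=float)
          v = np.cumsum(a) * dt        # acceleration -> velocity
          v -= v.mean()                # crude baseline correction
          return np.sum(v ** 2) * dt

      A, B = 10.0, 1.2                 # assumed regression constants

      def predicted_intensity(iv2_value):
          return A + B * np.log10(iv2_value)

      # usage with a synthetic 100-Hz record
      acc = np.random.default_rng(1).normal(0.0, 0.05, 1000)
      print(f"predicted intensity: {predicted_intensity(iv2(acc, 0.01)):.1f}")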

  20. Strong Earthquake Motion Estimates for Three Sites on the U.C. Riverside Campus

    SciTech Connect

    Archuleta, R.; Elgamal, A.; Heuze, F.; Lai, T.; Lavalle, D.; Lawrence, B.; Liu, P.C.; Matesic, L.; Park, S.; Riemar, M.; Steidl, J.; Vucetic, M.; Wagoner, J.; Yang, Z.

    2000-11-01

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling, geophysical

  1. Strong earthquake motion estimates for three sites on the U.C. San Diego campus

    SciTech Connect

    Day, S; Doroudian, M; Elgamal, A; Gonzales, S; Heuze, F; Lai, T; Minster, B; Oglesby, D; Riemer, M; Vernon, F; Vucetic, M; Wagoner, J; Yang, Z

    2002-05-07

    The approach of the Campus Earthquake Program (CEP) is to combine the substantial expertise that exists within the UC system in geology, seismology, and geotechnical engineering, to estimate the earthquake strong motion exposure of UC facilities. These estimates draw upon recent advances in hazard assessment, seismic wave propagation modeling in rocks and soils, and dynamic soil testing. The UC campuses currently chosen for application of our integrated methodology are Riverside, San Diego, and Santa Barbara. The procedure starts with the identification of possible earthquake sources in the region and the determination of the most critical fault(s) related to earthquake exposure of the campus. Combined geological, geophysical, and geotechnical studies are then conducted to characterize each campus with specific focus on the location of particular target buildings of special interest to the campus administrators. We drill, sample, and geophysically log deep boreholes next to the target structure, to provide direct in-situ measurements of subsurface material properties, and to install uphole and downhole 3-component seismic sensors capable of recording both weak and strong motions. The boreholes provide access below the soil layers, to deeper materials that have relatively high seismic shear-wave velocities. Analyses of conjugate downhole and uphole records provide a basis for optimizing the representation of the low-strain response of the sites. Earthquake rupture scenarios of identified causative faults are combined with the earthquake records and with nonlinear soil models to provide site-specific estimates of strong motions at the selected target locations. The predicted ground motions are shared with the UC consultants, so that they can be used as input to the dynamic analysis of the buildings. Thus, for each campus targeted by the CEP project, the strong motion studies consist of two phases, Phase 1--initial source and site characterization, drilling

  2. Lessons on Seismic Hazard Estimation from the 2003 Bingol, Turkey Earthquake

    NASA Astrophysics Data System (ADS)

    Nalbant, S. S.; Steacy, S.; McCloskey, J.

    2003-12-01

    In a 2002 paper the stress state along the East Anatolian Fault Zone (EAFZ) was estimated by adding long-term tectonic loading to the static stressing effect of a series of large historical earthquakes. The results clearly indicated two areas of particular concern. The first extended along the EAFZ between the cities of Kahraman Maras and Malatya, and the second along the trend of the EAFZ between the cities of Elazig and Bingol. The Bingol (M6.4, 1 May 2003) earthquake occurred within this second area with a focal mechanism consistent with left-lateral rupture of a buried segment of the EAFZ, prompting suggestions that this represented a success for the idea of using Coulomb stress modelling to assess seismic hazard. This success, however, depended on the confirmation of the orientation of the earthquake fault; in the event, and in the absence of surface ruptures, aftershock distributions unambiguously showed that the event was a right-lateral failure on an unmapped structure conjugate to the EAFZ. The Bingol earthquake was, therefore, not promoted by the stress field modelled in the 2002 study. Here we reflect on the lessons learned from this case. We identify three possible reasons for the discrepancy between the calculations and the occurrence of the Bingol earthquake. First, historical earthquakes used in the 2002 study may have been incorrectly modelled in either size or location. Second, earthquakes not included in the study, due to either their size or occurrence time, may have had a significant effect on the stress field. Or, finally, the secular stress used to load the faults was inappropriate. We argue that an integrated seismic hazard program has the best chance of success through a combination of historical seismology guided and constrained by structural geology, directed paleoseismology, and stress modelling informed by detailed GPS data.
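    For reference, the quantity at the heart of such studies can be written as a one-line worked equation: the Coulomb failure stress change dCFS = d_tau + mu' * d_sigma_n, with the shear stress change resolved in the receiver fault's slip direction and the normal stress change positive for unclamping. The values below are purely illustrative.

      def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
          """Positive dCFS moves the receiver fault toward failure."""
          return d_tau_mpa + mu_eff * d_sigma_n_mpa

      # e.g. 0.05 MPa of shear loading plus 0.02 MPa of unclamping:
      print(coulomb_stress_change(0.05, 0.02))   # 0.058 MPa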

  3. A hierarchical Bayesian approach for earthquake location and data uncertainty estimation in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Arroucau, Pierre; Custódio, Susana

    2015-04-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Yet those uncertainties are not always known precisely, and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth and origin time) are typically inverted for using seismic phase arrival times, but quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters but also the P- and S-wave arrival time uncertainties are inverted for, hence allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver that can take interfaces into account, so direct, reflected and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.
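    The hierarchical idea can be illustrated with a deliberately simplified sampler: a homogeneous velocity model (the paper uses the Fast Marching Method in 3-D heterogeneous media) and P picks only, with a Metropolis random walk over the hypocenter (x, y, z, t0) and the pick uncertainty sigma, so the data variance is inferred rather than fixed.

      import numpy as np

      rng = np.random.default_rng(42)
      V = 6.0                                        # km/s, assumed constant
      stations = rng.uniform(-50, 50, size=(12, 3)) * [1, 1, 0]  # at the surface
      src_true = np.array([5.0, -8.0, 10.0, 0.0])    # x, y, z (km), t0 (s)

      def tt(src, sta):
          return src[3] + np.linalg.norm(sta - src[:3], axis=1) / V

      t_obs = tt(src_true, stations) + rng.normal(0, 0.15, len(stations))

      def log_post(src, sigma):
          if sigma <= 0 or src[2] < 0:
              return -np.inf
          r = t_obs - tt(src, stations)
          # Gaussian likelihood with unknown sigma; flat priors for the sketch.
          return -len(r) * np.log(sigma) - 0.5 * np.sum(r ** 2) / sigma ** 2

      src, sigma = np.zeros(4), 1.0
      lp, samples = log_post(src, sigma), []
      for _ in range(20000):
          src_p = src + rng.normal(0, 0.5, 4)
          sigma_p = abs(sigma + rng.normal(0, 0.05))
          lp_p = log_post(src_p, sigma_p)
          if np.log(rng.uniform()) < lp_p - lp:      # Metropolis acceptance
              src, sigma, lp = src_p, sigma_p, lp_p
          samples.append(np.append(src, sigma))

      post = np.array(samples[5000:])                # drop burn-in
      print("posterior mean (x, y, z, t0, sigma):", post.mean(axis=0).round(2))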

  4. Toward reliable automated estimates of earthquake source properties from body wave spectra

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Ben-Zion, Yehuda

    2016-06-01

    We develop a two-stage methodology for automated estimation of earthquake source properties from body wave spectra. An automated picking algorithm is used to window and calculate spectra for both P and S phases. Empirical Green's functions are stacked to minimize nongeneric source effects such as directivity and are used to deconvolve the spectra of target earthquakes for analysis. In the first stage, window lengths and frequency ranges are defined automatically from the event magnitude and used to get preliminary estimates of the P and S corner frequencies of the target event. In the second stage, the preliminary corner frequencies are used to update various parameters to increase the amount of data and overall quality of the deconvolved spectral ratios (target event over stacked Empirical Green's function). The obtained spectral ratios are used to estimate the corner frequencies, strain/stress drops, radiated seismic energy, apparent stress, and the extent of directivity for both P and S waves. The technique is applied to data generated by five small to moderate earthquakes in southern California at hundreds of stations. Four of the five earthquakes are found to have significant directivity. The developed automated procedure is suitable for systematic processing of large seismic waveform data sets with no user involvement.
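    The second-stage fit can be sketched under the standard omega-square (Brune) source assumption: the deconvolved ratio of the target spectrum to the stacked EGF spectrum is fit for a moment ratio and two corner frequencies, and the target corner then yields a stress drop. The spectral-ratio form, constants, and synthetic data below are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def brune_ratio(f, moment_ratio, fc_target, fc_egf):
          return moment_ratio * (1 + (f / fc_egf) ** 2) / (1 + (f / fc_target) ** 2)

      f = np.logspace(-0.5, 1.5, 80)                     # ~0.3-30 Hz
      truth = (200.0, 1.2, 12.0)                         # ratio, fc_t, fc_egf (Hz)
      noise = np.exp(np.random.default_rng(3).normal(0, 0.1, f.size))
      ratio_obs = brune_ratio(f, *truth) * noise

      popt, _ = curve_fit(brune_ratio, f, ratio_obs, p0=(100.0, 1.0, 10.0))
      fc_t = popt[1]
      beta, m0 = 3500.0, 1.1e15                          # assumed S velocity (m/s), M0 (N*m)
      r = 0.37 * beta / fc_t                             # Brune source radius (m)
      stress_drop_mpa = 7.0 * m0 / (16.0 * r ** 3) / 1e6
      print(f"fc = {fc_t:.2f} Hz, stress drop ~ {stress_drop_mpa:.2f} MPa")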

  5. A Hierarchical Bayesian Approach for Earthquake Location and Data Uncertainty Estimation in 3D Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Arroucau, P.; Custodio, S.

    2014-12-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Yet those uncertainties are not always known precisely, and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth and origin time) are typically inverted for using seismic phase arrival times, but quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters but also the P- and S-wave arrival time uncertainties are inverted for, hence allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver that can take interfaces into account, so direct, reflected and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.

  6. Re-estimating the epicenter of the 1927 Jericho earthquake using spatial distribution of intensity data

    NASA Astrophysics Data System (ADS)

    Zohar, Motti; Marco, Shmuel

    2012-07-01

    We present a new approach for re-estimating the epicenter of a historical earthquake using the spatial distribution of intensity data. We use macroseismic data related to the 1927 Jericho earthquake, since this is the first strong earthquake in the region recorded by modern seismographs and it is also well documented by historical evidence and reports. The epicenter is located in two sequential steps: (1) correction of previously evaluated seismic intensities in accordance with local site attributes: construction quality, topographic slope, groundwater level, and surface geology; (2) spatial correlation of these intensities with a logarithmic variant of the epicentral distance. The resulting location (approximately 35.5°/31.8°) is consistent with the seismogram-based locations calculated by Avni et al. (2002) and by Ben Menahem et al. (1976), within a spatial error of 50 km. The proposed method, based mainly upon spatial analysis of intensity data, offers an approach complementary to the former ones.

  7. Earthquake.

    PubMed

    Cowen, A R; Denney, J P

    1994-04-01

    On January 25, 1 week after the most devastating earthquake in Los Angeles history, the Southern California Hospital Council released the following status report: 928 patients evacuated from damaged hospitals. 805 beds available (136 critical, 669 noncritical). 7,757 patients treated/released from EDs. 1,496 patients treated/admitted to hospitals. 61 dead. 9,309 casualties. Where do we go from here? We are still waiting for the "big one." We'll do our best to be ready when Mother Nature shakes, rattles and rolls. The efforts of Los Angeles City Fire Chief Donald O. Manning cannot be overstated. He maintained department command of this major disaster and is directly responsible for implementing the fire department's Disaster Preparedness Division in 1987. Through the chief's leadership and ability to forecast consequences, the city of Los Angeles was better prepared than ever to cope with this horrendous earthquake. We also pay tribute to the men and women who are out there each day, where "the rubber meets the road." PMID:10133439

  8. A spatially explicit estimate of avoided forest loss.

    PubMed

    Honey-Rosés, Jordi; Baylis, Kathy; Ramírez, M Isabel

    2011-10-01

    With the potential expansion of forest conservation programs spurred by climate-change agreements, there is a need to measure the extent to which such programs achieve their intended results. Conventional methods for evaluating conservation impact tend to be biased because they do not compare like areas or account for spatial relations. We assessed the effect of a conservation initiative that combined designation of protected areas with payments for environmental services to conserve overwintering habitat for the monarch butterfly (Danaus plexippus) in Mexico. To do so, we used a spatial-matching estimator that matches covariates among polygons and their neighbors. We measured avoided forest loss (avoided disturbance and deforestation) by comparing forest cover on protected and unprotected lands that were similar in terms of accessibility, governance, and forest type. Whereas conventional estimates of avoided forest loss suggest that conservation initiatives did not protect forest cover, we found evidence that the conservation measures are preserving forest cover. We found that the conservation measures protected between 200 ha and 710 ha (3-16%) of forest that is high-quality habitat for monarch butterflies, but had a smaller effect on total forest cover, preserving between 0 ha and 200 ha (0-2.5%) of forest with canopy cover >70%. We suggest that future estimates of avoided forest loss be analyzed spatially to account for how forest loss occurs across the landscape. Given the forthcoming demand from donors and carbon financiers for estimates of avoided forest loss, we anticipate our methods and results will contribute to future studies that estimate the outcome of conservation efforts. PMID:21902720
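    The matching logic can be reduced to a small sketch (plain nearest-neighbor covariate matching, not the full spatial-matching estimator of the paper, which also matches neighbors): each protected unit is paired with its most similar unprotected unit in standardized covariate space, and the treatment effect is the mean difference in forest loss across pairs.

      import numpy as np

      rng = np.random.default_rng(7)
      # covariates: accessibility, slope, initial canopy cover (all synthetic)
      X_treat = rng.normal(0.0, 1.0, size=(30, 3))
      X_ctrl = rng.normal(0.3, 1.0, size=(120, 3))
      loss_treat = rng.normal(0.02, 0.01, 30)      # fractional forest loss
      loss_ctrl = rng.normal(0.06, 0.02, 120)

      def matched_effect(Xt, Xc, yt, yc):
          mu, sd = Xc.mean(0), Xc.std(0)
          Zt, Zc = (Xt - mu) / sd, (Xc - mu) / sd  # standardize covariates
          idx = np.argmin(((Zt[:, None, :] - Zc[None, :, :]) ** 2).sum(-1), axis=1)
          return (yt - yc[idx]).mean()             # negative => avoided loss

      print(f"effect on forest loss: {matched_effect(X_treat, X_ctrl, loss_treat, loss_ctrl):+.3f}")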

  9. Strong Earthquake Motion Estimates for the UCSB Campus, and Related Response of the Engineering 1 Building

    SciTech Connect

    Archuleta, R.; Bonilla, F.; Doroudian, M.; Elgamal, A.; Hueze, F.

    2000-06-06

    This is the second report on the UC/CLC Campus Earthquake Program (CEP), concerning the estimation of exposure of the U.C. Santa Barbara campus to strong earthquake motions (Phase 2 study). The main results of Phase 1 are summarized in the current report. This document describes the studies which resulted in site-specific strong motion estimates for the Engineering I site, and discusses the potential impact of these motions on the building. The main elements of Phase 2 are: (1) determining that a M 6.8 earthquake on the North Channel-Pitas Point (NCPP) fault is the largest threat to the campus. Its recurrence interval is estimated at 350 to 525 years; (2) recording earthquakes from that fault on March 23, 1998 (M 3.2) and May 14, 1999 (M 3.2) at the new UCSB seismic station; (3) using these recordings as empirical Green's functions (EGF) in scenario earthquake simulations which provided strong motion estimates (seismic syntheses) at a depth of 74 m under the Engineering I site; 240 such simulations were performed, each with the same seismic moment, but giving a broad range of motions that were analyzed for their mean and standard deviation; (4) laboratory testing, at U.C. Berkeley and U.C. Los Angeles, of soil samples obtained from drilling at the UCSB station site, to determine their response to earthquake-type loading; (5) performing nonlinear soil dynamic calculations, using the soil properties determined in-situ and in the laboratory, to calculate the surface strong motions resulting from the seismic syntheses at depth; (6) comparing these CEP-generated strong motion estimates to acceleration spectra based on the application of state-of-practice methods - the IBC 2000 code, UBC 97 code and Probabilistic Seismic Hazard Analysis (PSHA), this comparison will be used to formulate design-basis spectra for future buildings and retrofits at UCSB; and (7) comparing the response of the Engineering I building to the CEP ground motion estimates and to the design

  10. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
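    The proposed scaling reduces to a one-parameter calibration, sketched here with synthetic data: with the exponent fixed at its theoretical value of 2, only the offset C in M = 2*log10(Top) + C needs estimating against catalog Mw.

      import numpy as np

      top_s = np.array([2.5, 4.0, 7.9, 14.1, 25.1])   # s, delay of peak amplitude
      mw = np.array([5.3, 5.7, 6.3, 6.8, 7.3])        # catalog moment magnitudes

      C = np.mean(mw - 2.0 * np.log10(top_s))         # least-squares offset

      def m_top(top_seconds):
          return 2.0 * np.log10(top_seconds) + C

      print(f"C = {C:.2f}; a Top of 60 s maps to M {m_top(60.0):.1f}")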

  11. Estimating Phosphorus Loss in Runoff from Manure and Fertilizer for a Phosphorus Loss Quantification Tool

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-point source pollution of fresh waters by phosphorus (P) is a concern because it contributes to accelerated eutrophication. Qualitative P Indexes that estimate the risk of field-scale P loss have been developed in the USA and Europe. However, given the state of the science concerning agricultura...

  12. Estimation of postfire nutrient loss in the Florida everglades.

    PubMed

    Qian, Y; Miao, S L; Gu, B; Li, Y C

    2009-01-01

    Postfire nutrient release into the ecosystem via plant ash is critical to the understanding of fire impacts on the environment. Factors determining a postfire nutrient budget are the prefire nutrient content of the combustible biomass, the burn temperature, and the amount of combustible biomass. Our objective was to quantitatively describe the relationships between nutrient losses (or concentrations in ash) and burning temperature in laboratory-controlled combustion and to further predict nutrient losses in field fires by applying predictive models established from laboratory data. The percentage losses of total nitrogen (TN), total carbon (TC), and material mass showed a significant linear correlation with a slope close to 1, indicating that TN or TC loss occurred predominantly through volatilization during combustion. Data obtained in laboratory experiments suggest that the losses of TN and TC, as well as the ratio of ash total phosphorus (TP) concentration to leaf TP concentration, have strong relationships with burning temperature, and these relationships can be quantitatively described by nonlinear equations. The potential use of these nonlinear models relating nutrient loss (or concentration) to temperature in predicting nutrient concentrations in field ash appears promising. During a prescribed fire in the northern Everglades, 73.1% of TP was estimated to be retained in ash while 26.9% was lost to the atmosphere, agreeing well with the distribution of TP during previously reported wildfires. The use of predictive models would greatly reduce the cost associated with measuring field ash nutrient concentrations. PMID:19643746
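    A hedged sketch of the curve-fitting step (the sigmoidal form and data below are placeholders, not the equations or measurements of the study):

      import numpy as np
      from scipy.optimize import curve_fit

      def loss_model(temp_c, l_max, t_mid, width):
          """Sigmoidal percentage nutrient loss vs. burn temperature."""
          return l_max / (1.0 + np.exp(-(temp_c - t_mid) / width))

      temp = np.array([200.0, 300.0, 400.0, 500.0, 600.0, 700.0])   # deg C
      tn_loss = np.array([5.0, 22.0, 61.0, 88.0, 95.0, 97.0])       # % TN lost

      popt, _ = curve_fit(loss_model, temp, tn_loss, p0=(100.0, 400.0, 80.0))
      print(f"predicted TN loss at 450 C: {loss_model(450.0, *popt):.1f}%")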

  13. Estimates of the magnitude of aseismic slip associated with small earthquakes near San Juan Bautista, CA

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Simons, M.

    2013-12-01

    The recurrence intervals of repeating earthquakes raise the possibility that much of the slip associated with small earthquakes is aseismic. To test this hypothesis, we examine the co- and post-seismic strain changes associated with Mc 2 to 4 earthquakes on the San Andreas Fault. We consider several thousand events that occurred near USGS strainmeter SJT, at the northern end of the creeping section. Most of the strain changes associated with these events are below the noise level on a single record, so we bin the earthquakes into 3 to 5 groups according to their magnitude. We then invert for an average time history of strain per seismic moment for each group. The seismic moment M0 is assumed to scale as 10^(β Mc), where Mc is the preferred magnitude in the NCSN catalog, and β is between 1.1 and 1.6. We try several approaches to account for the spatial pattern of strain, but we focus on the ɛE-N strain component (east extension minus north extension) because it is the most robust to model. Each of the estimated strain time series displays a step at the time of the earthquakes. The ratio of the strain step to seismic moment is larger for the bin with smaller events. If we assume that M0 ~ 10^(1.5 Mc), the ratio increases by a factor of 3 to 5 per unit decrease in Mc. This increase in strain per moment would imply that most of the slip within an hour of small events is aseismic. For instance, the aseismic moment of a Mc 2 earthquake would be at least 5 to 10 times the seismic moment. However, much of the variation in strain per seismic moment is eliminated for a smaller but still plausible value of β. If M0 ~ 10^(1.2 Mc), the strain per moment increases by about a factor of 2 per unit decrease in Mc.
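    The β-sensitivity is simple arithmetic, sketched below with an assumed observed scaling (strain steps growing as 10^(1.0 Mc), a placeholder): under β = 1.5 the strain-per-moment ratio rises by roughly a factor of 3 per unit decrease in Mc, while under β = 1.2 it rises by less than a factor of 2.

      import numpy as np

      mc = np.array([2.0, 3.0, 4.0])               # bin-center magnitudes
      strain = 10.0 ** (1.0 * mc)                  # assumed observed strain steps
      for beta in (1.2, 1.5):
          ratio = strain / 10.0 ** (beta * mc)     # strain per seismic moment
          print(f"beta={beta}:", np.round(ratio / ratio[-1], 2))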

  14. Dose estimates in a loss of lead shielding truck accident.

    SciTech Connect

    Dennis, Matthew L.; Osborn, Douglas M.; Weiner, Ruth F.; Heames, Terence John

    2009-08-01

    The radiological transportation risk and consequence program RADTRAN has recently added an updated loss of lead shielding (LOS) model to its most recent version, RADTRAN 6.0. The LOS model was used to determine dose estimates to first responders during a spent nuclear fuel transportation accident. Results varied according to the type of accident scenario, the percentage of lead slump, the distance to the shipment, and the time spent in the area. This document presents a method of creating dose estimates for first responders using RADTRAN with potential accident scenarios, which may be of particular interest in the event of high-speed accidents or fires involving cask punctures.

  15. Comparison of models for piping transmission loss estimations

    NASA Astrophysics Data System (ADS)

    Catron, Fred W.; Mann, J. Adin

    2005-09-01

    A frequency-dependent model for the transmission loss of piping is important for accurate estimates of the external radiation from pipes and the vibration level of the pipe walls. A statistical energy analysis model is used to predict the transmission loss of piping. Key terms in the model are the modal density and the radiation efficiency of the piping wall. Several available models for each are compared against measured data. In low-frequency octave bands the modal density is low, so the transmission loss model is augmented there with a mass-law model appropriate to the regime where the number of modes is small. The different models and a comparison among them will be presented.

  16. A phase coherence approach to estimating the spatial extent of earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, Jessica C.; Ampuero, Jean-Paul

    2016-04-01

    We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources---if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur at wavelengths similar to those of the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components of a single station, which see the same apparent source time functions.
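    The core measurement can be sketched under simple assumptions (a shared apparent source term plus noise standing in for real windowed records): compute the cross-spectrum of the two events at each common station, keep only its phase, and average the unit phasors across the network; values near 1 indicate point-like, co-located sources at that frequency.

      import numpy as np

      def phase_coherence(recs1, recs2):
          """recs1, recs2: (n_stations, n_samples) windows for the two events."""
          x1 = np.fft.rfft(recs1, axis=1)
          x2 = np.fft.rfft(recs2, axis=1)
          cross = x1 * np.conj(x2)
          phasor = cross / (np.abs(cross) + 1e-12)   # unit phasors
          return np.abs(phasor.mean(axis=0))         # 1 = perfectly coherent

      rng = np.random.default_rng(11)
      common = rng.normal(size=(1, 256))             # shared apparent source term
      recs_a = np.repeat(common, 8, axis=0) + 0.1 * rng.normal(size=(8, 256))
      recs_b = np.repeat(common, 8, axis=0) + 0.1 * rng.normal(size=(8, 256))
      coh = phase_coherence(recs_a, recs_b)
      print("mean low-frequency coherence:", coh[: len(coh) // 4].mean().round(2))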

  17. Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Nyst, M.

    2013-12-01

    We present a risk management perspective on earthquake recurrence on mature faults and the ways it can be modeled. The specificities of risk management relative to probabilistic seismic hazard assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence; the fact that losses at all return periods are needed (not only at discrete values of the return period); and the set-up of financial models, which sometimes requires modeling realizations of the order in which events may occur (i.e., simulated event dates matter, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against properties of three probability density functions (PDFs) widely used to characterize the inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process, and we propose that, with the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is to use a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs with their weights given by the posterior distribution. Finally, we propose to the community: (1) a general debate on how best to incorporate our knowledge (e.g., from geology, geomorphology) of plausible models and model parameters while preserving the information on what we do not know; and (2) the creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, special fluid characteristics (pH, compressibility, pressure), fault geometry, and other relevant properties, so that we can monitor whether trends emerge in terms of which model dominates in which conditions.
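    A minimal sketch of the proposed combination, with assumed weights and parameters (not values derived for New Zealand faults): the inter-event time distribution is a posterior-weighted mixture of BPT (inverse Gaussian), lognormal, and Weibull PDFs, from which conditional event probabilities follow directly.

      import numpy as np
      from scipy import stats

      mean_ri = 300.0                                 # yr, assumed mean recurrence
      pdfs = {
          "bpt": stats.invgauss(mu=0.25, scale=mean_ri / 0.25),        # CoV ~ 0.5
          "lognormal": stats.lognorm(s=0.5, scale=mean_ri / np.exp(0.125)),
          "weibull": stats.weibull_min(c=2.2, scale=mean_ri / 0.886),
      }
      weights = {"bpt": 0.5, "lognormal": 0.3, "weibull": 0.2}  # assumed posterior weights

      def mixture_cdf(t):
          return sum(w * pdfs[k].cdf(t) for k, w in weights.items())

      def conditional_prob(t_elapsed, dt=50.0):
          """P(event within dt yr | quiet for t_elapsed yr) under the mixture."""
          f0 = mixture_cdf(t_elapsed)
          return (mixture_cdf(t_elapsed + dt) - f0) / (1.0 - f0)

      print(f"50-yr conditional probability at 250 yr elapsed: {conditional_prob(250.0):.2f}")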

  18. Regional intensity attenuation models for France and the estimation of magnitude and location of historical earthquakes

    USGS Publications Warehouse

    Bakun, W.H.; Scotti, O.

    2006-01-01

    Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore.
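    The Bakun & Wentworth technique itself fits in a short sketch, with placeholder attenuation coefficients: every trial epicenter converts each intensity assignment into a site magnitude through the attenuation model, and the trial point minimizing the spread of site magnitudes gives the intensity magnitude MI and the epicenter.

      import numpy as np

      A0, A1, A2 = 1.7, 1.5, -3.0   # assumed model: I = A0 + A1*M + A2*log10(d)

      def site_magnitudes(trial_xy, sites_xy, intensities):
          d = np.linalg.norm(sites_xy - trial_xy, axis=1) + 5.0  # km; crude depth term
          return (intensities - A0 - A2 * np.log10(d)) / A1

      def bakun_wentworth(sites_xy, intensities, trial_points):
          spread = lambda g: site_magnitudes(g, sites_xy, intensities).std()
          best = min(trial_points, key=spread)
          return best, site_magnitudes(best, sites_xy, intensities).mean()

      sites = np.array([[10.0, 0.0], [0.0, 40.0], [-60.0, 10.0], [30.0, 80.0]])
      mmi = np.array([7.0, 6.0, 5.0, 4.5])
      grid = [np.array([x, y]) for x in range(-80, 81, 10) for y in range(-80, 101, 10)]
      epicenter, mi = bakun_wentworth(sites, mmi, grid)
      print("epicenter:", epicenter, f"MI = {mi:.1f}")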

  19. Teleseismic estimates of radiated seismic energy: The E/M0 discriminant for tsunami earthquakes

    NASA Astrophysics Data System (ADS)

    Newman, Andrew V.; Okal, Emile A.

    1998-11-01

    We adapt the formalism of Boatwright and Choy for the computation of radiated seismic energy from broadband records at teleseismic distances to the real-time situation when neither the depth nor the focal geometry of the source is known accurately. The analysis of a large data set of more than 500 records from 52 large, recent earthquakes shows that this procedure yields values of the estimated energy, EE, in good agreement with values computed from available source parameters, for example as published by the National Earthquake Information Center (NEIC), the average logarithmic residual being only 0.26 units. We analyze the energy-to-moment ratio by defining Θ = log10(EE/M0). For regular earthquakes, this parameter agrees well with values expected from theoretical models and from the worldwide NEIC catalogue. There is a one-to-one correspondence between values of Θ that are deficient by one full unit or more, and the so-called "tsunami earthquakes", previously identified in the literature as having exceedingly slow sources, and believed to be due to the presence of sedimentary structures in the fault zone. Our formalism can be applied to single-station measurements, and its coupling to automated real-time measurements of the seismic moment using the mantle magnitude Mm should significantly improve real-time tsunami warning.
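    The discriminant as a worked equation, with the reference level treated as an assumption (a commonly quoted expectation for regular events puts Theta near -4.9):

      import numpy as np

      THETA_EXPECTED = -4.9          # assumed reference level for regular events

      def theta(energy_j, moment_nm):
          return np.log10(energy_j / moment_nm)

      def looks_like_tsunami_earthquake(energy_j, moment_nm):
          """Deficient by a full unit or more relative to the expectation."""
          return theta(energy_j, moment_nm) <= THETA_EXPECTED - 1.0

      # e.g. a slow event radiating 1e13 J with M0 = 5e20 N*m:
      print(theta(1e13, 5e20), looks_like_tsunami_earthquake(1e13, 5e20))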

  20. Rupture process of the 1946 Nankai earthquake estimated using seismic waveforms and geodetic data

    NASA Astrophysics Data System (ADS)

    Murotani, Satoko; Shimazaki, Kunihiko; Koketsu, Kazuki

    2015-08-01

    The rupture process of the 1946 Nankai earthquake (MJMA 8.0) was estimated using seismic waveforms from teleseismic and strong motion stations together with geodetic data from leveling surveys and tide gauges. The results of joint inversion analysis showed that two areas with large slip are more confined than in previous studies. In our inversion, we assumed spatially varying strike and dip angles and depth of each subfault by fitting those to the actual complex shape of the upper surface of the Philippine Sea plate in the Nankai Trough region. As a result, we calculated the total seismic moment, M0 = 5.5 × 1021 Nm; the moment magnitude, Mw = 8.4; and a maximum slip of 5.1 m, occurring at a point south of Cape Muroto. The estimated slip distribution on the west side of the fault plane appears somewhat complicated, but it explains well the vertical deformations at Tosashimizu and in the vicinity of Inomisaki. Arguments have been made that the westernmost part slipped slowly after the earthquake over a period of days or months as an afterslip because the seismic waveforms can be largely explained without the slip in this part. However, in order to explain the displacement recorded by the tide gauge at Tosashimizu, we conclude that the westernmost part slipped simultaneously with the earthquake. Splay faulting, which was suggested in previous studies, is not required in our model to explain the seismic waveforms and geodetic data.

  1. Rapid estimation of the moment magnitude of the 2011 off the Pacific coast of Tohoku earthquake from coseismic strain steps

    NASA Astrophysics Data System (ADS)

    Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.

    2012-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in Tokai, the Kii Peninsula and Shikoku by borehole strainmeters carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, estimated from only several minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic wave data. The moment magnitude can be estimated about 6 minutes after the origin time, and 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7. On the other hand, the prompt seismic-wave-based magnitude that the Japan Meteorological Agency announced just after the earthquake occurred was 7.9. Coseismic strain steps are generally considered less reliable than seismic waves and GPS data. However, our results show that coseismic strain steps observed by carefully installed and monitored borehole strainmeters are reliable enough to determine the earthquake magnitude precisely and rapidly. Several methods are now being proposed to grasp the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunami. Our simple method using strain steps is one strong approach for rapid estimation of the magnitude of great earthquakes.

  2. Estimating the economic loss of recent North Atlantic fisheries management

    NASA Astrophysics Data System (ADS)

    Merino, Gorka; Barange, Manuel; Fernandes, Jose A.; Mullon, Christian; Cheung, William; Trenkel, Verena; Lam, Vicky

    2014-12-01

    It is accepted that the world's fisheries are generally not exploited at their biological or economic optimum. Most fisheries assessments focus on the biological capacity of fish stocks to respond to harvesting, and few have attempted to estimate the economic efficiency at which ecosystems are exploited. The latter is important, as fisheries contribute considerably to the economic development of many coastal communities. Here we estimate the overall potential economic rent for the fishing industry in the North Atlantic to be €12.85 billion, compared to current estimated profits of €0.63 billion. The difference between the potential and the net profits obtained from North Atlantic fisheries is therefore €12.22 billion. In order to increase the profits of North Atlantic fisheries to a maximum, total fish biomass would have to be rebuilt to 108 Mt (2.4 times more than at present) by reducing current total fishing effort by 53%. Stochastic simulations were undertaken to estimate the uncertainty associated with the aggregate bioeconomic model that we use, and we estimate the economic loss of North Atlantic fisheries to lie between €2.5 and €32 billion. We provide economic justification for maintaining or restoring fish stocks to above their MSY biomass levels. Our conclusions are consistent with similar global-scale studies.
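    The flavor of the aggregate reasoning can be shown with a Gordon-Schaefer surplus-production sketch (made-up parameters; the study uses a calibrated multispecies bioeconomic model): equilibrium rent as a function of effort has a maximum at lower effort and higher biomass than open-access outcomes.

      import numpy as np

      r, K = 0.4, 150.0            # growth rate (1/yr), carrying capacity (Mt)
      q = 0.002                    # catchability per unit effort
      price = 1.2                  # billion euro per Mt landed
      cost = 0.15                  # billion euro per unit effort

      effort = np.linspace(1.0, 199.0, 397)
      biomass_eq = K * (1.0 - q * effort / r)          # equilibrium biomass (Mt)
      yield_eq = q * effort * biomass_eq               # sustainable yield (Mt/yr)
      rent = price * yield_eq - cost * effort          # billion euro per yr

      i = np.argmax(rent)
      print(f"rent-maximizing effort {effort[i]:.0f}, biomass {biomass_eq[i]:.0f} Mt, "
            f"rent {rent[i]:.2f} B-euro/yr")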

  3. Comparison of three tests for estimating gastroenteral protein loss

    SciTech Connect

    Glaubitti, D.; Marx, M.; Weller, H.

    1984-01-01

    A decisive step in the diagnosis of exudative gastroenteropathy which shows a pathologically increased transfer of plasma proteins into the stomach or intestine is the measurement of fecal radioactivity after intravenous administration of radionuclide-labeled large organic compounds or of small inorganic compounds attaching themselves to plasma proteins within the patient. In 24 patients (12 men and women each) aged 40 to 66 years, the gastroenteral protein loss was estimated after intravenous injection of Cr-51 chloride, Cr-51 human serum albumin, or Fe-59 iron dextran. Each test lasted 6 days. There was an interval of 2 weeks between 2 tests. The feces were collected completely within the test period for determination of radioactivity. External probe counting over liver, spleen, right kidney, and thyroid was performed daily up to 10 days. The results obtained with Cr-51 chloride presented the largest range whereas the test with Fe-59 iron dextran exhibited both the smallest deviation from the mean value and the lowest normal range. During the tests for gastroenteral protein loss external probe counting demonstrated no distinct tendency to a more rapid radionuclide loss from liver, spleen, and kidney in the patients suffering from exudative gastroenteropathy when compared with healthy subjects. The authors conclude that the most suitable test to estimate gastroenteral protein loss is the Fe-59 iron dextran test although Fe-59 iron dextran is not available commercially and causes a higher radiation burden than the other tests do. In second place, the Cr-51 chloride test should be used, the radiopharmaceutical of which is less expensive and has no significant disadvantage in comparison with Cr-51 human serum albumin.

  4. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault conducted by Bay Area County Offices of Emergency Services; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  5. Energy Losses Estimation During Pulsed-Laser Seam Welding

    NASA Astrophysics Data System (ADS)

    Sebestova, Hana; Havelkova, Martina; Chmelickova, Hana

    2014-06-01

    The finite-element tool SYSWELD (ESI Group, Paris, France) was adapted to simulate pulsed-laser seam welding. Besides the temperature field distribution, one possible output of the welding simulation is the amount of absorbed power necessary to melt the required material volume, including energy losses. Comparing the absorbed or melting energy with the applied laser energy yields welding efficiencies. This article presents results of welding efficiency estimation based on the assimilation of both experimental and simulation output data from pulsed Nd:YAG laser bead-on-plate welding of 0.6-mm-thick AISI 304 stainless steel sheets using different beam powers.
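    The efficiency comparison reduces to an enthalpy ratio; a worked sketch with assumed property values for AISI 304 (density, specific heat, latent heat of fusion, temperature rise to melting):

      RHO, CP, LF, DT = 7900.0, 500.0, 2.6e5, 1400.0   # kg/m^3, J/(kg*K), J/kg, K

      def melting_efficiency(melt_volume_m3, laser_energy_j):
          """Fraction of pulse energy that ends up as melt enthalpy."""
          return RHO * melt_volume_m3 * (CP * DT + LF) / laser_energy_j

      # e.g. a 0.02 mm^3 fusion zone produced by a 5 J pulse:
      print(f"{melting_efficiency(0.02e-9, 5.0):.1%}")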

  6. Estimating the Loss of Crew and Loss of Mission for Crew Spacecraft

    NASA Technical Reports Server (NTRS)

    Lutomski, Michael G.

    2011-01-01

    Once the US Space Shuttle retires in 2011, the Russian Soyuz launcher and Soyuz spacecraft will comprise the only means for crew transportation to and from the International Space Station (ISS). The U.S. Government and NASA have contracted for crew transportation services to the ISS with Russia. The resulting implications for the US space program, including issues such as astronaut safety, must be carefully considered. Are the astronauts and cosmonauts safer on the Soyuz than on the Space Shuttle system? Is the Soyuz launch system more robust than the Space Shuttle? The Soyuz launcher has been in operation for over 40 years. There have been only two loss-of-life incidents and two loss-of-mission incidents. Given that the most recent incident took place in 1983, how do we determine the current reliability of the system? Do failures of unmanned Soyuz rockets impact the reliability of the currently operational man-rated launcher? Does the Soyuz exhibit characteristics that demonstrate reliability growth, and how would that be reflected in future estimates of success? NASA's next manned rocket and spacecraft development project will have to meet the agency threshold requirements set forth by NASA. The reliability targets are currently several times higher than those of the Shuttle and possibly even the Soyuz. Can these targets be compared to the reliability of the Soyuz to determine whether they are realistic and achievable? To help answer these questions, this paper explores how to estimate the reliability of the Soyuz launcher/spacecraft system, compares it to that of the Space Shuttle, and considers the potential impacts for the future of manned spaceflight. Specifically, it looks at estimating the Loss of Crew (LOC) and Loss of Mission (LOM) probabilities using historical data, reliability growth, and Probabilistic Risk Assessment techniques.
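    One hedged way to fold the flight history into a number is a Beta-Binomial update; the flight count below is an assumption for illustration, and no reliability-growth adjustment is applied:

      from scipy import stats

      failures, flights = 2, 140            # assumed crewed-flight history
      prior_a, prior_b = 0.5, 0.5           # Jeffreys prior

      posterior = stats.beta(prior_a + failures, prior_b + flights - failures)
      print(f"posterior mean P(LOC) per flight: {posterior.mean():.4f}")
      print(f"90% credible interval: ({posterior.ppf(0.05):.4f}, {posterior.ppf(0.95):.4f})")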

  7. Coseismic landsliding estimates for an Alpine Fault earthquake and the consequences for erosion of the Southern Alps, New Zealand

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Davies, T. R. H.; Wilson, T. M.; Orchiston, C.

    2016-06-01

    Landsliding resulting from large earthquakes in mountainous terrain presents a substantial hazard and plays an important role in the evolution of mountain ranges. However, estimating the scale and effect of landsliding from an individual earthquake prior to its occurrence is difficult. This study presents first-order estimates of the scale and effects of coseismic landsliding resulting from a plate boundary earthquake in the South Island of New Zealand. We model an Mw 8.0 earthquake on the Alpine Fault, which has produced large (M 7.8-8.2) earthquakes every 329 ± 68 years over the last 8 ka, with the last earthquake ~300 years ago. We suggest that such an earthquake could produce ~50,000 ± 20,000 landslides at average densities of 2-9 landslides km-2 in the area of most intense landsliding. Between 50% and 90% are expected to occur in a 7000 km2 zone between the fault and the main divide of the Southern Alps. Total landslide volume is estimated to be 0.81 +0.87/-0.55 km3. In major northern and southern river catchments, total landslide volume is equivalent to up to a century of present-day aseismic denudation measured from suspended sediment yields. This suggests that earthquakes occurring at century timescales are a major driver of erosion in these regions. In the central Southern Alps, coseismic denudation is equivalent to less than a decade of aseismic denudation, suggesting that precipitation and uplift dominate denudation processes. Nevertheless, the estimated scale of coseismic landsliding is considered a substantial hazard throughout the entire Southern Alps and is likely to present a substantial issue for post-earthquake response and recovery.

  8. THE MISSING EARTHQUAKES OF HUMBOLDT COUNTY: RECONCILING RECURRENCE INTERVAL ESTIMATES, SOUTHERN CASCADIA SUBDUCTION ZONE

    NASA Astrophysics Data System (ADS)

    Patton, J. R.; Leroy, T. H.

    2009-12-01

    Earthquake and tsunami hazard for northwestern California and southern Oregon is predominantly based on estimates of recurrence for earthquakes on the Cascadia subduction zone and on upper plate thrust faults, each with unique deformation and recurrence histories. Coastal northern California is uniquely located to enable us to distinguish these different sources of seismic hazard, as the accretionary prism extends on land in this region. This region experiences ground deformation from rupture of upper plate thrust faults like the Little Salmon fault. Most of this region is thought to be above the locked zone of the megathrust, so it is subject to vertical deformation during the earthquake cycle. Secondary evidence of earthquake history is found here in the form of marsh soils that coseismically subside and commonly are overlain by estuarine mud and, rarely, tsunami sand. It is not currently known what the source of the subsidence is for this region; it may be due to upper plate rupture, megathrust rupture, or a combination of the two. Given that many earlier investigations utilized bulk peat for 14C age determinations and that these early studies were largely reconnaissance work, these studies need to be reevaluated. Recurrence interval estimates are inconsistent when comparing terrestrial (~500 years) and marine (~220 years) data sets. This inconsistency may be due to (1) different sources of archival bias in marine and terrestrial data sets and/or (2) different sources of deformation. Factors controlling successful archiving of paleoseismic data are considered as this relates to geologic setting and how that might change through time. We compile, evaluate, and rank existing paleoseismic data in order to prioritize future paleoseismic investigations. 14C ages are recalibrated and quality assessments are made for each age determination. We then evaluate geologic setting and prioritize important research locations and goals based on these existing data. Terrestrial core

  9. Estimation of Future Changes in Flood Disaster Losses

    NASA Astrophysics Data System (ADS)

    Konoshima, L.; Hirabayashi, Y.; Roobavannan, M.

    2012-12-01

    Disaster losses can be estimated from hazard intensity, exposure, and vulnerability. Many studies have addressed future economic losses from river floods, most of which are focused on Europe (Bouwer et al., 2010). Here, flood disaster losses are calculated using the output of multi-model ensembles of CMIP5 GCMs in order to estimate the changes in damage due to climate change. For the global distribution of expected future population and GDP, the ALPS scenario of RITE is used. A flood event is defined as river discharge exceeding the value with a 100-year return period. The time series of annual maximum daily discharge was fitted at each grid cell, with the distribution parameters estimated by the L-moment method (Hosking and Wallis, 1997). Both the Gumbel and the Generalized Extreme Value (GEV) distributions were tested to examine future changes in the 100-year value. Using the 100-year flood value under present conditions and the annual maximum discharge for present and future climate conditions, the area exceeding the present-day 100-year flood is calculated for each 30-year period. To estimate the economic impact of future changes in the occurrence of the 100-year flood, affected total GDP is calculated by combining the affected population with the country's GDP in areas exceeding the present-day 100-year flood value, for both present and future conditions. The 100-year flood value is fixed at its present-condition value when calculating the affected value under future conditions. To consider the effects of climatic conditions and changes in economic growth, the regions are classified by continent. Southeast Asia is divided into Japan and South Korea (No. 1) and other countries (No. 2), since the GDP and GDP growth rates of the two areas are quite different from those of other regions. Figure 1 shows the average and standard deviation (1-sigma) of the future changing ratio
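    The distribution-fitting step has a compact closed form under the Gumbel assumption (a sketch; the study also tests the full GEV): location and scale follow from the first two sample L-moments, and the 100-year value is the 0.99 quantile.

      import numpy as np

      def gumbel_100yr(annual_max):
          x = np.sort(np.asarray(annual_max, dtype=float))
          n = x.size
          b0 = x.mean()                                   # probability-weighted moments
          b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
          l1, l2 = b0, 2.0 * b1 - b0                      # sample L-moments
          scale = l2 / np.log(2.0)
          loc = l1 - 0.5772 * scale                       # Euler-Mascheroni constant
          return loc - scale * np.log(-np.log(0.99))      # 100-yr return level

      q = np.random.default_rng(5).gumbel(800.0, 150.0, size=30)   # m^3/s
      print(f"estimated 100-yr discharge: {gumbel_100yr(q):.0f} m^3/s")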

  10. Fast Estimate of Rupture Process of Large Earthquakes via Real Time Hi-net Data

    NASA Astrophysics Data System (ADS)

    Wang, D.; Kawakatsu, H.; Mori, J. J.

    2014-12-01

    We developed a real-time system based on the Hi-net seismic array that can offer fast and reliable source information, for example source extent and rupture velocity, for earthquakes that occur at distances of roughly 30°-85° with respect to the array center. We perform a continuous grid search on the Hi-net real-time data stream to identify possible source locations (following Nishida, Kawakatsu, and Obara, 2008). Earthquakes that occur outside the bright area of the array (30°-85° with respect to the array center) are ignored. Once a large seismic event is identified successfully, back-projection is implemented to trace the source propagation and energy radiation. Results from extended global GRiD-MT and real-time W-phase inversion are combined for better identification of large seismic events. The time required is mainly set by the travel time from the epicenter to the array stations, so results are available within 6 to 13 min depending on epicentral distance. This system can offer fast and robust estimates of earthquake source information, which will be useful for disaster mitigation, such as tsunami evacuation, emergency rescue, and aftershock hazard evaluation.
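    A bare-bones illustration of the back-projection step, under strong assumptions (a straight-ray 1-D travel-time function, boxcar envelopes, and a coarse trial-source grid; the real system uses the Hi-net geometry, travel-time tables and continuous data):

      import numpy as np

      rng = np.random.default_rng(9)
      dt, n = 0.1, 12000
      stations = rng.uniform(-2, 2, size=(20, 2)) + np.array([60.0, 0.0])  # deg
      src = np.array([0.5, 0.3])                 # true source, inside trial grid

      def travel_time(src_xy, sta_xy):           # assumed 8 km/s, 111 km/deg
          return np.linalg.norm(sta_xy - src_xy) * 111.0 / 8.0

      env = np.zeros((len(stations), n))         # synthetic envelopes
      for i, sta in enumerate(stations):
          k = int(travel_time(src, sta) / dt)
          env[i, k : k + 50] = 1.0
      env += 0.2 * rng.random(env.shape)

      grid = [(x, y) for x in np.arange(-1, 1.01, 0.1) for y in np.arange(-1, 1.01, 0.1)]

      def stack_power(g):                        # shift-and-stack over the grid
          shifts = (int(travel_time(np.array(g), sta) / dt) for sta in stations)
          return sum(env[i, s : s + 50].sum() for i, s in enumerate(shifts))

      print("best trial source:", np.round(max(grid, key=stack_power), 1))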

  11. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    USGS Publications Warehouse

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

    The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values were assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 reevaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw 7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora as well as the Basin and Range province are characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results that Lg attenuation in the Basin and Range province is comparable to that in California.

  12. Earthquake shaking hazard estimates and exposure changes in the conterminous United States

    USGS Publications Warehouse

    Jaiswal, Kishor S.; Petersen, Mark D.; Rukstales, Kenneth S.; Leith, William S.

    2015-01-01

    A large portion of the population of the United States lives in areas vulnerable to earthquake hazards. This investigation aims to quantify the population and infrastructure within the conterminous U.S. exposed to varying levels of earthquake ground motion by systematically analyzing the last four cycles of the U.S. Geological Survey's (USGS) National Seismic Hazard Models (published in 1996, 2002, 2008 and 2014). Using the 2013 LandScan data, we estimate the numbers of people who are exposed to potentially damaging ground motions (peak ground accelerations at or above 0.1g). At least 28 million (~9% of the total population) may experience 0.1g shaking at relatively frequent intervals (annual rate of 1 in 72 years or 50% probability of exceedance (PE) in 50 years), 57 million (~18% of the total population) may experience this level of shaking at moderately frequent intervals (annual rate of 1 in 475 years or 10% PE in 50 years), and 143 million (~46% of the total population) may experience such shaking at relatively infrequent intervals (annual rate of 1 in 2,475 years or 2% PE in 50 years). We also show that a significant number of critical infrastructure facilities are located in high earthquake-hazard areas (Modified Mercalli Intensity ≥ VII with a moderately frequent recurrence interval).
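
    The exposure tally itself reduces to masking a population grid with a hazard grid; a minimal sketch, assuming two co-registered arrays stand in for the LandScan and hazard-model rasters:

      import numpy as np

      def exposed_population(pop_grid, pga_grid, thresholds=(0.1, 0.2, 0.4)):
          """Sum the population in cells whose mapped PGA (g) meets each threshold;
          both grids must be co-registered (same cells, same order)."""
          return {g: float(pop_grid[pga_grid >= g].sum()) for g in thresholds}

      # hypothetical 3x3 grids
      pop = np.array([[1000, 0, 250], [5000, 120, 80], [300, 2200, 40]])
      pga = np.array([[0.05, 0.12, 0.31], [0.09, 0.22, 0.18], [0.41, 0.11, 0.02]])
      print(exposed_population(pop, pga))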

  13. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and greatly expand upon existing global databases, and to better understand the trends in vulnerability, exposure, and possible future impacts of historic earthquakes. In the authors' view, the lack of consistency and the errors in other frequently cited earthquake loss databases were major shortcomings that needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. Comparing the 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars) with the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes shows the increasing concern for economic loss in urban areas, and the trend should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to enable comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  14. Deep Structure and Earthquake Generating Properties in the Yamasaki Fault Zone Estimated from Dense Seismic Observation

    NASA Astrophysics Data System (ADS)

    Nishigami, K.; Shibutani, T.; Katao, H.; Yamaguchi, S.; Mamada, Y.

    2010-12-01

    We have been estimating the heterogeneous crustal structure and earthquake-generating properties in and around the Yamasaki fault zone, a left-lateral strike-slip active fault with a total length of about 80 km in southwest Japan. We deployed a dense seismic observation network composed of 32 stations with an average spacing of 5-10 km around the Yamasaki fault zone. We estimate detailed fault structure, such as fault dip and shape, segmentation, and the possible locations of asperities and rupture initiation points, as well as the generating properties of earthquakes in the fault zone, through analyses of accurate hypocenter distributions, focal mechanisms, 3-D velocity tomography, coda wave inversion, and other waveform analyses. We also deployed a linear seismic array across the fault, composed of 20 stations with about 20 m spacing, in order to delineate the fault-zone structure in more detail using seismic waves trapped inside the low-velocity zone. We also estimate the detailed resistivity structure at shallow depth in the fault zone by AMT (audio-frequency magnetotelluric) and MT surveys. In the scattering analysis of coda waves, we used 2,391 wave traces from 121 earthquakes that occurred in 2002, 2003, 2008 and 2009, recorded at 60 stations, including dense temporary and routine stations. We estimated the 3-D distribution of relative scattering coefficients along the Yamasaki fault zone. Microseismicity is high and the scattering coefficient is relatively large in the upper crust along the entire fault zone. The distribution of strong scatterers suggests that the Ohara and Hijima faults, the segments in the northwestern part of the Yamasaki fault zone, have almost vertical fault planes from the surface to a depth of about 15 km. We used seismic network data operated by universities, NIED, AIST, and JMA. This study has been carried out as a part of the project "Study on evaluation of earthquake source faults based on surveys of inland active faults" by Japan Nuclear

  15. Twitter as Information Source for Rapid Damage Estimation after Major Earthquakes

    NASA Astrophysics Data System (ADS)

    Eggert, Silke; Fohringer, Joachim

    2014-05-01

    Natural disasters like earthquakes require a fast response from local authorities. Well-trained rescue teams have to be available, equipment and technology have to be set up and ready, and information has to be directed to the right places so that headquarters can manage the operation precisely. The main goal is to reach the most affected areas in a minimum of time. But even with the best preparation for these cases, there will always be uncertainty about what really happened in the affected area. Modern geophysical sensor networks provide high-quality data. These measurements, however, only map disjoint values at their respective locations for a limited set of parameters. Using observations of witnesses is one approach to complementing measured sensor values ("humans as sensors"). These observations are increasingly disseminated via social media platforms. Such "social sensors" offer several advantages over common sensors, e.g., high mobility, high versatility of captured parameters, and rapid distribution of information. Moreover, the amount of data offered by social media platforms is quite extensive. We analyze messages distributed via Twitter after major earthquakes to get rapid information on what eye-witnesses report from the epicentral area. We use this information to (a) quickly learn about damage and losses to support fast disaster response and to (b) densify geophysical networks in areas of sparse information to gain a more detailed insight into felt intensities. We present a case study from the Mw 7.1 Philippines (Bohol) earthquake of Oct. 15, 2013. We extract Twitter messages, so-called tweets, containing one or more specified keywords from the semantic field of "earthquake" and use them for further analysis. For the time frame of Oct. 15 to Oct. 18 we obtain a database of 50,000 tweets in total, of which 2,900 are geo-localized and 470 have a photo attached. Analyses for both national level and locally for
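
    The harvesting step can be sketched as a keyword filter plus simple tallies; the keyword set, field names and data structure below are hypothetical stand-ins, not the authors' pipeline.

      from dataclasses import dataclass
      from typing import Optional, Tuple

      KEYWORDS = {"earthquake", "quake", "tremor", "lindol"}  # illustrative semantic field

      @dataclass
      class Tweet:
          text: str
          coords: Optional[Tuple[float, float]]  # (lat, lon) if geo-localized
          has_photo: bool

      def harvest(tweets):
          """Keep tweets containing a keyword; split out geo-localized and photo tweets."""
          hits = [t for t in tweets if any(k in t.text.lower() for k in KEYWORDS)]
          geo = [t for t in hits if t.coords is not None]
          photo = [t for t in hits if t.has_photo]
          return hits, geo, photo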

  16. Enhanced estimation of loss in the presence of Kerr nonlinearity

    NASA Astrophysics Data System (ADS)

    Rossi, Matteo A. C.; Albarelli, Francesco; Paris, Matteo G. A.

    2016-05-01

    We address the characterization of dissipative bosonic channels and show that estimation of the loss rate by Gaussian probes (coherent or squeezed) is improved in the presence of Kerr nonlinearity. In particular, enhancement of precision may be substantial for short interaction time, i.e., for media of moderate size, e.g., biological samples. We analyze in detail the behavior of the quantum Fisher information (QFI), and determine the values of nonlinearity maximizing the QFI as a function of the interaction time and of the parameters of the input signal. We also discuss the precision achievable by photon counting and quadrature measurement and present additional results for truncated, few-photon, probe signals. Finally, we discuss the origin of the precision enhancement, showing that it cannot be linked quantitatively to the non-Gaussianity or the nonclassicality of the interacting probe signal.

  17. Estimating conditional quantiles with the help of the pinball loss

    SciTech Connect

    Steinwart, Ingo

    2008-01-01

    Using the so-called pinball loss for estimating conditional quantiles is a well-known tool in both statistics and machine learning. So far, however, little work has been done to quantify the efficiency of this tool for non-parametric (modified) empirical risk minimization approaches. The goal of this work is to fill this gap by establishing inequalities that describe how close approximate pinball risk minimizers are to the corresponding conditional quantile. These inequalities, which hold under mild assumptions on the data-generating distribution, are then used to establish so-called variance bounds, which have recently turned out to play an important role in the statistical analysis of (modified) empirical risk minimization approaches. To illustrate the use of the established inequalities, we then use them to establish an oracle inequality for support vector machines that use the pinball loss. Here it turns out that we obtain learning rates which are optimal in a minimax sense under some standard assumptions on the regularity of the conditional quantile function.
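
    For reference, the pinball loss at level tau is L_tau(y, q) = tau*(y - q) for y >= q and (1 - tau)*(q - y) otherwise; the short numerical sketch below (on synthetic data) checks that the constant minimizing the average pinball loss is the empirical tau-quantile.

      import numpy as np

      def pinball_loss(y, q, tau):
          """Average pinball (quantile) loss of a constant prediction q at level tau."""
          r = y - q
          return np.mean(np.maximum(tau * r, (tau - 1) * r))

      y = np.random.default_rng(1).lognormal(size=1000)
      grid = np.linspace(y.min(), y.max(), 2001)
      losses = [pinball_loss(y, c, 0.9) for c in grid]
      print(grid[int(np.argmin(losses))], np.quantile(y, 0.9))  # nearly equal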

  18. A Simplified Approach to Earthquake Risk in Mainland China

    NASA Astrophysics Data System (ADS)

    Chen, Qi-Fu; Mi, Hongliang; Huang, Jing

    2005-06-01

    There are limitations in conventional earthquake loss procedures when these are applied to assess the social and economic impacts of recent disastrous earthquakes. This paper addresses the need to develop an applicable model for estimating the significantly increasing earthquake losses in mainland China. Earthquake casualties were studied first. Casualties are strongly related to earthquake strength, occurrence time (day or night) and the distribution of population in the affected area. Using data on earthquake casualties in mainland China from 1980 to 2000, we suggest a relationship between the average loss of life and earthquake magnitude. Combined with information on population density and earthquake occurrence times, we use these data to derive a further relationship between the loss of life and factors such as population density, intensity and occurrence time of the earthquake. The relationships were tested against earthquakes that occurred from 2001 to 2003. This paper also explores the possibility of using a macroeconomic indicator, here GDP (Gross Domestic Product), to roughly estimate earthquake exposure in situations where no detailed insurance or similar inventories exist, thus bypassing some problems of the conventional method.

  19. Estimation of Crustal Thickness in Nepal Himalayas Using Local and Regional Earthquake Data

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Koulakov, I.; Maksotova, G.; Raoof, J.; Kayal, J. R.; Jakovlev, A.; Vasilevsky, A.

    2014-12-01

    The variation of crustal thickness beneath the Nepal Himalayas is estimated by tomographic inversion of regional earthquake data. The Nepal Himalayas are covered by a fairly dense network and well-distributed earthquakes. Some 10864 P- and 5293 S-arrival times from 821 selected events (Mw > 4.0) recorded during 2004-2014 are used for this study; on average, almost 20 phases per event were available. The tomographic results shed new light on crustal thickness variation along and across the Nepal Himalayas. The crustal thickness varies between 40 and 80 km from the foothills to the high Himalayas, which is verified by synthetic modeling. The crustal thickness also varies widely along the strike of the Himalayas. The zones of greater and smaller crustal thickness may be correlated with hidden transverse structures in the foothills region, which are well reflected in gravity and magnetic maps. The estimated crustal thickness matches fairly well with the free-air gravity anomaly; thinner crust corresponds to a lower gravity anomaly and vice versa. Some correlation with the magnetic field anomaly is also observed: a higher magnetic anomaly corresponds to thicker crust. We propose that the more rigid segments of the incoming Indian crust, composed of igneous and metamorphic rocks, cause more compression in the Himalayan thrust zone and lead to stronger crustal thickening. Underthrusting of weaker crust/sediments, on the other hand, is associated with less shortening and thus causes the thinner crust in the collision zone.

  20. Application of universal kriging for estimation of earthquake ground motion: Statistical significance of results

    SciTech Connect

    Carr, J.R.; Roberts, K.P.

    1989-02-01

    Universal kriging is compared with ordinary kriging for estimation of earthquake ground motion. Ordinary kriging is based on a stationary random function model; universal kriging is based on a nonstationary random function model representing first-order drift. The accuracy of universal kriging is compared with that of ordinary kriging, with cross-validation used as the basis for comparison. Hypothesis testing on these results shows that the accuracy obtained using universal kriging is not significantly different from that obtained using ordinary kriging. Tests based on normal-distribution assumptions are applied to errors measured in the cross-validation procedure; t and F tests reveal no evidence to suggest that universal and ordinary kriging differ for estimation of earthquake ground motion. Nonparametric hypothesis tests applied to these errors and jackknife statistics yield the same conclusion: universal and ordinary kriging are not significantly different for this application as determined by a cross-validation procedure. These results are based on application to four independent data sets (four different seismic events).

  1. Estimation of seismic source parameters for earthquakes in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Rhee, H.; Sheen, D.

    2013-12-01

    Recent seismicity in the Korean Peninsula is low, but there is potential for more severe seismic activity. Historical records show that there were many damaging earthquakes around the Peninsula. The absence of instrumental records of damaging earthquakes hinders our efforts to understand seismotectonic characteristics in the Peninsula and to predict seismic hazards. It is therefore important to analyze instrumental records precisely to help improve our knowledge of seismicity in this region. Several studies on seismic source parameters in the Korean Peninsula were performed to find source parameters for single events (Kim, 2001; Jo and Baag, 2007; Choi, 2009; Choi and Shim, 2009; Choi, 2010; Choi and Noh, 2010; Kim et al., 2010), to find relationships between source parameters (Kim and Kim, 2008; Shin and Kang, 2008), or to determine input parameters for stochastic strong ground motion simulation (Jo and Baag, 2001; Junn et al., 2002). In all previous studies, however, the source parameters were estimated only from small numbers of large earthquakes in this region. To understand the seismotectonic environment of a low-seismicity region, it is better to estimate the source parameters using as much data as possible. In this study, therefore, we estimated seismic source parameters, such as the corner frequency, Brune stress drop and moment magnitude, from 503 events with ML≥1.6 that occurred in the southern part of the Korean Peninsula from 2001 to 2012. The data set consists of 2,834 S-wave trains on three-component seismograms recorded at broadband seismograph stations operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. To calculate the seismic source parameters, we used the iterative method of Jo and Baag (2001) based on the methods of Snoke (1987) and Andrews (1986). In this method, the source parameters are estimated by using the integration of
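
    The spectral-fitting step can be sketched as a Levenberg-Marquardt fit of the omega-square model with whole-path attenuation, Ω(f) = Ω0 exp(-π f t*) / (1 + (f/fc)²), to an observed S-wave displacement spectrum; the starting values below are arbitrary placeholders, and this is not the authors' exact parameterization.

      import numpy as np
      from scipy.optimize import curve_fit

      def omega_square(f, omega0, fc, t_star):
          """Brune omega-square spectrum with whole-path attenuation t* = t/Q."""
          return omega0 * np.exp(-np.pi * f * t_star) / (1.0 + (f / fc) ** 2)

      def fit_spectrum(f, spec):
          """Levenberg-Marquardt fit, done in log amplitude for numerical stability."""
          model = lambda f, lg0, fc, ts: np.log(omega_square(f, np.exp(lg0), fc, ts))
          p, _ = curve_fit(model, f, np.log(spec), p0=(np.log(spec[0]), 5.0, 0.02))
          return np.exp(p[0]), p[1], p[2]  # long-period level, corner frequency, t*

    The long-period level then scales to seismic moment through density, S-wave speed, travel distance and radiation-pattern factors, from which the moment magnitude and Brune stress drop follow.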

  2. Seismic moment of the 1891 Nobi, Japan, earthquake estimated from historical seismograms

    NASA Astrophysics Data System (ADS)

    Fukuyama, E.; Muramatu, I.; Mikumo, T.

    2007-06-01

    The seismic moment of the 1891 Nobi, Japan, earthquake has been evaluated from the historical seismogram recorded at the Central Meteorological Observatory in Tokyo. For this purpose, synthetic seismograms from point and finite source models with various fault parameters were calculated by a discrete wave-number method, incorporating the instrumental response of the Gray-Milne-Ewing seismograph, and then compared with the original records. Our estimate of the seismic moment (M0) is 1.8 × 10^20 N m, corresponding to a moment magnitude (Mw) of 7.5. This is significantly smaller than previous estimates based on the distribution of damage, but is consistent with the estimate inferred from the geological field survey (Matsuda, 1974) of the surface faults.
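
    As a quick consistency check (ours, not the authors'), the standard Hanks-Kanamori relation converts this moment to magnitude:

      \[ M_w = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right), \qquad M_0\ \text{in N m}, \]
      \[ M_0 = 1.8\times 10^{20}\ \text{N m} \;\Rightarrow\; M_w = \tfrac{2}{3}(20.26 - 9.1) \approx 7.4, \]

    consistent with the reported Mw 7.5 to within about 0.1 magnitude units.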

  3. Uncertainty estimations for moment tensor inversions: the issue of the 2012 May 20 Emilia earthquake

    NASA Astrophysics Data System (ADS)

    Scognamiglio, Laura; Magnoni, Federica; Tinti, Elisa; Casarotti, Emanuele

    2016-08-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Geoscientists ordinarily use moment tensor catalogues; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia main shock is a representative event, since it is reported in the literature with moment magnitude (Mw) values spanning 5.63 to 6.12. A variability of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event and reveals how poorly constrained the solutions can be. In this work, we investigate the stability of the moment tensor solution for this earthquake, studying the effect of five different 1-D velocity models and of the number and distribution of the stations used in the inversion procedure. We also introduce a 3-D velocity model to account for structural heterogeneity. We finally estimate the uncertainties associated with the computed focal planes and the obtained Mw. We conclude that our reliable source solutions provide a moment magnitude ranging from 5.87 (1-D model) to 5.96 (3-D model), reducing the variability in the literature to ~0.1. We stress that estimating the seismic moment from moment tensor solutions, as well as the other kinematic source parameters, requires disclosed assumptions and explicit processing workflows. Finally, and probably more important, when a moment tensor solution is used for secondary analyses it has to be combined with the same main boundary conditions (e.g. the wave-velocity propagation model) to avoid conflicting results.

  4. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways. PMID:22576139
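
    The gate arithmetic behind such fault trees is compact; below is a minimal sketch under an assumed independence of component failures, with every probability a hypothetical placeholder.

      def or_gate(*p):
          """Failure probability of an OR gate (any input fails), independent inputs."""
          q = 1.0
          for pi in p:
              q *= 1.0 - pi
          return 1.0 - q

      def and_gate(*p):
          """Failure probability of an AND gate (all inputs fail), independent inputs."""
          q = 1.0
          for pi in p:
              q *= pi
          return q

      # hypothetical top event: (utility power AND backup generator fail) OR cooling fails
      p_fail = or_gate(and_gate(0.05, 0.10), 0.02)
      # primary and backup facilities both inoperative in the same event, assuming
      # conditionally independent responses given the shaking level
      p_both = 0.04 * 0.07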

  5. The range split-spectrum method for ionosphere estimation applied to the 2008 Kyrgyzstan earthquake

    NASA Astrophysics Data System (ADS)

    Gomba, Giorgio; Eineder, Michael

    2015-04-01

    L-band remote sensing systems, like the future Tandem-L mission, are disturbed by the ionized upper part of the atmosphere, the ionosphere. The ionosphere is a region of the upper atmosphere composed of gases that are ionized by solar radiation. The size of the effects induced on a SAR measurement depends on the electron density integrated along the radio-wave path and on its spatial variations. The main effect of the ionosphere on microwaves is an additional delay, which introduces a phase difference between SAR measurements, modifying the interferometric phase. The objectives of the Tandem-L mission are the systematic monitoring of dynamic Earth processes like Earth surface deformation, vegetation structure, ice and glacier changes and ocean surface currents. The scientific requirements regarding the mapping of surface deformation due to tectonic processes, earthquakes, volcanic cycles and anthropogenic factors demand deformation measurements, namely one-, two- or three-dimensional displacement maps with resolutions of a few hundred meters and accuracies at the centimeter to millimeter level. Ionospheric effects can make it impossible to produce deformation maps with such accuracy and must therefore be estimated and compensated. As an example of this process, the implementation of the range split-spectrum method proposed in [1,2] is presented and applied to an example dataset. The 2008 Kyrgyzstan earthquake of October 5 is imaged by an ALOS PALSAR interferogram; apart from the earthquake, many fringes due to strong ionospheric variations can also be seen. The compensated interferogram shows that the ionosphere-related fringes were successfully estimated and removed. [1] Rosen, P.A.; Hensley, S.; Chen, C., "Measurement and mitigation of the ionosphere in L-band Interferometric SAR data," Radar Conference, 2010 IEEE, pp. 1459-1463, 10-14 May 2010. [2] Brcic, R.; Parizzi, A.; Eineder, M.; Bamler, R.; Meyer, F., "Estimation and
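
    The core of the split-spectrum separation can be sketched as follows: model the unwrapped interferometric phase at frequency f as phi(f) = a*f + b/f (non-dispersive plus dispersive ionospheric part), measure phi in a low and a high range sub-band, and solve the resulting 2x2 system; the function below does this per pixel under that simple model, with the sub-band phases and frequencies as assumed inputs.

      import numpy as np

      def split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0):
          """Separate the dispersive (ionospheric) phase from two sub-band phases.
          Model: phi(f) = a*f + b/f, with the ionosphere in the b/f term."""
          A = np.array([[f_low, 1.0 / f_low],
                        [f_high, 1.0 / f_high]])
          a, b = np.linalg.solve(A, np.array([phi_low, phi_high]))
          return b / f0  # ionospheric phase at the full-band carrier f0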

  6. A simple approach to estimate earthquake magnitude from the arrival time of the peak acceleration amplitude

    NASA Astrophysics Data System (ADS)

    Noda, S.; Yamamoto, S.

    2014-12-01

    For Earthquake Early Warning (EEW) to be effective, rapid determination of magnitude (M) is important. At present, there are no methods that can accurately determine M for EEW even for extremely large events (ELE), although a number of methods have been suggested. To address this problem, we use a simple approach derived from the fact that the time difference (Top) from the onset of the body wave to the arrival time of the peak acceleration amplitude of the body wave scales with M. To test this approach, as a first step we use 15,172 accelerograms of regional earthquakes (most of them M4-7 events) from K-NET. Top is defined by analyzing the S-wave in this step. The S-onsets are calculated by adding theoretical S-P times to manually picked P-onsets. As a result, it is confirmed that log Top correlates strongly with Mw, especially in the higher frequency band (> 2 Hz). The RMS of the residuals between Mw and the M estimated in this step is less than 0.5. For the 2011 Tohoku earthquake, M is estimated to be 9.01 at 150 seconds after the initiation of the event. To increase the number of ELE data, as a second step we add teleseismic high-frequency P-wave records to the analysis. According to the results of various back-projection analyses, we consider the teleseismic P-waves to contain information on the entire rupture process. The BHZ channel data of the Global Seismographic Network for 24 events are used in this step. 2-4 Hz data from stations in the epicentral distance range of 30-85 degrees are used following the method of Hara [2007]. All P-onsets are manually picked. Top values obtained from the teleseismic data correlate well with Mw, complementing those obtained from the regional data. We conclude that the proposed approach is quite useful for estimating reliable M for EEW, even for ELE.

  7. Early magnitude estimation for the MW7.9 Wenchuan earthquake using progressively expanded P-wave time window.

    PubMed

    Peng, Chaoyong; Yang, Jiansi; Zheng, Yu; Xu, Zhiqiang; Jiang, Xudong

    2014-01-01

    More and more earthquake early warning systems (EEWS) are being developed or are currently being tested in many active seismic regions of the world. A well-known problem with real-time procedures is parameter saturation, which may lead to magnitude underestimation for large earthquakes. In this paper, the method applied to the MW 9.0 Tohoku-Oki earthquake is explored with strong-motion records of the MW 7.9, 2008 Wenchuan earthquake. We measure two early warning parameters by progressively expanding the P-wave time window (PTW) and distance range, to provide early magnitude estimates and a rapid prediction of the potential damage area. This information would have been available 40 s after the earthquake origin time and could have been refined in the subsequent 20 s using data from more distant stations. We show the suitability of the existing regression relationships between early warning parameters and magnitude, provided that an appropriate PTW is used for parameter estimation. The magnitude underestimation is in part a combined effect of high-pass filtering and the frequency dependence of the main radiating source during the rupture process. Finally, we suggest using Pd alone for magnitude estimation because of its slight magnitude saturation compared to the τc magnitude. PMID:25346344

  9. Re-estimated fault model of the 17th century great earthquake off Hokkaido using tsunami deposit data

    NASA Astrophysics Data System (ADS)

    Ioki, Kei; Tanioka, Yuichiro

    2016-01-01

    Paleotsunami research has revealed that a great earthquake occurred off eastern Hokkaido, Japan, and generated a large tsunami in the 17th century. Tsunami deposits from this event have been found far inland from the Pacific coast in eastern Hokkaido. A previous study estimated the fault model of the 17th century great earthquake by comparing the locations of lowland tsunami deposits with computed tsunami inundation areas. Tsunami deposits were also traced on a high cliff near the coast, as high as 18 m above sea level. A recent paleotsunami study also traced tsunami deposits on other high cliffs along the Pacific coast. The fault model estimated by the previous study cannot explain the tsunami deposit data on the high cliffs near the coast. In this study, we estimated a fault model of the 17th century great earthquake that explains both the widespread lowland tsunami deposit areas and the tsunami deposit data on high cliffs near the coast. We found that the distribution of lowland tsunami deposits is mainly explained by a wide rupture area on the plate interface in the Tokachi-Oki and Nemuro-Oki segments. Tsunami deposits on the high cliffs near the coast are mainly explained by a very large slip of 25 m on the shallow part of the plate interface near the trench in those segments. The total seismic moment of the 17th century great earthquake is calculated to be 1.7 × 10^22 N m (Mw 8.8). The 2011 great Tohoku earthquake ruptured a large area off Tohoku, and a very large slip was found on the shallow part of the plate interface near the trench. The 17th century great earthquake thus had the same characteristics as the 2011 great Tohoku earthquake.

  10. Crustal parameters estimated from P-waves of earthquakes recorded at a small array

    USGS Publications Warehouse

    Murdock, J.N.; Steppe, J.A.

    1980-01-01

    The P-arrival times of local and regional earthquakes outside a small network of seismometers can be used to interpret crustal parameters beneath the network by employing the time-term technique. Even when the refractor velocity is poorly determined, useful estimates of the station time-terms can be made. The method is applied to a 20 km diameter network of eight seismic stations operated near Castaic, California, during the winter of 1972-73. The stations were located in sedimentary basins. Beneath the network, the sedimentary rocks of the basins are known to range from 1 to more than 4 km in thickness. Relative time-terms are estimated from P-waves assumed to be propagated by a refractor in the mid-crust, and again from P-waves propagated by a refractor in the upper basement. For the range of velocities reported by others, the two sets of time-terms are very similar. They suggest that both refractors dip to the southwest, and the geology also indicates that the basement dips in this direction. In addition, the P-wave velocity estimated for the refractor at mid-crustal depths, roughly 6.7 km/sec, agrees with values reported by others. Thus, even in this region of complicated geologic structure, the method appears to give realistic results. © 1980 Birkhäuser Verlag.
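
    A hedged sketch of the decomposition: each refracted arrival is modeled as t_ij = a_i + b_j + s*dist_ij (event term, station term, slowness times refractor distance) and solved by linear least squares; the index arrays and the choice to fix one station term at zero are illustrative, not necessarily those of the original study.

      import numpy as np

      def time_terms(t_obs, dist, ev_idx, st_idx, n_ev, n_st):
          """Least-squares time-term analysis: t_ij = a_i + b_j + s * dist_ij.
          One station term is fixed to zero to remove the constant trade-off."""
          G = np.zeros((len(t_obs), n_ev + n_st + 1))
          for r in range(len(t_obs)):
              G[r, ev_idx[r]] = 1.0
              G[r, n_ev + st_idx[r]] = 1.0
              G[r, -1] = dist[r]
          G = np.delete(G, n_ev, axis=1)  # fix the first station term at zero
          m, *_ = np.linalg.lstsq(G, t_obs, rcond=None)
          a = m[:n_ev]                                          # event terms
          b = np.concatenate(([0.0], m[n_ev:n_ev + n_st - 1]))  # station terms
          return a, b, 1.0 / m[-1]                              # refractor velocity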

  11. Estimation of earthquake source parameters by the inversion of waveform data: synthetic waveforms

    USGS Publications Warehouse

    Sipkin, S.A.

    1982-01-01

    Two methods are presented for the recovery of a time-dependent moment-tensor source from waveform data. One procedure utilizes multichannel signal-enhancement theory; in the other a multichannel vector-deconvolution approach, developed by Oldenburg (1982) and based on Backus-Gilbert inverse theory, is used. These methods have the advantage of being extremely flexible; both may be used either routinely or as research tools for studying particular earthquakes in detail. Both methods are also robust with respect to small errors in the Green's functions and may be used to refine estimates of source depth by minimizing the misfits to the data. The multichannel vector-deconvolution approach, although it requires more interaction, also allows a trade-off between resolution and accuracy, and complete statistics for the solution are obtained. The procedures have been tested using a number of synthetic body-wave data sets, including point and complex sources, with satisfactory results. © 1982.

  12. Simultaneous estimation of earthquake source parameters and crustal Q value from broadband data of selected aftershocks of the 2001 M w 7.7 Bhuj earthquake

    NASA Astrophysics Data System (ADS)

    Saha, A.; Lijesh, S.; Mandal, P.

    2012-12-01

    This paper presents the simultaneous estimation of source parameters and crustal Q values for small to moderate-size aftershocks (Mw 2.1-5.1) of the Mw 7.7 2001 Bhuj earthquake. The horizontal-component S-waves of 144 well-located earthquakes (2001-2010) recorded at 3-10 broadband seismograph sites in the Kachchh Seismic Zone, Gujarat, India are analyzed, and their seismic corner frequencies, long-period spectral levels and crustal Q values are simultaneously estimated by inverting the horizontal component of the S-wave displacement spectrum using the Levenberg-Marquardt nonlinear inversion technique, wherein the inversion scheme is formulated on the basis of the ω-square source spectral model. The static stress drops (Δσ) are then calculated from the corner frequency and seismic moment. The estimated source parameters suggest that the seismic moment (M0) and source radius (r) of the aftershocks vary from 1.12 × 10^12 to 4.00 × 10^16 N m and 132.57 to 513.20 m, respectively, while the estimated stress drop (Δσ) and multiplicative factor (Emo) values range from 0.01 to 20.0 MPa and 1.05 to 3.39, respectively. The corner frequencies range from 2.36 to 8.76 Hz. The crustal S-wave quality factor varies from 256 to 1882, with an average of 840 for the Kachchh region, which agrees well with the crustal Q value of the seismically active New Madrid region, USA. Our estimated stress drop values are quite large compared to other similar-sized Indian intraplate earthquakes, which can be attributed to the presence of crustal mafic intrusives and aqueous fluids in the lower crust, as revealed by an earlier tomographic study of the region.
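
    For orientation, the Brune relations linking these quantities (our addition, with β the S-wave speed) are:

      \[ r = \frac{2.34\,\beta}{2\pi f_c}, \qquad \Delta\sigma = \frac{7}{16}\,\frac{M_0}{r^{3}} . \]

    Plugging in the smallest reported values, M0 = 1.12 × 10^12 N m and r ≈ 133 m, gives Δσ ≈ 0.2 MPa, inside the reported 0.01-20.0 MPa range.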

  13. Estimation of slip rate and fault displacement during shallow earthquake rupture in the Nankai subduction zone

    NASA Astrophysics Data System (ADS)

    Hamada, Yohei; Sakaguchi, Arito; Tanikawa, Wataru; Yamaguchi, Asuka; Kameda, Jun; Kimura, Gaku

    2015-03-01

    Enormous earthquakes repeatedly occur in subduction zones, and slips along megathrusts, in particular those propagating to the toe of the forearc wedge, generate ruinous tsunamis. Quantitative evaluation of the slip parameters (i.e., slip velocity, rise time and slip distance) of past slip events at the shallow, tsunamigenic part of the fault is critical for characterizing such earthquakes. Here, we attempt to quantify these parameters for slips that may have occurred along the shallow megasplay fault and the plate-boundary décollement in the Nankai Trough, off southwest Japan. We apply kinetic modeling to vitrinite reflectance profiles of two fault rock samples obtained from the Integrated Ocean Drilling Program (IODP). This approach comprises two calculation procedures: heat generation and numerical profile fitting of the vitrinite reflectance data. To obtain optimal slip parameters, residuals are calculated to assess fitting accuracy. As a result, the measured distribution of vitrinite reflectance is reasonably well fitted with a heat generation rate (Q̇) and slip duration (tr) of 16,600 J/s/m² and 6,250 s, respectively, for the megasplay, and 23,200 J/s/m² and 2,350 s, respectively, for the frontal décollement, implying slow and long-lasting slips. The estimated slip parameters are then compared with previous reports. The maximum temperature, Tmax, for the Nankai megasplay fault is consistent with the temperature constraint suggested by a previous work. Slow slip velocity, long rise time, and large displacement are recognized in both fault zones (the megasplay and the frontal décollement). These slips are slower and longer-lasting than typical coseismic slip, but are consistent with rapid afterslip.

  14. Mass Loss and Surface Displacement Estimates in Greenland from GRACE

    NASA Astrophysics Data System (ADS)

    Jensen, Tim; Forsberg, Rene

    2015-04-01

    The estimation of ice sheet mass changes from GRACE is basically an inverse problem; the solution is non-unique, and several procedures for determining the mass distribution exist. We present Greenland mass loss results from two such procedures, namely a direct spherical harmonic inversion procedure made possible by a thin-layer assumption, and a generalized inverse mascon procedure. These results are updated to the end of 2014, include the unusual 2013 mass gain anomaly, and show good agreement when leakage from the Canadian ice caps is taken into account. The GRACE mass changes are further compared to GPS uplift data on the bedrock along the edge of the ice sheet. The solid Earth deformation is assumed to consist of an elastic deformation of the crust and an anelastic deformation of the underlying mantle (GIA). The crustal deformation is due to current surface loading effects and therefore contains a strong seasonal component of variation superimposed on a secular trend. The majority of the anelastic GIA deformation of the mantle is believed to be approximately constant. The accelerating secular trend and seasonal changes seen in Greenland are therefore assumed to be due to elastic deformation from changes in surface mass loading from the ice sheet. The GRACE and GPS comparison is only valid if the signal content of the two observables is consistent. The GPS receivers measure movement at a single point on the bedrock surface and are therefore sensitive to a limited loading footprint, while the GRACE satellites measure a filtered, attenuated gravitational field at an altitude of approximately 500 km, making them sensitive to a much larger area. Despite this, the seasonal loading signals in the two observables show reasonably good agreement.

  15. Estimation of co-seismic stress change of the 2008 Wenchuan Ms8.0 earthquake

    SciTech Connect

    Sun Dongsheng; Wang Hongcai; Ma Yinsheng; Zhou Chunjing

    2012-09-26

    The in-situ stress change near a fault before and after a great earthquake is a key issue in the geosciences. In this work, based on a fault slip dislocation model of the great 2008 Wenchuan earthquake, the co-seismic stress tensor change due to the earthquake and its distribution around the Longmen Shan fault are given. Our calculated results are broadly consistent with in-situ measurements made before and after the great Wenchuan earthquake. These quantitative results provide a reference for studies of earthquake mechanisms.

  16. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    USGS Publications Warehouse

    Aagaard, B.T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  17. Estimation of earthquake source parameters from GRACE observations of changes in Earth's gravitational potential field using normal modes

    NASA Astrophysics Data System (ADS)

    Sterenborg, G.; Simons, F. J.; Welch, E.; Morrow, E.; Mitrovica, J. X.

    2013-12-01

    Since its launch in 2002, the Gravity Recovery and Climate Experiment (GRACE) has yielded tremendous insights into the spatio-temporal changes of mass redistribution in the Earth system. Such changes occur on widely varying spatial and temporal scales and take place both on Earth's surface, e.g., atmospheric mass fluctuations and the exchange of water, snow and ice, as well as in its interior, e.g., glacial isostatic adjustment and earthquakes. Each of these processes causes changes in the Earth's gravitational potential field, which GRACE observes. One example is the Antarctic and Greenland ice mass changes inferred from GRACE observations of the changing geopotential, as well as the associated time rate of change of its degree 2 and 4 zonal harmonics observed by satellite laser ranging. Deforming the Earth's surface and interior both co- and post-seismically, with some of the deformation permanent, earthquakes can affect the geopotential at spatial scales up to thousands of kilometers and at temporal scales from seconds to months. Traditional measurements of earthquakes, e.g., by seismometers, GPS and InSAR, observe the co- and post-seismic surface displacements and are invaluable for understanding earthquake triggering mechanisms, slip distributions, rupture dynamics and slow post-seismic changes. Space-based observations of geopotential changes can add a whole new dimension, as such observations are also sensitive to changes in the Earth's interior, over a larger area affected by the earthquake, over longer timescales, beyond that of Earth's longest-period normal mode, and because they have global sensitivity, including over sparsely instrumented oceanic domains. We use a joint seismic and gravitational normal-mode formalism to quantify changes in the gravitational potential due to different types of earthquakes, comparing them to predictions from dislocation models. We discuss the inverse problem of estimating the source parameters of large earthquakes

  18. Beam Loss Estimates and Control for the BNL Neutrino Facility

    SciTech Connect

    Weng, W.-T.; Lee, Y.Y.; Raparia, D.; Tsoupas, N.; Beebe-Wang, J.; Wei, J.; Zhang, S.Y.

    2005-05-16

    The requirement for low beam loss is very important, both to protect the beamline components and to make hands-on maintenance possible. In this report, the design considerations for achieving high intensity and low loss are presented. We start by specifying the beam loss limit for each physical process, followed by the design choices and parameters for realizing the required goals. The processes considered in this paper include emittance growth in the linac, H⁻ injection, transition crossing, coherent instabilities, and extraction losses.

  19. The source model and recurrence interval of Genroku-type Kanto earthquakes estimated from paleo-shoreline data

    NASA Astrophysics Data System (ADS)

    Sato, Toshinori; Higuchi, Harutaka; Miyauchi, Takahiro; Endo, Kaori; Tsumura, Noriko; Ito, Tanio; Noda, Akemi; Matsu'ura, Mitsuhiro

    2016-02-01

    In the southern Kanto region of Japan, where the Philippine Sea plate is descending at the Sagami trough, two different types of large interplate earthquakes have occurred repeatedly. The 1923 (Taisho) and 1703 (Genroku) Kanto earthquakes characterize the first and second types, respectively. A reliable source model has been obtained for the 1923 event from seismological and geodetic data, but not for the 1703 event, for which only historical records and paleo-shoreline data exist. We developed an inversion method to estimate the fault slip distribution of repeating interplate earthquakes from paleo-shoreline data, based on the idea of crustal deformation cycles associated with subduction-zone earthquakes. By applying the inversion method to the present heights of the Genroku and Holocene marine terraces developed along the coasts of the southern Boso and Miura peninsulas, we estimated the fault slip distribution of the 1703 Genroku earthquake as follows. The source region extends along the Sagami trough from the Miura peninsula to the offing of the southern Boso peninsula, covering the southern two thirds of the source region of the 1923 Kanto earthquake. The coseismic slip reaches its maximum of 20 m at the southern tip of the Boso peninsula, and the moment magnitude (Mw) is calculated to be 8.2. From the interseismic slip-deficit rates at the plate interface obtained by GPS data inversion, and assuming that the total slip deficit is compensated by coseismic slip, we can roughly estimate the average recurrence interval as 350 years for large interplate events of any type and 1400 years for Genroku-type events.

  20. Stable isotope values in coastal sediment estimate subsidence near Girdwood during the 1964 great Alaska earthquake

    NASA Astrophysics Data System (ADS)

    Bender, A. M.; Witter, R. C.; Rogers, M.; Saenger, C. P.

    2013-12-01

    Subsidence during the Mw 9.2, 1964 great Alaska earthquake lowered Turnagain Arm near Girdwood, Alaska by ~1.5 m and caused rapid relative sea-level (RSL) rise that shifted estuary mud flats inland over peat-forming wetlands. Sharp mud-over-peat contacts record these environment shifts at sites along Turnagain Arm including Bird Point, 11 km west of Girdwood. Transfer functions based on changes in intertidal microfossil populations across these contacts accurately estimate earthquake subsidence at Girdwood, but poor preservation of microfossils hampers this method at other sites in Alaska. We test a new method that employs compositions of stable carbon and nitrogen isotopes in intertidal sediments as proxies for elevation. Because marine sediment sources are expected to have higher δ13C and δ15N than terrestrial sources, we hypothesize that these values should decrease with elevation in modern intertidal sediment, and should also be more positive in estuarine mud above sharp contacts that record RSL rise than in peaty sediment below. We relate δ13C and δ15N values above and below the 1964 mud/peat contact to values in modern sediment of known elevation, and use these values qualitatively to indicate sediment source, and quantitatively to estimate the amount of RSL rise across the contact. To establish a site-specific sea level datum, we deployed a pressure transducer and compensatory barometer to record a 2-month tide series at Bird Point. We regressed the high tides from this series against corresponding NOAA verified high tides at Anchorage (~50 km west of Bird Point) to calculate a high water datum within ±0.14 m standard error (SE). To test whether or not modern sediment isotope values decrease with elevation, we surveyed a 60-m-long modern transect, sampling surface sediment at ~0.10 m vertical intervals. Results from this transect show a decrease of 4.64‰ in δ13C and 3.97‰ in δ15N between tide flat and upland sediment. To evaluate if δ13C and δ15N

  1. The energy radiated by the 26 December 2004 Sumatra-Andaman earthquake estimated from 10-minute P-wave windows

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2007-01-01

    The rupture process of the Mw 9.1 Sumatra-Andaman earthquake lasted for approximately 500 sec, nearly twice as long as the teleseismic time windows between the P and PP arrival times generally used to compute radiated energy. In order to measure the P waves radiated by the entire earthquake, we analyze records that extend from the P-wave to the S-wave arrival times from stations at distances Δ > 60°. These 8- to 10-min windows contain the PP, PPP, and ScP arrivals, along with other multiply reflected phases. To gauge the effect of including these additional phases, we form the spectral ratio of the source spectrum estimated from extended windows (between TP and TS) to the source spectrum estimated from normal windows (between TP and TPP). The extended windows are analyzed as though they contained only the P-pP-sP wave group. We analyze four smaller earthquakes that occurred in the vicinity of the Mw 9.1 mainshock, with similar depths and focal mechanisms. These smaller events range in magnitude from an Mw 6.0 aftershock of 9 January 2005 to the Mw 8.6 Nias earthquake that occurred to the south of the Sumatra-Andaman earthquake on 28 March 2005. We average the spectral ratios for these four events to obtain a frequency-dependent operator for the extended windows. We then correct the source spectrum estimated from the extended records of the 26 December 2004 mainshock to obtain a complete or corrected source spectrum for the entire rupture process (~600 sec) of the great Sumatra-Andaman earthquake. Our estimate of the total seismic energy radiated by this earthquake is 1.4 × 10^17 J. When we compare the corrected source spectrum for the entire earthquake to the source spectrum from the first ~250 sec of the rupture process (obtained from normal teleseismic windows), we find that the mainshock radiated much more seismic energy in the first half of the rupture process than in the second half, especially over the period range from 3 sec to 40 sec.

  2. Exploration of deep sedimentary layers in Tacna city, southern Peru, using microtremors and earthquake data for estimation of local amplification

    NASA Astrophysics Data System (ADS)

    Yamanaka, Hiroaki; Gamero, Mileyvi Selene Quispe; Chimoto, Kosuke; Saguchi, Kouichiro; Calderon, Diana; La Rosa, Fernándo Lázares; Bardales, Zenón Aguilar

    2016-01-01

    S-wave velocity profiles of the sedimentary layers in Tacna, southern Peru, based on analysis of microtremor array data and earthquake records, have been determined for estimation of site amplification. We investigated the vertical component of microtremors in temporary arrays at two sites in the city to obtain Rayleigh-wave phase velocities. A receiver function was also estimated from existing earthquake data at a strong motion station near one of the microtremor exploration sites. The phase velocity and the receiver function were jointly inverted for S-wave velocity profiles. The depths to the basement, with an S-wave velocity of 2.8 km/s, are similar at the two sites, about 1 km. The top soil at the site in the severely damaged area of the city had a lower S-wave velocity than that in the slightly damaged area during the 2001 southern Peru earthquake. We subsequently estimate site amplifications from the velocity profiles and find that amplification is large at periods from 0.2 to 0.8 s in the damaged area, indicating possible reasons for the differences in the damage observed during the 2001 southern Peru earthquake.

  3. Study of an image restoration method based on Poisson-maximum likelihood estimation method for earthquake ruin scene

    NASA Astrophysics Data System (ADS)

    Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong

    2014-09-01

    An image restoration method based on the Poisson maximum-likelihood estimation (PMLE) method for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, and an automatic acceleration method is used to speed up the iterative process; an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and PSNR (peak signal-to-noise ratio) are used to validate the restoration performance. The simulation results show that the number of iterations affects the PSNR of the processed image and the operation time, and that the method can restore images of earthquake ruin scenes effectively and practically.
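
    Poisson maximum-likelihood deconvolution is classically realized by the Richardson-Lucy iteration; the sketch below is that generic iteration (without the paper's particular acceleration scheme), with the point-spread function assumed known.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
          """Poisson maximum-likelihood (Richardson-Lucy) image restoration."""
          estimate = np.full(image.shape, image.mean(), dtype=float)
          psf_mirror = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = image / np.maximum(blurred, eps)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate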

  4. Using safety inspection data to estimate shaking intensity for the 1994 Northridge earthquake

    USGS Publications Warehouse

    Thywissen, K.; Boatwright, J.

    1998-01-01

    We map the shaking intensity suffered in Los Angeles County during the 17 January 1994 Northridge earthquake using municipal safety inspection data. The intensity is estimated from the number of buildings given red, yellow, or green tags, aggregated by census tract. Census tracts contain from 200 to 4000 residential buildings and have an average area of 6 km², but are as small as 2 and 1 km² in the most densely populated areas of the San Fernando Valley and downtown Los Angeles, respectively. In comparison, the zip code areas on which standard MMI intensity estimates are based are six times larger, on average, than the census tracts. We group the buildings by age (before and after 1940 and 1976), by number of housing units (one, two to four, and five or more), and by construction type, and we normalize the tags by the total number of similar buildings in each census tract. We analyze the seven most abundant building categories. The fragilities (the fraction of buildings in each category tagged within each intensity level) for these seven building categories are adjusted so that the intensity estimates agree. We calibrate the shaking intensity to correspond with the modified Mercalli intensities (MMI) estimated and compiled by Dewey et al. (1995); the shapes of the resulting isoseismals are similar, although we underestimate the extent of the MMI = 6 and 7 areas. The fragility varies significantly between different building categories (by factors of 10 to 20) and building ages (by factors of 2 to 6). The post-1940 wood-frame multi-family (≥5 units) dwellings make up the most fragile building category, and the post-1940 wood-frame single-family dwellings make up the most resistant building category.

  5. Estimation of slip parameters associated with frictional heating during the 1999 Taiwan Chi-Chi earthquake by vitrinite reflectance geothermometry

    NASA Astrophysics Data System (ADS)

    Maekawa, Yuka; Hirono, Tetsuro; Yabuta, Hikaru; Mukoyoshi, Hideki; Kitamura, Manami; Ikehara, Minoru; Tanikawa, Wataru; Ishikawa, Tsuyoshi

    2014-12-01

    To estimate the slip parameters and understand the fault lubrication mechanism during the 1999 Taiwan Chi-Chi earthquake, we applied vitrinite reflectance geothermometry to samples retrieved from the Chelungpu fault. We found a marked reflectance anomaly of 1.30% ± 0.21% in the primary slip zone of the earthquake, whereas the reflectances in the surrounding deformed and host rocks were 0.45% to 0.77%. By applying a kinetic model of vitrinite thermal maturation together with a one-dimensional heat and thermal diffusion equation, we determined the shear stress and peak temperature in the slip zone during the earthquake to be 1.00 ± 0.04 MPa and 626°C ± 25°C, respectively. Taking into account the probable overestimation of the temperature owing to a mechanochemically enhanced reaction or flash heating at grain contacts, this temperature should be considered an upper limit. The lower limit was previously constrained to 400°C by studies of fluid-mobile trace-element concentrations and magnetic minerals. Therefore, we inferred that the peak temperature during the Chi-Chi earthquake was 400°C to 626°C, corresponding to an apparent friction coefficient of 0.01 to 0.06. Such low friction and the previous evidence of a high-temperature fluid suggest that thermal pressurization likely contributed to dynamic weakening during the Chi-Chi earthquake.

  6. Effects of tag loss on direct estimates of population growth rate

    USGS Publications Warehouse

    Rotella, J.J.; Hines, J.E.

    2005-01-01

    The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).

  7. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
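
    A minimal sketch of the decay-fit idea behind IRDM, assuming the standard SEA conversion between decay rate and loss factor, eta = DR / (27.3 * f); the synthetic single-mode response below stands in for a measured, band-filtered impulse response.

      import numpy as np
      from scipy.signal import hilbert

      fs, fc, eta_true = 8192.0, 500.0, 0.02
      t = np.arange(0.0, 1.0, 1.0 / fs)
      h = np.exp(-np.pi * fc * eta_true * t) * np.sin(2.0 * np.pi * fc * t)

      env_db = 20.0 * np.log10(np.abs(hilbert(h)) + 1e-30)  # envelope in dB
      sel = slice(0, int(0.2 * fs))                 # fit the early, clean decay
      decay_rate = -np.polyfit(t[sel], env_db[sel], 1)[0]   # dB/s
      eta = decay_rate / (27.3 * fc)
      print(f"estimated loss factor = {eta:.4f} (true value {eta_true})")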

  8. Estimation of human heat loss in five Mediterranean regions.

    PubMed

    Bilgili, M; Simsek, E; Sahin, B; Yasar, A; Ozbek, A

    2015-10-01

    This study investigates the effects of seasonal weather differences on the human body's heat losses in the Mediterranean region of Turkey. The provinces of Adana, Antakya, Osmaniye, Mersin and Antalya were chosen for the research, and monthly atmospheric temperatures, relative humidity, wind speed and atmospheric pressure data from 2007 were used. In all these provinces, radiative, convective and evaporative heat losses from the human body based on skin surface and respiration were analyzed from meteorological data by using the heat balance equation. According to the results, the rate of radiative, convective and evaporative heat losses from the human body varies considerably from season to season. In all the provinces, 90% of heat loss was caused by heat transfer from the skin, with the remaining 10% taking place through respiration. Furthermore, radiative and convective heat loss through the skin reached the highest values in the winter months at approximately 110-140 W/m², with the lowest values coming in the summer months at roughly 30-50 W/m². PMID:26025784
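
    A hedged sketch of the skin-surface terms in a heat balance of the kind used above; the forced-convection coefficient h_c = 8.3 * v**0.6 is one common correlation, not necessarily the paper's, and all input values are illustrative.

      sigma = 5.67e-8                  # Stefan-Boltzmann constant, W/m^2/K^4
      eps = 0.95                       # skin emissivity (assumed)
      T_skin, T_air = 306.15, 283.15   # K: ~33 C skin, 10 C winter air (assumed)
      v = 3.0                          # wind speed, m/s (assumed)

      q_rad = eps * sigma * (T_skin**4 - T_air**4)   # radiative skin loss, W/m^2
      q_conv = 8.3 * v**0.6 * (T_skin - T_air)       # convective skin loss, W/m^2
      print(f"radiative {q_rad:.0f} W/m^2, convective {q_conv:.0f} W/m^2")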

  9. Strong near-trench locking and its temporal change in the rupture area of the 2011 Tohoku-oki earthquake estimated from cumulative slip and slip vectors of interplate earthquakes

    NASA Astrophysics Data System (ADS)

    Uchida, N.; Hasegawa, A.; Matsuzawa, T.

    2012-12-01

    The 2011 Mw 9.0 Tohoku-oki earthquake is characterized by large near-trench slip that excited a disastrous tsunami. It is of great importance to estimate the coupling state near the trench, both to understand the temporal evolution of interplate coupling near the earthquake source and to assess tsunami risk along the trench. However, the coupling state in near-trench areas far from land is usually not well constrained. The cumulative offset of small repeating earthquakes reflects the in situ slip history on a fault, and the slip vectors of interplate earthquakes reflect the heterogeneous distribution of coupling on the plate boundary. In this study, we use repeating earthquake and slip vector data to estimate the spatio-temporal change in slip and coupling in and around the source area of the Tohoku-oki earthquake near the Japan trench. The repeating earthquake data for the 27 years before the Tohoku-oki earthquake show an absence of repeating earthquake groups in the large-coseismic-slip area and low and variable slip rates in the moderate-coseismic-slip region surrounding it. The absence of repeaters by itself could be explained both by very weak coupling and by very strong coupling. However, the rotation of slip vectors of interplate earthquakes at the deeper extension of the large-coseismic-slip area suggests the plate boundary was locked in the near-trench area before the earthquake, which is consistent with the estimation by Hasegawa et al. (2012) based on stress tensor analysis of upper-plate events near the trench axis. The repeating earthquake data, on the other hand, show small but distinct increases in the slip rate in the 3-5 years before the earthquake near the area of large coseismic slip, suggesting preseismic unfastening of the locked area in the last stage of the earthquake cycle. After the Tohoku-oki earthquake, repeating earthquake activity in the main rupture area disappeared almost completely and slip vectors of
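
    Cumulative slip from a repeater sequence is commonly inferred from an empirical moment-slip scaling; the sketch below assumes the widely used Nadeau and Johnson (1998) relation, d [cm] = 10**(-2.36 + 0.17 * log10(Mo [dyne cm])), with made-up moments.

      import numpy as np

      moments_Nm = np.array([3.5e13, 4.1e13, 3.8e13, 4.5e13])  # repeater moments, N m
      moments_dyne_cm = moments_Nm * 1.0e7                     # unit conversion
      slip_cm = 10.0 ** (-2.36 + 0.17 * np.log10(moments_dyne_cm))
      print(f"cumulative slip = {slip_cm.sum():.1f} cm")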

  10. The tsunami source area of the 2003 Tokachi-oki earthquake estimated from tsunami travel times and its relationship to the 1952 Tokachi-oki earthquake

    USGS Publications Warehouse

    Hirata, K.; Tanioka, Y.; Satake, K.; Yamaki, S.; Geist, E.L.

    2004-01-01

    We estimate the tsunami source area of the 2003 Tokachi-oki earthquake (Mw 8.0) from observed tsunami travel times at 17 Japanese tide gauge stations. The estimated tsunami source area (~1.4 × 10⁴ km²) coincides with the western half of the ocean-bottom deformation area (~2.52 × 10⁴ km²) of the 1952 Tokachi-oki earthquake (Mw 8.1), previously inferred from tsunami waveform inversion. This suggests that the 2003 event ruptured only the western half of the 1952 rupture extent. Geographical distribution of the maximum tsunami heights in 2003 differs significantly from that of the 1952 tsunami, supporting this hypothesis. Analysis of first-peak tsunami travel times indicates that a major uplift of the ocean bottom occurred approximately 30 km to the NNW of the mainshock epicenter, just above a major asperity inferred from seismic waveform inversion. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.

  11. Uncertainty in Climatology-Based Estimates of Soil Water Infiltration Losses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Local climatology is often used to estimate infiltration losses at the field scale. The objective of this work was to assess the uncertainty associated with such estimates. We computed infiltration losses from the water budget of a soil layer from monitoring data on water flux values at the soil su...

  12. Estimates of stress drop and crustal tectonic stress from the 27 February 2010 Maule, Chile, earthquake: Implications for fault strength

    USGS Publications Warehouse

    Luttrell, K.M.; Tong, X.; Sandwell, D.T.; Brooks, B.A.; Bevis, M.G.

    2011-01-01

    The great 27 February 2010 Mw 8.8 earthquake off the coast of southern Chile ruptured a ~600 km length of subduction zone. In this paper, we make two independent estimates of shear stress in the crust in the region of the Chile earthquake. First, we use a coseismic slip model constrained by geodetic observations from interferometric synthetic aperture radar (InSAR) and GPS to derive a spatially variable estimate of the change in static shear stress along the ruptured fault. Second, we use a static force balance model to constrain the crustal shear stress required to simultaneously support observed fore-arc topography and the stress orientation indicated by the earthquake focal mechanism. This includes the derivation of a semianalytic solution for the stress field exerted by surface and Moho topography loading the crust. We find that the deviatoric stress exerted by topography is minimized in the limit when the crust is considered an incompressible elastic solid, with a Poisson ratio of 0.5, and is independent of Young's modulus. This places a strict lower bound on the critical stress state maintained by the crust supporting plastically deformed accretionary wedge topography. We estimate the coseismic shear stress change from the Maule event ranged from −6 MPa (stress increase) to 17 MPa (stress drop), with a maximum depth-averaged crustal shear-stress drop of 4 MPa. We separately estimate that the plate-driving forces acting in the region, regardless of their exact mechanism, must contribute at least 27 MPa trench-perpendicular compression and 15 MPa trench-parallel compression. This corresponds to a depth-averaged shear stress of at least 7 MPa. The comparable magnitude of these two independent shear stress estimates is consistent with the interpretation that the section of the megathrust fault ruptured in the Maule earthquake is weak, with the seismic cycle relieving much of the total sustained shear stress in the crust. Copyright 2011 by the American

  13. Optimized sensor location for estimating story-drift angle for tall buildings subject to earthquakes

    NASA Astrophysics Data System (ADS)

    Ozawa, Sayuki; Mita, Akira

    2016-04-01

    Structural Health Monitoring (SHM) is a technology that can quantitatively evaluate the extent of deterioration or damage in a building. Most SHM systems utilize only a few sensors, and the sensors are placed at equal intervals, including on the roof. However, such sensor locations have not been verified. Therefore, in this study, the optimal location of the sensors is studied for estimating the inter-story drift angle, which is used in immediate diagnosis after an earthquake. This study proposes a practical optimal sensor location method after testing all possible sensor location combinations. The simulation results for all location patterns proved that placing a sensor on the roof is not always optimal. This result is practically useful, as it is difficult to place a sensor on the roof in most cases. The Modal Assurance Criterion (MAC) underlies one practical optimal sensor location method. We propose the MASS Modal Assurance Criterion (MAC*), which incorporates the mass matrix of the building into the MAC: either the mass matrix or the stiffness matrix needs to be considered for the orthogonality of the mode vectors, a condition the normal MAC does not account for. The sensor locations determined by MAC* were superior to those from the previous method, MAC. This study thus provides important knowledge about sensor locations for implementing SHM systems.
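
    A minimal sketch of the ordinary MAC and a mass-weighted variant of the kind the MAC* above describes; the mode shapes and the lumped mass matrix are toy values.

      import numpy as np

      def mac(phi_i, phi_j):
          # standard Modal Assurance Criterion between two mode-shape vectors
          return np.abs(phi_i @ phi_j) ** 2 / ((phi_i @ phi_i) * (phi_j @ phi_j))

      def mac_mass(phi_i, phi_j, M):
          # mass-weighted variant: respects M-orthogonality of mode vectors
          return np.abs(phi_i @ M @ phi_j) ** 2 / (
              (phi_i @ M @ phi_i) * (phi_j @ M @ phi_j))

      M = np.diag([2.0, 1.0, 1.0])           # toy 3-DOF lumped mass matrix
      phi1 = np.array([0.3, 0.7, 1.0])
      phi2 = np.array([0.8, 0.4, -1.0])
      print(mac(phi1, phi2), mac_mass(phi1, phi2, M))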

  14. Earthquake Analysis.

    ERIC Educational Resources Information Center

    Espinoza, Fernando

    2000-01-01

    Indicates the importance of the development of students' measurement and estimation skills. Analyzes earthquake data recorded at seismograph stations and explains how to read and modify the graphs. Presents an activity for student evaluation. (YDS)

  15. Loss of Information in Estimating Item Parameters in Incomplete Designs

    ERIC Educational Resources Information Center

    Eggen, Theo J. H. M.; Verelst, Norman D.

    2006-01-01

    In this paper, the efficiency of conditional maximum likelihood (CML) and marginal maximum likelihood (MML) estimation of the item parameters of the Rasch model in incomplete designs is investigated. The use of the concept of F-information (Eggen, 2000) is generalized to incomplete testing designs. The scaled determinant of the F-information…

  16. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
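
    One published form of such a calibrated casualty rate (Jaiswal et al., 2009) models the fatality rate as a lognormal function of shaking intensity; the sketch below assumes that form, with country parameters (theta, beta) and exposure numbers that are purely illustrative.

      import numpy as np
      from scipy.stats import norm

      theta, beta = 14.05, 0.17                    # illustrative country parameters
      intensities = np.array([6.0, 7.0, 8.0, 9.0]) # MMI bins
      exposure = np.array([2.0e6, 5.0e5, 8.0e4, 1.0e4])  # people per bin (assumed)

      rate = norm.cdf(np.log(intensities / theta) / beta)  # fatality rate per bin
      print(f"expected fatalities ~ {(rate * exposure).sum():.0f}")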

  17. Source parameters of the 2008 Bukavu-Cyangugu earthquake estimated from InSAR and teleseismic data

    NASA Astrophysics Data System (ADS)

    D'Oreye, Nicolas; González, Pablo J.; Shuler, Ashley; Oth, Adrien; Bagalwa, Louis; Ekström, Göran; Kavotha, Déogratias; Kervyn, François; Lucas, Celia; Lukaya, François; Osodundu, Etoy; Wauthier, Christelle; Fernández, José

    2011-02-01

    Earthquake source parameter determination is of great importance for hazard assessment, as well as for a variety of scientific studies concerning regional stress and strain release and volcano-tectonic interaction. This is especially true for poorly instrumented, densely populated regions such as encountered in Africa, where even the distribution of seismicity remains poorly documented. In this paper, we combine data from satellite radar interferometry (InSAR) and teleseismic waveforms to determine the source parameters of the Mw 5.9 earthquake that occurred on 2008 February 3 near the cities of Bukavu (DR Congo) and Cyangugu (Rwanda). This was the second largest earthquake ever to be recorded in the Kivu basin, a section of the western branch of the East African Rift (EAR). This earthquake is of particular interest due to its shallow depth and proximity to active volcanoes and Lake Kivu, which contains high concentrations of dissolved carbon dioxide and methane. The shallow depth and possible similarity with dyking events recognized in other parts of EAR suggested the potential association of the earthquake with a magmatic intrusion, emphasizing the necessity of accurate source parameter determination. In general, we find that estimates of fault plane geometry, depth and scalar moment are highly consistent between teleseismic and InSAR studies. Centroid-moment-tensor (CMT) solutions locate the earthquake near the southern part of Lake Kivu, while InSAR studies place it under the lake itself. CMT solutions characterize the event as a nearly pure double-couple, normal faulting earthquake occurring on a fault plane striking 350° and dipping 52° east, with a rake of -101°. This is consistent with locally mapped faults, as well as InSAR data, which place the earthquake on a fault striking 355° and dipping 55° east, with a rake of -98°. The depth of the earthquake was constrained by a joint analysis of teleseismic P and SH waves and the CMT data set, showing that

  18. Cascadia's Staggering Losses

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Vogt, B.

    2001-05-01

    Recent worldwide earthquakes have resulted in staggering losses. The Northridge, California; Kobe, Japan; Loma Prieta, California; Izmit, Turkey; Chi-Chi, Taiwan; and Bhuj, India earthquakes, which range from magnitudes 6.7 to 7.7, have all occurred near populated areas. These earthquakes have resulted in estimated losses between $3 billion and $300 billion, with tens to tens of thousands of fatalities. Subduction zones are capable of producing the largest earthquakes. The 1939 M7.8 Chilean, the 1960 M9.5 Chilean, the 1964 M9.2 Alaskan, the 1970 M7.8 Peruvian, the 1985 M7.9 Mexico City and the 2001 M7.7 Bhuj earthquakes are damaging subduction zone quakes. The Cascadia fault zone poses a tremendous hazard in the Pacific Northwest due to the ground shaking and tsunami inundation hazards combined with the population. To address the Cascadia subduction zone threat, the Oregon Department of Geology and Mineral Industries conducted a preliminary statewide loss study. The 1998 Oregon study incorporated a M8.5 quake, the influence of near surface soil effects and default building, social and economic data available in FEMA's HAZUS97 software. Direct financial losses are projected at over $12 billion. Casualties are estimated at about 13,000. Over 5,000 of the casualties are estimated to result in fatalities from hazards relating to tsunamis and unreinforced masonry buildings.

  19. Source Process of the 2010 Great Chile Earthquake (Mw8.8) Estimated Using Observed Tsunami Waveforms

    NASA Astrophysics Data System (ADS)

    Tanioka, Y.; Gusman, A. R.

    2010-12-01

    The great earthquake, Mw 8.8, occurred in Chile on 27 February 2010 at 06:34:14 UTC. The number of casualties from this earthquake reached 800, and more than 500 of them were killed by tsunamis. The earthquake generated a large tsunami that propagated across the Pacific and reached coasts including Hawaii, Japan, and Alaska. The maximum run-up height of the tsunami was 28 m in Chile. The tsunami was observed at the DART real-time tsunami monitoring systems installed in the Pacific by NOAA-PMEL and also at tide gauges around the Pacific. In this paper, the tsunami waveforms observed at 9 DART stations, 32412, 51406, 51426, 54401, 43412, 46412, 46409, 46403, and 21413, are used to estimate the slip distribution of the 2010 Chile earthquake. The source area of 500 km × 150 km is divided into 30 subfaults of 50 km × 50 km. The Global CMT solution gives the focal mechanism of the earthquake as strike = 18°, dip = 18°, rake = 112°; these fault parameters are assumed for all subfaults. The tsunami is numerically computed on actual bathymetry. The finite-difference computation for the linear long-wave equations is carried out for the whole Pacific. The grid size is 5 minutes, about 9 km. Tsunami waveforms at the 9 DART stations are computed from each subfault with a unit amount of slip and used as the Green's functions for the inversion. The result of the tsunami inversion indicates that a large slip of more than 10 m occurred in the source area from about 150 km northeast of the epicenter to about 200 km southwest of the epicenter. The maximum slip is estimated to be 19 m at a subfault located southwest of the epicenter. The total rupture length is found to be about 350-400 km. The result also indicates a bilateral rupture process for the great Chile earthquake. The total seismic moment calculated from the slip distribution is 2.6 × 10²² N m (Mw 8.9), assuming a rigidity of 4 × 10¹⁰ N/m². This
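
    The inversion described above is linear in slip, so a minimal sketch is ordinary non-negative least squares on concatenated waveforms, d = G m; the Green's function matrix below is a random stand-in for waveforms computed from unit slip on each subfault.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      n_samples, n_subfaults = 2000, 30     # concatenated records; 30 subfaults
      G = rng.normal(size=(n_samples, n_subfaults))      # stand-in Green's functions
      m_true = np.maximum(rng.normal(5.0, 4.0, n_subfaults), 0.0)  # "true" slip, m
      d = G @ m_true + rng.normal(scale=0.1, size=n_samples)       # observed data

      m_est, _ = nnls(G, d)                 # slip constrained to be non-negative
      print(np.round(m_est, 1))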

  20. Loss estimation and damage forecast using database provided

    NASA Astrophysics Data System (ADS)

    Pyrchenko, V.; Byrova, V.; Petrasov, A.

    2009-04-01

    A wide spectrum of natural hazards is observed in the territory of Russia. This makes it necessary to investigate the numerous occurrences of dangerous natural processes and to research the mechanisms of their development and interaction with each other (synergetic amplification or the emergence of new hazards) for the purpose of forecasting possible losses. Employees of the Laboratory of Analysis of Geological Risk, IEG RAS, have created a database of natural hazard occurrences in the territory of Russia, containing information on 1310 events during 1991-2008. The wide range of sources used created certain difficulties in building the database and required the development of a special new technique for unifying information received at different times. One element of this technique is a classification of the negative consequences of natural hazards, accounting for death tolls, injuries, other victims, and direct economic damage. The database has made it possible to track the dynamics of natural hazards and the emergency situations (ES) caused by them over the period considered, and also to identify the patterns of their development across the territory of Russia in time and space. This provides the theoretical and methodological basis for forecasting possible losses, with a certain degree of probability, for the territory of Russia and for its separate regions, which in the future should support adequate, prompt, and efficient pre-emptive decision-making.

  1. A teleseismic study of the 2002 Denali fault, Alaska, earthquake and implications for rapid strong-motion estimation

    USGS Publications Warehouse

    Ji, C.; Helmberger, D.V.; Wald, D.J.

    2004-01-01

    Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, refined in successive phases, progressively improve the match to the waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography and further (Phase III) by using the calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.

  2. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undergoes events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524
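
    As a rough consistency check (not the paper's own computation, which uses a geodetically derived coseismic moment), the standard moment-magnitude relation gives a moment for an Mw 7.9 event, and hence an interval of the same order as the one quoted above:

      \[ M_0 \approx 10^{1.5 M_w + 9.1} = 10^{1.5(7.9) + 9.1} \approx 8.9 \times 10^{20}\ \mathrm{N\,m}, \qquad T \approx \frac{M_0}{\dot{M}_0} = \frac{8.9 \times 10^{20}\ \mathrm{N\,m}}{2.7 \times 10^{17}\ \mathrm{N\,m/yr}} \approx 3300\ \mathrm{yr}. \]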

  3. Workshop on continuing actions to reduce potential losses from future earthquakes in the Northeastern United States: proceedings of conference XXI

    SciTech Connect

    Hays, W.W.; Gori, P.L.

    1983-01-01

    This workshop was designed to define the earthquake threat in the eastern United States and to improve earthquake preparedness. Four major themes were addressed: (1) the nature of the earthquake threat in the northeast and what can be done to improve the state of preparedness; (2) increasing public awareness and concern for the earthquake hazard in the northeast; (3) improving the state of preparedness through scientific, engineering, and social science research; and (4) possible functions of one or more seismic safety organizations. Papers have been abstracted separately. (ACR)

  4. Combined UAVSAR and GPS Estimates of Fault Slip for the M 6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Donnellan, A.; Parker, J. W.; Hawkins, B.; Hensley, S.; Jones, C. E.; Owen, S. E.; Moore, A. W.; Wang, J.; Pierce, M. E.; Rundle, J. B.

    2014-12-01

    The South Napa to Santa Rosa area has been observed with NASA's UAVSAR since late 2009 as part of an experiment to monitor areas identified as having a high probability of an earthquake. The M 6.0 South Napa earthquake occurred on 24 August 2014. The area was flown on 29 May 2014, preceding the earthquake, and again on 29 August 2014, five days after the earthquake. The UAVSAR results show slip on a single fault at the south end of the rupture near the epicenter of the event. The rupture branches out into multiple faults further north near the Napa area. A combined inversion of rapid GPS results and the unwrapped UAVSAR interferogram indicates nearly pure strike-slip motion. Using this assumption, the UAVSAR data show horizontal right-lateral slip across the fault of 19 cm at the south end of the rupture, increasing to 70 cm northward over a distance of 6.5 km. The joint inversion indicates that slip of ~30 cm on a network of sub-parallel faults is concentrated in a zone about 17 km long. The lower depths of the faults are 5-8.5 km. The eastern two sub-parallel faults break the surface, while three faults to the west are buried at depths ranging from 2-6 km, with deeper depths to the north and west. The geodetic moment release is equivalent to a M 6.1 event. Additional ruptures are observed in the interferogram, but the inversions suggest that they represent superficial slip that does not contribute to the overall moment release.

  5. Earthquake related VLF activity and Electron Precipitation as a Major Agent of the Inner Radiation Belt Losses

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Georgios C.; Sidiropoulos, Nikolaos; Barlas, Georgios

    2015-04-01

    Radiation belt electron precipitation (RBEP) into the topside ionosphere is a phenomenon that has been known for several decades. However, the source and loss mechanisms of the inner radiation belt, including RBEP, are still not well understood. Here we present the results of a systematic study of RBEP observations, obtained from the DEMETER satellite and the series of POES satellites, in comparison with variations of seismic activity. We found that a type of RBEP burst lasting ~1-3 min presents special characteristics in the inner region of the inner radiation belt before large (M >~7, or even M >~5) earthquakes (EQs), for instance: characteristic (a) flux-time profiles, (b) energy spectra, (c) electron flux temporal evolution, (d) spatial distributions, (e) broadband VLF activity some days before an EQ, and (f) cessation a few hours before the EQ occurrence above the epicenter. In this study we present results from both case and statistical studies which provide significant evidence that, among EQs, lightning, and Earth-based transmitters, strong seismic activity during a substorm makes the main contribution to the long-lasting (~1-3 min) RBEP events at middle latitudes.

  6. Defeating Earthquakes

    NASA Astrophysics Data System (ADS)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the Century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti, quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  7. Equations for estimating horizontal response spectra and peak acceleration from western North American earthquakes: A summary of recent work

    USGS Publications Warehouse

    Boore, D.M.; Joyner, W.B.; Fumal, T.E.

    1997-01-01

    In this paper we summarize our recently-published work on estimating horizontal response spectra and peak acceleration for shallow earthquakes in western North America. Although none of the sets of coefficients given here for the equations are new, for the convenience of the reader and in keeping with the style of this special issue, we provide tables for estimating random horizontal-component peak acceleration and 5 percent damped pseudo-acceleration response spectra in terms of the natural, rather than common, logarithm of the ground-motion parameter. The equations give ground motion in terms of moment magnitude, distance, and site conditions for strike-slip, reverse-slip, or unspecified faulting mechanisms. Site conditions are represented by the shear velocity averaged over the upper 30 m, and recommended values of average shear velocity are given for typical rock and soil sites and for site categories used in the National Earthquake Hazards Reduction Program's recommended seismic code provisions. In addition, we stipulate more restrictive ranges of magnitude and distance for the use of our equations than in our previous publications. Finally, we provide tables of input parameters that include a few corrections to site classifications and earthquake magnitude (the corrections made a small enough difference in the ground-motion predictions that we chose not to change the coefficients of the prediction equations).
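
    A sketch of the functional form these equations take (magnitude terms, a log-distance term with a fictitious depth h, and a site term in the averaged shear velocity); the coefficients below are placeholders, not the published values, which should be taken from the paper's tables.

      import numpy as np

      def ln_ground_motion(Mw, r_jb_km, vs30, b1=-0.2, b2=0.5, b3=-0.1,
                           b5=-0.8, bv=-0.4, h=5.7, va=1400.0):
          # distance measure with fictitious depth h (km)
          r = np.sqrt(r_jb_km**2 + h**2)
          return (b1 + b2 * (Mw - 6.0) + b3 * (Mw - 6.0)**2
                  + b5 * np.log(r) + bv * np.log(vs30 / va))

      # illustrative evaluation in natural-log units, as in the paper
      print(f"ln Y = {ln_ground_motion(7.0, 20.0, 310.0):.2f}")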

  8. Revisiting borehole strain, typhoons, and slow earthquakes using quantitative estimates of precipitation-induced strain changes

    NASA Astrophysics Data System (ADS)

    Hsu, Ya-Ju; Chang, Yuan-Shu; Liu, Chi-Ching; Lee, Hsin-Ming; Linde, Alan T.; Sacks, Selwyn I.; Kitagawa, Genshio; Chen, Yue-Gau

    2015-06-01

    Taiwan experiences high deformation rates, particularly along its eastern margin, where a shortening rate of about 30 mm/yr is experienced in the Longitudinal Valley and the Coastal Range. Four Sacks-Evertson borehole strainmeters have been installed in this area since 2003. Liu et al. (2009) proposed that a number of strain transient events, primarily coincident with low barometric pressure during passages of typhoons, were due to deep-triggered slow slip. Here we extend that investigation with a quantitative analysis of the strain responses to precipitation as well as barometric pressure and the Earth tides in order to isolate tectonic source effects. Estimates of the strain responses to barometric pressure and groundwater level changes at the different stations vary over the ranges -1 to -3 nanostrain/hPa (millibar) and -0.3 to -1.0 nanostrain/hPa, respectively, consistent with theoretical values derived using Hooke's law. Liu et al. (2009) noted that during some typhoons, including at least one with very heavy rainfall, the observed strain changes were consistent with only barometric forcing. By considering a more extensive data set, we now find that the strain response to rainfall is about -5.1 nanostrain/hPa. A larger strain response to rainfall compared to those to air pressure and water level may be associated with additional strain from fluid pressure changes that take place due to infiltration of precipitation. Using a state-space model, we remove the strain response to rainfall, in addition to those due to air pressure changes and the Earth tides, and investigate whether the corrected strain changes are related to environmental disturbances or to motions of tectonic origin. The majority of strain changes attributed to slow earthquakes seem rather to be associated with environmental factors. However, some events show remaining strain changes after all corrections. These events include strain polarity changes during passages of typhoons (a characteristic that is

  9. An Optimum Model to Estimate Path Losses for 400 MHz Band Land Mobile Radio

    NASA Astrophysics Data System (ADS)

    Miyashita, Michifumi; Terada, Takashi; Serizawa, Yoshizumi

    It is difficult to estimate path loss for land mobile radio using a single path loss model, such as the diffraction model or the Okumura model alone, when mobile radio is used over a widespread area. Furthermore, high accuracy in path loss estimation is needed when the radio system is digitized, because degradation of CNR due to interference deteriorates communications. In this paper, conventional path loss models, i.e. the diffraction model, the Okumura model and the two-ray model, were evaluated against 400 MHz land mobile radio field measurements, and a method that improves path loss estimation by using each of these conventional models selectively was proposed. The ratio of errors falling between -10 dB and +10 dB for the method applying the correction factors derived from our field measurements was 71.41%, while the ratios for the conventional diffraction and Okumura models without any correction factors were 26.71% and 49.42%, respectively.
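
    As a hedged stand-in for the Okumura-model computations named above, Hata's closed-form fit to Okumura's urban curves is sketched below; it is one of the conventional models a selective scheme could switch between, and the inputs are illustrative.

      import math

      def hata_urban_loss_db(f_mhz, h_base_m, h_mobile_m, d_km):
          # small/medium-city mobile antenna correction term
          a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
                  - (1.56 * math.log10(f_mhz) - 0.8))
          return (69.55 + 26.16 * math.log10(f_mhz)
                  - 13.82 * math.log10(h_base_m) - a_hm
                  + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

      print(f"{hata_urban_loss_db(400.0, 30.0, 1.5, 5.0):.1f} dB")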

  10. New Method for Estimating Landslide Losses for Major Winter Storms in California.

    NASA Astrophysics Data System (ADS)

    Wills, C. J.; Perez, F. G.; Branum, D.

    2014-12-01

    We have developed a prototype system for estimating the economic costs of landslides due to winter storms in California. This system uses some of the basic concepts and estimates of the value of structures from the HAZUS program developed for FEMA. Using the only relatively complete landslide loss data set that we could obtain, data gathered by the City of Los Angeles in 1978, we have developed relations between landslide susceptibility and loss ratio for private property (represented as the value of wood frame structures from HAZUS). The landslide loss ratios estimated from the Los Angeles data are calibrated using more generalized data from the 1982 storms in the San Francisco Bay area to develop relationships that can be used to estimate loss for any value of 2-day or 30-day rainfall averaged over a county. The current estimates for major storms are long projections from very small data sets, subject to very large uncertainties, and so provide only a very rough estimate of the landslide damage to structures and infrastructure on hill slopes. More importantly, the system can be extended and improved with additional data and used to project landslide losses in future major winter storms. The key features of this system (the landslide susceptibility map, the relationship between susceptibility and loss ratio, and the calibration of estimates against losses in past storms) can all be improved with additional data. Most importantly, this study highlights the importance of comprehensive studies of landslide damage. Detailed surveys of landslide damage following future storms that include locations and amounts of damage for all landslides within an area are critical for building a well-calibrated system to project future landslide losses. Without an investment in post-storm landslide damage surveys, it will not be possible to improve estimates of the magnitude or distribution of landslide damage, which can range up to billions of dollars.

  11. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  12. Estimation of furrow irrigation sediment loss using an artificial neural network

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The area irrigated by furrow irrigation in the U.S. has been steadily decreasing but still represents about 20% of the total irrigated area in the U.S. Furrow irrigation sediment loss is a major water quality issue and a method for estimating sediment loss is needed to quantify the environmental imp...

  13. Estimation of soil loss by water erosion in the Chinese Loess Plateau using Universal Soil Loss Equation and GRACE

    NASA Astrophysics Data System (ADS)

    Schnitzer, S.; Seitz, F.; Eicker, A.; Güntner, A.; Wattenbach, M.; Menzel, A.

    2013-06-01

    For the estimation of soil loss by erosion in the strongly affected Chinese Loess Plateau we applied the Universal Soil Loss Equation (USLE) using a number of input data sets (monthly precipitation, soil types, digital elevation model, land cover and soil conservation measures). Calculations were performed in ArcGIS and SAGA. The large-scale soil erosion in the Loess Plateau results in a strong non-hydrological mass change. In order to investigate whether the resulting mass change from USLE may be validated by the gravity field satellite mission GRACE (Gravity Recovery and Climate Experiment), we processed different GRACE level-2 products (ITG, GFZ and CSR). The mass variations estimated in the GRACE trend were relatively close to the observed sediment yield data of the Yellow River. However, the soil losses resulting from two USLE parameterizations were comparatively high since USLE does not consider the sediment delivery ratio. Most eroded soil stays in the study area and only a fraction is exported by the Yellow River. Thus, the resultant mass loss appears to be too small to be resolved by GRACE.
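
    The USLE itself is a simple product of empirical factors; the sketch below shows the form with illustrative values only, not the parameterizations used in the study.

      # A = R * K * LS * C * P (mean annual soil loss, t/ha/yr)
      R = 1200.0   # rainfall erosivity (assumed)
      K = 0.40     # soil erodibility (assumed)
      LS = 4.5     # slope length-steepness factor (assumed)
      C = 0.25     # cover management factor (assumed)
      P = 1.0      # support practice factor (assumed)

      A = R * K * LS * C * P
      print(f"soil loss A ~ {A:.0f} t/ha/yr")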

  14. Combining MODIS and Landsat imagery to estimate and map boreal forest cover loss

    USGS Publications Warehouse

    Potapov, P.; Hansen, M.C.; Stehman, S.V.; Loveland, T.R.; Pittman, K.

    2008-01-01

    Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to their unique role in global timber stock, carbon sequestration and deposition, and their high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% versus 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss. © 2008 Elsevier Inc.
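
    The biome-wide numbers above come from a standard stratified-sampling estimator; a minimal sketch with illustrative stratum weights, sample means and variances follows.

      import numpy as np

      strata_weight = np.array([0.10, 0.30, 0.60])    # share of biome area (assumed)
      sample_mean = np.array([0.080, 0.020, 0.004])   # loss fraction per stratum
      sample_var = np.array([0.0020, 0.0006, 0.0001]) # within-stratum variance
      n = np.array([40, 40, 38])                      # interpreted blocks per stratum

      p_hat = (strata_weight * sample_mean).sum()
      se = np.sqrt((strata_weight**2 * sample_var / n).sum())
      print(f"gross loss fraction = {p_hat:.4f} +/- {se:.4f} (SE)")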

  15. Estimating the similarity of earthquake focal mechanisms from waveform cross-correlation in regions of minimal local azimuthal station coverage

    NASA Astrophysics Data System (ADS)

    Kilb, D. L.; Martynov, V.; Bowen, J.; Vernon, F.; Eakins, J.

    2002-12-01

    In the Xinjiang province of China, ~2000 earthquakes recorded by the Tien Shan network during 1997-1999 exhibit a clear spatial progression of seismicity. This progression, which is confined to a 50 km diameter region, is undetectable in other data catalogs, both global (e.g., REB, PDE, CMT) and local (KIS). The two largest earthquakes in this sequence were the M6.1 August 2, 1998, and the M6.2 August 27, 1998, earthquakes. According to the Harvard moment tensor solutions, both events ruptured faults that trend parallel to the geologic structures in the region (~N55W). However, the August 27 event was a vertical strike-slip event, while the August 2 event ruptured a dipping fault and had a normal component of slip. These slip directions are counter to what we expect for this fold-and-thrust belt, which typically has earthquakes with thrust mechanisms. Often seismological researchers make the assumption that aftershocks have the same focal mechanism as their associated mainshocks and/or assume all aftershock fault planes are similarly oriented. We test this assumption by examining the similarity of aftershock mechanisms from the August 2 and August 27 mainshocks. It is difficult to determine focal mechanisms from inversions of full seismic waveforms because the velocity structure in the Tien Shan region is so complicated that a 3D velocity model would be required. Also, the azimuthal station coverage is poor. Likewise, it is impossible to determine accurate focal mechanisms from first-motion data because the closest seismic stations have weak and complicated first arrivals. Our approach instead determines the similarity of earthquake focal mechanisms using waveform cross-correlation. In this way information from the full waveform is utilized, and there is no need to estimate the complicated velocity structure. In general, we find there is minimal correlation between pairs of event waveforms (filtered 1-8 Hz) within each aftershock sequence. For example, at
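
    A minimal sketch of the waveform similarity measure such approaches rely on: the maximum of the normalized cross-correlation between two traces, computed here on synthetic signals.

      import numpy as np

      def max_norm_xcorr(x, y):
          x = (x - x.mean()) / np.linalg.norm(x - x.mean())
          y = (y - y.mean()) / np.linalg.norm(y - y.mean())
          return np.abs(np.correlate(x, y, mode="full")).max()

      t = np.linspace(0.0, 1.0, 500)
      a = np.sin(2.0 * np.pi * 5.0 * t) * np.exp(-3.0 * t)
      b = np.roll(a, 25) + 0.1 * np.random.default_rng(1).normal(size=t.size)
      print(f"max normalized cross-correlation = {max_norm_xcorr(a, b):.2f}")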

  16. Impact-based earthquake alerts with the U.S. Geological Survey's PAGER system: what's next?

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Garcia, D.; So, E.; Hearne, M.

    2012-01-01

    In September 2010, the USGS began publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses with its Prompt Assessment of Global Earthquakes for Response (PAGER) system. These estimates significantly enhanced the utility of the USGS PAGER system which had been, since 2006, providing estimated population exposures to specific shaking intensities. Quantifying earthquake impacts and communicating estimated losses (and their uncertainties) to the public, the media, humanitarian, and response communities required a new protocol—necessitating the development of an Earthquake Impact Scale—described herein and now deployed with the PAGER system. After two years of PAGER-based impact alerting, we now review operations, hazard calculations, loss models, alerting protocols, and our success rate for recent (2010-2011) events. This review prompts analyses of the strengths, limitations, opportunities, and pressures, allowing clearer definition of future research and development priorities for the PAGER system.

  17. Fuzzy Discrimination Analysis Method for Earthquake Energy K-Class Estimation with respect to Local Magnitude Scale

    NASA Astrophysics Data System (ADS)

    Mumladze, T.; Gachechiladze, J.

    2014-12-01

    The purpose of the present study is to establish a relation between the earthquake energy K-class (the relative energy characteristic), defined as the logarithm of the seismic wave energy E in joules obtained from analog station data, and the local (Richter) magnitude ML obtained from digital seismograms. Because these data contain uncertainties, the effective tools of fuzzy discrimination analysis are suggested for subjective estimates. Application of fuzzy analysis methods is an innovative approach to solving the complicated problem of constructing a uniform energy scale through the whole earthquake catalogue; it also avoids many of the data collection problems associated with probabilistic approaches, and it can handle incomplete information, partial inconsistency and fuzzy descriptions of data in a natural way. Another important task is to obtain the frequency-magnitude relation based on the K parameter, to calculate the Gutenberg-Richter parameters (a, b), and to examine seismic activity in Georgia. Earthquake data files are used for the periods 1985-1990 and 2004-2009 for the area φ = 41°-43.5°, λ = 41°-47°.
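
    For the Gutenberg-Richter step mentioned above, a minimal sketch of Aki's maximum-likelihood b-value estimate on a synthetic catalogue follows; with binned magnitudes one would substitute Mc - dM/2 for Mc (Utsu's correction).

      import numpy as np

      rng = np.random.default_rng(42)
      mc = 2.0                                            # completeness magnitude
      mags = mc + rng.exponential(scale=0.45, size=2000)  # synthetic, b ~ 0.97

      b = np.log10(np.e) / (mags.mean() - mc)    # Aki (1965) maximum likelihood
      a = np.log10(mags.size) + b * mc           # a-value from N(M >= mc)
      print(f"b = {b:.2f}, a = {a:.2f}")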

  18. GPS estimates of microplate motions, northern Caribbean: evidence for a Hispaniola microplate and implications for earthquake hazard

    NASA Astrophysics Data System (ADS)

    Benford, B.; DeMets, C.; Calais, E.

    2012-09-01

    We use elastic block modelling of 126 GPS site velocities from Jamaica, Hispaniola, Puerto Rico and other islands in the northern Caribbean to test for the existence of a Hispaniola microplate and estimate angular velocities for the Gônave, Hispaniola, Puerto Rico-Virgin Islands and two smaller microplates relative to each other and the Caribbean and North America plates. A model in which the Gônave microplate spans the whole plate boundary between the Cayman spreading centre and Mona Passage west of Puerto Rico is rejected at a high confidence level. The data instead require an independently moving Hispaniola microplate between the Mona Passage and a likely diffuse boundary within or offshore from western Hispaniola. Our updated angular velocities predict 6.8 ± 1.0 mm yr-1 of left-lateral slip along the seismically hazardous Enriquillo-Plantain Garden fault zone of southwest Hispaniola, 9.8 ± 2.0 mm yr-1 of slip along the Septentrional fault of northern Hispaniola and ~14-15 mm yr-1 of left-lateral slip along the Oriente fault south of Cuba. They also predict 5.7 ± 1 mm yr-1 of fault-normal motion in the vicinity of the Enriquillo-Plantain Garden fault zone, faster than previously estimated and possibly accommodated by folds and faults in the Enriquillo-Plantain Garden fault zone borderlands. Our new and a previous estimate of Gônave-Caribbean plate motion suggest that enough elastic strain accumulates to generate one to two Mw ~7 earthquakes per century along the Enriquillo-Plantain Garden and nearby faults of southwest Hispaniola. That the 2010 M = 7.0 Haiti earthquake ended a 240-yr-long period of seismic quiescence in this region raises concerns that it could mark the onset of a new earthquake sequence that will relieve elastic strain that has accumulated since the late 18th century.

  1. Toward Reconciling Magnitude Discrepancies Estimated from Paleoearthquake Data: A New Approach for Predicting Earthquake Magnitudes from Fault Segment Lengths

    NASA Astrophysics Data System (ADS)

    Carpenter, N. S.; Payne, S. J.; Schafer, A. L.

    2011-12-01

    We recognize a discrepancy in magnitudes estimated for several Basin and Range faults in the Intermountain Seismic Belt, U.S.A. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths, Lseg, where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements, D, along these same segments. For self-similarity, empirical relationships (e.g. Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e. M ~ Lseg, should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data and parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption - grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984) - is equating Lseg with surface rupture length, SRL. Many large historical events yielded secondary and/or sympathetic faulting (e.g. 1983 Borah Peak, Idaho earthquake) which are included in the measurement of SRL and used to derive empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using Lseg as SRL leads to an underestimation of magnitude and the M ~ Lseg and M ~ D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude, Mw, and length, where length is Lseg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. The preliminary Mw ~ Lseg results are strikingly consistent with Mw ~ D calculations using paleoearthquake data for
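
    To illustrate the discrepancy being addressed, the sketch below feeds a shorter segment length and a longer full surface rupture length into the Wells and Coppersmith (1994) all-slip-type SRL regression; the lengths are hypothetical.

      import math

      def m_from_srl(srl_km):
          # Wells and Coppersmith (1994), all slip types: M = 5.08 + 1.16 log10(SRL)
          return 5.08 + 1.16 * math.log10(srl_km)

      l_seg, srl = 22.0, 36.0   # km: hypothetical segment vs full rupture length
      print(f"M(L_seg) = {m_from_srl(l_seg):.2f}, M(SRL) = {m_from_srl(srl):.2f}")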

  2. Coseismic Fault Slip of the September 16, 2015 Mw 8.3 Illapel, Chile Earthquake Estimated from InSAR Data

    NASA Astrophysics Data System (ADS)

    Zhang, Yingfeng; Zhang, Guohong; Hetland, Eric A.; Shan, Xinjian; Wen, Shaoyan; Zuo, Ronghu

    2016-04-01

    The complete surface deformation of the 2015 Mw 8.3 Illapel, Chile earthquake is obtained using SAR interferograms from descending and ascending Sentinel-1 orbits. We find that the Illapel event is predominantly thrust, as expected for an earthquake on the interface between the Nazca and South America plates, with a slight right-lateral strike-slip component. The maximum thrust slip and right-lateral strike slip reach 8.3 and 1.5 m, respectively, both located at a depth of 8 km, northwest of the epicenter. The total estimated seismic moment is 3.28 × 10²¹ N m, corresponding to a moment magnitude Mw 8.27. In our model, the rupture breaks all the way up to the sea floor at the trench, which is consistent with the destructive tsunami following the earthquake. We also find the slip distribution correlates closely with previous estimates of the interseismic locking distribution. We argue that positive Coulomb stress changes caused by the Illapel earthquake may favor earthquakes on the extensional faults in this area. Finally, based on our inferred coseismic slip model and Coulomb stress calculation, we envision that the subduction interface that last slipped in the 1922 Mw 8.4 Vallenar earthquake might be near the upper end of its seismic quiescence, and the earthquake potential in this region is a pressing concern.
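
    The quoted magnitude follows from the standard moment-magnitude definition:

      \[ M_w = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right) = \tfrac{2}{3}\left(\log_{10}(3.28 \times 10^{21}) - 9.1\right) \approx 8.3. \]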

  3. Napa Earthquake impact on water systems

    NASA Astrophysics Data System (ADS)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California, on August 24 at 3 a.m. local time, with a magnitude of 6.0. It was the largest earthquake in the SF Bay Area since the 1989 Loma Prieta earthquake. Economic loss topped $1 billion. Winemakers were cleaning up and estimating the damage to tourism; around 15,000 cases of cabernet poured onto the grounds at the Hess Collection. Earthquakes can raise water pollution risks and could cause a water crisis. California has suffered water shortages in recent years, and understanding how to prevent groundwater and surface water pollution from earthquakes could be helpful. This research gives a clear view of the drinking water system in California and of pollution in river systems, as well as an estimate of earthquake impacts on the water supply. The Sacramento-San Joaquin River Delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water exports, and salt water intrusion has reduced fresh water outflows. Strong shaking from a nearby earthquake can liquefy saturated, loose, sandy soils and could potentially damage the major Delta levee systems near Napa. The Napa earthquake is thus a wake-up call for Southern California: a similar event could damage the freshwater supply system.

  4. Reassessment of liquefaction potential and estimation of earthquake-induced settlements at Paducah Gaseous Diffusion Plant, Paducah, Kentucky. Final report

    SciTech Connect

    Sykora, D.W.; Yule, D.E.

    1996-04-01

    This report documents a reassessment of liquefaction potential and estimation of earthquake-induced settlements for the U.S. Department of Energy (DOE), Paducah Gaseous Diffusion Plant (PGDP), located southwest of Paducah, KY. The U.S. Army Engineer Waterways Experiment Station (WES) was authorized to conduct this study from FY91 to FY94 by the DOE, Oak Ridge Operations (ORO), Oak Ridge, TN, through Inter-Agency Agreement (IAG) No. DE-AI05-91OR21971. The study was conducted under the Gaseous Diffusion Plant Safety Analysis Report (GDP SAR) Program.

  5. Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions

    NASA Astrophysics Data System (ADS)

    White, Randall; McCausland, Wendy

    2016-01-01

    We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly, we find that the VT seismicity originates at distal locations on tectonic fault structures at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity, including its swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and large non-double-couple components in focal mechanisms. Most importantly we show that the intruded magma volume can be simply estimated from the cumulative seismic moment of the VT seismicity: log10 V = 0.77 log10 ΣM0 - 5.32, with volume, V, in cubic meters and cumulative seismic moment, ΣM0, in newton meters. Because the cumulative seismic moment can be approximated from the size of just the few largest events, and is quite insensitive to precise locations
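
    The quoted empirical relation is simple to apply; the swarm moment below is an arbitrary illustrative value.

      import math

      def intruded_volume_m3(cumulative_moment_nm):
          # Relation quoted in the abstract:
          # log10(V) = 0.77 * log10(sum M0) - 5.32, V in m^3, M0 in N·m
          return 10 ** (0.77 * math.log10(cumulative_moment_nm) - 5.32)

      # A VT swarm whose largest events sum to ~1e15 N·m:
      print(f"{intruded_volume_m3(1e15):.2e} m^3")  # ~1.70e+06 m^3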

  6. Applying a fuzzy-set-based method for robust estimation of coupling loss factors

    NASA Astrophysics Data System (ADS)

    Nunes, R. F.; Ahmida, K. M.; Arruda, J. R. F.

    2007-10-01

    Finite element models have been used by many authors to provide accurate estimations of coupling loss factors. Although much progress has been achieved in this area, little attention has been paid to the influence of uncertain parameters in the finite element model used to estimate these factors. It is well known that, in the mid-frequency range, uncertainty is a major issue. In this context, a spectral element method combined with a special implementation of a fuzzy-set-based method, called the transformation method, is proposed as an alternative way to compute coupling loss factors. The proposed technique is applied to a frame-type junction consisting of two beams connected at an arbitrary angle. Two problems are investigated. In the first, the influence of the confidence intervals of the coupling loss factors on the estimated energy envelopes, assuming a unit power input, is considered. In the second, the influence of the envelope of the input power, obtained by considering the confidence intervals of the coupling loss factors, is also taken into account. The estimates of the intervals are obtained by using the spectral element method combined with the fuzzy-set-based method. Results using a Monte Carlo analysis for the estimation of the coupling loss factors under the influence of uncertain parameters are shown for comparison and verification of the fuzzy method.
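
    The interval logic of the transformation method can be sketched in a few lines: at each alpha-cut, the model is evaluated at every combination of interval endpoints and the outputs are enveloped. The clf function below is a hypothetical stand-in for the spectral element model, purely to show the mechanics.

      import itertools

      def transformation_method(f, fuzzy_params, alphas=(0.0, 0.5, 1.0)):
          # fuzzy_params: triangular fuzzy numbers given as (low, peak, high).
          # For each alpha-cut, evaluate f at all endpoint combinations and
          # keep the min/max as the output envelope (reduced transformation
          # method; exact for monotonic models).
          envelopes = {}
          for alpha in alphas:
              intervals = [(a + alpha * (m - a), b - alpha * (b - m))
                           for a, m, b in fuzzy_params]
              outputs = [f(*combo) for combo in itertools.product(*intervals)]
              envelopes[alpha] = (min(outputs), max(outputs))
          return envelopes

      # Toy stand-in: a "coupling loss factor" from an uncertain coupling
      # stiffness and an uncertain modal density.
      clf = lambda k, n: k / n
      print(transformation_method(clf, [(0.8, 1.0, 1.2), (9.0, 10.0, 12.0)]))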

  7. Probabilistic estimation of earthquake-induced tsunami occurrences in the Adriatic and northern Ionian seas

    NASA Astrophysics Data System (ADS)

    Armigliato, Alberto; Tinti, Stefano

    2010-05-01

    In the framework of the EU-funded project TRANSFER (Tsunami Risk ANd Strategies For the European Region), we faced the problem of quantitatively assessing the tsunami hazard in the Adriatic and north Ionian Seas. Tsunami catalogues indicate that the Ionian Sea coasts have been hit by several large historical tsunamis, some of which were of local nature (especially along eastern Sicily, eastern Calabria, and the Greek Ionian Islands), while others had trans-basin relevance, like those generated along the western Hellenic Trench. In the Adriatic Sea the historical tsunami activity is lower, but not negligible: the most exposed regions on the western side of the basin are Romagna-Marche, Gargano, and southern Apulia, while on the eastern side the Dalmatian and Albanian coastlines show the largest tsunami exposure. To quantitatively assess the exposure of the selected coastlines to tsunamis we used a hybrid statistical-deterministic approach, already applied in the recent past to the southern Tyrrhenian and Ionian coasts of Italy. The general idea is to base the tsunami hazard analyses on the computation of the probability of occurrence of tsunamigenic earthquakes, which is appropriate in basins where the number of known historical tsunamis is too scarce to be used in reliable statistical analyses and where the largest part of the tsunamis had a tectonic origin. The approach is based on the combination of two steps of different nature. The first step consists in the creation of a single homogeneous earthquake catalogue starting from suitably selected catalogues pertaining to each of the main regions facing the Adriatic and north Ionian basins (Italy, Croatia, Montenegro, Greece). The final catalogue contains 6619 earthquakes with moment magnitude ranging from 4.5 to 8.3 and focal depth shallower than 50 km. The limitations in magnitude and depth are based on the assumption that earthquakes of magnitude lower than 4.5 and depth greater than 50 km have no significant
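
    The statistical step of such a hybrid approach typically converts Gutenberg-Richter recurrence rates for tsunamigenic earthquakes into occurrence probabilities over an exposure window; the sketch below is generic, with placeholder a and b values rather than the catalogue's.

      import math

      def annual_rate(mag, a=4.0, b=1.0):
          # Gutenberg-Richter cumulative rate: log10 N(>=M) = a - b*M
          return 10 ** (a - b * mag)

      def prob_at_least_one(mag, years, a=4.0, b=1.0):
          # Poisson probability of at least one event of magnitude >= mag
          return 1.0 - math.exp(-annual_rate(mag, a, b) * years)

      print(round(prob_at_least_one(7.0, 50), 3))  # ~0.049 for these placeholders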

  8. Earthquake-triggered liquefaction in Southern Siberia and surroundings: a base for predictive models and seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Lunina, Oksana

    2016-04-01

    The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated, using field methods and a database of coseismic effects created as a GIS MapInfo application, with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effect from the epicenter and from the causative fault (Lunina et al., 2014). The estimated limiting distance to the fault for the largest event (Ms = 8.1) is 130 km, about 3.5 times shorter than the corresponding distance to the epicenter, 450 km. Moreover, the greater the distance from the fault, the fewer liquefaction cases occur: 93% of them lie within 40 km of the causative fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows the distances to be within 8 km, with 69% of all cases within 1 km. As a result, predictive models have been created for locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested relating the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height, and intensity index of clastic dikes) to Ms and to local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The results obtained provide a basis for modeling the distribution of this geohazard for the purposes of prediction and for estimating earthquake parameters from liquefaction-induced clastic dikes. The author is grateful to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing the laboratory in which to carry out this research, and to the Russian Science Foundation for financial support (Grant 14-17-00007).

  9. Postpartum blood loss: visual estimation versus objective quantification with a novel birthing drape

    PubMed Central

    Lertbunnaphong, Tripop; Lapthanapat, Numporn; Leetheeragul, Jarunee; Hakularb, Pussara; Ownon, Amporn

    2016-01-01

    INTRODUCTION Immediate postpartum haemorrhage (PPH) is the most common cause of maternal mortality worldwide. Most recommendations focus on its prevention and management. Visual estimation of blood loss is widely used for the early detection of PPH, but the most appropriate method remains unclear. This study aimed to compare the efficacy of visual estimation and objective measurement using a sterile under-buttock drape, to determine the volume of postpartum blood loss. METHODS This study evaluated patients aged ≥ 18 years with low-risk term pregnancies, who delivered vaginally. Immediately after delivery, a birth attendant inserted the drape under the patient’s buttocks. Postpartum blood loss was measured by visual estimation and then compared with objective measurement using the drape. All participants received standard intra- and postpartum care. RESULTS In total, 286 patients with term pregnancies were enrolled. There was a significant difference in postpartum blood loss between visual estimation and objective measurement using the under-buttock drape (178.6 ± 133.1 mL vs. 259.0 ± 174.9 mL; p < 0.0001). Regarding accuracy at 100 mL discrete categories of postpartum blood loss, visual estimation was found to be inaccurate, resulting in underestimation, with low correspondence (27.6%) and poor agreement (Cohen’s kappa coefficient 0.07; p < 0.05), compared with objective measurement using the drape. Two-thirds of cases of immediate PPH (65.4%) were misdiagnosed using visual estimation. CONCLUSION Visual estimation is not optimal for measurement of postpartum blood loss in PPH. This method should be withdrawn from standard obstetric practice and replaced with objective measurement using the sterile under-buttock drape. PMID:27353510
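
    The agreement statistic reported above can be reproduced for any pair of binned measurements; the confusion-matrix counts below are illustrative, not the study's data.

      import numpy as np

      def cohens_kappa(confusion):
          # Cohen's kappa from a square confusion matrix of counts
          # (rows: visual estimate bin, cols: drape measurement bin).
          c = np.asarray(confusion, dtype=float)
          n = c.sum()
          po = np.trace(c) / n                                 # observed agreement
          pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2  # chance agreement
          return (po - pe) / (1.0 - pe)

      print(round(cohens_kappa([[40, 30, 5], [20, 50, 40], [2, 8, 25]]), 2))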

  10. The use of streambed temperatures to estimate transmission losses on an experimental channel.

    SciTech Connect

    Ramon C. Naranjo; Michael H. Young; Richard Niswonger; Julianne J. Miller; Richard H. French

    2001-10-18

    Quantifying channel transmission losses in arid environments is important for a variety of reasons, from the engineering design of flood control structures to evaluating recharge. To quantify the losses in an alluvial channel, an experiment was performed on a 2-km reach of an alluvial fan located on the Nevada Test Site. The channel was subjected to three separate flow events. Transmission losses were estimated using standard discharge monitoring and a subsurface temperature modeling approach. Four stations were equipped to continuously monitor stage, temperature, and water content. Streambed temperatures measured at 0, 30, 50, and 100 cm depths were used to calibrate VS2DH, a two-dimensional, variably saturated flow model. Average losses based on the difference in flow between stations indicate that 21 percent, 27 percent, and 53 percent of the flow was lost downgradient of the source. Results from the temperature monitoring identified locations with large thermal gradients, suggesting conduction-dominated heat transfer in streambed sediments where caliche-cemented surfaces were present. Transmission losses at the lowermost segment corresponded to the smallest thermal gradient, suggesting advection-dominated heat transfer. Losses predicted by VS2DH are within an order of magnitude of the losses estimated from discharge measurements. The differences in losses are a result of the spatial extent to which the modeling results are applied and lateral subsurface flow.

  11. Estimating high frequency energy radiation of large earthquakes by image deconvolution back-projection

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim

    2016-09-01

    High frequency energy radiation of large earthquakes is a key to evaluating shaking damage and is an important source characteristic for understanding rupture dynamics. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP) to retrieve high frequency energy radiation of seismic sources by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response for a point source. The array response that spreads energy both in space and time is evaluated by using data of a smaller reference earthquake that can be assumed to be a point source. The synthetic test of the method shows that the spatial and temporal resolution of the source is much better than that for the conventional back-projection method. We applied this new method to the 2001 Mw 7.8 Kunlun earthquake using data recorded by Hi-net in Japan. The new method resolves a sharp image of the high frequency energy radiation with a significant portion of supershear rupture.
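
    The core of the IDBP idea is a linear model in which the observed back-projection image is the true energy release convolved with the array response of a point-source reference event. A one-dimensional toy version, using non-negative least squares, is sketched below; it is not the authors' implementation.

      import numpy as np
      from scipy.linalg import toeplitz
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      n = 100
      r = np.exp(-np.arange(n) / 5.0)         # stand-in array response (reference event)
      A = toeplitz(r, np.zeros(n))            # convolution written as a matrix
      x_true = np.zeros(n)
      x_true[[20, 45, 50]] = [1.0, 0.6, 0.8]  # true energy-release time history
      b = A @ x_true + 0.01 * rng.standard_normal(n)  # "observed" image

      x_est, _ = nnls(A, b)                   # non-negative linear deconvolution
      print(sorted(np.argsort(x_est)[-3:]))   # should recover peaks near 20, 45, 50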

  12. Estimation of the seismic moment tensor from teleseismic body wave data with applications to intraplate and mantle earthquakes

    NASA Astrophysics Data System (ADS)

    Fitch, Thomas J.; McCowan, Douglas W.; Shields, Michael W.

    1980-07-01

    Amplitude data from direct and near-source reflected phases are inverted to obtain point-source moment tensors. The inversion scheme is computationally efficient, and the results can be interpreted without the uniqueness problems that plague many geophysical inversion schemes. This follows from the linear relationship between the moment tensor components and the recorded waveforms. The L1 norm is used as an optimum solution criterion, thereby allowing first motions to be included in the data set. A mixed data set is warranted when only a small number of amplitude measurements are available. Displacement amplitudes at the recording stations are estimated by seismogram modeling in the case of the shallow earthquake and by the application of an optimum lag inverse filter in the case of the deep earthquake. The inverse filter is designed to remove the combined effects of the recording system and signal distortion owing to anelasticity. Long-period P waves from an intraplate earthquake located between the Caribbean arc and the mid-Atlantic ridge at a depth of 25 km reveal a source with a moment time function in the far field that has rise and fall times of 2±1 s. By implication the duration of faulting was short in comparison with shallow earthquakes of similar size at active plate margins. Approximately 89% of the total moment of 0.8 × 10^25 dyn cm pertains to a change in deviatoric stress, which is represented almost totally by a double couple. A 20% increase in the double couple component was achieved by a systematic steepening by 5°-8° of takeoff angles for ray paths to teleseismic distances computed from the Herrin travel times. A submoho source depth is assumed, consistent with generally accepted models of oceanic lithosphere. The double couple component from the moment tensor is similar to the first motion solution but is dominated by a strike-slip rather than a dip-slip radiation pattern. Amplitudes and first motion polarities from a deep earthquake beneath the
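
    Because the forward problem is linear in the moment-tensor components, an L1-norm solution can be approximated by iteratively reweighted least squares; the toy system below stands in for "amplitudes = kernels × moment tensor" and is not the authors' code.

      import numpy as np

      def l1_inversion(G, d, iters=50, eps=1e-6):
          # Approximate min ||d - G m||_1 by IRLS: each pass solves a
          # weighted L2 problem with row weights ~ 1/|residual|.
          m = np.linalg.lstsq(G, d, rcond=None)[0]
          for _ in range(iters):
              r = d - G @ m
              w = 1.0 / np.sqrt(np.abs(r) + eps)  # sqrt so squared weight ~ 1/|r|
              m = np.linalg.lstsq(G * w[:, None], d * w, rcond=None)[0]
          return m

      rng = np.random.default_rng(1)
      G = rng.standard_normal((40, 6))            # 6 moment-tensor components
      m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])
      d = G @ m_true
      d[::10] += 5.0                              # a few gross amplitude outliers
      print(np.round(l1_inversion(G, d), 2))      # close to m_true despite outliers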

  13. Estimation of Damaged Areas due to the 2010 Chile Earthquake and Tsunami Using SAR Imagery of Alos/palsar

    NASA Astrophysics Data System (ADS)

    Made, Pertiwi Jaya Ni; Miura, Fusanori; Besse Rimba, A.

    2016-06-01

    Large-scale earthquakes and tsunamis affect thousands of people and cause serious damage worldwide every year. Quick observation of disaster damage is extremely important for planning effective rescue operations. In the past, acquiring damage information was limited to field surveys or aerial photographs. In the last decade, space-borne images have been used in many disaster studies, such as tsunami damage detection. In this study, SAR data from ALOS/PALSAR satellite images were used to estimate tsunami damage in the form of inundation areas in Talcahuano, the area near the epicentre of the 2010 Chile earthquake. The image processing consisted of three stages, i.e. pre-processing, analysis processing, and post-processing, and was conducted using multi-temporal images from before and after the disaster. In the analysis processing, inundation areas were extracted through masking: water masking using a high-resolution optical image from ALOS/AVNIR-2, and elevation masking based on the inundation height using an ASTER-GDEM digital elevation model. The estimated inundation area was 8.77 km2, in good agreement with the inundation map of Talcahuano. Future study in other areas is needed in order to strengthen the estimation method.
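
    The masking step reduces to boolean raster algebra; the array names and the 10 m height threshold below are illustrative assumptions, not values from the study.

      import numpy as np

      rng = np.random.default_rng(2)
      water_before = np.zeros((4, 4), dtype=bool)        # pre-event water mask
      water_after = rng.random((4, 4)) > 0.5             # post-event water mask
      dem = rng.uniform(0.0, 30.0, (4, 4))               # elevation, metres

      # New water that lies below the assumed inundation height:
      inundated = water_after & ~water_before & (dem < 10.0)
      area_km2 = inundated.sum() * (0.0125 ** 2)         # e.g. 12.5 m pixels
      print(inundated.sum(), round(area_km2, 6))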

  14. Estimating field-of-view loss in bathymetric lidar: application to large-scale simulations.

    PubMed

    Carr, Domenic; Tuell, Grady

    2014-07-20

    When designing a bathymetric lidar, it is important to study simulated waveforms for various combinations of system and environmental parameters. To predict a system's ranging accuracy, it is often necessary to analyze thousands of waveforms. In these large-scale simulations, estimating field-of-view loss is a challenge because the calculation is complex and computationally intensive. This paper describes a new procedure for quickly approximating this loss, and illustrates how it can be used to efficiently predict ranging accuracy. PMID:25090208

  15. Programmable calculator program for linear somatic cell scores to estimate mastitis yield losses.

    PubMed

    Kirk, J H

    1984-02-01

    A programmable calculator program calculates the loss of milk yield in dairy cows based on linear somatic cell count scores. The program displays the distribution of the herd by lactation number and linear score for the present situation and for an optimal goal situation. Loss of yield is given in pounds and dollars, by cow and by herd. The program also estimates the optimal milk production and the number of fewer cows needed at the mastitis-infection goal. PMID:6546938
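
    The bookkeeping such a program performs is easy to re-create; the loss-per-score coefficients below are stated assumptions for illustration, not the published values.

      # Assumed rule: each linear-score unit above 2 costs ~200 lb of milk per
      # lactation for first-lactation cows and ~400 lb for older cows.
      LOSS_PER_SCORE_LB = {1: 200.0, 2: 400.0}   # keyed by parity group

      def herd_yield_loss(herd, milk_price_per_lb=0.20):
          # herd: list of (parity_group, linear_score) tuples
          pounds = sum(LOSS_PER_SCORE_LB[p] * max(score - 2.0, 0.0)
                       for p, score in herd)
          return pounds, pounds * milk_price_per_lb

      lbs, dollars = herd_yield_loss([(1, 4.0), (2, 5.5), (2, 1.0)])
      print(f"{lbs:.0f} lb, ${dollars:.2f}")     # 1800 lb, $360.00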

  16. Estimation of evaporative loss based on the stable isotope composition of water using Hydrocalculator

    NASA Astrophysics Data System (ADS)

    Skrzypek, Grzegorz; Mydłowski, Adam; Dogramaci, Shawan; Hedley, Paul; Gibson, John J.; Grierson, Pauline F.

    2015-04-01

    Accurate quantification of evaporative losses to the atmosphere from surface water bodies is essential for calibration and validation of hydrological models, particularly in remote arid and semi-arid regions, where intermittent rivers are generally minimally gauged. Analyses of the stable hydrogen and oxygen isotope composition of water can be used to estimate evaporative losses from individual pools in such regions in the absence of instrumental data, but the calculations can be complex, especially in highly variable systems. In this study, we reviewed and combined the most recent equations required for estimation of evaporative losses based on the revised Craig-Gordon model. The updated procedure is presented step by step, increasing the ease of replication of all calculations. The main constraints and sources of uncertainty in the model were also evaluated. Based on this procedure we have designed a new software tool, Hydrocalculator, which allows quick and robust estimation of evaporative losses based on the isotopic composition of water. The software was validated against measurements of field pan evaporation under arid conditions in northwest Australia as well as published data from other regions. We found that the major factor contributing to the overall uncertainty in evaporative loss calculations using this method is uncertainty in the estimation of the isotope composition of ambient air moisture.
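
    For a desiccating pool, the evaporated fraction follows from the Gonfiantini form of the revised Craig-Gordon model; the delta values below are illustrative, and in practice the limiting enrichment and slope are computed from humidity, temperature, and the fractionation factors.

      def evaporative_loss_fraction(d_L, d_0, d_star, m):
          # d_L = d_star - (d_star - d_0) * (1 - f)**m  =>  solve for f.
          # d_0: initial water, d_L: sampled water, d_star: limiting
          # isotopic enrichment, m: enrichment slope.
          return 1.0 - ((d_star - d_L) / (d_star - d_0)) ** (1.0 / m)

      # Illustrative delta-18O values (per mil):
      print(round(evaporative_loss_fraction(d_L=2.0, d_0=-8.0, d_star=5.0, m=2.5), 2))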

  17. Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model

    USGS Publications Warehouse

    Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David

    2012-01-01

    The Atlantic horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast which has, in recent years, experienced new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters, such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting or complete loss of tags could be occurring. As is apparent from the assumptions of a Jolly-Seber model, loss of tags can bias and underestimate a survival rate. Given that uncertainty, as a first step towards an unbiased estimate of adult survival, we took steps to estimate the rate of tag loss. Using data from a double-tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model to allow for the estimation of the loss of each tag separately and simultaneously.

  18. Method for estimating spatially variable seepage loss and hydraulic conductivity in intermittent and ephemeral streams

    USGS Publications Warehouse

    Niswonger, R.G.; Prudic, D.E.; Fogg, G.E.; Stonestrom, D.A.; Buckland, E.M.

    2008-01-01

    A method is presented for estimating seepage loss and streambed hydraulic conductivity along intermittent and ephemeral streams using streamflow front velocities in initially dry channels. The method uses the kinematic wave equation for routing streamflow in channels coupled to Philip's equation for infiltration. The coupled model considers variations in seepage loss both across and along the channel. Water redistribution in the unsaturated zone is also represented in the model. Sensitivity of the streamflow front velocity to parameters used for calculating seepage loss and for routing streamflow shows that the streambed hydraulic conductivity has the greatest sensitivity for moderate to large seepage loss rates. Channel roughness, geometry, and slope are most important for low seepage loss rates; however, streambed hydraulic conductivity is still important for values greater than 0.008 m/d. Two example applications are presented to demonstrate the utility of the method. Copyright 2008 by the American Geophysical Union.
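
    Philip's two-term equation, which the method couples to the kinematic wave routing, gives the infiltration (seepage) rate directly; the sorptivity, conductivity, and wetted perimeter below are assumed values for illustration.

      import numpy as np

      def philip_infiltration_rate(t_hours, sorptivity, k_sat):
          # Philip (two-term) model: i(t) = S / (2*sqrt(t)) + K
          t = np.asarray(t_hours, dtype=float)
          return sorptivity / (2.0 * np.sqrt(t)) + k_sat

      # Seepage volume per metre of channel over the first 6 h of flow,
      # for an assumed 5 m wetted perimeter (S in m/h^0.5, K in m/h):
      t = np.linspace(0.1, 6.0, 60)
      rate = philip_infiltration_rate(t, sorptivity=0.02, k_sat=0.008)
      volume = np.trapz(rate, t) * 5.0
      print(round(volume, 3), "m^3 per metre of channel")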

  19. A chemodynamic approach for estimating losses of target organic chemicals from water during sample holding time

    USGS Publications Warehouse

    Capel, P.D.; Larson, S.J.

    1995-01-01

    Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace; sorption to the walls and cap of the sample bottle; and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimate the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.
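
    For the volatilization pathway, a first screening estimate follows from equilibrium partitioning into the bottle headspace via the dimensionless Henry's law constant; the constant and volumes below are illustrative.

      def headspace_loss_fraction(h_dimensionless, v_headspace_ml, v_water_ml):
          # Equilibrium fraction of analyte in the headspace, with
          # H' = C_air / C_water:  f = H'*Vh / (H'*Vh + Vw).
          # Ignores sorption and transformation losses.
          return (h_dimensionless * v_headspace_ml /
                  (h_dimensionless * v_headspace_ml + v_water_ml))

      # A volatile analyte (H' ~ 0.2) in a 1 L bottle with 40 mL headspace:
      print(round(headspace_loss_fraction(0.2, 40.0, 960.0), 3))  # ~0.008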

  20. Real time earthquake information and tsunami estimation system for Indonesia, Philippines and Central-South American regions

    NASA Astrophysics Data System (ADS)

    Pulido Hernandez, N. E.; Inazu, D.; Saito, T.; Senda, J.; Fukuyama, E.; Kumagai, H.

    2015-12-01

    Southeast Asia and the Central-South American regions are among the most seismically active regions in the world. To contribute to the understanding of the source processes of earthquakes, the National Research Institute for Earth Science and Disaster Prevention (NIED) has maintained the International Seismic Network (ISN) since 2007. Continuous seismic waveforms from 294 broadband seismic stations in Indonesia, the Philippines, and the Central-South America regions are received in real time at NIED and used for automatic location of seismic events. Using these data we perform automatic and manual estimation of moment tensors of seismic events (Mw>4.5) by using the SWIFT program developed at NIED. We simulate the propagation of local tsunamis in these regions using a tsunami simulation code and visualization system developed at NIED, combined with CMT parameters estimated by SWIFT. The goals of the system are to provide rapid and reliable earthquake and tsunami information, in particular for large seismic events, and to produce an appropriate database of earthquake source parameters and tsunami simulations for research. The system uses the hypocenter location and magnitude of earthquakes automatically determined at NIED by the SeisComP3 system (GFZ) from the continuous seismic waveforms in the region to perform the automated calculation of moment tensors by SWIFT, and then carries out the automatic simulation and visualization of the tsunami. The system generates maps of maximum tsunami heights within the target regions and along the coasts and displays them with the fault model parameters used for the tsunami simulations. Tsunami calculations are performed for all events with available automatic SWIFT/CMT solutions. Tsunami calculations are re-computed using SWIFT manual solutions for events with Mw>5.5 and centroid depths shallower than 100 km. Revised maximum tsunami heights as well as animations of tsunami propagation are also calculated and displayed for the two double couple solutions by SWIFT

  1. Uncertainty in sample estimates and the implicit loss function for soil information.

    NASA Astrophysics Data System (ADS)

    Lark, Murray

    2015-04-01

    One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
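
    One simple way to formalize the idea, assuming a linear sampling cost C(n) = c0 + c1*n and an estimation error variance V(n) = sigma^2/n: a chosen sample size n* is rational for the loss weight lambda that makes n* minimize lambda*V(n) + C(n), which gives lambda = c1*n*^2/sigma^2. This is a sketch of the reasoning, not Lark's exact formulation.

      def implicit_loss_weight(n_chosen, sigma2, cost_per_sample):
          # Setting d/dn [lambda*sigma2/n + c0 + c1*n] = 0 at n = n_chosen
          # gives lambda = c1 * n^2 / sigma2.
          return cost_per_sample * n_chosen ** 2 / sigma2

      # A sponsor choosing 50 samples at 100 currency units each, with an
      # assumed process variance of 400, implicitly values error reduction at:
      print(implicit_loss_weight(50, 400.0, 100.0))  # 625 units per unit of variance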

  2. Estimation of insurance related losses resulting from coastal flooding in France

    NASA Astrophysics Data System (ADS)

    Naulin, J. P.; Moncoulon, D.; Le Roy, S.; Pedreros, R.; Idier, D.; Oliveros, C.

    2015-04-01

    A model has been developed in order to estimate insurance-related losses caused by coastal flooding in France. The deterministic part of the model aims at identifying the potentially flood-impacted sectors and the subsequent insured losses a few days after the occurrence of a storm surge event on any part of the French coast. This deterministic component is a combination of three models: a hazard model, a vulnerability model and a damage model. The first model uses the PREVIMER system to estimate the water level along the coast. A storage-cell flood model propagates these water levels over the land and thus determines the probable inundated areas. The vulnerability model, for its part, is derived from the insurance schedules and claims database, combining information such as risk type, class of business and insured values. The outcomes of the vulnerability and hazard models are then combined with the damage model to estimate the event damage and potential insured losses. This system shows satisfactory results in estimating the magnitude of the known losses related to the flood caused by the Xynthia storm. However, it also appears very sensitive to the water height estimated during the flood period, which is conditioned by the junction between sea water levels and coastal topography, for which the accuracy of the system is still limited.

  3. Estimation of insurance-related losses resulting from coastal flooding in France

    NASA Astrophysics Data System (ADS)

    Naulin, J. P.; Moncoulon, D.; Le Roy, S.; Pedreros, R.; Idier, D.; Oliveros, C.

    2016-01-01

    A model has been developed in order to estimate insurance-related losses caused by coastal flooding in France. The deterministic part of the model aims at identifying the potentially flood-impacted sectors and the subsequent insured losses a few days after the occurrence of a storm surge event on any part of the French coast. This deterministic component is a combination of three models: a hazard model, a vulnerability model, and a damage model. The first model uses the PREVIMER system to estimate the water level resulting from the simultaneous occurrence of a high tide and a surge caused by a meteorological event along the coast. A storage-cell flood model propagates these water levels over the land and thus determines the probable inundated areas. The vulnerability model, for its part, is derived from the insurance schedules and claims database, combining information such as risk type, class of business, and insured values. The outcome of the vulnerability and hazard models are then combined with the damage model to estimate the event damage and potential insured losses. This system shows satisfactory results in the estimation of the magnitude of the known losses related to the flood caused by the Xynthia storm. However, it also appears very sensitive to the water height estimated during the flood period, conditioned by the junction between seawater levels and coastal topography, the accuracy for which is still limited by the amount of information in the system.

  4. Estimation of slip scenarios of mega-thrust earthquakes and strong motion simulations for Central Andes, Peru

    NASA Astrophysics Data System (ADS)

    Pulido, N.; Tavera, H.; Aguilar, Z.; Chlieh, M.; Calderon, D.; Sekiguchi, T.; Nakai, S.; Yamazaki, F.

    2012-12-01

    We have developed a methodology for the estimation of slip scenarios for megathrust earthquakes based on a model of interseismic coupling (ISC) distribution in subduction margins obtained from geodetic data, as well as information on the recurrence of historical earthquakes. This geodetic slip model (GSM) delineates the long-wavelength asperities within the megathrust. For the simulation of strong ground motion it becomes necessary to introduce short-wavelength heterogeneities to the source slip in order to efficiently simulate high-frequency ground motions. To achieve this purpose we elaborate "broadband" source models constructed by combining the GSM with several short-wavelength slip distributions obtained from a von Karman PSD function with random phases. Our application of the method to the Central Andes in Peru shows that this region presently has the potential to generate an earthquake with moment magnitude 8.9, with a peak slip of 17 m and a source area of approximately 500 km along strike and 165 km along dip. For the strong motion simulations we constructed 12 broadband slip models and considered 9 possible hypocenter locations for each model. We performed strong motion simulations for the whole central Andes region (Peru), spanning an area from the Nazca Ridge (16°S) to the Mendana Fracture (9°S). For this purpose we use the hybrid strong motion simulation method of Pulido et al. (2004), improved to handle a general slip distribution. Our simulated PGA and PGV distributions indicate that a region of at least 500 km along the coast of the central Andes is subjected to an MMI intensity of approximately 8, for the slip model that yielded the largest ground motions among the 12 slip models considered, averaged over all assumed hypocenter locations. This result is in agreement with the macroseismic intensity distribution estimated for the great 1746 earthquake (M~9) in the central Andes (Dorbath et al. 1990). Our results indicate that the simulated PGA and PGV for
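
    The short-wavelength enrichment step can be sketched as a random field with an assumed von Karman power spectrum and uniformly random phases; the grid spacing, correlation length, and Hurst exponent below are placeholders.

      import numpy as np

      def von_karman_slip(nx, nz, dx_km, corr_km, hurst, seed=0):
          # Amplitude spectrum ~ sqrt of PSD, P(k) ~ (1 + (k*a)^2)^-(H+1),
          # with random phases; the inverse FFT gives the slip perturbation.
          rng = np.random.default_rng(seed)
          kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx_km)
          kz = 2 * np.pi * np.fft.fftfreq(nz, d=dx_km)
          k = np.sqrt(kx[None, :] ** 2 + kz[:, None] ** 2)
          amp = (1.0 + (k * corr_km) ** 2) ** (-(hurst + 1) / 2.0)
          phase = np.exp(2j * np.pi * rng.random((nz, nx)))
          field = np.real(np.fft.ifft2(amp * phase))
          return (field - field.mean()) / field.std()  # normalized perturbation

      # To be scaled and superimposed on the smooth geodetic slip model:
      print(von_karman_slip(128, 64, dx_km=2.0, corr_km=40.0, hurst=0.75).shape)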

  5. An Atlas of ShakeMaps for Selected Global Earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  6. Meeting focuses on catastrophic Asian earthquakes

    NASA Astrophysics Data System (ADS)

    Gupta, Harsh K.

    The International Association of Seismology and Physics of the Earth's Interior (IASPEI) and the Asian Seismological Commission met August 1-3, 1996, in Tangshan, China. Twenty years ago, Tangshan was destroyed by the century's worst earthquake, which killed an estimated 243,000 people. It was the first meeting of the Asian Seismological Commission (ASC), a group formed in 1995 under the IASPEI umbrella to improve understanding of geological processes in Asia and to mitigate earthquake disasters. Because of its widespread seismic activity, the vast, populated territory of Asia has more catastrophic earthquakes than other regions of the world (see Figure 1). During the period from 1892 to 1992, 50 percent of the world's major earthquakes (magnitude greater than 8) occurred in Asia and the Southern Pacific region. Economic losses of more than $100 billion from the most recent major Asian earthquake, which occurred in Kobe, Japan, in early 1995, make Kobe the most expensive earthquake in the world. In September 1993, the Latur earthquake in the stable shield region of southern India claimed 10,000 lives and, although of only 6.1 magnitude, was the deadliest stable continental region earthquake.

  7. New characteristics of intensity assessment of Sichuan Lushan "4.20" Ms 7.0 earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Baitao; Yan, Peilei; Chen, Xiangzhao

    2014-08-01

    Rapid and accurate post-earthquake assessment of the macroscopic influence of seismic ground motion is of significance for earthquake emergency relief, post-earthquake reconstruction, and scientific research. The seismic intensity distribution map released by the Lushan earthquake field team of the China Earthquake Administration (CEA) five days after the strong earthquake (M 7.0) that occurred in Lushan County, Ya'an City, Sichuan, at 8:02 on April 20, 2013, provides a scientific basis for emergency relief, economic loss assessment and post-earthquake reconstruction. In this paper, the means for blind estimation of macroscopic intensity, field estimation of macroscopic intensity, and review of intensity, as well as the corresponding problems, are discussed in detail, and the intensity distribution characteristics of the Lushan "4.20" M 7.0 earthquake and its influencing factors are analyzed, providing a reference for future seismic intensity assessments.

  8. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  9. Estimation of return periods of multiple losses per winter associated with historical windstorm series over Germany

    NASA Astrophysics Data System (ADS)

    Karremann, Melanie; Pinto, Joaquim G.; von Bomhard, Philipp; Klawa, Matthias

    2014-05-01

    During the last decades, several windstorm series hit Western Europe, leading to large cumulative economic losses. Such storm series are an example of serial clustering of extreme cyclones and present a considerable risk for the insurance industry. Here, clustering of events and return periods of storm series for Germany are quantified based on potential losses using empirical models. Two reanalysis datasets and observations from 123 German Weather Service stations are considered for the winters 1981/1982 to 2010/2011. Based on these datasets, histograms of events exceeding selected return levels (1-, 2- and 5-year) are derived. Return periods of historical storm series are estimated based on the Poisson and the negative binomial distributions. About 4680 years of global circulation model simulations forced with current climate conditions are analysed to provide a better assessment of historical return periods. Estimates differ between the considered distributions. Except for frequent and weak events, the return period estimates obtained with the Poisson distribution clearly deviate from the empirical data. This documents overdispersion in the loss data, thus indicating the clustering of potential loss events. Better assessments are achieved with the negative binomial distribution, e.g. 34 to 53 years for storm series like that of 1989/1990. The overdispersion (clustering) of potential loss events underscores the importance of an adequate risk assessment of multiple events per winter for economic applications.
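
    A method-of-moments check makes the Poisson-versus-negative-binomial choice concrete: under a Poisson model the variance equals the mean, so variance in excess of the mean signals clustering. The winter counts below are toy data.

      import numpy as np

      counts = np.array([0, 0, 1, 3, 0, 2, 0, 0, 5, 1, 0, 2, 4, 0, 0])
      m, v = counts.mean(), counts.var(ddof=1)
      print(f"mean={m:.2f}, var={v:.2f}, dispersion={v / m:.2f}")

      if v > m:  # overdispersed: fit a negative binomial by moments
          r, p = m ** 2 / (v - m), m / v
          print(f"negative binomial: r={r:.2f}, p={p:.2f}")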

  10. A smartphone application for earthquakes that matter!

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many of the felt earthquakes as possible, regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the US Geological Survey, where the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking that they have just felt. All together, we estimate that the number of detected felt earthquakes is around 1,000 per year, compared with the 35,000 earthquakes annually reported by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service @LastQuake. We will present the identification process of the earthquakes that matter, the smartphone application itself (to be released in May) and its future evolutions.

  11. Proceedings of Conference XVIII: a workshop on "Continuing actions to reduce losses from earthquakes in the Mississippi Valley area," 24-26 May, 1982, St. Louis, Missouri

    USGS Publications Warehouse

    Gori, Paula L., (Edited By); Hays, Walter W.; Kitzmiller, Carla, (compiler)

    1983-01-01

    payoff and the lowest cost and effort requirements. These action plans, which identify steps that can be undertaken immediately to reduce losses from earthquakes in each of the seven States in the Mississippi Valley area, are contained in this report. The draft 5-year plan for the Central United States, prepared in the Knoxville workshop, was the starting point of the small group discussions in the St. Louis workshop which led to the action plans contained in this report. For completeness, the draft 5-year plan for the Central United States is reproduced as Appendix B.

  12. Estimating losses of dry matter from alfalfa-orchardgrass mixtures following rainfall events

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Studies designed to assess the effects of natural or simulated rainfall events on wilting experimental hays often have been hampered by questionable and erratic estimates of DM recovery following wetting. An alternative methodology for measuring losses of DM may be to use water-insoluble, cell-wall ...

  13. Identification and Estimation of Postseismic Deformation: Implications for Plate Motion Models, Models of the Earthquake Cycle, and Terrestrial Reference Frame Definition

    NASA Astrophysics Data System (ADS)

    Kedar, S.; Bock, Y.; Moore, A. W.; Argus, D. F.; Fang, P.; Liu, Z.; Haase, J. S.; Su, L.; Owen, S. E.; Goldberg, D.; Squibb, M. B.; Geng, J.

    2015-12-01

    Postseismic deformation indicates a viscoelastic response of the lithosphere. It is critical, then, to identify and estimate the extent of postseismic deformation in both space and time, not only for its inherent information on crustal rheology and earthquake physics, but also because it must be considered in plate motion models that are derived geodetically from "steady-state" interseismic velocities, in models of the earthquake cycle that provide interseismic strain accumulation and earthquake probability forecasts, and in terrestrial reference frame definition, which is the basis for space geodetic positioning. As part of the Solid Earth Science ESDR System (SESES) project under a NASA MEaSUREs grant, JPL and SIO estimate combined daily position time series for over 1800 GNSS stations, both globally and at plate boundaries, independently using the GIPSY and GAMIT software packages, but with a consistent set of a priori epoch-date coordinates and metadata. The longest time series began in 1992, and many of them contain postseismic signals. For example, about 90 of the global GNSS stations, out of more than 400 that define the ITRF, have experienced one or more major earthquakes, and 36 have had multiple earthquakes; as expected, most plate boundary stations have as well. We quantify the spatial (distance from rupture) and temporal (decay time) extent of postseismic deformation. We examine parametric models (logarithmic, exponential) and a physical model (rate- and state-dependent friction) to fit the time series. Using a PCA analysis, we determine whether or not a particular earthquake can be uniformly fit by a single underlying postseismic process; otherwise we fit individual stations. Then we investigate whether the estimated time series velocities can be directly used as input to plate motion models, rather than arbitrarily removing the apparent postseismic portion of a time series and/or eliminating stations closest to earthquake epicenters.
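
    A common parametric choice is a secular velocity plus a logarithmic transient; the synthetic fit below (scipy curve_fit, with assumed parameter values) sketches how such models are estimated from daily position time series.

      import numpy as np
      from scipy.optimize import curve_fit

      def postseismic(t, u0, v, a, tau):
          # u(t) = u0 + v*t + a*log(1 + t/tau), for an event at t = 0
          return u0 + v * t + a * np.log1p(t / tau)

      t = np.arange(0.0, 4.0, 1 / 365.25)            # 4 years of daily epochs
      rng = np.random.default_rng(4)
      obs = postseismic(t, 0.0, 5.0, 8.0, 0.1) + rng.normal(0.0, 0.5, t.size)

      popt, _ = curve_fit(postseismic, t, obs, p0=[0.0, 1.0, 1.0, 0.5],
                          bounds=([-10, -50, -50, 1e-3], [10, 50, 50, 5.0]))
      print(np.round(popt, 2))                       # should be near [0, 5, 8, 0.1]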

  14. Seismic hazard and risks estimates for Himalayas and surrounding regions based on the Unified Scaling Law for Earthquakes

    NASA Astrophysics Data System (ADS)

    Nekrasova, Anastasia; Kossobokov, Vladimir; Parvez, Imtiyaz

    2013-04-01

    The parameters A, B, and C of the Unified Scaling Law for Earthquakes (USLE) in the Himalayas and surrounding regions have been studied on the basis of a variable space and time scale approach. The basic law of seismicity, the Gutenberg-Richter recurrence relation, is suggested in a modified form involving a spatial term: log N(M,L) = A - B(M-6) + C log L, where N(M,L) is the expected annual number of mainshocks of a certain magnitude M within an area of linear size L. The observed temporal variability of the A, B, and C coefficients indicates significant changes of seismic activity at time scales of a few decades. For the Himalayan region, the value of A ranges from -1.95 to -0.66, which determines an average rate of earthquakes that accordingly differs by a factor of 20 or more. The value of B mainly ranges from 0.5 to 1.7, while the fractal dimension of the local seismic prone setting, C, changes from under 1 to 1.4 and larger. We have used the deterministic approach to estimate the corresponding peak ground acceleration (PGA) from the A, B, and C based magnitude estimates and the maximum observed magnitude during 1900-2012 to prepare a seismic hazard map of the Himalayas with spatially distributed PGA. Further, an attempt is made to generate earthquake risk maps of the region based on the population density exposed to the seismic hazard. Any kind of risk estimate R(g) at location g results from a convolution of the natural hazard H(g) with the exposed object under consideration O(g) along with its vulnerability V(O(g)). Note that g could be a point, or a line, or some area on or under the Earth's surface, and that the distribution of hazards, as well as objects of concern and their vulnerability, could be time-dependent. There exist many different risk estimates even if the same object of risk and the same hazard are involved. Specifically, it may result from the different laws of convolution, as well as from different kinds of vulnerability of an object of risk
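
    Evaluating the quoted USLE form is straightforward; the coefficients below are mid-range values picked from the intervals quoted above, and L is taken in kilometres, purely for illustration.

      import math

      def usle_annual_rate(mag, length_km, A, B, C):
          # log10 N(M, L) = A - B*(M - 6) + C*log10(L)
          return 10 ** (A - B * (mag - 6.0) + C * math.log10(length_km))

      # Expected annual number of M >= 7 mainshocks in a 50 km cell:
      print(round(usle_annual_rate(7.0, 50.0, A=-1.3, B=1.0, C=1.2), 3))  # ~0.548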

  15. Nitrogen losses from dairy manure estimated through nitrogen mass balance and chemical markers

    USGS Publications Warehouse

    Hristov, Alexander N.; Zaman, S.; Vander Pol, M.; Ndegwa, P.; Campbell, L.; Silva, S.

    2009-01-01

    Ammonia is an important air and water pollutant, but the spatial variation in its concentrations presents technical difficulties in accurate determination of ammonia emissions from animal feeding operations. The objectives of this study were to investigate the relationship between ammonia volatilization and δ15N of dairy manure and the feasibility of estimating ammonia losses from a dairy facility using chemical markers. In Exp. 1, the N/P ratio in manure decreased by 30% in 14 d as cumulative ammonia losses increased exponentially. δ15N of manure increased throughout the course of the experiment and δ15N of emitted ammonia increased (p < 0.001) quadratically from -31‰ to -15‰. The relationship between cumulative ammonia losses and δ15N of manure was highly significant (p < 0.001; r2 = 0.76). In Exp. 2, using a mass balance approach, approximately half of the N excreted by dairy cows (Bos taurus) could not be accounted for in 24 h. Using N/P and N/K ratios in fresh and 24-h manure, an estimated 0.55 and 0.34 (respectively) of the N excreted with feces and urine could not be accounted for. This study demonstrated that chemical markers (P, K) can be successfully used to estimate ammonia losses from cattle manure. The relationship between manure δ15N and cumulative ammonia loss may also be useful for estimating ammonia losses. Although promising, the latter approach needs to be further studied and verified in various experimental conditions and in the field. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
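
    The marker logic rests on phosphorus (or potassium) being conserved while nitrogen volatilizes, so the N loss fraction follows directly from the change in the N/P ratio; the ratios below are illustrative.

      def nitrogen_loss_fraction(np_ratio_initial, np_ratio_final):
          # P conserved => fraction of N lost = 1 - (N/P)_final / (N/P)_initial
          return 1.0 - np_ratio_final / np_ratio_initial

      # A 30% drop in N/P over 14 d, as in Exp. 1, implies ~30% N loss:
      print(round(nitrogen_loss_fraction(6.0, 4.2), 2))  # 0.3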

  16. The use of body mass loss to estimate metabolic rate in birds.

    PubMed

    Portugal, Steven J; Guillemette, Magella

    2011-03-01

    During starvation, energy production occurs at the expense of body reserve utilisation, which results in body mass loss. Knowing the role of the fuels involved in this body mass loss, along with their energy density, allows an energy equivalent of mass loss to be calculated. Therefore, it is possible to determine daily energy expenditure (DEE) if two body mass measurements at an interval of a few days are obtained. The technique can be cheap, minimally stressful for the animals involved, and the data relatively simple to gather. Here we review the use of body mass loss to estimate DEE in birds by critiquing the strengths and weaknesses of the technique, and detail the methodology and considerations that must be adhered to for accurate measures of DEE to be obtained. Owing to the biology of the species involved, the technique has been used predominantly in Antarctic seabirds, particularly penguins and albatrosses. We demonstrate how reliable the technique can be in predicting DEE in a non-Antarctic species, the common eider (Somateria mollissima), the female of which undergoes a fasting period during incubation. We conclude that using daily body mass loss to estimate DEE can be a useful and effective approach provided that (1) the substrate being consumed during mass loss is known, (2) the kinetics of body mass loss are understood for the species in question and (3) only species that enter a full phase II of a fast (where substrate catabolism reaches a steady state) and are not feeding for a period of time are considered appropriate for this method. PMID:21144908
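
    The arithmetic of the method is a single multiplication once the fuel mix is known; the mass-loss rate below is illustrative, and the 39.3 kJ/g energy equivalent assumes lipid-dominated phase II catabolism (it must be adjusted if protein contributes).

      def dee_from_mass_loss(mass_loss_g_per_day, energy_equivalent_kj_per_g=39.3):
          # DEE = daily body-mass loss x energy equivalent of the fuel consumed
          return mass_loss_g_per_day * energy_equivalent_kj_per_g

      # An incubating eider losing ~45 g/day:
      print(dee_from_mass_loss(45.0))  # ~1768.5 kJ/day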

  17. Annual South American Forest Loss Estimates (1989-2011) Based on Passive Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    van Marle, M.; van der Werf, G.; de Jeu, R.; Liu, Y.

    2014-12-01

    Vegetation dynamics, such as forest loss, are an important factor in global climate, but long-term and consistent information on these dynamics at continental scales is lacking. We have quantified large-scale forest loss over the 1990s and 2000s in the tropical biomes of South America using a passive-microwave satellite-based vegetation product. Our forest loss estimates are based on remotely sensed vegetation optical depth (VOD), which is an indicator of vegetation water content retrieved simultaneously with soil moisture. The advantage of low-frequency microwave remote sensing is that aerosols and clouds do not affect the observations. Furthermore, the longer wavelengths of passive microwaves penetrate deeper into vegetation than other products derived from optical and thermal sensors, with the consequence that both the woody parts of vegetation and leaves can be observed. The merged VOD product of AMSR-E and SSM/I observations, which covers over 23 years of daily observations, is used. We used this data stream and an outlier detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Qualitatively, our results compared favorably to the newly developed Global Forest Change (GFC) maps based on Landsat data (r2=0.96), and this allowed us to convert the VOD outlier count to forest loss. Our results are spatially explicit, with a 0.25-degree resolution and an annual time step, and we will present our estimates at the country level. The added benefit of our results compared to GFC is the longer time period. The results indicate a relatively steady increase in forest loss in Brazil from 1989 until 2003, followed by two high forest loss years and a declining trend afterwards. This contrasts with other South American countries such as Bolivia and Peru, where forest losses increased through almost all of the 2000s in comparison with the 1990s.

  18. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently founded by the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, and has the major goal of transforming the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software, which is an elegant solution to manage and analyse seismic data and to create automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the November 23, 1980, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters, whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each of them aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time streaming of data and then the software performs the phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated, using a probabilistic, non-linear, exploration algorithm. Then, the software is able to automatically provide three different magnitude estimates. First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude

  19. Annual South American forest loss estimates based on passive microwave remote sensing (1990-2010)

    NASA Astrophysics Data System (ADS)

    van Marle, M. J. E.; van der Werf, G. R.; de Jeu, R. A. M.; Liu, Y. Y.

    2016-02-01

    Consistent forest loss estimates are important to understand the role of forest loss and deforestation in the global carbon cycle, for biodiversity studies, and to estimate the mitigation potential of reducing deforestation. To date, most studies have relied on optical satellite data and new efforts have greatly improved our quantitative knowledge on forest dynamics. However, most of these studies yield results for only a relatively short time period or are limited to certain countries. We have quantified large-scale forest loss over a 21-year period (1990-2010) in the tropical biomes of South America using remotely sensed vegetation optical depth (VOD). This passive microwave satellite-based indicator of vegetation water content and vegetation density has a much coarser spatial resolution than optical data but its temporal resolution is higher and VOD is not impacted by aerosols and cloud cover. We used the merged VOD product of the Advanced Microwave Scanning Radiometer (AMSR-E) and Special Sensor Microwave Imager (SSM/I) observations, and developed a change detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Our results compared reasonably well with the newly developed Landsat-based Global Forest Change (GFC) maps, available for the 2001 onwards period (r2 = 0.90 when comparing annual country-level estimates). This allowed us to convert our identified changes in VOD to forest loss area and compute these from 1990 onwards. We also compared these calibrated results to PRODES (r2 = 0.60 when comparing annual state-level estimates). We found that South American forest loss exhibited substantial interannual variability without a clear trend during the 1990s, but increased from 2000 until 2004. After 2004, forest loss decreased again, except for two smaller peaks in 2007 and 2010. For a large part, these trends were driven by changes in Brazil, which was responsible for 56 % of the total South American forest loss area over our study

  1. Perspectives on earthquake hazards in the New Madrid seismic zone, Missouri

    USGS Publications Warehouse

    Thenhaus, P.C.

    1990-01-01

    A sequence of three great earthquakes struck the Central United States during the winter of 1811-1812 in the area of New Madrid, Missouri. They are considered to be the greatest earthquakes in the conterminous U.S. because they were felt and caused damage at far greater distances than any other earthquakes in U.S. history. The large population currently living within the damage area of these earthquakes means that widespread destruction and loss of life are likely if the sequence were repeated. In contrast to California, where earthquakes are felt frequently, the damaging earthquakes that have occurred in the Eastern U.S. in 1755 (Cape Ann, Mass.), 1811-12 (New Madrid, Mo.), 1886 (Charleston, S.C.), and 1897 (Giles County, Va.) are generally regarded as only historical phenomena (fig. 1). The social memory of these earthquakes no longer exists. A fundamental problem in the Eastern U.S., therefore, is that the earthquake hazard is not generally considered today in land-use and civic planning. This article offers perspectives on the earthquake hazard of the New Madrid seismic zone through discussions of the geology of the Mississippi Embayment, the historical earthquakes that have occurred there, the earthquake risk, and the "tools" that geoscientists have to study the region. The so-called earthquake hazard is defined by the characterization of the physical attributes of the geological structures that cause earthquakes, the estimation of the recurrence times of the earthquakes, their potential size, and the expected ground motions. The term "earthquake risk," on the other hand, refers to aspects of the expected damage to man-made structures and to lifelines as a result of the earthquake hazard.

  2. The radiated seismic energy and apparent stress of interplate and intraplate earthquakes at subduction zone environments; implications for seismic hazard estimation

    USGS Publications Warehouse

    Choy, George L.; Boatwright, John L.; Kirby, Stephen H.

    2001-01-01

    The radiated seismic energies (ES) of 980 shallow subduction-zone earthquakes with magnitudes ≥ 5.8 are used to examine global patterns of energy release and apparent stress. In contrast to traditional methods, which have relied upon empirical formulas, these energies are computed through direct spectral analysis of broadband seismic waveforms. Energy gives a physically different measure of earthquake size than moment. Moment, being derived from the low-frequency asymptote of the displacement spectra, is related to the final static displacement. Thus, moment is crucial to the long-term tectonic implication of an earthquake. In contrast, energy, being derived from the velocity power spectra, is more a measure of seismic potential for damage to anthropogenic structures. There is considerable scatter in the plot of ES-M0 for worldwide earthquakes. For any given M0, the ES can vary by as much as an order of magnitude about the mean regression line. The global variation between ES and M0, while large, is not random. When subsets of ES-M0 are plotted as a function of seismic region, tectonic setting and faulting type, the scatter in the data is often substantially reduced. There are two profound implications for the estimation of seismic and tsunami hazard. First, it is now feasible to characterize the apparent stress for particular regions. Second, a given M0 does not have a unique ES. This means that M0 alone is not sufficient to describe all aspects of an earthquake. In particular, we have found examples of interplate thrust-faulting earthquakes and intraslab normal-faulting earthquakes occurring in the same epicentral region with vastly different macroseismic effects. Despite the gross macroseismic disparities, the Mw's in these examples were identical. However, the Me's (energy magnitudes) successfully distinguished the earthquakes that were more damaging.
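
    The two quantities contrasted above reduce to simple expressions: apparent stress scales the radiated energy by rigidity over moment, and the energy magnitude follows from ES alone. A minimal sketch (assuming a nominal crustal rigidity of 3e10 Pa and the Choy & Boatwright constant for Me; example values are hypothetical):

        # Apparent stress and magnitudes from radiated energy Es and
        # seismic moment M0 (both in N*m). MU is an assumed rigidity.
        import math

        MU = 3.0e10  # Pa, nominal crustal shear modulus

        def apparent_stress(es, m0, mu=MU):
            return mu * es / m0  # Pa

        def energy_magnitude(es):
            return (2.0 / 3.0) * math.log10(es) - 2.9

        def moment_magnitude(m0):
            return (2.0 / 3.0) * (math.log10(m0) - 9.1)

        es, m0 = 3.0e13, 1.0e18  # hypothetical event
        print(f"tau_a = {apparent_stress(es, m0) / 1e6:.2f} MPa, "
              f"Me = {energy_magnitude(es):.2f}, Mw = {moment_magnitude(m0):.2f}")

    Two events with the same M0 (hence the same Mw) but ES differing by an order of magnitude separate cleanly in Me, which is the distinction the abstract draws.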

  3. Earthquake Hazard and Risk Assessment for Turkey

    NASA Astrophysics Data System (ADS)

    Betul Demircioglu, Mine; Sesetyan, Karin; Erdik, Mustafa

    2010-05-01

    Using a GIS environment to present the results, seismic risk analysis is considered a helpful tool to support decision making for planning and prioritizing seismic retrofit intervention programs at large scale. The main ingredients of seismic risk analysis are seismic hazard, a regional inventory of buildings, and vulnerability analysis. In this study, the national earthquake hazard is assessed using the NGA ground motion prediction models, and the results are compared with those of previous models. Seismic risk is then evaluated based on probabilistic intensity ground motion predictions for Turkey. Following the macroseismic approach of Giovinazzi and Lagomarsino (2005), two alternative vulnerability models have been used to estimate building damage. The vulnerability and ductility indices for Turkey have been taken from the study of Giovinazzi (2005). These two vulnerability models have been compared with the observed earthquake damage database. A good agreement between curves has been clearly observed. In addition to the building damage, casualty estimations based on three different methods for each return period and for each vulnerability model have been presented to evaluate the earthquake loss. Using three different models of building replacement costs, the average annual loss (AAL) and probable maximum loss ratio (PMLR) due to regional earthquake hazard have been provided to form a basis for the improvement of the parametric insurance model and the determination of premium rates for the compulsory earthquake insurance in Turkey.
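
    The AAL and PML quantities mentioned above have a compact definition: AAL is the area under the loss exceedance curve, and a PML is the loss read off at a chosen exceedance frequency. A minimal sketch with a hypothetical exceedance curve (not the Turkish portfolio's):

        # AAL as the integral of annual exceedance frequency over loss;
        # PML read off the curve at a chosen return period. Values are
        # hypothetical.
        import numpy as np

        loss = np.array([0.0, 1e6, 5e6, 2e7, 8e7, 3e8])
        annual_exceed_freq = np.array([0.5, 0.2, 0.05, 0.01, 0.002, 0.0004])

        aal = np.trapz(annual_exceed_freq, loss)
        pml_475 = np.interp(1.0 / 475.0, annual_exceed_freq[::-1], loss[::-1])
        print(f"AAL ~ {aal:.3e}, PML(475 yr) ~ {pml_475:.3e}")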

  4. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the
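
    The time-shift-tolerant misfit at the heart of the CAP scheme can be sketched per waveform segment: slide the synthetic against the data, keep the best-aligned L2 misfit, then sum over segments inside a mechanism grid search. A toy version (no real Green's functions; the traces are placeholders, and np.roll's wrap-around is acceptable only for this illustration):

        # Shift-tolerant segment misfit used inside a CAP-style grid
        # search. Toy data; real use would loop over (strike, dip, rake)
        # synthetics built from Green's functions.
        import numpy as np

        def shifted_misfit(data, synth, max_lag):
            lags = range(-max_lag, max_lag + 1)
            return min(np.sum((data - np.roll(synth, k)) ** 2) for k in lags)

        t = np.linspace(0, 10, 500)
        data = np.sin(2 * np.pi * 0.4 * t)
        synth = np.sin(2 * np.pi * 0.4 * (t - 0.3))  # mis-timed synthetic
        print(shifted_misfit(data, synth, max_lag=30))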

  5. An Estimation Approach for PWM Carrier Loss on Rotor in Slotless Permanent Magnet Motors

    NASA Astrophysics Data System (ADS)

    Kosaka, Takashi; Shikayama, Toru; Matsui, Nobuyuki

    This paper presents an analytical approach for estimating PWM carrier loss on the rotor at the design stage of slotless permanent magnet motors. Experimental studies using a 400 W, 3000 r/min test motor show that the eddy current induced in the rare-earth magnets on the rotor surface decreases the winding inductance and increases the winding resistance as the supplied frequency rises. The resulting lower inductance at frequencies above 10 kHz produces a large amount of current harmonics caused by voltage PWM, as well as carrier loss combined with the resistance increment. First, the frequency-dependent winding inductance and resistance of the test motor are estimated by a 3D finite element method considering the eddy current on the rotor. The current spectrum is subsequently calculated from the obtained frequency-dependent winding impedance and the simulated voltage spectrum. The carrier loss is finally derived from the current spectrum and the calculated resistance increment. The effectiveness of the proposed estimation approach for PWM carrier loss on the rotor is experimentally verified using the test motor.
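
    The final step of the chain described above (current spectrum from voltage spectrum and impedance, then I^2*R summation) can be sketched as follows; the harmonic values are hypothetical stand-ins, not the 400 W test motor's data:

        # Hedged sketch: carrier loss from PWM voltage harmonics, the
        # frequency-dependent winding impedance, and the AC resistance
        # increment. All numbers are hypothetical.
        import numpy as np

        f = np.array([10e3, 20e3, 30e3, 40e3])            # carrier harmonics, Hz
        v = np.array([40.0, 15.0, 8.0, 4.0])              # voltage harmonics, V rms
        L_f = np.array([1.8e-3, 1.5e-3, 1.3e-3, 1.2e-3])  # H, falls with frequency
        dR_f = np.array([2.0, 4.5, 7.0, 9.5])             # ohm, AC resistance increment

        i = v / np.hypot(dR_f, 2 * np.pi * f * L_f)  # current harmonics, A rms
        print(f"carrier loss ~ {np.sum(3 * i**2 * dR_f):.2f} W (three phases)")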

  6. Body protein losses estimated by nitrogen balance and potassium-40 counting

    SciTech Connect

    Belyea, R.L.; Babbitt, C.L.; Sedgwick, H.T.; Zinn, G.M.

    1986-07-01

    Body protein losses estimated from N balance were compared with those estimated by 40K counting. Six nonlactating dairy cows were fed an adequate N diet for 7 wk, a low N diet for 9 wk, and a replete N diet for 3 wk. The low N diet contained high cell wall grass hay plus ground corn, starch, and molasses. Soybean meal was added to the low N diet to increase N in the adequate N and replete N diets. Intake was measured daily. Digestibilities, N balance, and body composition (estimated by 40K counting) were determined during each dietary regimen. During low N treatment, hay dry matter intake declined 2 kg/d, and supplement increased about .5 kg/d. Dry matter digestibility was not altered by N treatment. Protein and acid detergent fiber digestibilities decreased from 40 and 36% during adequate N to 20 and 2%, respectively, during low N. Fecal and urinary N also declined when cows were fed the low N diet. By the end of repletion, total intake, fiber, and protein digestibilities as well as N partition were similar to or exceeded those during adequate N intake. Body protein (N) loss was estimated by N balance to be about 3 kg compared with 8 kg by 40K counting. Body fat losses (32 kg) were large because of low energy digestibility and intake. Seven kilograms of body fat were regained during repletion, but there was no change in body protein.

  7. Annual South American forest loss estimates based on passive microwave remote sensing (1990-2010)

    NASA Astrophysics Data System (ADS)

    van Marle, M. J. E.; van der Werf, G. R.; de Jeu, R. A. M.; Liu, Y. Y.

    2015-07-01

    Consistent forest loss estimates are important to understand the role of forest loss and deforestation in the global carbon cycle, for biodiversity studies, and to estimate the mitigation potential of reducing deforestation. To date, most studies have relied on optical satellite data and new efforts have greatly improved our quantitative knowledge on forest dynamics. However, most of these studies yield results for only a relatively short time period or are limited to certain countries. We have quantified large-scale forest losses over a 21-year period (1990-2010) in the tropical biomes of South America using remotely sensed vegetation optical depth (VOD). This passive microwave satellite-based indicator of vegetation water content and vegetation density has a much coarser spatial resolution than optical data but its temporal resolution is higher and VOD is not impacted by aerosols and cloud cover. We used the merged VOD product of the Advanced Microwave Scanning Radiometer (AMSR-E) and Special Sensor Microwave Imager (SSM/I) observations, and developed a change detection algorithm to quantify spatial and temporal variations in forest loss dynamics. Our results compared favorably to the newly developed Global Forest Change (GFC) maps based on Landsat data and available for the 2001 onwards period (r2 = 0.90 when comparing annual country-level estimates), which allowed us to convert our results to forest loss area and compute these from 1990 onwards. We found that South American forest loss exhibited substantial interannual variability without a clear trend during the 1990s, but increased from 2000 until 2004. After 2004, forest loss decreased again, except for two smaller peaks in 2007 and 2010. For a large part, these trends were driven by changes in Brazil, which was responsible for 56 % of the total South American forest loss over our study period according to our results. One of the key findings of our study is that while forest losses decreased in Brazil after 2005

  8. Handbook for the estimation of microwave propagation effects: Link calculations for earth-space paths (path loss and noise estimation)

    NASA Technical Reports Server (NTRS)

    Crane, R. K.; Blood, D. W.

    1979-01-01

    A single model is proposed as a standard of comparison for other models when dealing with rain attenuation problems in system design and experimentation. Refinements to the Global Rain Production Model are incorporated. Path loss and noise estimation procedures are provided as the basic input to systems design for earth-to-space microwave links operating at frequencies from 1 to 300 GHz. Topics covered include gaseous absorption, attenuation by rain, ionospheric and tropospheric scintillation, low elevation angle effects, radome attenuation, diversity schemes, link calculation, and receiver noise emission by atmospheric gases, rain, and antenna contributions.

  9. Neglect of bandwidth of odontocete echolocation clicks biases propagation loss and single hydrophone population estimates.

    PubMed

    Ainslie, Michael A

    2013-11-01

    Passive acoustic monitoring with a single hydrophone has been suggested as a cost-effective method to monitor population density of echolocating marine mammals, by estimating the distance at which the hydrophone is able to intercept the echolocation clicks and distinguish these from the background. To avoid a bias in the estimated population density, this method relies on an unbiased estimate of the detection range and therefore of the propagation loss (PL). When applying this method, it is common practice to estimate PL at the center frequency of a broadband echolocation click and to assume this narrowband PL applies also to the broadband click. For a typical situation this narrowband approximation overestimates PL, underestimates the detection range and consequently overestimates the population density by an amount that for fixed center frequency increases with increasing pulse bandwidth and sonar figure of merit. PMID:24180761
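
    The bias has a direct numerical illustration. The sketch below (an assumption-laden toy, not the paper's model) uses spherical spreading plus Thorp's seawater absorption formula and a hypothetical Gaussian click spectrum; the PL evaluated only at the center frequency exceeds the energy-weighted broadband PL, which is exactly the direction of bias described:

        # Narrowband vs broadband propagation loss for a broadband click.
        # Spherical spreading + Thorp absorption (f in kHz, dB/km) assumed;
        # the Gaussian click spectrum is hypothetical.
        import numpy as np

        def thorp_db_per_km(f_khz):
            f2 = f_khz ** 2
            return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

        def pl_db(r_m, f_khz):
            return 20 * np.log10(r_m) + thorp_db_per_km(f_khz) * r_m / 1000.0

        r = 1000.0                                       # m
        f = np.linspace(90, 170, 401)                    # kHz
        spec = np.exp(-0.5 * ((f - 130.0) / 20.0) ** 2)  # energy spectrum

        frac_received = np.sum(spec * 10 ** (-pl_db(r, f) / 10)) / np.sum(spec)
        print(f"narrowband PL = {pl_db(r, 130.0):.1f} dB, "
              f"broadband PL = {-10 * np.log10(frac_received):.1f} dB")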

  10. Estimation of Retinal Ganglion Cell Loss in Glaucomatous Eyes With a Relative Afferent Pupillary Defect

    PubMed Central

    Tatham, Andrew J.; Meira-Freitas, Daniel; Weinreb, Robert N.; Marvasti, Amir H.; Zangwill, Linda M.; Medeiros, Felipe A.

    2014-01-01

    Purpose. To estimate retinal ganglion cell (RGC) losses associated with a relative afferent pupillary defect (RAPD) in glaucoma. Methods. A cross-sectional study was conducted including both eyes of 103 participants from the Diagnostic Innovations in Glaucoma Study. A total of 77 subjects had glaucoma in at least one eye and 26 were healthy. Pupil responses were assessed using an automated pupillometer that records the magnitude of RAPD as an “RAPD score.” Standard automated perimetry (SAP) and optical coherence tomography (OCT) also were performed. Retinal ganglion cell counts were estimated using empirical formulas that combine estimates from SAP and OCT. The estimated percentage RGC loss was calculated using the combined structure function index (CSFI). Results. There was good correlation between RAPD magnitude and intereye differences in estimated RGCs (R2 = 0.492, P < 0.001), mean deviation (R2 = 0.546, P < 0.001), retinal nerve fiber layer thickness (R2 = 0.362, P < 0.001), and CSFI (R2 = 0.484, P < 0.001). Therefore, a high RAPD score is likely to indicate large asymmetric RGC losses. The relationship between intereye difference in RGC counts and RAPD score was described best by the formula: RGC difference = 21,896 + 353,272 * RAPD score. No healthy subjects had an absolute RAPD score > 0.3, which was associated with asymmetry of 105,982 cells (or 12%). Conclusions. Good correlation between the magnitude of RAPD and intereye differences in mean deviation and estimated RGC counts suggests pupillometry may be useful for quantifying asymmetric damage in glaucoma. (ClinicalTrials.gov number, NCT00221897.) PMID:24282221
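
    The reported regression is simple enough to transcribe directly; only the example score below is invented:

        # Intereye RGC-count difference from the pupillometer's RAPD
        # score, per the regression reported in the abstract.
        def rgc_difference(rapd_score):
            return 21896 + 353272 * rapd_score

        print(f"RAPD score 0.5 -> {rgc_difference(0.5):,.0f} cells")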

  11. Systems, methods and computer readable media for estimating capacity loss in rechargeable electrochemical cells

    DOEpatents

    Gering, Kevin L.

    2013-06-18

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples charge characteristics of the electrochemical cell. The computing system periodically determines cell information from the charge characteristics of the electrochemical cell. The computing system also periodically adds a first degradation characteristic from the cell information to a first sigmoid expression, periodically adds a second degradation characteristic from the cell information to a second sigmoid expression and combines the first sigmoid expression and the second sigmoid expression to develop or augment a multiple sigmoid model (MSM) of the electrochemical cell. The MSM may be used to estimate a capacity loss of the electrochemical cell at a desired point in time and analyze other characteristics of the electrochemical cell. The first and second degradation characteristics may be loss of active host sites and loss of free lithium for Li-ion cells.
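
    As a hedged sketch of the general idea (a generic two-logistic form with invented parameters, not the patented MSM formulation), capacity fade can be modeled as the sum of one sigmoid per degradation mechanism:

        # Two-sigmoid capacity-fade model: one logistic term per
        # mechanism (loss of active host sites, loss of free lithium).
        # Functional form and parameters are illustrative only.
        import numpy as np

        def sigmoid_term(n, scale, midpoint, width):
            return scale / (1.0 + np.exp(-(n - midpoint) / width))

        def capacity_loss(n_cycles):
            host_sites = sigmoid_term(n_cycles, scale=0.12, midpoint=800, width=250)
            free_li = sigmoid_term(n_cycles, scale=0.08, midpoint=300, width=120)
            return host_sites + free_li  # fraction of initial capacity lost

        for n in (100, 500, 1000, 2000):
            print(n, f"{capacity_loss(n):.3f}")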

  12. Estimation of the Iron Loss in Deep-Sea Permanent Magnet Motors considering Seawater Compressive Stress

    PubMed Central

    Wei, Yanyu; Zou, Jibin; Li, Jianjun; Qi, Wenjuan; Li, Yong

    2014-01-01

    A deep-sea permanent magnet motor equipped with a fluid-compensated pressure-tolerant system is compressed by high-pressure fluid both outside and inside. The induced stress distribution in the stator core is significantly different from that in a land-type motor. Its effect on the magnetic properties of the stator core is important for deep-sea motor designers but seldom reported. In this paper, the stress distribution in the stator core, accounting for the seawater compressive stress, is calculated by a 2D finite element method (FEM). The effect of compressive stress on the magnetic properties of electrical steel sheet, that is, permeability, BH curves, and BW curves, is also measured. Then, based on the measured magnetic properties and calculated stress distribution, the stator iron loss is estimated by stress-electromagnetics-coupled FEM. Finally, the estimation is verified by experiment. Both the calculated and measured results show that stator iron loss increases markedly with the seawater compressive stress. PMID:25177717

  13. Damping loss factor estimation of two-dimensional orthotropic structures from a displacement field measurement

    NASA Astrophysics Data System (ADS)

    Cherif, Raef; Chazot, Jean-Daniel; Atalla, Noureddine

    2015-11-01

    This paper presents a damping loss factor estimation method for two-dimensional orthotropic structures. The method is based on a scanning laser vibrometer measurement. The dispersion curves of the studied structures are first estimated at several chosen angles of propagation with a spatial Fourier transform. Next the global damping loss factor is evaluated with the proposed inverse wave method. The method is first tested using numerical results obtained from a finite element model. The accuracy of the proposed method is then experimentally investigated on an isotropic aluminium panel and two orthotropic sandwich composite panels with a honeycomb core. The results are finally compared and validated over a large frequency band with classical methods such as the half-power bandwidth method (3 dB method), the decay rate method and the steady-state power input method. The present method offers the possibility of structural characterization with a simple measurement scan.
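
    One of the reference methods named above, the half-power (3 dB) bandwidth method, fits in a few lines: eta = (f2 - f1) / fn, with f1 and f2 the frequencies where the response falls to peak/sqrt(2). A sketch on a synthetic single-mode response (structural damping assumed; a real FRF would come from the laser-vibrometer scan):

        # Half-power bandwidth estimate of the loss factor from an FRF
        # magnitude; verified here on a synthetic resonance with known eta.
        import numpy as np

        def loss_factor_3db(freq, frf_mag):
            k = int(np.argmax(frf_mag))
            fn, half = freq[k], frf_mag[k] / np.sqrt(2.0)
            lo = np.interp(half, frf_mag[:k + 1], freq[:k + 1])      # rising flank
            hi = np.interp(half, frf_mag[k:][::-1], freq[k:][::-1])  # falling flank
            return (hi - lo) / fn

        f = np.linspace(80, 120, 2001)
        fn, eta_true = 100.0, 0.02
        frf = 1.0 / np.sqrt((1 - (f / fn) ** 2) ** 2 + eta_true ** 2)
        print(f"recovered eta = {loss_factor_3db(f, frf):.4f}")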

  14. Estimating formation properties from early-time recovery in wells subject to turbulent head losses

    USGS Publications Warehouse

    Shapiro, A.M.; Oki, D.S.; Greene, E.A.

    1998-01-01

    A mathematical model is developed to interpret the early-time recovering water level following the termination of pumping in wells subject to turbulent head losses. The model assumes that turbulent head losses dissipate immediately when pumping ends. In wells subject to both borehole storage and turbulent head losses, the early-time recovery exhibits a slope equal to 1/2 on log-log plots of the recovery versus time. This half-slope response should not be confused with the half-slope response associated with a linear flow regime during aquifer tests. The presence of a borehole skin due to formation damage or stimulation around the pumped well alters the early-time recovery in wells subject to turbulent head losses and gives the appearance of borehole storage, where the recovery exhibits a unit slope on log-log plots of recovery versus time. Type curves can be used to estimate the formation storativity from the early-time recovery data. In wells that are suspected of having formation damage or stimulation, the type curves can be used to estimate the 'effective' radius of the pumped well, if an estimate of the formation storativity is available from observation wells or other information. Type curves for a homogeneous and isotropic dual-porosity aquifer are developed and applied to estimate formation properties and the effect of formation stimulation from a single-well test conducted in the Madison limestone near Rapid City, South Dakota.

  15. Combining double difference and amplitude ratio approaches for Q estimates at the NW Bohemia earthquake swarm region

    NASA Astrophysics Data System (ADS)

    Kriegerowski, Marius; Cesca, Simone; Krüger, Frank; Dahm, Torsten; Horálek, Josef

    2016-04-01

    Aside from the propagation velocity of seismic waves, their attenuation can provide a direct measure of rock properties in the sampled subspace. We present a new attenuation tomography approach exploiting relative amplitude spectral ratios of earthquake pairs. We focus our investigation on North West Bohemia, a region characterized by intense earthquake swarm activity in a confined source region. The inter-event distances are small compared to the epicentral distances to the receivers, meeting a fundamental requirement of the method. Because the event locations are similar, the ray paths are also very similar. Consequently, the relative spectral ratio is affected mostly by rock properties along the path of the vector distance and is thus representative of the focal region. In order to exclude effects of the seismic source spectra, only the high-frequency content beyond the corner frequency is taken into consideration. This requires high-quality records with high sampling rates. Future improvements in that respect can be expected from the ICDP proposal "Eger rift", which includes plans to install borehole monitoring in the investigated region. 1D and 3D synthetic tests show the feasibility of the presented method. Furthermore, we demonstrate the influence of perturbations in source locations and travel time estimates on the determination of Q. Errors in Q scale linearly with errors in the differential travel times. These sources of error can be attributed to the complex velocity structure of the investigated region. A critical aspect is the signal-to-noise ratio, which imposes a strong limitation and emphasizes the demand for high-quality recordings. Hence, the presented method is expected to benefit from borehole installations. Since we focus our analysis on the NW Bohemia case study, a synthetic earthquake catalog incorporating source characteristics deduced from preceding moment tensor inversions, coupled with a realistic velocity model, provides us with a realistic
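
    The core measurement reduces to a line fit: for a common path, the log spectral ratio of an event pair above both corner frequencies has slope -pi * dt / Q, with dt the differential travel time along the inter-event path. A sketch on synthetic amplitudes (all values invented):

        # Fit the log spectral ratio slope above the corner frequencies
        # and invert it for Q. Synthetic data stand in for real spectra.
        import numpy as np

        f = np.linspace(15.0, 60.0, 200)  # Hz, above both corner frequencies
        dt, q_true = 0.4, 150.0           # s differential travel time, target Q
        log_ratio = 1.2 - np.pi * f * dt / q_true + 0.01 * np.random.randn(f.size)

        slope, _ = np.polyfit(f, log_ratio, 1)
        print(f"estimated Q = {-np.pi * dt / slope:.0f}")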

  16. Stellar model chromospheres. VI - Empirical estimates of the chromospheric radiative losses of late-type stars

    NASA Technical Reports Server (NTRS)

    Linsky, J. L.; Ayres, T. R.

    1978-01-01

    A method is developed for estimating the nonradiative heating of stellar chromospheres by measuring the net radiative losses in strong Fraunhofer line cores. This method is applied to observations of the Mg II resonance lines in a sample of 32 stars including the sun. At most a small dependence of chromospheric nonradiative heating on stellar surface gravity is found, which is contrary to the large effect predicted by recent calculations based on acoustic-heating theories.

  17. Estimating the rate of retinal ganglion cell loss to detect glaucoma progression: An observational cohort study.

    PubMed

    Hirooka, Kazuyuki; Izumibata, Saeko; Ukegawa, Kaori; Nitta, Eri; Tsujikawa, Akitaka

    2016-07-01

    This study aimed to evaluate the relationship between glaucoma progression and estimates of the retinal ganglion cells (RGCs) obtained by combining structural and functional measurements in patients with glaucoma. In the present observational cohort study, we examined 116 eyes of 62 glaucoma patients. Using Cirrus optical coherence tomography (OCT), a minimum of 5 serial retinal nerve fiber layer (RNFL) measurements were performed in all eyes. There was a 3-year separation between the first and last measurements. Visual field (VF) testing was performed on the same day as the RNFL imaging using the Swedish Interactive Threshold Algorithm Standard 30-2 program of the Humphrey Field Analyzer. Estimates of the RGC counts were obtained from standard automated perimetry (SAP) and OCT, with a weighted average then used to determine a final estimate of the number of RGCs for each eye. Linear regression was used to calculate the rate of RGC loss, and trend analysis was used to evaluate both serial RNFL thicknesses and VF progression. Use of the average RNFL thickness parameter of OCT led to detection of progression in 14 of the 116 eyes examined, whereas the mean deviation slope detected progression in 31 eyes. When the rates of RGC loss were used, progression was detected in 41 of the 116 eyes, with a mean rate of RGC loss of -28,260 ± 8110 cells/year. Estimation of the rate of RGC loss by combining structural and functional measurements resulted in better detection of glaucoma progression than either OCT or SAP alone. PMID:27472691
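
    The trend analysis described above amounts to ordinary least squares on the serial RGC estimates, flagging progression when the slope is significantly negative. A sketch with hypothetical follow-up data:

        # Rate of RGC loss by linear regression; progression flagged when
        # the slope's 95% CI lies entirely below zero. Data are invented.
        import numpy as np
        from scipy import stats

        t = np.array([0.0, 0.5, 1.1, 1.6, 2.2, 3.0])                # years
        rgc = np.array([812e3, 790e3, 771e3, 742e3, 720e3, 698e3])  # estimated cells

        res = stats.linregress(t, rgc)
        ci = stats.t.ppf(0.975, df=len(t) - 2) * res.stderr
        print(f"rate = {res.slope:,.0f} cells/yr (95% CI +/- {ci:,.0f})")
        print("progression" if res.slope + ci < 0 else "no progression")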

  18. Loss estimation of debris flow events in mountain areas - An integrated tool for local authorities

    NASA Astrophysics Data System (ADS)

    Papathoma-Koehle, M.; Zischg, A.; Fuchs, S.; Keiler, M.; Glade, T.

    2012-04-01

    Torrents prone to debris flows regularly cause extensive destruction of the built environment, loss of livestock, loss of agricultural land, and loss of life in mountain areas. Climate change may increase the frequency and intensity of such events. On the other hand, extensive development of mountain areas is expected to change the spatial pattern of exposed elements at risk and their vulnerability. Consequently, the costs of debris flow events are likely to increase in the coming years. Local authorities responsible for disaster risk reduction need tools that enable them to assess the future consequences of debris flow events, in particular with respect to the vulnerability of elements at risk. An integrated tool for loss estimation is presented here, which is based on a newly developed vulnerability curve and is applied at test sites in the Province of South Tyrol, Italy. The tool has a dual function: 1) continuous updating of the database of damages and process intensities, which will eventually improve the existing vulnerability curve, and 2) loss estimation for future events and for hypothetical events or built-environment scenarios using the existing curve. The tool integrates the vulnerability curve with new user-friendly forms of damage documentation. The integrated tool presented here can be used by local authorities not only for recording damage caused by debris flows and allocating compensation to the owners of damaged buildings but also for land use planning, cost-benefit analysis of structural protection measures and emergency planning.
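
    The loss-estimation function of such a tool is conceptually simple: pass each building's debris-flow intensity through the vulnerability curve and multiply by the building value. A hedged sketch with an invented logistic curve (the paper's actual curve is not reproduced here):

        # Loss = sum over buildings of value * vulnerability(intensity).
        # The logistic vulnerability curve below is illustrative only.
        import math

        def vulnerability(intensity_m):
            """Degree of loss (0-1) vs. debris-flow intensity, e.g. deposit depth."""
            return 1.0 / (1.0 + math.exp(-3.0 * (intensity_m - 1.5)))

        buildings = [(250_000, 0.4), (180_000, 1.2), (320_000, 2.5)]  # (EUR, m)
        total = sum(value * vulnerability(depth) for value, depth in buildings)
        print(f"estimated loss ~ EUR {total:,.0f}")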

  19. Bayesian Tsunami-Waveform Inversion and Tsunami-Source Uncertainty Estimation for the 2011 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dettmer, J.; Hossen, M. J.; Cummins, P. R.

    2014-12-01

    This paper develops a Bayesian inversion to infer spatio-temporal parameters of the tsunami source (sea surface) due to megathrust earthquakes. To date, tsunami-source parameter uncertainties are poorly studied. In particular, the effects of parametrization choices (e.g., discretisation, finite rupture velocity, dispersion) on uncertainties have not been quantified. This approach is based on a trans-dimensional self-parametrization of the sea surface, avoids regularization, and provides rigorous uncertainty estimation that accounts for model-selection ambiguity associated with the source discretisation. The sea surface is parametrized using self-adapting irregular grids which match the local resolving power of the data and provide parsimonious solutions for complex source characteristics. Finite and spatially variable rupture velocity fields are addressed by obtaining causal delay times from the Eikonal equation. Data are considered from ocean-bottom pressure and coastal wave gauges. Data predictions are based on Green-function libraries computed from ocean-basin scale tsunami models for cases that include/exclude dispersion effects. Green functions are computed for elementary waves of Gaussian shape and grid spacing which is below the resolution of the data. The inversion is applied to tsunami waveforms from the great Mw=9.0 2011 Tohoku-Oki (Japan) earthquake. Posterior results show a strongly elongated tsunami source along the Japan trench, as obtained in previous studies. However, we find that the tsunami data is fit with a source that is generally simpler than obtained in other studies, with a maximum amplitude less than 5 m. In addition, the data are sensitive to the spatial variability of rupture velocity and require a kinematic source model to obtain satisfactory fits which is consistent with other work employing linear multiple time-window parametrizations.

  20. Estimated losses of plant biodiversity in the United States from historical N deposition (1985-2010).

    PubMed

    Clark, Christopher M; Morefield, Philip E; Gilliam, Frank S; Pardo, Linda H

    2013-07-01

    Although nitrogen (N) deposition is a significant threat to herbaceous plant biodiversity worldwide, it is not a new stressor for many developed regions. Only recently has it become possible to estimate historical impacts nationally for the United States. We used 26 years (1985-2010) of deposition data, with ecosystem-specific functional responses from local field experiments and a national critical loads (CL) database, to generate scenario-based estimates of herbaceous species loss. Here we show that, in scenarios using the low end of the CL range, N deposition exceeded critical loads over 0.38, 6.5, 13.1, 88.6, and 222.1 million ha for the Mediterranean California, North American Desert, Northwestern Forested Mountains, Great Plains, and Eastern Forest ecoregions, respectively, with corresponding species losses ranging from < 1% to 30%. When we ran scenarios assuming ecosystems were less sensitive (using a common CL of 10 kg x ha(-1) x yr(-1), and the high end of the CL range), minimal losses were estimated. The large range in projected impacts among scenarios implies uncertainty as to whether current critical loads provide protection to terrestrial plant biodiversity nationally and urges greater research in refining critical loads for U.S. ecosystems. PMID:23951703

  1. Period-dependent source rupture behavior of the 2011 Tohoku earthquake estimated by multi period-band Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.

    2014-12-01

    Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained by different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from multi period-band waveform data with a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal rupture behavior of this event in more detail, we introduce a new fault surface model with finer sub-fault size and estimate the source models in multi period-bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this period band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each period band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by the 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 * 16 km^2. The estimated source models in multi period-bands show the following source image: (1) A first deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short-period (10-25 s) seismic waves. (2) A shallow rupture off Miyagi at 45-90 s toward up-dip with long duration, radiating long-period (50-100 s) seismic waves. (3) A second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer-period seismic waves than the first deep rupture. (4) Deep

  2. Geometry of Pacific plate in Kuril-Japan trench zones estimated from earthquake distribution using LT-OBS network and seismic structures by marine surveys

    NASA Astrophysics Data System (ADS)

    Shinohara, M.; Yamada, T.; Kuwano, A.; Nakahigashi, K.; Machida, Y.; Mochizuki, K.; Kanazawa, T.; Takanami, T.; Hino, R.

    2009-12-01

    The seismicity of the Japan arc region is as high as that observed in other areas where oceanic plates subduct. The Japan Trench and Kuril Trench are plate convergent zones where the Pacific Plate is subducting below the Japan islands. In addition, the trench is crooked off Erimo cape, Hokkaido, and this bend is considered to give the plate boundary a complex shape. There is a possibility that an asperity of a large earthquake is controlled by the shape of the plate boundary. Associated with the plate convergence, many earthquakes occur beneath the landward slopes of the Japan Trench and the Kuril Trench. Such earthquakes are considered to occur mainly at the plate boundary between the Pacific plate and the landward plate. Therefore, obtaining a precise hypocenter distribution of earthquakes occurring in these regions is essential for estimating the geometry of the plate boundary. For several years, we performed dense seafloor earthquake observations using Long-Term Ocean Bottom Seismometers (LT-OBSs) in this region, including the aftershock observation of the 2003 Tokachi-oki earthquake, a large interplate earthquake around the Japan island arc. In the region off Nemuro, a dense seafloor observation was carried out from 2005 to 2006 for one year using LT-OBSs. In the region off Aomori, we performed the same type of seafloor earthquake observation from 2004 to 2007, for two years in total. Ninety-two LT-OBSs were used for the observations, with an instrument spacing of approximately 20 km. The LT-OBS has a three-component seismometer with a natural period of 1 Hz and a recording period of up to 1 year. As a result, we obtained precise hypocenter distributions from the region off Nemuro to the region off Aomori, and the hypocenter distribution of a huge number of earthquakes enables us to estimate the geometry of the plate boundary. Additionally, seismic surveys using OBSs and controlled source were

  3. Source parameters of the 2014 Mw 6.1 South Napa earthquake estimated from the Sentinel 1A, COSMO-SkyMed and GPS data

    NASA Astrophysics Data System (ADS)

    Guangcai, Feng; Zhiwei, Li; Xinjian, Shan; Bing, Xu; Yanan, Du

    2015-08-01

    Using the combination of two InSAR and one GPS data sets, we present a detailed source model of the 2014 Mw 6.1 South Napa earthquake, the biggest tremor to hit the San Francisco Bay Area since the 1989 Mw 6.9 Loma Prieta earthquake. The InSAR data are from the Sentinel-1A (S1A) and COSMO-SkyMed (CS) satellites, and the GPS data are provided by the Nevada Geodetic Laboratory. We first obtain the complete coseismic deformation fields of this event and estimate the InSAR data errors, then use the S1A data to constrain the fault geometry: one main fault and two short parallel sub-faults that had not been identified by field investigation. As expected, the geometry is in good agreement with the aftershock distribution. By inverting the InSAR and GPS data, we derive a three-segment slip and rake model. Our model indicates that this event was a right-lateral strike-slip earthquake with a slight reverse component on the West Napa Fault, as we estimated. The fault is ~30 km long and more than 80% of the seismic moment was released at the center of the fault segment, where the slip reached its maximum (up to 1 m). We also find that our geodetic moment magnitude is 2.07 × 10^18 Nm, corresponding to Mw 6.18, larger than that of USGS (Mw 6.0) and GCMT (Mw 6.1). This difference may partly be explained by our InSAR data including about one week of postseismic deformation and aftershocks. The results also demonstrate the high SNR and strong capability of the newly launched Sentinel-1A for earthquake studies. Furthermore, this study suggests that this earthquake has the potential to trigger nearby faults, especially the Green Valley fault, where Coulomb stress was imparted by the 2014 South Napa earthquake.

  4. Maximum Earthquake Magnitude Assessments by Japanese Government Committees (Invited)

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2013-12-01

    The 2011 Tohoku earthquake (M 9.0) was the largest earthquake in Japanese history, and such a gigantic earthquake was not foreseen around Japan. After the 2011 disaster, various government committees in Japan have discussed and assessed the maximum credible earthquake size around Japan, but their values vary without definite consensus. I will review them, with earthquakes along the Nankai Trough as an example. The Central Disaster Management Council (CDMC), under the Cabinet Office, set up a policy for future tsunami disaster mitigation. Possible future tsunamis are classified into two levels: L1 and L2. The L2 tsunamis are the largest possible tsunamis with low frequency of occurrence, for which saving people's lives is the first priority, with soft measures such as tsunami hazard maps, evacuation facilities or disaster education. The L1 tsunamis are expected to occur more frequently, typically once in a few decades, for which hard countermeasures such as breakwaters must be prepared. The assessments of L1 and L2 events are left to local governments. The CDMC also assigned M 9.1 as the maximum size of earthquake along the Nankai trough, then computed the ground shaking and tsunami inundation for several scenario earthquakes. The estimated loss is about ten times the 2011 disaster, with maximum casualties of 320,000 and economic loss of 2 trillion dollars. The Headquarters for Earthquake Research Promotion (HERP), under MEXT, was set up after the 1995 Kobe earthquake and has made long-term forecasts of large earthquakes and published national seismic hazard maps. The future probability of earthquake occurrence, for example in the next 30 years, was calculated from past data of large earthquakes, on the basis of the characteristic earthquake model. The HERP recently revised the long-term forecast of the Nankai trough earthquake; while the 30-year probability (60-70%) is similar to the previous estimate, they noted the size can be M 8 to 9, considering the variability of past
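
    For the occurrence probabilities quoted above, the simplest (time-independent) calculation is Poisson; note that HERP's Nankai forecasts actually use a renewal (time-dependent) model, so the sketch below, with an invented mean recurrence, is only the crudest illustration:

        # Poisson probability of at least one event in a time window.
        # HERP uses a renewal model; this is the simplest stand-in.
        import math

        def poisson_prob(window_yr, mean_recurrence_yr):
            return 1.0 - math.exp(-window_yr / mean_recurrence_yr)

        print(f"{poisson_prob(30.0, 100.0):.0%}")  # ~26% for a 100-yr recurrence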

  5. Flood control and loss estimation for paddy field at midstream of Chao Phraya River Basin, Thailand

    NASA Astrophysics Data System (ADS)

    Cham, T. C.; Mitani, Y.

    2015-09-01

    The 2011 Thailand flood brought serious impacts to the downstream of the Chao Phraya River Basin. The flood peak period lasted from August 2011 to the end of October 2011. This research focuses on the midstream of the Chao Phraya River Basin, the Nakhon Sawan area, which includes the confluence of the Nan and Yom Rivers and the confluence of the Ping and Nan Rivers. The main purposes of this research are to understand the flood generation, to estimate the flood volume and the loss of paddy field, and to recommend applicable flood countermeasures to ease the flood condition downstream of the Chao Phraya River Basin. To understand the flood condition, a post-analysis is conducted at Nakhon Sawan, consisting of a field survey to measure the remaining flood marks and interviews with residents to understand living conditions during the flood. The 2011 flood generation at midstream is simulated using a coupled 1D-2D hydrodynamic model to understand the flood generation during the flood peak period. The model is calibrated and validated using the measured flood marks and streamflow data received from the Royal Irrigation Department (RID); validation shows good agreement between the simulated results and actual conditions. Subsequently, 3 scenarios of flood control are simulated, and a Geographic Information System (GIS) is used to assess the spatial distribution of flood extent and the reduction of estimated paddy-field loss. In addition, the loss estimation for paddy field at midstream is evaluated using GIS with the calculated inundation depth. Results show the proposed flood control at midstream is able to reduce the estimated paddy-field loss in 26 provinces by 5%.

  6. Uncertainty of canal seepage losses estimated using flowing water balance with acoustic Doppler devices

    NASA Astrophysics Data System (ADS)

    Martin, Chad A.; Gates, Timothy K.

    2014-09-01

    Seepage losses from unlined irrigation canals amount to a large fraction of the total volume of water diverted for agricultural use, posing problems to both water conservation and water quality. Quantifying these losses and identifying areas where they are most prominent are crucial for determining the severity of seepage-related complications and for assessing the potential benefits of seepage reduction technologies and materials. A relatively easy and inexpensive way to estimate losses over an extensive segment of a canal is the flowing water balance, or inflow-outflow, method. Such estimates, however, have long been considered fraught with ambiguity due both to measurement error and to spatial and temporal variability. This paper presents a water balance analysis that evaluates uncertainty in 60 tests on two typical earthen irrigation canals. Monte Carlo simulation is used to account for a number of different sources of uncertainty. Issues of errors in acoustic Doppler flow measurement, in water level readings, and in evaporation estimates are considered. Storage change and canal wetted perimeter area, affected by variability in the canal prism, as well as lagged vs. simultaneous measurements of discharge at the inflow and outflow ends also are addressed. Mean estimated seepage loss rates for the tested canal reaches ranged from about -0.005 (gain) to 0.110 m3 s-1 per hectare of canal wetted perimeter (or -0.043 to 0.95 m d-1) with estimated probability distributions revealing substantial uncertainty. Across the tests, the average coefficient of variation was about 240% and the average 90th inter-percentile range was 0.143 m3 s-1 per hectare (1.24 m d-1). Sensitivity analysis indicates that while the predominant influence on seepage uncertainty is error in measured discharge at the upstream and downstream ends of the canal test reach, the magnitude and uncertainty of storage change due to unsteady flow also is a significant influence. Recommendations are
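
    The water-balance-with-uncertainty idea can be sketched compactly: seepage is the residual of the measured terms, and Monte Carlo perturbation of each term propagates the measurement error. The error magnitudes below are hypothetical stand-ins for the paper's acoustic-Doppler and stage-reading uncertainties:

        # Monte Carlo flowing water balance: seepage as the residual of
        # inflow, outflow, storage change and evaporation, normalized by
        # wetted-perimeter area. All distributions are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        q_in = rng.normal(2.00, 0.06, n)       # m^3/s, upstream discharge
        q_out = rng.normal(1.82, 0.06, n)      # m^3/s, downstream discharge
        d_storage = rng.normal(0.02, 0.03, n)  # m^3/s, reach storage change
        evap = rng.normal(0.005, 0.002, n)     # m^3/s, evaporation
        area_ha = 1.6                          # ha of wetted perimeter

        seepage = (q_in - q_out - d_storage - evap) / area_ha
        lo, hi = np.percentile(seepage, [5, 95])
        print(f"median {np.median(seepage):.3f}, "
              f"90% interval [{lo:.3f}, {hi:.3f}] m^3/s per ha")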

  7. The impact of uncertain precipitation data on insurance loss estimates using a Flood Catastrophe Model

    NASA Astrophysics Data System (ADS)

    Sampson, C. C.; Fewtrell, T. J.; O'Loughlin, F.; Pappenberger, F.; Bates, P. B.; Freer, J. E.; Cloke, H. L.

    2014-01-01

    Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and re-insurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than commercial products. The model consists of four components, a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge corrected rainfall radar, meteorological re-analysis data (ERA-Interim) and a satellite rainfall product (CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find these loss estimates to be highly sensitive to uncertainties propagated from the driving observational datasets, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.

  8. The impact of uncertain precipitation data on insurance loss estimates using a flood catastrophe model

    NASA Astrophysics Data System (ADS)

    Sampson, C. C.; Fewtrell, T. J.; O'Loughlin, F.; Pappenberger, F.; Bates, P. B.; Freer, J. E.; Cloke, H. L.

    2014-06-01

    Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components, a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (The Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and

  9. Routine estimate of focal depths for moderate and small earthquakes by modelling regional depth phase sPmP in eastern Canada

    NASA Astrophysics Data System (ADS)

    Ma, S.; Peci, V.; Adams, J.; McCormack, D.

    2003-04-01

    Earthquake focal depths are critical parameters for basic seismological research, seismotectonic study, seismic hazard assessment, and event discrimination. Focal depths for most earthquakes with Mw >= 4.5 can be estimated from teleseismic arrival times of P, pP and sP. For smaller earthquakes, focal depths can be estimated from Pg and Sg arrival times recorded at close stations. However, for most earthquakes in eastern Canada, teleseismic signals are too weak and seismograph spacing too sparse for depth estimation. The regional phase sPmP is very sensitive to focal depth, generally well developed at epicentral distances greater than 100 km, and clearly recorded at many stations in eastern Canada for earthquakes with mN >= 2.8. We developed a procedure to estimate focal depth routinely with sPmP. We select vertical waveforms recorded at distances from about 100 to 300 km (using Geotool and SAC2000), generate synthetic waveforms (using the reflectivity method) for a typical focal mechanism and for a suitable range of depths, and choose the depth at which the synthetic best matches the selected waveform. The software is easy to operate. For routine work an experienced operator can get a focal depth with waveform modelling within 10 minutes after the waveform is selected, or in a couple of minutes get a rough focal depth from sPmP and Pg or PmP arrival times without waveform modelling. We have confirmed our sPmP modelling results by two comparisons: (1) to depths
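
    The depth-selection step lends itself to a small grid search: correlate the observed vertical trace against reflectivity synthetics precomputed for trial depths and keep the best match. A hedged sketch (the synthetics dictionary is a stand-in for the reflectivity-method output; no attempt is made to reproduce the authors' software):

        # Pick the trial depth whose synthetic waveform best matches the
        # observed trace, by normalized cross-correlation.
        import numpy as np

        def best_depth(observed, synthetics_by_depth):
            def ncc(a, b):
                a = (a - a.mean()) / a.std()
                b = (b - b.mean()) / b.std()
                return np.max(np.correlate(a, b, mode="full")) / len(a)
            scores = {z: ncc(observed, s) for z, s in synthetics_by_depth.items()}
            return max(scores, key=scores.get), scores

        # usage (hypothetical): synthetics every 1 km from 2 to 20 km depth
        # z_best, scores = best_depth(obs, {z: synth(z) for z in range(2, 21)})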

  10. Modal analysis of thin cylindrical shells with cardboard liners and estimation of loss factors

    NASA Astrophysics Data System (ADS)

    Koruk, Hasan; Dreyer, Jason T.; Singh, Rajendra

    2014-04-01

    Cardboard liners are often installed within automotive drive shafts to reduce radiated noise over a certain frequency range. However, the precise mechanisms that yield noise attenuation are not well understood. To address this gap, a thin shell (under free boundaries) with different cardboard liner thicknesses is examined using analytical, computational and experimental methods. First, an experimental procedure is introduced to determine the modal behavior of a cylindrical shell with a cardboard liner. Then, acoustic and vibration frequency response functions are measured in an acoustic free field, and natural frequencies and the loss factors of structures are determined. The adverse effects caused by closely spaced modes during the identification of modal loss factors are minimized, and variations in measured natural frequencies and loss factors are explored. Material properties of a cardboard liner are also determined using an elastic plate treated with a thin liner. Finally, the natural frequencies and modal loss factors of a cylindrical shell with cardboard liners are estimated using analytical and computational methods, and the sources of damping mechanisms are identified. The proposed procedure can be effectively used to model a damped cylindrical shell (with a cardboard liner) to predict its vibro-acoustic response.

  11. Estimating Earthquake Hazards in the San Pedro Shelf Region, Southern California

    NASA Astrophysics Data System (ADS)

    Baher, S.; Fuis, G.; Normark, W. R.; Sliter, R.

    2003-12-01

    The San Pedro Shelf (SPS) region of the inner California Borderland offshore southern California poses a significant seismic hazard to the contiguous Los Angeles area, as a consequence of late Cenozoic compressional reactivation of mid-Cenozoic extensional faults. The extent of the hazard, however, is poorly understood because of the complexity of fault geometries and uncertainties in earthquake locations. The major faults in the region include the Palos Verdes, THUMS Huntington Beach, and Newport-Inglewood fault zones. We report here the analysis and interpretation of wide-angle seismic-reflection and refraction data recorded as part of the Los Angeles Region Seismic Experiment line 1 (LARSE 1), multichannel seismic (MCS) reflection data obtained by the USGS (1998-2000), and industry borehole stratigraphy. The onshore-offshore velocity model, which is based on forward modeling of the refracted P-wave arrival times, is used to depth migrate the LARSE 1 section. Borehole stratigraphy allows correlation of the onshore and offshore velocity models because state regulations prevent collection of deep-penetration acoustic data nearshore (within 3 mi.). Our refraction study extends the ten Brink et al. (2000) tomographic inversion of LARSE 1 data. They found high velocities (> 6 km/sec) at ~3.5 km depth from the Catalina Fault (CF) to the SPS. We find these velocities shallower (around 2 km depth) beneath the Catalina Ridge (CR) and SPS, but at depths of 2.5-3.0 km elsewhere in the study region. This change in velocity structure can provide additional constraints on the tectonic processes of this region. The structural horizons observed in the LARSE 1 reflection data are tied to adjacent MCS lines. We find localized folding and faulting at depth (~2 km) southwest of the CR and on the SPS slope. Quasi-laminar beds, possibly of pelagic origin, follow the contours of earlier folded (wavelength ~1 km) and faulted Cenozoic sedimentary and volcanic rocks. Depth to

  12. Kinematic source parameter estimation for the 1995 Mw 7.2 Gulf of Aqaba Earthquake by using InSAR and teleseismic data in a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Bathke, Hannes; Feng, Guangcai; Heimann, Sebastian; Nikkhoo, Mehdi; Zielke, Olaf; Jónsson, Sigurjon; Mai, Martin

    2016-04-01

    The 1995 Mw 7.2 Gulf of Aqaba earthquake was primarily a left-lateral strike-slip earthquake, occurring on the Dead Sea transform fault at the western border of the Arabian plate. The tectonic setting within the trans-tensional Gulf of Aqaba is complex, consisting of several en echelon transform faults and pull-apart basins. Several studies have been published, focusing on this earthquake using either InSAR or teleseismic (P and SH waves) data. However, the published finite-fault rupture models of the earthquake differ significantly. For example, it still remains unclear whether the Aqaba fault, the Aragonese fault or the Arnona fault ruptured in the event. It is also possible that several segments were activated. The main problem with past studies is that either InSAR or teleseismic data were used, but not both. Teleseismic data alone are unable to locate the event well, while the InSAR data are limited in the near field due to the earthquake's offshore location. In addition, the source fault is roughly north-south oriented and InSAR has limited sensitivity to north-south displacements. Here we improve on previous studies by using InSAR and teleseismic data jointly to constrain the source model. In addition, we use InSAR data from two additional tracks that have not been used before, which provides a more complete displacement field of the earthquake. Furthermore, in addition to the fault model parameters themselves, we also estimate the parameter uncertainties, which were not reported in previous studies. Based on these uncertainties we estimate a model-prediction covariance matrix in addition to the data covariance matrix that we then use in Bayesian inference sampling to solve for the static slip-distribution on the fault. By doing so, we avoid using a Laplacian smoothing operator, which is often subjective and may pose an unphysical constraint to the problem. Our results show that fault slip on only the Aragonese fault can satisfactorily explain the InSAR data

  13. An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1994-01-01

    Nonideal behavior has traditionally been modeled by defining efficiency (a comparison between actual and isentropic processes), with subsequent specification by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods by applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). For a preliminary verification, REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. Current methods compare quite well with the results of more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons suggest that REMEL may offer an alternative to traditional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.

  14. Completeness of the fossil record: Estimating losses due to small body size

    NASA Astrophysics Data System (ADS)

    Cooper, Roger A.; Maxwell, Phillip A.; Crampton, James S.; Beu, Alan G.; Jones, Craig M.; Marshall, Bruce A.

    2006-04-01

    Size bias in the fossil record limits its use for interpreting patterns of past biodiversity and ecological change. Using comparative size frequency distributions of exceptionally good regional records of New Zealand Holocene and Cenozoic Mollusca in museum archive collections, we derive first-order estimates of the magnitude of the bias against small body size and the effect of this bias on completeness of the fossil record. Our database of 3907 fossil species represents an original living pool of 9086 species, from which ~36% have been removed by size culling, 27% from the smallest size class (<5 mm). In contrast, non-size-related losses comprise only 21% of the total. In soft rocks, the loss of small taxa can be reduced by nearly 50% through the employment of exhaustive collection and preparation techniques.

  15. Regional Estimates of Drought-Induced Tree Canopy Loss across Texas

    NASA Astrophysics Data System (ADS)

    Schwantes, A.; Swenson, J. J.; González-Roglich, M.; Johnson, D. M.; Domec, J. C.; Jackson, R. B.

    2015-12-01

    The severe drought of 2011 killed millions of trees across the state of Texas. Drought-induced tree mortality can have significant impacts on carbon cycling, regional biophysics, and community composition. We quantified canopy cover loss across the state using remotely sensed imagery from before and after the drought at multiple scales. First, we classified ~200 orthophotos (1-m spatial resolution) from the National Agriculture Imagery Program, using a supervised maximum likelihood classification. Area of canopy cover loss in these classifications was highly correlated (R2 = 0.8) with ground estimates of canopy cover loss, measured in 74 plots across 15 different sites in Texas. These 1-m orthophoto classifications were then used to calibrate and validate coarser-scale (30-m) Landsat imagery to create wall-to-wall tree canopy cover loss maps across the state of Texas. We quantified percent dead and live canopy within each pixel of Landsat to create continuous maps of dead and live tree cover, using two approaches: (1) a zero-inflated beta distribution model and (2) a random forest algorithm. Widespread canopy loss occurred across all the major natural systems of Texas, with the Edwards Plateau region most affected. In this region, on average, 10% of the forested area was lost due to the 2011 drought. We also identified climatic thresholds that controlled the spatial distribution of tree canopy loss across the state. Surprisingly, however, there were many local hot spots of canopy loss, suggesting that climatic factors alone cannot explain the spatial patterns of canopy loss; other factors related to soil, landscape, management, and stand density likely also played a role. As extreme droughts are predicted to become more frequent with climate change, it will become important to define methods that can detect associated drought-induced tree mortality across large regions. These maps could then be used (1) to quantify impacts to carbon cycling and regional

  16. Estimation of Earthquake Source Properties Along the East African Rift Using Full Waveforms

    NASA Astrophysics Data System (ADS)

    Baker, B.; Roecker, S. W.

    2015-12-01

    Recently, the Continental Rifting in Africa: Fluids-Tectonic Interaction (CRAFTI) experiment was conducted in northern Tanzania and southern Kenya as a means to better evaluate the effect of tectonic and magmatic strain along the east African rift. Towards this goal, S. Roecker has computed a 3D structural model by joint inversion of gravity, local seismic body wave, and surface wave data. The joint inversion in turn produces a quality estimate of the compressional, shear, and density structure in the region. During tomography of the local body wave data, it was observed that some anomalously deep seismic events exist. To better quantify these events we turn to waveform modeling in this new and laterally heterogeneous structural model. Better quantification of later-arriving direct and scattered phases should provide better-resolved estimates of the event locations and lower the trade-off between source time and depth uncertainty inherent in travel time inversions. Since our main objective is testing the validity of seismic depths in the travel time inversion, we favor a grid-search approach around the current hypocenters using a method similar to that of Zhao (2006). To expedite processing, we make use of seismic reciprocity and save the strain wave fields produced by impulsive sources at receiver locations in the vicinity of the initial hypocenters. We then perform a moment tensor inversion at each location around the hypocenter, estimate the corresponding source time function, compute the resulting synthetics, and finally calculate a cumulative waveform misfit objective function for all stations. This procedure should sample the objective function well in the neighborhood of the initial hypocenters and thereby provide an avenue for resolution analysis of the event depths.

  17. Prediction of earthquake-triggered landslide event sizes

    NASA Astrophysics Data System (ADS)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

    Seismically induced landslides are a major environmental effect of earthquakes, which may significantly contribute to related losses. Moreover, in paleoseismology, landslide event sizes are an important proxy for estimating the intensity and magnitude of past earthquakes, thus allowing us to improve seismic hazard assessment over longer terms. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake occurs. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970) the combination and relative weight of the factors was calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked. One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important

  18. On Assessment and Estimation of Potential Losses due to Land Subsidence in Urban Areas of Indonesia

    NASA Astrophysics Data System (ADS)

    Abidin, Hasanuddin Z.; Andreas, Heri; Gumilar, Irwan; Sidiq, Teguh P.

    2016-04-01

    Since the various losses caused by land subsidence are also interrelated, the accurate quantification of the potential losses caused by land subsidence in urban areas is not an easy task to accomplish. The direct losses are easier to estimate than the indirect losses. For example, the direct losses due to land subsidence in Bandung were estimated to be at least 180 million USD, but the indirect losses are still unknown.

  19. Estimated ground motion from the 1994 Northridge, California, earthquake at the site of the Interstate 10 and La Cienega Boulevard bridge collapse, West Los Angeles, California

    USGS Publications Warehouse

    Boore, D.M.; Gibbs, J.F.; Joyner, W.B.; Tinsley, J.C.; Ponti, D.J.

    2003-01-01

    We have estimated ground motions at the site of a bridge collapse during the 1994 Northridge, California, earthquake. The estimated motions are based on correcting motions recorded during the mainshock 2.3 km from the collapse site for the relative site response of the two sites. Shear-wave slownesses and damping based on analysis of borehole measurements at the two sites were used in the site response analysis. We estimate that the motions at the collapse site were probably larger, by factors ranging from 1.2 to 1.6, than at the site at which the ground motion was recorded, for periods less than about 1 sec.

  20. Use of plume mapping data to estimate chlorinated solvent mass loss

    USGS Publications Warehouse

    Barbaro, J.R.; Neupane, P.P.

    2006-01-01

    Results from a plume mapping study from November 2000 through February 2001 in the sand-and-gravel surficial aquifer at Dover Air Force Base, Delaware, were used to assess the occurrence and extent of chlorinated solvent mass loss by calculating mass fluxes across two transverse cross sections and by observing changes in concentration ratios and mole fractions along a longitudinal cross section through the core of the plume. The plume mapping investigation was conducted to determine the spatial distribution of chlorinated solvents migrating from former waste disposal sites. Vertical contaminant concentration profiles were obtained with a direct-push drill rig and multilevel piezometers. These samples were supplemented with additional ground water samples collected with a minipiezometer from the bed of a perennial stream downgradient of the source areas. Results from the field program show that the plume, consisting mainly of tetrachloroethylene (PCE), trichloroethene (TCE), and cis-1,2-dichloroethene (cis-1,2-DCE), was approximately 670 m in length and 120 m in width, extended across much of the 9- to 18-m thickness of the surficial aquifer, and discharged to the stream in some areas. The analyses of the plume mapping data show that losses of the parent compounds, PCE and TCE, were negligible downgradient of the source. In contrast, losses of cis-1,2-DCE, a daughter compound, were observed in this plume. These losses very likely resulted from biodegradation, but the specific reaction mechanism could not be identified. This study demonstrates that plume mapping data can be used to estimate the occurrence and extent of chlorinated solvent mass loss from biodegradation and assess the effectiveness of natural attenuation as a remedial measure.
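
    The mass-flux part of this analysis has a simple generic form: the flux through a transverse cross section is the sum, over the cells into which the section is discretized, of concentration times specific discharge times cell area. The sketch below is illustrative only; all values and names are hypothetical, not data from the Dover study.

      import numpy as np

      # Hypothetical discretization of one transverse cross section into cells
      conc = np.array([120.0, 300.0, 80.0])        # concentration, ug/L
      darcy_flux = np.array([0.05, 0.04, 0.06])    # specific discharge, m/day
      cell_area = np.array([25.0, 25.0, 25.0])     # cell area, m^2

      # Mass flux = sum of C * q * A; multiply by 1000 L/m^3 to get ug/day
      mass_flux_ug_day = float(np.sum(conc * darcy_flux * cell_area)) * 1000.0
      print(f"mass flux through section: {mass_flux_ug_day / 1e9:.4f} kg/day")

    Comparing such flux estimates at an upgradient and a downgradient section is what indicates whether parent-compound mass is being lost along the flow path.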

  1. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a continuous stirred tank reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for performance evaluation of estimation techniques over wireless networks under realistic radio channel conditions.
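
    The problem setting can be illustrated with a toy simulation: a linear observer whose correction term is applied only when a Bernoulli 'packet received' flag is set, mimicking random measurement loss. This is a generic sketch of the setting, with a hand-picked gain for a linear system, not the paper's nonlinear observer design.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy linear system x+ = A x + w, measurement y = C x + v
      A = np.array([[0.95, 0.10], [0.00, 0.90]])
      C = np.array([[1.0, 0.0]])
      L = np.array([[0.50], [0.20]])   # observer gain, chosen by hand here

      x, xhat = np.array([1.0, -1.0]), np.zeros(2)
      p_loss = 0.3                     # probability a measurement is dropped

      for k in range(50):
          y = C @ x + 0.01 * rng.standard_normal(1)
          received = rng.random() > p_loss
          # Correct only when the packet arrives; otherwise pure prediction
          innovation = (y - C @ xhat) if received else np.zeros(1)
          xhat = A @ xhat + L @ innovation
          x = A @ x + 0.01 * rng.standard_normal(2)

      print("final estimation error:", np.linalg.norm(x - xhat))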

  2. A comparative study of commercial lithium ion battery cycle life in electric vehicle: Capacity loss estimation

    NASA Astrophysics Data System (ADS)

    Han, Xuebing; Ouyang, Minggao; Lu, Languang; Li, Jianqiu

    2014-12-01

    Lithium-ion batteries are now widely used in electric vehicles (EVs). Cycle life is among the most important characteristics of a power battery in an EV. In this report, a battery cycle life experiment is designed according to actual EV working conditions. Five different commercial lithium-ion cells are cycled alternately at 45 °C and 5 °C, and the test results are compared. Based on the cycle life experiment results and the identified battery aging mechanism, battery cycle life models are built and fitted using a genetic algorithm. The capacity loss follows a power-law relation with cycle number and an Arrhenius relation with temperature. For automotive applications, to save cost and testing time, a battery SOH (state of health) estimation method is proposed that combines on-line model-based capacity estimation with regular calibration.
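
    The stated functional form can be written down directly: capacity loss as a power law in cycle number and an Arrhenius law in absolute temperature, e.g. Q_loss = A exp(-Ea/(R T)) N^z. The sketch below evaluates such a model; the coefficient values are illustrative placeholders, not the fitted parameters from the paper.

      import numpy as np

      R_GAS = 8.314  # universal gas constant, J/(mol K)

      def capacity_loss_pct(n_cycles, temp_k, A=2.0e4, Ea=3.1e4, z=0.55):
          """Percent capacity loss: Arrhenius in temperature, power law in cycles.

          A, Ea (J/mol) and z are hypothetical values; in practice they are
          identified from cycling data (the paper fits them with a genetic
          algorithm).
          """
          return A * np.exp(-Ea / (R_GAS * temp_k)) * n_cycles ** z

      # Example: fade after 1000 cycles at the two test temperatures
      for t_c in (45.0, 5.0):
          loss = capacity_loss_pct(1000, t_c + 273.15)
          print(f"{t_c:4.0f} degC: {loss:5.2f} % capacity loss")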

  3. Rapidly Estimated Seismic Source Parameters for the 16 September 2015 Illapel, Chile Mw 8.3 Earthquake

    NASA Astrophysics Data System (ADS)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo; Koper, Keith D.

    2016-02-01

    On 16 September 2015, a great (Mw 8.3) interplate thrust earthquake ruptured offshore Illapel, Chile, producing a 4.7-m local tsunami. The last major rupture in the region was a 1943 MS 7.9 event. Seismic methods for rapidly characterizing the source process, of value for tsunami warning, were applied. The source moment tensor could be obtained robustly by W-phase inversion both within minutes (Chilean researchers had a good solution using regional data within 5 min) and within an hour using broadband seismic data. Short-period teleseismic P wave back-projections indicate northward rupture expansion from the hypocenter at a modest rupture expansion velocity of 1.5-2.0 km/s. Finite-fault inversions of teleseismic P and SH waves using that range of rupture velocities and a range of dips from 16°, consistent with the local slab geometry and some moment tensor solutions, to 22°, consistent with long-period moment tensor inversions, indicate a 180- to 240-km bilateral along-strike rupture zone with larger slip northwest to north of the epicenter (with peak slip of 7-10 m). Using a shallower fault model dip shifts slip seaward toward the trench, while a steeper dip moves it closer to the coastline. Slip separates into two patches as assumed rupture velocity increases. In all cases, localized ~5 m slip extends down-dip below the coast north of the epicenter. The seismic moment estimates for the range of faulting parameters considered vary from 3.7 × 10^21 Nm (dip 16°) to 2.7 × 10^21 Nm (dip 22°), the static stress drop estimates range from 2.6 to 3.5 MPa, and the radiated seismic energy, up to 1 Hz, is about 2.2-3.15 × 10^16 J.
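
    As a quick consistency check, the moment magnitude implied by the reported seismic moment range follows from the standard relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N m; a minimal sketch:

      import math

      def moment_magnitude(m0_nm):
          """Moment magnitude from seismic moment in N m (Hanks and Kanamori, 1979)."""
          return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

      for m0 in (2.7e21, 3.7e21):   # reported range for dips of 22 and 16 degrees
          print(f"M0 = {m0:.1e} N m  ->  Mw = {moment_magnitude(m0):.2f}")

    Both ends of the range round to Mw 8.2-8.3, consistent with the reported magnitude.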

  4. Estimating Loss-of-Coolant Accident Frequencies for the Standardized Plant Analysis Risk Models

    SciTech Connect

    S. A. Eide; D. M. Rasmuson; C. L. Atwood

    2008-09-01

    The U.S. Nuclear Regulatory Commission maintains a set of risk models covering the U.S. commercial nuclear power plants. These standardized plant analysis risk (SPAR) models include several loss-of-coolant accident (LOCA) initiating events such as small (SLOCA), medium (MLOCA), and large (LLOCA). All of these events involve a loss of coolant inventory from the reactor coolant system. In order to maintain a level of consistency across these models, initiating event frequencies generally are based on plant-type average performance, where the plant types are boiling water reactors and pressurized water reactors. For certain risk analyses, these plant-type initiating event frequencies may be replaced by plant-specific estimates. Frequencies for SPAR LOCA initiating events previously were based on results presented in NUREG/CR-5750, but the newest models use results documented in NUREG/CR-6928. The estimates in NUREG/CR-6928 are based on historical data from the initiating events database for pressurized water reactor SLOCA or an interpretation of results presented in the draft version of NUREG-1829. The information in NUREG-1829 can be used several ways, resulting in different estimates for the various LOCA frequencies. Various ways NUREG-1829 information can be used to estimate LOCA frequencies were investigated and this paper presents two methods for the SPAR model standard inputs, which differ from the method used in NUREG/CR-6928. In addition, results obtained from NUREG-1829 are compared with actual operating experience as contained in the initiating events database.

  5. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisition. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or are water (seas and lakes). The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends to the west past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul.
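
    The fringe arithmetic in this caption is easy to reproduce: for ERS C-band radar (wavelength about 56.6 mm) one fringe is half a wavelength (~28 mm) of line-of-sight motion, and dividing by the sine of the incidence angle (~23° for ERS) gives the equivalent horizontal motion of roughly 70 mm. A small sketch, treating the conversion as a pure incidence-angle projection (an assumption; the true viewing geometry varies across the scene):

      import math

      WAVELENGTH_MM = 56.6                    # ERS C-band radar wavelength
      LOS_PER_FRINGE = WAVELENGTH_MM / 2.0    # ~28 mm line-of-sight per fringe

      def horizontal_per_fringe(incidence_deg=23.0):
          """Horizontal motion per fringe, assuming purely horizontal displacement."""
          return LOS_PER_FRINGE / math.sin(math.radians(incidence_deg))

      print(f"{horizontal_per_fringe():.0f} mm horizontal motion per fringe")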

  6. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies’ Functions

    PubMed Central

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-01

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole. PMID:26805869

  7. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies' Functions.

    PubMed

    Yao, Hong; You, Zhen; Liu, Bo

    2016-02-01

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies' functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident's origin and other indirect losses. In the valuation of damage to people's life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water's recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole. PMID:26805869

  8. The USGS Earthquake Scenario Project

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Petersen, M. D.; Wald, L. A.; Frankel, A. D.; Quitoriano, V. R.; Lin, K.; Luco, N.; Mathias, S.; Bausch, D.

    2009-12-01

    The U.S. Geological Survey’s (USGS) Earthquake Hazards Program (EHP) is producing a comprehensive suite of earthquake scenarios for planning, mitigation, loss estimation, and scientific investigations. The Earthquake Scenario Project (ESP), though lacking clairvoyance, is a forward-looking project, estimating earthquake hazard and loss outcomes as they may occur one day. For each scenario event, fundamental input includes i) the magnitude and specified fault mechanism and dimensions, ii) regional Vs30 shear velocity values for site amplification, and iii) event metadata. A grid of standard ShakeMap ground motion parameters (PGA, PGV, and three spectral response periods) is then produced using the well-defined, regionally specific approach developed by the USGS National Seismic Hazard Mapping Project (NSHMP), including recent advances in empirical ground motion predictions (e.g., the NGA relations). The framework also allows for numerical (3D) ground motion computations for specific, detailed scenario analyses. Unlike NSHMP ground motions, for ESP scenarios, local rock and soil site conditions and commensurate shaking amplifications are applied based on detailed Vs30 maps where available or based on topographic slope as a proxy. The scenario event set is composed primarily of events selected from the NSHMP, though custom events are also allowed based on coordination of the ESP team with regional coordinators, seismic hazard experts, seismic network operators, and response coordinators. The event set will be harmonized with existing and future scenario earthquake events produced regionally or by other researchers. The event list includes approximately 200 earthquakes in CA, 100 in NV, dozens in each of NM, UT, WY, and a smaller number in other regions. Systematic output will include all standard ShakeMap products, including HAZUS input, GIS, KML, and XML files used for visualization, loss estimation, ShakeCast, PAGER, and for other systems. All products will be

  9. New constraints on the rupture process of the 1999 August 17 Izmit earthquake deduced from estimates of stress glut rate moments

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.

    2004-12-01

    This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to explain the scatter among the source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment released on the eastern segment of the activated fault system during the Izmit earthquake; (2) the apparent rupture velocity decreases on this segment.

  10. Bayesian Estimation of 3D Non-planar Fault Geometry and Slip: An application to the 2011 Megathrust (Mw 9.1) Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón

    2016-04-01

    Earthquake faults are generally considered planar (or of other simple geometry) in earthquake source parameter estimations. However, simplistic fault geometries likely result in biases in estimated slip distributions and increased fault slip uncertainties. In the case of large subduction zone earthquakes, these biases and uncertainties propagate into tsunami waveform modeling and other calculations related to postseismic studies, Coulomb failure stresses, etc. In this research, we parameterize 3D non-planar fault geometry for the 2011 Tohoku-Oki earthquake (Mw 9.1) and estimate these geometrical parameters along with fault slip parameters from onland and offshore GPS using Bayesian inference. This non-planar fault is formed using several third-degree polynomials in the along-strike (X-Y plane) and along-dip (X-Z plane) directions that are tied together using a triangular mesh. The coefficients of these polynomials constitute the fault geometrical parameters. We use the trench and locations of past seismicity as a priori information to constrain these fault geometrical parameters and the Laplacian to characterize the fault slip smoothness. Hyper-parameters associated with these a priori constraints are estimated empirically, and the posterior probability distribution of the model (fault geometry and slip) parameters is sampled using an adaptive Metropolis-Hastings algorithm. The across-strike uncertainties in the fault geometry (effectively the local fault location) around high-slip patches increase from 6 km at 10 km depth to about 35 km at 50 km depth, whereas around low-slip patches the uncertainties are larger (from 7 km to 70 km). Uncertainties in reverse slip are found to be higher at high-slip patches than at low-slip patches. In addition, there appears to be high correlation between adjacent patches of high slip. Our results demonstrate that we can constrain complex non-planar fault geometry together with fault slip from GPS data using past seismicity as a priori

  11. Estimation and optimization of loss-of-pair uncertainties based on PIV correlation functions

    NASA Astrophysics Data System (ADS)

    Scharnowski, Sven; Kähler, Christian J.

    2016-02-01

    The uncertainty quantification of particle image velocimetry (PIV) measurements is still an open problem, and to date, no consensus exists about the best suited approach. When the spatial resolution is not appropriate, the largest uncertainties are usually caused by flow gradients. The amount of loss-of-pairs due to out-of-plane flow motion and insufficient light-sheet overlap also causes strong uncertainties in real experiments. In this paper, we show how the amount of loss-of-pairs can be quantified using the volume of the correlation function normalized by the volume of the autocorrelation function. The findings are an important step toward a reliable uncertainty estimation of instantaneous planar velocity fields computed from PIV and stereo-PIV data. Another important consequence of the analysis is that the results allow for the optimization of PIV and stereo-PIV setups in view of minimizing the total error. In particular, it is shown that the best results (concerning the relative uncertainty) can be achieved if the out-of-plane loss-of-correlation F_o is smaller than one. The only exception is the case where the out-of-plane motion is exactly zero. The predictions are confirmed experimentally in the last part of the paper.
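
    The normalization described here can be sketched schematically in a few lines of numpy: correlate the two interrogation windows, correlate the first window with itself, and take the ratio of the (positive-part) volumes of the two correlation planes. This is an illustration of the idea only, not the authors' implementation; windows are assumed to be mean-subtracted.

      import numpy as np

      def correlation_volume_ratio(img_a, img_b):
          """Schematic loss-of-correlation estimate: vol(cross) / vol(auto).

          img_a, img_b : mean-subtracted 2-D interrogation windows from the
          two PIV exposures. Values near 1 indicate most particle pairs are
          retained; smaller values indicate loss-of-pairs.
          """
          fa, fb = np.fft.rfft2(img_a), np.fft.rfft2(img_b)
          cross = np.fft.irfft2(fa * np.conj(fb), s=img_a.shape)
          auto = np.fft.irfft2(fa * np.conj(fa), s=img_a.shape)
          volume = lambda c: float(np.clip(c, 0.0, None).sum())
          return volume(cross) / volume(auto)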

  12. Estimated Lifetime Medical and Work-Loss Costs of Fatal Injuries--United States, 2013.

    PubMed

    Florence, Curtis; Simon, Thomas; Haegerich, Tamara; Luo, Feijun; Zhou, Chao

    2015-10-01

    Injury-associated deaths have substantial economic consequences. In 2013, unintentional injury was the fourth leading cause of death, suicide was the tenth, and homicide was the sixteenth; these three causes accounted for approximately 187,000 deaths in the United States. To assess the economic impact of fatal injuries, CDC analyzed death data from the National Vital Statistics System for 2013, along with cost of injury data using the Web-Based Injury Statistics Query and Reporting System. This report updates a previous study that analyzed death data from the year 2000, and employs recently revised methodology for determining the costs of injury outcomes, which uses the most current economic data and incorporates improvements for estimating medical costs associated with injury. Number of deaths, crude and age-specific death rates, and total lifetime work-loss costs and medical costs were calculated for fatal injuries by sex, age group, intent (intentional versus unintentional), and mechanism of injury. During 2013, the rate of fatal injury was 61.0 per 100,000 population, with combined medical and work-loss costs exceeding $214 billion. Costs from fatal injuries represent approximately one third of the total $671 billion medical and work-loss costs associated with all injuries in 2013. The magnitude of the economic burden associated with injury-associated deaths underscores the need for effective prevention. PMID:26421530

  13. Sound absorption coefficient in situ: an alternative for estimating soil loss factors.

    PubMed

    Freire, Rosane; Meletti de Abreu, Marco Henrique; Okada, Rafael Yuri; Soares, Paulo Fernando; Granhen Tavares, Célia Regina

    2015-01-01

    The relationship between the sound absorption coefficient and factors of the Universal Soil Loss Equation (USLE) was determined in a section of the Maringá Stream basin, Paraná State, by using erosion plots. In the field, four erosion plots were built on a reduced scale, with dimensions of 2.0 × 12.5 m. With respect to plot coverage, one was kept with bare soil and the others contained forage grass (Brachiaria), corn and wheat crops, respectively. Planting was performed without any type of conservation practice in an area with a 9% slope. A sedimentation tank was placed at the end of each plot to collect the material transported. For the acoustic system, pink noise was used in the proposed monitoring to collect information on incident and reflected sound pressure levels. In general, the soil loss values obtained confirmed that 94.3% of the material exported to the basin water came from the bare soil plot, 2.8% from the corn plot, 1.8% from the wheat plot, and 1.1% from the forage grass plot. With respect to the acoustic monitoring, results indicated that at 16 kHz the erosion plot coverage type had a significant influence on the sound absorption coefficient. High correlation coefficients were found in estimations of the A and C factors of the USLE, confirming that the acoustic technique is feasible for the determination of soil loss directly in the field. PMID:24972796
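
    For context, the USLE predicts average annual soil loss as the product of six factors, A = R K L S C P (with L and S often combined into a single LS factor); the acoustic measurements here are used to estimate the A and C factors. A minimal sketch with purely illustrative values:

      def usle_soil_loss(R, K, LS, C, P):
          """Universal Soil Loss Equation: A = R * K * LS * C * P.

          R  : rainfall erosivity          K : soil erodibility
          LS : slope length and steepness  C : cover-management factor
          P  : support-practice factor (1.0 with no conservation practice)
          Units of A follow from those chosen for R and K (e.g. t/ha/yr).
          """
          return R * K * LS * C * P

      # Hypothetical values only -- contrasting bare soil with grass cover
      print(usle_soil_loss(R=6000, K=0.025, LS=1.2, C=1.0, P=1.0))   # bare soil
      print(usle_soil_loss(R=6000, K=0.025, LS=1.2, C=0.01, P=1.0))  # forage grass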

  14. Photogrammetrically Derived Estimates of Glacier Mass Loss in the Upper Susitna Drainage Basin, Alaska Range, Alaska

    NASA Astrophysics Data System (ADS)

    Wolken, G. J.; Whorton, E.; Murphy, N.

    2014-12-01

    Glaciers in Alaska are currently experiencing some of the highest rates of mass loss on Earth, with mass wastage rates accelerating during the last several decades. Glaciers, and other components of the hydrologic cycle, are expected to continue to change in response to anticipated future atmospheric warming, thus affecting the quantity and timing of river runoff. This study uses sequential digital elevation model (DEM) analysis to estimate the mass loss of glaciers in the upper Susitna drainage basin, Alaska Range, for the purpose of validating model simulations of past runoff changes. We use mainly stereo optical airborne and satellite data for several epochs between 1949 and 2014, and employ traditional stereo-photogrammetric and structure-from-motion processing techniques to derive DEMs of the upper Susitna basin glaciers. This work aims to improve the record of glacier change in the central Alaska Range, and serves as a critical validation dataset for a hydrological model that simulates the potential effects of future glacier mass loss on changes in river runoff over the lifespan of the proposed Susitna-Watana Hydroelectric Project.

  15. Estimating nitrogen losses in furrow irrigated soil amended by compost using HYDRUS-2D model

    NASA Astrophysics Data System (ADS)

    Iqbal, Shahid; Guber, Andrey; Zaman Khan, Haroon; ullah, Ehsan

    2014-05-01

    Furrow irrigation commonly results in high nitrogen (N) losses from the soil profile via deep infiltration. Estimation of such losses and their reduction is not a trivial task because furrow irrigation creates a highly nonuniform distribution of soil water that leads to preferential water and N fluxes in the soil profile. Direct measurements of such fluxes are impractical. The objective of this study was to assess the applicability of the HYDRUS-2D model for estimating the nitrogen balance in manure-amended soil under furrow irrigation. Field experiments were conducted in a sandy loam soil amended with poultry manure compost (PMC) and pressmud compost (PrMC) fertilizers. The PMC and PrMC contained 2.5% and 0.9% N and were applied at 5 rates: 2, 4, 6, 8 and 10 ton/ha. Plots were irrigated starting on the 26th day after planting, using furrows with a 1:1 ridge-to-furrow aspect ratio. Irrigation depths were 7.5 cm and the time interval between irrigations varied from 8 to 15 days. Results of the field experiments showed that approximately the same corn yield was obtained with considerably higher N application rates using PMC than using PrMC as a fertilizer. The HYDRUS-2D model was implemented to evaluate N fluxes in soil amended with PMC and PrMC fertilizers. Nitrogen exchange between two pools of organic N (compost and soil) and two pools of mineral N (soil NH4-N and soil NO3-N) was modeled using mineralization and nitrification reactions. Sources of mineral N losses from the soil profile included denitrification, root N uptake and leaching with deep infiltration of water. HYDRUS-2D simulations showed that the observed increases in N root water uptake and corn yields associated with compost application could not be explained by the amount of N added to the soil profile with the compost. Predicted N uptake by roots significantly underestimated the field data. Good agreement between simulated and field-estimated values of N root uptake was achieved when the rate of organic N mineralization was increased

  16. Estimating annual soil carbon loss in agricultural peatland soils using a nitrogen budget approach.

    PubMed

    Kirk, Emilie R; van Kessel, Chris; Horwath, William R; Linquist, Bruce A

    2015-01-01

    Around the world, peatland degradation and soil subsidence is occurring where these soils have been converted to agriculture. Since initial drainage in the mid-1800s, continuous farming of such soils in the California Sacramento-San Joaquin Delta (the Delta) has led to subsidence of up to 8 meters in places, primarily due to soil organic matter (SOM) oxidation and physical compaction. Rice (Oryza sativa) production has been proposed as an alternative cropping system to limit SOM oxidation. Preliminary research on these soils revealed high N uptake by rice in N fertilizer omission plots, which we hypothesized was the result of SOM oxidation releasing N. Testing this hypothesis, we developed a novel N budgeting approach to assess annual soil C and N loss based on plant N uptake and fallow season N mineralization. Through field experiments examining N dynamics during growing season and winter fallow periods, a complete annual N budget was developed. Soil C loss was calculated from SOM-N mineralization using the soil C:N ratio. Surface water and crop residue were negligible in the total N uptake budget (3 - 4 % combined). Shallow groundwater contributed 24 - 33 %, likely representing subsurface SOM-N mineralization. Assuming 6 and 25 kg N ha-1 from atmospheric deposition and biological N2 fixation, respectively, our results suggest 77 - 81 % of plant N uptake (129 - 149 kg N ha-1) was supplied by SOM mineralization. Considering a range of N uptake efficiency from 50 - 70 %, estimated net C loss ranged from 1149 - 2473 kg C ha-1. These findings suggest that rice systems, as currently managed, reduce the rate of C loss from organic delta soils relative to other agricultural practices. PMID:25822494
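
    The budget arithmetic reduces to two steps: scale plant N uptake by the SOM-supplied fraction and the uptake efficiency to get mineralized SOM-N, then multiply by the soil C:N ratio to get C loss. A generic sketch; the C:N value below is an assumed placeholder (the abstract does not state it), so the output is illustrative rather than a reproduction of the paper's figures.

      def annual_c_loss(plant_n_uptake, som_fraction, uptake_eff, soil_c_to_n):
          """Soil C loss (kg C/ha) inferred from an N budget.

          plant_n_uptake : total plant N uptake, kg N/ha
          som_fraction   : fraction of uptake supplied by SOM mineralization
          uptake_eff     : fraction of mineralized N actually taken up
          soil_c_to_n    : soil C:N mass ratio (assumed here)
          """
          mineralized_n = plant_n_uptake * som_fraction / uptake_eff
          return mineralized_n * soil_c_to_n

      # Reported ranges for uptake, SOM fraction and efficiency; C:N assumed 10
      print(annual_c_loss(129, 0.77, 0.70, 10))   # lower-end combination
      print(annual_c_loss(149, 0.81, 0.50, 10))   # upper-end combination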

  17. Estimating Annual Soil Carbon Loss in Agricultural Peatland Soils Using a Nitrogen Budget Approach

    PubMed Central

    Kirk, Emilie R.; van Kessel, Chris; Horwath, William R.; Linquist, Bruce A.

    2015-01-01

    Around the world, peatland degradation and soil subsidence is occurring where these soils have been converted to agriculture. Since initial drainage in the mid-1800s, continuous farming of such soils in the California Sacramento-San Joaquin Delta (the Delta) has led to subsidence of up to 8 meters in places, primarily due to soil organic matter (SOM) oxidation and physical compaction. Rice (Oryza sativa) production has been proposed as an alternative cropping system to limit SOM oxidation. Preliminary research on these soils revealed high N uptake by rice in N fertilizer omission plots, which we hypothesized was the result of SOM oxidation releasing N. Testing this hypothesis, we developed a novel N budgeting approach to assess annual soil C and N loss based on plant N uptake and fallow season N mineralization. Through field experiments examining N dynamics during growing season and winter fallow periods, a complete annual N budget was developed. Soil C loss was calculated from SOM-N mineralization using the soil C:N ratio. Surface water and crop residue were negligible in the total N uptake budget (3 – 4 % combined). Shallow groundwater contributed 24 – 33 %, likely representing subsurface SOM-N mineralization. Assuming 6 and 25 kg N ha-1 from atmospheric deposition and biological N2 fixation, respectively, our results suggest 77 – 81 % of plant N uptake (129 – 149 kg N ha-1) was supplied by SOM mineralization. Considering a range of N uptake efficiency from 50 – 70 %, estimated net C loss ranged from 1149 – 2473 kg C ha-1. These findings suggest that rice systems, as currently managed, reduce the rate of C loss from organic delta soils relative to other agricultural practices. PMID:25822494

  18. A new pan-tropical estimate of carbon loss in natural and managed forests in 2000-2012

    NASA Astrophysics Data System (ADS)

    Tyukavina, A.; Baccini, A.; Hansen, M.; Potapov, P.; Stehman, S. V.; Houghton, R. A.; Krylov, A.; Turubanova, S.; Goetz, S. J.

    2015-12-01

    Clearing of tropical forests, which includes semi-permanent conversion of forests to other land uses (deforestation) and more temporary forest disturbances, is a significant source of carbon emissions. Previous estimates of tropical forest carbon loss vary among studies due to differences in definitions, methodologies and data inputs. The best currently available satellite-derived datasets, such as the 30-m forest cover loss map by Hansen et al. (2013), may be used to produce methodologically consistent carbon loss estimates for the entire tropical region, but forest cover loss area derived from maps is biased due to classification errors. In this study we produced an unbiased estimate of forest cover loss area from a validation sample, as suggested by good practice recommendations. Stratified random sampling was implemented with forest carbon stock strata defined based on Landsat-derived tree canopy cover, height, intactness (Potapov et al., 2008) and forest cover loss (Hansen et al., 2013). The largest difference between the sample-based and Hansen et al. (2013) forest loss area estimates occurred in humid tropical Africa. This result supports the earlier finding (Tyukavina et al., 2013) that Landsat-based forest cover loss maps may significantly underestimate loss area in regions with small-scale forest dynamics while performing well in regions with large industrial forest clearing, such as Brazil and Indonesia (where differences between sample-based and map estimates were within 10%). To produce final carbon loss estimates, sample-based forest loss area estimates for each stratum were related to GLAS-lidar-derived forest biomass (Baccini et al., 2012). Our sample-based results distinguish gross losses of aboveground carbon from natural forests (0.59 PgC/yr), which include primary, mature secondary forests and natural woodlands, and from managed forests (0.43 PgC/yr), which include plantations, agroforestry systems and areas of subsistence agriculture
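
    The sample-based estimator referred to here is the standard stratified one: the reference loss proportion observed in each stratum's validation sample, weighted by the stratum's map area. A minimal sketch with invented numbers:

      import numpy as np

      def stratified_area_estimate(stratum_areas_ha, sample_loss_fractions):
          """Unbiased loss-area estimate from a stratified random sample.

          stratum_areas_ha      : total map area of each stratum (ha)
          sample_loss_fractions : fraction of sampled pixels per stratum that
                                  are forest loss in the reference data
          """
          areas = np.asarray(stratum_areas_ha, dtype=float)
          p = np.asarray(sample_loss_fractions, dtype=float)
          return float(np.sum(areas * p))

      # Invented example with three carbon-stock strata
      areas = [5.0e6, 2.0e6, 0.5e6]       # ha
      loss_frac = [0.002, 0.015, 0.40]    # from the validation sample
      print(stratified_area_estimate(areas, loss_frac), "ha of forest loss")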

  19. Estimating Losses from Volcanic Ash in case of a Mt. Baekdu Eruption

    NASA Astrophysics Data System (ADS)

    Yu, Soonyoung; Yoon, Seong-Min; Kim, Sung-Wook; Choi, Eun-Kyeong

    2014-05-01

    We present preliminary estimates of economic losses in South Korea in case of a Mt. Baekdu eruption. The Korean peninsula has Mt. Baekdu in North Korea, which, according to volcanologists, will soon enter an active phase. The anticipated eruption will be explosive given the viscous and gassy silica-rich magma, and is expected to be one of the largest in recent millennia. We aim to assess the impacts of this eruption on South Korea and help the government prepare for volcanic disasters. In particular, the economic impact from volcanic ash is estimated given the distance from Mt. Baekdu to South Korea. In order to scientifically estimate losses from volcanic ash, we need the volcanic ash thickness, an inventory database, and damage functions relating ash thickness to damage ratios for each inventory item. We use the volcanic ash thickness calculated by other research groups in Korea, who estimated the ash thickness for each eruption scenario using average wind fields. Damage functions are built using historical damage data from around the world, and the inventory database is obtained from available digital maps in Korea. According to the preliminary results, the economic impact from volcanic ash is not significant because the ash is rarely deposited in South Korea under general weather conditions. However, the ash can impact human health and the environment. Also, worst-case scenarios can have significant economic impacts in Korea and may result in global issues. Acknowledgement: This research was supported by a grant [NEMA-BAEKDUSAN-2012-1-3] from the Volcanic Disaster Preparedness Research Center sponsored by the National Emergency Management Agency of Korea.
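
    The loss calculation described has a simple generic structure: for each inventory item, look up the scenario ash thickness at its location, convert thickness to a damage ratio through a vulnerability (damage) function, and multiply by the item's value. A hedged sketch; the damage-function shape and every number below are placeholders, not the project's calibrated curves or data.

      import math

      def damage_ratio(ash_thickness_mm, scale_mm=100.0):
          """Toy damage function: ratio rises smoothly from 0 toward 1 with thickness."""
          return 1.0 - math.exp(-ash_thickness_mm / scale_mm)

      def ash_loss(inventory, thickness_by_region):
          """Total loss = sum over items of value * damage_ratio(thickness)."""
          return sum(value * damage_ratio(thickness_by_region.get(region, 0.0))
                     for region, value in inventory)

      # Invented inventory (region, replacement value) and scenario ash depths
      inventory = [("RegionA", 120.0), ("RegionB", 300.0)]   # billion KRW
      thickness = {"RegionA": 40.0, "RegionB": 5.0}          # mm
      print(round(ash_loss(inventory, thickness), 1), "billion KRW estimated loss")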

  20. Estimating losses in an entanglement concentration scheme using the phenomenological operator approach to dissipation in cavity quantum electrodynamics

    NASA Astrophysics Data System (ADS)

    de Almeida, N. G.; Moussa, M. H. Y.; Napolitano, R. d. J.

    2011-08-01

    In a previous paper, we developed a phenomenological-operator technique aiming to simplify the estimate of losses due to dissipation in cavity quantum electrodynamics. In this paper, we apply that technique to estimate losses during an entanglement concentration process in the context of dissipative cavities. In addition, some results, previously used without proof to justify our phenomenological-operator approach, are now formally derived, including an equivalent way to formulate the Wigner-Weisskopf approximation.

  1. Missing Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Hough, S. E.; Martin, S.

    2013-12-01

    The occurrence of three earthquakes with Mw greater than 8.8, and six earthquakes larger than Mw8.5, since 2004 has raised interest in the long-term rate of great earthquakes. Past studies have focused on rates since 1900, which roughly marks the start of the instrumental era. Yet substantial information is available for earthquakes prior to 1900. A re-examination of the catalog of global historical earthquakes reveals a paucity of Mw ≥ 8.5 events during the 18th and 19th centuries compared to the rate during the instrumental era (Hough, 2013, JGR), suggesting that the magnitudes of some documented historical earthquakes have been underestimated, with approximately half of all Mw≥8.5 earthquakes missing or underestimated in the 19th century. Very large (Mw≥8.5) magnitudes have traditionally been estimated for historical earthquakes only from tsunami observations given a tautological assumption that all such earthquakes generate significant tsunamis. Magnitudes would therefore tend to be underestimated for deep megathrust earthquakes that generated relatively small tsunamis, deep earthquakes within continental collision zones, earthquakes that produced tsunamis that were not documented, outer rise events, and strike-slip earthquakes such as the 11 April 2012 Sumatra event. We further show that, where magnitudes of historical earthquakes are estimated from earthquake intensities using the Bakun and Wentworth (1997, BSSA) method, magnitudes of great earthquakes can be significantly underestimated. Candidate 'missing' great 19th century earthquakes include the 1843 Lesser Antilles earthquake, which recent studies suggest was significantly larger than initial estimates (Feuillet et al., 2012, JGR; Hough, 2013), and an 1841 Kamchatka event, for which Mw9 was estimated by Gusev and Shumilina (2004, Izv. Phys. Solid Ear.). We consider cumulative moment release rates during the 19th century compared to that during the 20th and 21st centuries, using both the Hough

  2. ESTIMATING SURFACE RUNOFF LOSS OF DISSOLVED PHOSPHORUS FROM MANURE APPLICATIONS TO CROPLAND FOR THE WISCONSIN PHOSPHOROUS INDEX

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Wisconsin Phosphorus (P) Index is a field-level runoff P loss risk assessment tool for evaluating agricultural management practices. It assigns an annual risk ranking to a field by estimating annual sediment-bound and dissolved P losses to the nearest surface water. On cropland with no recent ma...

  3. Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization.

    PubMed

    Tangkaratt, Voot; Xie, Ning; Sugiyama, Masashi

    2015-01-01

    Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challenging in high-dimensional space. A naive approach to coping with high dimensionality is to first perform dimensionality reduction (DR) and then execute CDE. However, a two-step process does not perform well in practice because the error incurred in the first DR step can be magnified in the second CDE step. In this letter, we propose a novel single-shot procedure that performs CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR as the problem of minimizing a squared-loss variant of conditional entropy, and this is solved using CDE. Thus, an additional CDE step is not needed after DR. We demonstrate the usefulness of the proposed method through extensive experiments on various data sets, including humanoid robot transition and computer art. PMID:25380340
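
    The letter argues against exactly the naive pipeline sketched below; this is a minimal version of that two-step baseline (PCA for DR, then a kernel-density ratio for CDE) on synthetic data, not the authors' integrated estimator.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

# Naive two-step baseline: reduce the input dimension first, then estimate
# the conditional density p(y|z) as the ratio p(z, y) / p(z) of kernel
# density estimates. Errors in the DR step propagate into the CDE step.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                     # high-dimensional input
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)   # output depends on one direction

z = PCA(n_components=1).fit_transform(X)           # step 1: dimensionality reduction

kde_joint = KernelDensity(bandwidth=0.3).fit(np.column_stack([z[:, 0], y]))
kde_marg = KernelDensity(bandwidth=0.3).fit(z)

def cond_density(z0, y0):
    """p(y0 | z0) estimated as p(z0, y0) / p(z0)."""
    joint = np.exp(kde_joint.score_samples([[z0, y0]]))[0]
    marg = np.exp(kde_marg.score_samples([[z0]]))[0]
    return joint / marg

print(cond_density(0.0, 0.0))
```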

  4. Estimating landslide losses - preliminary results of a seven-State pilot project

    USGS Publications Warehouse

    Highland, Lynn M.

    2006-01-01

    reliable information on economic losses associated with landslides. Each State survey examined the availability, distribution, and inherent uncertainties of economic loss data in their study areas. Their results provide the basis for identifying the most fruitful methods of collecting landslide loss data nationally, using methods that are consistent and provide common goals. These results can enhance and establish the future directions of scientific investigation priorities by convincingly documenting landslide risks and consequences that are universal throughout the 50 States. This report is organized as follows: A general summary of the pilot project history, goals, and preliminary conclusions from the Lincoln, Neb. workshop are presented first. Internet links are then provided for each State report, which appear on the internet in PDF format and which have been placed at the end of this open-file report. A reference section follows the reports, and, lastly, an Appendix of categories of landslide loss and sources of loss information is included for the reader's information. Please note: The Oregon Geological Survey has also submitted a preliminary report on indirect loss estimation methodology, which is also linked with the others. Each State report is unique and presented in the form in which it was submitted, having been independently peer reviewed by each respective State survey. As such, no universal 'style' or format has been adopted as there have been no decisions on which inventory methods will be recommended to the 50 states, as of this writing. The reports are presented here as information for decision makers, and for the record; although several reports provide recommendations on inventory methods that could be adopted nationwide, currently no decisions have been made on adopting a uniform methodology for the States.

  5. Everyday Earthquakes.

    ERIC Educational Resources Information Center

    Svec, Michael

    1996-01-01

    Describes methods to access current earthquake information from the National Earthquake Information Center. Enables students to build genuine learning experiences using real data from earthquakes that have recently occurred. (JRH)

  6. National-scale estimation of gross forest aboveground carbon loss: a case study of the Democratic Republic of the Congo

    NASA Astrophysics Data System (ADS)

    Tyukavina, A.; Stehman, S. V.; Potapov, P. V.; Turubanova, S. A.; Baccini, A.; Goetz, S. J.; Laporte, N. T.; Houghton, R. A.; Hansen, M. C.

    2013-12-01

    Recent advances in remote sensing enable the mapping and monitoring of carbon stocks without relying on extensive in situ measurements. The Democratic Republic of the Congo (DRC) is among the countries where national forest inventories (NFI) are either non-existent or out of date. Here we demonstrate a method for estimating national-scale gross forest aboveground carbon (AGC) loss and associated uncertainties using remotely sensed forest cover loss and biomass carbon density data. Lidar data were used as a surrogate for NFI plot measurements to estimate carbon stocks and AGC loss based on forest type and activity data derived using time-series multispectral imagery. Specifically, DRC forest type and loss from the FACET (Forêts d’Afrique Centrale Evaluées par Télédétection) product, created using Landsat data, were related to carbon data derived from the Geoscience Laser Altimeter System (GLAS). Validation data for FACET forest area loss were created at a 30-m spatial resolution and compared to the 60-m spatial resolution FACET map. We produced two gross AGC loss estimates for the DRC for the last decade (2000-2010): a map-scale estimate (53.3 ± 9.8 Tg C yr-1) accounting for whole-pixel classification errors in the 60-m resolution FACET forest cover change product, and a sub-grid estimate (72.1 ± 12.7 Tg C yr-1) that took into account 60-m cells that experienced partial forest loss. Our sub-grid forest cover and AGC loss estimates, which included smaller-scale forest disturbances, exceed published assessments. Results raise the issue of scale in forest cover change mapping and validation, and subsequent impacts on remotely sensed carbon stock change estimation, particularly for smallholder dominated systems such as the DRC.
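
    A schematic of the accounting implied above, assuming per-cell loss fractions and lidar-derived carbon densities; the numbers are illustrative placeholders, not FACET or GLAS values. The fractional entries mirror the sub-grid (partial-loss) case that the map-scale, whole-pixel estimate misses.

```python
import numpy as np

# Gross AGC loss = sum over grid cells of (cell area x fraction of forest
# lost x aboveground carbon density). All values are illustrative.

cell_area_ha = 0.36                                      # one 60 m x 60 m cell
loss_fraction = np.array([0.0, 1.0, 0.25, 0.5])          # whole-pixel vs partial loss
carbon_t_per_ha = np.array([120.0, 95.0, 140.0, 80.0])   # e.g. lidar-derived

agc_loss_t = np.sum(cell_area_ha * loss_fraction * carbon_t_per_ha)
print(f"gross AGC loss: {agc_loss_t:.1f} t C")
```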

  7. Distributed soil loss estimation system including ephemeral gully development and tillage erosion

    NASA Astrophysics Data System (ADS)

    Vieira, D. A. N.; Dabney, S. M.; Yoder, D. C.

    2015-03-01

    A new modelling system is being developed to provide spatially-distributed runoff and soil erosion predictions for conservation planning that integrates the 2D grid-based variant of the Revised Universal Soil Loss Equation, version 2 model (RUSLER), the Ephemeral Gully Erosion Estimator (EphGEE), and the Tillage Erosion and Landscape Evolution Model (TELEM). Digital representations of the area of interest (field, farm or entire watershed) are created using high-resolution topography and data retrieved from established databases of soil properties, climate, and agricultural operations. The system utilizes a library of processing tools (LibRaster) to deduce surface drainage from topography, determine the location of potential ephemeral gullies, and subdivide the study area into catchments for calculations of runoff and sheet-and-rill erosion using RUSLER. EphGEE computes gully evolution based on local soil erodibility and flow and sediment transport conditions. Annual tillage-induced morphological changes are computed separately by TELEM.

  8. Losses estimation in transonic wet steam flow through linear blade cascade

    NASA Astrophysics Data System (ADS)

    Dykas, Sławomir; Majkut, Mirosław; Strozik, Michał; Smołka, Krystian

    2015-04-01

    Experimental investigations of non-equilibrium spontaneous condensation in transonic steam flow were carried out in a linear blade cascade. The linear cascade consists of the stator blades of the last stage of a low-pressure steam turbine. The experimental test section is part of a small-scale steam power plant located at the Silesian University of Technology in Gliwice. The steam parameters at the test section inlet correspond to the real conditions in the low-pressure part of a 200 MWe steam turbine. The losses in the cascade were estimated using the measured static pressure and temperature behind the cascade and the total parameters at the inlet. The static pressure measurements on the blade surface, as well as the Schlieren pictures, were used to assess the flow field in the linear cascade of steam turbine stator blades.

  9. An Estimation of the Climatic Effects of Stratospheric Ozone Losses during the 1980s. Appendix K

    NASA Technical Reports Server (NTRS)

    MacKay, Robert M.; Ko, Malcolm K. W.; Shia, Run-Lie; Yang, Yajaing; Zhou, Shuntai; Molnar, Gyula

    1997-01-01

    In order to study the potential climatic effects of the ozone hole more directly and to assess the validity of previous lower-resolution model results, the latest high-spatial-resolution version of the Atmospheric and Environmental Research, Inc., seasonal radiative dynamical climate model is used to simulate the climatic effects of ozone changes relative to the other greenhouse gases. The steady-state climatic effect of a sustained decrease in lower stratospheric ozone, similar in magnitude to the observed 1979-90 decrease, is estimated by comparing three steady-state climate simulations: I) 1979 greenhouse gas concentrations and 1979 ozone, II) 1990 greenhouse gas concentrations with 1979 ozone, and III) 1990 greenhouse gas concentrations with 1990 ozone. The simulated increase in surface air temperature resulting from non-ozone greenhouse gases is 0.272 K. When changes in lower stratospheric ozone are included, the greenhouse warming is 0.165 K, which is approximately 39% lower than when ozone is fixed at the 1979 concentrations. Ozone perturbations at high latitudes result in a cooling of the surface-troposphere system that is greater (by a factor of 2.8) than that estimated from the change in radiative forcing resulting from ozone depletion and the model's 2 x CO2 climate sensitivity. The results suggest that changes in meridional heat transport from low to high latitudes, combined with the decrease in the infrared opacity of the lower stratosphere, are very important in determining the steady-state response to high-latitude ozone losses. The 39% compensation in greenhouse warming resulting from lower stratospheric ozone losses is also larger than the 28% compensation simulated previously by the lower-resolution model. The higher-resolution model is able to resolve the high-latitude features of the assumed ozone perturbation, which are important in determining the overall climate sensitivity to these perturbations.

  10. Analytical model for estimation of eddy current and power loss in conducting plate and its application

    NASA Astrophysics Data System (ADS)

    Sinha, Gautam; Prabhu, S. S.

    2011-06-01

    A model is developed to study the eddy current induced in a thin conducting but nonmagnetic plate of finite size when exposed to a time-varying magnetic field. The applied field may be uniform or vary in space. This model can accurately estimate the eddy current contours in the plate and the loss due to eddy currents. Power losses for plates of various dimensions and at different frequencies are calculated to establish the accuracy of the model. We have also calculated the magnetic field generated by the induced eddy current when a plate of finite size is placed between the two parallel poles of a dipole magnet made of magnetic material of very high permeability. The force acting on the plate due to the interaction of the induced eddy current and the applied external field is also calculated. The model can predict the time variation of the force and eddy current. The model may be applicable to understanding the effect of eddy currents on the vacuum chamber of an accelerator. Various other applications where this model is useful are also reported. The results are compared against the results obtained by a simulation using a finite-element-based code. Here a rectangular plate is considered, but the model is applicable to other geometries as well.
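
    For orientation, a sketch of the classical thin-lamination eddy-current loss formula, valid for a uniform sinusoidal field and a plate thin compared with the skin depth; this is the textbook limiting case, not the finite-plate model developed in the paper.

```python
import math

# Classical thin-plate eddy-current loss density (uniform sinusoidal field,
# thickness small compared to the skin depth):
#   P_v = pi^2 f^2 B^2 d^2 sigma / 6   [W/m^3]

def eddy_loss_density(f_hz, b_peak_t, thickness_m, sigma_s_per_m):
    return (math.pi**2 * f_hz**2 * b_peak_t**2
            * thickness_m**2 * sigma_s_per_m) / 6.0

# Example: a 1 mm copper plate in a 50 Hz, 0.1 T peak field
p = eddy_loss_density(50.0, 0.1, 1e-3, 5.8e7)
print(f"{p:.0f} W/m^3")
```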

  11. Source estimate and tsunami forecast from far-field deep-ocean tsunami waveforms—The 27 February 2010 Mw 8.8 Maule earthquake

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Masahiro; Watada, Shingo; Fujii, Yushiro; Satake, Kenji

    2016-01-01

    We inverted the 2010 Maule earthquake tsunami waveforms recorded at DART (Deep-ocean Assessment and Reporting Tsunamis) stations in the Pacific Ocean by taking into account the effects of the seawater compressibility, elasticity of the solid Earth, and gravitational potential change. These effects slow down the tsunami speed and consequently move the slip offshore or updip direction, consistent with the slip distribution obtained by a joint inversion of DART, tide gauge, GPS, and coastal geodetic data. Separate inversions of only near-field DART data and only far-field DART data produce similar slip distributions. The former demonstrates that accurate tsunami arrival times and waveforms of trans-Pacific tsunamis can be forecast in real time. The latter indicates that if the tsunami source area is as large as the 2010 Maule earthquake, the tsunami source can be accurately estimated from the far-field deep-ocean tsunami records without near-field data.

  12. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake"

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake, with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and approximately one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure, and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the building losses incurred in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers, and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would instead be calculated directly on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing
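
    A minimal sketch of how the average annualized loss discussed above follows from a stochastic event set: each scenario contributes its loss weighted by its annual rate of occurrence. The rates and losses below are hypothetical, not the TCIP figures quoted in the abstract.

```python
# Average annualized loss (AAL) from a stochastic event set. All values
# are illustrative placeholders.

events = [
    # (annual rate of occurrence, building loss if the event happens)
    (0.02,  10.0e9),   # rare large earthquake
    (0.10,   0.5e9),   # moderate event
    (0.50,   0.05e9),  # frequent small event
]

aal = sum(rate * loss for rate, loss in events)
print(f"average annualized loss: {aal / 1e6:.0f} million")
```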

  13. Earthquake prediction comes of age

    SciTech Connect

    Lindth, A. (Office of Earthquakes, Volcanoes, and Engineering)

    1990-02-01

    In the last decade, scientists have begun to estimate the long-term probability of major earthquakes along the San Andreas fault. In 1985, the U.S. Geological Survey (USGS) issued the first official U.S. government earthquake prediction, based on research along a heavily instrumented 25-kilometer section of the fault in sparsely populated central California. Known as the Parkfield segment, this section of the San Andreas had experienced its last big earthquake, a magnitude 6, in 1966. Estimated probabilities of major quakes along the entire San Andreas, obtained by a working group of California earthquake experts using new geologic data and careful analysis of past earthquakes, are reported.

  14. Ground deformation associated with the 2008 Sichuan Earthquake in China, estimated using a SAR offset-tracking method

    NASA Astrophysics Data System (ADS)

    Kobayashi, T.; Takada, Y.; Furuya, M.; Murakami, M.

    2008-12-01

    Introduction: A catastrophic earthquake struck China's Sichuan area on May 12, 2008, with a moment magnitude of 7.9 (USGS). The hypocenter and the aftershocks are distributed along the western edge of the Sichuan Basin, suggesting that this seismic event occurred on the Longmen Shan fault zone, which consists of three major active faults (the Wenchuan-Maowen, Beichuan, and Pengguan faults). However, it is unclear whether these faults were directly involved in the mainshock rupture. An interferometric SAR (InSAR) analysis generally has the merit that ground deformation over a vast region can be detected with high precision; however, for the Sichuan event, the surface deformation near the fault zone has not been satisfactorily detected by InSAR analyses due to low coherence. An offset-tracking method is less precise but more robust for detecting large ground deformation than the interferometric approach. Our purpose is to detect the detailed ground deformation immediately adjacent to the faults involved in the Sichuan event by applying the offset-tracking method. Analysis Method: We analyzed ALOS/PALSAR images from Paths 471 to 476 of the ascending track, acquired before and after the mainshock. We processed the SAR data from the level-1.0 product using a software package from Gamma Remote Sensing. For the offset-tracking analysis we adopted the intensity tracking method, which cross-correlates samples of backscatter intensity from a master SAR image with samples from the corresponding search area of a slave image in order to estimate the range and azimuth offset fields. We reduced stereoscopic effects that produce apparent offsets using SRTM3 DEM data. Results: We successfully obtained the surface deformation in the range (radar look direction) component, while in azimuth (flight direction) no significant deformation could be detected in some orbits due to "azimuth streaks", errors caused by ionospheric effects. Some concluding remarks are
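
    A toy version of the intensity-tracking step, assuming two already co-registered backscatter patches; skimage's phase cross-correlation stands in here for the patch cross-correlation performed by the Gamma software, and the synthetic patches replace real PALSAR data.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Intensity tracking in miniature: estimate the pixel offset between a
# master and a slave backscatter patch by cross-correlation. Real
# processing tiles the whole scene and converts offsets to ground motion.

rng = np.random.default_rng(6)
master = rng.random((64, 64))                        # stands in for a master patch
slave = np.roll(master, shift=(3, -2), axis=(0, 1))  # slave offset by (3, -2)

offset, error, _ = phase_cross_correlation(master, slave, upsample_factor=10)
print(f"offset to register slave onto master: {offset}")  # ~ (-3, 2)
```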

  15. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, to compare their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was less sensitive to the presence of coatings on the surface than S0.

  16. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    NASA Astrophysics Data System (ADS)

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-01

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, to compare their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was less sensitive to the presence of coatings on the surface than S0.

  17. Estimation of postseismic deformation parameters from continuous GPS data in northern Sumatra after the 2004 Sumatra-Andaman earthquake

    NASA Astrophysics Data System (ADS)

    Anugrah, Bimar; Meilano, Irwan; Gunawan, Endra; Efendi, Joni

    2015-12-01

    Continuous Global Positioning System (GPS) stations in northern Sumatra detected the signal of the ongoing physical process of postseismic deformation after the M9.2 2004 Sumatra-Andaman earthquake. We analyze the characteristics of the postseismic deformation of the 2004 earthquake based on the GPS networks operated by BIG, together with the AGNeSS and SuGAr networks located in northern Sumatra. We use simple analytical logarithmic and exponential functions to evaluate the postseismic deformation parameters of the 2004 earthquake. We find that the GPS data in northern Sumatra during the 2005-2012 time period are fit better by the logarithmic function, with τlog of 104.2 ± 0.1, than by the exponential function. Our result clearly indicates that other physical mechanisms of postseismic deformation should be taken into account, rather than the single physical mechanism of afterslip only.
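
    A minimal sketch of fitting the logarithmic postseismic function to a displacement time series; synthetic data stand in for the GPS series, and the recovered relaxation time is illustrative, not the paper's τlog.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the logarithmic afterslip function u(t) = a + b*log(1 + t/tau)
# to a (synthetic) postseismic displacement time series.

def log_decay(t, a, b, tau):
    return a + b * np.log(1.0 + t / tau)

t = np.linspace(1, 2500, 300)                        # days since the mainshock
u_true = log_decay(t, 0.0, 50.0, 100.0)              # mm
u_obs = u_true + np.random.default_rng(1).normal(0, 2.0, t.size)

(a, b, tau), _ = curve_fit(log_decay, t, u_obs, p0=(0.0, 10.0, 50.0))
print(f"estimated relaxation time tau = {tau:.1f} days")
```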

  18. Estimates of stress changes from the 2010 Maule, Chile earthquake: the influence on crustal faults and volcanos

    NASA Astrophysics Data System (ADS)

    Keiding, M.; Heidbach, O.; Moreno, M.; Baez, J. C.; Melnick, D.; Kukowski, N.

    2012-04-01

    The south-central Chile margin is an active plate boundary where the stress accumulated on the subduction interface is released frequently by megathrust earthquakes (Mw>8.5). The Maule earthquake of February 27, 2010 affected about 500 km of the plate boundary, producing spectacular tectonic deformation and a devastating tsunami. A compilation of pre-, co-, and post-earthquake geologic and geodetic data offers the opportunity to gain insight into the processes that control strain accumulation and stress changes associated with megathrust events. The fore-arc deformation is primarily controlled by the stresses that are transferred through the locked parts of the plate interface and the release of stresses during megathrust events. During a great interplate faulting event, upper plate faults, rooted in the plate interface, can play a key role in controlling fluid pressurization. Hence, the hydraulic behavior of splay faults may induce variations in shear strength and may promote dynamic slip weakening along a crustal fault. Furthermore, the co-seismic stress transfer from megathrust earthquakes can severely affect nearby volcanoes, promoting eruptions and local deformation. InSAR and time series of continuous GPS in the aftermath of the Maule earthquake show evidence of activation of the NW-striking Lanalhue fault system as well as a pressure increase at the Antuco volcano. We build a 3D geomechanical-numerical model that consists of 1.8 million finite elements and incorporates realistic geometries adapted from geophysical data sets as well as the major crustal faults in the region. An updated co-seismic slip model is obtained from a joint inversion of InSAR and GPS data. The model is used to compute stress changes in the upper plate in order to investigate how the Maule earthquake may have affected the crustal faults and volcanoes in the region.
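
    As a reference for the stress quantity discussed, a sketch of the standard Coulomb failure stress change resolved on a receiver fault; this screening formula is a textbook assumption here, not the 3D finite-element calculation the abstract describes.

```python
# Coulomb failure stress change on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n
# with d_sigma_n positive for unclamping (reduced normal compression).
# A positive dCFS brings the fault closer to failure.

def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# Example: 0.3 MPa of shear loading and 0.2 MPa of unclamping on a splay fault
print(f"dCFS = {coulomb_stress_change(0.3, 0.2):.2f} MPa")
```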

  19. Probabilistic Methodology for Estimation of Number and Economic Loss (Cost) of Future Landslides in the San Francisco Bay Region, California

    USGS Publications Warehouse

    Crovelli, Robert A.; Coe, Jeffrey A.

    2008-01-01

    The Probabilistic Landslide Assessment Cost Estimation System (PLACES) presented in this report estimates the number and economic loss (cost) of landslides during a specified future time in individual areas, and then calculates the sum of those estimates. The analytic probabilistic methodology is based upon conditional probability theory and laws of expectation and variance. The probabilistic methodology is expressed in the form of a Microsoft Excel computer spreadsheet program. Using historical records, the PLACES spreadsheet is used to estimate the number of future damaging landslides and total damage, as economic loss, from future landslides caused by rainstorms in 10 counties of the San Francisco Bay region in California. Estimates are made for any future 5-year period of time. The estimated total number of future damaging landslides for the entire 10-county region during any future 5-year period of time is about 330. Santa Cruz County has the highest estimated number of damaging landslides (about 90), whereas Napa, San Francisco, and Solano Counties have the lowest estimated number of damaging landslides (5-6 each). Estimated direct costs from future damaging landslides for the entire 10-county region for any future 5-year period are about US $76 million (year 2000 dollars). San Mateo County has the highest estimated costs ($16.62 million), and Solano County has the lowest estimated costs (about $0.90 million). Estimated direct costs are also subdivided into public and private costs.
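
    A minimal sketch of the expectation bookkeeping behind such regional aggregation: expected counts and costs add across counties by linearity of expectation. The per-county values below are illustrative placeholders, not the report's estimates.

```python
# Regional totals from per-county expectations: E[total] = sum of E[county].
# Numbers are illustrative, not the PLACES results.

counties = {
    # county: (expected landslides per 5 yr, mean cost per landslide, $)
    "Santa Cruz": (90, 0.15e6),
    "San Mateo":  (60, 0.28e6),
    "Napa":       (6,  0.15e6),
}

total_n = sum(n for n, _ in counties.values())
total_cost = sum(n * c for n, c in counties.values())
print(f"expected landslides: {total_n}, expected cost: ${total_cost / 1e6:.1f}M")
```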

  20. Estimation of Maximum Magnitude (c-value) and its Certainty for Modified Gutenberg-Richter Formulas, Based on Historical and Instrumental Japanese Intraplate Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Kumamoto, T.; Hagiwara, Y.

    2002-12-01

    The a-, b-, and c-values for the original Gutenberg-Richter formula (GR) and modified GR formulas (Utsu, 1978) were estimated using a dataset of combined historical (1595-1925 A.D.) and instrumental (1926-2000) Japanese earthquake data for 18 intraplate seismo-tectonic provinces depicted on a new tectonic map of Japan (Kakimi et al., 2002). The theoretical relationships between the b-values of the original and modified GR formulas, and the certainty of the b- and c-values, were evaluated with respect to the dataset. The GR formula generally used for earthquake magnitude-frequency relationships demonstrates that the earthquake frequency in each magnitude class is about ten times that of the next highest class. This is expressed as: log n(M) = a - bM, where n(M) is the number of earthquakes of a given magnitude M, and the a- and b-values are constants representing the level of seismicity and the ratio of small to large events, respectively. In this formula, the expected maximum magnitude (c-value) in a given earthquake catalog is calculated using one additional assumption: a maximum-magnitude earthquake should occur only once in a given period, because the c-value is not a characteristic parameter of the original GR formula. Utsu (1978) proposed that the GR formula be modified by introducing the c-value, and presented two formulas: a truncated GR formula (TGR), expressed as log n(M) = a - bM (M ≤ c); n(M) = 0 (M > c); and a modified GR formula (MGR), expressed as log n(M) = a - bM + log (c - M) (M < c); n(M) = 0 (M ≥ c). Calculations for the 18 Japanese seismo-tectonic provinces revealed the following relation: b(GR) > b(TGR) > b(MGR). This is a theoretical relationship, which means that b- and c-values are relative parameters within one formula, and that comparison of b- and c-values between different GR formulas is meaningless. Furthermore, the distribution of b- and c-values in 18 intraplate seismo
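
    For context, a sketch of Aki's (1965) maximum-likelihood b-value estimator on a synthetic catalog; the TGR/MGR fits with a maximum magnitude c described above are more involved than this standard reference case.

```python
import numpy as np

# Aki's maximum-likelihood b-value: b = log10(e) / (mean(M) - Mc),
# where Mc is the magnitude of completeness.

def b_value_mle(mags, m_c):
    mags = np.asarray(mags)
    mags = mags[mags >= m_c]          # use only complete magnitudes
    return np.log10(np.e) / (mags.mean() - m_c)

rng = np.random.default_rng(2)
# Exponential magnitude excess with rate b*ln(10) corresponds to b = 1
catalog = 4.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)
print(f"b = {b_value_mle(catalog, 4.0):.2f}")
```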

  1. Scaling relationship between corner frequencies and seismic moments of ultra micro earthquakes estimated with coda-wave spectral ratio -the Mponeng mine in South Africa

    NASA Astrophysics Data System (ADS)

    Wada, N.; Kawakata, H.; Murakami, O.; Doi, I.; Yoshimitsu, N.; Nakatani, M.; Yabe, Y.; Naoi, M. M.; Miyakawa, K.; Miyake, H.; Ide, S.; Igarashi, T.; Morema, G.; Pinder, E.; Ogasawara, H.

    2011-12-01

    The scaling relationship between corner frequencies, fc, and seismic moments, Mo, is an important clue for understanding seismic source characteristics. Aki (1967) showed that Mo is proportional to fc^-3 for large earthquakes (the cubic law). Iio (1986) claimed a breakdown of the cubic law between fc and Mo for smaller earthquakes (Mw < 2), and Gibowicz et al. (1991) also showed the breakdown for ultra micro and small earthquakes (Mw < -2). However, it has been reported that the cubic law holds even for micro earthquakes (-1 < Mw < 4) when using high-quality data observed in a deep borehole (Abercrombie, 1995; Ogasawara et al., 2001; Hiramatsu et al., 2002; Yamada et al., 2007). In order to clarify the scaling relationship for smaller earthquakes (Mw < -1), we analyzed ultra micro earthquakes using very high sampling-rate records (48 kHz) from borehole seismometers installed within hard rock at the Mponeng mine in South Africa. We used 4 three-component accelerometers that have a flat response up to 25 kHz. They were installed 10 to 30 meters apart from each other at a depth of 3,300 meters. During the period from 2008/10/14 to 2008/10/30 (17 days), 8,927 events were recorded. We estimated fc and Mo for 60 events (-3 < Mw < -1) within 200 meters of the seismometers. Assuming Brune's source model, we estimated fc and Mo from spectral ratios. Common practice is to use direct waves from adjacent events. However, there were only 5 event pairs with an inter-event distance of less than 20 meters and an Mw difference over one. In addition, the observation array is very small (radius less than 30 m), which means that the effects of directivity and radiation pattern on direct waves are similar at all stations. Hence, we used spectral ratios of coda waves, since these effects are averaged and will be effectively reduced (Mayeda et al., 2007; Somei et al., 2010). Coda analysis was attempted only for the 20 relatively large events (we call these "coda events" hereafter) that have coda energy
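
    A sketch of the spectral-ratio fitting step, assuming Brune omega-square spectra so that the ratio of a larger event's spectrum to a smaller one's cancels path and site terms, leaving both corner frequencies and the moment ratio; the synthetic ratios below stand in for the coda spectral ratios used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Ratio of two Brune omega-square spectra:
#   R(f) = (M01/M02) * (1 + (f/fc2)^2) / (1 + (f/fc1)^2)

def brune_ratio(f, moment_ratio, fc1, fc2):
    return moment_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

f = np.logspace(0, 4, 200)                          # 1 Hz to 10 kHz
obs = brune_ratio(f, 100.0, 300.0, 3000.0)          # synthetic "observed" ratio
obs *= np.exp(np.random.default_rng(3).normal(0, 0.05, f.size))

(mr, fc1, fc2), _ = curve_fit(brune_ratio, f, obs, p0=(10.0, 100.0, 1000.0))
print(f"moment ratio {mr:.0f}, fc1 {fc1:.0f} Hz, fc2 {fc2:.0f} Hz")
```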

  2. A Match-based approach to the estimation of polar stratospheric ozone loss using Aura Microwave Limb Sounder observations

    NASA Astrophysics Data System (ADS)

    Livesey, N. J.; Santee, M. L.; Manney, G. L.

    2015-04-01

    The well-established "Match" approach to quantifying chemical destruction of ozone in the polar lower stratosphere is applied to ozone observations from the Microwave Limb Sounder (MLS) on NASA's Aura spacecraft. Quantification of ozone loss requires distinguishing transport- and chemically induced changes in ozone abundance. This is accomplished in the Match approach by examining cases where trajectories indicate that the same airmass has been observed on multiple occasions. The method was pioneered using ozone sonde observations, for which hundreds of matched ozone observations per winter are typically available. The dense coverage of the MLS measurements, particularly at polar latitudes, allows matches to be made to thousands of observations each day. This study is enabled by recently developed MLS Lagrangian Trajectory Diagnostic (LTD) support products. Sensitivity studies indicate that the largest influences on the ozone loss estimates are the value of potential vorticity (PV) used to define the edge of the polar vortex (within which matched observations must lie) and the degree to which the PV of an airmass is allowed to vary between matched observations. Applying Match calculations to MLS observations of nitrous oxide, a long-lived tracer whose expected rate of change on these timescales is negligible, enables quantification of the impact of transport errors on the Match-based ozone loss estimates. Our loss estimates are generally in agreement with previous estimates for selected Arctic winters, though indicating smaller losses than many other studies. Arctic ozone losses are greatest during the 2010/11 winter, as seen in prior studies, with 2.0 ppmv (parts per million by volume) loss estimated at 450 K potential temperature. As expected, Antarctic winter ozone losses are consistently greater than those for the Arctic, with less interannual variability (e.g., ranging between 2.3 and 3.0 ppmv at 450 K). This study exemplifies the insights into atmospheric

  3. Soil loss estimation and prioritization of sub-watersheds of Kali River basin, Karnataka, India, using RUSLE and GIS.

    PubMed

    Markose, Vipin Joseph; Jayappa, K S

    2016-04-01

    Most of the mountainous regions in the tropical humid climatic zone experience severe soil loss due to natural factors. In the absence of measured data, modeling techniques play a crucial role in the quantitative estimation of soil loss in such regions. The objective of this research work is to estimate soil loss and prioritize the sub-watersheds of the Kali River basin using the Revised Universal Soil Loss Equation (RUSLE) model. Various thematic layers of RUSLE factors, such as rainfall erosivity (R), soil erodibility (K), topographic factor (LS), crop management factor (C), and support practice factor (P), have been prepared using multiple spatial and non-spatial data sets. These layers are integrated in a geographic information system (GIS) environment to estimate the soil loss. The results show that ∼42% of the study area falls under low erosion risk and only 6.97% of the area suffers from very high erosion risk. Based on the rate of soil loss, 165 sub-watersheds have been prioritized into four categories: very high, high, moderate, and low erosion risk. Anthropogenic activities such as deforestation, construction of dams, and rapid urbanization are the main reasons for the high rate of soil loss in the study area. The soil erosion rate and prioritization maps help in the implementation of a proper watershed management plan for the river basin. PMID:26969157
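
    The RUSLE overlay is a cell-by-cell product of the factor rasters, A = R x K x LS x C x P; a minimal sketch with toy 2x2 grids standing in for the thematic layers described above.

```python
import numpy as np

# Cell-by-cell RUSLE soil loss, A = R * K * LS * C * P, as combined in a
# GIS overlay. The small arrays are illustrative stand-ins for the rasters.

R  = np.full((2, 2), 800.0)                  # rainfall erosivity
K  = np.array([[0.25, 0.30], [0.20, 0.28]])  # soil erodibility
LS = np.array([[1.2, 4.5], [0.8, 6.0]])      # slope length-steepness
C  = np.array([[0.05, 0.30], [0.01, 0.45]])  # cover management
P  = np.ones((2, 2))                         # support practice (none)

A = R * K * LS * C * P                       # soil loss per cell (t/ha/yr)
print(A)
```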

  4. Estimating the loss of C, N and microbial biomass from Biological Soil Crusts under simulated rainfall

    NASA Astrophysics Data System (ADS)

    Gommeaux, M.; Malam Issa, O.; Bouchet, T.; Valentin, C.; Rajot, J.-L.; Bertrand, I.; Alavoine, G.; Desprats, J.-F.; Cerdan, O.; Fatondji, D.

    2012-04-01

    Most areas where biological soil crusts (BSC) develop undergo a climate with heavy but sparse rainfall events. The hydrological response of the BSC, namely the amount of runoff, is highly variable. Rainfall simulation experiments were conducted in Sadoré, south-western Niger. The aim was to estimate the influence of the BSC coverage on the quantity and quality of water, particles, and solutes exported during simulated rainfall events. Ten 1 m2 plots were selected based on their various degrees of BSC cover (4-89%) and the type of underlying physical crust (structural or erosion crusts). The plots are located on similar sandy soil with moderate slopes (3-6%). The experiments consisted of two rainfall events, spaced at a 22-hour interval: 60 mm/h for 20 min, and 120 mm/h for 10 min. During each experiment, detached particles and runoff water were collected and filtered in the laboratory. C and N contents were determined in both water and sediment samples. These analyses were completed by measurements of phospholipid fatty acid and chlorophyll a contents in sediment and BSC samples collected before and after the rainfall. Mineral N and microbial biomass carbon of BSC samples were also analysed. The results confirmed that BSC reduce the loss of particles and exert a protective effect on soils with regard to particle detachment by raindrops. However, there is no general relationship between the BSC coverage and the loss of C and N due to runoff. Rather, the C and N content of the sediments is negatively correlated with their mass. The type of physical crust on which the BSC develop also has to be taken into account. These results will contribute to the region-wide modeling of the role of BSC in biogeochemical cycles.

  5. Estimate of the direct production losses in Canadian dairy herds with subclinical Mycobacterium avium subspecies paratuberculosis infection

    PubMed Central

    Tiwari, Ashwani; VanLeeuwen, John A.; Dohoo, Ian R.; Keefe, Greg P.; Weersink, Alfons

    2008-01-01

    The objective of this study was to estimate the annual losses from Mycobacterium avium subspecies paratuberculosis (MAP) for an average, MAP-seropositive, Canadian dairy herd. A partial-budget simulation model was developed with 4 components of direct production losses (decreased milk production, premature voluntary culling, mortality, and reproductive losses). Input values were obtained primarily from a national seroprevalence survey of 373 Canadian dairy farms in 8 of 10 provinces. The model took into account the variability and uncertainty of the required input values; consequently, it produced probability distributions of the estimated losses. For an average Canadian dairy herd with 12.7% of 61 cows seropositive for MAP, the mean loss was $2992 (95% C.I., $143 to $9741) annually, or $49 per cow per year. Additional culling, decreased milk production, mortality, and reproductive losses accounted for 46%, 9%, 16%, and 29% of the losses, respectively. Canadian dairy producers should use best management practices to reduce these substantial annual losses. PMID:18624066

  6. A multiple-approach radiometric age estimate for the Rotoiti and Earthquake Flat eruptions, New Zealand, with implications for the MIS 4/3 boundary

    USGS Publications Warehouse

    Wilson, C.J.N.; Rhoades, D.A.; Lanphere, M.A.; Calvert, A.T.; Houghton, B.F.; Weaver, S.D.; Cole, J.W.

    2007-01-01

    Pyroclastic fall deposits of the paired Rotoiti and Earthquake Flat eruptions from the Taupo Volcanic Zone (New Zealand) combine to form a widespread isochronous horizon over much of northern New Zealand and the southwest Pacific. This horizon is important for correlating climatic and environmental changes during the Last Glacial period, but has been the subject of numerous disparate age estimates between 35.1±2.8 and 71±6 ka (all errors are 1 s.d.), obtained by a variety of techniques. A potassium-argon (K-Ar) age of 64±4 ka was previously determined on bracketing lavas at Mayor Island volcano, offshore from the Taupo Volcanic Zone. We present a new, more-precise 40Ar/39Ar age determination on a lava flow on Mayor Island, that shortly post-dates the Rotoiti/Earthquake Flat fall deposits, of 58.5±1.1 ka. This value, coupled with existing ages from underlying lavas, yields a new estimate for the age of the combined eruptions of 61.0±1.4 ka, which is consistent with U-Th disequilibrium model-age data for zircons from the Rotoiti deposits. Direct 40Ar/39Ar age determinations of plagioclase and biotite from the Rotoiti and Earthquake Flat eruption products yield variable values between 49.6±2.8 and 125.3±10.0 ka, with the scatter attributed to low radiogenic Ar yields, and/or alteration, and/or inheritance of xenocrystic material with inherited Ar. Rotoiti/Earthquake Flat fall deposits occur in New Zealand in association with palynological indicators of mild climate, attributed to Marine Isotope Stage (MIS) 3 and thus used to suggest an age that is post-59 ka. The natures of the criteria used to define the MIS 4/3 boundary in the Northern and Southern hemispheres, however, imply that the new 61 ka age for the Rotoiti/Earthquake Flat eruption deposits will provide the inverse, namely, a more accurate isochronous marker for correlating diverse changes across the MIS 4/3 boundary in the southwest Pacific. © 2007 Elsevier Ltd. All rights reserved.

  7. Estimating Fish Exploitation and Aquatic Habitat Loss across Diffuse Inland Recreational Fisheries

    PubMed Central

    de Kerckhove, Derrick Tupper; Minns, Charles Kenneth; Chu, Cindy

    2015-01-01

    The current state of many freshwater fish stocks worldwide is largely unknown but suspected to be vulnerable to exploitation from recreational fisheries and habitat degradation. Both these factors, combined with complex ecological dynamics and the diffuse nature of inland fisheries could lead to an invisible collapse: the drastic decline in fish stocks without great public or management awareness. In this study we provide a method to address the pervasive knowledge gaps in regional rates of exploitation and habitat degradation, and demonstrate its use in one of North America’s largest and most diffuse recreational freshwater fisheries (Ontario, Canada). We estimated that 1) fish stocks were highly exploited and in apparent danger of collapse in management zones close to large population centres, and 2) fish habitat was under a low but constant threat of degradation at rates comparable to deforestation in Ontario and throughout Canada. These findings confirm some commonly held, but difficult to quantify, beliefs in inland fisheries management but also provide some further insights including 1) large anthropogenic projects greater than one hectare could contribute much more to fish habitat loss on an area basis than the cumulative effect of smaller projects within one year, 2) hooking mortality from catch-and-release fisheries is likely a greater source of mortality than the harvest itself, and 3) in most northern management zones over 50% of the fisheries resources are not yet accessible to anglers. While this model primarily provides a framework to prioritize management decisions and further targeted stock assessments, we note that our regional estimates of fisheries productivity and exploitation were similar to broadscale monitoring efforts by the Province of Ontario. We discuss the policy implications from our results and extending the model to other jurisdictions and countries. PMID:25875790

  8. Variability of ozone loss during Arctic winter (1991 to 2000) estimated from UARS Microwave Limb Sounder measurement

    NASA Technical Reports Server (NTRS)

    Manney, G.; Froidevaux, F.; Santee, M. L.; Livesey, N. J.; Sabutis, J. L.; Waters, J. W.

    2002-01-01

    A comprehensive analysis of version 5 Upper Atmosphere Research Satellite (UARS) Microwave Limb Sounder (MLS) ozone data using a Lagrangian Transport (LT) model provides estimates of chemical ozone depletion for the 1991-1992 through 1997-1998 Arctic winters. These new estimates give a consistent, three-dimensional picture of ozone loss during seven Arctic winters; previous Arctic ozone loss estimates from MLS were based on various earlier data versions and were done only for late winter and only for a subset of the years observed by MLS. We find large interannual variability in the amount, timing, and patterns of ozone depletion and in the degree to which chemical loss is masked by dynamical processes.

  9. Extreme Magnitude Earthquakes and their Economical Consequences

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.

    2011-12-01

    The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world considered. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probabilities of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios, and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples, again using Monte Carlo simulation. An example of the application of the methodology to the potential occurrence of extreme Mw 8.5 subduction earthquakes affecting Mexico City is presented.
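
    A minimal Monte Carlo sketch of the exceedance-probability idea: sample scenario intensities, map them through a vulnerability function, and count threshold exceedances. The lognormal intensity model and the vulnerability curve below are assumptions for illustration, not the paper's supercomputed scenarios.

```python
import numpy as np

# Monte Carlo estimate of P(loss > threshold) from sampled intensities
# passed through a vulnerability function. All inputs are toy values.

rng = np.random.default_rng(4)
intensity = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # e.g. PGA (g)

def vulnerability(im, exposed_value=1.0e9):
    """Toy vulnerability: mean damage ratio rising smoothly with intensity."""
    return exposed_value * (1.0 - np.exp(-1.5 * im))

losses = vulnerability(intensity)
for threshold in (0.5e9, 0.8e9, 0.95e9):
    p = np.mean(losses > threshold)
    print(f"P(loss > {threshold / 1e9:.2f}B) = {p:.3f}")
```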

  10. Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...

  11. A new tool for estimating phosphorus loss from cattle barnyards and outdoor lots

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorus (P) loss from agriculture can compromise quality of receiving water bodies. For cattle farms, P can be lost from cropland, pastures, and outdoor animal lots. We developed a new model that predicts annual runoff, total solids loss, and total and dissolved P loss from cattle lots. The model...

  12. Estimating the magnitude of prediction uncertainties for field-scale P loss models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...

  13. 2011 Van earthquake (Mw=7.2) aftershocks using the source spectra an approach to real-time estimation of moment magnitude

    NASA Astrophysics Data System (ADS)

    Meral Ozel, N.; Kusmezer, A.

    2012-04-01

    The Converging Grid Search (CGS) algorithm was tested on broadband waveforms from large aftershocks of the October 23 Van earthquake, with hypocentral distances within 0-300 km over a magnitude range of 4.0≤M≤5.6. The observed displacement spectra fit Brune's source model well over the whole frequency range for many waveforms. The estimated Mw solutions were compared to global CMT catalogue solutions and were seen to be in good agreement. To estimate Mw from a shear-wave displacement spectrum, an automatic routine named CGS was applied, in an attempt to test and develop a method for stable moment magnitude estimation suitable for real-time operation. The spectra were corrected for average anelastic attenuation and geometrical spreading factors and then scaled to compute the moment at the long-period asymptote, where the spectral plateau at 0 Hz is flat. To this end, an automatic procedure was utilized: 1) calculating the displacement spectra for vertical components at a given station; 2) estimating the corner frequency and seismic moment using CGS, which is based on minimizing the differences between observed and synthetic source spectra; 3) calculating the moment magnitude from the seismic moment for each station separately; the station values are then averaged to give the mean value for each event. The best-fitting iteration of these parameters was obtained after a few seconds. The noise spectrum was also computed to allow a comparison of signal-to-noise ratios before performing the inversion. Weak events with low SNR were excluded from the computations. The method, examined on the Van earthquake aftershock dataset, proved that stable and reliable magnitude estimates can be obtained in routine processing within a few seconds of the initial P-wave detection, though the location estimation is necessary. This allows a fast determination of the Mw magnitude and assists in measuring physical quantities of the source available for the real time
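
    Once the long-period plateau yields the seismic moment, the conversion to Mw is the standard moment-magnitude relation; a one-line sketch (M0 in N·m).

```python
import math

# Standard moment-to-magnitude conversion:
#   Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m

def moment_magnitude(m0_newton_meter):
    return (2.0 / 3.0) * (math.log10(m0_newton_meter) - 9.1)

print(f"Mw = {moment_magnitude(6.3e16):.1f}")  # ~ Mw 5.1
```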

  14. Modeling earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Charpentier, Arthur; Durand, Marilou

    2015-07-01

    In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where the parameters are functions of the magnitude of the previous earthquake. We use these two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year or a decade.
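
    A sketch of the alternating conditional simulation described above: a Pareto magnitude whose tail index depends on the previous waiting time, then a Gamma waiting time whose scale depends on that magnitude. The functional links between parameters below are invented placeholders, not the fitted regressions.

```python
import numpy as np

# Alternating conditional simulation: Pareto magnitude given the previous
# waiting time, Gamma waiting time given the current magnitude. Parameter
# links are illustrative placeholders.

rng = np.random.default_rng(5)

def simulate(n_events, m_min=4.0):
    mags, waits = [], [100.0]                         # seed waiting time (days)
    for _ in range(n_events):
        alpha = 6.0 + 0.01 * waits[-1]                # tail index vs last wait
        mags.append(m_min * (1 + rng.pareto(alpha)))  # Pareto magnitude
        scale = 200.0 / mags[-1]                      # wait scale vs magnitude
        waits.append(rng.gamma(shape=1.2, scale=scale))
    return np.array(mags), np.array(waits[1:])

mags, waits = simulate(1000)
print(f"mean magnitude {mags.mean():.2f}, mean waiting time {waits.mean():.0f} days")
```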

  15. Tsunami Loss Assessment For Istanbul

    NASA Astrophysics Data System (ADS)

    Hancilar, Ufuk; Cakti, Eser; Zulfikar, Can; Demircioglu, Mine; Erdik, Mustafa

    2010-05-01

    Tsunami risk and loss assessment incorporating inundation mapping for Istanbul and the Marmara Sea region is presented in this study. The city of Istanbul is under the threat of earthquakes expected to originate from the Main Marmara branch of the North Anatolian Fault System. In the Marmara region the earthquake hazard has reached very high levels, with a 2% annual probability of occurrence of a magnitude 7+ earthquake on the Main Marmara Fault. Istanbul is the biggest city of the Marmara region, as well as of Turkey, with its almost 12 million inhabitants. It is home to 40% of the industrial facilities in Turkey and operates as the financial and trade hub of the country. Past earthquakes have shown that the structural reliability of residential and industrial buildings, as well as that of lifelines including port and harbor structures in the country, is questionable. These facts make the management of earthquake risks imperative for the reduction of physical and socio-economic losses. The level of expected tsunami hazard in Istanbul is low compared to the earthquake hazard. Yet the assets at risk along the shores of the city make a thorough assessment of tsunami risk imperative. Important residential and industrial centres exist along the shores of the Marmara Sea. Particularly along the northern and eastern shores we see an uninterrupted settlement pattern, with industries, businesses, commercial centres, and ports and harbours in between. Based on the inundation maps resulting from deterministic and probabilistic tsunami hazard analyses, vulnerability and risk analyses are presented and the socio-economic losses are estimated. This study is part of the EU-supported FP6 project ‘TRANSFER'.

  16. Mass wasting triggered by the 5 March 1987 Ecuador earthquakes

    USGS Publications Warehouse

    Schuster, R.L.; Nieto, A.S.; O'Rourke, T. D.; Crespo, E.; Plaza-Nieto, G.

    1996-01-01

    On 5 March 1987, two earthquakes (Ms=6.1 and Ms=6.9) occurred about 25 km north of Reventador Volcano, along the eastern slopes of the Andes Mountains in northeastern Ecuador. Although the shaking damaged structures in towns and villages near the epicentral area, the economic and social losses directly due to earthquake shaking were small compared to the effects of catastrophic earthquake-triggered mass wasting and flooding. About 600 mm of rain fell in the region in the month preceding the earthquakes; thus, the surficial soils had high moisture contents. Slope failures commonly started as thin slides, which rapidly turned into fluid debris avalanches and debris flows. The surficial soils and thick vegetation covering them flowed down the slopes into minor tributaries and then were carried into major rivers. Rock and earth slides, debris avalanches, debris and mud flows, and resulting floods destroyed about 40 km of the Trans-Ecuadorian oil pipeline and the only highway from Quito to Ecuador's northeastern rain forests and oil fields. Estimates of the total volume of earthquake-induced mass wastage ranged from 75 to 110 million m3. Economic losses were about US$ 1 billion. Nearly all of the approximately 1000 deaths from the earthquakes were a consequence of mass wasting and/or flooding.

  17. Aura's Microwave Limb Sounder Estimates of Ozone Loss, 2004/2005 Arctic Winter

    NASA Technical Reports Server (NTRS)

    2005-01-01

    These data maps from Aura's Microwave Limb Sounder depict levels of hydrogen chloride (top), chlorine monoxide (center), and ozone (bottom) at an altitude of approximately 19 kilometers (490,000 feet) on selected days during the 2004-05 Arctic winter. White contours demark the boundary of the winter polar vortex.

    The maps from December 23, 2004, illustrate vortex conditions shortly before significant chemical ozone destruction began. By January 23, 2005, chlorine is substantially converted from the 'safe' form of hydrogen chloride, which is depleted throughout the vortex, to the 'unsafe' form of chlorine monoxide, which is enhanced in the portions of the region that receive sunlight at that time of year. Ozone increased over the month as a result of dynamical effects, and chemical ozone destruction is just beginning at this time. A brief period of intense cold a few days later promotes further chlorine activation and consequent changes in hydrogen chloride and chlorine monoxide levels on January 27, 2005. Peak chlorine monoxide enhancement occurs in early February.

    By February 24, 2005, chlorine deactivation is well underway, with chlorine monoxide abundances dropping and hydrogen chloride abundances rising. Almost all chlorine monoxide has been quenched by March 10, 2005. The fact that hydrogen chloride has not fully rebounded to December abundances suggests that some of that chemical was recovered into another chlorine reservoir species.

    Ozone maps for January 27, 2005, through March 10, 2005, show indications of mixing of air from outside the polar vortex into it. Such occurrences throughout this winter, especially in late February and early March, complicate analyses, and detailed calculations are required to rigorously disentangle chemical and dynamical effects and accurately diagnose chemical ozone destruction.

    Based on various analyses of Microwave Limb Sounder data, we estimate that maximum local ozone loss of approximately 2 parts

  18. Influence of Agropastoral System Components on Mountain Grassland Vulnerability Estimated by Connectivity Loss

    PubMed Central

    Gartzia, Maite; Fillat, Federico; Pérez-Cabello, Fernando; Alados, Concepción L.

    2016-01-01

    Over the last decades, global changes have altered the structure and properties of natural and semi-natural mountain grasslands. Those changes have contributed to grassland loss mainly through colonization by woody species at low elevations, and increases in biomass and greenness at high elevations. Nevertheless, the interactions between agropastoral components; i.e., ecological (grassland, environmental, and geolocation properties), social, and economic components, and their effects on the grasslands are still poorly understood. We estimated the vulnerability of dense grasslands in the Central Pyrenees, Spain, based on the connectivity loss (CL) among grassland patches that has occurred between the 1980s and the 2000s, as a result of i) an increase in biomass and greenness (CL-IBG), ii) woody encroachment (CL-WE), or iii) a decrease in biomass and greenness (CL-DBG). The environmental and grassland components of the agropastoral system were associated with the three processes, especially CL-IBG and CL-WE, in relation with the succession of vegetation toward climax communities, fostered by land abandonment and exacerbated by climate warming. CL-IBG occurred in pasture units that had a high proportion of dense grasslands and low current livestock pressure. CL-WE was most strongly associated with pasture units that had a high proportion of woody habitat and a large reduction in sheep and goat pressure between the 1930s and the 2000s. The economic component was correlated with the CL-WE and the CL-DBG; specifically, expensive pastures were the most productive and could maintain the highest rates of livestock grazing, which slowed down woody encroachment, but caused grassland degradation and DBG. In addition, CL-DBG was associated with geolocation of grasslands, mainly because livestock tend to graze closer to passable roads and buildings, where they cause grassland degradation. To properly manage the grasslands, an integrated management plan must be developed that

  20. Source Mechanism of May 30, 2015 Bonin Islands, Japan Deep Earthquake (Mw7.8) Estimated by Broadband Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Nakamura, T.; Miyoshi, T.

    2015-12-01

    The May 30, 2015 Bonin Islands, Japan earthquake (Mw 7.8, depth 679.9 km, GCMT) was one of the deepest earthquakes ever recorded. We apply the waveform inversion technique (Kikuchi & Kanamori, 1991) to obtain the slip distribution on the source fault of this earthquake in the same manner as our previous work (Nakamura et al., 2010). We use 60 broadband seismograms of IRIS GSN seismic stations with epicentral distances between 30 and 90 degrees. The original broadband data are integrated into ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. We use the velocity structure model IASP91 to calculate the wavefield near the source and stations. We assume a square fault with a side length of 50 km. We obtain source rupture models for both nodal planes, with high dip angle (74 degrees) and low dip angle (26 degrees), and compare the synthetic seismograms with the observations to determine which source rupture model explains the observations better. We calculate broadband synthetic seismograms for these source rupture models using the spectral-element method (Komatitsch & Tromp, 2001). We use the new Earth Simulator system at JAMSTEC to compute synthetic seismograms using the spectral-element method. The simulations are performed on 7,776 processors, which require 1,944 nodes of the Earth Simulator. On this number of nodes, a simulation of 50 minutes of wave propagation accurate at periods of 3.8 seconds and longer requires about 5 hours of CPU time. Comparisons of the synthetic waveforms with the observations at teleseismic stations show that the arrival time of the pP wave calculated for a depth of 679 km matches the observations well, which demonstrates that the earthquake indeed occurred below the 660 km discontinuity. In our present forward simulations, the source rupture model on the low-dip-angle nodal plane is likely to explain the observations better.
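
    In outline, the preprocessing chain described above (integrating broadband records to displacement and band-pass filtering at 0.002-1 Hz) can be sketched as follows; the sampling rate and the record itself are placeholders, not the authors' data.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 20.0                                           # assumed sampling rate (Hz)
        acc = np.random.default_rng(5).normal(size=12000)   # placeholder broadband record

        vel = np.cumsum(acc) / fs                           # acceleration -> velocity
        disp = np.cumsum(vel) / fs                          # velocity -> displacement
        b, a = butter(4, [0.002, 1.0], btype="bandpass", fs=fs)
        disp_filtered = filtfilt(b, a, disp)                # zero-phase 0.002-1 Hz band-pass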

  1. Estimated airborne release of plutonium from the 102 Building at the General Electric Vallecitos Nuclear Center, Vallecitos, California, as a result of damage from severe wind and earthquake hazard

    SciTech Connect

    Mishima, J.; Ayer, J.E.; Hays, I.D.

    1980-12-01

    This report estimates the potential airborne releases of plutonium as a consequence of various severities of earthquake and wind hazard postulated for the 102 Building at the General Electric Vallecitos Nuclear Center in California. The releases are based on damage scenarios developed by other specialists. The hazard severities presented range up to a nominal velocity of 230 mph for wind hazard and are in excess of 0.8 g linear acceleration for earthquakes. The consequences of thrust faulting are considered. The approaches and factors used to estimate the releases are discussed. Release estimates range from 0.003 to 3 g Pu.

  2. Re-estimation of glacier mass loss in Greenland from GRACE with correction of land-ocean leakage effects

    NASA Astrophysics Data System (ADS)

    Jin, Shuanggen; Zou, Fang

    2015-12-01

    The Gravity Recovery and Climate Experiment (GRACE) satellites can estimate the high-precision time-varying gravity field and the changes of Earth's surface mass, which have been widely used in studies of the water cycle and glacier mass balance. However, one of the larger error sources in GRACE measurements, the land-ocean leakage effect, restricts high-precision retrieval of ocean mass and terrestrial water storage variations along the coasts, particularly estimation of mass loss in Greenland. The land-ocean leakage effect along the coasts of Greenland contaminates the mass loss signals with significant signal attenuation. In this paper, the glacier mass loss in Greenland from GRACE is re-estimated with correction of land-ocean leakage effects using forward gravity modeling. From September 2003 to March 2008, the loss of the Greenland ice sheet is -102.8 ± 9.01 Gt/a without removing the leakage effect, but -183.0 ± 19.91 Gt/a after removing it, in good agreement with the ICESat result of -184.8 ± 28.2 Gt/a. From January 2003 to December 2013, the total Greenland ice-sheet loss is -261.54 ± 6.12 Gt/a from GRACE measurements after removing the leakage effect, a 42.4% correction, and two-thirds of the total glacier melting in Greenland occurred in southern Greenland over the past 11 years. The secular leakage effect on the glacier-melting estimate is mainly located in the coastal areas, where larger glacier signals are significantly attenuated by leaking out into the ocean. Furthermore, the leakage signals also have remarkable effects on the seasonal and acceleration components of glacier mass loss in Greenland. A more significant acceleration of glacier mass loss in Greenland, -26.19 Gt/a², is found after correcting for leakage effects.
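
    A common way to extract the secular and seasonal terms quoted above from a GRACE-derived mass series is a least-squares fit of a trend plus annual and semiannual cycles. The sketch below uses a hypothetical series, not the authors' data; adding a quadratic column would likewise expose the acceleration term.

        import numpy as np

        # Hypothetical monthly series: decimal years and mass anomaly (Gt)
        t = np.arange(2003.0, 2014.0, 1.0 / 12.0)
        mass = -260.0 * (t - t[0]) + 30.0 * np.sin(2 * np.pi * t)

        # Design matrix: offset, trend, annual and semiannual sine/cosine terms
        A = np.column_stack([
            np.ones_like(t),
            t - t.mean(),
            np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
            np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
        ])
        coef, *_ = np.linalg.lstsq(A, mass, rcond=None)
        print(f"fitted trend: {coef[1]:.1f} Gt/a")   # ~-260 Gt/a for this synthetic input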

  3. Teleseismic waveform analysis of deep-focus earthquake for the preliminary estimation of crustal structure of the northern part of Korea

    NASA Astrophysics Data System (ADS)

    Cho, H.; Shin, J.

    2010-12-01

    Crustal structure in several areas of the northern part of Korea is estimated using the long-period teleseismic depth phase pP and the Moho underside-reflected phase pMP generated by deep-focus earthquakes. The waveform analysis is performed by comparing recordings with synthetics of these phases computed using a hybrid reflectivity method: a WKBJ approximation for propagation in the vertically inhomogeneous mantle and computation of the Haskell propagator matrix in the layered crust and upper mantle. The pMP phase is a precursor to the surface-reflected pP phase and its amplitude is relatively small. Analysis of the vertical components of P, pP, and pMP provides an estimate of the structure on the source side. Deep-focus earthquakes that occur at the border area of North Korea, China, and Russia are well suited for this study. The seismograms recorded at GSN stations in Southeast Asia provide clear identification of the pMP and pP phases. The preliminary analysis employs a deep-focus (580 km) earthquake of magnitude 6.3 mb whose epicenter is located at the border region between eastern Russia and northeastern China. Seismograms band-pass filtered at 0.01-0.2 Hz clearly exhibit the pMP and pP phases recorded at four GSN stations (BTDF, PSI, COCO, and DGAR). Shin and Baag (2000) suggested approximate crustal thicknesses for the region between northern Korea and northeastern China. The crustal thickness appears to vary from 25 to 35 km, which is compatible with the preliminary analysis.

  4. Simultaneous Estimation of Earthquake Source Parameters and Site Response from Inversion of Strong Motion Network Data in Kachchh Seismic Zone, Gujarat, India

    NASA Astrophysics Data System (ADS)

    Dutta, U.; Mandal, P.

    2010-12-01

    Inversion of horizontal-component S-wave spectral data in the frequency range 0.1-10.0 Hz has been carried out to estimate simultaneously the source spectra of 38 aftershocks (Mw 2.93-5.32) of the 2001 Bhuj earthquake (Mw 7.7) and the site response at 18 strong motion sites in the Kachchh Seismic Zone, Gujarat, India. The spatial variation of site response (SR) in the region has been studied by averaging the SR values obtained from the inversion in two frequency bands, 0.2-1.8 Hz and 3.0-7.0 Hz. In the 0.2-1.8 Hz band, high SR values are observed in the southern part of the Kachchh Mainland Fault, which suffered extensively during the 2001 Bhuj earthquake. For the 3.0-7.0 Hz band, however, the areas of Jurassic and Quaternary formations show predominantly high SR. The source spectral data obtained from the inversion were used to estimate various source parameters, namely the seismic moment, stress drop, corner frequency, and radius of source rupture, by using an iterative least-squares inversion approach based on the Marquardt-Levenberg algorithm. The seismic moment and rupture radius of the 38 aftershocks vary between 3.1x10^{13} and 2.0x10^{17} Nm and between 226 and 889 m, respectively. The stress drop values of these aftershocks vary from 0.11 to 7.44 MPa. A significant scatter of stress drop values is noticed for the larger aftershocks, while for smaller magnitude events the stress drop varies proportionally with the seismic moment. Regression analysis between seismic moment and rupture radius indicates a break in linear scaling around 10^{15.3} Nm. The seismic moment of these aftershocks is found to be proportional to the corner frequency, which is consistent for earthquakes with such short rupture lengths.
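
    For orientation, the standard Brune-type relations commonly used in such spectral studies link corner frequency, source radius, and stress drop. Whether the authors adopted exactly these constants is an assumption here, and the inputs are placeholders chosen within the quoted ranges.

        import numpy as np

        beta = 3500.0   # assumed shear-wave speed (m/s)
        M0 = 3.1e13     # seismic moment (Nm), low end of the quoted range
        fc = 4.0        # hypothetical corner frequency (Hz)

        r = 2.34 * beta / (2 * np.pi * fc)   # Brune (1970) source radius (m)
        dsigma = 7.0 * M0 / (16.0 * r**3)    # circular-crack stress drop (Pa)
        print(f"radius ~{r:.0f} m, stress drop ~{dsigma / 1e6:.2f} MPa")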

  5. Testing the use of bulk organic δ13C, δ15N, and Corg:Ntot ratios to estimate subsidence during the 1964 great Alaska earthquake

    USGS Publications Warehouse

    Bender, Adrian M; Witter, Robert C.; Rogers, Matthew

    2015-01-01

    During the Mw 9.2 1964 great Alaska earthquake, Turnagain Arm near Girdwood, Alaska subsided 1.7 ± 0.1 m based on pre- and postearthquake leveling. The coseismic subsidence in 1964 caused equivalent sudden relative sea-level (RSL) rise that is stratigraphically preserved as mud-over-peat contacts where intertidal silt buried peaty marsh surfaces. Changes in intertidal microfossil assemblages across these contacts have been used to estimate subsidence in 1964 by applying quantitative microfossil transfer functions to reconstruct corresponding RSL rise. Here, we review the use of organic stable C and N isotope values and Corg:Ntot ratios as alternative proxies for reconstructing coseismic RSL changes, and report independent estimates of subsidence in 1964 by using δ13C values from intertidal sediment to assess RSL change caused by the earthquake. We observe that surface sediment δ13C values systematically decrease by ∼4‰ over the ∼2.5 m increase in elevation along three 60- to 100-m-long transects extending from intertidal mud flat to upland environments. We use a straightforward linear regression to quantify the relationship between modern sediment δ13C values and elevation (n = 84, R2 = 0.56). The linear regression provides a slope–intercept equation used to reconstruct the paleoelevation of the site before and after the earthquake based on δ13C values in sandy silt above and herbaceous peat below the 1964 contact. The regression standard error (average = ±0.59‰) reflects the modern isotopic variability at sites of similar surface elevation, and is equivalent to an uncertainty of ±0.4 m elevation with respect to Mean Higher High Water. To reduce potential errors in paleoelevation and subsidence estimates, we analyzed multiple sediment δ13C values in nine cores on a shore-perpendicular transect at Bird Point. Our method estimates 1.3 ± 0.4 m of coseismic RSL rise across the 1964 contact by taking the arithmetic mean of the
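
    The slope-intercept reconstruction described above amounts to an ordinary least-squares fit of elevation on δ13C, applied to the sediment above and below the 1964 contact. The calibration values below are illustrative placeholders, not the paper's measurements.

        import numpy as np

        # Hypothetical modern calibration: sediment d13C (per mil) vs elevation (m MHHW)
        d13c = np.array([-26.0, -25.1, -24.3, -23.6, -22.8, -22.1])
        elev = np.array([2.4, 2.0, 1.6, 1.1, 0.7, 0.3])

        slope, intercept = np.polyfit(d13c, elev, 1)

        # Paleoelevation from peat below (pre-earthquake) and silt above (post-earthquake)
        elev_pre = slope * (-25.5) + intercept
        elev_post = slope * (-23.0) + intercept
        print(f"estimated coseismic RSL rise ~{elev_pre - elev_post:.1f} m")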

  6. Recent wetland land loss due to hurricanes: improved estimates based upon multiple source images

    USGS Publications Warehouse

    Kranenburg, Christine J.; Palaseanu-Lovejoy, Monica; Barras, John A.; Brock, John C.

    2011-01-01

    The objective of this study was to provide a moderate resolution 30-m fractional water map of the Chenier Plain for 2003, 2006 and 2009 by using information contained in high-resolution satellite imagery of a subset of the study area. Indices and transforms pertaining to vegetation and water were created using the high-resolution imagery, and a threshold was applied to obtain a categorical land/water map. The high-resolution data were used to train a decision-tree classifier to estimate percent water in a lower resolution (Landsat) image. Two new water indices based on the tasseled cap transformation were proposed for IKONOS imagery in wetland environments, and more than 700 input parameter combinations were considered for each Landsat image classified. Final selection and thresholding of the resulting percent water maps involved over 5,000 unambiguously classified random points using corresponding 1-m resolution aerial photographs, and a statistical optimization procedure to determine the threshold at which the maximum Kappa coefficient occurs. Each selected dataset has a Kappa coefficient and percent correctly classified (PCC) values for water, land, and total greater than 90%. An accuracy assessment using 1,000 independent random points was performed. Using the validation points, the PCC values decreased to around 90%. The time series change analysis indicated that, due to Hurricane Rita, the study area lost 6.5% of its marsh area, and transient changes were less than 3% for either land or water. Hurricane Ike resulted in an additional 8% land loss, although not enough time has passed to discriminate between persistent and transient changes.
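
    The threshold-selection step can be reproduced in outline: sweep candidate water-fraction thresholds and keep the one that maximizes the Kappa coefficient against reference points. Both arrays below are synthetic stand-ins for the classifier output and the photo-interpreted reference labels.

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(0)
        pct_water = rng.uniform(0.0, 1.0, 5000)    # hypothetical percent-water values
        ref_water = (pct_water + rng.normal(0, 0.2, 5000)) > 0.5   # hypothetical truth

        thresholds = np.linspace(0.05, 0.95, 19)
        kappas = [cohen_kappa_score(ref_water, pct_water > t) for t in thresholds]
        best = thresholds[int(np.argmax(kappas))]
        print(f"Kappa-optimal threshold: {best:.2f} (Kappa = {max(kappas):.2f})")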

  7. Geodetic model of the 2015 April 25 Mw 7.8 Gorkha Nepal Earthquake and Mw 7.3 aftershock estimated from InSAR and GPS data

    NASA Astrophysics Data System (ADS)

    Feng, Guangcai; Li, Zhiwei; Shan, Xinjian; Zhang, Lei; Zhang, Guohong; Zhu, Jianjun

    2015-11-01

    We map the complete surface deformation of the 2015 Mw 7.8 Gorkha, Nepal earthquake and its Mw 7.3 aftershock with images from two parallel ALOS2 descending ScanSAR paths and two ascending Stripmap paths. The coseismic fault-slip model from a combined inversion of InSAR and GPS data reveals that this event was dominated by reverse faulting, with a slight right-lateral strike-slip component. The maximum thrust-slip and right-lateral strike-slip values are 5.7 and 1.2 m, respectively, located at a depth of 7-15 km southeast of the epicentre. The total seismic moment of 7.55 × 10^20 Nm, corresponding to a moment magnitude of Mw 7.89, is similar to the seismological estimates. Fault slip in both the main shock and the largest aftershock is absent from the upper thrust shallower than 7 km, indicating that the lower edge of the Himalayan Main Frontal Thrust is locked and that a future seismic disaster in this area would not be unexpected. We also find that the energy released in this earthquake is much less than the accumulated moment deficit over the past seven centuries estimated in previous studies, so the region surrounding Kathmandu remains under threat of seismic hazard.
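
    The quoted magnitude follows from the standard Hanks-Kanamori moment-magnitude relation, which can be checked directly:

        import math

        M0 = 7.55e20                                     # seismic moment (Nm), from the abstract
        Mw = (2.0 / 3.0) * math.log10(M0 * 1e7) - 10.7   # Hanks & Kanamori (1979), dyne-cm form
        print(f"Mw = {Mw:.2f}")                          # ~7.89, matching the reported value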

  8. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-04-01

    This paper proposes a new output-only element-level system identification and input estimation technique, towards the simultaneous identification of modal parameters, input excitation time history and structural features at the element-level by adopting earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), releases strong assumptions of earlier element-level techniques, by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence for the identified estimates. The proposed method works in a deterministic way and is completely developed in State-Space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, also with noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.

  10. Earthquake prediction

    SciTech Connect

    Ma, Z.; Fu, Z.; Zhang, Y.; Wang, C.; Zhang, G.; Liu, D.

    1989-01-01

    Mainland China is situated at the eastern edge of the Eurasian seismic system and is the largest intra-continental region of shallow strong earthquakes in the world. Based on nine earthquakes with magnitudes ranging between 7.0 and 7.9, the book provides observational data and discusses successes and failures of earthquake prediction. Observations of various phenomena and of seismic activity occurring before and after individual earthquakes led to the establishment of some general characteristics valid for earthquake prediction.

  11. Estimating the probability of occurrence of earthquakes (M>6) in the Western part of the Corinth rift using fault-based and classical seismotectonic approaches.

    NASA Astrophysics Data System (ADS)

    Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène

    2014-05-01

    The Corinth rift, Greece, is one of the regions with the highest strain rates in the Euro-Mediterranean area, and as such it has long been identified as a site of major importance for earthquake studies in Europe (20 years of research by the Corinth Rift Laboratory and 4 years of in-depth studies by the ANR-SISCOR project). This enhanced knowledge, acquired in particular in the western part of the Gulf of Corinth, an area about 50 by 40 km between the city of Patras to the west and the city of Aigion to the east, provides an excellent opportunity to compare fault-based and classical seismotectonic approaches currently used in seismic hazard assessment studies. A homogeneous earthquake catalogue was first constructed for the Greek territory based on two existing earthquake catalogues available for Greece (National Observatory of Athens and Thessaloniki). In spite of numerous documented damaging earthquakes, only a limited number of macroseismic intensity data points are available in the existing databases for the damaging earthquakes affecting the west Corinth rift region. A re-interpretation of the macroseismic intensity field for numerous events was thus conducted, following an in-depth analysis of existing and newly found documentation (for details see Rovida et al. EGU2014-6346). In parallel, the construction of a comprehensive database of all relevant geological, geodetic, and geophysical information (available in the literature and recently collected within the ANR-SISCOR project) made it possible to propose rupture geometries for the different fault systems identified in the study region. The combination of the new earthquake parameters and the newly defined fault geometries, together with the existing published paleoseismic data, allowed us to propose a suite of rupture scenarios including the activation of multiple fault segments. The methodology used to achieve this goal consisted of setting up a logic tree that reflected the opinion of all the members of the ANR

  12. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    NASA Astrophysics Data System (ADS)

    Bird, P.

    2010-12-01

    The hope expressed in the title question above can be contradicted in 5 ways, listed below. To summarize, an earthquake rupture can be larger than anticipated either because the fault system has not been fully mapped, or because the rupture is not limited to the pre-existing fault network. 1. Geologic mapping of faults is always incomplete due to four limitations: (a) Map-scale limitation: Faults below a certain (scale-dependent) apparent offset are omitted; (b) Field-time limitation: The most obvious fault(s) get(s) the most attention; (c) Outcrop limitation: You can't map what you can't see; and (d) Lithologic-contrast limitation: Intra-formation faults can be tough to map, so they are often assumed to be minor and omitted. If mapping is incomplete, fault traces may be longer and/or better-connected than we realize. 2. Fault trace “lengths” are unreliable guides to maximum magnitude. Fault networks have multiply-branching, quasi-fractal shapes, so fault “length” may be meaningless. Naming conventions for main strands are unclear, and rarely reviewed. Gaps due to Quaternary alluvial cover may not reflect deeper seismogenic structure. Mapped kinks and other “segment boundary asperities” may be only shallow structures. Also, some recent earthquakes have jumped and linked “separate” faults (Landers, California 1992; Denali, Alaska, 2002) [Wesnousky, 2006; Black, 2008]. 3. Distributed faulting (“eventually occurring everywhere”) is predicted by several simple theories: (a) Viscoelastic stress redistribution in plate/microplate interiors concentrates deviatoric stress upward until they fail by faulting; (b) Unstable triple-junctions (e.g., between 3 strike-slip faults) in 2-D plate theory require new faults to form; and (c) Faults which appear to end (on a geologic map) imply distributed permanent deformation. This means that all fault networks evolve and that even a perfect fault map would be incomplete for future ruptures. 4. A recent attempt

  13. Frictional Heat Generation and Slip Duration Estimated From Micro-fault in an Exhumed Accretionary Complex and Their Relations to the Scaling Law for Slow Earthquakes

    NASA Astrophysics Data System (ADS)

    Hashimoto, Y.; Morita, K.; Okubo, M.; Hamada, Y.; Lin, W.; Hirose, T.; Kitamura, M.

    2015-12-01

    Fault motion has been estimated from the diffusion pattern of frictional heating recorded in the geology (e.g., Fulton et al., 2012). The same record for the deeper subduction plate interface can be observed in micro-faults in an exhumed accretionary complex. In this study, we focused on a micro-fault within the Cretaceous Shimanto Belt, SW Japan, to estimate fault motion from the frictional heating diffusion pattern. A carbonaceous material concentrated layer (CMCL) with a thickness of ~2 m is observed in the study area. Some micro-faults cut the CMCL; the fault examined here is about 3.7 mm thick. Injection veins and dilatant fractures were observed in thin sections, suggesting that high fluid pressure existed. Samples 10 cm long were collected to measure the distribution of vitrinite reflectance (Ro) as a function of distance from the center of the micro-fault. The Ro of the host rock was ~1.0%. A diffusion pattern was detected, with Ro decreasing from ~1.2% to ~1.1%; the characteristic diffusion distance is ~4-9 cm. We conducted a grid search to find the optimal frictional heat generation per unit area (Q, the product of friction coefficient, normal stress, and slip velocity) and slip duration (t) to fit the diffusion pattern. Thermal diffusivity (0.98 × 10^-8 m²/s) and thermal conductivity (2.0 W/mK) were measured. As a result, Q of 2000-2500 J/m² and t of 63,000-126,000 s were estimated. Moment magnitudes (M0) of slow earthquakes (slow EQs) follow a scaling law with slip duration whose dimension differs from that for normal earthquakes (normal EQs) (Ide et al., 2007). The slip duration estimated in this study (~10^4-10^5 s) is consistent with an M0 of 4-5 and does not fit the scaling law for normal EQs. Heat generation inverted from an M0 of 4-5 corresponds to ~10^8-10^11 J, which is consistent with the rupture area of 10^5-10^8 m² in this study. The comparisons of heat generation and slip duration between geological measurements and geophysical remote observations give us the estimation of rupture area, M0, and
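
    The (Q, t) grid search described above can be sketched generically: evaluate a forward model of the heating profile over the grid and keep the pair that minimizes the misfit to the observed reflectance profile. The forward model below is a deliberately simplified Gaussian stand-in for the full heat-diffusion and maturation kinetics, and every number is a placeholder.

        import numpy as np

        kappa = 0.98e-8                  # thermal diffusivity (m^2/s), from the abstract
        x = np.linspace(0.0, 0.10, 21)   # distance from the fault center (m)

        def forward(Q, t):
            # Stand-in model: amplitude scales with Q, width with the diffusion length
            return 1.0 + (Q / 2250.0) * 0.2 * np.exp(-x**2 / (4 * kappa * t))

        ro_obs = forward(2250.0, 9.0e4)  # synthetic "observed" Ro profile

        Q_grid = np.linspace(1000.0, 4000.0, 61)
        t_grid = np.logspace(4, 6, 61)
        misfit = np.array([[np.sqrt(np.mean((forward(Q, t) - ro_obs) ** 2))
                            for t in t_grid] for Q in Q_grid])
        iQ, it = np.unravel_index(np.argmin(misfit), misfit.shape)
        print(f"best Q ~{Q_grid[iQ]:.0f} J/m^2, best t ~{t_grid[it]:.0f} s")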

  14. Gross margin losses due to Salmonella Dublin infection in Danish dairy cattle herds estimated by simulation modelling.

    PubMed

    Nielsen, T D; Kudahl, A B; Østergaard, S; Nielsen, L R

    2013-08-01

    Salmonella Dublin affects production and animal health in cattle herds. The objective of this study was to quantify the gross margin (GM) losses following introduction and spread of S. Dublin within dairy herds. The GM losses were estimated using an age-structured stochastic, mechanistic and dynamic simulation model. The model incorporated six age groups (neonatal, pre-weaned calves, weaned calves, growing heifers, breeding heifers and cows) and five infection stages (susceptible, acutely infected, carrier, super shedder and resistant). The effects of introducing one S. Dublin infectious heifer were estimated through 1000 simulation iterations for 12 scenarios. These 12 scenarios were combinations of three herd sizes (85, 200 and 400 cows) and four management levels (very good, good, poor and very poor). Input parameters for effects of S. Dublin on production and animal health were based on the literature and calibrations to mimic real-life observations. Mean annual GMs per cow stall were compared between herds experiencing within-herd spread of S. Dublin and non-infected reference herds over a 10-year period. The estimated GM losses were largest in the first year after infection, and increased with poorer management and herd size, e.g. average annual GM losses were estimated at 49 euros per stall for the first year after infection, and at 8 euros per stall annually averaged over the 10 years after herd infection for a 200 cow stall herd with very good management. In contrast, a 200 cow stall herd with very poor management lost on average 326 euros per stall during the first year, and 188 euros per stall annually averaged over the 10-year period following introduction of infection. The GM losses arose from both direct losses such as reduced milk yield, dead animals, treatment costs and abortions as well as indirect losses such as reduced income from sold heifers and calves, and lower milk yield of replacement animals. Through sensitivity analyses it was found that the

  15. Results of rainfall simulation to estimate sediment-bound carbon and nitrogen loss from an Atlantic Coastal Plain (USDA) ultisol

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The impact of erosion on soil and carbon loss and redistribution within landscapes is an important component for developing estimates of carbon sequestration potential, management plans to maintain soil quality, and transport of sediment bound agrochemicals. Soils of the Southeastern U.S. Coastal Pl...

  16. Hurricane Loss Estimation Models: Opportunities for Improving the State of the Art.

    NASA Astrophysics Data System (ADS)

    Watson, Charles C., Jr.; Johnson, Mark E.

    2004-11-01

    The results of hurricane loss models are used regularly for multibillion dollar decisions in the insurance and financial services industries. These models are proprietary, and this “black box” nature hinders analysis. The proprietary models produce a wide range of results, often yielding loss costs that differ by a ratio of three to one or more. In a study for the state of North Carolina, 324 combinations of loss models were analyzed, based on a combination of nine wind models, four surface friction models, and nine damage models drawn from the published literature in insurance, engineering, and meteorology. These combinations were tested against reported losses from Hurricanes Hugo and Andrew as reported by a major insurance company, as well as storm total losses for additional storms. Annual loss costs were then computed using these 324 combinations of models for both North Carolina and Florida, and compared with publicly available proprietary model results in Florida. The wide range of resulting loss costs for open, scientifically defensible models that perform well against observed losses mirrors the wide range of loss costs computed by the proprietary models currently in use. This outcome may be discouraging for governmental and corporate decision makers relying on this data for policy and investment guidance (due to the high variability across model results), but it also provides guidance for the efforts of future investigations to improve loss models. Although hurricane loss models are true multidisciplinary efforts, involving meteorology, engineering, statistics, and actuarial sciences, the field of meteorology offers the most promising opportunities for improvement of the state of the art.
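
    The 324 figure is simply the Cartesian product of the component choices (9 wind x 4 friction x 9 damage models); enumerating such combinations is a one-liner, with placeholder model labels:

        from itertools import product

        wind = [f"wind_{i}" for i in range(9)]         # hypothetical model labels
        friction = [f"friction_{i}" for i in range(4)]
        damage = [f"damage_{i}" for i in range(9)]

        combos = list(product(wind, friction, damage))
        print(len(combos))   # 324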

  17. Spatial and temporal estimation of soil loss for the sustainable management of a wet semi-arid watershed cluster.

    PubMed

    Rejani, R; Rao, K V; Osman, M; Srinivasa Rao, Ch; Reddy, K Sammi; Chary, G R; Pushpanjali; Samuel, Josily

    2016-03-01

    The ungauged wet semi-arid watershed cluster, Seethagondi, lies in the Adilabad district of Telangana in India and is prone to severe erosion and water scarcity. The runoff and soil loss data at watershed, catchment, and field level are necessary for planning soil and water conservation interventions. In this study, an attempt was made to develop a spatial soil loss estimation model for the Seethagondi cluster using RUSLE coupled with ARCGIS, and the model was used to estimate the soil loss spatially and temporally. The daily rainfall data of Aphrodite for the period from 1951 to 2007 were used, and the annual rainfall varied from 508 to 1351 mm with a mean annual rainfall of 950 mm and a mean erosivity of 6789 MJ mm ha(-1) h(-1) year(-1). Considerable variation in land use and land cover, especially in crop land and fallow land, was observed during normal and drought years, and corresponding variation in the erosivity, C factor, and soil loss was also noted. The mean value of the C factor derived from NDVI for crop land was 0.42 and 0.22 in normal and drought years, respectively. The topography is undulating and a major portion of the cluster has slope less than 10°, and 85.3% of the cluster has soil loss below 20 t ha(-1) year(-1). The soil loss from crop land varied from 2.9 to 3.6 t ha(-1) year(-1) in low rainfall years to 31.8 to 34.7 t ha(-1) year(-1) in high rainfall years with a mean annual soil loss of 12.2 t ha(-1) year(-1). The soil loss from crop land was highest in the month of August, with a soil loss of 13.1 and 2.9 t ha(-1) year(-1) in normal and drought years, respectively. Based on the soil loss in a normal year, the interventions recommended for 85.3% of the area of the watershed include agronomic measures such as contour cultivation, graded bunds, strip cropping, mixed cropping, crop rotations, mulching, summer plowing, vegetative bunds, agri-horticultural systems, and management practices such as broad bed furrow, raised sunken beds, and harvesting available water
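
    The spatial model above rests on the standard RUSLE product A = R * K * LS * C * P. A single-cell example, taking the mean erosivity and the normal-year crop-land C factor from the abstract and hypothetical values for the remaining factors:

        # RUSLE soil loss: A = R * K * LS * C * P (t ha^-1 yr^-1)
        R = 6789.0   # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1), mean from the abstract
        K = 0.005    # hypothetical soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
        LS = 1.2     # hypothetical slope length-steepness factor
        C = 0.42     # cover-management factor, crop land in a normal year (abstract)
        P = 0.8      # hypothetical support-practice factor

        A = R * K * LS * C * P
        print(f"soil loss ~{A:.1f} t/ha/yr")   # ~13.7 with these placeholder factors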

  18. The integration of stress, strain, and seismogenic fault data: towards more robust estimates of the earthquake potential in Italy and its surroundings

    NASA Astrophysics Data System (ADS)

    Caporali, Alessandro; Braitenberg, Carla; Burrato, Pierfrancesco; Carafa, Michele; Di Giovambattista, Rita; Gentili, Stefania; Mariucci, Maria Teresa; Montone, Paola; Morsut, Federico; Nicolini, Luca; Pivetta, Tommaso; Roselli, Pamela; Rossi, Giuliana; Valensise, Gian Luca; Vigano, Alfio

    2016-04-01

    Italy is an earthquake-prone country with a long tradition in observational seismology. For many years, the country's unique historical earthquake record has revealed fundamental properties of Italian seismicity and has been used to determine earthquake rates. Paleoseismological studies conducted over the past 20 years have shown that the length of this record - 5 to 8 centuries, depending on areas - is just a fraction of the typical recurrence interval of Italian faults - consistently larger than a millennium. Hence, so far the earthquake potential may have been significantly over- or under-estimated. Based on a clear perception of these circumstances, over the past two decades large networks and datasets describing independent aspects of the seismic cycle have been developed. INGV, OGS, some universities and local administrations have built networks that globally include nearly 500 permanent GPS/GNSS sites, routinely used to compute accurate horizontal velocity gradients reflecting the accumulation of tectonic strain. INGV developed the Italian present-day stress map, which includes over 700 datapoints based on geophysical in-situ measurements and fault plane solutions, and the Database of Individual Seismogenic Sources (DISS), a unique compilation featuring nearly 300 three-dimensional seismogenic faults over the entire nation. INGV also updates and maintains the Catalogo Parametrico dei Terremoti Italiani (CPTI) and the instrumental earthquake database ISIDe, whereas OGS operates its own seismic catalogue for northeastern Italy. We present preliminary results on the use of this wealth of homogeneously collected and updated observations of stress and strain as a source of loading/unloading of the faults listed in the DISS database. We use the geodetic strain rate - after converting it to stress rate in conjunction with the geophysical stress data of the Stress Map - to compute the Coulomb Failure Function on all fault planes described by the DISS database. This
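
    Schematically, the last step resolves a stress-rate tensor onto each DISS fault plane and forms the Coulomb rate ΔCFF = Δτ_slip + μ' Δσ_n. A minimal numpy version with an assumed tensor, fault geometry, and effective friction coefficient:

        import numpy as np

        sigma_dot = np.array([[10.0,  2.0,  0.0],    # hypothetical stress-rate tensor
                              [ 2.0, -5.0,  0.0],    # (Pa/yr), tension positive
                              [ 0.0,  0.0, -5.0]])
        n = np.array([0.0, np.sin(np.radians(45.0)), np.cos(np.radians(45.0))])  # fault normal
        s = np.array([1.0, 0.0, 0.0])                # slip direction (unit vector)
        mu_eff = 0.4                                 # assumed effective friction

        traction = sigma_dot @ n
        sigma_n = n @ traction                       # normal stress rate
        tau_s = s @ traction                         # shear stress rate along slip
        print(f"Coulomb stress rate: {tau_s + mu_eff * sigma_n:.2f} Pa/yr")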

  19. Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...

  20. Phosphorus loss and its estimation in a small watershed of the Yimeng mountainous area, China

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-point source pollution is severe in the Yimeng Mountainous area of China. Few studies have been conducted to identify and predict phosphorus loss at a watershed scale in this region. The objectives of this study were to identify the characteristics of phosphorus loss and further to develop regre...

  1. Mediation analysis to estimate direct and indirect milk losses associated with bacterial load in bovine subclinical mammary infections.

    PubMed

    Detilleux, J; Theron, L; Duprez, J-N; Reding, E; Moula, N; Detilleux, M; Bertozzi, C; Hanzen, C; Mainil, J

    2016-08-01

    Milk losses associated with mastitis can be attributed either to effects of pathogens per se (i.e. direct losses) or to effects of the immune response triggered by the presence of mammary pathogens (i.e. indirect losses). Test-day milk somatic cell counts (SCC) and the number of bacterial colony forming units (CFU) found in milk samples are putative measures of the level of immune response and of the bacterial load, respectively. Mediation models, in which one independent variable affects a second variable which, in turn, affects a third one, are suitable models for estimating direct and indirect losses. Here, we evaluated the feasibility of a mediation model in which test-day SCC and milk were regressed on bacterial CFU measured at three selected sampling dates, 1 week apart. We applied this method to cows free of clinical signs and with records on up to 3 test-days before and after the date of the first bacteriological samples. Most bacteriological cultures were negative (52.38%); others contained either staphylococci (23.08%), streptococci (9.16%), mixed bacteria (8.79%) or were contaminated (6.59%). Only losses mediated by an increase in SCC were significantly different from zero. In cows with three consecutive bacteriological positive results, we estimated a decreased milk yield of 0.28 kg per day for each unit increase in log2-transformed CFU that elicited one unit increase in log2-transformed SCC. In cows with one or two bacteriological positive results, indirect milk loss was not significantly different from zero, although test-day milk decreased by 0.74 kg per day for each unit increase of log2-transformed SCC. These results highlight the importance of milk losses that are mediated by an increase in SCC during mammary infection and the feasibility of decomposing total milk loss into its direct and indirect components. PMID:26923826
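
    The decomposition follows the classic product-of-coefficients mediation logic: regress the mediator (log2 SCC) on the exposure (log2 CFU), regress the outcome (milk) on both, and read the indirect effect as the product of the two mediator coefficients. A synthetic-data sketch, not the study's data or exact estimation procedure:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        cfu = rng.normal(8, 2, n)                     # log2 bacterial CFU (synthetic)
        scc = 2.0 + 0.5 * cfu + rng.normal(0, 1, n)   # log2 SCC, partly driven by CFU
        milk = 30 - 0.28 * scc - 0.05 * cfu + rng.normal(0, 1, n)   # test-day milk (kg)

        a = np.polyfit(cfu, scc, 1)[0]                # exposure -> mediator
        X = np.column_stack([np.ones(n), cfu, scc])
        coef, *_ = np.linalg.lstsq(X, milk, rcond=None)
        direct, b = coef[1], coef[2]                  # direct effect, mediator -> outcome

        print(f"indirect (via SCC): {a * b:.3f} kg/day per log2 CFU")
        print(f"direct:             {direct:.3f} kg/day per log2 CFU")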

  2. Long-term psychological outcome for non-treatment-seeking earthquake survivors in Turkey.

    PubMed

    Salcioglu, Ebru; Basoglu, Metin; Livanou, Maria

    2003-03-01

    This study examined the incidence of posttraumatic stress disorder (PTSD) and depression in 586 earthquake survivors living in prefabricated housing sites a mean of 20 months after the 1999 earthquake in Turkey. The estimated rates of PTSD and major depression were 39% and 18%, respectively. More severe PTSD symptoms related to greater fear during the earthquake, female gender, older age, participation in rescue work, having been trapped under rubble, and personal history of psychiatric illness. More severe depression symptoms related to older age, loss of close ones, single marital status, past psychiatric illness, previous trauma experience, female gender, and family history of psychiatric illness. These findings suggest that catastrophic earthquakes have long-term psychological consequences, particularly for survivors with high levels of trauma exposure. These findings lend further support to the need for long-term mental health care policies for earthquake survivors. Outreach service delivery programs are needed to access non-treatment-seeking survivors with chronic PTSD. PMID:12637841

  3. Quantitative estimation of farmland soil loss by wind-erosion using improved particle-size distribution comparison method (IPSDC)

    NASA Astrophysics Data System (ADS)

    Rende, Wang; Zhongling, Guo; Chunping, Chang; Dengpan, Xiao; Hongjun, Jiang

    2015-12-01

    The rapid and accurate estimation of soil loss by wind erosion still remains a challenge. This study presents an improved scheme for estimating the soil loss by wind erosion of farmland. The method estimates soil loss based on a comparison of the relative contents of erodible and non-erodible particles between the surface and sub-surface layers of the farmland ploughed layer after wind erosion. It relies on two features: the soil particle-size distribution of the sampled soil layer (approximately 2 cm) is relatively uniform, and, in the surface layer, wind erosion causes the relative numbers of erodible and non-erodible particles to decrease and increase, respectively. Estimates were made with this method for the wind erosion periods (WEP) from Oct. 2012 to May 2013 and from Oct. 2013 to April 2014, and for a large wind-erosion event (WEE) on May 3, 2014, in the Bashang area of Hebei Province. The results showed that the average soil loss of farmland by wind erosion from Oct. 2012 to May 2013 was 2852.14 g/m² with an average depth of 0.21 cm, while the soil loss from Oct. 2013 to April 2014 was 1199.17 g/m² with a mean depth of 0.08 cm. During the severe WEE on May 3, 2014, the average soil loss of farmland by wind erosion was 1299.19 g/m² with an average depth of 0.10 cm. The soil loss by wind erosion of ploughed and raked fields (PRF) was approximately twice that of oat-stubble fields (OSF). The improved particle-size distribution comparison method (IPSDC) has several advantages. It can calculate not only the amount of wind erosion but also the amount of wind deposition. Slight changes in the sampling thickness and in the particle diameter range of the non-erodible particles do not obviously influence the results. Furthermore, the method is convenient, rapid, and simple to implement. It is suitable for estimating the soil loss or deposition by wind erosion of farmland with flat surfaces and high

  4. A Match-based approach to the estimation of polar stratospheric ozone loss using Aura Microwave Limb Sounder observations

    NASA Astrophysics Data System (ADS)

    Livesey, N. J.; Santee, M. L.; Manney, G. L.

    2015-09-01

    The well-established "Match" approach to quantifying chemical destruction of ozone in the polar lower stratosphere is applied to ozone observations from the Microwave Limb Sounder (MLS) on NASA's Aura spacecraft. Quantification of ozone loss requires distinguishing transport- and chemically induced changes in ozone abundance. This is accomplished in the Match approach by examining cases where trajectories indicate that the same air mass has been observed on multiple occasions. The method was pioneered using ozonesonde observations, for which hundreds of matched ozone observations per winter are typically available. The dense coverage of the MLS measurements, particularly at polar latitudes, allows matches to be made to thousands of observations each day. This study is enabled by recently developed MLS Lagrangian trajectory diagnostic (LTD) support products. Sensitivity studies indicate that the largest influences on the ozone loss estimates are the value of potential vorticity (PV) used to define the edge of the polar vortex (within which matched observations must lie) and the degree to which the PV of an air mass is allowed to vary between matched observations. Applying Match calculations to MLS observations of nitrous oxide, a long-lived tracer whose expected rate of change is negligible on the weekly to monthly timescales considered here, enables quantification of the impact of transport errors on the Match-based ozone loss estimates. Our loss estimates are generally in agreement with previous estimates for selected Arctic winters, though indicating smaller losses than many other studies. Arctic ozone losses are greatest during the 2010/11 winter, as seen in prior studies, with 2.0 ppmv (parts per million by volume) loss estimated at 450 K potential temperature (~ 18 km altitude). As expected, Antarctic winter ozone losses are consistently greater than those for the Arctic, with less interannual variability (e.g., ranging between 2.3 and 3.0 ppmv at 450 K). This
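
    In outline, a Match calculation regresses the ozone change between paired observations of the same air mass against the sunlit time accumulated between them; the slope is the photochemical loss rate. A schematic with synthetic matched pairs, not MLS data:

        import numpy as np

        rng = np.random.default_rng(2)
        sun_hours = rng.uniform(0, 200, 2000)   # sunlit time between matches (h), synthetic
        d_ozone = -0.004 * sun_hours + rng.normal(0, 0.1, 2000)   # ppmv change, synthetic

        loss_rate = np.polyfit(sun_hours, d_ozone, 1)[0]
        print(f"ozone loss rate ~{-loss_rate * 1000:.1f} ppbv per sunlit hour")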

  5. Applying the Land Use Portfolio Model to Estimate Natural-Hazard Loss and Risk - A Hypothetical Demonstration for Ventura County, California

    USGS Publications Warehouse

    Dinitz, Laura B.

    2008-01-01

    HAZUS-MH currently performs analyses for earthquakes, floods, and hurricane wind. HAZUS-MH loss estimates, however, do not account for some uncertainties associated with the specific natural-hazard scenarios, such as the likelihood of occurrence within a particular time horizon or the effectiveness of alternative risk-reduction options. Because of the uncertainties involved, it is challenging to make informative decisions about how to cost-effectively reduce risk from natural-hazard events. Risk analysis is one approach that decision-makers can use to evaluate alternative risk-reduction choices when outcomes are unknown. The Land Use Portfolio Model (LUPM), developed by the U.S. Geological Survey (USGS), is a geospatial scenario-based tool that incorporates hazard-event uncertainties to support risk analysis. The LUPM offers an approach to estimate and compare risks and returns from investments in risk-reduction measures. This paper describes and demonstrates a hypothetical application of the LUPM for Ventura County, California, and examines the challenges involved in developing decision tools that provide quantitative methods to estimate losses and analyze risk from natural hazards.

  6. Remote sensing as a tool for watershed-wide estimation of net solar radiation and water loss to the atmosphere

    NASA Technical Reports Server (NTRS)

    Khorram, S.; Thomas, R. W.

    1976-01-01

    Results are presented for a study intended to develop a general remote sensing-aided cost-effective procedure to estimate watershed-wide water loss to the atmosphere via evapotranspiration and to estimate net solar radiation over the watershed. Evapotranspiration estimation employs a basic two-stage two-phase sample of three information resolution levels. Net solar radiation is taken as one of the variables at each level of evapotranspiration modeling. The input information for models requiring spatial information will be provided by Landsat digital data, environmental satellite data, ground meteorological data, ground sample unit information, and topographic data. The outputs of the sampling-estimation/data bank system will be in-place maps of evapotranspiration on a data resolution element basis, watershed-wide evapotranspiration isopleths, and estimates of watershed and subbasin total evapotranspiration with associated statistical confidence bounds. The methodology developed is being tested primarily on the Spanish Creek Watershed, Plumas County, California.

  7. Hidden Earthquakes.

    ERIC Educational Resources Information Center

    Stein, Ross S.; Yeats, Robert S.

    1989-01-01

    Points out that large earthquakes can take place not only on faults that cut the earth's surface but also on blind faults under folded terrain. Describes four examples of fold earthquakes. Discusses the fold earthquakes using several diagrams and pictures. (YP)

  8. Earthquake Risk Mitigation in the Tokyo Metropolitan area

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Sakai, S.; Kasahara, K.; Nakagawa, S.; Nanjo, K.; Panayotopoulos, Y.; Tsuruoka, H.

    2010-12-01

    Seismic disaster risk mitigation in urban areas constitutes a challenge that requires collaboration across scientific, engineering, and social-science fields. Examples of collaborative efforts include research on detailed plate structure with identification of all significant faults, developing dense seismic networks; strong ground motion prediction, which uses information on near-surface seismic site effects and fault models; earthquake-resistant and earthquake-proof structures; and cross-discipline infrastructure for effective risk mitigation just after catastrophic events. The risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern because the PSP caused past mega-thrust earthquakes, such as the 1703 Genroku earthquake (magnitude M8.0) and the 1923 Kanto earthquake (M7.9), which had 105,000 fatalities. An M7 or greater (M7+) earthquake in this area at present has high potential to produce devastating loss of life and property with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that an M7+ earthquake will cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) in economic loss. This earthquake is evaluated by the Earthquake Research Committee of Japan to have a 70% probability of occurrence within 30 years. In order to mitigate disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area (2007-2011) was launched in collaboration with scientists, engineers, and social scientists at nationwide institutions. The results obtained in the respective fields will be integrated by project termination to improve information on the strategy assessment for seismic risk mitigation in the Tokyo metropolitan area. In this talk, we give an outline of our project as an example of collaborative research on earthquake risk mitigation. Discussion is extended to our effort in progress and

  9. One-Step Targeted Minimum Loss-based Estimation Based on Universal Least Favorable One-Dimensional Submodels

    PubMed Central

    van der Laan, Mark; Gruber, Susan

    2016-01-01

    Consider a study in which one observes n independent and identically distributed random variables whose probability distribution is known to be an element of a particular statistical model, and one is concerned with estimation of a particular real valued pathwise differentiable target parameter of this data probability distribution. The targeted maximum likelihood estimator (TMLE) is an asymptotically efficient substitution estimator obtained by constructing a so called least favorable parametric submodel through an initial estimator with score, at zero fluctuation of the initial estimator, that spans the efficient influence curve, and iteratively maximizing the corresponding parametric likelihood till no more updates occur, at which point the updated initial estimator solves the so called efficient influence curve equation. In this article we construct a one-dimensional universal least favorable submodel for which the TMLE only takes one step, and thereby requires minimal extra data fitting to achieve its goal of solving the efficient influence curve equation. We generalize these to universal least favorable submodels through the relevant part of the data distribution as required for targeted minimum loss-based estimation. Finally, remarkably, given a multidimensional target parameter, we develop a universal canonical one-dimensional submodel such that the one-step TMLE, only maximizing the log-likelihood over a univariate parameter, solves the multivariate efficient influence curve equation. This allows us to construct a one-step TMLE based on a one-dimensional parametric submodel through the initial estimator, that solves any multivariate desired set of estimating equations. PMID:27227728
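
    For orientation, the targeting step that the article generalizes can be illustrated with the standard single-fluctuation TMLE for an average treatment effect (the familiar special case, not the universal submodel construction of the article); the data and working models below are synthetic and deliberately simple.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 2000
        W = rng.normal(size=n)
        A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * W)))
        Y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * A + 0.4 * W))))

        # Initial estimates: a crude outcome model and a fitted propensity model
        q0 = np.full(n, np.clip(Y.mean(), 0.01, 0.99))
        g1 = np.clip(sm.Logit(A, sm.add_constant(W)).fit(disp=0)
                       .predict(sm.add_constant(W)), 0.01, 0.99)

        # Targeting step: one logistic fluctuation along the clever covariate
        H = A / g1 - (1 - A) / (1 - g1)
        logit_q0 = np.log(q0 / (1 - q0))
        eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
                     offset=logit_q0).fit().params[0]

        expit = lambda x: 1 / (1 + np.exp(-x))
        ate = np.mean(expit(logit_q0 + eps / g1) - expit(logit_q0 - eps / (1 - g1)))
        print(f"TMLE estimate of the average treatment effect: {ate:.3f}")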

  10. Cryogenic measurements of mechanical loss of high-reflectivity coating and estimation of thermal noise.

    PubMed

    Granata, Massimo; Craig, Kieran; Cagnoli, Gianpietro; Carcy, Cécile; Cunningham, William; Degallaix, Jérôme; Flaminio, Raffaele; Forest, Danièle; Hart, Martin; Hennig, Jan-Simon; Hough, James; MacLaren, Ian; Martin, Iain William; Michel, Christophe; Morgado, Nazario; Otmani, Salim; Pinard, Laurent; Rowan, Sheila

    2013-12-15

    We report on low-frequency measurements of the mechanical loss of a high-quality (transmissivity T < 5 ppm at λ0 = 1064 nm, absorption loss < 0.5 ppm) multilayer dielectric coating of ion-beam-sputtered fused silica and titanium-doped tantala in the 10-300 K temperature range. A useful parameter for the computation of coating thermal noise on different substrates is derived as a function of temperature and frequency. PMID:24322234

  11. Wildlife Loss Estimates and Summary of Previous Mitigation Related to Hydroelectric Projects in Montana, Volume Three, Hungry Horse Project.

    SciTech Connect

    Casey, Daniel

    1984-10-01

    This assessment addresses the impacts to wildlife populations and wildlife habitats due to the Hungry Horse Dam project on the South Fork of the Flathead River, and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impacts to target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were identified.

  12. Estimating bias from loss to follow-up in a prospective cohort study of bicycle crash injuries

    PubMed Central

    Tin Tin, Sandar; Woodward, Alistair; Ameratunga, Shanthi

    2014-01-01

    Background Loss to follow-up, if related to exposures, confounders and outcomes of interest, may bias association estimates. We estimated the magnitude and direction of such bias in a prospective cohort study of crash injury among cyclists. Methods The Taupo Bicycle Study involved 2590 adult cyclists recruited from New Zealand's largest cycling event in 2006 and followed over a median period of 4.6 years through linkage to four administrative databases. We resurveyed the participants in 2009 and excluded three participants who died prior to the resurvey. We compared baseline characteristics and crash outcomes of the baseline (2006) and follow-up (those who responded in 2009) cohorts by ratios of relative frequencies and estimated potential bias from loss to follow-up on seven exposure-outcome associations of interest by ratios of HRs. Results Of the 2587 cyclists in the baseline cohort, 1526 (60%) responded to the follow-up survey. The responders were older, more educated and more socioeconomically advantaged. They were more experienced cyclists who often rode in a bunch, off-road or in the dark, but were less likely to engage in other risky cycling behaviours. Additionally, they experienced bicycle crashes more frequently during follow-up. The selection bias ranged between −10% and +9% for selected associations. Conclusions Loss to follow-up was differential by demographic, cycling and behavioural risk characteristics as well as crash outcomes, but did not substantially bias association estimates of primary research interest. PMID:24336816
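
    The bias quantification used here reduces to comparing effect estimates between the baseline cohort and the responder subset, e.g. as a ratio of hazard ratios (the numbers below are hypothetical):

        hr_baseline = 1.45   # hypothetical HR, full baseline cohort
        hr_followup = 1.32   # hypothetical HR, follow-up responders only

        bias_pct = (hr_followup / hr_baseline - 1) * 100
        print(f"selection bias: {bias_pct:+.0f}%")   # ~-9%, within the reported range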

  13. Great East Japan Earthquake Tsunami

    NASA Astrophysics Data System (ADS)

    Iijima, Y.; Minoura, K.; Hirano, S.; Yamada, T.

    2011-12-01

    supercritical flows, resulting in the loss of landward seawall slopes. Such erosion was also observed on the landward side of footpaths between rice fields. The Sendai plain was subjected to co-seismic subsidence just after the main shock of the earthquake. Seawater inundation resulting from tsunami run-up lasted two months. The historical document Sandai-jitsuroku, which gives a detailed history of all of Japan, describes the Jogan earthquake and subsequent tsunami that attacked the Sendai plain in AD 869. The document describes the prolonged period of flooding, and it is suggested that co-seismic subsidence of the plain took place. The inundation area of the Jogan tsunami, estimated from the distribution of its tsunami deposit, mostly overlaps with that of the 3.11 tsunami. Considering the close similarity of the two seismic shocks, we interpret the Great East Japan Earthquake Tsunami as a recurrence of the Jogan Earthquake Tsunami.

  14. Procedure to estimate maximum ground acceleration from macroseismic intensity rating: application to the Lima, Perú data from the October-3-1974-8.1-Mw earthquake

    NASA Astrophysics Data System (ADS)

    Ocola, L.

    2008-01-01

    Post-disaster reconstruction management of urban areas requires timely information on the ground response microzonation to strong levels of ground shaking, to minimize the vulnerability of the rebuilt environment to future earthquakes. In this paper, a procedure is proposed to quantitatively estimate the severity of ground response in terms of peak ground acceleration, computed from macroseismic rating data, soil properties (acoustic impedance) and the predominant frequency of shear waves at a site. The basic mathematical relationships are derived from the properties of wave propagation in a homogeneous and isotropic medium. We define a Macroseismic Intensity Scale IMS as the logarithm of the quantity of seismic energy that flows through a unit area normal to the direction of wave propagation in unit time. The derived constants that relate the IMS scale to peak acceleration agree well with coefficients derived from a linear regression between MSK macroseismic ratings and peak ground accelerations for historical earthquakes recorded at a strong motion station, at IGP's former headquarters, since 1954. The procedure was applied to the 3-October-1974 Lima macroseismic intensity data at places with geotechnical data and predominant ground frequency information. The observed and computed peak acceleration values at nearby sites agree well.
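
    The definition above implies a relation of roughly the following form. This is a plausible reconstruction under plane-wave and harmonic-motion assumptions, not the paper's exact derivation or constants: with energy flux F, acoustic impedance Z = ρc, and predominant shear-wave frequency f,

        \[
          I_{MS} = \log_{10} F, \qquad
          F = \tfrac{1}{2}\, Z\, v_{\max}^{2}, \qquad
          v_{\max} = \frac{a_{\max}}{2\pi f}
          \quad\Longrightarrow\quad
          a_{\max} = 2\pi f \sqrt{\frac{2\cdot 10^{\,I_{MS}}}{Z}}.
        \]

    Inverting the flux for a_max is what allows a macroseismic rating, once mapped to IMS, to be converted to peak ground acceleration at a site with known impedance and predominant frequency.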

  15. Estimating the near-surface site response to mitigate earthquake disasters at the October 6th city, Egypt, using HVSR and seismic techniques

    NASA Astrophysics Data System (ADS)

    Mohamed, Adel M. E.; Abdel Hafiez, H. E.; Taha, M. A.

    2013-06-01

    The damage caused by earthquakes in different localities necessitates evaluation of the subsurface structure. A priori estimation of site effects has become a major challenge for efficient mitigation of seismic risk. In moderate to large earthquakes, severe damage often occurs, even at some distance from the source, in zones whose unfavorable geotechnical conditions give rise to significant site effects. The damage distribution in the near-source area is also significantly affected by fault geometry and rupture history. Microtremor (background noise) and shallow seismic surveys (both seismic refraction and Multi-channel Analysis of Surface Waves (MASW)) were carried out in a specific area (the club of October 6 City and its adjacent open area). The natural periods derived from the HVSR (Horizontal to Vertical Spectral Ratio) analysis vary from 0.37 to 0.56 s. The shallow seismic refraction data acquired in the area were used to determine the attenuation of P-waves (Qp) in different layers, using the pulse-width technique. The evaluation of the site response in the studied area yields ground motion amplification factors ranging between 2.4 and 4.4.
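
    As an illustration of the HVSR technique named above, the sketch below computes a minimal H/V spectral ratio from three-component ambient-noise records and returns the peak period. Real processing averages many time windows and smooths the spectra; the function and band limits here are assumptions, not the authors' workflow.

        # Minimal HVSR sketch (assumed workflow): the H/V peak of ambient noise
        # approximates the site's fundamental frequency; its inverse is the
        # natural period of the kind reported above.
        import numpy as np

        def hvsr_peak_period(ns, ew, z, fs):
            """ns, ew, z: equal-length ambient-noise records; fs: sampling rate (Hz)."""
            freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)
            spec = lambda x: np.abs(np.fft.rfft(x * np.hanning(len(x))))
            h = np.sqrt(spec(ns) * spec(ew))        # geometric mean of horizontals
            hv = h / np.maximum(spec(z), 1e-12)     # H/V spectral ratio
            band = (freqs > 0.2) & (freqs < 20.0)   # engineering band of interest
            f0 = freqs[band][np.argmax(hv[band])]   # peak frequency (Hz)
            return 1.0 / f0                         # natural period (s)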

  16. Structural Constraints and Earthquake Recurrence Estimates for the West Tahoe-Dollar Point Fault, Lake Tahoe Basin, California

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Driscoll, N. W.; Kent, G.; Brothers, D. S.; Baskin, R. L.; Babcock, J. M.; Noble, P. J.; Karlin, R. E.

    2011-12-01

    Previous work in the Lake Tahoe Basin (LTB), California, identified the West Tahoe-Dollar Point Fault (WTDPF) as the most hazardous fault in the region. Onshore and offshore geophysical mapping delineated three segments of the WTDPF extending along the western margin of the LTB. The rupture patterns between the three WTDPF segments remain poorly understood. Fallen Leaf Lake (FLL), Cascade Lake, and Emerald Bay are three sub-basins of the LTB, located south of Lake Tahoe, that provide an opportunity to image primary earthquake deformation along the WTDPF and associated landslide deposits. We present results from recent (June 2011) high-resolution seismic CHIRP surveys in FLL and Cascade Lake, as well as complete multibeam swath bathymetry coverage of FLL. Radiocarbon dates obtained from the new piston cores acquired in FLL provide age constraints on the older FLL slide deposits and build on and complement previous work that dated the most recent event (MRE) in Fallen Leaf Lake at ~4.1-4.5 k.y. BP. The CHIRP data beneath FLL image slide deposits that appear to correlate with contemporaneous slide deposits in Emerald Bay and Lake Tahoe. A major slide imaged in FLL CHIRP data is slightly younger than the Tsoyowata ash (7950-7730 cal yrs BP) identified in sediment cores and appears synchronous with a major Lake Tahoe slide deposit (7890-7190 cal yrs BP). The equivalent age of these slides suggests the penultimate earthquake on the WTDPF may have triggered them. If correct, we postulate a recurrence interval of ~3-4 k.y. These results suggest the FLL segment of the WTDPF is nearing the end of its seismic recurrence cycle. Additionally, CHIRP profiles acquired in Cascade Lake image the WTDPF for the first time in this sub-basin, which is located near the transition zone between the FLL and Rubicon Point Sections of the WTDPF. We observe two fault strands trending N45°W across southern Cascade Lake for ~450 m. The strands produce scarps of ~5 m and ~2.7 m, respectively, on the lake

  17. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    than do the blockquake and Parkfield data. This provided opportunities for discussing the difference between Poisson and normal distributions, how those differences affect our estimation of future earthquake probabilities, the importance of both the mean and the standard deviation in predicting future behavior from a sequence of events, and how conditional probability is used to help seismologists predict future earthquakes given a known or theoretical distribution of past earthquakes.
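
    The conditional-probability calculation described here is compact enough to demonstrate. A sketch under a normal (characteristic-earthquake) recurrence model, with illustrative numbers rather than the Parkfield values:

        # Sketch of the classroom idea: given a normal recurrence model with
        # mean mu and standard deviation sigma (years), the chance of an event
        # in the next dt years, given t years have already elapsed, is
        # P(t < T <= t + dt | T > t). All numbers below are illustrative.
        from scipy.stats import norm

        def conditional_probability(t, dt, mu, sigma):
            rec = norm(loc=mu, scale=sigma)
            return (rec.cdf(t + dt) - rec.cdf(t)) / rec.sf(t)

        print(conditional_probability(t=30.0, dt=10.0, mu=22.0, sigma=6.0))

    The same function with a wider sigma shows why the spread of the recurrence distribution, not just its mean, controls the forecast.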

  18. Accuracy of telemetry signal power loss in a filter as an estimate for telemetry degradation

    NASA Technical Reports Server (NTRS)

    Koerner, M. A.

    1989-01-01

    When telemetry data is transmitted through a communication link, some degradation in telemetry performance occurs as a result of the imperfect frequency response of the channel. The term telemetry degradation, as used here, is the increase in received signal power required to offset the effect of this filtering. The usual approach to assessing this degradation is to assume that it is equal to the signal power loss in the filter, which is easily calculated. However, this approach neglects the effects of the nonlinear phase response of the filter, the effect of any reduction of the receiving-system noise due to the filter, and intersymbol interference. Here, an exact calculation of the telemetry degradation, which includes all of the above effects, is compared with the signal power loss calculation for RF filtering of NRZ data on a carrier. The signal power loss calculation is found to be a reasonable approximation when the filter follows the point at which the receiving-system noise is introduced, especially if the signal power loss is less than 0.5 dB. The signal power loss approximation is less valid when the receiving-system noise is not filtered.
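
    A minimal sketch of the signal power loss approximation discussed above: pass the NRZ power spectrum through a filter magnitude response and integrate, ignoring the phase, noise-reduction, and intersymbol-interference effects that the exact calculation includes. The single-pole filter shape is an assumption for illustration:

        # Fraction of NRZ signal power rejected by a filter magnitude response,
        # expressed in dB; this is the "signal power loss" approximation only.
        import numpy as np

        def nrz_power_loss_db(bw_over_rb: float, n: int = 200_000) -> float:
            """bw_over_rb: filter 3-dB bandwidth divided by the data rate."""
            f = np.linspace(1e-6, 50.0, n)            # frequency, in units of bit rate
            s2 = np.sinc(f) ** 2                      # NRZ power spectrum ~ sinc^2
            h2 = 1.0 / (1.0 + (f / bw_over_rb) ** 2)  # single-pole |H(f)|^2 (assumed)
            passed = np.sum(s2 * h2) / np.sum(s2)     # equal spacing cancels out
            return -10.0 * np.log10(passed)

        print(round(nrz_power_loss_db(1.0), 2))       # loss in dB (illustrative)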

  19. Estimating the Frequency of Horizontal Gene Transfer Using Phylogenetic Models of Gene Gain and Loss.

    PubMed

    Zamani-Dahaj, Seyed Alireza; Okasha, Mohamed; Kosakowski, Jakub; Higgs, Paul G

    2016-07-01

    We analyze patterns of gene presence and absence in a maximum likelihood framework with rate parameters for gene gain and loss. Standard methods allow independent gains and losses in different parts of a tree. While losses of the same gene are likely to be frequent, multiple gains need to be considered carefully. A gene gain could occur by horizontal transfer or by origin of a gene within the lineage being studied. If a gene is gained more than once, then at least one of these gains must be a horizontal transfer. A key parameter is the ratio of gain to loss rates, a/v. We consider the limiting case known as the infinitely many genes model, where a/v tends to zero and a gene cannot be gained more than once. The infinitely many genes model is used as a null model in comparison to models that allow multiple gains. Using genome data from cyanobacteria and archaea, it is found that the likelihood is significantly improved by allowing for multiple gains, but the average a/v is very small. The fraction of genes whose presence/absence pattern is best explained by multiple gains is only 15% in the cyanobacteria and 20% and 39% in two data sets of archaea. The distribution of rates of gene loss is very broad, which explains why many genes follow a treelike pattern of vertical inheritance, despite the presence of a significant minority of genes that undergo horizontal transfer. PMID:27189546
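
    On a single branch, the gain-loss machinery reduces to a two-state Markov chain whose closed form makes the infinitely many genes limit easy to see. A sketch (the full analysis combines such terms over an entire phylogeny):

        # Two-state (present/absent) gene gain-loss model: with gain rate a and
        # loss rate v, the probability of presence after branch length t has
        # the standard closed form below.
        import math

        def p_present(t: float, a: float, v: float, present_at_start: bool) -> float:
            eq = a / (a + v)                      # equilibrium presence probability
            p0 = 1.0 if present_at_start else 0.0
            return eq + (p0 - eq) * math.exp(-(a + v) * t)

        # In the infinitely-many-genes limit a/v -> 0, a gene, once lost, is
        # essentially never regained:
        print(p_present(t=1.0, a=0.01, v=1.0, present_at_start=True))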

  20. Earthquake hazards: a national threat

    USGS Publications Warehouse

    U.S. Geological Survey

    2006-01-01

    Earthquakes are one of the most costly natural hazards faced by the Nation, posing a significant risk to 75 million Americans in 39 States. The risks that earthquakes pose to society, including death, injury, and economic loss, can be greatly reduced by (1) better planning, construction, and mitigation practices before earthquakes happen, and (2) providing critical and timely information to improve response after they occur. As part of the multi-agency National Earthquake Hazards Reduction Program, the U.S. Geological Survey (USGS) has the lead Federal responsibility to provide notification of earthquakes in order to enhance public safety and to reduce losses through effective forecasts based on the best possible scientific information.

  1. Estimates of ground-water recharge, base flow, and stream reach gains and losses in the Willamette River basin, Oregon

    USGS Publications Warehouse

    Lee, Karl K.; Risley, John C.

    2002-01-01

    Precipitation-runoff models, base-flow-separation techniques, and stream gain-loss measurements were used to study recharge and ground-water surface-water interaction as part of a study of the ground-water resources of the Willamette River Basin. The study was a cooperative effort between the U.S. Geological Survey and the State of Oregon Water Resources Department. Precipitation-runoff models were used to estimate the water budget of 216 subbasins in the Willamette River Basin. The models were also used to compute long-term average recharge and base flow. Recharge and base-flow estimates will be used as input to a regional ground-water flow model, within the same study. Recharge and base-flow estimates were made using daily streamflow records. Recharge estimates were made at 16 streamflow-gaging-station locations and were compared to recharge estimates from the precipitation-runoff models. Base-flow separation methods were used to identify the base-flow component of streamflow at 52 currently operated and discontinued streamflow-gaging-station locations. Stream gain-loss measurements were made on the Middle Fork Willamette, Willamette, South Yamhill, Pudding, and South Santiam Rivers, and were used to identify and quantify gaining and losing stream reaches both spatially and temporally. These measurements provide further understanding of ground-water/surface-water interactions.
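
    Of the base-flow-separation techniques mentioned, recursive digital filters are among the most common. The report does not specify which method was applied, so the one-parameter Lyne-Hollick filter below is illustrative only:

        # Sketch of a one-parameter recursive digital baseflow filter
        # (Lyne-Hollick type); alpha ~ 0.925 is a conventional choice for
        # daily streamflow data.
        import numpy as np

        def baseflow_lyne_hollick(q: np.ndarray, alpha: float = 0.925) -> np.ndarray:
            """q: daily streamflow series; returns the base-flow component."""
            qf = np.zeros_like(q, dtype=float)          # quickflow component
            for t in range(1, len(q)):
                qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
                qf[t] = max(qf[t], 0.0)                 # quickflow cannot be negative
            return np.minimum(q - qf, q)                # baseflow = total - quickflow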

  2. A new macroseismic intensity prediction equation and magnitude estimates of the 1811-1812 New Madrid and 1886 Charleston, South Carolina, earthquakes

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Cramer, C. H.

    2013-12-01

    We develop an intensity prediction equation (IPE) for the Central and Eastern United States, explore differences between modified Mercalli intensities (MMI) and community internet intensities (CII) and the propensity for reporting, and estimate the moment magnitudes of the 1811-1812 New Madrid, MO, and 1886 Charleston, SC, earthquakes. We constrain the study with North American census data, the National Oceanic and Atmospheric Administration MMI dataset (responses between 1924 and 1985), and the USGS 'Did You Feel It?' CII dataset (responses between June 2000 and August 2012). The combined intensity dataset has more than 500,000 felt reports for 517 earthquakes with magnitudes between 2.5 and 7.2. The IPE has the basic form MMI = c1 + c2M + c3exp(λ) + c4λ, where M is moment magnitude and λ is mean log hypocentral distance. Previous IPEs used limited MMI datasets, did not differentiate between MMI and CII data in the CEUS, and did not account for spatial variations in population. These factors can have an impact at all magnitudes, especially the last factor at large magnitudes and small intensities, where the population drops to zero in the Atlantic Ocean and Gulf of Mexico. We assume that the reports of a given intensity have hypocentral distances that are log-normally distributed, with the distribution modulated by population and the propensity of individuals to report their experience. We do not account for variations in stress drop, regional variations in Q, or distance-dependent geometrical spreading. We simulate the distribution of reports of a given intensity accounting for population and use a grid search method to solve for the fraction of the population reporting the intensity, the standard deviation of the log-normal distribution, and the mean log hypocentral distance, which appears in the above equation. We find that lower intensities, both CII and MMI, are less likely to be reported than greater intensities. Further, there are strong spatial
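
    A sketch evaluating an IPE of the stated form; the coefficients are placeholders rather than the study's fitted values, and the base of the logarithm is an assumption:

        # Evaluate MMI = c1 + c2*M + c3*exp(lambda) + c4*lambda for a scenario.
        # Coefficients are hypothetical placeholders; natural log is assumed.
        import math

        def predict_mmi(m: float, r_hyp_km: float,
                        c1=1.7, c2=1.4, c3=-0.002, c4=-1.0) -> float:
            """m: moment magnitude; r_hyp_km: hypocentral distance in km."""
            lam = math.log(r_hyp_km)        # mean log hypocentral distance
            return c1 + c2 * m + c3 * math.exp(lam) + c4 * lam

        print(round(predict_mmi(7.0, 100.0), 1))   # illustrative only

    Note that c3*exp(λ) reduces to a term linear in distance, an anelastic-attenuation-like contribution, while c4λ captures geometric decay.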

  3. Microwave continuum measurements and estimates of mass loss rates for cool giants and supergiants

    NASA Technical Reports Server (NTRS)

    Drake, S. A.; Linsky, J. L.

    1986-01-01

    We present the results of a sensitive 6-cm radio continuum survey, conducted with the NRAO VLA, of 39 of the nearest single cool giants and supergiants of spectral types G0-M5. The survey was undertaken to obtain accurate measurements of the ionized-gas mass loss rates for a representative sample of such stars, and thereby to furnish constraints on, and a better understanding of, the total mass loss rates. The inferred angular diameters of the cool giant sources are twice as large as the photospheric angular diameters, implying that these stars are surrounded by extended chromospheres containing warm, partially ionized gas.

  4. DXA, bioelectrical impedance, ultrasonography and biometry for the estimation of fat and lean mass in cats during weight loss

    PubMed Central

    2012-01-01

    Background Few equations have been developed in veterinary medicine, compared to human medicine, to predict body composition. The present study evaluated the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA) and ultrasonography (US) in cats, proposing equations to estimate fat mass (FM) and lean mass (LM), with dual energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) in a weight loss program were used. DXA, BIO, BIA and US were performed in the obese state (T0; obese animals), after 10% weight loss (T1) and after 20% weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA and US. The best models were then evaluated by simple regression analysis, and the predicted means were compared with those determined by DXA to verify the accuracy of the equations. Results The independent variables determined by BIO, BIA and US that best correlated (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance) and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, p value and r2, 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted FM and one accurately predicted LM. Conclusions The equations with two variables are preferable because they are effective and offer an alternative method for estimating body composition in the clinical routine. For estimating lean mass, equations using body weight together with biometric measures can be proposed; for estimating fat mass, equations using body weight together with bioimpedance analysis can be proposed. PMID:22781317

  5. Estimates of the prevalence of anomalous signal losses in the Yellow Sea derived from acoustic and oceanographic computer model simulations

    NASA Astrophysics Data System (ADS)

    Chin-Bing, Stanley A.; King, David B.; Warn-Varnas, Alex C.; Lamb, Kevin G.; Hawkins, James A.; Teixeira, Marvi

    2002-05-01

    The results from collocated oceanographic and acoustic simulations in a region of the Yellow Sea near the Shandong peninsula have been presented [Chin-Bing et al., J. Acoust. Soc. Am. 108, 2577 (2000)]. In that work, the tidal flow near the peninsula was used to initialize a 2.5-dimensional ocean model [K. G. Lamb, J. Geophys. Res. 99, 843-864 (1994)] that subsequently generated internal solitary waves (solitons). The validity of these soliton simulations was established by matching satellite imagery taken over the region. Acoustic propagation simulations through this soliton field produced results similar to the anomalous signal loss measured by Zhou, Zhang, and Rogers [J. Acoust. Soc. Am. 90, 2042-2054 (1991)]. Analysis of the acoustic interactions with the solitons also confirmed the hypothesis that the loss mechanism involved acoustic mode coupling. Recently, we have attempted to estimate the prevalence of these anomalous signal losses in this region. These estimates were made by simulating acoustic effects over an 80-hour space-time evolution of soliton packets. Examples will be presented that suggest the conditions necessary for anomalous signal loss may be more prevalent than previously thought. [Work supported by ONR/NRL and by a High Performance Computing DoD grant.]

  6. Estimating Tempo and Mode of Y Chromosome Turnover: Explaining Y Chromosome Loss With the Fragile Y Hypothesis

    PubMed Central

    Blackmon, Heath; Demuth, Jeffery P.

    2014-01-01

    Chromosomal sex determination is phylogenetically widespread, having arisen independently in many lineages. Decades of theoretical work provide predictions about sex chromosome differentiation that are well supported by observations in both XY and ZW systems. However, the phylogenetic scope of previous work gives us a limited understanding of the pace of sex chromosome gain and loss and why Y or W chromosomes are more often lost in some lineages than others, creating XO or ZO systems. To gain phylogenetic breadth we therefore assembled a database of 4724 beetle species’ karyotypes and found substantial variation in sex chromosome systems. We used the data to estimate rates of Y chromosome gain and loss across a phylogeny of 1126 taxa estimated from seven genes. Contrary to our initial expectations, we find that highly degenerated Y chromosomes of many members of the suborder Polyphaga are rarely lost, and that cases of Y chromosome loss are strongly associated with chiasmatic segregation during male meiosis. We propose the “fragile Y” hypothesis, that recurrent selection to reduce recombination between the X and Y chromosome leads to the evolution of a small pseudoautosomal region (PAR), which, in taxa that require XY chiasmata for proper segregation during meiosis, increases the probability of aneuploid gamete production, with Y chromosome loss. This hypothesis predicts that taxa that evolve achiasmatic segregation during male meiosis will rarely lose the Y chromosome. We discuss data from mammals, which are consistent with our prediction. PMID:24939995

  7. Estimating tempo and mode of Y chromosome turnover: explaining Y chromosome loss with the fragile Y hypothesis.

    PubMed

    Blackmon, Heath; Demuth, Jeffery P

    2014-06-01

    Chromosomal sex determination is phylogenetically widespread, having arisen independently in many lineages. Decades of theoretical work provide predictions about sex chromosome differentiation that are well supported by observations in both XY and ZW systems. However, the phylogenetic scope of previous work gives us a limited understanding of the pace of sex chromosome gain and loss and why Y or W chromosomes are more often lost in some lineages than others, creating XO or ZO systems. To gain phylogenetic breadth we therefore assembled a database of 4724 beetle species' karyotypes and found substantial variation in sex chromosome systems. We used the data to estimate rates of Y chromosome gain and loss across a phylogeny of 1126 taxa estimated from seven genes. Contrary to our initial expectations, we find that highly degenerated Y chromosomes of many members of the suborder Polyphaga are rarely lost, and that cases of Y chromosome loss are strongly associated with chiasmatic segregation during male meiosis. We propose the "fragile Y" hypothesis, that recurrent selection to reduce recombination between the X and Y chromosome leads to the evolution of a small pseudoautosomal region (PAR), which, in taxa that require XY chiasmata for proper segregation during meiosis, increases the probability of aneuploid gamete production, with Y chromosome loss. This hypothesis predicts that taxa that evolve achiasmatic segregation during male meiosis will rarely lose the Y chromosome. We discuss data from mammals, which are consistent with our prediction. PMID:24939995

  8. Estimating Earthquake Magnitude from the Kentucky Bend Scarp in the New Madrid Seismic Zone Using Field Geomorphic Mapping and High-Resolution LiDAR Topography

    NASA Astrophysics Data System (ADS)

    Kelson, K. I.; Kirkendall, W. G.

    2014-12-01

    Recent suggestions that the 1811-1812 earthquakes in the New Madrid Seismic Zone (NMSZ) ranged from M6.8-7.0 versus M8.0 have implications for seismic hazard estimation in the central US. We more accurately identify the location of the NW-striking, NE-facing Kentucky Bend scarp along the northern Reelfoot fault, which is spatially associated with the Lake County uplift, contemporary seismicity, and changes in the Mississippi River from the February 1812 earthquake. We use 1m-resolution LiDAR hillshades and slope surfaces, aerial photography, soil surveys, and field geomorphic mapping to estimate the location, pattern, and amount of late Holocene coseismic surface deformation. We define eight late Holocene to historic fluvial deposits, and delineate younger alluvia that are progressively inset into older deposits on the upthrown, western side of the fault. Some younger, clayey deposits indicate past ponding against the scarp, perhaps following surface deformational events. The Reelfoot fault is represented by sinuous breaks-in-slope cutting across these fluvial deposits, locally coinciding with shallow faults identified via seismic reflection data (Woolery et al., 1999). The deformation pattern is consistent with NE-directed reverse faulting along single or multiple SW-dipping fault planes, and the complex pattern of fluvial deposition appears partially controlled by intermittent uplift. Six localities contain scarps across correlative deposits and allow evaluation of cumulative surface deformation from LiDAR-derived topographic profiles. Displacements range from 3.4±0.2 m, to 2.2±0.2 m, 1.4±0.3 m, and 0.6±0.1 m across four progressively younger surfaces. The spatial distribution of the profiles argues against the differences being a result of along-strike uplift variability. We attribute the lesser displacements of progressively younger deposits to recurrent surface deformation, but do not yet interpret these initial data with respect to possible earthquake

  9. Sensitivity and uncertainty analysis for the annual P loss estimator (APLE) model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...
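
    A generic sketch of this kind of input-error assessment by Monte Carlo propagation; the model and error distributions below are hypothetical stand-ins, not the APLE equations:

        # Generic Monte Carlo sketch of input-error propagation for a
        # field-scale P-loss model; p_loss_model and its inputs are
        # hypothetical placeholders.
        import numpy as np

        rng = np.random.default_rng(42)

        def p_loss_model(soil_p, erosion, runoff):
            return 0.01 * soil_p * erosion + 0.005 * soil_p * runoff  # placeholder

        n = 10_000
        soil_p  = rng.normal(50.0, 5.0, n)     # mg/kg, assumed measurement error
        erosion = rng.lognormal(0.0, 0.3, n)   # t/ha, skewed input error
        runoff  = rng.normal(10.0, 1.5, n)     # cm

        pred = p_loss_model(soil_p, erosion, runoff)
        print(np.percentile(pred, [5, 50, 95]))   # prediction uncertainty bounds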

  10. Sorption indices to estimate risk of soil phosphorus loss in the Rathbun Lake Watershed, Iowa

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To rank and better understand the risk of P loss from potentially erodible soil materials in the Mollisol-dominated watershed of Rathbun Lake in southern Iowa, we sampled seven representative soil materials at four floodplain sites. We compared the samples by using a variety of characteristics and i...

  11. Wind storm loss estimations in the Canton of Vaud (Western Switzerland)

    NASA Astrophysics Data System (ADS)

    Etienne, C.; Beniston, M.

    2012-12-01

    A storm loss model first developed for Germany is applied to the much smaller geographic area of the canton of Vaud, in Western Switzerland. 24 major wind storms that struck the region during the period 1990-2010 are analysed, and outputs are compared to loss observations provided by an insurance company. Model inputs include population data and daily maximum wind speeds from weather stations. These measured wind speeds are regionalised across the canton of Vaud using different methods, either basic interpolation techniques from Geographic Information Systems (GIS) or an existing extreme wind speed map of Switzerland whose values are used as thresholds. A third method considers wind power, integrating wind speeds temporally over storm duration to calculate losses. Outputs show that the model leads to similar results for all methods, with Pearson's correlation and Spearman's rank coefficients of roughly 0.7. Bootstrap techniques are applied to test the model's robustness. Impacts of population growth and possible changes in storminess under climate change are also examined for this region, showing large shifts in economic losses in response to small increases in input wind speeds.
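
    Loss models of this family commonly relate losses to population times the cubed exceedance of daily maximum wind speed over a local high percentile, which explains the sensitivity to small wind-speed increases noted above. A sketch of that loss index, with the 98th-percentile threshold as an assumed choice:

        # Cubed-excess storm loss index: population-weighted sum of the cubed
        # exceedance of daily maximum wind speed over a local high percentile.
        # The 98th-percentile threshold and cubic exponent are conventional
        # choices in this model family, assumed here for illustration.
        import numpy as np

        def storm_loss_index(v_max: np.ndarray, v98: np.ndarray,
                             population: np.ndarray) -> float:
            """All arrays are per grid cell / municipality for one storm day."""
            excess = np.maximum(v_max / v98 - 1.0, 0.0)   # exceedance over threshold
            return float(np.sum(population * excess ** 3))

    Because the exceedance is cubed, a few percent more wind speed above the local threshold can multiply the index severalfold.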

  12. ESTIMATION OF DIFFUSION LOSSES WHEN SAMPLING DIESEL AEROSOL: A QUALITY ASSURANCE MEASURE

    EPA Science Inventory

    A fundamental component of the QA work for the assessment of instruments and sampling system performance was the investigation of particle losses in sampling lines. Along the aerosol sample pathway from its source to the collection media or measuring instrument, some nano-size p...

  13. N uptake effects on N loss in tile drainage as estimated by RZWQM

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The dual goals of meeting food demand while protecting the environment from excess reactive nitrogen may be one of our greatest ecological challenges. Therefore we quantify the effect of N uptake by a corn-soybean rotation on N loss in tile drainage (and other N budget components) using the agricult...

  14. Turkish Compulsory Earthquake Insurance (TCIP)

    NASA Astrophysics Data System (ADS)

    Erdik, M.; Durukal, E.; Sesetyan, K.

    2009-04-01

    Through a World Bank project, the government-sponsored Turkish Catastrophic Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (226 mostly small earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in southeast Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about US$90 for a 100-square-meter reinforced concrete building in the most hazardous zone, with a 2% deductible. The earthquake engineering shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings) with only a 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation and the sum insured) needs to be overhauled. Especially for large cities, models can be developed in which a unit's expected earthquake performance (and consequently its insurance premium) can be assessed on the basis of its location (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, the TCIP can in the future contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.
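
    The tariff arithmetic implied by these figures can be made concrete. The reading below (premium as rate times sum insured, deductible applied per claim, payout capped at the sum insured) is an interpretation of the abstract, and the helper names are hypothetical:

        # Sketch of the tariff arithmetic implied above (0.13% rate, 2%
        # deductible, US$90,000 coverage cap); figures are approximate.
        def tcip_premium(sum_insured_usd: float, rate: float = 0.0013) -> float:
            return rate * sum_insured_usd

        def tcip_claim(damage_usd: float, sum_insured_usd: float,
                       deductible: float = 0.02) -> float:
            payable = damage_usd - deductible * sum_insured_usd   # deductible first
            return max(0.0, min(payable, sum_insured_usd))        # capped at cover

        # A premium of ~US$90 at a 0.13% rate implies a sum insured of ~US$69,000:
        print(tcip_premium(69_000))          # -> ~89.7
        print(tcip_claim(20_000, 69_000))    # -> 18620.0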

  15. Strong Algerian earthquake strikes near capital city

    NASA Astrophysics Data System (ADS)

    Jacobs, Judy

    On 21 May 2003, a damaging earthquake of Mw 6.8 struck the region of Boumerdes, 40 km east of Algiers in northern Algeria (Figure 1). The main shock, which lasted ~36-40 s, had devastating effects: it claimed about 2300 victims, caused more than 11,450 injuries, and left about 200,000 people homeless. It destroyed or seriously damaged around 180,000 housing units and 6000 public buildings, with losses estimated at $5 billion. The main shock was widely felt within a radius of ~400 km in Algeria. To the north, the earthquake was felt in southeastern Spain, including the Balearic Islands, and also in Sardinia and southern France.

  16. Estimation of long-term Ca(2+) loss through outlet flow from an agricultural watershed and the influencing factors.

    PubMed

    Zhang, Wenzhao; Yin, Chunmei; Chen, Chunlan; Chen, Anlei; Xie, Xiaoli; Fu, Xingan; Hou, Haijun; Wei, Wenxue

    2016-06-01

    Soil Ca(2+) loss from agricultural lands through surface runoff can accelerate soil acidification and cause soil degradation, but the characteristics of Ca(2+) loss and its influencing factors at the watershed scale are unclear. This study was carried out in a watershed with various land uses in a subtropical region of China. The outlet flow was automatically monitored every 5 min year-round, and water samples were collected twice a year from 2001 to 2011. The concentrations of Ca(2+), Mg(2+), K(+), total nitrogen (TN), and total phosphorus (TP) in the water samples were measured. The dynamic losses of the nutrients through the outlet flow were estimated, and the relationships between the nutrient losses and rainfall intensity, as well as antecedent soil moisture, were investigated. The results showed large variations in nutrient concentrations and losses over the investigation period. The average concentrations of Ca(2+), Mg(2+), K(+), TN, and TP were 0.43, 0.08, 0.10, 0.19, and 0.003 mmol L(-1), respectively. The average Ca(2+) loss reached 1493.79 mol ha(-1) year(-1) and was several times higher than that of Mg(2+), K(+), and TN, and about 140 times higher than that of TP. Rainfall intensity had remarkable effects on Ca(2+) concentration (P < 0.01) and loss (P < 0.05) when it reached rainstorm level (50 mm day(-1)), while a quadratic relationship was observed between antecedent soil moisture and Ca(2+) concentration only when rainfall intensity was less than 50 mm day(-1). In summary, considerable amounts of Ca(2+) were lost from the watershed, and this may be one important contributor to the increasing acidification of acidic soils in subtropical regions. PMID:26898929
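
    The load estimation implied above is an integration of concentration times discharge over time. A sketch, assuming concentrations have already been interpolated onto the 5-min flow time base (the study's exact procedure is not described):

        # Annual nutrient load from 5-min discharge records and interpolated
        # concentrations; the interpolation step is an assumption.
        import numpy as np

        def annual_load_mol(flow_l_per_s: np.ndarray,
                            conc_mmol_per_l: np.ndarray,
                            dt_s: float = 300.0) -> float:
            """flow and concentration on the same 5-min time base; returns mol/yr.
            (L/s) * (mmol/L) * s = mmol; divide by 1000 for mol."""
            return float(np.sum(flow_l_per_s * conc_mmol_per_l) * dt_s / 1000.0)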

  17. An approach to estimating radiological risk of offsite release from a design basis earthquake for the Process Experimental Pilot Plant (PREPP)

    SciTech Connect

    Lucero, V.; Meale, B.M.; Reny, D.A.; Brown, A.N.

    1990-09-01

    In compliance with Department of Energy (DOE) Order 6430.1A, a seismic analysis was performed on DOE's Process Experimental Pilot Plant (PREPP), a facility for processing low-level and transuranic (TRU) waste. Because no hazard curves were available for the Idaho National Engineering Laboratory (INEL), DOE guidelines were used to estimate the frequency of the specified design-basis earthquake (DBE). A dynamic structural analysis of the building was performed using the DBE parameters, followed by a probabilistic risk assessment (PRA). For the PRA, the facility equipment was organized functionally so that top events for a representative event tree model could be determined. Building response spectra (calculated from the structural analysis), in conjunction with generic fragility data, were used to generate fragility curves for the PREPP equipment. Using these curves, failure probabilities for each top event were calculated. These probabilities were integrated into the event tree model, and accident sequences and their respective probabilities were calculated through quantification. By combining the sequence failure probabilities with a transport analysis of the estimated airborne source term from a DBE, onsite and offsite consequences were calculated. The results of the comprehensive analysis substantiated the ability of the PREPP facility to withstand a DBE with negligible consequences (i.e., the estimated release was within personnel and environmental dose guidelines). 57 refs., 19 figs., 20 tabs.
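
    The quantification chain described, fragility curves feeding top-event probabilities feeding sequence products, can be sketched compactly. The lognormal fragility form, the two-event sequence, and all numbers below are illustrative, not the PREPP model:

        # Lognormal fragility gives each top event's failure probability at the
        # DBE demand; a sequence probability is the product along its branches.
        from math import log
        from statistics import NormalDist

        def p_fail(demand_g: float, median_capacity_g: float, beta: float) -> float:
            """Lognormal fragility: P(failure | seismic demand)."""
            return NormalDist().cdf(log(demand_g / median_capacity_g) / beta)

        pA = p_fail(0.24, 0.9, 0.45)     # top event A (e.g., structural), illustrative
        pB = p_fail(0.24, 1.5, 0.50)     # top event B (e.g., confinement), illustrative
        p_release = pA * pB              # one accident sequence: A and B both fail
        print(pA, pB, p_release)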

  18. Novel procedure for estimating endogenous losses and measurement of apparent and true digestibility of phosphorus by growing pigs.

    PubMed

    Petersen, G I; Stein, H H

    2006-08-01

    An experiment was conducted to evaluate a novel procedure for estimating endogenous losses of P and for measuring the apparent total tract digestibility (ATTD) and true total tract digestibility (TTTD) of P in 5 inorganic P sources fed to growing pigs. The P sources were dicalcium phosphate (DCP), monocalcium phosphate (MCP) with 50% purity (MCP50), MCP with 70% purity (MCP70), MCP with 100% purity (MCP100), and monosodium phosphate (MSP). A gelatin-based, P-free basal diet was formulated and used to estimate endogenous losses of P. Five P-containing diets were formulated by adding 0.20% total P from each of the inorganic P sources to the basal diet. A seventh diet was formulated by adding 0.16% P from MCP70 to the basal diet. All diets were fed to 7 growing pigs in a 7 x 7 Latin square design, and urine and feces were collected during 5 d of each period. The endogenous loss of P was estimated as 139 +/- 18 mg/kg of DMI. The ATTD of P in MSP was greater (P < 0.05) than in DCP, MCP50, and MCP70 (91.9 vs. 81.5, 82.6, and 81.7%, respectively). In MSP, the TTTD of P was 98.2%. This value was greater (P < 0.05) than the TTTD of P in DCP, MCP50, and MCP70 (88.4, 89.5, and 88.6%, respectively). The ATTD and the TTTD for MCP70 were similar in diets formulated to contain 0.16 and 0.20% total P. Results from the current experiment demonstrate that a P-free diet may be used to measure endogenous losses of P in pigs. By adding inorganic P sources to this diet, the ATTD of P can be directly measured and the TTTD of P may be calculated for each source of P. PMID:16864873
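
    The digestibility arithmetic is straightforward once the endogenous correction is known. A sketch using the paper's 139 mg/kg DMI estimate, with hypothetical intake and fecal figures:

        # ATTD vs. TTTD of P: TTTD credits the fecal P of endogenous origin
        # back to the diet. The 139 mg/kg DMI endogenous loss is the paper's
        # estimate; the intake and fecal inputs below are hypothetical.
        def attd_pct(p_intake_g: float, p_fecal_g: float) -> float:
            return 100.0 * (p_intake_g - p_fecal_g) / p_intake_g

        def tttd_pct(p_intake_g: float, p_fecal_g: float, dmi_kg: float,
                     endogenous_mg_per_kg_dmi: float = 139.0) -> float:
            endogenous_g = endogenous_mg_per_kg_dmi * dmi_kg / 1000.0
            return 100.0 * (p_intake_g - (p_fecal_g - endogenous_g)) / p_intake_g

        # Hypothetical daily figures: 6.0 g P intake, 1.0 g fecal P, 2.5 kg DMI:
        print(round(attd_pct(6.0, 1.0), 1))        # -> 83.3
        print(round(tttd_pct(6.0, 1.0, 2.5), 1))   # -> 89.1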

  19. RF Path and Absorption Loss Estimation for Underwater Wireless Sensor Networks in Different Water Environments.

    PubMed

    Qureshi, Umair Mujtaba; Shaikh, Faisal Karim; Aziz, Zuneera; Shah, Syed M Zafi S; Sheikh, Adil A; Felemban, Emad; Qaisar, Saad Bin

    2016-01-01

    Underwater Wireless Sensor Network (UWSN) communication at high frequencies is extremely challenging. The intricacies presented by the underwater environment are far more compared to the terrestrial environment. The prime reason for such intricacies are the physical characteristics of the underwater environment that have a big impact on electromagnetic (EM) signals. Acoustics signals are by far the most preferred choice for underwater wireless communication. Because high frequency signals have the luxury of large bandwidth (BW) at shorter distances, high frequency EM signals cannot penetrate and propagate deep in underwater environments. The EM properties of water tend to resist their propagation and cause severe attenuation. Accordingly, there are two questions that need to be addressed for underwater environment, first what happens when high frequency EM signals operating at 2.4 GHz are used for communication, and second which factors affect the most to high frequency EM signals. To answer these questions, we present real-time experiments conducted at 2.4 GHz in terrestrial and underwater (fresh water) environments. The obtained results helped in studying the physical characteristics (i.e., EM properties, propagation and absorption loss) of underwater environments. It is observed that high frequency EM signals can propagate in fresh water at a shallow depth only and can be considered for a specific class of applications such as water sports. Furthermore, path loss, velocity of propagation, absorption loss and the rate of signal loss in different underwater environments are also calculated and presented in order to understand why EM signals cannot propagate in sea water and oceanic water environments. An optimal solk6ution for underwater communication in terms of coverage distance, bandwidth and nature of communication is presented, along with possible underwater applications of UWSNs at 2.4 GHz. PMID:27322263
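
    Why sea water is so much worse than fresh water at 2.4 GHz follows from the attenuation constant of a lossy medium. The sketch below uses the standard conduction-loss formula with typical textbook material values (assumptions); it also neglects the dielectric relaxation losses of water, which add further attenuation at this frequency:

        # Attenuation constant of a lossy dielectric:
        # alpha = w*sqrt(mu*eps/2 * (sqrt(1 + (sigma/(w*eps))^2) - 1)),
        # converted to dB/m. Conductivities and eps_r are typical values.
        import math

        def attenuation_db_per_m(f_hz: float, sigma: float, eps_r: float) -> float:
            eps0, mu0 = 8.854e-12, 4e-7 * math.pi
            w = 2 * math.pi * f_hz
            eps = eps_r * eps0
            loss_tan = sigma / (w * eps)
            alpha = w * math.sqrt(mu0 * eps / 2 * (math.sqrt(1 + loss_tan**2) - 1))
            return 8.686 * alpha                       # nepers/m -> dB/m

        f = 2.4e9
        print(attenuation_db_per_m(f, sigma=0.01, eps_r=78))  # fresh water
        print(attenuation_db_per_m(f, sigma=4.0,  eps_r=78))  # sea water (far higher)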

  20. RF Path and Absorption Loss Estimation for Underwater Wireless Sensor Networks in Different Water Environments

    PubMed Central

    Qureshi, Umair Mujtaba; Shaikh, Faisal Karim; Aziz, Zuneera; Shah, Syed M. Zafi S.; Sheikh, Adil A.; Felemban, Emad; Qaisar, Saad Bin

    2016-01-01

    Underwater Wireless Sensor Network (UWSN) communication at high frequencies is extremely challenging. The intricacies presented by the underwater environment far exceed those of the terrestrial environment. The prime reason for these intricacies is the physical characteristics of the underwater environment, which have a large impact on electromagnetic (EM) signals. Acoustic signals are by far the most preferred choice for underwater wireless communication. Although high frequency signals have the luxury of large bandwidth (BW) at shorter distances, high frequency EM signals cannot penetrate and propagate deep in underwater environments: the EM properties of water resist their propagation and cause severe attenuation. Accordingly, there are two questions that need to be addressed for the underwater environment: first, what happens when high frequency EM signals operating at 2.4 GHz are used for communication, and second, which factors affect high frequency EM signals the most. To answer these questions, we present real-time experiments conducted at 2.4 GHz in terrestrial and underwater (fresh water) environments. The obtained results helped in studying the physical characteristics (i.e., EM properties, propagation and absorption loss) of underwater environments. It is observed that high frequency EM signals can propagate in fresh water at shallow depth only and can be co