Sample records for two-dimensional hazard estimation

  1. Induction Hazard Assessment: The Variability of Geoelectric Responses During Geomagnetic Storms Within Common Hazard Zones

    NASA Astrophysics Data System (ADS)

    Cuttler, S. W.; Love, J. J.; Swidinsky, A.

    2017-12-01

    Geomagnetic field data obtained through the INTERMAGNET program are convolved with four validated EarthScope USArray impedances to estimate the geoelectric variations throughout the duration of a geomagnetic storm. A four-day geomagnetic storm began on June 22, 2016, and was recorded at the Brandon (BRD), Manitoba and Fredericksburg (FRD), Virginia magnetic observatories. Two impedance tensors corresponding to each magnetic observatory produce extremely different responses, despite being within close geographical proximity. Estimated time series of the geoelectric field throughout the duration of the geomagnetic storm were calculated, providing an understanding of how the geoelectric field differs across small geographic distances within the same geomagnetic hazard zones derived from prior geomagnetic hazard assessment. We show that the geoelectric response of two sites within 200 km of one another can differ by up to two orders of magnitude (4245 mV/km at one location and 38 mV/km at another location 125 km away). In addition, we compare these results with estimations of the geoelectric field generated from synthetic 1-dimensional resistivity models commonly used to represent large geographic regions when assessing geomagnetically induced current (GIC) hazards. This comparison shows that estimations of the geoelectric field from these models differ greatly from estimations produced from EarthScope USArray sites (1205 mV/km in the 1D case and 4245 mV/km in the 3D case in one example). This study demonstrates that the application of uniform 1-dimensional resistivity models of the subsurface to wide geographic regions is insufficient to predict the geoelectric hazard at a given location. Furthermore, an evaluation of the 3-dimensional resistivity distribution at a given location is necessary to produce a reliable estimation of how the geoelectric field evolves over the course of a geomagnetic storm.
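
    The convolution described above amounts to multiplying the Fourier transform of the horizontal geomagnetic field by a magnetotelluric impedance tensor. The sketch below illustrates that step only, assuming a single frequency-independent 2x2 impedance and synthetic 1 s data; the helper name and all numbers are illustrative, and real USArray impedances vary with frequency and site.

    ```python
    # Minimal sketch (not the paper's code): estimate the geoelectric field E from
    # the geomagnetic field B by multiplying with an impedance tensor in the
    # frequency domain. A single frequency-independent 2x2 impedance is assumed
    # for simplicity.
    import numpy as np

    def estimate_geoelectric(bx, by, Z):
        """Return Ex, Ey (mV/km) from Bx, By (nT), given a 2x2 impedance Z in mV/km per nT."""
        n = len(bx)
        Bx, By = np.fft.rfft(bx), np.fft.rfft(by)
        Ex = Z[0, 0] * Bx + Z[0, 1] * By          # E(f) = Z(f) . B(f)
        Ey = Z[1, 0] * Bx + Z[1, 1] * By
        return np.fft.irfft(Ex, n), np.fft.irfft(Ey, n)

    # Synthetic storm-time magnetic variation sampled at 1 s (illustrative only)
    t = np.arange(0.0, 3600.0)
    bx = 50.0 * np.sin(2 * np.pi * t / 600.0)
    by = 20.0 * np.sin(2 * np.pi * t / 900.0)
    Z = np.array([[0.1, 1.2], [-1.2, 0.1]])       # hypothetical impedance, mV/km per nT
    ex, ey = estimate_geoelectric(bx, by, Z)
    print(f"peak |E| = {np.hypot(ex, ey).max():.1f} mV/km")
    ```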

  2. Two-dimensional fuzzy fault tree analysis for chlorine release from a chlor-alkali industry using expert elicitation.

    PubMed

    Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B

    2010-11-15

    The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of these hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from storage and filling facility of chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation. Copyright © 2010 Elsevier B.V. All rights reserved.
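
    As a rough illustration of how fuzzy basic-event probabilities propagate through fault-tree gates, the sketch below combines triangular fuzzy numbers with the usual AND (product) and OR (complement-product) rules. This is conventional fuzzy FTA under stated assumptions; the paper's TDFFTA additionally carries an expert-hesitation dimension not modelled here, and all event probabilities are made up.

    ```python
    # Minimal sketch of fuzzy fault tree gate arithmetic with triangular fuzzy
    # probabilities (low, mode, high). Basic-event values are hypothetical.
    import numpy as np

    def fuzzy_and(*events):
        """Top-event probability when ALL inputs must occur (component-wise product)."""
        return tuple(np.prod([e[i] for e in events]) for i in range(3))

    def fuzzy_or(*events):
        """Top-event probability when ANY input suffices: 1 - prod(1 - p)."""
        return tuple(1.0 - np.prod([1.0 - e[i] for e in events]) for i in range(3))

    # Hypothetical basic events for a chlorine release branch (illustrative numbers)
    valve_leak  = (1e-4, 5e-4, 1e-3)
    gasket_fail = (2e-4, 8e-4, 2e-3)
    alarm_fails = (1e-2, 5e-2, 1e-1)

    leak = fuzzy_or(valve_leak, gasket_fail)   # either leak path
    release = fuzzy_and(leak, alarm_fails)     # leak AND failed mitigation
    print("fuzzy top-event probability (low, mode, high):", release)
    ```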

  3. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a large amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  4. REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.

    PubMed

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a large amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.
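
    The folded-concave penalty discussed in the two records above is commonly taken to be SCAD. The sketch below implements the SCAD penalty and its derivative in the form given by Fan and Li (2001), which a coordinate-wise algorithm can use through a local linear approximation of the penalized partial likelihood; it is not the authors' implementation.

    ```python
    # Minimal sketch of the SCAD folded-concave penalty and its derivative.
    import numpy as np

    def scad_penalty(beta, lam, a=3.7):
        b = np.abs(beta)
        small = b <= lam
        mid = (b > lam) & (b <= a * lam)
        return np.where(small, lam * b,
               np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                        lam**2 * (a + 1) / 2))

    def scad_derivative(beta, lam, a=3.7):
        b = np.abs(beta)
        return lam * ((b <= lam) + np.maximum(a * lam - b, 0.0)
                      / ((a - 1) * lam) * (b > lam))

    coefs = np.array([0.05, 0.5, 2.0, 5.0])
    print(scad_penalty(coefs, lam=1.0))     # penalty flattens for large coefficients
    print(scad_derivative(coefs, lam=1.0))  # derivative shrinks to 0 beyond a*lam
    ```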

  5. Combined fluvial and pluvial urban flood hazard analysis: concept development and application to Can Tho city, Mekong Delta, Vietnam

    NASA Astrophysics Data System (ADS)

    Apel, Heiko; Martínez Trepat, Oriol; Nghia Hung, Nguyen; Thi Chinh, Do; Merz, Bruno; Viet Dung, Nguyen

    2016-04-01

    Many urban areas experience both fluvial and pluvial floods, because locations next to rivers are preferred settlement areas and the predominantly sealed urban surface prevents infiltration and facilitates surface inundation. The latter problem is enhanced in cities with insufficient or non-existent sewer systems. While there are a number of approaches to analyse either a fluvial or pluvial flood hazard, studies of a combined fluvial and pluvial flood hazard are hardly available. Thus this study aims to analyse a fluvial and a pluvial flood hazard individually, but also to develop a method for the analysis of a combined pluvial and fluvial flood hazard. This combined fluvial-pluvial flood hazard analysis is performed taking Can Tho city, the largest city in the Vietnamese part of the Mekong Delta, as an example. In this tropical environment the annual monsoon-triggered floods of the Mekong River can coincide with heavy local convective precipitation events, causing both fluvial and pluvial flooding at the same time. The fluvial flood hazard was estimated with a copula-based bivariate extreme value statistic for the gauge Kratie at the upper boundary of the Mekong Delta and a large-scale hydrodynamic model of the Mekong Delta. This provided the boundaries for 2-dimensional hydrodynamic inundation simulation for Can Tho city. The pluvial hazard was estimated by a peak-over-threshold frequency estimation based on local rain gauge data and a stochastic rainstorm generator. Inundation for all flood scenarios was simulated by a 2-dimensional hydrodynamic model implemented on a Graphics Processing Unit (GPU) for time-efficient flood propagation modelling. The combined fluvial-pluvial flood scenarios were derived by adding rainstorms to the fluvial flood events during the highest fluvial water levels. The probabilities of occurrence of the combined events were determined assuming independence of the two flood types and taking the seasonality and probability of coincidence into account. All hazards - fluvial, pluvial and combined - were accompanied by an uncertainty estimation taking into account the natural variability of the flood events. This resulted in probabilistic flood hazard maps showing the maximum inundation depths for a selected set of probabilities of occurrence, with maps showing the expectation (median) and the uncertainty by percentile maps. The results are critically discussed and their usage in flood risk management is outlined.
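
    The combination step rests on the stated independence assumption between the fluvial and pluvial drivers plus a seasonal coincidence window. A minimal sketch of that arithmetic is given below; the probabilities, the window length, and the uniform-in-time scaling of the pluvial probability are all assumptions for illustration, not values from the study.

    ```python
    # Minimal sketch of combining fluvial and pluvial event probabilities under
    # an independence assumption with a seasonal coincidence window.
    p_fluvial_annual = 1 / 50          # annual exceedance prob. of the fluvial scenario
    p_pluvial_annual = 1 / 20          # annual exceedance prob. of the rainstorm scenario
    coincidence_window_days = 30       # days per year when the fluvial peak can occur

    # Probability of at least one such rainstorm inside the window, assuming the
    # pluvial hazard is spread uniformly through the year (a simplification).
    p_rain_in_window = 1 - (1 - p_pluvial_annual) ** (coincidence_window_days / 365)

    p_combined = p_fluvial_annual * p_rain_in_window   # independence assumption
    print(f"combined annual probability ~ {p_combined:.2e} "
          f"(return period ~ {1 / p_combined:.0f} years)")
    ```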

  6. Geoelectric hazard assessment: the differences of geoelectric responses during magnetic storms within common physiographic zones

    NASA Astrophysics Data System (ADS)

    Cuttler, Stephen W.; Love, Jeffrey J.; Swidinsky, Andrei

    2018-03-01

    Geomagnetic field data obtained through the INTERMAGNET program are convolved with magnetotelluric surface impedance from four EarthScope USArray sites to estimate the geoelectric variations throughout the duration of a magnetic storm. The period from June 22, 2016, to June 25, 2016, is considered, which encompasses a magnetic storm of moderate size recorded at the Brandon, Manitoba and Fredericksburg, Virginia magnetic observatories over 3 days. Two impedance sites were chosen in each case which represent different responses while being within close geographic proximity and within the same physiographic zone. This study produces estimated time series of the geoelectric field throughout the duration of a magnetic storm, providing an understanding of how the geoelectric field differs across small geographic distances within the same physiographic zone. This study shows that the geoelectric response of two sites within 200 km of one another can differ by up to two orders of magnitude (4484 mV/km at one site and 41 mV/km at another site 125 km away). This study demonstrates that the application of uniform 1-dimensional conductivity models of the subsurface to wide geographic regions is insufficient to predict the geoelectric hazard at a given site. Consequently, an evaluation of the 3-dimensional conductivity distribution at a given location is necessary to produce a reliable estimation of how the geoelectric field evolves over the course of a magnetic storm.

  7. Geoelectric hazard assessment: the differences of geoelectric responses during magnetic storms within common physiographic zones

    USGS Publications Warehouse

    Cuttler, Stephen W.; Love, Jeffrey J.; Swidinsky, Andrei

    2018-01-01

    Geomagnetic field data obtained through the INTERMAGNET program are convolved with magnetotelluric surface impedance from four EarthScope USArray sites to estimate the geoelectric variations throughout the duration of a magnetic storm. The period from June 22, 2016, to June 25, 2016, is considered, which encompasses a magnetic storm of moderate size recorded at the Brandon, Manitoba and Fredericksburg, Virginia magnetic observatories over 3 days. Two impedance sites were chosen in each case which represent different responses while being within close geographic proximity and within the same physiographic zone. This study produces estimated time series of the geoelectric field throughout the duration of a magnetic storm, providing an understanding of how the geoelectric field differs across small geographic distances within the same physiographic zone. This study shows that the geoelectric response of two sites within 200 km of one another can differ by up to two orders of magnitude (4484 mV/km at one site and 41 mV/km at another site 125 km away). This study demonstrates that the application of uniform 1-dimensional conductivity models of the subsurface to wide geographic regions is insufficient to predict the geoelectric hazard at a given site. Consequently, an evaluation of the 3-dimensional conductivity distribution at a given location is necessary to produce a reliable estimation of how the geoelectric field evolves over the course of a magnetic storm.

  8. Combined fluvial and pluvial urban flood hazard analysis: method development and application to Can Tho City, Mekong Delta, Vietnam

    NASA Astrophysics Data System (ADS)

    Apel, H.; Trepat, O. M.; Hung, N. N.; Chinh, D. T.; Merz, B.; Dung, N. V.

    2015-08-01

    Many urban areas experience both fluvial and pluvial floods, because locations next to rivers are preferred settlement areas, and the predominantly sealed urban surface prevents infiltration and facilitates surface inundation. The latter problem is enhanced in cities with insufficient or non-existent sewer systems. While there are a number of approaches to analyse either fluvial or pluvial flood hazard, studies of combined fluvial and pluvial flood hazard are hardly available. Thus this study aims at the analysis of fluvial and pluvial flood hazard individually, but also at developing a method for the analysis of combined pluvial and fluvial flood hazard. This combined fluvial-pluvial flood hazard analysis is performed taking Can Tho city, the largest city in the Vietnamese part of the Mekong Delta, as an example. In this tropical environment the annual monsoon-triggered floods of the Mekong River can coincide with heavy local convective precipitation events causing both fluvial and pluvial flooding at the same time. Fluvial flood hazard was estimated with a copula-based bivariate extreme value statistic for the gauge Kratie at the upper boundary of the Mekong Delta and a large-scale hydrodynamic model of the Mekong Delta. This provided the boundaries for 2-dimensional hydrodynamic inundation simulation for Can Tho city. Pluvial hazard was estimated by a peak-over-threshold frequency estimation based on local rain gauge data, and a stochastic rainstorm generator. Inundation was simulated by a 2-dimensional hydrodynamic model implemented on a Graphics Processing Unit (GPU) for time-efficient flood propagation modelling. All hazards - fluvial, pluvial and combined - were accompanied by an uncertainty estimation considering the natural variability of the flood events. This resulted in probabilistic flood hazard maps showing the maximum inundation depths for a selected set of probabilities of occurrence, with maps showing the expectation (median) and the uncertainty by percentile maps. The results are critically discussed and ways for their usage in flood risk management are outlined.

  9. Risk Assessment Using the Three Dimensions of Probability (Likelihood), Severity, and Level of Control

    NASA Technical Reports Server (NTRS)

    Watson, Clifford C.

    2011-01-01

    Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by relative likelihood and severity of the residual risk. These matrices present a quick-look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes, while adding a new, third dimension, shown as the Z-axis, and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training.) 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall-pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting gives a visual confirmation of the relationship between causes and their controls.
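
    A minimal sketch of the three-axis representation described above: likelihood and severity on conventional 1-5 scales plus a level-of-control axis ordered by the hazard reduction precedence sequence. The simple product score and the example hazards are illustrative assumptions only, not a NASA scoring rule.

    ```python
    # Minimal sketch of extending a likelihood/severity risk matrix with a third
    # "level of control" axis. The product score is illustrative only.
    CONTROL_LEVELS = {
        1: "Eliminate risk through design",
        2: "Substitute less hazardous materials",
        3: "Install safety devices",
        4: "Install caution and warning devices",
        5: "Develop administrative controls",
        6: "Provide protective clothing and equipment",
    }

    def risk_score(likelihood: int, severity: int, control_level: int) -> int:
        """Likelihood and severity on 1-5 scales; control level 1 (best) to 6 (worst)."""
        return likelihood * severity * control_level

    hazards = [
        ("pressure vessel rupture", 2, 5, 1),   # controlled by design
        ("solvent exposure",        3, 3, 6),   # controlled only by PPE
    ]
    for name, l, s, c in hazards:
        print(f"{name}: score={risk_score(l, s, c)} (control: {CONTROL_LEVELS[c]})")
    ```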

  10. Risk Presentation Using the Three Dimensions of Likelihood, Severity, and Level of Control

    NASA Technical Reports Server (NTRS)

    Watson, Clifford

    2010-01-01

    Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by relative likelihood and severity of the residual risk. These matrices present a quick-look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes, while adding a new, third dimension, shown as the Z-axis, and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training.) 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall-pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting gives a visual confirmation of the relationship between causes and their controls.

  11. Neo-Deterministic Seismic Hazard Assessment at Watts Bar Nuclear Power Plant Site, Tennessee, USA

    NASA Astrophysics Data System (ADS)

    Brandmayr, E.; Cameron, C.; Vaccari, F.; Fasan, M.; Romanelli, F.; Magrin, A.; Vlahovic, G.

    2017-12-01

    Watts Bar Nuclear Power Plant (WBNPP) is located within the Eastern Tennessee Seismic Zone (ETSZ), the second most active natural seismic zone in the US east of the Rocky Mountains. The largest instrumental earthquakes in the ETSZ are M 4.6, although paleoseismic evidence supports events of M≥6.5. Events are mainly strike-slip and occur on steeply dipping planes at an average depth of 13 km. In this work, we apply the neo-deterministic seismic hazard assessment to estimate the potential seismic input at the plant site, which has been recently targeted by the Nuclear Regulatory Commission for a seismic hazard reevaluation. First, we perform a parametric test on some seismic source characteristics (i.e. distance, depth, strike, dip and rake) using a one-dimensional regional bedrock model to define the most conservative scenario earthquakes. Then, for the selected scenario earthquakes, the estimate of the ground motion input at WBNPP is refined using a two-dimensional local structural model (based on the plant's operator documentation) with topography, thus looking for site amplification and different possible rupture processes at the source. WBNPP features a safe shutdown earthquake (SSE) design with PGA of 0.18 g and maximum spectral acceleration (SA, 5% damped) of 0.46 g (at periods between 0.15 and 0.5 s). Our results suggest that, although for most of the considered scenarios the PGA is relatively low, SSE values can be reached and exceeded in the case of the most conservative scenario earthquakes.

  12. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
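
    For intuition, the uncensored special case of length-biased estimation has a closed form: each observation is weighted by the inverse of its observed length (the classical Cox/Vardi estimator). The sketch below shows that case only; the paper's EM algorithms additionally handle right censoring, the monotone-hazard constraint, and Cox regression.

    ```python
    # Minimal sketch: inverse-length-weighted estimate of the unbiased
    # distribution function from length-biased, *uncensored* data.
    import numpy as np

    rng = np.random.default_rng(0)
    true = rng.weibull(1.5, size=200_000)                   # underlying population
    x = rng.choice(true, size=2_000, p=true / true.sum())   # length-biased sample

    def unbiased_cdf(x, t):
        w = 1.0 / x                                         # weight each subject by 1/length
        return np.sum(w * (x <= t)) / np.sum(w)

    for t in (0.5, 1.0, 2.0):
        print(f"F({t}) ~ {unbiased_cdf(x, t):.3f}  "
              f"(population: {np.mean(true <= t):.3f})")
    ```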

  13. 3D visualization of unsteady 2D airplane wake vortices

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Zheng, Z. C.

    1994-01-01

    Air flowing around the wing tips of an airplane forms horizontal tornado-like vortices that can be dangerous to following aircraft. The dynamics of such vortices, including ground and atmospheric effects, can be predicted by numerical simulation, allowing the safety and capacity of airports to be improved. In this paper, we introduce three-dimensional techniques for visualizing time-dependent, two-dimensional wake vortex computations, and the hazard strength of such vortices near the ground. We describe a vortex core tracing algorithm and a local tiling method to visualize the vortex evolution. The tiling method converts time-dependent, two-dimensional vortex cores into three-dimensional vortex tubes. Finally, a novel approach calculates the induced rolling moment on the following airplane at each grid point within a region near the vortex tubes and thus allows three-dimensional visualization of the hazard strength of the vortices. We also suggest ways of combining multiple visualization methods to present more information simultaneously.

  14. A one-dimensional model of solid-earth electrical resistivity beneath Florida

    USGS Publications Warehouse

    Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua

    2015-11-19

    An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10⁻⁵ to 10⁰ hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
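
    The transfer functions mentioned above follow from the standard plane-wave impedance recursion for a layered half-space. A minimal sketch is given below with hypothetical layer resistivities and thicknesses, not the Florida model's values.

    ```python
    # Minimal sketch: surface impedance, apparent resistivity, and phase of a
    # 1-D layered resistivity model via the standard plane-wave recursion.
    import numpy as np

    MU0 = 4e-7 * np.pi

    def surface_impedance(freqs, resistivities, thicknesses):
        """resistivities: ohm-m per layer (last = half-space); thicknesses: m."""
        w = 2 * np.pi * np.asarray(freqs)
        Z = np.sqrt(1j * w * MU0 * resistivities[-1])        # bottom half-space
        for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
            k = np.sqrt(1j * w * MU0 / rho)                  # propagation constant
            Z0 = 1j * w * MU0 / k                            # intrinsic impedance
            Z = Z0 * (Z + Z0 * np.tanh(k * h)) / (Z0 + Z * np.tanh(k * h))
        return Z

    freqs = np.logspace(-5, 0, 25)                           # 1e-5 to 1 Hz
    rho = [100.0, 10.0, 1000.0]                              # ohm-m (hypothetical)
    thick = [2_000.0, 20_000.0]                              # m (hypothetical)
    Z = surface_impedance(freqs, rho, thick)
    rho_a = np.abs(Z) ** 2 / (2 * np.pi * freqs * MU0)       # apparent resistivity
    phase = np.degrees(np.angle(Z))
    print(rho_a[:3], phase[:3])
    ```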

  15. Observed and forecast flood-inundation mapping application-A pilot study of an eleven-mile reach of the White River, Indianapolis, Indiana

    USGS Publications Warehouse

    Kim, Moon H.; Morlock, Scott E.; Arihood, Leslie D.; Kiesler, James L.

    2011-01-01

    Near-real-time and forecast flood-inundation mapping products resulted from a pilot study for an 11-mile reach of the White River in Indianapolis. The study was done by the U.S. Geological Survey (USGS), Indiana Silver Jackets hazard mitigation taskforce members, the National Weather Service (NWS), the Polis Center, and Indiana University, in cooperation with the City of Indianapolis, the Indianapolis Museum of Art, the Indiana Department of Homeland Security, and the Indiana Department of Natural Resources, Division of Water. The pilot project showed that it is technically feasible to create a flood-inundation map library by means of a two-dimensional hydraulic model, use a map from the library to quickly complete a moderately detailed local flood-loss estimate, and automatically run the hydraulic model during a flood event to provide the maps and flood-damage information through a Web graphical user interface. A library of static digital flood-inundation maps was created by means of a calibrated two-dimensional hydraulic model. Estimated water-surface elevations were developed for a range of river stages referenced to a USGS streamgage and NWS flood forecast point colocated within the study reach. These maps were made available through the Internet in several formats, including geographic information system, Keyhole Markup Language, and Portable Document Format. A flood-loss estimate was completed for part of the study reach by using one of the flood-inundation maps from the static library. The Federal Emergency Management Agency natural disaster-loss estimation program HAZUS-MH, in conjunction with local building information, was used to complete a level 2 analysis of flood-loss estimation. A Service-Oriented Architecture-based dynamic flood-inundation application was developed and was designed to start automatically during a flood, obtain near real-time and forecast data (from the colocated USGS streamgage and NWS flood forecast point within the study reach), run the two-dimensional hydraulic model, and produce flood-inundation maps. The application used local building data and depth-damage curves to estimate flood losses based on the maps, and it served inundation maps and flood-loss estimates through a Web-based graphical user interface.

  16. Variability of site response in Seattle, Washington

    USGS Publications Warehouse

    Hartzell, S.; Carver, D.; Cranswick, E.; Frankel, A.

    2000-01-01

    Ground motion from local earthquakes and the SHIPS (Seismic Hazards Investigation in Puget Sound) experiment is used to estimate site amplification factors in Seattle. Earthquake and SHIPS records are analyzed by two methods: (1) spectral ratios relative to a nearby site on Tertiary sandstone, and (2) a source/site spectral inversion technique. Our results show site amplifications between 3 and 4 below 5 Hz for West Seattle relative to Tertiary rock. These values are approximately 30% lower than amplification in the Duwamish Valley on artificial fill, but significantly higher than the calculated range of 2 to 2.5 below 5 Hz for the till-covered hills east of downtown Seattle. Although spectral amplitudes are only 30% higher in the Duwamish Valley compared to West Seattle, the duration of long-period ground motion is significantly greater on the artificial fill sites. Using a three-dimensional displacement response spectrum measure that includes the effects of ground-motion duration, values in the Duwamish Valley are 2 to 3 times greater than West Seattle. These calculations and estimates of site response as a function of receiver azimuth point out the importance of trapped surface-wave energy within the shallow, low-velocity, sedimentary layers of the Duwamish Valley. One-dimensional velocity models yield spectral amplification factors close to the observations for till sites east of downtown Seattle and the Duwamish Valley, but underpredict amplifications by a factor of 2 in West Seattle. A two-dimensional finite-difference model does equally well for the till sites and the Duwamish Valley and also yields duration estimates consistent with the observations for the Duwamish Valley. The two-dimensional model, however, still underpredicts amplification in West Seattle by up to a factor of 2. This discrepancy is attributed to 3D effects, including basin-edge-induced surface waves and basin-geometry-focusing effects, caused by the proximity of the Seattle thrust fault and the sediment-filled Seattle basin.

  17. Liver Stiffness Measured by Two-Dimensional Shear-Wave Elastography: Prognostic Value after Radiofrequency Ablation for Hepatocellular Carcinoma.

    PubMed

    Lee, Dong Ho; Lee, Jeong Min; Yoon, Jung-Hwan; Kim, Yoon Jun; Lee, Jeong-Hoon; Yu, Su Jong; Han, Joon Koo

    2018-03-01

    To evaluate the prognostic value of liver stiffness (LS) measured using two-dimensional (2D) shear-wave elastography (SWE) in patients with hepatocellular carcinoma (HCC) treated by radiofrequency ablation (RFA). The Institutional Review Board approved this retrospective study and informed consent was obtained from all patients. A total of 134 patients with up to 3 HCCs ≤5 cm who had undergone pre-procedural 2D-SWE prior to RFA treatment between January 2012 and December 2013 were enrolled. LS values were measured using real-time 2D-SWE before RFA on the procedural day. After a mean follow-up of 33.8 ± 9.9 months, we analyzed the overall survival after RFA using the Kaplan-Meier method and Cox proportional hazard regression model. The optimal cutoff LS value to predict overall survival was determined using the minimal p value approach. During the follow-up period, 22 patients died, and the estimated 1- and 3-year overall survival rates were 96.4 and 85.8%, respectively. LS measured by 2D-SWE was found to be a significant predictive factor for overall survival after RFA of HCCs, as was the presence of extrahepatic metastases. The optimal cutoff LS value for the prediction of overall survival was determined to be 13.3 kPa. In our study, 71 patients had LS values ≥13.3 kPa, and the estimated 3-year overall survival was 76.8% compared to 96.3% in 63 patients with LS values <13.3 kPa. This difference was statistically significant (hazard ratio = 4.30 [1.26-14.7]; p = 0.020). LS values measured by 2D-SWE were a significant predictive factor for overall survival after RFA for HCC.
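
    The survival analysis described (Kaplan-Meier estimation plus a Cox proportional hazards model with an LS cutoff) can be sketched with the lifelines Python package on toy data; the package choice and every number below are assumptions for illustration, not taken from the study.

    ```python
    # Minimal sketch of a Kaplan-Meier fit and a Cox model on toy data with a
    # binary "high liver stiffness" covariate (cutoff 13.3 kPa as in the record).
    import numpy as np
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    rng = np.random.default_rng(1)
    n = 134
    ls = rng.lognormal(mean=2.4, sigma=0.4, size=n)              # liver stiffness, kPa
    high = (ls >= 13.3).astype(int)
    time = rng.exponential(scale=np.where(high, 30.0, 90.0))     # months (toy)
    event = (rng.uniform(size=n) < 0.5).astype(int)              # 1 = death observed

    df = pd.DataFrame({"time": time, "event": event, "high_ls": high})

    km = KaplanMeierFitter()
    km.fit(df["time"], event_observed=df["event"], label="all patients")
    print(km.median_survival_time_)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    print(cph.hazard_ratios_)                                    # HR for high_ls group
    ```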

  18. Estimating survival probabilities by exposure levels: utilizing vital statistics and complex survey data with mortality follow-up.

    PubMed

    Landsman, V; Lou, W Y W; Graubard, B I

    2015-05-20

    We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data constructed from a nationally representative complex survey with linked mortality records are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for complex multistage sample design: (1) the leaving-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of a two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
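
    A schematic of the two-step logic with made-up numbers is given below: the overall hazard for an age-sex group comes from vital statistics, and survey-based prevalences and relative hazards split it across exposure levels so that the prevalence-weighted average reproduces the overall rate. This is only an illustration of the idea under those assumptions, not the paper's estimator or its variance methods.

    ```python
    # Schematic: split an overall hazard rate across exposure levels using
    # survey-estimated prevalences and relative hazards (illustrative numbers).
    overall_hazard = 0.012                 # deaths per person-year (vital statistics)
    prevalence = {"never": 0.55, "former": 0.25, "current": 0.20}    # survey weights
    relative_hazard = {"never": 1.0, "former": 1.4, "current": 2.3}  # survey estimate

    # Baseline hazard chosen so the prevalence-weighted mix equals the overall rate
    baseline = overall_hazard / sum(prevalence[k] * relative_hazard[k] for k in prevalence)
    by_exposure = {k: baseline * relative_hazard[k] for k in prevalence}

    for k, h in by_exposure.items():
        print(f"{k:8s} hazard = {h:.4f} per person-year")
    print("check:", sum(prevalence[k] * by_exposure[k] for k in prevalence))
    ```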

  19. Confidence intervals for the first crossing point of two hazard functions.

    PubMed

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are both evaluated by Monte-Carlo simulations and applied to two clinical trial datasets.
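
    A minimal sketch of the nonparametric idea, under simplifying assumptions (no censoring, a fixed bandwidth, no confidence interval): kernel-smooth the hazard in each arm and report the first time at which the smoothed hazards cross.

    ```python
    # Minimal sketch: Gaussian-kernel smoothing of Nelson-Aalen hazard increments
    # in two arms, then locate the first sign change of their difference.
    import numpy as np

    def smoothed_hazard(event_times, grid, bandwidth):
        t = np.sort(event_times)
        n = len(t)
        at_risk = n - np.arange(n)                    # risk set just before each event
        increments = 1.0 / at_risk                    # Nelson-Aalen jump sizes
        u = (grid[:, None] - t[None, :]) / bandwidth
        kernel = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
        return (kernel * increments).sum(axis=1) / bandwidth

    rng = np.random.default_rng(2)
    arm_a = rng.weibull(0.8, 300) * 2.0               # decreasing hazard
    arm_b = rng.weibull(1.8, 300) * 2.0               # increasing hazard
    grid = np.linspace(0.2, 4.0, 400)
    diff = smoothed_hazard(arm_a, grid, 0.4) - smoothed_hazard(arm_b, grid, 0.4)
    cross = grid[np.argmax(np.diff(np.sign(diff)) != 0)]
    print(f"estimated first crossing time ~ {cross:.2f}")
    ```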

  20. An estimator of the survival function based on the semi-Markov model under dependent censorship.

    PubMed

    Lee, Seung-Yeoun; Tsai, Wei-Yann

    2005-06-01

    Lee and Wolfe (Biometrics, vol. 54, pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator for the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator for the survivor function is proposed for the semi-Markov model under the dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results on the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models are reported.

  1. PSHA in Israel by using the synthetic ground motions from simulated seismicity: the modified SvE procedure

    NASA Astrophysics Data System (ADS)

    Meirova, T.; Shapira, A.; Eppelbaum, L.

    2018-05-01

    In this study, we updated and modified the SvE approach of Shapira and van Eck (Nat Hazards 8:201-215, 1993), which may be applied as an alternative to the conventional probabilistic seismic hazard assessment (PSHA) in Israel and other regions of low and moderate seismicity where measurements of strong ground motions are scarce. The new computational code SvE overcomes difficulties associated with the description of the earthquake source model and regional ground-motion scaling. In the modified SvE procedure, generating suites of regional ground motion is based on the extended two-dimensional source model of Motazedian and Atkinson (Bull Seism Soc Amer 95:995-1010, 2005a) and updated regional ground-motion scaling (Meirova and Hofstetter, Bull Earth Eng 15:3417-3436, 2017). The analytical approach of Mavroeidis and Papageorgiou (Bull Seism Soc Amer 93:1099-1131, 2003) is used to simulate near-fault acceleration, including near-fault effects. The comparison of hazard estimates obtained by using the conventional method implemented in the National Building Code for Design provisions for earthquake resistance of structures and the modified SvE procedure for rock-site conditions indicates a general agreement with some perceptible differences at the periods of 0.2 and 0.5 s. For the periods above 0.5 s, the SvE estimates are systematically greater and can increase by a factor of 1.6. For the soft-soil sites, the SvE hazard estimates at the period of 0.2 s are greater than those based on the CB2008 ground-motion prediction equation (GMPE) by a factor of 1.3-1.6. We suggest that the hazard estimates for the sites with soft-soil conditions calculated by the modified SvE procedure are more reliable than those which can be found by means of the conventional PSHA. This result agrees with the opinion that the use of a standard GMPE applying the NEHRP soil classification based on the VS30 parameter may be inappropriate for PSHA at many sites in Israel.

  2. Classification of Large-Scale Remote Sensing Images for Automatic Identification of Health Hazards: Smoke Detection Using an Autologistic Regression Classifier.

    PubMed

    Wolters, Mark A; Dean, C B

    2017-01-01

    Remote sensing images from Earth-orbiting satellites are a potentially rich data source for monitoring and cataloguing atmospheric health hazards that cover large geographic regions. A method is proposed for classifying such images into hazard and nonhazard regions using the autologistic regression model, which may be viewed as a spatial extension of logistic regression. The method includes a novel and simple approach to parameter estimation that makes it well suited to handling the large and high-dimensional datasets arising from satellite-borne instruments. The methodology is demonstrated on both simulated images and a real application to the identification of forest fire smoke.
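
    One common, simple route to fitting an autologistic model is maximum pseudolikelihood, which reduces to ordinary logistic regression with a neighbour-label covariate. The sketch below illustrates that model class on a simulated image; the paper proposes its own, different estimation approach suited to very large, high-dimensional images.

    ```python
    # Minimal sketch of autologistic classification via maximum pseudolikelihood:
    # logistic regression with a "sum of neighbouring labels" covariate.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    h = w = 60
    intensity = rng.normal(size=(h, w))
    intensity[20:40, 20:45] += 2.0                      # a bright "smoke plume"
    labels = (intensity > 1.0).astype(int)              # noisy ground-truth mask

    def neighbour_sum(z):
        s = np.zeros_like(z, dtype=float)
        s[1:, :] += z[:-1, :]; s[:-1, :] += z[1:, :]    # up/down neighbours
        s[:, 1:] += z[:, :-1]; s[:, :-1] += z[:, 1:]    # left/right neighbours
        return s

    X = np.column_stack([intensity.ravel(), neighbour_sum(labels).ravel()])
    model = LogisticRegression().fit(X, labels.ravel())
    print("coefficients (intensity, neighbour term):", model.coef_[0])
    ```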

  3. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
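
    Under the conditional-independence assumption, the empirical likelihood ratios estimated for each predictor class are multiplied cell by cell. A minimal sketch with hypothetical frequency ratios for slope class and lithology:

    ```python
    # Minimal sketch of the empirical likelihood-ratio combination under
    # conditional independence; all ratios and the prior odds are hypothetical.
    slope_lr = {          # P(slope class | landslide) / P(slope class | stable)
        "0-5 deg":  0.2,
        "5-15 deg": 1.1,
        ">15 deg":  3.5,
    }
    lith_lr = {           # same ratio for bedrock lithology classes
        "limestone": 0.6,
        "shale":     2.4,
        "alluvium":  1.0,
    }

    def relative_hazard(slope_class, lithology, prior_odds=0.05):
        """Posterior odds of 'landslide' for a grid cell, via multiplied likelihood ratios."""
        return prior_odds * slope_lr[slope_class] * lith_lr[lithology]

    for cell in [(">15 deg", "shale"), ("0-5 deg", "limestone")]:
        odds = relative_hazard(*cell)
        print(cell, f"posterior probability ~ {odds / (1 + odds):.3f}")
    ```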

  4. Two-dimensional signal processing with application to image restoration

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1974-01-01

    A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
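
    A minimal sketch of the core idea, under an assumed first-order autoregressive signal model: scan a 2-D array row by row into one dimension and run a scalar discrete Kalman filter against additive white noise. The paper instead derives the state model from the scanner-output autocorrelation, which is not reproduced here.

    ```python
    # Minimal sketch: scalar Kalman filtering of a scanned 1-D sequence under an
    # assumed AR(1) signal model in additive white noise.
    import numpy as np

    rng = np.random.default_rng(4)
    rows, cols = 32, 64
    a, q, r = 0.95, 0.05, 0.5            # AR(1) coefficient, process and noise variances

    # Synthetic AR(1) "image" scanned row by row into one dimension, plus noise
    signal = np.empty(rows * cols)
    s = 0.0
    for i in range(signal.size):
        s = a * s + rng.normal(scale=np.sqrt(q))
        signal[i] = s
    observed = signal + rng.normal(scale=np.sqrt(r), size=signal.size)

    x_hat, p = 0.0, 1.0                  # state estimate and its variance
    estimates = np.empty_like(observed)
    for i, z in enumerate(observed):
        x_pred, p_pred = a * x_hat, a * a * p + q          # predict
        k = p_pred / (p_pred + r)                          # Kalman gain
        x_hat = x_pred + k * (z - x_pred)                  # update
        p = (1 - k) * p_pred
        estimates[i] = x_hat

    print("noisy MSE   :", np.mean((observed - signal) ** 2))
    print("filtered MSE:", np.mean((estimates - signal) ** 2))
    ```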

  5. A finite area scheme for shallow granular flows on three-dimensional surfaces

    NASA Astrophysics Data System (ADS)

    Rauter, Matthias

    2017-04-01

    Shallow granular flow models have become a popular tool for the estimation of natural hazards, such as landslides, debris flows and avalanches. The shallowness of the flow allows the three-dimensional governing equations to be reduced to a quasi two-dimensional system. Three-dimensional flow fields are replaced by their depth-integrated two-dimensional counterparts, which yields a robust and fast method [1]. A solution for a simple shallow granular flow model, based on the so-called finite area method [3], is presented. The finite area method is an adaptation of the finite volume method [2] to two-dimensional curved surfaces in three-dimensional space. This method handles the three-dimensional basal topography in a simple way, making the model suitable for arbitrary (but mildly curved) topography, such as natural terrain. Furthermore, the implementation into the open source software OpenFOAM [4] is shown. OpenFOAM is a popular computational fluid dynamics application, designed so that the top-level code mimics the mathematical governing equations. This makes the code easy to read and extendable to more sophisticated models. Finally, some hints on how to get started with the code and how to extend the basic model will be given. I gratefully acknowledge the financial support by the OEAW project "beyond dense flow avalanches". Savage, S. B. & Hutter, K. 1989 The motion of a finite mass of granular material down a rough incline. Journal of Fluid Mechanics 199, 177-215. Ferziger, J. & Peric, M. 2002 Computational methods for fluid dynamics, 3rd edn. Springer. Tukovic, Z. & Jasak, H. 2012 A moving mesh finite volume interface tracking method for surface tension dominated interfacial fluid flow. Computers & fluids 55, 70-84. Weller, H. G., Tabor, G., Jasak, H. & Fureby, C. 1998 A tensorial approach to computational continuum mechanics using object-oriented techniques. Computers in physics 12(6), 620-631.

  6. 30 CFR 550.214 - What geological and geophysical (G&G) information must accompany the EP?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... already submitted it to the Regional Supervisor. (f) Shallow hazards assessment. For each proposed well, an assessment of any seafloor and subsurface geological and manmade features and conditions that may...-bearing reservoir showing the locations of proposed wells. (c) Two-dimensional (2-D) or three-dimensional...

  7. 30 CFR 550.214 - What geological and geophysical (G&G) information must accompany the EP?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... already submitted it to the Regional Supervisor. (f) Shallow hazards assessment. For each proposed well, an assessment of any seafloor and subsurface geological and manmade features and conditions that may...-bearing reservoir showing the locations of proposed wells. (c) Two-dimensional (2-D) or three-dimensional...

  8. 30 CFR 550.214 - What geological and geophysical (G&G) information must accompany the EP?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... already submitted it to the Regional Supervisor. (f) Shallow hazards assessment. For each proposed well, an assessment of any seafloor and subsurface geological and manmade features and conditions that may...-bearing reservoir showing the locations of proposed wells. (c) Two-dimensional (2-D) or three-dimensional...

  9. Exposure Models for the Prior Distribution in Bayesian Decision Analysis for Occupational Hygiene Decision Making

    PubMed Central

    Lee, Eun Gyung; Kim, Seung Won; Feigley, Charles E.; Harper, Martin

    2015-01-01

    This study introduces two semi-quantitative methods, Structured Subjective Assessment (SSA) and Control of Substances Hazardous to Health (COSHH) Essentials, in conjunction with two-dimensional Monte Carlo simulations for determining prior probabilities. Prior distribution using expert judgment was included for comparison. Practical applications of the proposed methods were demonstrated using personal exposure measurements of isoamyl acetate in an electronics manufacturing facility and of isopropanol in a printing shop. Applicability of these methods in real workplaces was discussed based on the advantages and disadvantages of each method. Although these methods could not be completely independent of expert judgments, this study demonstrated a methodological improvement in the estimation of the prior distribution for the Bayesian decision analysis tool. The proposed methods provide a logical basis for the decision process by considering determinants of worker exposure. PMID:23252451
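
    A minimal sketch of a two-dimensional Monte Carlo simulation of the kind mentioned above: the outer loop samples parameter uncertainty, the inner loop samples day-to-day variability, and the resulting exceedance fractions are discretized into prior category probabilities. All distributions and ranges are hypothetical, not taken from SSA or COSHH Essentials.

    ```python
    # Minimal sketch of a 2-D Monte Carlo for building an exposure prior:
    # outer loop = parameter uncertainty, inner loop = exposure variability.
    import numpy as np

    rng = np.random.default_rng(5)
    oel = 100.0                        # occupational exposure limit (e.g. ppm)
    n_outer, n_inner = 2_000, 500

    gm = rng.uniform(5.0, 40.0, n_outer)        # uncertainty about the geometric mean
    gsd = rng.uniform(1.5, 3.0, n_outer)        # uncertainty about the GSD

    exceed_frac = np.empty(n_outer)
    for i in range(n_outer):
        daily = rng.lognormal(np.log(gm[i]), np.log(gsd[i]), n_inner)  # variability
        exceed_frac[i] = np.mean(daily > oel)

    # Discretize the exceedance fractions into prior category probabilities,
    # e.g. <1%, 1-10%, 10-50%, >50% of days above the OEL
    bins = [0.0, 0.01, 0.10, 0.50, 1.0001]
    prior, _ = np.histogram(exceed_frac, bins=bins)
    print("prior probabilities:", prior / prior.sum())
    ```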

  10. Seismic hazard and risk assessment in the intraplate environment: The New Madrid seismic zone of the central United States

    USGS Publications Warehouse

    Wang, Z.

    2007-01-01

    Although the causes of large intraplate earthquakes are still not fully understood, they pose certain hazard and risk to societies. Estimating hazard and risk in these regions is difficult because of the lack of earthquake records. The New Madrid seismic zone is one such region where large and rare intraplate earthquakes (M = 7.0 or greater) pose significant hazard and risk. Many different definitions of hazard and risk have been used, and the resulting estimates differ dramatically. In this paper, seismic hazard is defined as the natural phenomenon generated by earthquakes, such as ground motion, and is quantified by two parameters: a level of hazard and its occurrence frequency or mean recurrence interval; seismic risk is defined as the probability of occurrence of a specific level of seismic hazard over a certain time and is quantified by three parameters: probability, a level of hazard, and exposure time. Probabilistic seismic hazard analysis (PSHA), a commonly used method for estimating seismic hazard and risk, derives a relationship between a ground motion parameter and its return period (hazard curve). The return period is not an independent temporal parameter but a mathematical extrapolation of the recurrence interval of earthquakes and the uncertainty of ground motion. Therefore, it is difficult to understand and use PSHA. A new method is proposed and applied here for estimating seismic hazard in the New Madrid seismic zone. This method provides hazard estimates that are consistent with the state of our knowledge and can be easily applied to other intraplate regions. © 2007 The Geological Society of America.
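
    The hazard/risk distinction above can be made concrete with the standard Poisson occurrence assumption, under which the risk over an exposure time t for a hazard level with mean recurrence interval tau is 1 - exp(-t/tau). A short worked example with illustrative numbers:

    ```python
    # Worked example: risk (probability of exceedance) over an exposure time for
    # a hazard level with a given mean recurrence interval, assuming Poisson
    # occurrence. Numbers are illustrative.
    import math

    tau = 500.0        # mean recurrence interval (years) of a given ground-motion level
    for t in (50, 100, 250):                     # exposure times (years)
        risk = 1.0 - math.exp(-t / tau)
        print(f"exposure {t:>3d} yr: probability of exceedance ~ {risk:.1%}")
    ```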

  11. Validation of a heteroscedastic hazards regression model.

    PubMed

    Wu, Hong-Dar Isaac; Hsieh, Fushing; Chen, Chen-Hsin

    2002-03-01

    A Cox-type regression model accommodating heteroscedasticity, with a power factor of the baseline cumulative hazard, is investigated for analyzing data with crossing hazards behavior. Since the approach of partial likelihood cannot eliminate the baseline hazard, an overidentified estimating equation (OEE) approach is introduced in the estimation procedure. Its by-product, a model checking statistic, is presented to test for the overall adequacy of the heteroscedastic model. Further, under the heteroscedastic model setting, we propose two statistics to test the proportional hazards assumption. Implementation of this model is illustrated in a data analysis of a cancer clinical trial.

  12. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
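
    A minimal sketch of the two-stage idea with L1 penalties in both stages, using scikit-learn's Lasso on toy data; the paper's framework (concave penalties, dimensions growing exponentially with the sample size, formal selection theory) is far more general than this illustration.

    ```python
    # Minimal sketch of two-stage regularized instrumental-variables estimation
    # with L1 penalties in both stages, on toy data.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(6)
    n, p_z, p_x = 200, 50, 30
    Z = rng.normal(size=(n, p_z))                    # instruments (e.g. genotypes)
    gamma = np.zeros((p_z, p_x))
    gamma[np.arange(p_x), np.arange(p_x)] = 1.0      # each covariate has its own instrument
    X = Z @ gamma + rng.normal(size=(n, p_x))        # endogenous covariates (expression)
    beta = np.zeros(p_x); beta[:2] = 2.0
    y = X @ beta + rng.normal(size=n)                # trait

    # Stage 1: predict each covariate from the instruments with a Lasso
    X_hat = np.column_stack([
        Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p_x)
    ])
    # Stage 2: regress the outcome on the fitted covariates with a Lasso
    stage2 = Lasso(alpha=0.1).fit(X_hat, y)
    print("nonzero coefficients:", np.flatnonzero(stage2.coef_))
    ```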

  13. Risk Assessment Using the Three Dimensions of Probability (Likelihood), Severity, and Level of Control

    NASA Technical Reports Server (NTRS)

    Watson, Clifford

    2010-01-01

    Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by relative likelihood and severity of the residual risk. These matrices present a quick-look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes, while adding a new, third dimension, shown as the Z-axis, and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training.) 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall-pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting gives a visual confirmation of the relationship between causes and their controls.

  14. Harvesting rockfall hazard evaluation parameters from Google Earth Street View

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agioutantis, Zacharias; Tripolitsiotis, Achilles; Steiakakis, Chrysanthos; Mertikas, Stelios

    2015-04-01

    Rockfall incidents along highways and railways prove extremely dangerous for properties, infrastructures and human lives. Several qualitative metrics such as the Rockfall Hazard Rating System (RHRS) and the Colorado Rockfall Hazard Rating System (CRHRS) have been established to estimate rockfall potential and provide risk maps in order to control and monitor rockfall incidents. The implementation of such metrics for efficient and reliable risk modeling requires accurate knowledge of multi-parametric attributes such as the geological, geotechnical, topographic parameters of the study area. The Missouri Rockfall Hazard Rating System (MORH RS) identifies the most potentially problematic areas using digital video logging for the determination of parameters like slope height and angle, face irregularities, etc. This study aims to harvest, in a semi-automated approach, geometric and qualitative measures from open source platforms that may provide 3-dimensional views of the areas of interest. More specifically, the Street View platform from Google Maps is hereby used to provide essential information that can be used towards 3-dimensional reconstruction of slopes along highways. The potential of image capturing along a programmable virtual route to provide the input data for photogrammetric processing is also evaluated. Moreover, qualitative characterization of the geological and geotechnical status, based on the Street View images, is performed. These attributes are then integrated to deliver a GIS-based rockfall hazard map. The 3-dimensional models are compared to actual photogrammetric measures in a rockfall-prone area in Crete, Greece, while in-situ geotechnical characterization is also used to compare and validate the hazard risk. This work is considered as the first step towards the exploitation of open source platforms to improve road safety and the development of an operational system where authorized agencies (i.e., civil protection) will be able to acquire near-real time hazard maps based on video images retrieved either by open source platforms, operational unmanned aerial vehicles, and/or simple video recordings from users. This work has been performed under the framework of the "Cooperation 2011" project ISTRIA (11_SYN_9_13989) funded from the Operational Program "Competitiveness and Entrepreneurship" (co-funded by the European Regional Development Fund (ERDF)) and managed by the Greek General Secretariat for Research and Technology.

  15. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.

    PubMed

    Rad, Kamiar Rahnama; Paninski, Liam

    2010-01-01

    Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
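
    A sketch in the same spirit, under a Gaussian approximation: fit a Gaussian process to spatially binned spike counts with scikit-learn, letting maximum marginal likelihood set the hyperparameters. The paper works with the point-process likelihood directly and uses far more efficient computations than this illustration.

    ```python
    # Minimal sketch: Gaussian-process smoothing of binned spike counts to
    # estimate a 2-D firing rate map with errorbars (Gaussian approximation).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(7)
    grid = np.linspace(0, 1, 20)
    xx, yy = np.meshgrid(grid, grid)
    coords = np.column_stack([xx.ravel(), yy.ravel()])

    # Synthetic place field: rate peaks near (0.3, 0.7); Poisson counts per bin
    true_rate = 5 + 20 * np.exp(-((xx - 0.3) ** 2 + (yy - 0.7) ** 2) / 0.02)
    counts = rng.poisson(true_rate).ravel().astype(float)

    kernel = 1.0 * RBF(length_scale=0.1) + WhiteKernel(noise_level=5.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(coords, counts)
    rate_map, rate_sd = gp.predict(coords, return_std=True)   # estimate + errorbars
    print("peak of estimated map at:", coords[np.argmax(rate_map)])
    ```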

  16. Modeling landslide recurrence in Seattle, Washington, USA

    USGS Publications Warehouse

    Salciarini, Diana; Godt, Jonathan W.; Savage, William Z.; Baum, Rex L.; Conversini, Pietro

    2008-01-01

    To manage the hazard associated with shallow landslides, decision makers need an understanding of where and when landslides may occur. A variety of approaches have been used to estimate the hazard from shallow, rainfall-triggered landslides, such as empirical rainfall threshold methods or probabilistic methods based on historical records. The wide availability of Geographic Information Systems (GIS) and digital topographic data has led to the development of analytic methods for landslide hazard estimation that couple steady-state hydrological models with slope stability calculations. Because these methods typically neglect the transient effects of infiltration on slope stability, results cannot be linked with historical or forecasted rainfall sequences. Estimates of the frequency of conditions likely to cause landslides are critical for quantitative risk and hazard assessments. We present results to demonstrate how a transient infiltration model coupled with an infinite slope stability calculation may be used to assess shallow landslide frequency in the City of Seattle, Washington, USA. A module called CRF (Critical RainFall) for estimating deterministic rainfall thresholds has been integrated in the TRIGRS (Transient Rainfall Infiltration and Grid-based Slope-Stability) model that combines a transient, one-dimensional analytic solution for pore-pressure response to rainfall infiltration with an infinite slope stability calculation. Input data for the extended model include topographic slope, colluvial thickness, initial water-table depth, material properties, and rainfall durations. This approach is combined with a statistical treatment of rainfall using a GEV (General Extreme Value) probabilistic distribution to produce maps showing the shallow landslide recurrence induced, on a spatially distributed basis, as a function of rainfall duration and hillslope characteristics.
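
    The stability side of a TRIGRS-type calculation is the infinite-slope factor of safety evaluated at depth for a given pressure head. A minimal sketch with hypothetical soil properties is shown below; the transient infiltration part (the pore-pressure response to a rainfall sequence) is not modelled and the pressure head is supplied directly.

    ```python
    # Minimal sketch of the infinite-slope factor of safety used in TRIGRS-type
    # models: FS = tan(phi)/tan(alpha)
    #             + (c - psi*gamma_w*tan(phi)) / (gamma_s*Z*sin(alpha)*cos(alpha)).
    # Soil properties and pressure heads are hypothetical.
    import math

    def factor_of_safety(slope_deg, depth_m, psi_m, c_kpa=4.0, phi_deg=33.0,
                         gamma_s=19.0, gamma_w=9.81):
        """FS of an infinite slope; psi_m is pressure head (m) at depth depth_m."""
        a = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        frictional = math.tan(phi) / math.tan(a)
        cohesive = (c_kpa - psi_m * gamma_w * math.tan(phi)) / (
            gamma_s * depth_m * math.sin(a) * math.cos(a))
        return frictional + cohesive

    for psi in (0.0, 0.5, 1.0):          # rising pore pressure during a storm
        print(f"psi = {psi:.1f} m -> FS = {factor_of_safety(35.0, 1.5, psi):.2f}")
    ```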

  17. Simulation of Water-Surface Elevations and Velocity Distributions at the U.S. Highway 13 Bridge over the Tar River at Greenville, North Carolina, Using One- and Two-Dimensional Steady-State Hydraulic Models

    USGS Publications Warehouse

    Wagner, Chad R.

    2007-01-01

    The use of one-dimensional hydraulic models currently is the standard method for estimating velocity fields through a bridge opening for scour computations and habitat assessment. Flood-flow contraction through bridge openings, however, is hydrodynamically two dimensional and often three dimensional. Although there is awareness of the utility of two-dimensional models to predict the complex hydraulic conditions at bridge structures, little guidance is available to indicate whether a one- or two-dimensional model will accurately estimate the hydraulic conditions at a bridge site. The U.S. Geological Survey, in cooperation with the North Carolina Department of Transportation, initiated a study in 2004 to compare one- and two-dimensional model results with field measurements at complex riverine and tidal bridges in North Carolina to evaluate the ability of each model to represent field conditions. The field data consisted of discharge and depth-averaged velocity profiles measured with an acoustic Doppler current profiler and surveyed water-surface profiles for two high-flow conditions. For the initial study site (U.S. Highway 13 over the Tar River at Greenville, North Carolina), the water-surface elevations and velocity distributions simulated by the one- and two-dimensional models showed appreciable disparity in the highly sinuous reach upstream from the U.S. Highway 13 bridge. Based on the available data from U.S. Geological Survey streamgaging stations and acoustic Doppler current profiler velocity data, the two-dimensional model more accurately simulated the water-surface elevations and the velocity distributions in the study reach, and contracted-flow magnitudes and direction through the bridge opening. To further compare the results of the one- and two-dimensional models, estimated hydraulic parameters (flow depths, velocities, attack angles, blocked flow width) for measured high-flow conditions were used to predict scour depths at the U.S. Highway 13 bridge by using established methods. Comparisons of pier-scour estimates from both models indicated that the scour estimates from the two-dimensional model were as much as twice the depth of the estimates from the one-dimensional model. These results can be attributed to higher approach velocities and the appreciable flow angles at the piers simulated by the two-dimensional model and verified in the field. Computed flood-frequency estimates of the 10-, 50-, 100-, and 500-year return-period floods on the Tar River at Greenville were also simulated with both the one- and two-dimensional models. The simulated water-surface profiles and velocity fields of the various return-period floods were used to compare the modeling approaches and provide information on what return-period discharges would result in road over-topping and(or) pressure flow. This information is essential in the design of new and replacement structures. The ability to accurately simulate water-surface elevations and velocity magnitudes and distributions at bridge crossings is essential in assuring that bridge plans balance public safety with the most cost-effective design. By compiling pertinent bridge-site characteristics and relating them to the results of several model-comparison studies, the framework for developing guidelines for selecting the most appropriate model for a given bridge site can be accomplished.

  18. Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors

    DTIC Science & Technology

    2001-03-01

    [Record excerpt consists of report front matter rather than an abstract: a list of tables (e.g., heats-of-vaporization parameters for two-liner phase transformation and liner sublimation), an acronym list (1-D, 2-D, the ALE3D and ALEGRA arbitrary Lagrangian-Eulerian computer codes), and a fragment describing simulations at case-liner bond areas and in the grain inner bore to explore the pre-ignition and ignition phases, as well as burning evolution, in rocket motor fast cookoff.]

  19. Application of quantitative microbial risk assessments for estimation of risk management metrics: Clostridium perfringens in ready-to-eat and partially cooked meat and poultry products as an example.

    PubMed

    Crouch, Edmund A; Labarre, David; Golden, Neal J; Kause, Janell R; Dearfield, Kerry L

    2009-10-01

    The U.S. Department of Agriculture, Food Safety and Inspection Service is exploring quantitative risk assessment methodologies to incorporate the use of the Codex Alimentarius' newly adopted risk management metrics (e.g., food safety objectives and performance objectives). It is suggested that use of these metrics would more closely tie the results of quantitative microbial risk assessments (QMRAs) to public health outcomes. By estimating the food safety objective (the maximum frequency and/or concentration of a hazard in a food at the time of consumption) and the performance objective (the maximum frequency and/or concentration of a hazard in a food at a specified step in the food chain before the time of consumption), risk managers will have a better understanding of the appropriate level of protection (ALOP) from microbial hazards for public health protection. Here we demonstrate a general methodology that allows identification of an ALOP and evaluation of corresponding metrics at appropriate points in the food chain. It requires a two-dimensional probabilistic risk assessment, the example used being the Monte Carlo QMRA for Clostridium perfringens in ready-to-eat and partially cooked meat and poultry products, with minor modifications to evaluate and abstract required measures. For demonstration purposes, the QMRA model was applied specifically to hot dogs produced and consumed in the United States. Evaluation of the cumulative uncertainty distribution for illness rate allows a specification of an ALOP that, with defined confidence, corresponds to current industry practices.
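
    The two-dimensional (nested) Monte Carlo structure that keeps uncertainty and variability separate can be sketched generically as below. The dose-response slope, contamination distribution, and loop sizes are invented placeholders, not inputs of the FSIS model; the sketch only shows the bookkeeping that yields a cumulative uncertainty distribution for the illness rate.

```python
# Hedged sketch of a generic two-dimensional (nested) Monte Carlo for a QMRA-style calculation.
import numpy as np

rng = np.random.default_rng(1)
n_uncertainty, n_variability = 200, 5000

illness_rates = []
for _ in range(n_uncertainty):                       # outer loop: parameter uncertainty
    r = rng.lognormal(mean=np.log(1e-9), sigma=0.5)  # uncertain dose-response slope (made up)
    mu = rng.normal(2.0, 0.3)                        # uncertain mean log10 concentration (made up)
    log10_dose = rng.normal(mu, 1.0, n_variability)  # inner loop: serving-to-serving variability
    p_ill = 1.0 - np.exp(-r * 10.0 ** log10_dose)    # exponential dose-response per serving
    illness_rates.append(p_ill.mean())

# Cumulative uncertainty distribution of the per-serving illness rate; e.g. an upper percentile
# could be compared against a target ALOP.
print(np.percentile(illness_rates, [5, 50, 95]))
```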

  20. Source parameters controlling the generation and propagation of potential local tsunamis along the Cascadia margin

    USGS Publications Warehouse

    Geist, E.; Yoshioka, S.

    1996-01-01

    The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional, finite-element models. The propagation of the resulting tsunami is modeled both near the coastline using the two-dimensional (x-t) Peregrine equations that include the effects of dispersion and near the source using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow-dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated from anomalous 'tsunami earthquakes' that rupture within the accretionary wedge in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.

  1. Perception of Air Pollution in the Jinchuan Mining Area, China: A Structural Equation Modeling Approach

    PubMed Central

    Li, Zhengtao; Folmer, Henk; Xue, Jianhong

    2016-01-01

    Studies on the perception of air pollution in China are very limited. The aim of this paper is to help to fill this gap by analyzing a cross-sectional dataset of 759 residents of the Jinchuan mining area, Gansu Province, China. The estimations suggest that perception of air pollution is two-dimensional. The first dimension is the perceived intensity of air pollution and the second is the perceived hazardousness of the pollutants. Both dimensions are influenced by environmental knowledge. Perceived intensity is furthermore influenced by socio-economic status and proximity to the pollution source; perceived hazardousness is influenced by socio-economic status, family health experience, family size and proximity to the pollution source. There are no reverse effects from perception on environmental knowledge. The main conclusion is that virtually all Jinchuan residents perceive high intensity and hazardousness of air pollution despite the fact that public information on air pollution and its health impacts is classified to a great extent. It is suggested that, to assist the residents to take appropriate preventive action, the local government should develop counseling and educational campaigns and institutionalize disclosure of air quality conditions. These programs should pay special attention to young residents who have limited knowledge of air pollution in the Jinchuan mining area. PMID:27455291

  2. A three-dimensional quality-guided phase unwrapping method for MR elastography

    NASA Astrophysics Data System (ADS)

    Wang, Huifang; Weaver, John B.; Perreard, Irina I.; Doyley, Marvin M.; Paulsen, Keith D.

    2011-07-01

    Magnetic resonance elastography (MRE) uses accumulated phases that are acquired at multiple, uniformly spaced relative phase offsets, to estimate harmonic motion information. Heavily wrapped phase occurs when the motion is large and unwrapping procedures are necessary to estimate the displacements required by MRE. Two unwrapping methods were developed and compared in this paper. The first method is a sequentially applied approach. The three-dimensional MRE phase image block for each slice was processed by two-dimensional unwrapping followed by a one-dimensional phase unwrapping approach along the phase-offset direction. This unwrapping approach generally works well for low noise data. However, there are still cases where the two-dimensional unwrapping method fails when noise is high. In this case, the baseline of the corrupted regions within an unwrapped image will not be consistent. Instead of separating the two-dimensional and one-dimensional unwrapping in a sequential approach, an interleaved three-dimensional quality-guided unwrapping method was developed to combine both the two-dimensional phase image continuity and one-dimensional harmonic motion information. The quality of one-dimensional harmonic motion unwrapping was used to guide the three-dimensional unwrapping procedures and it resulted in stronger guidance than in the sequential method. In this work, in vivo results generated by the two methods were compared.
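
    As a minimal illustration of the one-dimensional step along the phase-offset direction, numpy's unwrap recovers a harmonic phase whose amplitude exceeds π, provided successive samples differ by less than π. The spatial two-dimensional unwrapping and the quality guidance described above require substantially more machinery and are not shown; the number of offsets and the motion amplitude below are arbitrary.

```python
# Sketch of 1-D phase unwrapping along the phase-offset dimension (toy data, not MRE images).
import numpy as np

n_offsets = 8
offsets = np.arange(n_offsets)                               # uniformly spaced phase offsets
true_phase = 4.0 * np.sin(2 * np.pi * offsets / n_offsets)   # harmonic motion, amplitude > pi
wrapped = np.angle(np.exp(1j * true_phase))                  # what is effectively recorded

unwrapped = np.unwrap(wrapped)                               # 1-D unwrapping along the offset axis
print(np.round(wrapped, 2))
print(np.round(unwrapped, 2))                                # matches the true accumulated phase
```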

  3. Novel angle estimation for bistatic MIMO radar using an improved MUSIC

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Xiaofei; Chen, Han

    2014-09-01

    In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimations, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
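
    For readers unfamiliar with subspace methods, the following generic one-dimensional MUSIC pseudospectrum for a uniform linear array illustrates the idea the abstract builds on; it is not the proposed bistatic MIMO algorithm, and the array geometry, noise level, and source angles are arbitrary.

```python
# Generic 1-D MUSIC pseudospectrum sketch for a uniform linear array (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
m, snapshots, d = 8, 200, 0.5                 # sensors, snapshots, spacing in wavelengths
angles_true = np.deg2rad([-20.0, 15.0])

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta))

S = rng.standard_normal((len(angles_true), snapshots)) \
    + 1j * rng.standard_normal((len(angles_true), snapshots))
X = steering(angles_true) @ S
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / snapshots
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, :m - len(angles_true)]          # noise subspace (smallest eigenvalues)

scan = np.deg2rad(np.arange(-90.0, 90.0, 0.2))
spectrum = 1.0 / np.linalg.norm(En.conj().T @ steering(scan), axis=0) ** 2

peaks = np.where((spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:]))[0] + 1
top2 = peaks[np.argsort(spectrum[peaks])[-2:]]
print(np.sort(np.rad2deg(scan[top2])).round(1))   # estimated DOAs, close to -20 and 15 degrees
```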

  4. Numerical modelling of glacial lake outburst floods using physically based dam-breach models

    NASA Astrophysics Data System (ADS)

    Westoby, M. J.; Brasington, J.; Glasser, N. F.; Hambrey, M. J.; Reynolds, J. M.; Hassan, M. A. A. M.; Lowe, A.

    2015-03-01

    The instability of moraine-dammed proglacial lakes creates the potential for catastrophic glacial lake outburst floods (GLOFs) in high-mountain regions. In this research, we use a unique combination of numerical dam-breach and two-dimensional hydrodynamic modelling, employed within a generalised likelihood uncertainty estimation (GLUE) framework, to quantify predictive uncertainty in model outputs associated with a reconstruction of the Dig Tsho failure in Nepal. Monte Carlo analysis was used to sample the model parameter space, and morphological descriptors of the moraine breach were used to evaluate model performance. Multiple breach scenarios were produced by differing parameter ensembles associated with a range of breach initiation mechanisms, including overtopping waves and mechanical failure of the dam face. The material roughness coefficient was found to exert a dominant influence over model performance. The downstream routing of scenario-specific breach hydrographs revealed significant differences in the timing and extent of inundation. A GLUE-based methodology for constructing probabilistic maps of inundation extent, flow depth, and hazard is presented and provides a useful tool for communicating uncertainty in GLOF hazard assessment.
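
    The GLUE bookkeeping itself is simple and can be sketched with a toy one-parameter "breach model"; the model, skill measure, behavioural threshold, and observed peak discharge below are invented and stand in for the dam-breach and hydrodynamic models of the study.

```python
# Schematic GLUE weighting under invented assumptions (toy model, not dam-breach physics).
import numpy as np

rng = np.random.default_rng(3)
observed_peak = 1500.0                                   # "observed" breach peak discharge (m3/s)

roughness = rng.uniform(0.02, 0.08, 5000)                # Monte Carlo sample of parameter space
simulated_peak = 60.0 / roughness + rng.normal(0, 150, roughness.size)   # toy model response

skill = 1.0 - (simulated_peak - observed_peak) ** 2 / np.var(simulated_peak)  # skill score per run
behavioural = skill > 0.5                                # keep acceptable ("behavioural") runs
weights = skill[behavioural] / skill[behavioural].sum()  # GLUE likelihood weights

# Likelihood-weighted predictive quantiles of the peak discharge.
order = np.argsort(simulated_peak[behavioural])
cdf = np.cumsum(weights[order])
quantiles = np.interp([0.05, 0.5, 0.95], cdf, simulated_peak[behavioural][order])
print(int(behavioural.sum()), quantiles.round(0))
```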

  5. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for the nonparametric methods, especially at the end of follow-up for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
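
    To make the quantities concrete, the short sketch below evaluates the survival-function definition of the attributable risk over time under a proportional-hazards model with a Weibull baseline, together with the simpler prevalence-and-hazard-ratio formula mentioned above. All parameter values are invented and unrelated to the E3N analysis.

```python
# Toy calculation of AR(t) = [F(t) - F0(t)] / F(t) under proportional hazards (illustrative values).
import numpy as np

p_exposed, hazard_ratio = 0.3, 2.0
shape, scale = 1.2, 10.0                              # Weibull baseline hazard parameters

def s0(t):                                            # baseline (unexposed) survival function
    return np.exp(-(t / scale) ** shape)

t = np.array([2.0, 5.0, 10.0, 15.0])
f_unexposed = 1.0 - s0(t)                             # cumulative incidence if nobody were exposed
f_marginal = p_exposed * (1.0 - s0(t) ** hazard_ratio) + (1.0 - p_exposed) * f_unexposed
ar_t = (f_marginal - f_unexposed) / f_marginal        # time-dependent attributable risk
print(ar_t.round(3))

# Levin-type approximation from prevalence and hazard ratio alone (the "simpler method"):
print(round(p_exposed * (hazard_ratio - 1) / (1 + p_exposed * (hazard_ratio - 1)), 3))
```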

  6. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E

    2018-03-01

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.

  7. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of mean activity rate significantly affects the interval estimates of hazard functions only when the product of activity rate and the time period, for which the hazard is estimated, is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of cumulative distribution function of magnitude dominates the impact of the uncertainty of mean activity rate in the aggregated uncertainty of the hazard functions. Accordingly, the interval estimates with and without inclusion of the uncertainty of mean activity rate converge. The presented algorithm is generic and can be applied also to capture the propagation of uncertainty of estimates, which are parameters of a multiparameter function, onto this function.
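
    For orientation, the two hazard functions named above have simple closed forms once point estimates are plugged in: P = 1 - exp(-lambda * t * (1 - F(m))) and T = 1 / (lambda * (1 - F(m))). The sketch below shows only those point estimates under a Gutenberg-Richter magnitude model; the interval-estimation machinery of the paper (asymptotic normality, smoothed bootstrap) is not reproduced, and the activity rate and b-value are invented.

```python
# Point-estimate sketch of Poisson exceedance probability and mean return period (illustrative values).
import math

rate = 2.5          # mean activity rate, events per year above the completeness magnitude m0
b, m0 = 1.0, 3.0    # Gutenberg-Richter b-value and completeness magnitude
beta = b * math.log(10.0)

def exceedance_prob(m, years):
    tail = math.exp(-beta * (m - m0))            # P(M > m) under the (unbounded) G-R model
    return 1.0 - math.exp(-rate * years * tail)

def mean_return_period(m):
    return 1.0 / (rate * math.exp(-beta * (m - m0)))

print(round(exceedance_prob(5.0, 50.0), 3), round(mean_return_period(5.0), 1))
```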

  8. An Evaluation of the Measurement Requirements for an In-Situ Wake Vortex Detection System

    NASA Technical Reports Server (NTRS)

    Fuhrmann, Henri D.; Stewart, Eric C.

    1996-01-01

    Results of a numerical simulation are presented to determine the feasibility of estimating the location and strength of a wake vortex from imperfect in-situ measurements. These estimates could be used to provide information to a pilot on how to avoid a hazardous wake vortex encounter. An iterative algorithm based on the method of secants was used to solve the four simultaneous equations describing the two-dimensional flow field around a pair of parallel counter-rotating vortices of equal and constant strength. The flow field information used by the algorithm could be derived from measurements from flow angle sensors mounted on the wing-tip of the detecting aircraft and an inertial navigation system. The study determined the propagated errors in the estimated location and strength of the vortex which resulted from random errors added to theoretically perfect measurements. The results are summarized in a series of charts and a table which make it possible to estimate these propagated errors for many practical situations. The situations include several generator-detector airplane combinations, different distances between the vortex and the detector airplane, as well as different levels of total measurement error.
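
    A compact way to see the forward problem that such an estimator inverts is the two-dimensional velocity induced at a sensor by a pair of parallel counter-rotating point vortices of equal and opposite strength; the circulation, vortex spacing, and sensor position below are arbitrary illustrative values, not the study's simulation inputs.

```python
# Sketch of the 2-D velocity field of a counter-rotating vortex pair (illustrative values only).
import numpy as np

def induced_velocity(point, vortex_xy, gamma):
    """2-D point-vortex velocity: |V| = gamma / (2*pi*r), directed perpendicular to the radius."""
    dx, dy = point[0] - vortex_xy[0], point[1] - vortex_xy[1]
    r2 = dx * dx + dy * dy
    return np.array([-dy, dx]) * gamma / (2.0 * np.pi * r2)

gamma = 400.0                                   # circulation of each vortex, m^2/s (made up)
left, right = np.array([-15.0, 0.0]), np.array([15.0, 0.0])
sensor = np.array([5.0, -10.0])                 # e.g. a wing-tip flow-angle sensor location

v = induced_velocity(sensor, left, +gamma) + induced_velocity(sensor, right, -gamma)
print(v.round(3))                               # in-situ velocity an estimator would try to match
```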

  9. Estimating Sedimentation from an Erosion-Hazard Rating

    Treesearch

    R.M. Rice; S.A. Sherbin

    1977-01-01

    Data from two watersheds in northern California were used to develop an interpretation of the erosion hazard rating (EHR) of the Coast Forest District as amount of sedimentation. For the Caspar Creek Experimental Watershed (North Fork and South Fork), each EHR unit was estimated as equivalent to 0.0543 cubic yards per acre per year, on undisturbed forest. Experience...

  11. The Average Hazard Ratio - A Good Effect Measure for Time-to-event Endpoints when the Proportional Hazard Assumption is Violated?

    PubMed

    Rauch, Geraldine; Brannath, Werner; Brückner, Matthias; Kieser, Meinhard

    2018-05-01

    In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazard assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups or a composite time-to-first-event endpoint and several components are considered, the proportional hazard assumption usually does not simultaneously hold true for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called 'average hazard ratio'. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazard assumption holds true or not. There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, where the first relies on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards and to illustrate the pros and cons in application. We conduct a systematic comparative study based on Monte Carlo simulations and a real clinical trial example. Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights. In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards and the corresponding tests provide a power advantage over the common logrank test. As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications. Schattauer GmbH.

  12. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    USGS Publications Warehouse

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  13. Doubly Robust Additive Hazards Models to Estimate Effects of a Continuous Exposure on Survival.

    PubMed

    Wang, Yan; Lee, Mihye; Liu, Pengfei; Shi, Liuhua; Yu, Zhi; Abu Awad, Yara; Zanobetti, Antonella; Schwartz, Joel D

    2017-11-01

    The effect of an exposure on survival can be biased when the regression model is misspecified. Hazard difference is easier to use in risk assessment than hazard ratio and has a clearer interpretation in the assessment of effect modifications. We proposed two doubly robust additive hazards models to estimate the causal hazard difference of a continuous exposure on survival. The first model is an inverse probability-weighted additive hazards regression. The second model is an extension of the doubly robust estimator for binary exposures by categorizing the continuous exposure. We compared these with the marginal structural model and outcome regression with correct and incorrect model specifications using simulations. We applied doubly robust additive hazard models to the estimation of hazard difference of long-term exposure to PM2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 microns) on survival using a large cohort of 13 million older adults residing in seven states of the Southeastern United States. We showed that the proposed approaches are doubly robust. We found that each 1 μg/m³ increase in annual PM2.5 exposure was associated with a causal hazard difference in mortality of 8.0 × 10 (95% confidence interval 7.4 × 10, 8.7 × 10), which was modified by age, medical history, socioeconomic status, and urbanicity. The overall hazard difference translates to approximately 5.5 (5.1, 6.0) thousand deaths per year in the study population. The proposed approaches improve the robustness of the additive hazards model and produce a novel additive causal estimate of PM2.5 on survival and several additive effect modifications, including social inequality.

  14. Three-dimensional displays for natural hazards analysis, using classified Landsat Thematic Mapper digital data and large-scale digital elevation models

    NASA Technical Reports Server (NTRS)

    Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.

    1991-01-01

    Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.

  15. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    PubMed

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  16. Seismic hazard assessment: Issues and alternatives

    USGS Publications Warehouse

    Wang, Z.

    2011-01-01

    Seismic hazard and risk are two very important concepts in engineering design and other policy considerations. Although seismic hazard and risk have often been used interchangeably, they are fundamentally different. Furthermore, seismic risk is more important in engineering design and other policy considerations. Seismic hazard assessment is an effort by earth scientists to quantify seismic hazard and its associated uncertainty in time and space and to provide seismic hazard estimates for seismic risk assessment and other applications. Although seismic hazard assessment is more a scientific issue, it deserves special attention because of its significant implications for society. Two approaches, probabilistic seismic hazard analysis (PSHA) and deterministic seismic hazard analysis (DSHA), are commonly used for seismic hazard assessment. Although PSHA has been proclaimed as the best approach for seismic hazard assessment, it is scientifically flawed (i.e., the physics and mathematics that PSHA is based on are not valid). Use of PSHA could lead to either unsafe or overly conservative engineering design or public policy, each of which has dire consequences to society. On the other hand, DSHA is a viable approach for seismic hazard assessment even though it has been labeled as unreliable. The biggest drawback of DSHA is that the temporal characteristics (i.e., earthquake frequency of occurrence and the associated uncertainty) are often neglected. An alternative, seismic hazard analysis (SHA), utilizes earthquake science and statistics directly and provides a seismic hazard estimate that can be readily used for seismic risk assessment and other applications. © 2010 Springer Basel AG.

  17. A computationally fast, reduced model for simulating landslide dynamics and tsunamis generated by landslides in natural terrains

    NASA Astrophysics Data System (ADS)

    Mohammed, F.

    2016-12-01

    Landslide hazards such as fast-moving debris flows, slow-moving landslides, and other mass flows cause numerous fatalities, injuries, and damage. Landslide occurrences in fjords, bays, and lakes can additionally generate tsunamis with locally extremely high wave heights and runups. Two-dimensional depth-averaged models can successfully simulate the entire lifecycle of the three-dimensional landslide dynamics and tsunami propagation efficiently and accurately with the appropriate assumptions. Landslide rheology is defined using viscous fluids, visco-plastic fluids, and granular material to account for the possible landslide source materials. Saturated and unsaturated rheologies are further included to simulate debris flows, debris avalanches, mudflows, and rockslides. The depth-averaged two-dimensional models are obtained by reducing the fully three-dimensional Navier-Stokes equations, using the internal rheological definition of the landslide material and the water body together with appropriate scaling assumptions. The landslide and tsunami models are coupled to include the interaction between the landslide and the water body for tsunami generation. The reduced models are solved numerically with a fast semi-implicit finite-volume, shock-capturing based algorithm. The well-balanced, positivity preserving algorithm accurately accounts for wet-dry interface transition for the landslide runout, landslide-water body interface, and the tsunami wave flooding on land. The models are implemented as a General-Purpose computing on Graphics Processing Unit-based (GPGPU) suite of models, either coupled or run independently within the suite. The GPGPU implementation provides up to 1000 times speedup over a CPU-based serial computation. This enables simulations of multiple scenarios of hazard realizations that provide a basis for a probabilistic hazard assessment. The models have been successfully validated against experiments, past studies, and field data for landslides and tsunamis.

  18. Causal Mediation Analysis for the Cox Proportional Hazards Model with a Smooth Baseline Hazard Estimator.

    PubMed

    Wang, Wei; Albert, Jeffrey M

    2017-08-01

    An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.

  19. Aerodynamic and heat transfer analysis of the low aspect ratio turbine

    NASA Astrophysics Data System (ADS)

    Sharma, O. P.; Nguyen, P.; Ni, R. H.; Rhie, C. M.; White, J. A.

    1987-06-01

    The available two- and three-dimensional codes are used to estimate external heat loads and aerodynamic characteristics of a highly loaded turbine stage in order to demonstrate state-of-the-art methodologies in turbine design. By using data for a low aspect ratio turbine, it is found that a three-dimensional multistage Euler code gives good overall predictions for the turbine stage, yielding good estimates of the stage pressure ratio, mass flow, and exit gas angles. The nozzle vane loading distribution is well predicted by both the three-dimensional multistage Euler and three-dimensional Navier-Stokes codes. The vane airfoil surface Stanton number distributions, however, are underpredicted by both two- and three-dimensional boundary value analysis.

  20. The Torino Impact Hazard Scale

    NASA Astrophysics Data System (ADS)

    Binzel, Richard P.

    2000-04-01

    Newly discovered asteroids and comets have inherent uncertainties in their orbit determinations owing to the natural limits of positional measurement precision and the finite lengths of orbital arcs over which determinations are made. For some objects making predictable future close approaches to the Earth, orbital uncertainties may be such that a collision with the Earth cannot be ruled out. Careful and responsible communication between astronomers and the public is required for reporting these predictions, and a 0-10 point hazard scale, reported inseparably with the date of close encounter, is recommended as a simple and efficient tool for this purpose. The goal of this scale, endorsed as the Torino Impact Hazard Scale, is to place into context the level of public concern that is warranted for any close encounter event within the next century. Concomitant reporting of the close encounter date further conveys the sense of urgency that is warranted. The Torino Scale value for a close approach event is based upon both collision probability and the estimated kinetic energy (collision consequence), where the scale value can change as probability and energy estimates are refined by further data. On the scale, Category 1 corresponds to collision probabilities that are comparable to the current annual chance for any given size impactor. Categories 8-10 correspond to certain (probability >99%) collisions having increasingly dire consequences. While close approaches falling in Category 0 may be no cause for noteworthy public concern, there remains a professional responsibility to further refine orbital parameters for such objects and a figure of merit is suggested for evaluating such objects. Because impact predictions represent a multi-dimensional problem, there is no unique or perfect translation into a one-dimensional system such as the Torino Scale. These limitations are discussed.

  1. DOA estimation of noncircular signals for coprime linear array via locally reduced-dimensional Capon

    NASA Astrophysics Data System (ADS)

    Zhai, Hui; Zhang, Xiaofei; Zheng, Wang

    2018-05-01

    We investigate the issue of direction of arrival (DOA) estimation of noncircular signals for a coprime linear array (CLA). The noncircular property enhances the degrees of freedom and improves angle estimation performance, but it leads to a more complex angle ambiguity problem. To eliminate ambiguity, we theoretically prove that the actual DOAs of noncircular signals can be uniquely estimated by finding the coincident results from the two decomposed subarrays based on the coprimeness. We propose a locally reduced-dimensional (RD) Capon algorithm for DOA estimation of noncircular signals for CLA. The RD processing is used in the proposed algorithm to avoid a two-dimensional (2D) spectral peak search, and coprimeness is employed to avoid the global spectral peak search. The proposed algorithm requires only a local one-dimensional spectral peak search, and it has very low computational complexity. Furthermore, the proposed algorithm needs no prior knowledge of the number of sources. We also derive the Cramér-Rao bound of DOA estimation of noncircular signals in CLA. Numerical simulation results demonstrate the effectiveness and superiority of the algorithm.

  2. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.

  3. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.

  4. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    NASA Astrophysics Data System (ADS)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition, have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.

  5. The joint return period analysis of natural disasters based on monitoring and statistical modeling of multidimensional hazard factors.

    PubMed

    Liu, Xueqin; Li, Ning; Yuan, Shuai; Xu, Ning; Shi, Wenqin; Chen, Weibin

    2015-12-15

    As a random event, a natural disaster has a complex occurrence mechanism. The comprehensive analysis of multiple hazard factors is important in disaster risk assessment. In order to improve the accuracy of risk analysis and forecasting, the formation mechanism of a disaster should be considered in the analysis and calculation of multiple factors. Based on the consideration of the importance and deficiencies of multivariate analysis of dust storm disasters, 91 severe dust storm disasters in Inner Mongolia from 1990 to 2013 were selected as study cases in the paper. Main hazard factors from the 500-hPa atmospheric circulation system, near-surface meteorological system, and underlying surface conditions were selected to simulate and calculate the multidimensional joint return periods. After comparing the simulation results with actual dust storm events in 54 years, we found that the two-dimensional Frank Copula function showed better fitting results at the lower tail of hazard factors and that the three-dimensional Frank Copula function displayed better fitting results at the middle and upper tails of hazard factors. However, for dust storm disasters with short return periods, three-dimensional joint return period simulation shows no obvious advantage. If the return period is longer than 10 years, it shows significant advantages in extreme value fitting. Therefore, we suggest the multivariate analysis method may be adopted in forecasting and risk analysis of serious disasters with longer return periods, such as earthquakes and tsunamis. Furthermore, the exploration of this method laid the foundation for the prediction and warning of other natural disasters. Copyright © 2015 Elsevier B.V. All rights reserved.
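
    The joint return periods discussed above follow directly from the fitted copula. Below is a hedged numerical sketch with a Frank copula, where the dependence parameter, the marginal non-exceedance probabilities, and the mean inter-event time are invented for illustration rather than fitted to the dust storm data.

```python
# Sketch of bivariate "OR" and "AND" joint return periods from a Frank copula (illustrative values).
import math

def frank_copula(u, v, theta):
    """Frank copula C(u, v) for theta != 0."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    return -math.log(1.0 + num / (math.exp(-theta) - 1.0)) / theta

theta = 5.0          # dependence parameter (would be fitted to the hazard-factor data)
mu = 0.26            # mean inter-arrival time of events, in years (illustrative)
u, v = 0.9, 0.9      # marginal non-exceedance probabilities of two hazard factors

c = frank_copula(u, v, theta)
t_or = mu / (1.0 - c)                       # "OR" case: either factor exceeds its threshold
t_and = mu / (1.0 - u - v + c)              # "AND" case: both factors exceed their thresholds
print(round(t_or, 2), round(t_and, 2))
```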

  6. Comparison of the historical record of earthquake hazard with seismic-hazard models for New Zealand and the continental United States

    USGS Publications Warehouse

    Stirling, M.; Petersen, M.

    2006-01-01

    We compare the historical record of earthquake hazard experienced at 78 towns and cities (sites) distributed across New Zealand and the continental United States with the hazard estimated from the national probabilistic seismic-hazard (PSH) models for the two countries. The two PSH models are constructed with similar methodologies and data. Our comparisons show a tendency for the PSH models to slightly exceed the historical hazard in New Zealand and westernmost continental United States interplate regions, but show lower hazard than that of the historical record in the continental United States intraplate region. Factors such as non-Poissonian behavior, parameterization of active fault data in the PSH calculations, and uncertainties in estimation of ground-motion levels from historical felt intensity data for the interplate regions may have led to the higher-than-historical levels of hazard at the interplate sites. In contrast, the less-than-historical hazard for the remaining continental United States (intraplate) sites may be largely due to site conditions not having been considered at the intraplate sites, and uncertainties in correlating ground-motion levels to historical felt intensities. The study also highlights the importance of evaluating PSH models at more than one region, because the conclusions reached on the basis of a solely interplate or intraplate study would be very different.

  7. Automated estimation of individual conifer tree height and crown diameter via Two-dimensional spatial wavelet analysis of lidar data

    Treesearch

    Michael J. Falkowski; Alistair M.S. Smith; Andrew T. Hudak; Paul E. Gessler; Lee A. Vierling; Nicholas L. Crookston

    2006-01-01

    We describe and evaluate a new analysis technique, spatial wavelet analysis (SWA), to automatically estimate the location, height, and crown diameter of individual trees within mixed conifer open canopy stands from light detection and ranging (lidar) data. Two-dimensional Mexican hat wavelets, over a range of likely tree crown diameters, were convolved with lidar...
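
    A generic sketch of the scale-selection idea behind such wavelet analysis is given below (it is not the authors' SWA code): a toy canopy-height model is convolved with scale-normalized two-dimensional Mexican-hat wavelets over several candidate scales, and the best-responding scale at each crown centre serves as a crude crown-size proxy. The synthetic crowns, candidate scales, and noise level are invented.

```python
# Sketch of multi-scale 2-D Mexican-hat convolution on a synthetic canopy-height model.
import numpy as np
from scipy.signal import fftconvolve

def mexican_hat_2d(sigma, half_width):
    ax = np.arange(-half_width, half_width + 1, dtype=float)
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)
    return (1.0 - r2) * np.exp(-r2) / sigma ** 2    # scale-normalized so responses are comparable

rng = np.random.default_rng(4)
chm = np.zeros((60, 60))
for cx, cy, radius, height in [(15, 20, 3, 18.0), (40, 35, 5, 25.0)]:   # two synthetic crowns
    xx, yy = np.meshgrid(np.arange(60), np.arange(60), indexing="ij")
    crown = np.clip(1.0 - ((xx - cx) ** 2 + (yy - cy) ** 2) / radius ** 2, 0.0, None)
    chm = np.maximum(chm, height * crown)
chm += rng.normal(0.0, 0.2, chm.shape)              # small measurement noise

best_scale, best_resp = np.zeros_like(chm), np.full(chm.shape, -np.inf)
for sigma in (1.5, 2.5, 3.5, 4.5):                  # candidate crown radii (in pixels)
    resp = fftconvolve(chm, mexican_hat_2d(sigma, half_width=10), mode="same")
    better = resp > best_resp
    best_resp[better], best_scale[better] = resp[better], sigma

print(best_scale[15, 20], best_scale[40, 35])       # the larger crown selects a larger wavelet scale
```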

  8. Bivariate drought frequency analysis using the copula method

    NASA Astrophysics Data System (ADS)

    Mirabbasi, Rasoul; Fakheri-Fard, Ahmad; Dinpashoh, Yagob

    2012-04-01

    Droughts are major natural hazards with significant environmental and economic impacts. In this study, two-dimensional copulas were applied to the analysis of the meteorological drought characteristics of the Sharafkhaneh gauge station, located in the northwest of Iran. Two major drought characteristics, duration and severity, as defined by the standardized precipitation index, were abstracted from observed drought events. Since drought duration and severity exhibited a significant correlation and since they were modeled using different distributions, copulas were used to construct the joint distribution function of the drought characteristics. The parameter of copulas was estimated using the method of the Inference Function for Margins. Several copulas were tested in order to determine the best data fit. According to the error analysis and the tail dependence coefficient, the Galambos copula provided the best fit for the observed drought data. Some bivariate probabilistic properties of droughts, based on the derived copula-based joint distribution, were also investigated. These probabilistic properties can provide useful information for water resource planning and management.

  9. Linear Vector Quantisation and Uniform Circular Arrays based decoupled two-dimensional angle of arrival estimation

    NASA Astrophysics Data System (ADS)

    Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.

    2017-05-01

    Artificial neural network (ANN)-based models are efficient ways of source localisation. However, very large training sets are needed to precisely estimate two-dimensional direction of arrival (2D-DOA) with ANN models. In this paper, we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Linear Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.

  10. Vs30 mapping at selected sites within the Greater Accra Metropolitan Area

    NASA Astrophysics Data System (ADS)

    Nortey, Grace; Armah, Thomas K.; Amponsah, Paulina

    2018-06-01

    A large part of Accra is underlain by a complex distribution of shallow soft soils. Within seismically active zones, these soils hold the most potential to significantly amplify seismic waves and cause severe damage, especially to structures sited on soils lacking sufficient stiffness. This paper presents preliminary site classification for the Greater Accra Metropolitan Area of Ghana (GAMA), using experimental data from the two-dimensional (2-D) Multichannel Analysis of Surface Waves (MASW) technique. The dispersive characteristics of fundamental-mode Rayleigh-type surface waves were utilized for imaging the shallow subsurface layers (approx. up to 30 m depth) by estimating the 1D (depth) and 2D (depth and surface location) shear wave velocities at 5 selected sites. The average shear wave velocity for 30 m depth (Vs30), which is critical in evaluating the site response of the upper 30 m, was estimated and used for the preliminary site classification of the GAMA, as per NEHRP (National Earthquake Hazards Reduction Program). Based on the Vs30 values obtained in the study, two common site types, C and D, corresponding to shallow (>6 m but <30 m deep) weathered rock and deep (up to 30 m thick) stiff soils, respectively, have been identified within the study area. Lower velocity profiles are inferred for the residual soils (sandy to silty clays), derived from the Accraian Formation that lies mainly within Accra central. Stiffer soil sites lie to the north of Accra, and to the west near Nyanyano. The seismic response characteristics over the residual soils in the GAMA have become apparent using the MASW technique. An extensive site effect map and a more robust probabilistic seismic hazard analysis can now be efficiently built for the metropolis, by considering the site classes and design parameters obtained from this study.
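
    The Vs30 statistic itself is a simple travel-time average of the shear-wave velocity over the top 30 m, Vs30 = 30 / sum(d_i / Vs_i). The sketch below applies this standard formula to an invented layered profile; the layer thicknesses and velocities are placeholders, not MASW results from the study.

```python
# Sketch of the standard Vs30 travel-time average for a layered profile (illustrative values).
def vs30(thicknesses_m, velocities_ms):
    """Time-averaged shear-wave velocity over the top 30 m of a layered profile."""
    travel_time, depth = 0.0, 0.0
    for d, v in zip(thicknesses_m, velocities_ms):
        d = min(d, 30.0 - depth)          # only count material within the top 30 m
        travel_time += d / v
        depth += d
        if depth >= 30.0:
            break
    return 30.0 / travel_time

profile = vs30([4.0, 8.0, 18.0, 20.0], [180.0, 300.0, 450.0, 760.0])
print(round(profile, 1))   # NEHRP: 360-760 m/s is class C, 180-360 m/s is class D
```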

  11. Undersampling power-law size distributions: effect on the assessment of extreme natural hazards

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2014-01-01

    The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one to several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
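
    The undersampling effect is easy to reproduce with a pure Pareto toy experiment: short synthetic catalogs give noisy exponent estimates and erratic maximum event sizes. The sketch below uses the closed-form maximum-likelihood (Hill-type) estimator of the power-law exponent and deliberately does not attempt the tapered-Pareto corner-size estimation that the paper shows to be unstable; the exponent, threshold, and catalog lengths are illustrative.

```python
# Sketch: MLE of a pure Pareto exponent from synthetic "catalogs" of increasing length.
import numpy as np

rng = np.random.default_rng(5)
beta_true, x_min = 1.0, 1.0

for n in (25, 100, 1000, 10000):                       # increasingly long catalogs
    x = x_min * (1.0 - rng.uniform(size=n)) ** (-1.0 / beta_true)   # inverse-CDF Pareto sample
    beta_hat = n / np.sum(np.log(x / x_min))           # MLE of the power-law exponent
    print(n, round(beta_hat, 3), round(x.max(), 1))    # short catalogs: noisy exponent, erratic maximum
```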

  12. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method of estimating the motion of a machine are disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in the vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating the motion of the machine based on the first and second two-dimensional distributions.
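
    As a rough sketch of the core data structure, the code below bins unit surface normals into a spherical histogram, which is the usual definition of an extended Gaussian image. Normal estimation from the raw point cloud and the segment-matching/motion step of the patent are omitted; array names and the toy normals are assumptions.

```python
# Hedged sketch: an extended Gaussian image (EGI) as a spherical histogram of normals.
import numpy as np

def extended_gaussian_image(normals, n_theta=18, n_phi=36):
    """Bin unit normal vectors (N x 3) into an n_theta x n_phi spherical histogram."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))          # polar angle, [0, pi]
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)   # azimuth, [0, 2*pi)
    hist, _, _ = np.histogram2d(theta, phi, bins=[n_theta, n_phi],
                                range=[[0, np.pi], [0, 2 * np.pi]])
    return hist / hist.sum()                                 # normalized EGI

# Toy usage: EGIs of two "scans"; comparing them is the first step toward the
# segment-based two-dimensional distributions described above.
rng = np.random.default_rng(1)
egi_a = extended_gaussian_image(rng.normal(size=(5000, 3)))
egi_b = extended_gaussian_image(rng.normal(size=(5000, 3)))
print("L1 difference between EGIs:", np.abs(egi_a - egi_b).sum())
```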

  13. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
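
    To make the alternating idea concrete, the sketch below alternately maximizes a fit criterion over two mutually exclusive parameter blocks, each step holding the other block fixed, until the estimates stabilize. The toy nonlinear least-squares objective stands in for the flexible partial likelihood of the paper; all names and values are illustrative assumptions.

```python
# Hedged sketch of alternating (conditional) estimation on a toy objective.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.uniform(0, 3, 200)
y = 2.0 * np.exp(0.7 * x) + rng.normal(scale=0.3, size=x.size)

def loss(a, b):
    return np.sum((y - a * np.exp(b * x)) ** 2)

a, b = 1.0, 0.1                                            # starting values
for iteration in range(100):
    # Step 1: update block A (a) conditional on the current block B (b).
    a_new = minimize_scalar(lambda a_: loss(a_, b), bounds=(0, 10), method="bounded").x
    # Step 2: update block B conditional on the freshly updated block A.
    b_new = minimize_scalar(lambda b_: loss(a_new, b_), bounds=(0, 2), method="bounded").x
    converged = abs(a_new - a) < 1e-4 and abs(b_new - b) < 1e-4
    a, b = a_new, b_new
    if converged:
        break

print(f"stopped after {iteration + 1} alternating iterations: a = {a:.3f}, b = {b:.3f}")
```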

  14. Bayesian updating in a fault tree model for shipwreck risk assessment.

    PubMed

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M

    2017-07-15

    Shipwrecks containing oil and other hazardous substances have been deteriorating on the seabeds of the world for many years and are threatening to pollute the marine environment. The status of the wrecks and the potential volume of harmful substances present in the wrecks are affected by a multitude of uncertainties. Each shipwreck poses a unique threat, the nature of which is determined by the structural status of the wreck and possible damage resulting from hazardous activities that could potentially cause a discharge. Decision support is required to ensure the efficiency of the prioritisation process and the allocation of resources required to carry out risk mitigation measures. Whilst risk assessments can provide the requisite decision support, comprehensive methods that take into account key uncertainties related to shipwrecks are limited. The aim of this paper was to develop a method for estimating the probability of discharge of hazardous substances from shipwrecks. The method is based on Bayesian updating of generic information on the hazards posed by different activities in the surroundings of the wreck, with information on site-specific and wreck-specific conditions in a fault tree model. Bayesian updating is performed using Monte Carlo simulations for estimating the probability of a discharge of hazardous substances and formal handling of intrinsic uncertainties. An example application involving two wrecks located off the Swedish coast is presented. Results show the estimated probability of opening, discharge and volume of the discharge for the two wrecks and illustrate the capability of the model to provide decision support. Together with consequence estimations of a discharge of hazardous substances, the suggested model enables comprehensive and probabilistic risk assessments of shipwrecks to be made. Copyright © 2017 Elsevier B.V. All rights reserved.
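
    The sketch below illustrates the general pattern of Bayesian updating inside a small fault tree evaluated by Monte Carlo: Beta priors for the per-activity damage probabilities are updated with hypothetical wreck-specific observation counts, combined through an OR gate, and propagated to a discharge probability with uncertainty bounds. The two-event tree and all numbers are illustrative assumptions, not the paper's model of the Swedish wrecks.

```python
# Hedged sketch: conjugate Bayesian updating of basic-event probabilities,
# propagated through a tiny fault tree by Monte Carlo.
import numpy as np

rng = np.random.default_rng(3)
n_sim = 100_000

# Prior Beta(a, b) per hazardous activity, plus hypothetical site-specific
# observations: n_obs inspection-years, of which k_events led to hull damage.
priors = {"trawling":  dict(a=2, b=50, k_events=1, n_obs=10),
          "anchoring": dict(a=1, b=80, k_events=0, n_obs=10)}

samples = {}
for name, p in priors.items():
    # Conjugate update: Beta(a + k, b + n - k).
    samples[name] = rng.beta(p["a"] + p["k_events"],
                             p["b"] + p["n_obs"] - p["k_events"], n_sim)

# OR gate: the wreck opens if at least one activity causes damage.
p_open = 1.0 - np.prod([1.0 - s for s in samples.values()], axis=0)
p_discharge_given_open = rng.beta(5, 15, n_sim)   # structural-status term (illustrative)
p_discharge = p_open * p_discharge_given_open

print(f"P(discharge): mean = {p_discharge.mean():.4f}, "
      f"95% interval = {np.percentile(p_discharge, [2.5, 97.5]).round(4)}")
```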

  15. Simulating Geriatric Home Safety Assessments in a Three-Dimensional Virtual World

    ERIC Educational Resources Information Center

    Andrade, Allen D.; Cifuentes, Pedro; Mintzer, Michael J.; Roos, Bernard A.; Anam, Ramanakumar; Ruiz, Jorge G.

    2012-01-01

    Virtual worlds could offer inexpensive and safe three-dimensional environments in which medical trainees can learn to identify home safety hazards. Our aim was to evaluate the feasibility, usability, and acceptability of virtual worlds for geriatric home safety assessments and to correlate performance efficiency in hazard identification with…

  16. Methodology for time-domain estimation of storm time geoelectric fields using the 3-D magnetotelluric response tensors

    USGS Publications Warehouse

    Kelbert, Anna; Balch, Christopher; Pulkkinen, Antti; Egbert, Gary D; Love, Jeffrey J.; Rigler, E. Joshua; Fujii, Ikuko

    2017-01-01

    Geoelectric fields at the Earth's surface caused by magnetic storms constitute a hazard to the operation of electric power grids and related infrastructure. The ability to estimate these geoelectric fields in close to real time and provide local predictions would better equip the industry to mitigate negative impacts on their operations. Here we report progress toward this goal: development of robust algorithms that convolve a magnetic storm time series with a frequency domain impedance for a realistic three-dimensional (3-D) Earth, to estimate the local, storm time geoelectric field. Both frequency domain and time domain approaches are presented and validated against storm time geoelectric field data measured in Japan. The methods are then compared in the context of a real-time application.
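
    A minimal sketch of the frequency-domain approach follows: transform a magnetic time series, multiply by an impedance, and transform back to obtain a geoelectric time series. A scalar, synthetic impedance with assumed units stands in for the 3-D magnetotelluric response tensor used in the paper, and the input series is simulated rather than observatory data.

```python
# Hedged sketch: E(f) = Z(f) * B(f) via FFT, with a synthetic scalar impedance.
import numpy as np

t = np.arange(0, 4 * 86400, 60.0)                 # four days at one-minute cadence
rng = np.random.default_rng(4)
b = 50.0 * np.sin(2 * np.pi * t / 3600.0) + rng.normal(scale=5.0, size=t.size)  # nT

B = np.fft.rfft(b)
f = np.fft.rfftfreq(t.size, d=60.0)

# Illustrative half-space-like impedance: magnitude grows as sqrt(frequency).
Z = (1.0 + 1.0j) * 1.0e-3 * np.sqrt(np.maximum(f, 1e-12))   # (mV/km)/nT, assumed units

e = np.fft.irfft(Z * B, n=t.size)                 # geoelectric field estimate, mV/km
print(f"peak |E| over the interval: {np.abs(e).max():.2f} mV/km")
```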

  17. Methodology for time-domain estimation of storm time geoelectric fields using the 3-D magnetotelluric response tensors

    NASA Astrophysics Data System (ADS)

    Kelbert, Anna; Balch, Christopher C.; Pulkkinen, Antti; Egbert, Gary D.; Love, Jeffrey J.; Rigler, E. Joshua; Fujii, Ikuko

    2017-07-01

    Geoelectric fields at the Earth's surface caused by magnetic storms constitute a hazard to the operation of electric power grids and related infrastructure. The ability to estimate these geoelectric fields in close to real time and provide local predictions would better equip the industry to mitigate negative impacts on their operations. Here we report progress toward this goal: development of robust algorithms that convolve a magnetic storm time series with a frequency domain impedance for a realistic three-dimensional (3-D) Earth, to estimate the local, storm time geoelectric field. Both frequency domain and time domain approaches are presented and validated against storm time geoelectric field data measured in Japan. The methods are then compared in the context of a real-time application.

  18. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

    PubMed

    Wang, Lan; Kim, Yongdai; Li, Runze

    2013-10-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.

  19. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION

    PubMed Central

    Wang, Lan; Kim, Yongdai; Li, Runze

    2014-01-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843

  20. Flood-hazard mapping in Honduras in response to Hurricane Mitch

    USGS Publications Warehouse

    Mastin, M.C.

    2002-01-01

    The devastation in Honduras due to flooding from Hurricane Mitch in 1998 prompted the U.S. Agency for International Development, through the U.S. Geological Survey, to develop a country-wide systematic approach of flood-hazard mapping and a demonstration of the method at selected sites as part of a reconstruction effort. The design discharge chosen for flood-hazard mapping was the flood with an average return interval of 50 years, and this selection was based on discussions with the U.S. Agency for International Development and the Honduran Public Works and Transportation Ministry. A regression equation for estimating the 50-year flood discharge using drainage area and annual precipitation as the explanatory variables was developed, based on data from 34 long-term gaging sites. This equation, which has a standard error of prediction of 71.3 percent, was used in a geographic information system to estimate the 50-year flood discharge at any location on any river in the country. The flood-hazard mapping method was demonstrated at 15 selected municipalities. High-resolution digital-elevation models of the floodplain were obtained using an airborne laser-terrain mapping system. Field verification showed that the digital-elevation models had mean absolute errors ranging from -0.57 to 0.14 meter in the vertical dimension. From these models, water-surface elevation cross sections were obtained and used in a numerical, one-dimensional, steady-flow step-backwater model to estimate water-surface profiles corresponding to the 50-year flood discharge. From these water-surface profiles, maps of area and depth of inundation were created at 13 of the 15 selected municipalities. At La Lima, only the area and depth of inundation corresponding to the channel capacity in the city were mapped. At Santa Rosa de Aguán, no numerical model was created; the 50-year flood and the maps of area and depth of inundation there are based on the estimated 50-year storm tide.
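
    The regression described above can be sketched as a log-linear fit of the 50-year discharge on drainage area and annual precipitation. The gage values below are synthetic placeholders, not the 34 Honduran stations, and the fitted coefficients are not the published equation.

```python
# Hedged sketch: regional regression log10(Q50) ~ log10(area) + log10(precip).
import numpy as np

rng = np.random.default_rng(5)
area = rng.uniform(50, 5000, 34)          # drainage area, km^2 (synthetic)
precip = rng.uniform(900, 2800, 34)       # annual precipitation, mm (synthetic)
q50 = 0.5 * area**0.8 * precip**0.6 * 10**rng.normal(0, 0.2, 34)  # m^3/s (synthetic)

X = np.column_stack([np.ones(34), np.log10(area), np.log10(precip)])
coef, *_ = np.linalg.lstsq(X, np.log10(q50), rcond=None)
log_a, b, c = coef
print(f"Q50 ~= {10**log_a:.3g} * A^{b:.2f} * P^{c:.2f}")

# Applying the fitted equation at an ungaged site (illustrative inputs):
print(f"estimated Q50 at A = 800 km^2, P = 2000 mm: "
      f"{10**(log_a + b*np.log10(800) + c*np.log10(2000)):.0f} m^3/s")
```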

  1. Seismic Hazard Estimates Using Ill-defined Macroseismic Data at Site

    NASA Astrophysics Data System (ADS)

    Albarello, D.; Mucciarelli, M.

    A new approach is proposed to the seismic hazard estimate based on documentary data concerning local history of seismic effects. The adopted methodology allows for the use of "poor" data, such as the macroseismic ones, within a formally coherent approach that permits overcoming a number of problems connected to the forcing of available information in the frame of "standard" methodologies calibrated on the use of instrumental data. The use of the proposed methodology allows full exploitation of all the available information (that for many towns in Italy covers several centuries), making possible a correct use of macroseismic data characterized by different levels of completeness and reliability. As an application of the proposed methodology, seismic hazard estimates are presented for two towns located in Northern Italy: Bologna and Carpi.

  2. Incremental Value of Three-Dimensional Transesophageal Echocardiography over the Two-Dimensional Technique in the Assessment of a Thrombus in Transit through a Patent Foramen Ovale.

    PubMed

    Thind, Munveer; Ahmed, Mustafa I; Gok, Gulay; Joson, Marisa; Elsayed, Mahmoud; Tuck, Benjamin C; Townsley, Matthew M; Klas, Berthold; McGiffin, David C; Nanda, Navin C

    2015-05-01

    We report a case of a right atrial thrombus traversing a patent foramen ovale into the left atrium, where three-dimensional transesophageal echocardiography provided considerable incremental value over two-dimensional transesophageal echocardiography in its assessment. As well as allowing us to better spatially characterize the thrombus, three-dimensional transesophageal echocardiography provided a more quantitative assessment through estimation of total thrombus burden. © 2015, Wiley Periodicals, Inc.

  3. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
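
    The second step of the algorithm, maximizing a quasi-concave profile likelihood over the antimode by interval search, is sketched below with a toy profile function standing in for the partially maximized likelihood (the support reduction step is not reimplemented). A golden-section search is used in place of plain bisection; everything here is an illustrative assumption.

```python
# Hedged sketch: interval search for the maximizer of a quasi-concave profile.
import numpy as np

def profile_loglik(antimode):
    """Toy quasi-concave profile, peaking near antimode = 2.3."""
    return -(antimode - 2.3) ** 2 / (1.0 + abs(antimode - 2.3))

def maximize_quasiconcave(f, lo, hi, tol=1e-6):
    """Golden-section search for the maximum of a quasi-concave function on [lo, hi]."""
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):          # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                     # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

antimode_hat = maximize_quasiconcave(profile_loglik, 0.0, 10.0)
print(f"estimated antimode: {antimode_hat:.4f}")
```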

  4. Defining the baseline for inhibition concentration calculations for hormetic hazards.

    PubMed

    Bailer, A J; Oris, J T

    2000-01-01

    The use of endpoint estimates based on modeling inhibition of test organism response relative to a baseline response is an important tool in the testing and evaluation of aquatic hazards. In the presence of a hormetic hazard, the definition of the baseline response is not clear because non-zero levels of the hazard stimulate an enhanced response prior to inhibition. In the present study, the methodology and implications of how one defines a baseline response for inhibition concentration estimation in aquatic toxicity tests was evaluated. Three possible baselines were considered: the control response level; the pooling of responses, including controls and all concentration conditions with responses enhanced relative to controls; and, finally, the maximal response. The statistical methods associated with estimating inhibition relative to the first two baseline definitions were described and a method for estimating inhibition relative to the third baseline definition was derived. These methods were illustrated with data from a standard aquatic zooplankton reproductive toxicity test in which the number of young produced in three broods of a cladoceran exposed to effluent was modeled as a function of effluent concentration. Copyright 2000 John Wiley & Sons, Ltd.
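
    As a rough illustration of how the three baseline definitions change the endpoint, the sketch below computes an IC25 relative to the control mean, the pooled control-plus-stimulated mean, and the maximal response, by interpolating on the descending limb of a hormetic concentration-response curve. The data values and the simple interpolation are illustrative assumptions, not the paper's statistical procedure.

```python
# Hedged sketch: IC25 under three baseline definitions for a hormetic response.
import numpy as np

conc     = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])   # effluent concentration (%)
response = np.array([20.0, 24.0, 26.0, 22.0, 12.0, 4.0])  # mean young per female (toy)

baselines = {
    "control": response[0],
    "pooled":  response[response >= response[0]].mean(),   # control + enhanced groups
    "maximal": response.max(),
}

descending = slice(int(np.argmax(response)), None)          # hormetic peak onward
for name, base in baselines.items():
    target = 0.75 * base                                    # 25% inhibition
    # np.interp needs increasing x, so interpolate on the negated responses.
    ic25 = np.interp(-target, -response[descending], conc[descending])
    print(f"IC25 relative to {name} baseline: {ic25:.1f}%")
```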

  5. Inter-model analysis of tsunami-induced coastal currents

    NASA Astrophysics Data System (ADS)

    Lynett, Patrick J.; Gately, Kara; Wilson, Rick; Montoya, Luis; Arcas, Diego; Aytore, Betul; Bai, Yefei; Bricker, Jeremy D.; Castro, Manuel J.; Cheung, Kwok Fai; David, C. Gabriel; Dogan, Gozde Guney; Escalante, Cipriano; González-Vida, José Manuel; Grilli, Stephan T.; Heitmann, Troy W.; Horrillo, Juan; Kânoğlu, Utku; Kian, Rozita; Kirby, James T.; Li, Wenwen; Macías, Jorge; Nicolsky, Dmitry J.; Ortega, Sergio; Pampell-Manis, Alyssa; Park, Yong Sung; Roeber, Volker; Sharghivand, Naeimeh; Shelby, Michael; Shi, Fengyan; Tehranirad, Babak; Tolkova, Elena; Thio, Hong Kie; Velioğlu, Deniz; Yalçıner, Ahmet Cevdet; Yamazaki, Yoshiki; Zaytsev, Andrey; Zhang, Y. J.

    2017-06-01

    To help produce accurate and consistent maritime hazard products, the National Tsunami Hazard Mitigation Program organized a benchmarking workshop to evaluate the numerical modeling of tsunami currents. Thirteen teams of international researchers, using a set of tsunami models currently utilized for hazard mitigation studies, presented results for a series of benchmarking problems; these results are summarized in this paper. Comparisons focus on physical situations where the currents are shear and separation driven, and are thus de-coupled from the incident tsunami waveform. In general, we find that models of increasing physical complexity provide better accuracy, and that low-order three-dimensional models are superior to high-order two-dimensional models. Inside separation zones and in areas strongly affected by eddies, the magnitude of both model-data errors and inter-model differences can be the same as the magnitude of the mean flow. Thus, we make arguments for the need of an ensemble modeling approach for areas affected by large-scale turbulent eddies, where deterministic simulation may be misleading. As a result of the analyses presented herein, we expect that tsunami modelers now have a better awareness of their ability to accurately capture the physics of tsunami currents, and therefore a better understanding of how to use these simulation tools for hazard assessment and mitigation efforts.

  6. How well do we understand oil spill hazard mapping?

    NASA Astrophysics Data System (ADS)

    Sepp Neves, Antonio Augusto; Pinardi, Nadia

    2017-04-01

    In simple terms, we could describe the marine oil spill hazard as related to three main factors: the spill event itself, the spill trajectory, and the arrival and adsorption of oil to the shore, or beaching. Regarding the first factor, spill occurrence rates and magnitude distributions and their respective uncertainties have been estimated mainly relying on maritime casualty reports. Abascal et al. (2010) and Sepp Neves et al. (2015) demonstrated for the Prestige (Spain, 2002) and Jiyeh (Lebanon, 2006) spills that ensemble numerical oil spill simulations can generate reliable estimates of the most likely oil trajectories and impacted coasts. Although paramount to estimating the spill impacts on coastal resources, the third component of the oil spill hazard (i.e. oil beaching) is still a subject of discussion. Analysts have employed different methodologies to estimate the coastal component of the hazard relying, for instance, on the beaching frequency alone, the time during which a given coastal segment is subject to oil concentrations above a certain preset threshold, percentages of oil beached compared to the original spilled volume, and many others. Obviously, results are not comparable and sometimes not consistent with the present knowledge about the environmental impacts of oil spills. The observed inconsistency in the hazard mapping methodologies suggests that there is still a lack of understanding of the beaching component of the oil spill hazard itself. The careful statistical description of the beaching process could finally set a common ground for oil spill hazard mapping studies, as observed for other hazards such as earthquakes and landslides. This paper is the last of a series of efforts to standardize oil spill hazard and risk assessments through an ISO-compliant framework (IT-OSRA; see Sepp Neves et al. (2015)). We performed two large ensemble oil spill experiments addressing uncertainties in the spill characteristics and location, and in meteocean conditions, for two different areas (Algarve and Uruguay), aiming to quantify the hazard due to accidental (large volumes and rare events) and operational (frequent and usually involving small volumes) spills associated with maritime traffic. In total, over 60,000 240-h-long simulations were run and the statistical behavior of the beached concentrations was described. The concentration distributions for both study areas were successfully fit using a Gamma distribution, demonstrating the generality of our conclusions. The oil spill hazard and its uncertainties were quantified for accidental and operational events relying on the statistical distribution parameters. The hazard estimates were therefore comparable between areas and made it possible to identify priority coastal segments for protection and to rank sources of hazard.
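
    A minimal sketch of the distribution-fitting step follows: fit a Gamma distribution to beached-oil concentrations and read off an exceedance level. The synthetic concentrations and the 99th-percentile choice are placeholders for the ensemble output, not the study's results.

```python
# Hedged sketch: Gamma fit to beached concentrations and a high quantile.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
beached = rng.gamma(shape=0.8, scale=3.0, size=5000)     # t/km of coast (illustrative)

shape, loc, scale = stats.gamma.fit(beached, floc=0.0)   # fix location at zero
p99 = stats.gamma.ppf(0.99, shape, loc=loc, scale=scale)
print(f"fitted Gamma: shape = {shape:.2f}, scale = {scale:.2f}")
print(f"99th-percentile beached concentration: {p99:.1f} t/km")
```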

  7. On the estimation of sound speed in two-dimensional Yukawa fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semenov, I. L., E-mail: Igor.Semenov@dlr.de; Thomas, H. M.; Khrapak, S. A.

    2015-11-15

    The longitudinal sound speed in two-dimensional Yukawa fluids is estimated using the conventional hydrodynamic expression supplemented by appropriate thermodynamic functions proposed recently by Khrapak et al. [Phys. Plasmas 22, 083706 (2015)]. In contrast to the existing approaches, such as quasi-localized charge approximation (QLCA) and molecular dynamics simulations, our model provides a relatively simple estimate for the sound speed over a wide range of parameters of interest. At strong coupling, our results are shown to be in good agreement with the results obtained using the QLCA approach and those derived from the phonon spectrum for the triangular lattice. On the other hand, our model is also expected to remain accurate at moderate values of the coupling strength. In addition, the obtained results are used to discuss the influence of the strong coupling effects on the adiabatic index of two-dimensional Yukawa fluids.

  8. State-of-charge estimation in lithium-ion batteries: A particle filter approach

    NASA Astrophysics Data System (ADS)

    Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.

    2016-11-01

    The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The Pseudo two-dimensional model (P2D) is one model that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. Partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
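
    To make the filtering idea concrete, the sketch below runs a plain bootstrap particle filter on a one-dimensional toy state (a slowly discharging "state of charge" observed through a nonlinear voltage curve). The paper's actual contribution, sweeping particles independently in time and space through the P2D equations with a tether particle, is not reproduced; all dynamics, noise levels, and the voltage curve are assumptions.

```python
# Hedged sketch: bootstrap particle filter on a toy state-of-charge problem.
import numpy as np

rng = np.random.default_rng(7)
n_steps, n_particles = 200, 500
true_soc = 1.0
particles = rng.uniform(0.8, 1.0, n_particles)

def observe(soc):
    """Toy open-circuit-voltage-like observation function."""
    return 3.0 + 0.8 * soc + 0.1 * np.tanh(10 * (soc - 0.1))

for k in range(n_steps):
    true_soc = max(true_soc - 0.004, 0.0)
    y = observe(true_soc) + rng.normal(scale=0.01)               # noisy measurement

    # Propagate, weight by the measurement likelihood, and resample.
    particles = np.clip(particles - 0.004 + rng.normal(scale=0.002, size=n_particles), 0, 1)
    weights = np.exp(-0.5 * ((y - observe(particles)) / 0.01) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)   # multinomial resampling
    particles = particles[idx]

print(f"true SOC = {true_soc:.3f}, filter estimate = {particles.mean():.3f}")
```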

  9. Explosive hazard detection using MIMO forward-looking ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Shaw, Darren; Ho, K. C.; Stone, Kevin; Keller, James M.; Popescu, Mihail; Anderson, Derek T.; Luke, Robert H.; Burns, Brian

    2015-05-01

    This paper proposes a machine learning algorithm for subsurface object detection on multiple-input-multiple-output (MIMO) forward-looking ground-penetrating radar (FLGPR). By detecting hazards with FLGPR, standoff distances of up to tens of meters can be achieved, but at the cost of degraded performance due to high false alarm rates. The proposed system utilizes an anomaly detection prescreener to identify potential object locations. At each alarm location, multiple one-dimensional (1D) spectral features, two-dimensional (2D) spectral features, and log-Gabor statistic features are extracted. The ability of these features to reduce the number of false alarms and increase the probability of detection is evaluated for both co-polarizations present in the Akela MIMO array. Classification is performed by a Support Vector Machine (SVM) with lane-based cross-validation for training and testing. Class imbalance and optimized SVM kernel parameters are considered during classifier training.
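
    The classification stage can be sketched as an SVM scored with lane-based cross-validation, implemented here with scikit-learn's GroupKFold so that alarms from the same lane never appear in both training and test folds. Features, labels, and lane IDs are synthetic placeholders, and the kernel parameters are illustrative, not the tuned values from the paper.

```python
# Hedged sketch: SVM with group (lane) cross-validation on synthetic features.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(600, 40))                 # spectral + log-Gabor features per alarm
y = rng.integers(0, 2, 600)                    # 1 = buried target, 0 = false alarm
lanes = rng.integers(0, 6, 600)                # lane ID of each alarm

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10.0, gamma="scale", class_weight="balanced"))
scores = cross_val_score(clf, X, y, groups=lanes,
                         cv=GroupKFold(n_splits=6), scoring="roc_auc")
print("per-fold AUC:", scores.round(3), "mean:", scores.mean().round(3))
```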

  10. Estimating oxygen distribution from vasculature in three-dimensional tumour tissue

    PubMed Central

    Kannan, Pavitra; Warren, Daniel R.; Markelc, Bostjan; Bates, Russell; Muschel, Ruth; Partridge, Mike

    2016-01-01

    Regions of tissue which are well oxygenated respond better to radiotherapy than hypoxic regions by up to a factor of three. If these volumes could be accurately estimated, then it might be possible to selectively boost dose to radio-resistant regions, a concept known as dose-painting. While imaging modalities such as 18F-fluoromisonidazole positron emission tomography (PET) allow identification of hypoxic regions, they are intrinsically limited by the physics of such systems to the millimetre domain, whereas tumour oxygenation is known to vary over a micrometre scale. Mathematical modelling of microscopic tumour oxygen distribution therefore has the potential to complement and enhance macroscopic information derived from PET. In this work, we develop a general method of estimating oxygen distribution in three dimensions from a source vessel map. The method is applied analytically to line sources and quasi-linear idealized line source maps, and also applied to full three-dimensional vessel distributions through a kernel method and compared with oxygen distribution in tumour sections. The model outlined is flexible and stable, and can readily be applied to estimating likely microscopic oxygen distribution from any source geometry. We also investigate the problem of reconstructing three-dimensional oxygen maps from histological and confocal two-dimensional sections, concluding that two-dimensional histological sections are generally inadequate representations of the three-dimensional oxygen distribution. PMID:26935806
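
    The kernel idea can be sketched on a 2-D grid: convolve a binary vessel map with a radially decaying oxygen kernel to obtain a relative oxygen tension map. The paper works in three dimensions with a physically derived kernel; the exponential kernel, its 30-micrometre decay length, and the toy vessel map below are illustrative assumptions.

```python
# Hedged sketch: oxygen map as a convolution of a vessel map with a decay kernel.
import numpy as np
from scipy.signal import fftconvolve

grid = 256                                  # 1 pixel taken as roughly 1 micrometre (assumed)
rng = np.random.default_rng(9)
vessels = np.zeros((grid, grid))
vessels[rng.integers(0, grid, 40), rng.integers(0, grid, 40)] = 1.0   # vessel cross-sections

yy, xx = np.mgrid[-64:65, -64:65]
kernel = np.exp(-np.hypot(xx, yy) / 30.0)   # 30-pixel decay length (assumed)

oxygen = fftconvolve(vessels, kernel, mode="same")
oxygen /= oxygen.max()                      # normalize to a relative scale
print(f"fraction of tissue below 10% of maximum oxygen: {(oxygen < 0.1).mean():.2f}")
```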

  11. Estimating oxygen distribution from vasculature in three-dimensional tumour tissue.

    PubMed

    Grimes, David Robert; Kannan, Pavitra; Warren, Daniel R; Markelc, Bostjan; Bates, Russell; Muschel, Ruth; Partridge, Mike

    2016-03-01

    Regions of tissue which are well oxygenated respond better to radiotherapy than hypoxic regions by up to a factor of three. If these volumes could be accurately estimated, then it might be possible to selectively boost dose to radio-resistant regions, a concept known as dose-painting. While imaging modalities such as 18F-fluoromisonidazole positron emission tomography (PET) allow identification of hypoxic regions, they are intrinsically limited by the physics of such systems to the millimetre domain, whereas tumour oxygenation is known to vary over a micrometre scale. Mathematical modelling of microscopic tumour oxygen distribution therefore has the potential to complement and enhance macroscopic information derived from PET. In this work, we develop a general method of estimating oxygen distribution in three dimensions from a source vessel map. The method is applied analytically to line sources and quasi-linear idealized line source maps, and also applied to full three-dimensional vessel distributions through a kernel method and compared with oxygen distribution in tumour sections. The model outlined is flexible and stable, and can readily be applied to estimating likely microscopic oxygen distribution from any source geometry. We also investigate the problem of reconstructing three-dimensional oxygen maps from histological and confocal two-dimensional sections, concluding that two-dimensional histological sections are generally inadequate representations of the three-dimensional oxygen distribution. © 2016 The Authors.

  12. 30 CFR 250.244 - What geological and geophysical (G&G) information must accompany the DPP or DOCD?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MANAGEMENT, REGULATION, AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL AND GAS AND SULPHUR... depths of expected productive formations and the locations of proposed wells. (c) Two dimensional (2-D...-sections showing the depths of expected productive formations. (e) Shallow hazards report. A shallow...

  13. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  14. Estimating piecewise exponential frailty model with changing prior for baseline hazard function

    NASA Astrophysics Data System (ADS)

    Thamrin, Sri Astuti; Lawi, Armin

    2016-02-01

    Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates that influence the survival times. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of a parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, such a model usually cannot include all relevant variables, whether known or measurable, and the unexplained variation becomes important to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov Chain Monte Carlo method in computing the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are relatively sensitive to the choice of two different priors.
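
    The piecewise exponential building block is sketched below: a piecewise-constant hazard on fixed intervals and the survival function it implies. The interval cut points and hazard levels are illustrative, and the frailty term and MCMC estimation from the paper are not reproduced.

```python
# Hedged sketch: piecewise-constant hazard and the implied survival function.
import numpy as np

cuts = np.array([0.0, 50.0, 150.0, 400.0])         # interval boundaries (days)
lam  = np.array([0.004, 0.002, 0.001, 0.0005])     # hazard in each interval (1/day)

def cumulative_hazard(t):
    """H(t) for a piecewise-constant hazard; survival is S(t) = exp(-H(t))."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    edges = np.append(cuts, np.inf)
    exposure = np.clip(t[:, None], None, edges[1:]) - edges[:-1]   # time spent per interval
    return (np.clip(exposure, 0.0, None) * lam).sum(axis=1)

for t in (30, 100, 300, 600):
    print(f"S({t:>3d} days) = {np.exp(-cumulative_hazard(t))[0]:.3f}")
```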

  15. Localization and tracking of moving objects in two-dimensional space by echolocation.

    PubMed

    Matsuo, Ikuo

    2013-02-01

    Bats use frequency-modulated echolocation to identify and capture moving objects in real three-dimensional space. Experimental evidence indicates that bats are capable of locating static objects with a range accuracy of less than 1 μs. A previously introduced model estimates ranges of multiple, static objects using linear frequency modulation (LFM) sound and Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates. The delay time for a single object was estimated with an accuracy of about 1.3 μs by measuring the echo at a low signal-to-noise ratio (SNR). The range accuracy was dependent not only on the SNR but also the Doppler shift, which was dependent on the movements. However, it was unclear whether this model could estimate the moving object range at each timepoint. In this study, echoes were measured from the rotating pole at two receiving points by intermittently emitting LFM sounds. The model was shown to localize moving objects in two-dimensional space by accurately estimating the object's range at each timepoint.
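
    The range-finding step behind the two-receiver localization can be sketched as matched-filter delay estimation for a linear FM (LFM) echo: correlate the received signal with the emitted sweep and take the correlation peak. Pulse parameters, the noise level, and the 800 microsecond true delay below are illustrative, not the experimental settings.

```python
# Hedged sketch: matched-filter delay estimation for an LFM echo.
import numpy as np

fs = 400_000.0                                   # sample rate (Hz)
t = np.arange(0, 0.003, 1.0 / fs)                # 3 ms sweep
chirp = np.sin(2 * np.pi * (80_000 * t - 0.5 * (60_000 / 0.003) * t**2))  # 80 -> 20 kHz

true_delay = 0.0008                              # seconds (illustrative)
n_delay = int(round(true_delay * fs))
echo = np.zeros(4096)
echo[n_delay:n_delay + chirp.size] += 0.3 * chirp
echo += np.random.default_rng(10).normal(scale=0.05, size=echo.size)

corr = np.correlate(echo, chirp, mode="valid")   # matched filter
delay_hat = np.argmax(np.abs(corr)) / fs
print(f"true delay = {true_delay*1e6:.1f} us, estimated = {delay_hat*1e6:.1f} us")
```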

  16. Multi -risk assessment at a national level in Georgia

    NASA Astrophysics Data System (ADS)

    Tsereteli, Nino; Varazanashvili, Otar; Amiranashvili, Avtandil; Tsereteli, Emili; Elizbarashvili, Elizbar; Saluqvadze, Manana; Dolodze, Jemal

    2013-04-01

    Work presented here was initiated by the national GNSF project "Reducing natural disasters multiple risk: a positive factor for Georgia development" and two international projects: NATO SFP 983038 "Seismic hazard and Risk assessment for Southern Caucasus-Eastern Turkey Energy Corridors" and EMME "Earthquake Model for Middle East Region". Methodology for estimation of "general" vulnerability, hazards and multiple risk to natural hazards (namely, earthquakes, landslides, snow avalanches, flash floods, mudflows, drought, hurricanes, frost, hail) was developed for Georgia. Electronic detailed databases of natural disasters were created. These databases contain the parameters of hazardous phenomena that caused natural disasters. The magnitude and intensity scales of the mentioned disasters are reviewed, and new magnitude and intensity scales are suggested for disasters for which no such formalization has yet been performed. The associated economic losses were evaluated and presented in monetary terms for these hazards. Based on the hazard inventory, an approach was developed that allowed for the calculation of an overall vulnerability value for each individual hazard type, using the Gross Domestic Product per unit area (applied to population) as the indicator for elements at risk exposed. The correlation between estimated economic losses, physical exposure and the magnitude for each of the six types of hazards has been investigated in detail by using multiple linear regression analysis. Economic losses for all past events and historical vulnerability were estimated. Finally, the spatial distribution of general vulnerability was assessed, the expected maximum economic loss was calculated, and a multi-risk map was set up.

  17. Progress in NTHMP Hazard Assessment

    USGS Publications Warehouse

    Gonzalez, F.I.; Titov, V.V.; Mofjeld, H.O.; Venturato, A.J.; Simmons, R.S.; Hansen, R.; Combellick, Rodney; Eisner, R.K.; Hoirup, D.F.; Yanagi, B.S.; Yong, S.; Darienzo, M.; Priest, G.R.; Crawford, G.L.; Walsh, T.J.

    2005-01-01

    The Hazard Assessment component of the U.S. National Tsunami Hazard Mitigation Program has completed 22 modeling efforts covering 113 coastal communities with an estimated population of 1.2 million residents that are at risk. Twenty-three evacuation maps have also been completed. Important improvements in organizational structure have been made with the addition of two State geotechnical agency representatives to Steering Group membership, and progress has been made on other improvements suggested by program reviewers. © Springer 2005.

  18. Using Remote Sensing Data to Constrain Models of Fault Interactions and Plate Boundary Deformation

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Donnellan, A.; Lyzenga, G. A.; Parker, J. W.; Milliner, C. W. D.

    2016-12-01

    Determining the distribution of slip and behavior of fault interactions at plate boundaries is a complex problem. Field and remotely sensed data often lack the necessary coverage to fully resolve fault behavior. However, realistic physical models may be used to more accurately characterize the complex behavior of faults constrained with observed data, such as GPS, InSAR, and SfM. These results will improve the utility of using combined models and data to estimate earthquake potential and characterize plate boundary behavior. Plate boundary faults exhibit complex behavior, with partitioned slip and distributed deformation. To investigate what fraction of slip becomes distributed deformation off major faults, we examine a model fault embedded within a damage zone of reduced elastic rigidity that narrows with depth and forward model the slip and resulting surface deformation. The fault segments and slip distributions are modeled using the JPL GeoFEST software. GeoFEST (Geophysical Finite Element Simulation Tool) is a two- and three-dimensional finite element software package for modeling solid stress and strain in geophysical and other continuum domain applications [Lyzenga, et al., 2000; Glasscoe, et al., 2004; Parker, et al., 2008, 2010]. New methods to advance geohazards research using computer simulations and remotely sensed observations for model validation are required to understand fault slip, the complex nature of fault interaction and plate boundary deformation. These models help enhance our understanding of the underlying processes, such as transient deformation and fault creep, and can aid in developing observation strategies for sUAV, airborne, and upcoming satellite missions seeking to determine how faults behave and interact and assess their associated hazard. Models will also help to characterize this behavior, which will enable improvements in hazard estimation. Validating the model results against remotely sensed observations will allow us to better constrain fault zone rheology and physical properties, having implications for the overall understanding of earthquake physics, fault interactions, plate boundary deformation and earthquake hazard, preparedness and risk reduction.

  19. A practical divergence measure for survival distributions that can be estimated from Kaplan-Meier curves.

    PubMed

    Cox, Trevor F; Czanner, Gabriela

    2016-06-30

    This paper introduces a new simple divergence measure between two survival distributions. For two groups of patients, the divergence measure between their associated survival distributions is based on the integral of the absolute difference in probabilities that a patient from one group dies at time t and a patient from the other group survives beyond time t and vice versa. In the case of non-crossing hazard functions, the divergence measure is closely linked to the Harrell concordance index, C, the Mann-Whitney test statistic and the area under a receiver operating characteristic curve. The measure can be used in a dynamic way where the divergence between two survival distributions from time zero up to time t is calculated enabling real-time monitoring of treatment differences. The divergence can be found for theoretical survival distributions or can be estimated non-parametrically from survival data using Kaplan-Meier estimates of the survivor functions. The estimator of the divergence is shown to be generally unbiased and approximately normally distributed. For the case of proportional hazards, the constituent parts of the divergence measure can be used to assess the proportional hazards assumption. The use of the divergence measure is illustrated on the survival of pancreatic cancer patients. Copyright © 2016 John Wiley & Sons, Ltd.
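
    The sketch below estimates the two survivor functions non-parametrically (Kaplan-Meier) on a common grid and numerically integrates a divergence between them. The integrand used is one plausible reading of the verbal definition above, not necessarily the paper's exact formula, and the simulated survival data are illustrative.

```python
# Hedged sketch: Kaplan-Meier curves for two groups and a numerically
# integrated divergence between them (integrand is an assumption).
import numpy as np

def kaplan_meier(time, event, grid):
    """Kaplan-Meier survivor function evaluated on a common time grid."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n_at_risk = np.arange(len(time), 0, -1)
    factors = np.where(event == 1, 1.0 - 1.0 / n_at_risk, 1.0)
    surv_at_obs = np.cumprod(factors)
    idx = np.searchsorted(time, grid, side="right") - 1
    return np.where(idx < 0, 1.0, surv_at_obs[np.clip(idx, 0, None)])

rng = np.random.default_rng(11)
t1, t2 = rng.exponential(12.0, 150), rng.exponential(18.0, 150)   # event times (months)
c1, c2 = rng.uniform(0, 36, 150), rng.uniform(0, 36, 150)         # censoring times
time1, event1 = np.minimum(t1, c1), (t1 <= c1).astype(int)
time2, event2 = np.minimum(t2, c2), (t2 <= c2).astype(int)

grid = np.linspace(0, 36, 721)
s1, s2 = kaplan_meier(time1, event1, grid), kaplan_meier(time2, event2, grid)
dt = grid[1] - grid[0]
divergence = np.sum(np.abs((1 - s1) * s2 - (1 - s2) * s1)) * dt
print(f"estimated divergence over 36 months: {divergence:.2f}")
```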

  20. Uncertainty analysis in vulnerability estimations for elements at risk- a review of concepts and some examples on landslides

    NASA Astrophysics Data System (ADS)

    Ciurean, R. L.; Glade, T.

    2012-04-01

    Decision under uncertainty is a constant of everyday life and an important component of risk management and governance. Recently, experts have emphasized the importance of quantifying uncertainty in all phases of landslide risk analysis. Due to its multi-dimensional and dynamic nature, (physical) vulnerability is inherently complex, and the "degree of loss" estimates are imprecise and to some extent even subjective. Uncertainty analysis introduces quantitative modeling approaches that allow for a more explicitly objective output, improving the risk management process as well as enhancing communication between various stakeholders for better risk governance. This study presents a review of concepts for uncertainty analysis in the vulnerability of elements at risk to landslides. Different semi-quantitative and quantitative methods are compared based on their feasibility in real-world situations, hazard dependency, process stage in vulnerability assessment (i.e. input data, model, output), and applicability within an integrated landslide hazard and risk framework. The resulting observations will help to identify current gaps and future needs in vulnerability assessment, including estimation of uncertainty propagation, transferability of the methods, and development of visualization tools, but also address basic questions such as what uncertainty is and how it can be quantified or treated in a reliable and reproducible way.

  1. Environmental Risk Assessment: Spatial Analysis of Chemical Hazards and Risks in South Korea

    NASA Astrophysics Data System (ADS)

    Yu, H.; Heo, S.; Kim, M.; Lee, W. K.; Jong-Ryeul, S.

    2017-12-01

    This study identified chemical hazard and risk levels in Korea by analyzing the spatial distribution of chemical factories and accidents. The numbers of chemical factories and accidents in 5-km² grid cells were used as attribute values for spatial analysis. First, semi-variograms were computed to examine spatial distribution patterns and to identify spatial autocorrelation of chemical factories and accidents. The semi-variograms showed that the spatial distributions of chemical factories and accidents were spatially autocorrelated. Second, the results of the semi-variograms were used in Ordinary Kriging to estimate chemical hazard and risk levels. The level values were extracted from the Ordinary Kriging result and their spatial similarity was examined by juxtaposing the two values with respect to their location. Six peaks were identified in both the hazard and risk estimation results, and the peaks correlated with major cities in Korea. Third, the estimated hazard and risk levels were classified using geometrical intervals into four quadrants: Low Hazard and Low Risk (LHLR), Low Hazard and High Risk (LHHR), High Hazard and Low Risk (HHLR), and High Hazard and High Risk (HHHR). The four groups reflect different chemical safety management issues in Korea: the LHLR group is relatively safe; many chemical reseller factories fall in the HHLR group; chemical transportation accidents dominate the LHHR group; and an abundance of both factories and accidents characterizes the HHHR group. Each quadrant represents different safety management obstacles in Korea, and studying spatial differences can support the establishment of an efficient risk management plan.
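
    The first step of the workflow above, an isotropic empirical semi-variogram for gridded counts, is sketched below. Coordinates, counts, and the lag spacing are synthetic stand-ins for the 5-km² factory and accident grids, and no variogram model fitting or kriging is performed here.

```python
# Hedged sketch: isotropic empirical semi-variogram of gridded counts.
import numpy as np

rng = np.random.default_rng(12)
xy = rng.uniform(0, 100, size=(300, 2))                  # grid-cell centroids (km)
z = np.exp(-xy[:, 0] / 40.0) * 10 + rng.poisson(2, 300)  # counts with a spatial trend

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
sq_diff = 0.5 * (z[:, None] - z[None, :]) ** 2           # semivariance contributions
upper = np.triu(np.ones_like(d, dtype=bool), k=1)        # each pair counted once

lags = np.arange(0, 60, 5.0)
for lo, hi in zip(lags[:-1], lags[1:]):
    mask = (d > lo) & (d <= hi) & upper
    print(f"lag {lo:4.0f}-{hi:4.0f} km: gamma = {sq_diff[mask].mean():6.2f} "
          f"({mask.sum()} pairs)")
```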

  2. Mortality Measurement at Advanced Ages: A Study of the Social Security Administration Death Master File

    PubMed Central

    Gavrilov, Leonid A.; Gavrilova, Natalia S.

    2011-01-01

    Accurate estimates of mortality at advanced ages are essential to improving forecasts of mortality and the population size of the oldest old age group. However, estimation of hazard rates at extremely old ages poses serious challenges to researchers: (1) The observed mortality deceleration may be at least partially an artifact of mixing different birth cohorts with different mortality (heterogeneity effect); (2) standard assumptions of hazard rate estimates may be invalid when risk of death is extremely high at old ages and (3) ages of very old people may be exaggerated. One way of obtaining estimates of mortality at extreme ages is to pool together international records of persons surviving to extreme ages with subsequent efforts of strict age validation. This approach helps researchers to resolve the third of the above-mentioned problems but does not resolve the first two problems because of inevitable data heterogeneity when data for people belonging to different birth cohorts and countries are pooled together. In this paper we propose an alternative approach, which gives an opportunity to resolve the first two problems by compiling data for more homogeneous single-year birth cohorts with hazard rates measured at narrow (monthly) age intervals. Possible ways of resolving the third problem of hazard rate estimation are elaborated. This approach is based on data from the Social Security Administration Death Master File (DMF). Some birth cohorts covered by DMF could be studied by the method of extinct generations. Availability of month of birth and month of death information provides a unique opportunity to obtain hazard rate estimates for every month of age. Study of several single-year extinct birth cohorts shows that mortality trajectory at advanced ages follows the Gompertz law up to the ages 102–105 years without a noticeable deceleration. Earlier reports of mortality deceleration (deviation of mortality from the Gompertz law) at ages below 100 appear to be artifacts of mixing together several birth cohorts with different mortality levels and using cross-sectional instead of cohort data. Age exaggeration and crude assumptions applied to mortality estimates at advanced ages may also contribute to mortality underestimation at very advanced ages. PMID:22308064
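
    The hazard-rate calculation for an extinct birth cohort at monthly age resolution can be sketched as follows: with all deaths observed, the number exposed at each month of age is simply the count of deaths at that age or later, and the monthly hazard is deaths divided by exposure. The simulated ages at death are illustrative, not DMF records.

```python
# Hedged sketch: monthly hazard rates for a single extinct cohort.
import numpy as np

rng = np.random.default_rng(13)
# Synthetic ages at death (months) for one extinct cohort (illustrative).
age_at_death = 80 * 12 + rng.gumbel(loc=8 * 12, scale=18, size=50_000)
age_months = np.floor(age_at_death).astype(int)

months = np.arange(85 * 12, 105 * 12)                   # ages 85 to 105 years
deaths = np.array([(age_months == m).sum() for m in months])
exposed = np.array([(age_months >= m).sum() for m in months])
hazard = np.where(exposed > 0, deaths / np.maximum(exposed, 1), np.nan)

for yr in (90, 95, 100):
    print(f"hazard at age {yr}: {hazard[(yr - 85) * 12]:.4f} per month")
```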

  3. Semiparametric Time-to-Event Modeling in the Presence of a Latent Progression Event

    PubMed Central

    Rice, John D.; Tsodikov, Alex

    2017-01-01

    In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood–based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. PMID:27556886

  4. Semiparametric time-to-event modeling in the presence of a latent progression event.

    PubMed

    Rice, John D; Tsodikov, Alex

    2017-06-01

    In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood-based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. © 2016, The International Biometric Society.

  5. A non-intrusive screening methodology for environmental hazard assessment at waste disposal sites for water resources protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simons, B.A.; Woldt, W.E.; Jones, D.D.

    The environmental and health risks posed by unregulated waste disposal sites are potential concerns of Pacific Rim regions and island areas because of the need to protect aquifers and other valuable water resources. A non-intrusive screening methodology to determine site characteristics, including possible soil and/or groundwater contamination, areal extent of waste, etc., is being developed and tested at waste disposal sites in Nebraska. This type of methodology would be beneficial to Pacific Rim regions in investigating and/or locating unknown or poorly documented contamination areas for hazard assessment and groundwater protection. Traditional assessment methods are generally expensive, time consuming, and potentially exacerbate the problem. Ideally, a quick and inexpensive assessment method to reliably characterize these sites is desired. Electromagnetic (EM) conductivity surveying and soil-vapor sampling techniques, combined with innovative three-dimensional geostatistical methods, are used to map the data to develop a site characterization of the subsurface and to aid in tracking any contaminant plumes. The EM data is analyzed to determine/estimate the extent and volume of waste and/or leachate. Soil-vapor data are analyzed to estimate a site's volatile organic compound (VOC) emission rate to the atmosphere. The combined information could then be incorporated as one part of an overall hazard assessment system.

  6. Data-Adaptive Bias-Reduced Doubly Robust Estimation.

    PubMed

    Vermeulen, Karel; Vansteelandt, Stijn

    2016-05-01

    Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
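
    To make the "two nuisance working models" idea concrete, the sketch below implements a standard (not bias-reduced) augmented inverse-probability-weighted (AIPW) estimator of a mean outcome under missingness: a propensity model and an outcome model are combined so that correctness of either yields consistency. All data and model choices are simulated and illustrative.

```python
# Hedged sketch: doubly robust (AIPW) estimation of a mean under missingness.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(14)
n = 5000
x = rng.normal(size=(n, 2))
y_full = 1.0 + x @ np.array([0.5, -0.8]) + rng.normal(size=n)
p_obs = 1.0 / (1.0 + np.exp(-(0.3 + x @ np.array([0.7, 0.2]))))
r = rng.uniform(size=n) < p_obs                                   # 1 = outcome observed

prop = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]      # propensity model
m_hat = LinearRegression().fit(x[r], y_full[r]).predict(x)        # outcome model

aipw = np.mean(m_hat + r * (y_full - m_hat) / prop)               # doubly robust estimate
print(f"true mean = {y_full.mean():.3f}, complete-case = {y_full[r].mean():.3f}, "
      f"AIPW = {aipw:.3f}")
```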

  7. Modeling Compound Flood Hazards in Coastal Embayments

    NASA Astrophysics Data System (ADS)

    Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.

    2017-12-01

    Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten increasing flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels such as 100 and 500 year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood level intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of the correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in the unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), a preferred marginal scenario, and reproduced time series of ensembles based on Monte Carlo sampling of the bivariate hazard domain. The comparison between the resulting extreme water dynamics under the compound hazard scenarios explained above provides insight into the strengths and weaknesses of each approach and helps modelers choose the scenario that best fits the needs of their project. The proposed risk assessment approach can help flood hazard modeling practitioners achieve a more reliable estimate of risk, by cautiously reducing the dimensionality of the hazard analysis.
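
    The copula step can be sketched as follows: describe the dependence between peak river discharge and coastal water level with a Gaussian copula fitted to normal scores of the ranked data, then estimate the probability that both variables exceed their marginal 99th percentiles. The synthetic data, the marginal families, and the Gaussian copula choice are illustrative assumptions, not the paper's fitted model.

```python
# Hedged sketch: Gaussian copula fit and joint exceedance probability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(15)
n = 3000
z = rng.multivariate_normal([0, 0], [[1, 0.55], [0.55, 1]], size=n)
discharge = stats.lognorm.ppf(stats.norm.cdf(z[:, 0]), s=0.8, scale=300)               # m^3/s
sea_level = stats.genextreme.ppf(stats.norm.cdf(z[:, 1]), c=-0.1, loc=1.0, scale=0.2)  # m

# Fit: rank-transform to pseudo-uniforms, map to normal scores, estimate rho.
u = (stats.rankdata(discharge) - 0.5) / n
v = (stats.rankdata(sea_level) - 0.5) / n
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

# Joint exceedance of the marginal 99th percentiles under the fitted copula.
sim = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000)
q = stats.norm.ppf(0.99)
p_joint = np.mean((sim[:, 0] > q) & (sim[:, 1] > q))
print(f"rho_hat = {rho:.2f}; P(both exceed 99th pct): copula = {p_joint:.4f}, "
      f"independence = {0.01**2:.4f}")
```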

  8. Estimation of two-dimensional motion velocity using ultrasonic signals beamformed in Cartesian coordinate for measurement of cardiac dynamics

    NASA Astrophysics Data System (ADS)

    Kaburaki, Kaori; Mozumi, Michiya; Hasegawa, Hideyuki

    2018-07-01

    Methods for the estimation of the two-dimensional (2D) velocity and displacement of physiological tissues are necessary for quantitative diagnosis. In echocardiography with a phased array probe, the accuracy in the estimation of the lateral motion is lower than that of the axial motion. To improve the accuracy in the estimation of the lateral motion, in the present study, the coordinate system for ultrasonic beamforming was changed from the conventional polar coordinate system to a Cartesian coordinate system. In a basic experiment, the motion velocity of a phantom, which was moved at a constant speed, was estimated by the conventional and proposed methods. The proposed method reduced the bias error and standard deviation in the estimated motion velocities. In an in vivo measurement, intracardiac blood flow was analyzed by the proposed method.
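
    The abstract does not give the velocity estimator itself; as a generic illustration of recovering a 2D displacement between two frames sampled on a Cartesian grid, the sketch below uses FFT-based phase correlation on a synthetic speckle pattern. Dividing the displacement by an assumed frame interval yields a velocity estimate.

        # Generic illustration (not the authors' beamforming method): recover a 2D
        # integer-pixel displacement between two frames on a Cartesian grid via phase
        # correlation; velocity follows by dividing by the frame interval.
        import numpy as np

        def phase_correlation_shift(ref, moved):
            """2D shift d such that moved is (circularly) ref translated by d pixels."""
            R = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
            R /= np.abs(R) + 1e-12                      # normalized cross-power spectrum
            corr = np.fft.ifft2(R).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap indices above the Nyquist point to negative shifts
            return np.array([p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)], dtype=float)

        # Example: a speckle-like pattern translated by (3 axial, -2 lateral) pixels
        rng = np.random.default_rng(2)
        frame0 = rng.normal(size=(128, 128))
        frame1 = np.roll(frame0, shift=(3, -2), axis=(0, 1))
        dt = 1.0 / 500.0                                # assumed frame interval [s]
        dy, dx = phase_correlation_shift(frame0, frame1)
        print("displacement (px):", dy, dx, "velocity (px/s):", dy / dt, dx / dt)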

  9. Binding Direction-Based Two-Dimensional Flattened Contact Area Computing Algorithm for Protein-Protein Interactions.

    PubMed

    Kang, Beom Sik; Pugalendhi, GaneshKumar; Kim, Ku-Jin

    2017-10-13

    Interactions between protein molecules are essential for the assembly, function, and regulation of proteins. The contact region between two protein molecules in a protein complex is usually complementary in shape for both molecules and the area of the contact region can be used to estimate the binding strength between two molecules. Although the area is a value calculated from the three-dimensional surface, it cannot represent the three-dimensional shape of the surface. Therefore, we propose an original concept of two-dimensional contact area which provides further information such as the ruggedness of the contact region. We present a novel algorithm for calculating the binding direction between two molecules in a protein complex, and then suggest a method to compute the two-dimensional flattened area of the contact region between two molecules based on the binding direction.

  10. The bias of a 2D view: Comparing 2D and 3D mesophyll surface area estimates using non-invasive imaging

    USDA-ARS?s Scientific Manuscript database

    The surface area of the leaf mesophyll exposed to intercellular airspace per leaf area (Sm) is closely associated with CO2 diffusion and photosynthetic rates. Sm is typically estimated from two-dimensional (2D) leaf sections and corrected for the three-dimensional (3D) geometry of mesophyll cells, l...

  11. The performance of different propensity score methods for estimating marginal hazard ratios.

    PubMed

    Austin, Peter C

    2013-07-20

    Propensity score methods are increasingly being used to reduce or minimize the effects of confounding when estimating the effects of treatments, exposures, or interventions when using observational or non-randomized data. Under the assumption of no unmeasured confounders, previous research has shown that propensity score methods allow for unbiased estimation of linear treatment effects (e.g., differences in means or proportions). However, in biomedical research, time-to-event outcomes occur frequently. There is a paucity of research into the performance of different propensity score methods for estimating the effect of treatment on time-to-event outcomes. Furthermore, propensity score methods allow for the estimation of marginal or population-average treatment effects. We conducted an extensive series of Monte Carlo simulations to examine the performance of propensity score matching (1:1 greedy nearest-neighbor matching within propensity score calipers), stratification on the propensity score, inverse probability of treatment weighting (IPTW) using the propensity score, and covariate adjustment using the propensity score to estimate marginal hazard ratios. We found that both propensity score matching and IPTW using the propensity score allow for the estimation of marginal hazard ratios with minimal bias. Of these two approaches, IPTW using the propensity score resulted in estimates with lower mean squared error when estimating the effect of treatment in the treated. Stratification on the propensity score and covariate adjustment using the propensity score result in biased estimation of both marginal and conditional hazard ratios. Applied researchers are encouraged to use propensity score matching and IPTW using the propensity score when estimating the relative effect of treatment on time-to-event outcomes. Copyright © 2012 John Wiley & Sons, Ltd.
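
    A minimal sketch of the IPTW approach recommended above, assuming the lifelines package is available: the propensity score is fitted by logistic regression, stabilized weights are formed, and a weighted Cox model with a robust variance gives the marginal hazard ratio. The data, column names, and true hazard ratio are simulated and hypothetical.

        # Hedged sketch: IPTW estimation of a marginal hazard ratio. Assumes the
        # lifelines package (and its params_ attribute in recent versions) is available.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(3)
        n = 5000
        x1, x2 = rng.normal(size=n), rng.binomial(1, 0.4, size=n)
        treated = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x1 + 0.8 * x2))))
        # Exponential event times with a true conditional hazard ratio of ~0.7 for treatment
        rate = 0.1 * np.exp(np.log(0.7) * treated + 0.5 * x1 + 0.3 * x2)
        time = rng.exponential(1 / rate)
        event = (time < 10).astype(int)
        time = np.minimum(time, 10)
        df = pd.DataFrame(dict(time=time, event=event, treated=treated, x1=x1, x2=x2))

        # Propensity score and stabilized ATE weights
        ps = LogisticRegression(max_iter=1000).fit(df[["x1", "x2"]], df["treated"]).predict_proba(df[["x1", "x2"]])[:, 1]
        p_marg = df["treated"].mean()
        df["iptw"] = np.where(df["treated"] == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

        # Weighted Cox model with treatment only -> marginal hazard ratio; robust SEs
        cph = CoxPHFitter()
        cph.fit(df[["time", "event", "treated", "iptw"]], duration_col="time",
                event_col="event", weights_col="iptw", robust=True)
        print("marginal HR:", np.exp(cph.params_["treated"]))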

  12. Regression Analysis of a Disease Onset Distribution Using Diagnosis Data

    PubMed Central

    Young, Jessica G.; Jewell, Nicholas P.; Samuels, Steven J.

    2008-01-01

    Summary We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered. PMID:17680832

  13. Hurricane Sandy's flood frequency increasing from year 1800 to 2100.

    PubMed

    Lin, Ning; Kopp, Robert E; Horton, Benjamin P; Donnelly, Jeffrey P

    2016-10-25

    Coastal flood hazard varies in response to changes in storm surge climatology and the sea level. Here we combine probabilistic projections of the sea level and storm surge climatology to estimate the temporal evolution of flood hazard. We find that New York City's flood hazard has increased significantly over the past two centuries and is very likely to increase more sharply over the 21st century. Due to the effect of sea level rise, the return period of Hurricane Sandy's flood height decreased by a factor of ∼3× from year 1800 to 2000 and is estimated to decrease by a further ∼4.4× from 2000 to 2100 under a moderate-emissions pathway. When potential storm climatology change over the 21st century is also accounted for, Sandy's return period is estimated to decrease by ∼3× to 17× from 2000 to 2100.

  14. Hurricane Sandy’s flood frequency increasing from year 1800 to 2100

    PubMed Central

    Horton, Benjamin P.; Donnelly, Jeffrey P.

    2016-01-01

    Coastal flood hazard varies in response to changes in storm surge climatology and the sea level. Here we combine probabilistic projections of the sea level and storm surge climatology to estimate the temporal evolution of flood hazard. We find that New York City’s flood hazard has increased significantly over the past two centuries and is very likely to increase more sharply over the 21st century. Due to the effect of sea level rise, the return period of Hurricane Sandy’s flood height decreased by a factor of ∼3× from year 1800 to 2000 and is estimated to decrease by a further ∼4.4× from 2000 to 2100 under a moderate-emissions pathway. When potential storm climatology change over the 21st century is also accounted for, Sandy’s return period is estimated to decrease by ∼3× to 17× from 2000 to 2100. PMID:27790992

  15. Modeling of marginal burning state of fire spread in live chaparral shrub fuel bed

    Treesearch

    X. Zhou; S. Mahalingam; D. Weise

    2005-01-01

    Prescribed burning in chaparral, currently used to manage wildland fuels and reduce wildfire hazard, is often conducted under marginal burning conditions. The relative importance of the fuel and environmental variables that determine fire spread success in chaparral fuels is not quantitatively understood. Based on extensive experimental study, a two-dimensional...

  16. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, in which the component-baseline hazard functions are left completely unspecified. Estimation is by maximum likelihood, based on the full likelihood and implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.

  17. Applied Prevalence Ratio estimation with different Regression models: An example from a cross-national study on substance use research.

    PubMed

    Espelt, Albert; Marí-Dell'Olmo, Marc; Penelo, Eva; Bosque-Prous, Marina

    2016-06-14

    To examine the differences between the Prevalence Ratio (PR) and the Odds Ratio (OR) in a cross-sectional study and to provide tools to calculate the PR using two statistical packages widely used in substance use research (STATA and R). We used cross-sectional data from 41,263 participants from 16 European countries participating in the Survey on Health, Ageing and Retirement in Europe (SHARE). The dependent variable, hazardous drinking, was calculated using the Alcohol Use Disorders Identification Test - Consumption (AUDIT-C). The main independent variable was gender. Other variables used were age, educational level, and country of residence. The PR of hazardous drinking in men relative to women was estimated using the Mantel-Haenszel method, log-binomial regression models, and Poisson regression models with robust variance. These estimates were compared with the OR calculated using logistic regression models. The prevalence of hazardous drinking varied among countries. Generally, men had a higher prevalence of hazardous drinking than women [PR=1.43 (1.38-1.47)]. The estimated PR was identical regardless of the method and the statistical package used. However, the OR overestimated the PR, to a degree that depended on the prevalence of hazardous drinking in the country. In cross-sectional studies where comparisons are made between countries that differ in the prevalence of the disease or condition, it is advisable to use the PR instead of the OR.
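
    The paper provides Stata and R tools; the following is an analogous hedged sketch in Python using statsmodels, contrasting the prevalence ratio from a Poisson regression with robust (sandwich) standard errors against the odds ratio from a logistic regression on simulated data with a common outcome.

        # Analogous sketch in Python (not the paper's Stata/R code): PR via Poisson
        # regression with robust SEs vs. OR via logistic regression, on simulated data.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 10000
        male = rng.binomial(1, 0.5, size=n)
        age = rng.integers(50, 90, size=n)
        # Common outcome (~35% prevalence) so that the OR and PR diverge noticeably
        p = 1 / (1 + np.exp(-(-1.2 + 0.36 * male + 0.01 * (age - 65))))
        df = pd.DataFrame(dict(hazardous=rng.binomial(1, p), male=male, age=age))

        pr_model = smf.glm("hazardous ~ male + age", data=df,
                           family=sm.families.Poisson()).fit(cov_type="HC0")
        or_model = smf.glm("hazardous ~ male + age", data=df,
                           family=sm.families.Binomial()).fit()

        print("PR (men vs women):", np.exp(pr_model.params["male"]))
        print("OR (men vs women):", np.exp(or_model.params["male"]))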

  18. Turbulence Hazard Metric Based on Peak Accelerations for Jetliner Passengers

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2005-01-01

    Calculations are made of the approximate hazard due to peak normal accelerations of an airplane flying through a simulated vertical wind field associated with a convective frontal system. The calculations are based on a hazard metric developed from a systematic application of a generic math model to 1-cosine discrete gusts of various amplitudes and gust lengths. The math model simulates the three-degree-of-freedom longitudinal rigid-body response to vertical gusts and includes (1) fuselage flexibility, (2) the lag in the downwash from the wing to the tail, (3) gradual lift effects, (4) a simplified autopilot, and (5) motion of an unrestrained passenger in the rear cabin. Airplane and passenger response contours are calculated for a matrix of gust amplitudes and gust lengths. The airplane response contours are used to develop an approximate hazard metric of peak normal accelerations as a function of gust amplitude and gust length. The hazard metric is then applied to a two-dimensional simulated vertical wind field of a convective frontal system. The variations of the hazard metric with gust length and airplane heading are demonstrated.
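
    For reference, the sketch below generates the 1-cosine discrete gust family on which such a hazard metric is built, using the common form w(x) = (A/2)(1 - cos(2*pi*x/L)) over a matrix of amplitudes A and gust lengths L. The numerical values and the response model that would turn these profiles into acceleration contours are assumptions, not taken from the report.

        # Hedged sketch of the 1-cosine discrete gust family; values are illustrative.
        import numpy as np

        def one_minus_cosine_gust(x, amplitude, gust_length):
            """Vertical gust velocity along the flight path x (same units as gust_length)."""
            w = 0.5 * amplitude * (1.0 - np.cos(2.0 * np.pi * x / gust_length))
            return np.where((x >= 0) & (x <= gust_length), w, 0.0)

        amplitudes = np.array([3.0, 6.0, 9.0, 12.0])     # m/s
        gust_lengths = np.array([30.0, 100.0, 300.0])    # m
        x = np.linspace(0.0, 300.0, 601)
        # One profile per (amplitude, length) pair; the peak response of an airplane
        # model to each profile would populate the hazard-metric contours.
        profiles = {(A, L): one_minus_cosine_gust(x, A, L) for A in amplitudes for L in gust_lengths}
        print(max(p.max() for p in profiles.values()))   # peak gust velocity = max amplitude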

  19. GFZ Wireless Seismic Array (GFZ-WISE), a Wireless Mesh Network of Seismic Sensors: New Perspectives for Seismic Noise Array Investigations and Site Monitoring

    PubMed Central

    Picozzi, Matteo; Milkereit, Claus; Parolai, Stefano; Jaeckel, Karl-Heinz; Veit, Ingo; Fischer, Joachim; Zschau, Jochen

    2010-01-01

    Over the last few years, the analysis of seismic noise recorded by two-dimensional arrays has been confirmed to be capable of deriving the subsoil shear-wave velocity structure down to several hundred meters depth. In fact, using just a few minutes of seismic noise recordings and combining this with the well-known horizontal-to-vertical method, it has also been shown that it is possible to investigate the average one-dimensional velocity structure below an array of stations in urban areas with a sufficient resolution to depths that would be prohibitive with active source array surveys, while in addition reducing the number of boreholes required to be drilled for site-effect analysis. However, the high cost of standard seismological instrumentation limits the number of sensors generally available for two-dimensional array measurements (i.e., of the order of 10), limiting the resolution in the estimated shear-wave velocity profiles. Therefore, new themes in site-effect estimation research by two-dimensional arrays involve the development and application of low-cost instrumentation, which potentially allows the performance of dense-array measurements, and the development of dedicated signal-analysis procedures for rapid and robust estimation of shear-wave velocity profiles. In this work, we present novel low-cost wireless instrumentation for dense two-dimensional ambient seismic noise array measurements that allows the real-time analysis of the surface-wavefield and the rapid estimation of the local shear-wave velocity structure for site response studies. We first introduce the general philosophy of the new system, as well as the hardware and software that forms the novel instrument, which we have tested in laboratory and field studies. PMID:22319298

  20. Microburst vertical wind estimation from horizontal wind measurements

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.

    1994-01-01

    The vertical wind or downdraft component of a microburst-generated wind shear can significantly degrade airplane performance. Doppler radar and lidar are two sensor technologies being tested to provide flight crews with early warning of the presence of hazardous wind shear. An inherent limitation of Doppler-based sensors is the inability to measure velocities perpendicular to the line of sight, which results in an underestimate of the total wind shear hazard. One solution to the line-of-sight limitation is to use a vertical wind model to estimate the vertical component from the horizontal wind measurement. The objective of this study was to assess the ability of simple vertical wind models to improve the hazard prediction capability of an airborne Doppler sensor in a realistic microburst environment. Both simulation and flight test measurements were used to test the vertical wind models. The results indicate that in the altitude region of interest (at or below 300 m), the simple vertical wind models improved the hazard estimate. The radar simulation study showed that the magnitude of the performance improvement was altitude dependent. The altitude of maximum performance improvement occurred at about 300 m.

  1. Debris flow risk mapping on medium scale and estimation of prospective economic losses

    NASA Astrophysics Data System (ADS)

    Blahut, Jan; Sterlacchini, Simone

    2010-05-01

    Delimitation of potential zones affected by debris flow hazard, mapping of areas at risk, and estimation of future economic damage provide important information for spatial planners and local administrators in all countries endangered by this type of phenomenon. This study presents a medium-scale (1:25,000 - 1:50,000) analysis applied in the Consortium of Mountain Municipalities of Valtellina di Tirano (Italian Alps, Lombardy Region). In this area, a debris flow hazard map was coupled with information about the elements at risk to obtain monetary values of prospective damage. Two available hazard maps were obtained from medium-scale GIS modelling. Probability estimates of debris flow occurrence were calculated using existing susceptibility maps and two sets of aerial images. Values were assigned to the elements at risk according to official information on housing costs and land value from the Territorial Agency of the Lombardy Region. In the first risk map, vulnerability values were assumed to be 1. The second risk map used three classes of vulnerability values estimated qualitatively according to the possible debris flow propagation. Risk curves summarizing the possible economic losses were calculated. Finally, these maps of economic risk were compared to maps derived from a qualitative evaluation of the values of the elements at risk.
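
    The risk computation implied by the abstract reduces, per map unit, to expected loss = occurrence probability x vulnerability x monetary value of the exposed elements. The sketch below applies that formula with the two vulnerability treatments described (vulnerability fixed at 1 versus three qualitative classes); all numbers are illustrative, not values from the Valtellina study.

        # Minimal sketch of the per-zone risk calculation; numbers are illustrative.
        import numpy as np

        prob = np.array([0.02, 0.10, 0.35])          # debris-flow occurrence probability per zone
        value_eur = np.array([4.0e6, 1.5e6, 0.6e6])  # value of exposed buildings/land per zone

        risk_map1 = prob * 1.0 * value_eur           # vulnerability assumed = 1
        vuln_classes = np.array([0.1, 0.5, 1.0])     # low / medium / high propagation classes
        risk_map2 = prob * vuln_classes * value_eur

        for name, r in [("V = 1", risk_map1), ("classed V", risk_map2)]:
            print(f"{name}: zone losses = {np.round(r)}, total = {r.sum():,.0f} EUR")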

  2. Hazard Function Estimation with Cause-of-Death Data Missing at Random.

    PubMed

    Wang, Qihua; Dinse, Gregg E; Liu, Chunling

    2012-04-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
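
    A hedged sketch of one of the three ideas (the inverse-probability-weighted kernel estimator): kernel-smooth the cause-specific Nelson-Aalen increments, up-weighting deaths whose cause indicator is observed by the inverse of an estimated observation probability. The missingness model, Gaussian kernel, and bandwidth are assumptions, and this is not the authors' exact estimator.

        # Hedged sketch of an IPW-style kernel hazard estimator with cause-of-death
        # indicators missing at random; model, kernel, and bandwidth are assumptions.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def ipw_kernel_hazard(grid, t, death, cause_is_1, cause_observed, X, bandwidth):
            """Kernel estimate of the cause-1 hazard when the cause of death can be missing."""
            d = death == 1
            # P(cause indicator observed | covariates), fitted among deaths only
            pi = np.ones_like(t, dtype=float)
            pi[d] = LogisticRegression(max_iter=1000).fit(X[d], cause_observed[d]).predict_proba(X[d])[:, 1]

            w = d * cause_observed * cause_is_1 / pi          # IPW weight for a cause-1 death
            at_risk = (t[None, :] >= t[:, None]).sum(axis=1)  # Y(t_i): number still at risk at t_i
            K = np.exp(-0.5 * ((grid[:, None] - t[None, :]) / bandwidth) ** 2) / (
                bandwidth * np.sqrt(2.0 * np.pi))             # Gaussian kernel K_h(s - t_i)
            # lambda_1(s) ~ sum_i K_h(s - t_i) * w_i / Y(t_i)
            return K @ (w / at_risk)

        # Simulated example
        rng = np.random.default_rng(5)
        n = 1500
        X = rng.normal(size=(n, 1))
        t_event, censor = rng.exponential(5.0, size=n), rng.exponential(8.0, size=n)
        t = np.minimum(t_event, censor)
        death = (t_event <= censor).astype(int)
        cause_is_1 = rng.binomial(1, 0.6, size=n)                     # true cause among deaths
        cause_observed = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # MAR missingness of the indicator
        grid = np.linspace(0.5, 8.0, 30)
        print(ipw_kernel_hazard(grid, t, death, cause_is_1, cause_observed, X, bandwidth=1.0))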

  3. Estimating contrast transfer function and associated parameters by constrained non-linear optimization.

    PubMed

    Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W

    2009-03-01

    The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.

  4. A two dimensional power spectral estimate for some nonstationary processes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Smith, Gregory L.

    1989-01-01

    A two dimensional estimate for the power spectral density of a nonstationary process is being developed. The estimate will be applied to helicopter noise data which is clearly nonstationary. The acoustic pressure from the isolated main rotor and isolated tail rotor is known to be periodically correlated (PC) and the combined noise from the main and tail rotors is assumed to be correlation autoregressive (CAR). The results of this nonstationary analysis will be compared with the current method of assuming that the data is stationary and analyzing it as such. Another method of analysis is to introduce a random phase shift into the data as shown by Papoulis to produce a time history which can then be accurately modeled as stationary. This method will also be investigated for the helicopter data. A method used to determine the period of a PC process when the period is not know is discussed. The period of a PC process must be known in order to produce an accurate spectral representation for the process. The spectral estimate is developed. The bias and variability of the estimate are also discussed. Finally, the current method for analyzing nonstationary data is compared to that of using a two dimensional spectral representation. In addition, the method of phase shifting the data is examined.

  5. Hazard ratio estimation and inference in clinical trials with many tied event times.

    PubMed

    Mehrotra, Devan V; Zhang, Yiwei

    2018-06-13

    The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software. Copyright © 2018 John Wiley & Sons, Ltd.
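
    A small sketch contrasting the Breslow and Efron tie-handling approximations, here via the PHReg model in statsmodels (assumed available); event times are coarsely rounded to create many ties, and Wald confidence intervals for the hazard ratio are formed from the coefficient and its standard error, in the spirit of the recommendation above.

        # Hedged sketch: Breslow vs. Efron tie handling in a Cox model with many ties,
        # using statsmodels' PHReg; data and rounding scheme are illustrative.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 4000
        treat = rng.binomial(1, 0.5, size=n)
        time = rng.exponential(1 / (0.5 * np.exp(np.log(0.75) * treat)))  # true HR = 0.75
        time = np.ceil(time * 2) / 2                                      # round to 0.5 -> many ties
        event = (time <= 4).astype(int)
        time = np.minimum(time, 4)

        for method in ("breslow", "efron"):
            res = sm.PHReg(time, treat.reshape(-1, 1), status=event, ties=method).fit()
            hr = np.exp(res.params[0])
            lo = np.exp(res.params[0] - 1.96 * res.bse[0])
            hi = np.exp(res.params[0] + 1.96 * res.bse[0])
            print(f"{method}: HR = {hr:.3f} (95% Wald CI {lo:.3f}-{hi:.3f})")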

  6. Direct estimates of low-level radiation risks of lung cancer at two NRC-compliant nuclear installations: why are the new risk estimates 20 to 200 times the old official estimates?

    PubMed

    Bross, I D; Driscoll, D L

    1981-01-01

    An official report on the health hazards to nuclear submarine workers at the Portsmouth Naval Shipyard (PNS), who were exposed to low-level ionizing radiation, was based on a casual inspection of the data and not on statistical analyses of the dosage-response relationships. When these analyses are done, serious hazards from lung cancer and other causes of death are shown. As a result of the recent studies on nuclear workers, the new risk estimates have been found to be much higher than the official estimates currently used in setting NRC permissible levels. The official BEIR estimates are about one lung cancer death per year per million persons per rem. The PNS data show 189 lung cancer deaths per year per million persons per rem.

  7. Hazard Function Estimation with Cause-of-Death Data Missing at Random

    PubMed Central

    Wang, Qihua; Dinse, Gregg E.; Liu, Chunling

    2010-01-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874

  8. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.

    PubMed

    Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu

    2017-02-26

    In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is applied only to the auto-correlation matrices while the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise, and incurs little information loss. Simulation results show that, in LGA, the proposed method achieves improved performance over other methods under white or colored noise.

  9. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle

    PubMed Central

    Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu

    2017-01-01

    In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is applied only to the auto-correlation matrices while the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise, and incurs little information loss. Simulation results show that, in LGA, the proposed method achieves improved performance over other methods under white or colored noise. PMID:28245634

  10. Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.

    PubMed

    Yuan, Haidong

    2016-10-14

    Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.

  11. Conjugated Equine Estrogens and Breast Cancer Risk in the Women’s Health Initiative Clinical Trial and Observational Study

    PubMed Central

    Prentice, Ross L.; Chlebowski, Rowan T.; Stefanick, Marcia L.; Manson, JoAnn E.; Langer, Robert D.; Pettinger, Mary; Hendrix, Susan L.; Hubbell, F. Allan; Kooperberg, Charles; Kuller, Lewis H.; Lane, Dorothy S.; McTiernan, Anne; O’Sullivan, Mary Jo; Rossouw, Jacques E.; Anderson, Garnet L.

    2009-01-01

    The Women’s Health Initiative randomized controlled trial found a trend (p = 0.09) toward a lower breast cancer risk among women assigned to daily 0.625-mg conjugated equine estrogens (CEEs) compared with placebo, in contrast to an observational literature that mostly reports a moderate increase in risk with estrogen-alone preparations. In 1993–2004 at 40 US clinical centers, breast cancer hazard ratio estimates for this CEE regimen were compared between the Women’s Health Initiative clinical trial and observational study toward understanding this apparent discrepancy and refining hazard ratio estimates. After control for prior use of postmenopausal hormone therapy and for confounding factors, CEE hazard ratio estimates were higher from the observational study compared with the clinical trial by 43% (p = 0.12). However, after additional control for time from menopause to first use of postmenopausal hormone therapy, the hazard ratios agreed closely between the two cohorts (p = 0.82). For women who begin use soon after menopause, combined analyses of clinical trial and observational study data do not provide clear evidence of either an overall reduction or an increase in breast cancer risk with CEEs, although hazard ratios appeared to be relatively higher among women having certain breast cancer risk factors or a low body mass index. PMID:18448442

  12. Validation of attenuation models for ground motion applications in central and eastern North America

    DOE PAGES

    Pasyanos, Michael E.

    2015-11-01

    Recently developed attenuation models are incorporated into standard one-dimensional (1-D) ground motion prediction equations (GMPEs), effectively making them two-dimensional (2-D) and eliminating the need to create different GMPEs for an increasing number of sub-regions. The model is tested against a data set of over 10,000 recordings from 81 earthquakes in North America. The use of attenuation models in GMPEs improves our ability to fit observed ground motions and should be incorporated into future national hazard maps. The improvement is most significant at higher frequencies and longer distances, which have a greater number of wave cycles. This has implications for the rare high-magnitude earthquakes, which produce potentially damaging ground motions over wide areas and drive the seismic hazard. Furthermore, because the attenuation models can be created using weak ground motions, they could be developed for regions of low seismicity where empirical recordings of ground motions are uncommon and do not span the full range of magnitudes and distances.

  13. Validation of attenuation models for ground motion applications in central and eastern North America

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasyanos, Michael E.

    Recently developed attenuation models are incorporated into standard one-dimensional (1-D) ground motion prediction equations (GMPEs), effectively making them two-dimensional (2-D) and eliminating the need to create different GMPEs for an increasing number of sub-regions. The model is tested against a data set of over 10,000 recordings from 81 earthquakes in North America. The use of attenuation models in GMPEs improves our ability to fit observed ground motions and should be incorporated into future national hazard maps. The improvement is most significant at higher frequencies and longer distances, which have a greater number of wave cycles. This has implications for the rare high-magnitude earthquakes, which produce potentially damaging ground motions over wide areas and drive the seismic hazard. Furthermore, because the attenuation models can be created using weak ground motions, they could be developed for regions of low seismicity where empirical recordings of ground motions are uncommon and do not span the full range of magnitudes and distances.

  14. Extinction maps toward the Milky Way bulge: Two-dimensional and three-dimensional tests with apogee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultheis, M.; Zasowski, G.; Allende Prieto, C.

    Galactic interstellar extinction maps are powerful and necessary tools for Milky Way structure and stellar population analyses, particularly toward the heavily reddened bulge and in the midplane. However, due to the difficulty of obtaining reliable extinction measures and distances for a large number of stars that are independent of these maps, tests of their accuracy and systematics have been limited. Our goal is to assess a variety of photometric stellar extinction estimates, including both two-dimensional and three-dimensional extinction maps, using independent extinction measures based on a large spectroscopic sample of stars toward the Milky Way bulge. We employ stellar atmospheric parameters derived from high-resolution H-band Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectra, combined with theoretical stellar isochrones, to calculate line-of-sight extinction and distances for a sample of more than 2400 giants toward the Milky Way bulge. We compare these extinction values to those predicted by individual near-IR and near+mid-IR stellar colors, two-dimensional bulge extinction maps, and three-dimensional extinction maps. The long baseline, near+mid-IR stellar colors are, on average, the most accurate predictors of the APOGEE extinction estimates, and the two-dimensional and three-dimensional extinction maps derived from different stellar populations along different sightlines show varying degrees of reliability. We present the results of all of the comparisons and discuss reasons for the observed discrepancies. We also demonstrate how the particular stellar atmospheric models adopted can have a strong impact on this type of analysis, and discuss related caveats.

  15. Matching on the Disease Risk Score in Comparative Effectiveness Research of New Treatments

    PubMed Central

    Wyss, Richard; Ellis, Alan R.; Brookhart, M. Alan; Funk, Michele Jonsson; Girman, Cynthia J.; Simpson, Ross J.; Stürmer, Til

    2016-01-01

    Purpose We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. Methods We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. Results In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. Conclusions When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS. PMID:26112690

  16. Matching on the disease risk score in comparative effectiveness research of new treatments.

    PubMed

    Wyss, Richard; Ellis, Alan R; Brookhart, M Alan; Jonsson Funk, Michele; Girman, Cynthia J; Simpson, Ross J; Stürmer, Til

    2015-09-01

    We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS. Copyright © 2015 John Wiley & Sons, Ltd.
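
    A hedged sketch of the DRS-matching workflow described above: the outcome model is fitted in a historical (all-untreated) cohort, everyone in the current cohort is scored, and treated patients are matched 1:1 to untreated patients on the DRS within a caliper. Matching here is with replacement for brevity, and all data and the caliper choice are illustrative.

        # Hedged sketch of historically estimated disease risk score (DRS) matching.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(7)
        p, n_hist, n_cur = 20, 8000, 4000
        X_hist = rng.normal(size=(n_hist, p))                # historical era, all untreated
        y_hist = rng.binomial(1, 1 / (1 + np.exp(-(X_hist[:, :5].sum(axis=1) - 2))))

        X_cur = rng.normal(size=(n_cur, p))                  # current era: new drug available
        treated = rng.binomial(1, 1 / (1 + np.exp(-X_cur[:, 0])))

        # Historically estimated DRS: predicted outcome risk under no treatment
        drs_model = LogisticRegression(max_iter=2000).fit(X_hist, y_hist)
        drs = drs_model.predict_proba(X_cur)[:, 1]

        # 1:1 nearest-neighbour matching of treated to untreated on the DRS, with a caliper
        # (with replacement here for brevity; real studies typically match without replacement)
        caliper = 0.2 * drs.std()
        untreated_idx = np.flatnonzero(treated == 0)
        nn = NearestNeighbors(n_neighbors=1).fit(drs[untreated_idx].reshape(-1, 1))
        dist, _ = nn.kneighbors(drs[treated == 1].reshape(-1, 1))
        matched = dist[:, 0] <= caliper
        print(f"matched {matched.sum()} of {int(treated.sum())} treated patients")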

  17. Canister Storage Building (CSB) Hazard Analysis Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    POWERS, T.B.

    2000-03-16

    This report describes the methodology used in conducting the Canister Storage Building (CSB) hazard analysis to support the CSB final safety analysis report (FSAR) and documents the results. The hazard analysis process identified hazardous conditions and material-at-risk, determined causes for potential accidents, identified preventive and mitigative features, and qualitatively estimated the frequencies and consequences of specific occurrences. The hazard analysis was performed by a team of cognizant CSB operations and design personnel, safety analysts familiar with the CSB, and technical experts in specialty areas. The material included in this report documents the final state of a nearly two-year-long process involving formal facilitated group sessions and independent hazard and accident analysis work. Attachment A provides two lists of hazard analysis team members and describes the background and experience of each. The first list is a complete list of the hazard analysis team members involved over the two-year-long process. The second list is a subset of the first and consists of those team members who reviewed and agreed to the final hazard analysis documentation. The hazard analysis process led to the selection of candidate accidents for further quantitative analysis. New information relative to the hazards, discovered during the accident analysis, was incorporated into the hazard analysis data in order to compile a complete profile of facility hazards. Through this process, the results of the hazard and accident analyses led directly to the identification of safety structures, systems, and components, technical safety requirements, and other controls required to protect the public, workers, and environment.

  18. Emergency assessment of postwildfire debris-flow hazards for the 2011 Motor Fire, Sierra and Stanislaus National Forests, California

    USGS Publications Warehouse

    Cannon, Susan H.; Michael, John A.

    2011-01-01

    This report presents an emergency assessment of potential debris-flow hazards from basins burned by the 2011 Motor fire in the Sierra and Stanislaus National Forests, Calif. Statistical-empirical models are used to estimate the probability and volume of debris flows that may be produced from burned drainage basins as a function of different measures of basin burned extent, gradient, and soil physical properties, and in response to a 30-minute-duration, 10-year-recurrence rainstorm. Debris-flow probability and volume estimates are then combined to form a relative hazard ranking for each basin. This assessment provides critical information for issuing warnings, locating and designing mitigation measures, and planning evacuation timing and routes within the first two years following the fire.

  19. Fifty-year flood-inundation maps for Choluteca, Honduras

    USGS Publications Warehouse

    Kresch, David L.; Mastin, Mark C.; Olsen, T.D.

    2002-01-01

    After the devastating floods caused by Hurricane Mitch in 1998, maps of the areas and depths of 50-year-flood inundation at 15 municipalities in Honduras were prepared as a tool for agencies involved in reconstruction and planning. This report, which is one in a series of 15, presents maps of areas in the municipality of Choluteca that would be inundated by 50-year floods on Rio Choluteca and Rio Iztoca. Geographic Information System (GIS) coverages of the flood inundation are available on a computer in the municipality of Choluteca as part of the Municipal GIS project and on the Internet at the Flood Hazard Mapping Web page (http://mitchnts1.cr.usgs.gov/projects/floodhazard.html). These coverages allow users to view the flood inundation in much more detail than is possible using the maps in this report. Water-surface elevations for 50-year-floods on Rio Choluteca and Rio Iztoca at Choluteca were estimated using HEC-RAS, a one-dimensional, steady-flow, step-backwater computer program. The channel and floodplain cross sections used in HEC-RAS were developed from an airborne light-detection-and-ranging (LIDAR) topographic survey of the area. The estimated 50-year-flood discharge for Rio Choluteca at Choluteca is 4,620 cubic meters per second, which is the drainage-area-adjusted weighted-average of two independently estimated 50-year-flood discharges for the gaging station Rio Choluteca en Puente Choluteca. One discharge, 4,913 cubic meters per second, was estimated from a frequency analysis of the 17 years of peak discharge record for the gage, and the other, 2,650 cubic meters per second, was estimated from a regression equation that relates the 50-year-flood discharge to drainage area and mean annual precipitation. The weighted-average of the two discharges at the gage is 4,530 cubic meters per second. The 50-year-flood discharge for the study area reach of Rio Choluteca was estimated by multiplying the weighted discharge at the gage by the ratio of the drainage areas upstream from the two locations. The 50-year-flood discharge for Rio Iztoca, which was estimated from the regression equation, is 430 cubic meters per second.
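
    As a back-of-the-envelope check on the quoted figures (see the sketch below), the implied weight placed on the gage frequency analysis and the implied drainage-area scaling factor can be recovered directly from the reported discharges; this is arithmetic on the stated values only, not the study's actual weighting procedure.

        # Arithmetic check on the reported discharges; not the study's weighting formula.
        q_freq, q_regr, q_weighted, q_reach = 4913.0, 2650.0, 4530.0, 4620.0  # m^3/s

        w = (q_weighted - q_regr) / (q_freq - q_regr)   # implied weight on the frequency-analysis value
        ratio = q_reach / q_weighted                    # implied drainage-area scaling factor
        print(f"implied weight on gage frequency analysis: {w:.2f}")
        print(f"implied drainage-area ratio (study reach / gage): {ratio:.3f}")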

  20. Airborne LiDAR analysis and geochronology of faulted glacial moraines in the Tahoe-Sierra frontal fault zone reveal substantial seismic hazards in the Lake Tahoe region, California-Nevada USA

    USGS Publications Warehouse

    Howle, James F.; Bawden, Gerald W.; Schweickert, Richard A.; Finkel, Robert C.; Hunter, Lewis E.; Rose, Ronn S.; von Twistern, Brent

    2012-01-01

    We integrated high-resolution bare-earth airborne light detection and ranging (LiDAR) imagery with field observations and modern geochronology to characterize the Tahoe-Sierra frontal fault zone, which forms the neotectonic boundary between the Sierra Nevada and the Basin and Range Province west of Lake Tahoe. The LiDAR imagery clearly delineates active normal faults that have displaced late Pleistocene glacial moraines and Holocene alluvium along 30 km of linear, right-stepping range front of the Tahoe-Sierra frontal fault zone. Herein, we illustrate and describe the tectonic geomorphology of faulted lateral moraines. We have developed new, three-dimensional modeling techniques that utilize the high-resolution LiDAR data to determine tectonic displacements of moraine crests and alluvium. The statistically robust displacement models combined with new ages of the displaced Tioga (20.8 ± 1.4 ka) and Tahoe (69.2 ± 4.8 ka; 73.2 ± 8.7 ka) moraines are used to estimate the minimum vertical separation rate at 17 sites along the Tahoe-Sierra frontal fault zone. Near the northern end of the study area, the minimum vertical separation rate is 1.5 ± 0.4 mm/yr, which represents a two- to threefold increase in estimates of seismic moment for the Lake Tahoe basin. From this study, we conclude that potential earthquake moment magnitudes (Mw) range from 6.3 ± 0.25 to 6.9 ± 0.25. A close spatial association of landslides and active faults suggests that landslides have been seismically triggered. Our study underscores that the Tahoe-Sierra frontal fault zone poses substantial seismic and landslide hazards.

  1. Base Pressure at Supersonic Speeds on Two-dimensional Airfoils and on Bodies of Revolution with and Without Fins Having Turbulent Boundary Layers

    NASA Technical Reports Server (NTRS)

    Love, Eugene S.

    1957-01-01

    An analysis has been made of available experimental data to show the effects of most of the variables that are more predominant in determining base pressure at supersonic speeds. The analysis covers base pressures for two-dimensional airfoils and for bodies of revolution with and without stabilizing fins and is restricted to turbulent boundary layers. The present status of available experimental information is summarized as are the existing methods for predicting base pressure. A simple semiempirical method is presented for estimating base pressure. For two-dimensional bases, this method stems from an analogy established between the base-pressure phenomena and the peak pressure rise associated with the separation of the boundary layer. An analysis made for axially symmetric flow indicates that the base pressure for bodies of revolution is subject to the same analogy. Based upon the methods presented, estimations are made of such effects as Mach number, angle of attack, boattailing, fineness ratio, and fins. These estimations give fair predictions of experimental results. (author)

  2. Quantifying the uncertainty in site amplification modeling and its effects on site-specific seismic-hazard estimation in the upper Mississippi embayment and adjacent areas

    USGS Publications Warehouse

    Cramer, C.H.

    2006-01-01

    The Mississippi embayment, located in the central United States, and its thick deposits of sediments (over 1 km in places) have a large effect on earthquake ground motions. Several previous studies have addressed how these thick sediments might modify probabilistic seismic-hazard maps. The high seismic hazard associated with the New Madrid seismic zone makes it particularly important to quantify the uncertainty in modeling site amplification to better represent earthquake hazard in seismic-hazard maps. The methodology of the Memphis urban seismic-hazard-mapping project (Cramer et al., 2004) is combined with the reference profile approach of Toro and Silva (2001) to better estimate seismic hazard in the Mississippi embayment. Improvements over previous approaches include using the 2002 national seismic-hazard model, fully probabilistic hazard calculations, calibration of site amplification with improved nonlinear soil-response estimates, and estimates of uncertainty. Comparisons are made with the results of several previous studies, and estimates of uncertainty inherent in site-amplification modeling for the upper Mississippi embayment are developed. I present new seismic-hazard maps for the upper Mississippi embayment with the effects of site geology incorporating these uncertainties.

  3. Assessing rockfall susceptibility in steep and overhanging slopes using three-dimensional analysis of failure mechanisms

    USGS Publications Warehouse

    Matasci, Battista; Stock, Greg M.; Jaboyedoff, Michael; Carrea, Dario; Collins, Brian D.; Guérin, Antoine; Matasci, G.; Ravanel, L.

    2018-01-01

    Rockfalls strongly influence the evolution of steep rocky landscapes and represent a significant hazard in mountainous areas. Defining the most probable future rockfall source areas is of primary importance for both geomorphological investigations and hazard assessment. Thus, a need exists to understand which areas of a steep cliff are more likely to be affected by a rockfall. An important analytical gap exists between regional rockfall susceptibility studies and block-specific geomechanical calculations. Here we present methods for quantifying rockfall susceptibility at the cliff scale, which is suitable for sub-regional hazard assessment (hundreds to thousands of square meters). Our methods use three-dimensional point clouds acquired by terrestrial laser scanning to quantify the fracture patterns and compute failure mechanisms for planar, wedge, and toppling failures on vertical and overhanging rock walls. As a part of this work, we developed a rockfall susceptibility index for each type of failure mechanism according to the interaction between the discontinuities and the local cliff orientation. The susceptibility for slope parallel exfoliation-type failures, which are generally hard to identify, is partly captured by planar and toppling susceptibility indexes. We tested the methods for detecting the most susceptible rockfall source areas on two famously steep landscapes, Yosemite Valley (California, USA) and the Drus in the Mont-Blanc massif (France). Our rockfall susceptibility models show good correspondence with active rockfall sources. The methods offer new tools for investigating rockfall hazard and improving our understanding of rockfall processes.
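
    The susceptibility indexes themselves are not reproduced here; as a generic illustration of comparing discontinuity orientations with the local cliff orientation, the sketch below applies a classical Markland-type kinematic feasibility test for planar sliding, with an assumed friction angle and dip-direction tolerance.

        # Generic Markland-type kinematic test for planar sliding (not the paper's index).
        import numpy as np

        def planar_feasible(disc_dip, disc_dipdir, slope_dip, slope_dipdir, friction=30.0, tol=20.0):
            """True where a discontinuity set can daylight and slide on a planar surface."""
            dirdiff = np.abs((np.asarray(disc_dipdir) - slope_dipdir + 180.0) % 360.0 - 180.0)
            return (dirdiff <= tol) & (np.asarray(disc_dip) < slope_dip) & (np.asarray(disc_dip) > friction)

        # Three discontinuity sets tested against a 75/140 (dip/dip direction) cliff face
        print(planar_feasible([40.0, 65.0, 35.0], [150.0, 320.0, 60.0],
                              slope_dip=75.0, slope_dipdir=140.0))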

  4. Estimation of the vortex length scale and intensity from two-dimensional samples

    NASA Technical Reports Server (NTRS)

    Reuss, D. L.; Cheng, W. P.

    1992-01-01

    A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, the vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral length scale.
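
    A minimal sketch of the two concepts on a synthetic 2D velocity field: low-pass spatial filtering of the velocity components at a chosen cutoff, followed by computation of the vorticity of the filtered field and location of its strongest peak (around which one could conditionally average). The Gaussian filter, its width, and the synthetic field are assumptions, not the paper's channel-flow data.

        # Hedged sketch: spatial filtering of a 2D velocity field and vorticity-peak detection.
        import numpy as np
        from scipy import ndimage

        # Synthetic 2D field: a large-scale vortex plus small-scale noise on a 1 m grid
        rng = np.random.default_rng(8)
        N, dx = 128, 1.0
        y, x = np.meshgrid(np.arange(N) * dx, np.arange(N) * dx, indexing="ij")
        r2 = (x - 64) ** 2 + (y - 64) ** 2
        u = -(y - 64) * np.exp(-r2 / 400.0) + 0.3 * rng.normal(size=(N, N))
        v = (x - 64) * np.exp(-r2 / 400.0) + 0.3 * rng.normal(size=(N, N))

        # (1) spatial filtering: the cutoff (here a Gaussian sigma) sets which structures survive
        u_f = ndimage.gaussian_filter(u, sigma=4.0)
        v_f = ndimage.gaussian_filter(v, sigma=4.0)

        # (2) vorticity of the filtered field and its strongest peak
        vort = np.gradient(v_f, dx, axis=1) - np.gradient(u_f, dx, axis=0)
        peak = np.unravel_index(np.argmax(np.abs(vort)), vort.shape)
        print("vorticity peak at grid point", peak, "value", vort[peak])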

  5. Accuracy of parameter estimates for closely spaced optical targets using multiple detectors

    NASA Astrophysics Data System (ADS)

    Dunn, K. P.

    1981-10-01

    In order to obtain the cross-scan position of an optical target, more than one scanning detector is used. As expected, the cross-scan position estimation performance degrades when two nearby optical targets interfere with each other. Theoretical bounds on the two-dimensional parameter estimation performance for two closely spaced optical targets are found. Two particular classes of scanning detector arrays, namely, the crow's foot and the brickwall (or mosaic) patterns, are considered.

  6. Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

    PubMed Central

    Li, Haoran; Xiong, Li; Jiang, Xiaoqian

    2014-01-01

    Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
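
    In the spirit of the Kendall's tau variant, the following heavily simplified sketch privatizes pairwise Kendall's tau with Laplace noise, converts it to a Gaussian-copula correlation, and samples synthetic records through the empirical marginals. The sensitivity bound, budget split, positive-definiteness fix-up, and non-private marginals are all assumptions and do not reproduce the DPCopula algorithm.

        # Heavily simplified, hedged sketch of a Gaussian-copula synthesizer with
        # Laplace-noised Kendall's tau; not the paper's DPCopula algorithm.
        import numpy as np
        from scipy import stats

        def dp_gaussian_copula_synth(data, epsilon, n_synth, rng):
            n, d = data.shape
            sens = 4.0 / (n - 1)                     # rough per-pair sensitivity bound for tau (assumed)
            eps_pair = epsilon / (d * (d - 1) / 2)   # split the privacy budget over the pairs
            R = np.eye(d)
            for i in range(d):
                for j in range(i + 1, d):
                    tau, _ = stats.kendalltau(data[:, i], data[:, j])
                    tau_noisy = np.clip(tau + rng.laplace(scale=sens / eps_pair), -0.999, 0.999)
                    R[i, j] = R[j, i] = np.sin(np.pi * tau_noisy / 2.0)   # tau -> Gaussian rho
            R += 1e-3 * np.eye(d)                    # crude fix-up to keep R positive definite
            z = rng.multivariate_normal(np.zeros(d), R, size=n_synth)
            u = stats.norm.cdf(z)
            # Push the copula samples through the (non-private) empirical marginals
            return np.column_stack([np.quantile(data[:, k], u[:, k]) for k in range(d)])

        rng = np.random.default_rng(10)
        real = rng.multivariate_normal([0, 0, 0], [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]], size=2000)
        synth = dp_gaussian_copula_synth(real, epsilon=1.0, n_synth=2000, rng=rng)
        print(np.corrcoef(synth, rowvar=False).round(2))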

  7. Ab initio quantum mechanical calculation of the reaction probability for the Cl-+PH2Cl→ClPH2+Cl- reaction

    NASA Astrophysics Data System (ADS)

    Farahani, Pooria; Lundberg, Marcus; Karlsson, Hans O.

    2013-11-01

    The SN2 substitution reactions at phosphorus play a key role in organic and biological processes. Quantum molecular dynamics simulations have been performed to study the prototype reaction Cl-+PH2Cl→ClPH2+Cl-, using one and two-dimensional models. A potential energy surface, showing an energy well for a transition complex, was generated using ab initio electronic structure calculations. The one-dimensional model is essentially reflection free, whereas the more realistic two-dimensional model displays involved resonance structures in the reaction probability. The reaction rate is almost two orders of magnitude smaller for the two-dimensional compared to the one-dimensional model. Energetic errors in the potential energy surface is estimated to affect the rate by only a factor of two. This shows that for these types of reactions it is more important to increase the dimensionality of the modeling than to increase the accuracy of the electronic structure calculation.

  8. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.

  9. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend but differing for different models.
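
    The variance decomposition behind the argument, Var(est − true) = Var(est) + Var(true) − 2·Cov(est, true), can be checked with a small simulation; the sketch below uses an assumed two-class Gaussian model, an LDA classification rule, and 5-fold cross-validation purely as illustrative stand-ins for the study's experimental conditions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
d, n_train, n_test, n_rep = 20, 40, 5000, 200
true_err, est_err = [], []
for _ in range(n_rep):
    # two Gaussian classes separated along the first feature only
    X = rng.normal(size=(n_train, d)); y = np.repeat([0, 1], n_train // 2)
    X[y == 1, 0] += 1.0
    clf = LinearDiscriminantAnalysis().fit(X, y)
    est_err.append(1.0 - cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
    T = rng.normal(size=(n_test, d)); ty = np.repeat([0, 1], n_test // 2)
    T[ty == 1, 0] += 1.0
    true_err.append(1.0 - clf.score(T, ty))          # "true" error approximated on a large test set
true_err, est_err = np.array(true_err), np.array(est_err)
dev = est_err - true_err
print(f"corr(true, est) = {np.corrcoef(true_err, est_err)[0, 1]:.2f}")
print(f"Var(dev) = {dev.var():.5f} vs Var(est) + Var(true) - 2 Cov = "
      f"{est_err.var() + true_err.var() - 2 * np.cov(est_err, true_err, bias=True)[0, 1]:.5f}")
```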

  10. Regional Detection of Decoupled Explosions, Yield Estimation from Surface Waves, Two-Dimensional Source Effects, Three-Dimensional Earthquake Modeling and Automated Magnitude Measures

    DTIC Science & Technology

    1980-07-01

    [Fragmented record; only table-of-contents headings and a partial abstract are recoverable: "Experimental Determination of the Dependence of Rayleigh Wave Amplitude on Properties of the Source Material"; "Surface Wave Observations"; "Surface Wave Dependence on Source Material Properties"; "... with various aspects of the problem of estimating yield from single station recordings of surface waves. The material in these four summaries has been ..."]

  11. Signal Estimation, Inverse Scattering, and Problems in One and Two Dimensions.

    DTIC Science & Technology

    1982-11-01

    attention to implication for new estimation algorithms and signal processing and, to a lesser extent, for system theory. The publications resulting ... from the work are listed by category and date. They are briefly organized and reviewed under five major headings: (1) Two-Dimensional System Theory; (2) ...

  12. Recent Progress in Understanding Natural-Hazards-Generated TEC Perturbations: Measurements and Modeling Results

    NASA Astrophysics Data System (ADS)

    Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.

    2015-12-01

    Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of early detection of natural-hazards-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.
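
    For readers unfamiliar with the TEC observable mentioned above, the sketch below shows the standard dual-frequency combination of GPS code measurements; satellite and receiver inter-frequency biases, carrier-phase levelling, and cycle-slip handling are omitted, and the example numbers are made up.

```python
F1, F2 = 1575.42e6, 1227.60e6   # GPS L1 and L2 carrier frequencies (Hz)

def slant_tec_from_code(p1_m, p2_m):
    """Slant TEC (in TEC units, 1 TECU = 1e16 el/m^2) from L1/L2 pseudoranges in metres."""
    k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return k * (p2_m - p1_m) / 1e16

# example: a 3.5 m ionospheric code divergence corresponds to roughly 33 TECU
print(f"{slant_tec_from_code(20000000.0, 20000003.5):.1f} TECU")
```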

  13. Fast Two-Dimensional Bubble Analysis of Biopolymer Filamentous Networks Pore Size from Confocal Microscopy Thin Data Stacks

    PubMed Central

    Molteni, Matteo; Magatti, Davide; Cardinali, Barbara; Rocco, Mattia; Ferri, Fabio

    2013-01-01

    The average pore size ξ0 of filamentous networks assembled from biological macromolecules is one of the most important physical parameters affecting their biological functions. Modern optical methods, such as confocal microscopy, can noninvasively image such networks, but extracting a quantitative estimate of ξ0 is a nontrivial task. We present here a fast and simple method based on a two-dimensional bubble approach, which works by analyzing one by one the (thresholded) images of a series of three-dimensional thin data stacks. No skeletonization or reconstruction of the full geometry of the entire network is required. The method was validated by using many isotropic in silico generated networks of different structures, morphologies, and concentrations. For each type of network, the method provides accurate estimates (a few percent) of the average and the standard deviation of the three-dimensional distribution of the pore sizes, defined as the diameters of the largest spheres that can be fit into the pore zones of the entire gel volume. When applied to the analysis of real confocal microscopy images taken on fibrin gels, the method provides an estimate of ξ0 consistent with results from elastic light scattering data. PMID:23473499
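
    A simplified stand-in for the bubble idea on a single thresholded slice, assuming an isotropic binary fiber mask: the Euclidean distance transform of the pore phase gives the radius of the largest disc that fits at each pore pixel, and local maxima of that map play the role of the 'bubbles'. The window size, pixel size, and the random test mask are illustrative only.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def bubble_pore_sizes(fibers, pixel_size_um=1.0, window=5):
    """Return diameters (um) of locally largest inscribed discs in the pore space."""
    pore = ~fibers.astype(bool)
    dist = distance_transform_edt(pore) * pixel_size_um      # radius of the largest disc at each pixel
    local_max = (dist == maximum_filter(dist, size=window)) & pore & (dist > 0)
    return 2.0 * dist[local_max]                             # diameters of the "bubbles"

# toy example: a sparse random fiber mask
rng = np.random.default_rng(0)
mask = rng.random((256, 256)) < 0.02
d = bubble_pore_sizes(mask, pixel_size_um=0.2)
print(f"mean pore size ~ {d.mean():.2f} um, sd ~ {d.std():.2f} um")
```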

  14. Exploring load, velocity, and surface disorder dependence of friction with one-dimensional and two-dimensional models.

    PubMed

    Dagdeviren, Omur E

    2018-08-03

    The effect of surface disorder, load, and velocity on friction between a single asperity contact and a model surface is explored with one-dimensional and two-dimensional Prandtl-Tomlinson (PT) models. We show that there are fundamental physical differences between the predictions of one-dimensional and two-dimensional models. The one-dimensional model estimates a monotonic increase in friction and energy dissipation with load, velocity, and surface disorder. However, a two-dimensional PT model, which is expected to approximate a tip-sample system more realistically, reveals a non-monotonic trend, i.e. friction is insensitive to surface disorder and roughness in the wearless friction regime. The two-dimensional model shows that the surface disorder starts to dominate the friction and energy dissipation when the tip and the sample interact predominantly deep into the repulsive regime. Our numerical calculations indicate that tracking the minimum energy path and the slip-stick motion are two competing effects that determine the load, velocity, and surface disorder dependence of friction. In the two-dimensional model, the single asperity can follow the minimum energy path in the wearless regime; however, with increasing load and sliding velocity, the slip-stick movement dominates the dynamic motion and results in an increase in friction by impeding the tracing of the minimum energy path. Contrary to the two-dimensional model, when the one-dimensional PT model is employed, the single asperity cannot escape the energy minimum due to its constrained motion and reveals only a trivial dependence of friction on load, velocity, and surface disorder. Our computational analyses clarify the physical differences between the predictions of the one-dimensional and two-dimensional models and open new avenues for disordered surfaces for low energy dissipation applications in the wearless friction regime.
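
    A minimal sketch of the one-dimensional PT model referred to above: an overdamped tip in a sinusoidal corrugation potential dragged through a spring at constant support velocity, with friction taken as the time-averaged spring force. All parameter values are nondimensional and illustrative; the paper's two-dimensional and disordered variants extend this same construction.

```python
import numpy as np

def pt_friction_1d(U0=1.0, a=1.0, k=2.0, v=0.01, gamma=1.0, dt=0.01, steps=200_000):
    """Time-averaged lateral spring force for an overdamped tip dragged over a sinusoidal potential."""
    x, forces = 0.0, []
    for n in range(steps):
        support = v * n * dt                                        # spring support moves at constant velocity
        f_surface = -(np.pi * U0 / a) * np.sin(2 * np.pi * x / a)   # -dV/dx for V(x) = -(U0/2) cos(2*pi*x/a)
        f_spring = k * (support - x)
        x += (dt / gamma) * (f_surface + f_spring)                  # overdamped (noise-free Langevin) update
        forces.append(f_spring)
    return np.mean(forces[steps // 2:])                             # discard the initial transient

print(f"mean friction force ~ {pt_friction_1d():.2f} (nondimensional units)")
```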

  15. Upstream Structural Management Measures for an Urban Area Flooding in Turkey and their Consequences on Flood Risk Management

    NASA Astrophysics Data System (ADS)

    Akyurek, Z.; Bozoglu, B.; Girayhan, T.

    2015-12-01

    Flooding has the potential to cause significant impacts to economic activities as well as to disrupt or displace populations. Changing climate regimes such as extreme precipitation events increase flood vulnerability and put additional stresses on infrastructure. In this study the flood modelling in an urbanized area, namely Samsun-Terme in the Black Sea region of Turkey, is done. MIKE21 with a flexible grid is used in two-dimensional shallow water flow modelling. 1/1000 scaled maps with the buildings for the urbanized area and 1/5000 scaled maps for the rural parts are used to obtain the DTM needed in the flood modelling. The bathymetry of the river is obtained from additional surveys. The main river passing through the urbanized area has a capacity of Q5 according to the design discharge obtained by simple ungauged discharge estimation depending on catchment area only. The effects of the available structures like bridges across the river on the flooding are presented. The upstream structural measures are studied on a scenario basis. Four sub-catchments of the Terme River are considered as contributing to the downstream flooding. Under existing conditions, the meanders of the Terme River have a major effect on the flood situation and lead to approximately 35% reduction in the peak discharge between upstream and downstream of the river. It is observed that if the flow from the upstream catchments can be retarded through detention ponds constructed in at least two of the upstream catchments, the estimated Q100 flood can be conveyed by the river without overtopping from the river channel. The operation of the upstream detention ponds and the scenarios to convey Q500 without causing flooding are also presented. Structural management measures to address changes in flood characteristics in water management planning are discussed. Flood risk is obtained by using the flood hazard maps and water depth-damage functions plotted for a variety of building types and occupancies. The estimated mean annual hazard for the area is calculated as $340 000, and it is estimated that the upstream structural management measures can decrease the direct economic risk by 11% for the 500-year return period flood.
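
    A hedged sketch of the risk calculation described in the last two sentences: interpolate a damage fraction from a depth-damage curve for each building, then integrate total damage over annual exceedance probability to get an expected annual damage. The depth-damage curve, building values, and return periods below are placeholders, not the study's data.

```python
import numpy as np

def building_damage(depth_m, value, curve=((0.0, 0.0), (0.5, 0.2), (1.0, 0.4), (2.0, 0.7), (4.0, 1.0))):
    """Interpolate a damage fraction from a depth-damage curve and scale by building value."""
    depths, fractions = zip(*curve)
    return value * np.interp(depth_m, depths, fractions)

def expected_annual_damage(return_periods, damages):
    """Crude trapezoidal integral of damage over annual exceedance probability (1/T)."""
    p = 1.0 / np.asarray(return_periods, dtype=float)
    order = np.argsort(p)
    return float(np.trapz(np.asarray(damages, dtype=float)[order], p[order]))

# toy scenario: total direct damages estimated from depth maps for three return periods
print(expected_annual_damage([5, 100, 500], [0.2e6, 2.5e6, 4.0e6]))
```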

  16. Estimating the Maximum Magnitude of Induced Earthquakes With Dynamic Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Gilmour, E.; Daub, E. G.

    2017-12-01

    Seismicity in Oklahoma has been sharply increasing as the result of wastewater injection. The earthquakes, thought to be induced by changes in pore pressure due to fluid injection, nucleate along existing faults. Induced earthquakes currently dominate central and eastern United States seismicity (Keranen et al. 2016). Induced earthquakes have only been occurring in the central US for a short time; therefore, too few induced earthquakes have been observed in this region to know their maximum magnitude. The lack of knowledge regarding the maximum magnitude of induced earthquakes means that large uncertainties exist in the seismic hazard for the central United States. While induced earthquakes follow the Gutenberg-Richter relation (van der Elst et al. 2016), it is unclear if there are limits to their magnitudes. An estimate of the maximum magnitude of the induced earthquakes is crucial for understanding their impact on seismic hazard. While other estimates of the maximum magnitude exist, those estimates are observational or statistical, and cannot take into account the possibility of larger events that have not yet been observed. Here, we take a physical approach to studying the maximum magnitude based on dynamic rupture simulations. We run a suite of two-dimensional rupture simulations to physically determine how ruptures propagate. The simulations use the known parameters of principal stress orientation and rupture locations. We vary the other unknown parameters of the rupture simulations to obtain a large number of rupture simulation results reflecting different possible sets of parameters, and use these results to train a neural network to complete the rupture simulations. Then, using a Markov Chain Monte Carlo method to check different combinations of parameters, the trained neural network is used to create synthetic magnitude-frequency distributions to compare to the real earthquake catalog. This method allows us to find sets of parameters that are consistent with earthquakes observed in Oklahoma and find which parameters affect the rupture propagation. Our results show that the stress orientation and magnitude, pore pressure, and friction properties combine to determine the final magnitude of the simulated event.

  17. Simulating geriatric home safety assessments in a three-dimensional virtual world.

    PubMed

    Andrade, Allen D; Cifuentes, Pedro; Mintzer, Michael J; Roos, Bernard A; Anam, Ramanakumar; Ruiz, Jorge G

    2012-01-01

    Virtual worlds could offer inexpensive and safe three-dimensional environments in which medical trainees can learn to identify home safety hazards. Our aim was to evaluate the feasibility, usability, and acceptability of virtual worlds for geriatric home safety assessments and to correlate performance efficiency in hazard identification with spatial ability, self-efficacy, cognitive load, and presence. In this study, 30 medical trainees found the home safety simulation easy to use, and their self-efficacy was improved. Men performed better than women in hazard identification. Presence and spatial ability were correlated significantly with performance. Educators should consider spatial ability and gender differences when implementing virtual world training for geriatric home safety assessments.

  18. On-Line Use of Three-Dimensional Marker Trajectory Estimation From Cone-Beam Computed Tomography Projections for Precise Setup in Radiotherapy for Targets With Respiratory Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worm, Esben S., E-mail: esbeworm@rm.dk; Department of Medical Physics, Aarhus University Hospital, Aarhus; Hoyer, Morten

    2012-05-01

    Purpose: To develop and evaluate accurate and objective on-line patient setup based on a novel semiautomatic technique in which three-dimensional marker trajectories were estimated from two-dimensional cone-beam computed tomography (CBCT) projections. Methods and Materials: Seven treatment courses of stereotactic body radiotherapy for liver tumors were delivered in 21 fractions in total to 6 patients by a linear accelerator. Each patient had two to three gold markers implanted close to the tumors. Before treatment, a CBCT scan with approximately 675 two-dimensional projections was acquired during a full gantry rotation. The marker positions were segmented in each projection. From this, the three-dimensional marker trajectories were estimated using a probability based method. The required couch shifts for patient setup were calculated from the mean marker positions along the trajectories. A motion phantom moving with known tumor trajectories was used to examine the accuracy of the method. Trajectory-based setup was retrospectively used off-line for the first five treatment courses (15 fractions) and on-line for the last two treatment courses (6 fractions). Automatic marker segmentation was compared with manual segmentation. The trajectory-based setup was compared with setup based on conventional CBCT guidance on the markers (first 15 fractions). Results: Phantom measurements showed that trajectory-based estimation of the mean marker position was accurate within 0.3 mm. The on-line trajectory-based patient setup was performed within approximately 5 minutes. The automatic marker segmentation agreed with manual segmentation within 0.36 ± 0.50 pixels (mean ± SD; pixel size, 0.26 mm in isocenter). The accuracy of conventional volumetric CBCT guidance was compromised by motion smearing (≤21 mm) that induced an absolute three-dimensional setup error of 1.6 ± 0.9 mm (maximum, 3.2 mm) relative to trajectory-based setup. Conclusions: The first on-line clinical use of trajectory estimation from CBCT projections for precise setup in stereotactic body radiotherapy was demonstrated. Uncertainty in the conventional CBCT-based setup procedure was eliminated with the new method.

  19. Seismic velocity site characterization of 10 Arizona strong-motion recording stations by spectral analysis of surface wave dispersion

    USGS Publications Warehouse

    Kayen, Robert E.; Carkin, Brad A.; Corbett, Skye C.

    2017-10-19

    Vertical one-dimensional shear wave velocity (VS) profiles are presented for strong-motion sites in Arizona for a suite of stations surrounding the Palo Verde Nuclear Generating Station. The purpose of the study is to determine the detailed site velocity profile, the average velocity in the upper 30 meters of the profile (VS30), the average velocity for the entire profile (VSZ), and the National Earthquake Hazards Reduction Program (NEHRP) site classification. The VS profiles are estimated using a non-invasive continuous-sine-wave method for gathering the dispersion characteristics of surface waves. Shear wave velocity profiles were inverted from the averaged dispersion curves using three independent methods for comparison, and the root-mean-square combined coefficient of variation (COV) of the dispersion and inversion calculations are estimated for each site.
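
    The VS30 quantity mentioned above is the standard time-averaged velocity of the top 30 meters; a minimal sketch, assuming a layered profile given as thicknesses and velocities, is shown below with illustrative layer values.

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear-wave velocity over the top 30 m of a layered profile."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        if depth >= 30.0:
            break
        use = min(h, 30.0 - depth)      # clip the layer at 30 m depth
        travel_time += use / v
        depth += use
    if depth < 30.0:                    # extend the deepest layer if the profile is shallower than 30 m
        travel_time += (30.0 - depth) / velocities_mps[-1]
    return 30.0 / travel_time

# illustrative three-layer profile
print(f"VS30 = {vs30([5, 10, 20], [200, 400, 760]):.0f} m/s")
```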

  20. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Treesearch

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

    Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites for the purpose of estimating precipitation-borne ionic inputs for specific points or regions has failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....
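
    As context, one of the simpler two-dimensional interpolators of the kind evaluated in such studies is inverse-distance weighting; the sketch below applies it to hypothetical ion-concentration values at scattered monitoring sites (coordinates and concentrations are made up).

```python
import numpy as np

def idw(sites_xy, values, target_xy, power=2.0):
    """Inverse-distance-weighted estimate at a target point from scattered site values."""
    d = np.linalg.norm(np.asarray(sites_xy, dtype=float) - np.asarray(target_xy, dtype=float), axis=1)
    if np.any(d == 0):                       # target coincides with a monitoring site
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.dot(w, values) / w.sum())

sites = [(0.0, 0.0), (40.0, 10.0), (15.0, 35.0)]          # km, hypothetical site coordinates
sulfate = np.array([18.0, 24.0, 30.0])                     # ueq/L, hypothetical concentrations
print(f"{idw(sites, sulfate, (20.0, 20.0)):.1f} ueq/L")
```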

  1. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
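
    A minimal sketch of the core idea, assuming Gaussian basis functions and a made-up two-input test system: the approximation is a sum of one-dimensional curves plus a coarse two-dimensional surface term, and because the basis functions are fixed, the unknown parameters enter linearly and are estimated by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=(500, 2))                            # two inputs
y = np.sin(3 * x[:, 0]) + x[:, 1]**2 + 0.3 * x[:, 0] * x[:, 1]   # unknown static system (made up)

def gaussian_bases(u, centers, width=0.3):
    """Fixed Gaussian basis functions evaluated at the samples u."""
    return np.exp(-((u[:, None] - centers[None, :]) / width) ** 2)

c = np.linspace(-1, 1, 9)
# one-dimensional "curves" in x1 and x2, plus a coarse two-dimensional "surface" term
Phi = np.hstack([gaussian_bases(x[:, 0], c),
                 gaussian_bases(x[:, 1], c),
                 (gaussian_bases(x[:, 0], c)[:, :, None] *
                  gaussian_bases(x[:, 1], c)[:, None, :]).reshape(len(x), -1)])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)                  # linear parameter estimation
print(f"fit residual rms = {np.sqrt(np.mean((Phi @ theta - y) ** 2)):.3e}")
```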

  2. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  3. Fifty-year flood-inundation maps for Juticalpa, Honduras

    USGS Publications Warehouse

    Kresch, David L.; Mastin, M.C.; Olsen, T.D.

    2002-01-01

    After the devastating floods caused by Hurricane Mitch in 1998, maps of the areas and depths of 50-year-flood inundation at 15 municipalities in Honduras were prepared as a tool for agencies involved in reconstruction and planning. This report, which is one in a series of 15, presents maps of areas in the municipality of Juticalpa that would be inundated by a 50-year flood of Rio Juticalpa. Geographic Information System (GIS) coverages of the flood inundation are available on a computer in the municipality of Juticalpa as part of the Municipal GIS project and on the Internet at the Flood Hazard Mapping Web page (http://mitchnts1.cr.usgs.gov/projects/floodhazard.html). These coverages allow users to view the flood inundation in much more detail than is possible using the maps in this report. Water-surface elevations for a 50-year flood on Rio Juticalpa at Juticalpa were estimated using HEC-RAS, a one-dimensional, steady-flow, step-backwater computer program. The channel and floodplain cross sections used in HEC-RAS were developed from an airborne light-detection-and-ranging (LIDAR) topographic survey of the area. The estimated 50-year-flood discharge for Rio Juticalpa at Juticalpa, 1,360 cubic meters per second, was computed as the drainage-area-adjusted weighted average of two independently estimated 50-year-flood discharges for the gaging station Rio Juticalpa en El Torito, located about 2 kilometers upstream from Juticalpa. One discharge, 1,551 cubic meters per second, was estimated from a frequency analysis of the 33 years of peak-discharge record for the gage, and the other, 486 cubic meters per second, was estimated from a regression equation that relates the 50-year-flood discharge to drainage area and mean annual precipitation. The weighted average of the two discharges at the gage is 1,310 cubic meters per second. The 50-year flood discharge for the study area reach of Rio Juticalpa was estimated by multiplying the weighted discharge at the gage by the ratio of the drainage areas upstream from the two locations.
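
    A hedged sketch of the two-step discharge estimate described above: weight the station-based and regression-based 50-year discharges at the gage, then transfer the result to the study reach with a drainage-area ratio. The equivalent years of record assigned to the regression estimate, the drainage areas, and the exponent are placeholders, not the report's values.

```python
def weighted_discharge(q_station_yrs, q_regression, regression_equiv_yrs):
    """Record-length-weighted average of a station estimate and a regression estimate."""
    q_sta, yrs = q_station_yrs
    return (q_sta * yrs + q_regression * regression_equiv_yrs) / (yrs + regression_equiv_yrs)

def transfer_by_area(q_gage, area_gage_km2, area_site_km2, exponent=1.0):
    """Drainage-area-ratio transfer of a discharge from the gage to the study reach."""
    return q_gage * (area_site_km2 / area_gage_km2) ** exponent

# station: 1,551 m^3/s from 33 years of record; regression: 486 m^3/s with an assumed 10 equivalent years
q_gage = weighted_discharge((1551.0, 33), 486.0, regression_equiv_yrs=10)   # ~1,300 m^3/s
print(round(q_gage), round(transfer_by_area(q_gage, 740.0, 770.0)))         # drainage areas hypothetical
```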

  4. Global Natural Disaster Risk Hotspots: Transition to a Regional Approach

    NASA Astrophysics Data System (ADS)

    Lerner-Lam, A.; Chen, R.; Dilley, M.

    2005-12-01

    The "Hotspots Project" is a collaborative study of the global distribution and occurrence of multiple natural hazards and the associated exposures of populations and their economic output. In this study we assess the global risks of two disaster-related outcomes: mortality and economic losses. We estimate risk levels by combining hazard exposure with historical vulnerability for two indicators of elements at risk-gridded population and Gross Domestic Product (GDP) per unit area - for six major natural hazards: earthquakes, volcanoes, landslides, floods, drought, and cyclones. By calculating relative risks for each grid cell rather than for countries as a whole, we are able to estimate risk levels at sub-national scales. These can then be used to estimate aggregate relative multiple hazard risk at regional and national scales. Mortality-related risks are assessed on a 2.5' x 2.5' latitude-longitude grid of global population (GPW Version 3). Economic risks are assessed at the same resolution for gridded GDP per unit area, using World Bank estimates of GDP based on purchasing power parity. Global hazard data were compiled from multiple sources. The project collaborated directly with UNDP and UNEP, the International Research Institute for Climate Prediction (IRI) at Columbia, and the Norwegian Geotechnical Institute (NGI) in the creation of data sets for several hazards for which global data sets did not previously exist. Drought, flood and volcano hazards are characterized in terms of event frequency, storms by frequency and severity, earthquakes by frequency and ground acceleration exceedance probability, and landslides by an index derived from probability of occurrence. The global analysis undertaken in this project is clearly limited by issues of scale as well as by the availability and quality of data. For some hazards, there exist only 15- to 25-year global records with relatively crude spatial information. Data on historical disaster losses, and particularly on economic losses, are also limited. On one hand the data are adequate for general identification of areas of the globe that are at relatively higher single- or multiple-hazard risk than other areas. On the other hand they are inadequate for understanding the absolute levels of risk posed by any specific hazard or combination of hazards. Nevertheless it is possible to assess in general terms the exposure and potential magnitude of losses to people and their assets in these areas. Such information, although not ideal, can still be very useful for informing a range of disaster prevention and preparedness measures, including prioritization of resources, targeting of more localized and detailed risk assessments, implementation of risk-based disaster management and emergency response strategies, and development of long-term plans for poverty reduction and economic development. In addition to summarizing the results of the Hotspots Project, we discuss data collection issues and suggest methodological approaches for making the transition to more detailed regional and national studies. Preliminary results for several regional case studies will be presented.

  5. Three-dimensional analysis of magnetometer array data

    NASA Technical Reports Server (NTRS)

    Richmond, A. D.; Baumjohann, W.

    1984-01-01

    A technique is developed for mapping magnetic variation fields in three dimensions using data from an array of magnetometers, based on the theory of optimal linear estimation. The technique is applied to data from the Scandinavian Magnetometer Array. Estimates of the spatial power spectra for the internal and external magnetic variations are derived, which in turn provide estimates of the spatial autocorrelation functions of the three magnetic variation components. Statistical errors involved in mapping the external and internal fields are quantified and displayed over the mapping region. Examples of field mapping and of separation into external and internal components are presented. A comparison between the three-dimensional field separation and a two-dimensional separation from a single chain of stations shows that significant differences can arise in the inferred internal component.

  6. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratama, Cecep, E-mail: great.pratama@gmail.com; Meilano, Irwan; Nugraha, Andri Dian

    Slip rate is used to estimate the earthquake recurrence relationship, which has the most influence on the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the most influential factor in the hazard estimate is the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity. Then, the properties of the Monte Carlo simulations have been assessed. Uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that seismic hazard estimates are sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
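
    A minimal Monte Carlo sketch of this kind of sensitivity study: sample the slip rate from an assumed distribution, push each sample through a placeholder hazard relation standing in for the full PSHA chain, and summarize the spread as a standard deviation and COV. The slip-rate distribution and the hazard proxy are illustrative assumptions, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(42)
slip_rates = rng.normal(loc=5.0, scale=1.5, size=10_000).clip(0.5)   # mm/yr, assumed distribution

def pga_500yr_proxy(slip_rate_mm_yr):
    # placeholder monotone relation standing in for the full recurrence + PSHA calculation
    return 0.35 + 0.06 * np.log1p(slip_rate_mm_yr)

pga = pga_500yr_proxy(slip_rates)
print(f"mean PGA = {pga.mean():.3f} g, sigma = {pga.std():.3f} g, "
      f"COV = {100 * pga.std() / pga.mean():.1f}%")
```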

  7. Multiplicative Versus Additive Filtering for Spacecraft Attitude Determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2003-01-01

    The absence of a globally nonsingular three-parameter representation of rotations forces attitude Kalman filters to estimate either a singular or a redundant attitude representation. We compare two filtering strategies using simplified kinematics and measurement models. Our favored strategy estimates a three-parameter representation of attitude deviations from a reference attitude specified by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. We point out some disadvantages of the other strategy, which directly estimates the four-parameter quaternion representation.

  8. A one-dimensional diffusion analogy model for estimation of tide heights in selected tidal marshes in Connecticut

    USGS Publications Warehouse

    Bjerklie, David M.; O’Brien, Kevin; Rozsa, Ron

    2013-01-01

    A one-dimensional diffusion analogy model for estimating tide heights in coastal marshes was developed and calibrated by using data from previous tidal-marsh studies. The method is simpler to use than other one- and two-dimensional hydrodynamic models because it does not require marsh depth and tidal prism information; however, the one-dimensional diffusion analogy model cannot be used to estimate tide heights, flow velocities, and tide arrival times for tide conditions other than the highest tide for which it is calibrated. Limited validation of the method indicates that it has an accuracy within 0.3 feet. The method can be applied with limited calibration information that is based entirely on remote sensing or geographic information system data layers. The method can be used to estimate high-tide heights in tidal wetlands drained by tide gates where tide levels cannot be observed directly by opening the gates without risk of flooding properties and structures. A geographic information system application of the method is demonstrated for Sybil Creek marsh in Branford, Connecticut. The tidal flux into this marsh is controlled by two tide gates that prevent full tidal inundation of the marsh. The method application shows reasonable tide heights for the gates-closed condition (the normal condition) and the one-gate-open condition on the basis of comparison with observed heights. The condition with all tide gates open (two gates) was simulated with the model; results indicate where several structures would be flooded if the gates were removed as part of restoration efforts or if the tide gates were to fail.

  9. Two-dimensional wavefront reconstruction based on double-shearing and least squares fitting

    NASA Astrophysics Data System (ADS)

    Liang, Peiying; Ding, Jianping; Zhu, Yangqing; Dong, Qian; Huang, Yuhua; Zhu, Zhen

    2017-06-01

    A two-dimensional wavefront reconstruction method based on double-shearing and least squares fitting is proposed in this paper. Four one-dimensional phase estimates of the measured wavefront, which correspond to the two shears and the two orthogonal directions, could be calculated from the differential phase, which solves the problem of the missing spectrum, and then by using the least squares method the two-dimensional wavefront reconstruction could be done. Numerical simulations of the proposed algorithm are carried out to verify the feasibility of this method. The influence of noise generated from different shear amounts and different intensities on the accuracy of the reconstruction is studied and compared with the results from the algorithm based on single-shearing and least squares fitting. Finally, a two-grating lateral shearing interference experiment is carried out to verify the wavefront reconstruction algorithm based on double-shearing and least squares fitting.
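
    A one-dimensional sketch of the reconstruction idea, assuming two shear amounts and a made-up test wavefront: stack the finite-difference (shear) operators for both shears into one linear system and solve it by least squares; the unobservable piston term is removed afterwards. Shear values and noise level are illustrative.

```python
import numpy as np

def shear_matrix(n, s):
    """Rows map a length-n phase vector to its sheared differences phi[i+s] - phi[i]."""
    rows = n - s
    A = np.zeros((rows, n))
    A[np.arange(rows), np.arange(rows) + s] = 1.0
    A[np.arange(rows), np.arange(rows)] = -1.0
    return A

n, s1, s2 = 128, 3, 7
x = np.linspace(-1, 1, n)
phi_true = 2.0 * x**2 + 0.5 * np.sin(4 * np.pi * x)                       # test wavefront
A = np.vstack([shear_matrix(n, s1), shear_matrix(n, s2)])                 # double-shearing operator
b = A @ phi_true + np.random.default_rng(0).normal(0, 0.01, A.shape[0])   # noisy shear measurements
phi_hat, *_ = np.linalg.lstsq(A, b, rcond=None)                           # least squares reconstruction
phi_hat -= phi_hat.mean() - phi_true.mean()                               # piston is unobservable
print(f"rms reconstruction error = {np.sqrt(np.mean((phi_hat - phi_true)**2)):.3e}")
```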

  10. Two-dimensional free-surface flow under gravity: A new benchmark case for SPH method

    NASA Astrophysics Data System (ADS)

    Wu, J. Z.; Fang, L.

    2018-02-01

    Currently there are few free-surface benchmark cases with analytical results for the Smoothed Particle Hydrodynamics (SPH) simulation. In the present contribution we introduce a two-dimensional free-surface flow under gravity, and obtain an analytical expression on the surface height difference and a theoretical estimation on the surface fractal dimension. They are preliminarily validated and supported by SPH calculations.

  11. Convolutional neural network based side attack explosive hazard detection in three dimensional voxel radar

    NASA Astrophysics Data System (ADS)

    Brockner, Blake; Veal, Charlie; Dowdy, Joshua; Anderson, Derek T.; Williams, Kathryn; Luke, Robert; Sheen, David

    2018-04-01

    The identification followed by avoidance or removal of explosive hazards in past and/or present conflict zones is a serious threat for both civilian and military personnel. This is a challenging task as variability exists with respect to the objects, their environment and emplacement context, to name a few factors. A goal is the development of automatic or human-in-the-loop sensor technologies that leverage signal processing, data fusion and machine learning. Herein, we explore the detection of side attack explosive hazards (SAEHs) in three dimensional voxel space radar via different shallow and deep convolutional neural network (CNN) architectures. Dimensionality reduction is performed by using multiple projected images versus the raw three dimensional voxel data, which leads to noteworthy savings in input size and associated network hyperparameters. Last, we explore the accuracy and interpretation of solutions learned via random versus intelligent network weight initialization. Experiments are provided on a U.S. Army data set collected over different times, weather conditions, target types and concealments. Preliminary results indicate that deep learning can perform as well as, if not better than, a skilled domain expert, even in light of limited training data with a class imbalance.

  12. Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Adamian, A.

    1988-01-01

    An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
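
    A minimal sketch of one finite-dimensional approximating LQG problem of the kind described, assuming a toy two-mode modal model with a single actuator and sensor and illustrative weighting and noise matrices: the control and estimator gains follow from two algebraic Riccati equations.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# toy 2-mode flexible-structure model: state x = [q1, q2, q1dot, q2dot]
omega = np.array([1.0, 4.0]); zeta = 0.005
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.diag(omega**2), -2 * zeta * np.diag(omega)]])
B = np.array([[0.0], [0.0], [1.0], [0.5]])          # single actuator influence
C = np.array([[1.0, 1.0, 0.0, 0.0]])                # single displacement sensor
Q, R = np.eye(4), np.array([[0.1]])                 # regulator weights
W, V = np.eye(4) * 1e-2, np.array([[1e-3]])         # process / measurement noise intensities

P = solve_continuous_are(A, B, Q, R)                # regulator Riccati equation
K = np.linalg.solve(R, B.T @ P)                     # control gain
S = solve_continuous_are(A.T, C.T, W, V)            # estimator (filter) Riccati equation
L = S @ C.T @ np.linalg.inv(V)                      # estimator gain
print("control gain K:", np.round(K, 3))
print("estimator gain L:", np.round(L.ravel(), 3))
```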

  13. Two-Dimensional Analysis of Conical Pulsed Inductive Plasma Thruster Performance

    NASA Technical Reports Server (NTRS)

    Hallock, A. K.; Polzin, K. A.; Emsellem, G. D.

    2011-01-01

    A model of the maximum achievable exhaust velocity of a conical theta pinch pulsed inductive thruster is presented. A semi-empirical formula relating coil inductance to both axial and radial current sheet location is developed and incorporated into a circuit model coupled to a momentum equation to evaluate the effect of coil geometry on the axial directed kinetic energy of the exhaust. Inductance measurements as a function of the axial and radial displacement of simulated current sheets from four coils of different geometries are fit to a two-dimensional expression to allow the calculation of the Lorentz force at any relevant averaged current sheet location. This relation for two-dimensional inductance, along with an estimate of the maximum possible change in gas-dynamic pressure as the current sheet accelerates into downstream propellant, enables the expansion of a one-dimensional circuit model to two dimensions. The results of this two-dimensional model indicate that radial current sheet motion acts to rapidly decouple the current sheet from the driving coil, leading to losses in axial kinetic energy 10-50 times larger than estimations of the maximum available energy in the compressed propellant. The decreased available energy in the compressed propellant as compared to that of other inductive plasma propulsion concepts suggests that a recovery in the directed axial kinetic energy of the exhaust is unlikely, and that radial compression of the current sheet leads to a loss in exhaust velocity for the operating conditions considered here.

  14. Multilevel Sequential Monte Carlo Samplers for Normalizing Constants

    DOE PAGES

    Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...

    2017-08-24

    This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.

  15. Line tension of a two dimensional gas-liquid interface.

    PubMed

    Santra, Mantu; Bagchi, Biman

    2009-08-28

    In two dimensional (2D) gas-liquid systems, the reported simulation values of line tension are known to disagree with the existing theoretical estimates. We find that while the simulation erred in truncating the range of the interaction potential, and as a result grossly underestimated the actual value, the earlier theoretical calculation was also limited by several approximations. When both the simulation and the theory are improved, we find that the estimates of line tension are in better agreement with each other. The small value of surface tension suggests increased influence of noncircular clusters in 2D gas-liquid nucleation, as indeed observed in a recent simulation.

  16. Marginal regression approach for additive hazards models with clustered current status data.

    PubMed

    Su, Pei-Fang; Chi, Yunchan

    2014-01-15

    Current status data arise naturally from tumorigenicity experiments, epidemiology studies, biomedicine, econometrics and demographic and sociology studies. Moreover, clustered current status data may occur with animals from the same litter in tumorigenicity experiments or with subjects from the same family in epidemiology studies. Because the only information extracted from current status data is whether the survival times are before or after the monitoring or censoring times, the nonparametric maximum likelihood estimator of survival function converges at a rate of n^(1/3) to a complicated limiting distribution. Hence, semiparametric regression models such as the additive hazards model have been extended for independent current status data to derive the test statistics, whose distributions converge at a rate of n^(1/2), for testing the regression parameters. However, a straightforward application of these statistical methods to clustered current status data is not appropriate because intracluster correlation needs to be taken into account. Therefore, this paper proposes two estimating functions for estimating the parameters in the additive hazards model for clustered current status data. The comparative results from simulation studies are presented, and the application of the proposed estimating functions to one real data set is illustrated. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Estimating latent time of maturation and survival costs of reproduction in continuous time from capture-recapture data

    USGS Publications Warehouse

    Ergon, T.; Yoccoz, N.G.; Nichols, J.D.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    In many species, age or time of maturation and survival costs of reproduction may vary substantially within and among populations. We present a capture-mark-recapture model to estimate the latent individual trait distribution of time of maturation (or other irreversible transitions) as well as survival differences associated with the two states (representing costs of reproduction). Maturation can take place at any point in continuous time, and mortality hazard rates for each reproductive state may vary according to continuous functions over time. Although we explicitly model individual heterogeneity in age/time of maturation, we make the simplifying assumption that death hazard rates do not vary among individuals within groups of animals. However, the estimates of the maturation distribution are fairly robust against individual heterogeneity in survival as long as there is no individual level correlation between mortality hazards and latent time of maturation. We apply the model to biweekly capture-recapture data of overwintering field voles (Microtus agrestis) in cyclically fluctuating populations to estimate time of maturation and survival costs of reproduction. Results show that onset of seasonal reproduction is particularly late and survival costs of reproduction are particularly large in declining populations.

  18. A baseline-free procedure for transformation models under interval censorship.

    PubMed

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimation of the infinite dimensional baseline function. A detailed computational algorithm using Markov Chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

  19. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel

    NASA Astrophysics Data System (ADS)

    Lin, L.; Luo, X.; Qin, F.; Yang, J.

    2018-03-01

    As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.

  20. Improved Analysis of GW150914 Using a Fully Spin-Precessing Waveform Model

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, C.; Casentini, J.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. 
C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gaebel, S.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Johnson-McDaniel, N. K.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. 
M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. 
M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J. V.; Vano-Vinuales, A.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Brügmann, B.; Campanelli, M.; Chu, T.; Clark, M.; Haas, R.; Hemberger, D.; Hinder, I.; Kidder, L. E.; Kinsey, M.; Laguna, P.; Ossokine, S.; Pan, Y.; Röver, C.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-10-01

    This paper presents updated estimates of source parameters for GW150914, a binary black-hole coalescence event detected by the Laser Interferometer Gravitational-wave Observatory (LIGO) in 2015 [Abbott et al. Phys. Rev. Lett. 116, 061102 (2016).]. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016).] presented parameter estimation of the source using a 13-dimensional, phenomenological precessing-spin model (precessing IMRPhenom) and an 11-dimensional nonprecessing effective-one-body (EOB) model calibrated to numerical-relativity simulations, which forces spin alignment (nonprecessing EOBNR). Here, we present new results that include a 15-dimensional precessing-spin waveform model (precessing EOBNR) developed within the EOB formalism. We find good agreement with the parameters estimated previously [Abbott et al. Phys. Rev. Lett. 116, 241102 (2016).], and we quote updated component masses of 35^{+5}_{-3} M⊙ and 30^{+3}_{-4} M⊙ (where errors correspond to 90% symmetric credible intervals). We also present slightly tighter constraints on the dimensionless spin magnitudes of the two black holes, with a primary spin estimate <0.65 and a secondary spin estimate <0.75 at 90% probability. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016).] estimated the systematic parameter-extraction errors due to waveform-model uncertainty by combining the posterior probability densities of precessing IMRPhenom and nonprecessing EOBNR. Here, we find that the two precessing-spin models are in closer agreement, suggesting that these systematic errors are smaller than previously quoted.
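
    The 90% symmetric credible intervals quoted above are read off directly from posterior samples. A minimal sketch with a hypothetical sample array (not the collaboration's analysis code):

        import numpy as np

        def symmetric_credible_interval(samples, level=0.90):
            """Median and symmetric credible interval from 1-D posterior samples."""
            lo, med, hi = np.percentile(samples, [50 * (1 - level), 50, 50 + 50 * level])
            return med, med - lo, hi - med  # value, minus error, plus error

        # Hypothetical posterior draws for the primary mass (solar masses)
        rng = np.random.default_rng(0)
        m1_samples = rng.normal(35.0, 2.5, size=10000)
        med, minus, plus = symmetric_credible_interval(m1_samples)
        print(f"m1 = {med:.0f} -{minus:.0f} +{plus:.0f} Msun (90% credible)")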

  1. Two Stage Assessment of Thermal Hazard in An Underground Mine

    NASA Astrophysics Data System (ADS)

    Drenda, Jan; Sułkowski, Józef; Pach, Grzegorz; Różański, Zenon; Wrona, Paweł

    2016-06-01

    The results of research into the application of selected thermal indices of men's work and climate indices in a two-stage assessment of climatic work conditions in underground mines are presented in this article. The difference between these two kinds of indices was pointed out during the project entitled "The recruiting requirements for miners working in hot underground mine environments". The project was coordinated by The Institute of Mining Technologies at Silesian University of Technology. It was a part of a Polish strategic project, "Improvement of safety in mines", financed by the National Centre of Research and Development. Climate indices are based only on the physical parameters of the air and their measurements. Thermal indices include additional factors which are strictly connected with the work, e.g. thermal resistance of clothing, kind of work, etc. Special emphasis has been put on the following indices: the substitute Silesian temperature (TS), which is considered the climatic index, and the thermal discomfort index (δ), which belongs to the thermal indices group. The possibility of a two-stage application of these indices (preliminary and detailed estimation) has been taken into consideration. Based on the examples, it was shown that applying the thermal index (detailed estimation) makes it possible to avoid additional technical solutions which, according to the climate index alone, would be necessary to reduce the thermal hazard in particular workplaces. The threshold limit value for TS has been set based on these results. It was shown that below TS = 24°C it is not necessary to perform the detailed estimation.
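
    A minimal sketch of the two-stage screening logic described above, assuming the substitute Silesian temperature TS and the thermal discomfort index δ are computed elsewhere (their published formulas are not reproduced here); the 24 °C threshold matches the abstract, while the δ ≤ 1 acceptance criterion is an illustrative assumption:

        def assess_workplace(ts_celsius, delta_index=None, ts_threshold=24.0):
            """Two-stage climatic assessment: climate index first, thermal index only if needed.

            ts_celsius  -- substitute Silesian temperature TS (climate index), deg C
            delta_index -- thermal discomfort index (thermal index), or None if not yet evaluated
            """
            # Stage 1: preliminary estimation with the climate index only
            if ts_celsius < ts_threshold:
                return "acceptable (preliminary estimation, TS below threshold)"
            # Stage 2: detailed estimation with the thermal index (clothing, workload, ...)
            if delta_index is None:
                return "detailed estimation required (compute thermal discomfort index)"
            # Assumed acceptance criterion: delta <= 1 means no thermal hazard
            return ("acceptable (detailed estimation)" if delta_index <= 1.0
                    else "additional technical measures needed")

        print(assess_workplace(22.5))                    # stage 1 suffices
        print(assess_workplace(26.0, delta_index=0.9))   # stage 2 clears the workplace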

  2. The impact of regulatory compliance behavior on hazardous waste generation in European private healthcare facilities.

    PubMed

    Botelho, Anabela

    2013-10-01

    This study empirically evaluates whether the increasingly large numbers of private outpatient healthcare facilities (HCFs) within the European Union (EU) countries comply with the existing European waste legislation, and whether compliance with such legislation affects the fraction of healthcare waste (HCW) classified as hazardous. To that end, this study uses data collected by a large survey of more than 700 small private HCFs distributed throughout Portugal, a full member of the EU since 1986, where 50% of outpatient care is currently dominated by private operators. The collected data are then used to estimate a hurdle model, i.e. a statistical specification in which there are two processes: one is the process by which some HCFs generate zero or some positive fraction of hazardous HCW, and another is the process by which HCFs generate a specific positive fraction of hazardous HCW conditional on producing any. Taken together, the results show that although compliance with the law is far from ideal, it is the strongest factor influencing hazardous waste generation. In particular, it is found that higher compliance has a small and insignificant effect on the probability of generating (or reporting) positive amounts of hazardous waste, but it does have a large and significant effect on the fraction of hazardous waste produced, conditional on producing any, with a unit increase in the compliance rate leading to an estimated decrease in the fraction of hazardous HCW by 16.3 percentage points.
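
    A minimal sketch of the two-part (hurdle) specification described above, using a logistic model for the zero/positive decision and a linear model for the positive fraction; the simulated facilities and the compliance covariate are illustrative, not the study's data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 700
        compliance = rng.uniform(0, 1, n)                 # hypothetical compliance rate per facility
        X = sm.add_constant(compliance)

        # Hypothetical data-generating process mimicking the two parts of the hurdle
        any_hazardous = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 0.1 * compliance))))
        fraction = np.clip(0.45 - 0.163 * compliance + rng.normal(0, 0.05, n), 0, 1)

        # Part 1: probability of reporting any hazardous waste at all
        part1 = sm.Logit(any_hazardous, X).fit(disp=False)

        # Part 2: fraction of hazardous waste, conditional on producing any
        pos = any_hazardous == 1
        part2 = sm.OLS(fraction[pos], X[pos]).fit()

        print(part1.params)   # compliance effect on the hurdle (expected: small)
        print(part2.params)   # compliance effect on the conditional fraction (expected: negative)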

  3. Earth science: lasting earthquake legacy

    USGS Publications Warehouse

    Parsons, Thomas E.

    2009-01-01

    On 31 August 1886, a magnitude-7 shock struck Charleston, South Carolina; low-level activity continues there today. One view of seismic hazard is that large earthquakes will return to New Madrid and Charleston at intervals of about 500 years. With expected ground motions that would be stronger than average, that prospect produces estimates of earthquake hazard that rival those at the plate boundaries marked by the San Andreas fault and Cascadia subduction zone. The result is two large 'bull's-eyes' on the US National Seismic Hazard Maps — which, for example, influence regional building codes and perceptions of public safety.

  4. Rank-based estimation in the {ell}1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative model to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables but the clinical effects are assumed nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.

  5. Parameter estimation in a human operator describing function model for a two-dimensional tracking task

    NASA Technical Reports Server (NTRS)

    Vanlunteren, A.

    1977-01-01

    A previously described parameter estimation program was applied to a number of control tasks, each involving a human operator model consisting of more than one describing function. One of these experiments is treated in more detail. It consisted of a two dimensional tracking task with identical controlled elements. The tracking errors were presented on one display as two vertically moving horizontal lines. Each loop had its own manipulator. The two forcing functions were mutually independent and consisted each of 9 sine waves. A human operator model was chosen consisting of 4 describing functions, thus taking into account possible linear cross couplings. From the Fourier coefficients of the relevant signals the model parameters were estimated after alignment, averaging over a number of runs and decoupling. The results show that for the elements in the main loops the crossover model applies. A weak linear cross coupling existed with the same dynamics as the elements in the main loops but with a negative sign.

  6. OPTICAL PROCESSING OF INFORMATION: Multistage optoelectronic two-dimensional image switches

    NASA Astrophysics Data System (ADS)

    Fedorov, V. B.

    1994-06-01

    The implementation principles and the feasibility of construction of high-throughput multistage optoelectronic switches, capable of transmitting data in the form of two-dimensional images along interconnected pairs of optical channels, are considered. Different ways of realising compact switches are proposed. They are based on the use of polarisation-sensitive elements, arrays of modulators of the plane of polarisation of light, arrays of objectives, and free-space optics. Optical systems of such switches can theoretically ensure that the resolution and optical losses in two-dimensional image transmission are limited only by diffraction. Estimates are obtained of the main maximum-performance parameters of the proposed optoelectronic image switches.

  7. Two-dimensional echo-cardiographic estimation of left atrial volume and volume load in patients with congenital heart disease.

    PubMed

    Kawaguchi, A; Linde, L M; Imachi, T; Mizuno, H; Akutsu, H

    1983-12-01

    To estimate the left atrial volume (LAV) and pulmonary blood flow in patients with congenital heart disease (CHD), we employed two-dimensional echocardiography (TDE). The LAV was measured in dimensions other than those obtained in conventional M-mode echocardiography (M-mode echo). Mathematical and geometrical models for LAV calculation using the standard long-axis, short-axis and apical four-chamber planes were devised and found to be reliable in a preliminary study using porcine heart preparations, although length (10%), area (20%) and volume (38%) were significantly and consistently underestimated with echocardiography. Those models were then applied and correlated with angiocardiograms (ACG) in 25 consecutive patients with suspected CHD. In terms of the estimation of the absolute LAV, accuracy seemed commensurate with the number of dimensions measured. The correlation between data obtained by TDE and ACG varied with changing hemodynamics such as cardiac cycle, absolute LAV and presence or absence of volume load. The left atrium was found to become spherical and progressively underestimated with TDE at ventricular end-systole, at larger LAV and with increased volume load. Since this tendency became less pronounced when additional dimensions were measured, reliable estimation of the absolute LAV and volume load was possible when 2 or 3 dimensions were measured. Among the calculation models depending on 2 or 3 measured dimensions, there was only a small difference in terms of accuracy and predictability, although the algorithm used varied from one model to another. This suggests that accurate cross-sectional area measurement, rather than any particular algorithm, is critically important for volume estimation. Cross-sectional area measurement by TDE, integrated into a three-dimensional equivalent, allowed a reliable estimate of the LAV or volume load in a variety of hemodynamic situations where M-mode echo was not reliable.
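
    The abstract does not reproduce its geometrical models, but a commonly used two-plane (area-length) formula illustrates how cross-sectional areas and a long-axis length combine into a volume estimate; treat it as a generic example rather than the authors' exact algorithm:

        import math

        def biplane_area_length_volume(area1_cm2, area2_cm2, length_cm):
            """Biplane area-length volume estimate: V = 8*A1*A2 / (3*pi*L)."""
            return 8.0 * area1_cm2 * area2_cm2 / (3.0 * math.pi * length_cm)

        # Hypothetical left atrial measurements from two orthogonal 2-D echo planes
        print(f"LAV ~ {biplane_area_length_volume(15.0, 17.0, 5.0):.1f} mL")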

  8. Probabilistic forecasts of debris-flow hazard at the regional scale with a combination of models.

    NASA Astrophysics Data System (ADS)

    Malet, Jean-Philippe; Remaître, Alexandre

    2015-04-01

    Debris flows are one of the many active slope-forming processes in the French Alps, where rugged and steep slopes mantled by various slope deposits offer a great potential for triggering hazardous events. A quantitative assessment of debris-flow hazard requires the estimation, in a probabilistic framework, of the spatial probability of occurrence of source areas, the spatial probability of runout areas, the temporal frequency of events, and their intensity. The main objective of this research is to propose a pipeline for the estimation of these quantities at the regional scale using a chain of debris-flow models. The work uses the experimental site of the Barcelonnette Basin (South French Alps), where 26 active torrents have produced more than 150 debris-flow events since 1850, to develop and validate the methodology. First, a susceptibility assessment is performed to identify the debris-flow prone source areas. The most frequently used approach is the combination of environmental factors with GIS procedures and statistical techniques, integrating detailed event inventories or not. Based on a 5 m DEM and its derivatives, and information on slope lithology, engineering soils and landcover, the possible source areas are identified with a statistical logistic regression model. The performance of the statistical model is evaluated with the observed distribution of debris-flow events recorded after 1850 in the study area. The source areas in the three most active torrents (Riou-Bourdoux, Faucon, Sanières) are well identified by the model. Results are less convincing for three other active torrents (Bourget, La Valette and Riou-Chanal); this could be related to the type of debris-flow triggering mechanism, as the model seems to better spot the open-slope debris-flow source areas (e.g. scree slopes) but appears to be less efficient for the identification of landslide-induced debris flows. Second, a runout assessment is performed to estimate the possible runout distance with a process-based model. The MassMov-2D code is a two-dimensional model of mud and debris flow dynamics over complex topography, based on a numerical integration of the depth-averaged motion equations using the shallow-water approximation. The runout simulations are performed for the most active torrents. The performance of the model has been evaluated by comparing modelling results with the observed spreading areas of several recent debris flows. Existing data on debris-flow volume, input discharge and deposits were used to back-analyze those events and estimate the values of the model parameters. Third, hazard is estimated on the basis of scenarios computed in a probabilistic way, for volumes in the range 20,000 to 350,000 m³ and for several combinations of rheological parameters. In most cases, the simulations indicate that the debris flows cause significant overflowing on the alluvial fans for volumes exceeding 100,000 m³ (height of deposits > 2 m, velocities > 5 m s⁻¹). Probabilities of debris-flow runout and debris-flow intensities are then computed for each terrain unit.
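
    A minimal sketch of the statistical susceptibility step (logistic regression of the event inventory on DEM-derived and thematic predictors), with hypothetical feature names; the MassMov-2D runout modelling is not reproduced:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n_cells = 5000
        # Hypothetical per-cell predictors extracted from the 5 m DEM and thematic maps
        slope = rng.uniform(0, 45, n_cells)          # degrees
        curvature = rng.normal(0, 1, n_cells)
        scree = rng.binomial(1, 0.2, n_cells)        # 1 if scree-slope land cover
        X = np.column_stack([slope, curvature, scree])

        # Hypothetical inventory: 1 where a debris-flow source area was mapped after 1850
        p_true = 1 / (1 + np.exp(-(-6 + 0.12 * slope + 1.5 * scree)))
        y = rng.binomial(1, p_true)

        model = LogisticRegression(max_iter=1000).fit(X, y)
        susceptibility = model.predict_proba(X)[:, 1]   # spatial probability of being a source area
        print(model.coef_, susceptibility[:5])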

  9. Methodologies For A Physically Based Rockfall Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Agliardi, F.; Crosta, G. B.; Guzzetti, F.; Marian, M.

    Rockfall hazard assessment is an important land planning tool in alpine areas, where settlements progressively expand across rockfall prone areas, raising the vulnerability of the elements at risk, the worth of potential losses and the restoration costs. Nevertheless, hazard definition is not simple to achieve in practice and sound, physically based assessment methodologies are still missing. In addition, the high mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities for which runout is minimal. When coping with rockfalls, hazard assessment involves complex definitions for "occurrence probability" and "intensity". The local occurrence probability must derive from the combination of the triggering probability (related to the geomechanical susceptibility of rock masses to fail) and the transit or impact probability at a given location (related to the motion of falling blocks). The intensity (or magnitude) of a rockfall is a complex function of mass, velocity and fly height of involved blocks that can be defined in many different ways depending on the adopted physical description and "destructiveness" criterion. This work is an attempt to evaluate rockfall hazard using the results of numerical modelling performed by an original 3D rockfall simulation program. This is based on a kinematic algorithm and allows the spatially distributed simulation of rockfall motions on a three-dimensional topography described by a DTM. The code provides raster maps portraying the maximum frequency of transit, velocity and height of blocks at each model cell, easily combined in a GIS in order to produce physically based rockfall hazard maps. The results of some three-dimensional rockfall models, performed at both regional and local scale in areas where rockfall related problems are well known, have been used to assess rockfall hazard, by adopting an objective approach based on three-dimensional matrixes providing a positional "hazard index". Different hazard maps have been obtained combining and classifying variables in different ways. The performance of the different hazard maps has been evaluated on the basis of past rockfall events and compared to the results of existing methodologies. The sensitivity of the hazard index with respect to the included variables and their combinations is discussed in order to constrain assessment criteria that are as objective as possible.
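
    A minimal sketch of the final combination step: the per-cell transit frequency, velocity and height rasters are each classified and a three-dimensional lookup matrix assigns the positional hazard index. The class breaks and matrix values are illustrative assumptions, not those of the original methodology:

        import numpy as np

        def classify(raster, breaks):
            """Return class 0, 1 or 2 for each cell given two class breaks."""
            return np.digitize(raster, breaks)

        # Hypothetical per-cell model outputs
        freq = np.array([0.0, 3.0, 12.0])      # transits per cell per simulation
        vel = np.array([2.0, 9.0, 22.0])       # m/s
        height = np.array([0.5, 2.5, 7.0])     # m

        # Illustrative 3x3x3 hazard-index matrix (low/medium/high per variable)
        hazard_matrix = np.arange(1, 28).reshape(3, 3, 3)   # placeholder index values 1..27

        f_cls = classify(freq, [1.0, 10.0])
        v_cls = classify(vel, [5.0, 15.0])
        h_cls = classify(height, [1.0, 5.0])
        hazard_index = hazard_matrix[f_cls, v_cls, h_cls]
        print(hazard_index)   # one positional hazard index per cell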

  10. Flood hazard mapping of Palembang City by using 2D model

    NASA Astrophysics Data System (ADS)

    Farid, Mohammad; Marlina, Ayu; Kusuma, Muhammad Syahril Badri

    2017-11-01

    Palembang, as the capital city of South Sumatera Province, is one of the metropolitan cities in Indonesia that floods almost every year. Flooding in the city is highly related to the Musi River Basin. According to the Indonesia National Agency of Disaster Management (BNPB), the level of flood hazard is high. Many natural factors cause flooding in the city, such as high rainfall intensity, inadequate drainage capacity, and backwater flow due to spring tide. Furthermore, anthropogenic factors such as population increase, land cover/use change, and garbage problems make the flooding worse. The objective of this study is to develop a flood hazard map of Palembang City by using a two-dimensional model. HEC-RAS 5.0 is used as the modelling tool and is verified with field observation data. There are 21 sub-catchments of the Musi River Basin in the flood simulation. The level of flood hazard follows BNPB Head Regulation No. 2 of 2012 regarding the general guideline for disaster risk assessment. The result for the 25-year return period flood shows that, with an inundation area of 112.47 km², 14 sub-catchments are categorized as high hazard level. It is expected that the hazard map can be used for risk assessment.
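
    A minimal sketch of the hazard-classification step applied to a 2D inundation-depth raster; the depth thresholds and grid are illustrative assumptions (the actual classes follow the BNPB regulation), not the Palembang model output:

        import numpy as np

        def flood_hazard_class(depth_m, low=0.5, high=1.5):
            """Classify inundation depth into 0 = none, 1 = low, 2 = medium, 3 = high hazard."""
            cls = np.zeros_like(depth_m, dtype=int)
            cls[(depth_m > 0) & (depth_m <= low)] = 1
            cls[(depth_m > low) & (depth_m <= high)] = 2
            cls[depth_m > high] = 3
            return cls

        # Hypothetical 25-year flood depth raster (m) on a 100 m grid
        depth = np.array([[0.0, 0.3, 0.8], [1.2, 2.1, 0.0], [0.6, 1.8, 2.5]])
        classes = flood_hazard_class(depth)
        cell_area_km2 = (100 * 100) / 1e6
        print(classes)
        print("inundated area:", (classes > 0).sum() * cell_area_km2, "km2")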

  11. RockFall analyst: A GIS extension for three-dimensional and spatially distributed rockfall hazard modeling

    NASA Astrophysics Data System (ADS)

    Lan, Hengxing; Derek Martin, C.; Lim, C. H.

    2007-02-01

    Geographic information system (GIS) modeling is used in combination with three-dimensional (3D) rockfall process modeling to assess rockfall hazards. A GIS extension, RockFall Analyst (RA), which is capable of effectively handling large amounts of geospatial information relative to rockfall behaviors, has been developed in ArcGIS using ArcObjects and C#. The 3D rockfall model considers dynamic processes on a cell plane basis. It uses inputs of distributed parameters in terms of raster and polygon features created in GIS. Two major components are included in RA: particle-based rockfall process modeling and geostatistics-based rockfall raster modeling. Rockfall process simulation results, 3D rockfall trajectories and their velocity features either for point seeders or polyline seeders are stored in 3D shape files. Distributed raster modeling, based on 3D rockfall trajectories and a spatial geostatistical technique, represents the distribution of spatial frequency, the flying and/or bouncing height, and the kinetic energy of falling rocks. A distribution of rockfall hazard can be created by taking these rockfall characteristics into account. A barrier analysis tool is also provided in RA to aid barrier design. An application of these modeling techniques to a case study is provided. The RA has been tested in ArcGIS 8.2, 8.3, 9.0 and 9.1.

  12. Three-Dimensional Model of Heat and Mass Transfer in Fractured Rocks to Estimate Environmental Conditions Along Heated Drifts

    NASA Astrophysics Data System (ADS)

    Fedors, R. W.; Painter, S. L.

    2004-12-01

    Temperature gradients along the thermally-perturbed drifts of the potential high-level waste repository at Yucca Mountain, Nevada, will drive natural convection and associated heat and mass transfer along drifts. A three-dimensional, dual-permeability, thermohydrological model of heat and mass transfer was used to estimate the magnitude of temperature gradients along a drift. Temperature conditions along heated drifts are needed to support estimates of repository-edge cooling and as input to computational fluid dynamics modeling of in-drift axial convection and the cold-trap process. Assumptions associated with abstracted heat transfer models and two-dimensional thermohydrological models weakly coupled to mountain-scale thermal models can readily be tested using the three-dimensional thermohydrological model. Although computationally expensive, the fully coupled three-dimensional thermohydrological model is able to incorporate lateral heat transfer, including the host rock processes of conduction, convection in the gas phase, advection in the liquid phase, and latent-heat transfer. Results from the three-dimensional thermohydrological model showed that weakly coupling three-dimensional thermal and two-dimensional thermohydrological models leads to underestimates of temperatures and of temperature gradients over large portions of the drift. The representative host rock thermal conductivity needed for abstracted heat transfer models is overestimated using the weakly coupled models. If axial flow patterns over large portions of drifts are not impeded by the strong cross-sectional flow patterns imparted by the heat rising directly off the waste package, condensation from the cold-trap process will not be limited to the extreme ends of each drift. Based on the three-dimensional thermohydrological model, axial temperature gradients occur sooner over a larger portion of the drift, though high gradients nearest the edge of the potential repository are dampened. This abstract is an independent product of CNWRA and does not necessarily reflect the view or regulatory position of the Nuclear Regulatory Commission.

  13. A new rapid method for rockfall energies and distances estimation

    NASA Astrophysics Data System (ADS)

    Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric

    2016-04-01

    Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope and they are generally based on a probabilistic rockfall modelling approach that allows for taking into account the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the condition that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To take into account the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results made it possible to find empirical laws that relate impact energies and distances at the base to block and slope features. The validation of the proposed approach was conducted by comparing predictions to experimental data collected in the field and gathered from the scientific literature. The method can be used for both natural and constructed slopes and easily extended to more complicated and articulated slope geometries. The study shows its great potential for a quick qualitative hazard assessment, providing an indication of the impact energy and horizontal distance of the first impact at the base of a rock cliff. Nevertheless, its application cannot substitute for a more detailed quantitative analysis required for site-specific design of mitigation measures. Acknowledgements The authors gratefully acknowledge the financial support of the Australian Coal Association Research Program (ACARP). References Dorren, L.K.A. (2003) A review of rockfall mechanics and modelling approaches, Progress in Physical Geography 27(1), 69-87. Agliardi, F., Crosta, G.B., Frattini, P. (2009) Integrating rockfall risk assessment and countermeasure design by 3D modelling techniques. Natural Hazards and Earth System Sciences 9(4), 1059-1073. Ferrari, F., Thoeni, K., Giacomini, A., Lambert, C. (2016) A rapid approach to estimate the rockfall energies and distances at the base of rock cliffs. Georisk, DOI: 10.1080/17499518.2016.1139729.
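
    The fitted empirical laws are not reproduced in the abstract, but a free-fall (parabolic) first-impact estimate illustrates how cliff height and block mass map to impact energy and horizontal distance at the base; this is a simplified kinematic bound under an assumed detachment velocity, not the authors' relations:

        import math

        G = 9.81  # m/s^2

        def first_impact(height_m, mass_kg, v0_horizontal=2.0):
            """Free-fall estimate of the first impact at the base of a vertical cliff.

            Assumes the block leaves the crest horizontally with speed v0_horizontal.
            Returns (kinetic energy in kJ, horizontal distance from the cliff face in m).
            """
            t = math.sqrt(2.0 * height_m / G)              # time of flight
            v_impact_sq = v0_horizontal**2 + (G * t)**2
            energy_kj = 0.5 * mass_kg * v_impact_sq / 1000.0
            distance_m = v0_horizontal * t
            return energy_kj, distance_m

        e, d = first_impact(height_m=30.0, mass_kg=1500.0)   # hypothetical 1.5 t block, 30 m cliff
        print(f"impact energy ~ {e:.0f} kJ at ~ {d:.1f} m from the toe")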

  14. Detection of Natural Hazards Generated TEC Perturbations and Related New Applications

    NASA Astrophysics Data System (ADS)

    Komjathy, A.; Yang, Y.; Langley, R. B.

    2013-12-01

    Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. This continuing research is expected to provide early warning for tsunamis, earthquakes, volcanic eruptions, and meteor impacts, for example, using GPS and other global navigation satellite systems. We will demonstrate new and upcoming applications including recent natural hazards and artificial explosions that generated TEC perturbations to perform state-of-the-art imaging and modeling of earthquakes, tsunamis and meteor impacts. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage.

  15. Investigating Surface and Near-Surface Bushfire Fuel Attributes: A Comparison between Visual Assessments and Image-Based Point Clouds.

    PubMed

    Spits, Christine; Wallace, Luke; Reinke, Karin

    2017-04-20

    Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential.

  16. WASTE REDUCTION PRACTICES AT TWO CHROMATED COPPER ARSENATE WOOD-TREATING PLANTS

    EPA Science Inventory

    Two chromated copper arsenate (CCA) wood-treating plants were assessed for their waste reduction practices. The objectives of this study were to estimate the amount of hazardous wastes that a well-designed and well-maintained CCA treatment facility would generate and to ident...

  17. Contingent valuation of fuel hazard reduction treatments

    Treesearch

    John B. Loomis; Armando Gonzalez-Caban

    2008-01-01

    This chapter presents a stated preference technique for estimating the public benefits of reducing wildfires to residents of California, Florida, and Montana from two alternative fuel reduction programs: prescribed burning, and mechanical fuels reduction. The two fuel reduction programs under study are quite relevant to people living in California, Florida, and...

  18. Updating the USGS seismic hazard maps for Alaska

    USGS Publications Warehouse

    Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.

    2015-01-01

    The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
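
    A minimal sketch of how modeled sources are "combined with ground-motion estimates to compute the hazard": each source contributes its annual rate times the probability that a lognormally distributed ground motion exceeds the target level. The source rates, magnitudes, distances and the toy ground-motion relation are illustrative assumptions, not the USGS model for Alaska:

        import numpy as np
        from scipy.stats import norm

        def toy_gmpe_ln_pga(magnitude, distance_km):
            """Illustrative ground-motion relation: mean ln(PGA in g) and sigma."""
            mean = -3.5 + 0.9 * magnitude - 1.2 * np.log(distance_km + 10.0)
            return mean, 0.6

        def annual_exceedance_rate(sources, pga_g):
            """lambda(PGA > pga_g) = sum_i rate_i * P(exceed | M_i, R_i)."""
            lam = 0.0
            for rate, mag, dist in sources:
                mean, sigma = toy_gmpe_ln_pga(mag, dist)
                lam += rate * norm.sf(np.log(pga_g), loc=mean, scale=sigma)
            return lam

        # Hypothetical sources: (annual rate, magnitude, distance to site in km)
        sources = [(0.01, 9.0, 80.0), (0.005, 7.2, 25.0), (0.2, 5.5, 15.0)]
        for a in (0.1, 0.2, 0.4):
            lam = annual_exceedance_rate(sources, a)
            p50 = 1.0 - np.exp(-lam * 50.0)
            print(f"PGA > {a:.1f} g: annual rate {lam:.4f}, 50-yr exceedance prob {p50:.2f}")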

  19. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
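
    A minimal sketch of the idea of ranking locations for removal so that the remaining sensors still reproduce the mapped concentrations well; here a Gaussian-process (kriging-like) predictor and a greedy leave-out-error criterion stand in for the paper's kriging and optimization details, and the coordinates and concentrations are hypothetical:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(3)
        xy = rng.uniform(0, 100, size=(30, 2))                   # hypothetical sampling locations (m)
        pnc = 5000 + 50 * xy[:, 0] + rng.normal(0, 500, 30)      # hypothetical particle number conc.

        def greedy_removal_order(xy, values, n_remove):
            """Repeatedly drop the location whose value is best predicted by the others."""
            keep = list(range(len(values)))
            order = []
            for _ in range(n_remove):
                errors = []
                for i in keep:
                    rest = [j for j in keep if j != i]
                    gp = GaussianProcessRegressor(kernel=RBF(30.0) + WhiteKernel(1.0),
                                                  normalize_y=True).fit(xy[rest], values[rest])
                    errors.append((abs(gp.predict(xy[[i]])[0] - values[i]), i))
                _, drop = min(errors)        # most redundant location
                keep.remove(drop)
                order.append(drop)
            return order, keep

        order, keep = greedy_removal_order(xy, pnc, n_remove=10)
        print("removal order:", order)
        print("retained sensor locations:", keep)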

  20. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
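
    A minimal sketch of the stump idea: per-interval p-values for testing whether the hazard exceeds the late (constant) rate are computed, and a one-break step function fitted to them by least squares locates the change point. The test choice, interval width and simulated data are illustrative assumptions rather than the authors' exact procedure:

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(4)

        # Hypothetical survival times: high early hazard that settles to a constant rate after t = 2
        n = 2000
        early = rng.exponential(1.0, n)
        times = np.where(early < 2.0, early, 2.0 + rng.exponential(5.0, n))

        edges = np.arange(0.0, 10.0, 0.25)         # small time intervals
        late_rate = 1.0 / 5.0                      # reference late constant hazard, assumed known here
        pvals, mids = [], []
        for a, b in zip(edges[:-1], edges[1:]):
            at_risk = times >= a
            exposure = np.clip(times[at_risk], a, b).sum() - a * at_risk.sum()   # person-time in [a, b)
            deaths = ((times >= a) & (times < b)).sum()
            pvals.append(poisson.sf(deaths - 1, late_rate * exposure))           # P(X >= deaths | late rate)
            mids.append(0.5 * (a + b))
        pvals, mids = np.array(pvals), np.array(mids)

        # Stump regression: one breakpoint, constant level on each side, least-squares fit
        def stump_breakpoint(x, y):
            best = (np.inf, None)
            for k in range(1, len(x) - 1):
                sse = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
                best = min(best, (sse, x[k]))
            return best[1]

        print("estimated change point:", stump_breakpoint(mids, pvals))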

  1. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and an initial wave condition from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetry domains which resemble realistic near-shore features. We investigate how the accuracy of the analytical runup formulae varies with fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model using a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates have less than 10% difference for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
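
    The abstract does not restate the runup law it uses; as an illustration of the kind of one-dimensional analytical estimate involved, the classical solitary-wave runup formula of Synolakis (1987) for a plane beach is sketched below (a textbook relation, not necessarily the exact formula used by the authors):

        import math

        def solitary_wave_runup(wave_height_m, depth_m, beach_slope_deg):
            """Synolakis (1987) runup law for a non-breaking solitary wave on a plane beach:
               R/d = 2.831 * sqrt(cot(beta)) * (H/d)**(5/4)."""
            h_over_d = wave_height_m / depth_m
            cot_beta = 1.0 / math.tan(math.radians(beach_slope_deg))
            return depth_m * 2.831 * math.sqrt(cot_beta) * h_over_d ** 1.25

        # Hypothetical offshore wave height of 1 m in 50 m depth, 1-degree beach slope
        print(f"maximum runup ~ {solitary_wave_runup(1.0, 50.0, 1.0):.1f} m")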

  2. Robust learning for optimal treatment decision with NP-dimensionality

    PubMed Central

    Shi, Chengchun; Song, Rui; Lu, Wenbin

    2016-01-01

    In order to identify important variables that are involved in making optimal treatment decisions, Lu, Zhang and Zeng (2013) proposed a penalized least squares regression framework for a fixed number of predictors, which is robust against the misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high dimensional data sets, for example, where the dimension of the predictors is of non-polynomial (NP) order in the sample size; (ii) both the propensity score and conditional mean models need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both steps, penalized regressions are employed with a non-concave penalty function, where the conditional mean model of the response given predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717

  3. ALGE3D: A Three-Dimensional Transport Model

    NASA Astrophysics Data System (ADS)

    Maze, G. M.

    2017-12-01

    Of the top 10 most populated US cities from a 2015 US Census Bureau estimate, 7 are situated near the ocean, a bay, or one of the Great Lakes. Contamination of the waterways in the United States could be devastating to the economy (through tourism and industries such as fishing), public health (from direct contact or contaminated drinking water), and in some cases even infrastructure (water treatment plants). Emergency response agencies employ well-developed national models to simulate the effects of hazardous contaminants in riverine systems that are primarily driven by one-dimensional flows; however, in more complex systems, such as tidal estuaries, bays, or lakes, a more complex model is needed. While many models exist, none are capable of quick deployment in emergencies that may involve a variety of release scenarios, including a mixture of both particulate and dissolved chemicals in a complex flow area. ALGE3D, developed at the Department of Energy's (DOE) Savannah River National Laboratory (SRNL), is a three-dimensional hydrodynamic code which solves the momentum, mass, and energy conservation equations to predict the movement and dissipation of thermal or dissolved chemical plumes discharged into cooling lakes, rivers, and estuaries. ALGE3D is capable of modeling very complex flows, including areas with tidal flows which include wetting and drying of land. Recent upgrades have increased its capabilities, including the transport of particulate tracers, allowing for more complete modeling of the transport of pollutants. In addition, the model is capable of coupling with a one-dimensional riverine transport model or a two-dimensional atmospheric deposition model in the event that a contamination event occurs upstream or upwind of the water body.

  4. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Evaluation and design of a rain gauge network using a statistical optimization method in a severe hydro-geological hazard prone area

    NASA Astrophysics Data System (ADS)

    Fattoruso, Grazia; Longobardi, Antonia; Pizzuti, Alfredo; Molinara, Mario; Marocco, Claudio; De Vito, Saverio; Tortorella, Francesco; Di Francia, Girolamo

    2017-06-01

    Rainfall data gathered continuously by a distributed rain gauge network are instrumental to more effective hydro-geological risk forecasting and management services, though the estimated rainfall fields used as input suffer from prediction uncertainty. Optimal rain gauge networks can generate accurate estimated rainfall fields. In this research work, a methodology has been investigated for evaluating an optimal rain gauge network aimed at robust hydrogeological hazard investigations. The rain gauge network of the Sarno River basin (Southern Italy) has been evaluated by optimizing a two-objective function that maximizes estimation accuracy and minimizes total metering cost, through the variance reduction algorithm along with a time-invariant climatological variogram. This problem has been solved by using an enumerative search algorithm, evaluating the exact Pareto front in an efficient computational time.
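
    A minimal sketch of the two-objective enumeration: for each candidate subset of gauges, an average kriging variance (from an assumed exponential climatological variogram) is traded off against total metering cost, and the non-dominated subsets form the Pareto front. Variogram parameters, costs and coordinates are illustrative assumptions:

        import itertools
        import numpy as np

        def exp_cov(h, sill=1.0, range_km=10.0):
            return sill * np.exp(-h / range_km)     # covariance from an exponential variogram

        def mean_kriging_variance(gauge_xy, grid_xy, sill=1.0, range_km=10.0):
            d_gg = np.linalg.norm(gauge_xy[:, None] - gauge_xy[None], axis=-1)
            d_pg = np.linalg.norm(grid_xy[:, None] - gauge_xy[None], axis=-1)
            C = exp_cov(d_gg, sill, range_km) + 1e-9 * np.eye(len(gauge_xy))
            c = exp_cov(d_pg, sill, range_km)
            # simple-kriging variance at each grid node: sill - c^T C^-1 c
            return float(np.mean(sill - np.einsum('ij,jk,ik->i', c, np.linalg.inv(C), c)))

        rng = np.random.default_rng(5)
        stations = rng.uniform(0, 30, size=(8, 2))          # hypothetical candidate gauges (km)
        cost = rng.uniform(1.0, 3.0, 8)                     # hypothetical annual cost per gauge
        grid = np.array([[x, y] for x in range(0, 31, 5) for y in range(0, 31, 5)], float)

        candidates = []
        for k in range(2, 9):
            for idx in itertools.combinations(range(8), k):
                idx = list(idx)
                candidates.append((mean_kriging_variance(stations[idx], grid), cost[idx].sum(), idx))

        # keep the non-dominated (Pareto-optimal) networks
        pareto = [a for a in candidates
                  if not any(b[0] <= a[0] and b[1] <= a[1] and (b[0] < a[0] or b[1] < a[1])
                             for b in candidates)]
        print(len(candidates), "networks evaluated,", len(pareto), "on the Pareto front")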

  6. Two-dimensional fruit ripeness estimation using thermal imaging

    NASA Astrophysics Data System (ADS)

    Sumriddetchkajorn, Sarun; Intaravanne, Yuttana

    2013-06-01

    Some green fruits do not change their color from green to yellow when ripe. As a result, ripeness estimation via color and fluorescence analytical approaches cannot be applied. In this article, we propose and show for the first time how a thermal imaging camera can be used to two-dimensionally classify fruits into different ripeness levels. Our key idea relies on the fact that mature fruits have a higher heat capacity than immature ones and therefore the change in surface temperature over time is slower. Our experimental proof of concept using a thermal imaging camera shows a promising result in non-destructively identifying three different ripeness levels of mangoes (Mangifera indica L.).
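
    A minimal sketch of the underlying classification rule: riper fruit (higher heat capacity) warms more slowly, so the per-pixel temperature change between two thermal frames taken a fixed interval apart can be thresholded into ripeness levels. The frame values and thresholds are illustrative assumptions:

        import numpy as np

        def ripeness_map(frame_t0, frame_t1, thresholds=(1.0, 2.0)):
            """Classify each pixel by its surface-temperature rise between two thermal frames.

            Smaller rise -> higher heat capacity -> riper fruit.
            Returns 2 = ripe, 1 = mid-ripe, 0 = unripe for each pixel.
            """
            delta = frame_t1 - frame_t0                      # deg C warming over the interval
            levels = np.full(delta.shape, 2, dtype=int)      # default: ripe (slow warming)
            levels[delta >= thresholds[0]] = 1
            levels[delta >= thresholds[1]] = 0
            return levels

        # Hypothetical 3x3 thermal frames (deg C) taken a few minutes apart
        t0 = np.array([[22.0, 22.1, 22.0], [22.2, 22.0, 22.1], [22.0, 22.3, 22.1]])
        t1 = np.array([[22.5, 23.4, 24.6], [22.7, 24.8, 23.2], [22.6, 24.5, 23.1]])
        print(ripeness_map(t0, t1))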

  7. Three-dimensional effects on pure tone fan noise due to inflow distortion. [rotor blade noise prediction

    NASA Technical Reports Server (NTRS)

    Kobayashi, H.

    1978-01-01

    Two dimensional, quasi three dimensional and three dimensional theories for the prediction of pure tone fan noise due to the interaction of inflow distortion with a subsonic annular blade row were studied with the aid of an unsteady three dimensional lifting surface theory. The effects of compact and noncompact source distributions on pure tone fan noise in an annular cascade were investigated. Numerical results show that the strip theory and quasi three-dimensional theory are reasonably adequate for fan noise prediction. The quasi three-dimensional method is more accurate for acoustic power and model structure prediction with an acoustic power estimation error of about plus or minus 2db.

  8. A stochastic automata network for earthquake simulation and hazard estimation

    NASA Astrophysics Data System (ADS)

    Belubekian, Maya Ernest

    1998-11-01

    This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches for studying the earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for the small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after a largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in the period of a long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called bootstrap for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto. The sensitivity analysis of the model results to the variation in basic parameters shows that the maximum magnitude has the most significant impact on the hazard, especially for long forecast periods.

  9. Estimating restricted mean treatment effects with stacked survival models

    PubMed Central

    Wey, Andrew; Vock, David M.; Connett, John; Rudser, Kyle

    2016-01-01

    The difference in restricted mean survival times between two groups is a clinically relevant summary measure. With observational data, there may be imbalances in confounding variables between the two groups. One approach to account for such imbalances is estimating a covariate-adjusted restricted mean difference by modeling the covariate-adjusted survival distribution, and then marginalizing over the covariate distribution. Since the estimator for the restricted mean difference is defined by the estimator for the covariate-adjusted survival distribution, it is natural to expect that a better estimator of the covariate-adjusted survival distribution is associated with a better estimator of the restricted mean difference. We therefore propose estimating restricted mean differences with stacked survival models. Stacked survival models estimate a weighted average of several survival models by minimizing predicted error. By including a range of parametric, semi-parametric, and non-parametric models, stacked survival models can robustly estimate a covariate-adjusted survival distribution and, therefore, the restricted mean treatment effect in a wide range of scenarios. We demonstrate through a simulation study that better performance of the covariate-adjusted survival distribution often leads to better mean-squared error of the restricted mean difference although there are notable exceptions. In addition, we demonstrate that the proposed estimator can perform nearly as well as Cox regression when the proportional hazards assumption is satisfied and significantly better when proportional hazards is violated. Finally, the proposed estimator is illustrated with data from the United Network for Organ Sharing to evaluate post-lung transplant survival between large and small-volume centers. PMID:26934835
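
    The restricted mean survival time targeted by the stacked estimator is simply the area under a survival curve up to a truncation time τ; a minimal sketch computing it from a Kaplan-Meier curve with plain NumPy (a single nonparametric model, not the stacked ensemble of the paper):

        import numpy as np

        def km_curve(time, event):
            """Kaplan-Meier survival estimate; returns event times and S(t) just after each."""
            order = np.argsort(time)
            time, event = time[order], event[order]
            uniq = np.unique(time[event == 1])
            surv, s = [], 1.0
            for t in uniq:
                at_risk = (time >= t).sum()
                deaths = ((time == t) & (event == 1)).sum()
                s *= 1.0 - deaths / at_risk
                surv.append(s)
            return uniq, np.array(surv)

        def rmst(time, event, tau):
            """Restricted mean survival time: area under the KM step function on [0, tau]."""
            t, s = km_curve(time, event)
            t = np.concatenate([[0.0], t[t < tau], [tau]])
            s = np.concatenate([[1.0], s[: (t.size - 2)]])   # S just after each retained event time
            return float(np.sum(s * np.diff(t)))

        rng = np.random.default_rng(6)
        time = rng.exponential(10.0, 300)                    # hypothetical follow-up times
        event = rng.binomial(1, 0.7, 300)                    # 1 = observed death, 0 = censored
        print("RMST up to tau = 12:", rmst(time, event, tau=12.0))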

  10. Two-Dimensional DOA and Polarization Estimation for a Mixture of Uncorrelated and Coherent Sources with Sparsely-Distributed Vector Sensor Array

    PubMed Central

    Si, Weijian; Zhao, Pinjiao; Qu, Zhiyu

    2016-01-01

    This paper presents an L-shaped sparsely-distributed vector sensor (SD-VS) array with four different antenna compositions. With the proposed SD-VS array, a novel two-dimensional (2-D) direction of arrival (DOA) and polarization estimation method is proposed to handle the scenario where uncorrelated and coherent sources coexist. The uncorrelated and coherent sources are separated based on the moduli of the eigenvalues. For the uncorrelated sources, coarse estimates are acquired by extracting the DOA information embedded in the steering vectors from estimated array response matrix of the uncorrelated sources, and they serve as coarse references to disambiguate fine estimates with cyclical ambiguity obtained from the spatial phase factors. For the coherent sources, four Hankel matrices are constructed, with which the coherent sources are resolved in a similar way as for the uncorrelated sources. The proposed SD-VS array requires only two collocated antennas for each vector sensor, thus the mutual coupling effects across the collocated antennas are reduced greatly. Moreover, the inter-sensor spacings are allowed beyond a half-wavelength, which results in an extended array aperture. Simulation results demonstrate the effectiveness and favorable performance of the proposed method. PMID:27258271

  11. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Çelebi, Mehmet

    1990-01-01

    A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method; a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
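
    A minimal sketch of the criterion: for each candidate center, the translational record is corrected to that point by rigid-body kinematics, its coherence with the torsional record is computed, integrated over frequency, and the candidate with the smallest integral is taken as the center of rigidity. The rigid-floor correction and the simulated records are illustrative assumptions:

        import numpy as np
        from scipy.signal import coherence

        fs = 100.0
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(7)

        # Simulated rigid-floor motions: translation at the true center of rigidity (x_cr = 4.0 m)
        # independent of torsion theta; a sensor at x_s records their combination.
        x_cr_true, x_s = 4.0, 10.0
        u_cr = np.cumsum(rng.normal(0, 1, t.size))           # translational motion at the CR
        theta = np.cumsum(rng.normal(0, 0.05, t.size))       # torsional (rotational) motion
        u_sensor = u_cr + (x_s - x_cr_true) * theta          # translation recorded at the sensor

        def coherence_integral(candidate_x):
            """Frequency-integrated coherence between the corrected translation and torsion."""
            u_candidate = u_sensor - (x_s - candidate_x) * theta
            f, coh = coherence(u_candidate, theta, fs=fs, nperseg=512)
            return np.sum(coh) * (f[1] - f[0])

        candidates = np.linspace(0, 10, 101)
        scores = [coherence_integral(c) for c in candidates]
        print("estimated center of rigidity x =", candidates[int(np.argmin(scores))], "m")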

  12. Combining heuristic and statistical techniques in landslide hazard assessments

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Schwendtner, Barbara; Quan, Byron; Nadim, Farrokh; Diaz, Manuel; Molina, Giovanni

    2014-05-01

    As a contribution to the Global Assessment Report 2013 - GAR2013, coordinated by the United Nations International Strategy for Disaster Reduction - UNISDR, a drill-down exercise for landslide hazard assessment was carried out by entering the results of both heuristic and statistical techniques into a new but simple combination rule. The data available for this evaluation included landslide inventories, both historical and event-based. In addition to the application of a heuristic method used in the previous editions of GAR, the availability of inventories motivated the use of statistical methods. The heuristic technique is largely based on the Mora & Vahrson method, which estimates hazard as the product of susceptibility and triggering factors, where classes are weighted based on expert judgment and experience. Two statistical methods were also applied: the landslide index method, which estimates weights of the classes for the susceptibility and triggering factors based on the evidence provided by the density of landslides in each class of the factors; and the weights of evidence method, which extends the previous technique to include both positive and negative evidence of landslide occurrence in the estimation of weights for the classes. One key aspect during the hazard evaluation was the decision on the methodology to be chosen for the final assessment. Instead of opting for a single methodology, it was decided to combine the results of the three implemented techniques using a combination rule based on a normalization of the results of each method. The hazard evaluation was performed for both earthquake- and rainfall-induced landslides. The country chosen for the drill-down exercise was El Salvador. The results indicate that highest hazard levels are concentrated along the central volcanic chain and at the centre of the northern mountains.
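
    A minimal sketch of the combination rule described above: each method's output is rescaled to a common range and the normalized maps are averaged into a single hazard score (an equal-weight average is an illustrative choice; the GAR exercise's exact rule is not reproduced):

        import numpy as np

        def normalize(raster):
            """Rescale a hazard raster to [0, 1]."""
            r = raster.astype(float)
            return (r - r.min()) / (r.max() - r.min())

        # Hypothetical per-cell outputs of the three techniques on the same grid
        mora_vahrson = np.array([[12, 30, 55], [8, 40, 60], [5, 22, 48]])            # heuristic score
        landslide_index = np.array([[0.1, 0.9, 1.8], [0.0, 1.2, 2.0], [0.05, 0.7, 1.5]])
        weights_of_evidence = np.array([[-1.2, 0.4, 1.9], [-1.5, 0.8, 2.2], [-1.8, 0.1, 1.4]])

        combined = (normalize(mora_vahrson) + normalize(landslide_index)
                    + normalize(weights_of_evidence)) / 3.0
        print(np.round(combined, 2))       # combined relative hazard, 0 = lowest, 1 = highest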

  13. Recognition of Equations Using a Two-Dimensional Stochastic Context-Free Grammar

    NASA Astrophysics Data System (ADS)

    Chou, Philip A.

    1989-11-01

    We propose using two-dimensional stochastic context-free grammars for image recognition, in a manner analogous to using hidden Markov models for speech recognition. The value of the approach is demonstrated in a system that recognizes printed, noisy equations. The system uses a two-dimensional probabilistic version of the Cocke-Younger-Kasami parsing algorithm to find the most likely parse of the observed image, and then traverses the corresponding parse tree in accordance with translation formats associated with each production rule, to produce eqn | troff commands for the imaged equation. In addition, it uses two-dimensional versions of the Inside/Outside and Baum re-estimation algorithms for learning the parameters of the grammar from a training set of examples. Parsing the image of a simple noisy equation currently takes about one second of CPU time on an Alliant FX/80.
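
    The most-likely-parse computation at the heart of the system can be sketched with an ordinary one-dimensional probabilistic CYK parser on a toy grammar in Chomsky normal form; the two-dimensional extension over image regions is not reproduced here, and the grammar, probabilities, and tokens below are invented for illustration.

```python
from collections import defaultdict

# Toy probabilistic grammar in Chomsky normal form (all probabilities assumed)
LEXICAL = {                       # A -> terminal : probability
    ("NUM", "1"): 0.5, ("NUM", "2"): 0.5,
    ("EXPR", "1"): 0.2, ("EXPR", "2"): 0.2,
    ("PLUS", "+"): 1.0,
}
BINARY = {                        # A -> B C : probability
    ("EXPR", "NUM", "PLUSEXPR"): 0.6,
    ("PLUSEXPR", "PLUS", "EXPR"): 1.0,
}

def cyk_most_likely_parse(tokens, start="EXPR"):
    n = len(tokens)
    best = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n)]
    back = [[dict() for _ in range(n + 1)] for _ in range(n)]
    for i, tok in enumerate(tokens):                       # spans of length 1
        for (A, w), p in LEXICAL.items():
            if w == tok and p > best[i][i + 1][A]:
                best[i][i + 1][A] = p
                back[i][i + 1][A] = tok
    for span in range(2, n + 1):                           # longer spans
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in BINARY.items():
                    cand = p * best[i][k][B] * best[k][j][C]
                    if cand > best[i][j][A]:
                        best[i][j][A] = cand
                        back[i][j][A] = (k, B, C)
    def build(i, j, A):                                    # recover the parse tree
        bp = back[i][j][A]
        if isinstance(bp, str):
            return (A, bp)
        k, B, C = bp
        return (A, build(i, k, B), build(k, j, C))
    return best[0][n][start], build(0, n, start)

print(cyk_most_likely_parse(["1", "+", "2"]))
# (0.06, ('EXPR', ('NUM', '1'), ('PLUSEXPR', ('PLUS', '+'), ('EXPR', '2'))))
```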

  14. Electron transport in the two-dimensional channel material - zinc oxide nanoflake

    NASA Astrophysics Data System (ADS)

    Lai, Jian-Jhong; Jian, Dunliang; Lin, Yen-Fu; Ku, Ming-Ming; Jian, Wen-Bin

    2018-03-01

    ZnO nanoflakes of 3-5 μm in lateral size and 15-20 nm in thickness are synthesized. The nanoflakes are used to make back-gated transistor devices. Electron transport in the ZnO nanoflake channel between source and drain electrodes is investigated. We first argue and confirm that the electrons form a two-dimensional system. We then apply Mott's two-dimensional variable range hopping model to analyze the temperature and electric field dependences of resistivity. The disorder parameter, localization length, hopping distance, and hopping energy of the electron system in ZnO nanoflakes are obtained and, additionally, their temperature behaviors and dependences on room-temperature resistivity are presented. In addition, the basic transfer characteristics of the channel material are measured, and the carrier concentration, mobility, and Fermi wavelength of the two-dimensional ZnO nanoflakes are estimated.
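
    A minimal sketch of the kind of fit implied by Mott's two-dimensional variable-range-hopping analysis: since ln ρ is linear in T^(-1/3), the characteristic temperature T0 and the prefactor follow from a straight-line fit. The data, noise level, and the relation quoted for T0 are illustrative assumptions, not values from the study.

```python
import numpy as np

def mott_2d_vrh(T, rho0, T0):
    """Mott variable-range hopping in two dimensions: rho = rho0 * exp((T0 / T)**(1/3))."""
    return rho0 * np.exp((T0 / T) ** (1.0 / 3.0))

# Synthetic resistivity data (illustrative numbers only)
rng = np.random.default_rng(1)
T = np.linspace(20.0, 300.0, 30)                              # temperature (K)
rho = mott_2d_vrh(T, rho0=0.05, T0=8.0e3) * (1 + 0.02 * rng.standard_normal(T.size))

# ln(rho) is linear in T**(-1/3), so the fit reduces to a straight line
slope, intercept = np.polyfit(T ** (-1.0 / 3.0), np.log(rho), 1)
T0_fit, rho0_fit = slope ** 3, np.exp(intercept)
print(T0_fit, rho0_fit)                                       # ~8e3, ~0.05

# T0 is commonly related to the density of states at the Fermi level N(E_F) and the
# localization length xi through  k_B * T0 ≈ 13.8 / (N(E_F) * xi**2)
# (2-D form; the numerical prefactor varies between references).
```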

  15. Electrical level of defects in single-layer two-dimensional TiO2

    NASA Astrophysics Data System (ADS)

    Song, X. F.; Hu, L. F.; Li, D. H.; Chen, L.; Sun, Q. Q.; Zhou, P.; Zhang, D. W.

    2015-11-01

    The remarkable properties of graphene and transition metal dichalcogenides (TMDCs) have drawn increasing attention to two-dimensional materials, but the gate oxide, one of the key components of two-dimensional electronic devices, has rarely been reported. We found that a single-layer oxide such as TiO2 can be used as the two-dimensional gate oxide in 2D electronic devices. However, the electrical performance is seriously influenced by defects in the single-layer oxide. In this paper, a nondestructive and noncontact approach based on spectroscopic ellipsometry is used to detect the defect states and energy levels of single-layer TiO2 films. By fitting a Lorentz oscillator model, the results indicate that the exact position of the defect energy levels depends on the estimated band gap and the charge state of the point defects of TiO2.
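
    A sketch of the Lorentz oscillator model used in such ellipsometry fits, assuming the common parameterization with an amplitude, resonance energy, and broadening per oscillator; the parameter values and the placement of a sub-gap "defect" oscillator are invented for illustration.

```python
import numpy as np

def lorentz_dielectric(E, eps_inf, amplitudes, centers, broadenings):
    """Complex dielectric function as a sum of Lorentz oscillators.

    eps(E) = eps_inf + sum_j A_j * E0_j**2 / (E0_j**2 - E**2 - 1j * Gamma_j * E)

    E           : photon energy array (eV)
    eps_inf     : high-frequency dielectric constant
    amplitudes  : oscillator strengths A_j (dimensionless)
    centers     : resonance energies E0_j (eV)
    broadenings : damping terms Gamma_j (eV)
    """
    E = np.asarray(E, dtype=float)
    eps = np.full(E.shape, eps_inf, dtype=complex)
    for A, E0, G in zip(amplitudes, centers, broadenings):
        eps += A * E0**2 / (E0**2 - E**2 - 1j * G * E)
    return eps

# Illustrative parameters: one band-gap oscillator plus a weak sub-gap "defect" oscillator
E = np.linspace(0.5, 6.0, 500)
eps = lorentz_dielectric(E, eps_inf=1.0,
                         amplitudes=[3.0, 0.15],
                         centers=[4.2, 2.4],       # eV; sub-gap peak stands in for a defect level
                         broadenings=[0.6, 0.3])
n_complex = np.sqrt(eps)                           # complex refractive index n + ik
print(n_complex[np.argmax(eps.imag)])
```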

  16. Seismic hazard map of North and Central America and the Caribbean

    USGS Publications Warehouse

    Shedlock, K.M.

    1999-01-01

    Minimization of the loss of life, property damage, and social and economic disruption due to earthquakes depends on reliable estimates of seismic hazard. National, state, and local government, decision makers, engineers, planners, emergency response organizations, builders, universities, and the general public require seismic hazard estimates for land use planning, improved building design and construction (including adoption of building construction codes), emergency response preparedness plans, economic forecasts, housing and employment decisions, and many more types of risk mitigation. The seismic hazard map of North and Central America and the Caribbean is the concatenation of various national and regional maps, involving a suite of approaches. The combined maps and documentation provide a useful regional seismic hazard framework and serve as a resource for any national or regional agency for further detailed studies applicable to their needs. This seismic hazard map depicts Peak Ground Acceleration (PGA) with a 10% chance of exceedance in 50 years. PGA, a short-period ground motion parameter that is proportional to force, is the most commonly mapped ground motion parameter because current building codes that include seismic provisions specify the horizontal force a building should be able to withstand during an earthquake. This seismic hazard map of North and Central America and the Caribbean depicts the likely level of short-period ground motion from earthquakes in a fifty-year window. Short-period ground motions affect short-period structures (e.g., one-to-two story buildings). The highest seismic hazard values in the region generally occur in areas that have been, or are likely to be, the sites of the largest plate boundary earthquakes.
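
    The 10%-in-50-years map level corresponds to a mean return period of roughly 475 years under the usual Poisson occurrence assumption, as the short sketch below shows.

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period for an exceedance probability p_exceed over t_years,
    assuming a Poisson occurrence model: p = 1 - exp(-t / T)."""
    return -t_years / math.log(1.0 - p_exceed)

print(round(return_period(0.10, 50)))   # ~475 years for the 10%-in-50-years map level
```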

  17. Sensitivity of tsunami evacuation modeling to direction and land cover assumptions

    USGS Publications Warehouse

    Schmidtlein, Mathew C.; Wood, Nathan J.

    2015-01-01

    Although anisotropic least-cost-distance (LCD) modeling is becoming a common tool for estimating pedestrian-evacuation travel times out of tsunami hazard zones, there has been insufficient attention paid to understanding model sensitivity behind the estimates. To support tsunami risk-reduction planning, we explore two aspects of LCD modeling as it applies to pedestrian evacuations and use the coastal community of Seward, Alaska, as our case study. First, we explore the sensitivity of modeling to the direction of movement by comparing standard safety-to-hazard evacuation times to hazard-to-safety evacuation times for a sample of 3985 points in Seward's tsunami-hazard zone. Safety-to-hazard evacuation times slightly overestimated hazard-to-safety evacuation times but the strong relationship to the hazard-to-safety evacuation times, slightly conservative bias, and shorter processing times of the safety-to-hazard approach make it the preferred approach. Second, we explore how variations in land cover speed conservation values (SCVs) influence model performance using a Monte Carlo approach with one thousand sets of land cover SCVs. The LCD model was relatively robust to changes in land cover SCVs with the magnitude of local model sensitivity greatest in areas with higher evacuation times or with wetland or shore land cover types, where model results may slightly underestimate travel times. This study demonstrates that emergency managers should be concerned not only with populations in locations with evacuation times greater than wave arrival times, but also with populations with evacuation times lower than but close to expected wave arrival times, particularly if they are required to cross wetlands or beaches.

  18. Two- and three-dimensional CT measurements of urinary calculi length and width: a comparative study.

    PubMed

    Lidén, Mats; Thunberg, Per; Broxvall, Mathias; Geijer, Håkan

    2015-04-01

    The standard imaging procedure for a patient presenting with renal colic is unenhanced computed tomography (CT). The CT measured size has a close correlation to the estimated prognosis for spontaneous passage of a ureteral calculus. Size estimations of urinary calculi in CT images are still based on two-dimensional (2D) reformats. To develop and validate a calculus oriented three-dimensional (3D) method for measuring the length and width of urinary calculi and to compare the calculus oriented measurements of the length and width with corresponding 2D measurements obtained in axial and coronal reformats. Fifty unenhanced CT examinations demonstrating urinary calculi were included. A 3D symmetric segmentation algorithm was validated against reader size estimations. The calculus oriented size from the segmentation was then compared to the estimated size in axial and coronal 2D reformats. The validation showed 0.1 ± 0.7 mm agreement against the reference measurement. There was a 0.4 mm median bias for 3D estimated calculus length compared to 2D (P < 0.001), but no significant bias for 3D width compared to 2D. The length of a calculus in axial and coronal reformats is underestimated compared to 3D if its orientation is not aligned with the image planes. Future studies aiming to correlate calculus size with patient outcome should use a calculus oriented size estimation. © The Foundation Acta Radiologica 2014.
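
    The study's segmentation algorithm is not reproduced here, but the idea of a calculus-oriented size can be sketched by measuring extents along the principal axes of the segmented voxel cloud; the synthetic point cloud and helper names below are assumptions for illustration.

```python
import numpy as np

def oriented_length_width(voxel_coords_mm):
    """Length and width of a segmented object along its own principal axes.

    voxel_coords_mm : (N, 3) array of voxel centre coordinates in millimetres
                      (i.e. index coordinates already scaled by the voxel spacing).
    Returns (length, width) as the extents along the first two principal axes.
    """
    pts = np.asarray(voxel_coords_mm, dtype=float)
    centred = pts - pts.mean(axis=0)
    # Principal axes from the singular vectors of the centred voxel cloud
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt.T                      # coordinates in the object frame
    extents = proj.max(axis=0) - proj.min(axis=0)
    return extents[0], extents[1]

# Synthetic elongated point cloud tilted out of the axial plane (stand-in for a stone)
rng = np.random.default_rng(2)
u = rng.standard_normal((2000, 3)) * [4.0, 1.5, 1.0]       # anisotropic spread in mm
angle = np.radians(35)
R = np.array([[np.cos(angle), 0, -np.sin(angle)],
              [0, 1, 0],
              [np.sin(angle), 0, np.cos(angle)]])
pts = u @ R.T
print(oriented_length_width(pts))                # object-oriented length and width
axial_extent = pts[:, :2].max(axis=0) - pts[:, :2].min(axis=0)
print(axial_extent.max())                        # in-plane measure underestimates the length
```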

  19. Improved Analysis of GW150914 Using a Fully Spin-Precessing Waveform Model

    NASA Technical Reports Server (NTRS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Camp, J. B.; hide

    2016-01-01

    This paper presents updated estimates of source parameters for GW150914, a binary black-hole coalescence event detected by the Laser Interferometer Gravitational-wave Observatory (LIGO) in 2015 [Abbott et al. Phys. Rev. Lett. 116, 061102 (2016).]. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016).] presented parameter estimation of the source using a 13-dimensional, phenomenological precessing-spin model (precessing IMRPhenom) and an 11-dimensional nonprecessing effective-one-body (EOB) model calibrated to numerical-relativity simulations, which forces spin alignment (nonprecessing EOBNR). Here, we present new results that include a 15-dimensional precessing-spin waveform model (precessing EOBNR) developed within the EOB formalism. We find good agreement with the parameters estimated previously [Abbott et al. Phys. Rev. Lett. 116, 241102 (2016).], and we quote updated component masses of 35(+5)(-3) solar masses and 30(+3)(-4) solar masses (where errors correspond to 90% symmetric credible intervals). We also present slightly tighter constraints on the dimensionless spin magnitudes of the two black holes, with the primary spin estimated to be less than 0.65 and the secondary spin less than 0.75 at 90% probability. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016).] estimated the systematic parameter-extraction errors due to waveform-model uncertainty by combining the posterior probability densities of precessing IMRPhenom and nonprecessing EOBNR. Here, we find that the two precessing-spin models are in closer agreement, suggesting that these systematic errors are smaller than previously quoted.

  20. Estimation of age- and stage-specific Catalan breast cancer survival functions using US and Catalan survival data

    PubMed Central

    2009-01-01

    Background During the last part of the 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects. Both, the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, required survival functions may be different by age groups and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain). Then one may derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods Cubic splines were used to smooth data and obtain continuous hazard rate functions. After, we fitted a Poisson model to derive hazard ratios. The model included time as a covariate. Then the hazard ratios were applied to US survival functions detailed by age and stage to obtain Catalan estimations. Results We started estimating the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods, before cancer control interventions (USA 1975–79, Catalonia 1980–89) and after (USA and Catalonia 1990–2001). Survival in Catalonia in the 1980–89 period was worse than in the USA during 1975–79, but the differences disappeared in 1990–2001. Conclusion Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia. On the other hand, we obtained detailed breast cancer survival functions that will be used for modeling the effect of screening and adjuvant treatments in Catalonia. PMID:19331670
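
    A minimal sketch of the modelling chain described above, assuming the statsmodels package: a Poisson regression with a log person-time offset yields a hazard ratio, which is then applied to a reference hazard series to rebuild a survival function. All counts and person-years below are invented, and time enters only as a simple linear term rather than through the smoothed functions used in the study.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative grouped survival data: events and person-years per follow-up year,
# for a US reference series and a Catalan series (all numbers invented).
years      = np.arange(1, 6)
events_us  = np.array([120, 95, 80, 60, 50])
pyears_us  = np.array([10000, 9500, 9000, 8600, 8200])
events_cat = np.array([30, 26, 22, 18, 15])
pyears_cat = np.array([2000, 1900, 1820, 1750, 1690])

# Poisson regression for the event counts with log person-time as offset;
# covariates: follow-up time and a Catalonia indicator.
counts   = np.concatenate([events_us, events_cat])
pyears   = np.concatenate([pyears_us, pyears_cat])
time_cov = np.concatenate([years, years])
catalan  = np.concatenate([np.zeros(5), np.ones(5)])
X = sm.add_constant(np.column_stack([time_cov, catalan]))

fit = sm.GLM(counts, X, family=sm.families.Poisson(),
             offset=np.log(pyears)).fit()
hr = np.exp(fit.params[2])          # Catalonia-vs-US hazard ratio
print(hr)

# Apply the ratio to a (stage- and age-specific) US hazard series and rebuild survival
h_us  = events_us / pyears_us
h_cat = hr * h_us
S_cat = np.exp(-np.cumsum(h_cat))   # yearly intervals assumed
print(S_cat)
```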

  1. Development and validation of a two-dimensional fast-response flood estimation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R; Mcpherson, Timothy N; Burian, Steven J

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
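
    The paper's scheme is a two-dimensional upwind finite-difference solver; as a structural sketch only, the snippet below advances the one-dimensional shallow water equations with a first-order Rusanov (local Lax-Friedrichs) flux on an idealized dam-break. This is a different, simpler discretization than the one developed in the study.

```python
import numpy as np

g = 9.81

def flux(U):
    """Physical flux of the 1-D shallow water equations, U = [h, h*u]."""
    h, hu = U
    u = hu / np.maximum(h, 1e-8)
    return np.array([hu, hu * u + 0.5 * g * h**2])

def rusanov_step(U, dx, cfl=0.45):
    """One explicit step with the Rusanov (local Lax-Friedrichs) interface flux."""
    h, hu = U
    u = hu / np.maximum(h, 1e-8)
    c = np.abs(u) + np.sqrt(g * h)                 # local wave speeds
    dt = cfl * dx / c.max()                        # CFL-limited time step
    UL, UR = U[:, :-1], U[:, 1:]
    a = np.maximum(c[:-1], c[1:])
    F = 0.5 * (flux(UL) + flux(UR)) - 0.5 * a * (UR - UL)
    Unew = U.copy()
    Unew[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
    return Unew, dt

# Idealised dam-break test: 2 m of water upstream, 0.5 m downstream
nx, L = 400, 1000.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
h = np.where(x < L / 2, 2.0, 0.5)
U = np.vstack([h, np.zeros(nx)])

t, t_end = 0.0, 60.0
while t < t_end:
    U, dt = rusanov_step(U, dx)
    t += dt

print("max depth:", U[0].max(), "max velocity:", (U[1] / U[0]).max())
```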

  2. Use of cone beam computed tomography in periodontology

    PubMed Central

    Acar, Buket; Kamburoğlu, Kıvanç

    2014-01-01

    Diagnosis of periodontal disease mainly depends on clinical signs and symptoms. However, in the case of bone destruction, radiographs are valuable diagnostic tools as an adjunct to the clinical examination. Two-dimensional periapical and panoramic radiographs are routinely used for diagnosing periodontal bone levels. In two-dimensional imaging, evaluation of bone craters, lamina dura and periodontal bone level is limited by projection geometry and superpositions of adjacent anatomical structures. Those limitations of 2D radiographs can be eliminated by three-dimensional imaging techniques such as computed tomography. Cone beam computed tomography (CBCT) generates 3D volumetric images and is also commonly used in dentistry. All CBCT units provide axial, coronal and sagittal multi-planar reconstructed images without magnification. Also, panoramic images without distortion and magnification can be generated with curved planar reformation. CBCT displays 3D images that are necessary for the diagnosis of intrabony defects, furcation involvements and buccal/lingual bone destructions. CBCT applications provide obvious benefits in periodontics; however, it should be used only when correctly indicated, considering the necessity and the potential hazards of the examination. PMID:24876918

  3. Comparing the Performance of Commonly Available Digital Elevation Models in GIS-based Flood Simulation

    NASA Astrophysics Data System (ADS)

    Ybanez, R. L.; Lagmay, A. M. A.; David, C. P.

    2016-12-01

    With climatological hazards increasing globally, the Philippines is listed as one of the most vulnerable countries in the world due to its location in the Western Pacific. Flood hazards mapping and modelling is one of the responses by local government and research institutions to help prepare for and mitigate the effects of flood hazards that constantly threaten towns and cities in floodplains during the 6-month rainy season. Available digital elevation maps, which serve as the most important dataset used in 2D flood modelling, are limited in the Philippines and testing is needed to determine which of the few would work best for flood hazards mapping and modelling. Two-dimensional GIS-based flood modelling with the flood-routing software FLO-2D was conducted using three different available DEMs from the ASTER GDEM, the SRTM GDEM, and the locally available IfSAR DTM. All other parameters kept uniform, such as resolution, soil parameters, rainfall amount, and surface roughness, the three models were run over a 129-sq. kilometer watershed with only the basemap varying. The output flood hazard maps were compared on the basis of their flood distribution, extent, and depth. The ASTER and SRTM GDEMs contained too much error and noise which manifested as dissipated and dissolved hazard areas in the lower watershed where clearly delineated flood hazards should be present. Noise on the two datasets are clearly visible as erratic mounds in the floodplain. The dataset which produced the only feasible flood hazard map is the IfSAR DTM which delineates flood hazard areas clearly and properly. Despite the use of ASTER and SRTM with their published resolution and accuracy, their use in GIS-based flood modelling would be unreliable. Although not as accessible, only IfSAR or better datasets should be used for creating secondary products from these base DEM datasets. For developing countries which are most prone to hazards, but with limited choices for basemaps used in hazards studies, the caution must be taken in the use of globally available GDEMs and higher-resolution DEMs must always be sought.

  4. The potential monetary benefits of reclaiming hazardous waste sites in the Campania region: an economic evaluation.

    PubMed

    Guerriero, Carla; Cairns, John

    2009-06-24

    Evaluating the economic benefit of reducing negative health outcomes resulting from waste management is of pivotal importance for designing an effective waste policy that takes into account the health consequences for the populations exposed to environmental hazards. Despite the high level of Italian and international media interest in the problem of hazardous waste in Campania, little has been done to reclaim the land and the waterways contaminated by hazardous waste. This study aims to reduce the uncertainty about health damage due to waste exposure by providing for the first time a monetary valuation of health benefits arising from the reclamation of hazardous waste dumps in Campania. First, the criteria by which the landfills in the Campania region, in particular in the two provinces of Naples and Caserta, have been classified are described. Then, the annual cases of premature death and fatal cases of cancers attributable to waste exposure are quantified. Finally, the present value of the health benefits from the reclamation of polluted land is estimated for each of the health outcomes (premature mortality, fatal cancer and premature mortality adjusted for the cancer premium). Due to the uncertainty about the time frame of the benefits arising from reclamation, the latency of the effects of toxic waste on human health and the lack of context specific estimates of the Value of Preventing a Fatality (VPF), extensive sensitivity analyses are performed. There are estimated to be 848 cases of premature mortality and 403 cases of fatal cancer per year as a consequence of exposure to toxic waste. The present value of the benefit of reducing the number of waste associated deaths after adjusting for a cancer premium is €11.6 billion. This value ranges from €5.4 to €20.0 billion assuming a time frame for benefits of 10 and 50 years, respectively. This study suggests that there is a strong economic argument for both reclaiming the land contaminated with hazardous waste in the two provinces of Naples and Caserta and increasing the control of the territory in order to avoid the creation of new illegal dump sites.

  5. Space Geodesy and the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    Smalley, Robert; Ellis, Michael A.

    2008-07-01

    One of the most contentious issues related to earthquake hazards in the United States centers on the midcontinent and the origin, magnitudes, and likely recurrence intervals of the 1811-1812 New Madrid earthquakes that occurred there. The stakeholder groups in the debate (local and state governments, reinsurance companies, American businesses, and the scientific community) are similar to the stakeholder groups in regions more famous for large earthquakes. However, debate about New Madrid seismic hazard has been fiercer because of the lack of two fundamental components of seismic hazard estimation: an explanatory model for large, midplate earthquakes; and sufficient or sufficiently precise data about the causes, effects, and histories of such earthquakes.

  6. Reducing fire hazard: balancing costs and outcomes.

    Treesearch

    Valerie Rapp

    2004-01-01

    Massive wildfires in recent years have given urgency to questions of how to reduce fire hazard in Western forests, how to finance the work, and how to use the wood, especially in forests crowded with small trees. Scientists have already developed tools that estimate fire hazard in a forest stand. But hazard is more difficult to estimate at a landscape scale, involving...

  7. An interaction algorithm for prediction of mean and fluctuating velocities in two-dimensional aerodynamic wake flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Orzechowski, J. A.

    1980-01-01

    A theoretical analysis is presented yielding sets of partial differential equations for determination of turbulent aerodynamic flowfields in the vicinity of an airfoil trailing edge. A four phase interaction algorithm is derived to complete the analysis. Following input, the first computational phase is an elementary viscous corrected two dimensional potential flow solution yielding an estimate of the inviscid-flow induced pressure distribution. Phase C involves solution of the turbulent two dimensional boundary layer equations over the trailing edge, with transition to a two dimensional parabolic Navier-Stokes equation system describing the near-wake merging of the upper and lower surface boundary layers. An iteration provides refinement of the potential flow induced pressure coupling to the viscous flow solutions. The final phase is a complete two dimensional Navier-Stokes analysis of the wake flow in the vicinity of a blunt-based airfoil. A finite element numerical algorithm is presented which is applicable to the solution of all partial differential equation sets of the inviscid-viscous aerodynamic interaction algorithm. Numerical results are discussed.

  8. Nonequilibrium critical dynamics of the two-dimensional Ashkin-Teller model at the Baxter line

    NASA Astrophysics Data System (ADS)

    Fernandes, H. A.; da Silva, R.; Caparica, A. A.; de Felício, J. R. Drugowich

    2017-04-01

    We investigate the short-time universal behavior of the two-dimensional Ashkin-Teller model at the Baxter line by performing time-dependent Monte Carlo simulations. First, as preparatory results, we obtain the critical parameters by searching the optimal power-law decay of the magnetization. Thus, the dynamic critical exponents θm and θp, related to the magnetic and electric order parameters, as well as the persistence exponent θg, are estimated using heat-bath Monte Carlo simulations. In addition, we estimate the dynamic exponent z and the static critical exponents β and ν for both order parameters. We propose a refined method to estimate the static exponents that considers two different averages: one that combines an internal average using several seeds with another, which is taken over temporal variations in the power laws. Moreover, we also performed the bootstrapping method for a complementary analysis. Our results show that the ratio β /ν exhibits universal behavior along the critical line corroborating the conjecture for both magnetization and polarization.
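
    The preparatory step of locating criticality from the optimal power-law decay of the magnetization amounts to a straight-line fit in log-log space, sketched below on synthetic data; the exponent value and noise model are assumptions.

```python
import numpy as np

def power_law_exponent(t, m, t_min=10):
    """Exponent of M(t) ~ t**(-theta_eff) from a straight-line fit in log-log space.

    In short-time critical dynamics the order parameter decays as a power law at
    criticality, so the fitted slope plays the role of the exponent combination
    governing the decay, e.g. beta/(nu*z) for a magnetization started from an
    ordered state.
    """
    mask = t >= t_min                          # discard the microscopic time regime
    slope, _ = np.polyfit(np.log(t[mask]), np.log(m[mask]), 1)
    return -slope

# Synthetic decay with 1% multiplicative noise (illustrative only)
t = np.arange(1, 2001)
rng = np.random.default_rng(3)
m = t ** -0.056 * (1 + 0.01 * rng.standard_normal(t.size))
print(power_law_exponent(t, m))                # ~0.056
```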

  9. Evaluation of levee setbacks for flood-loss reduction, Middle Mississippi River, USA

    NASA Astrophysics Data System (ADS)

    Dierauer, Jennifer; Pinter, Nicholas; Remo, Jonathan W. F.

    2012-07-01

    One-dimensional hydraulic modeling and flood-loss modeling were used to test the effectiveness of levee setbacks for flood-loss reduction along the Middle Mississippi River (MMR). Four levee scenarios were assessed: (1) the present-day levee configuration, (2) a 1000 m levee setback, (3) a 1500 m levee setback, and (4) an optimized setback configuration. Flood losses were estimated using FEMA's Hazus-MH (Hazards US Multi-Hazard) loss-estimation software on a structure-by-structure basis for a range of floods from the 2- to the 500-year events. These flood-loss estimates were combined with a levee-reliability model to calculate probability-weighted damage estimates. In the simplest case, the levee setback scenarios tested here reduced flood losses compared to current conditions for large, infrequent flooding events but increased flood losses for smaller, more frequent flood events. These increases occurred because levee protection was removed for some of the existing structures. When combined with buyouts of unprotected structures, levee setbacks reduced flood losses for all recurrence intervals. The "optimized" levee setback scenario, involving a levee configuration manually planned to protect existing high-value infrastructure, reduced damages with or without buyouts. This research shows that levee setbacks in combination with buyouts are an economically viable approach for flood-risk reduction along the study reach and likely elsewhere where levees are widely employed for flood control. Designing a levee setback around existing high-value infrastructure can maximize the benefit of the setback while simultaneously minimizing the costs. The optimized levee setback scenario analyzed here produced payback periods (costs divided by benefits) of less than 12 years. With many aging levees failing current inspections across the US, and flood losses spiraling up over time, levee setbacks are a viable solution for reducing flood exposure and flood levels.

  10. Sample size calculation for studies with grouped survival data.

    PubMed

    Li, Zhiguo; Wang, Xiaofei; Wu, Yuan; Owzar, Kouros

    2018-06-10

    Grouped survival data arise often in studies where the disease status is assessed at regular visits to clinic. The time to the event of interest can only be determined to be between two adjacent visits or is right censored at one visit. In data analysis, replacing the survival time with the endpoint or midpoint of the grouping interval leads to biased estimators of the effect size in group comparisons. Prentice and Gloeckler developed a maximum likelihood estimator for the proportional hazards model with grouped survival data and the method has been widely applied. Previous work on sample size calculation for designing studies with grouped data is based on either the exponential distribution assumption or the approximation of variance under the alternative with variance under the null. Motivated by studies in HIV trials, cancer trials and in vitro experiments to study drug toxicity, we develop a sample size formula for studies with grouped survival endpoints that use the method of Prentice and Gloeckler for comparing two arms under the proportional hazards assumption. We do not impose any distributional assumptions, nor do we use any approximation of variance of the test statistic. The sample size formula only requires estimates of the hazard ratio and survival probabilities of the event time of interest and the censoring time at the endpoints of the grouping intervals for one of the two arms. The formula is shown to perform well in a simulation study and its application is illustrated in the three motivating examples. Copyright © 2018 John Wiley & Sons, Ltd.
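
    For orientation only, the sketch below uses the standard Schoenfeld events approximation for a two-arm proportional-hazards comparison; it is not the Prentice-Gloeckler-based formula developed in the paper, and the hazard ratio and event probabilities are invented inputs of the same kind such a calculation requires.

```python
from math import log, ceil
from scipy.stats import norm

def schoenfeld_sample_size(hr, p_event_a, p_event_b, alpha=0.05, power=0.80,
                           alloc_a=0.5):
    """Two-arm sample size under proportional hazards (Schoenfeld approximation).

    hr                   : hazard ratio to detect
    p_event_a, p_event_b : probability of observing the event during follow-up
                           in each arm (a coarse way to account for censoring)
    alloc_a              : proportion of subjects allocated to arm A
    """
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    alloc_b = 1 - alloc_a
    d = (z_a + z_b) ** 2 / (alloc_a * alloc_b * log(hr) ** 2)   # required events
    p_event = alloc_a * p_event_a + alloc_b * p_event_b
    return ceil(d / p_event)                                    # required subjects

print(schoenfeld_sample_size(hr=0.7, p_event_a=0.6, p_event_b=0.5))
```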

  11. Evaluation of long-term survival: use of diagnostics and robust estimators with Cox's proportional hazards model.

    PubMed

    Valsecchi, M G; Silvestri, D; Sasieni, P

    1996-12-30

    We consider methodological problems in evaluating long-term survival in clinical trials. In particular we examine the use of several methods that extend the basic Cox regression analysis. In the presence of a long term observation, the proportional hazard (PH) assumption may easily be violated and a few long term survivors may have a large effect on parameter estimates. We consider both model selection and robust estimation in a data set of 474 ovarian cancer patients enrolled in a clinical trial and followed for between 7 and 12 years after randomization. Two diagnostic plots for assessing goodness-of-fit are introduced. One shows the variation in time of parameter estimates and is an alternative to PH checking based on time-dependent covariates. The other takes advantage of the martingale residual process in time to represent the lack of fit with a metric of the type 'observed minus expected' number of events. Robust estimation is carried out by maximizing a weighted partial likelihood which downweights the contribution to estimation of influential observations. This type of complementary analysis of long-term results of clinical studies is useful in assessing the soundness of the conclusions on treatment effect. In the example analysed here, the difference in survival between treatments was mostly confined to those individuals who survived at least two years beyond randomization.
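
    A routine version of this kind of check can be sketched with the lifelines package (an assumption; the paper does not use it): fit a Cox model, test the proportional-hazards assumption via scaled Schoenfeld residuals, and crudely probe robustness by refitting without the most extreme residuals. This stands in for, but is not, the authors' diagnostic plots and weighted partial likelihood.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# Example data shipped with lifelines (not the ovarian-cancer trial analysed in the paper)
df = load_rossi()

cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.summary[["coef", "exp(coef)", "p"]])

# Proportional-hazards diagnostics based on scaled Schoenfeld residuals;
# a time-varying pattern in a covariate's residuals flags a PH violation,
# similar in spirit to plotting the variation in time of the parameter estimates.
cph.check_assumptions(df, p_value_threshold=0.05, show_plots=False)

# A crude robustness check in the spirit of down-weighting influential observations:
# refit after dropping the observations with the largest deviance residuals.
dev = cph.compute_residuals(df, kind="deviance").iloc[:, 0]
keep = dev.abs() < dev.abs().quantile(0.95)
cph_trimmed = CoxPHFitter().fit(df.loc[keep[keep].index],
                                duration_col="week", event_col="arrest")
print(cph_trimmed.summary[["coef", "exp(coef)"]])
```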

  12. Phase retrieval in digital speckle pattern interferometry by application of two-dimensional active contours called snakes.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2006-03-20

    We propose a novel approach to retrieving the phase map coded by a single closed-fringe pattern in digital speckle pattern interferometry, which is based on the estimation of the local sign of the quadrature component. We obtain the estimate by calculating the local orientation of the fringes that have previously been denoised by a weighted smoothing spline method. We carry out the procedure of sign estimation by determining the local abrupt jumps of size pi in the orientation field of the fringes and by segmenting the regions defined by these jumps. The segmentation method is based on the application of two-dimensional active contours (snakes), with which one can also estimate absent jumps, i.e., those that cannot be detected from the local orientation of the fringes. The performance of the proposed phase-retrieval technique is evaluated for synthetic and experimental fringes and compared with the results obtained with the spiral-phase- and Fourier-transform methods.

  13. Evaluating earthquake hazards in the Los Angeles region; an earth-science perspective

    USGS Publications Warehouse

    Ziony, Joseph I.

    1985-01-01

    Potentially destructive earthquakes are inevitable in the Los Angeles region of California, but hazards prediction can provide a basis for reducing damage and loss. This volume identifies the principal geologically controlled earthquake hazards of the region (surface faulting, strong shaking, ground failure, and tsunamis), summarizes methods for characterizing their extent and severity, and suggests opportunities for their reduction. Two systems of active faults generate earthquakes in the Los Angeles region: northwest-trending, chiefly horizontal-slip faults, such as the San Andreas, and west-trending, chiefly vertical-slip faults, such as those of the Transverse Ranges. Faults in these two systems have produced more than 40 damaging earthquakes since 1800. Ninety-five faults have slipped in late Quaternary time (approximately the past 750,000 yr) and are judged capable of generating future moderate to large earthquakes and displacing the ground surface. Average rates of late Quaternary slip or separation along these faults provide an index of their relative activity. The San Andreas and San Jacinto faults have slip rates measured in tens of millimeters per year, but most other faults have rates of about 1 mm/yr or less. Intermediate rates of as much as 6 mm/yr characterize a belt of Transverse Ranges faults that extends from near Santa Barbara to near San Bernardino. The dimensions of late Quaternary faults provide a basis for estimating the maximum sizes of likely future earthquakes in the Los Angeles region: moment magnitude .(M) 8 for the San Andreas, M 7 for the other northwest-trending elements of that fault system, and M 7.5 for the Transverse Ranges faults. Geologic and seismologic evidence along these faults, however, suggests that, for planning and designing noncritical facilities, appropriate sizes would be M 8 for the San Andreas, M 7 for the San Jacinto, M 6.5 for other northwest-trending faults, and M 6.5 to 7 for the Transverse Ranges faults. The geologic and seismologic record indicates that parts of the San Andreas and San Jacinto faults have generated major earthquakes having recurrence intervals of several tens to a few hundred years. In contrast, the geologic evidence at points along other active faults suggests recurrence intervals measured in many hundreds to several thousands of years. The distribution and character of late Quaternary surface faulting permit estimation of the likely location, style, and amount of future surface displacements. An extensive body of geologic and geotechnical information is used to evaluate areal differences in future levels of shaking. Bedrock and alluvial deposits are differentiated according to the physical properties that control shaking response; maps of these properties are prepared by analyzing existing geologic and soils maps, the geomorphology of surficial units, and. geotechnical data obtained from boreholes. The shear-wave velocities of near-surface geologic units must be estimated for some methods of evaluating shaking potential. Regional-scale maps of highly generalized shearwave velocity groups, based on the age and texture of exposed geologic units and on a simple two-dimensional model of Quaternary sediment distribution, provide a first approximation of the areal variability in shaking response. 
    More accurate depictions of near-surface shear-wave velocity useful for predicting ground-motion parameters take into account the thickness of the Quaternary deposits, vertical variations in sediment type, and the correlation of shear-wave velocity with standard penetration resistance of different sediments. A map of the upper Santa Ana River basin showing shear-wave velocities to depths equal to one-quarter wavelength of a 1-s shear wave demonstrates the three-dimensional mapping procedure. Four methods for predicting the distribution and strength of shaking from future earthquakes are presented. These techniques use different measures of strong-motion

  14. Precise DOA Estimation Using SAGE Algorithm with a Cylindrical Array

    NASA Astrophysics Data System (ADS)

    Takanashi, Masaki; Nishimura, Toshihiko; Ogawa, Yasutaka; Ohgane, Takeo

    A uniform circular array (UCA) is a well-known array configuration that can estimate directions over a 360° field of view with uniform accuracy. However, a UCA cannot estimate coherent signals because spatial smoothing preprocessing (SSP) cannot be applied owing to the structure of the UCA. Although a variety of studies on UCAs in coherent multipath environments have been carried out, it remains impossible to estimate the DOA of coherent signals with different incident polar angles. We previously proposed a Root-MUSIC algorithm with a cylindrical array; however, its estimation performance is degraded when incident signals arrive with close polar angles. To solve this problem, in this letter we propose to use the SAGE algorithm with a cylindrical array. We adopt CLA Root-MUSIC for the initial estimation and decompose the two-dimensional search into a double one-dimensional search to reduce the calculation load. The results show that the proposed method achieves high resolution with low complexity.

  15. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness; their use in driving mesh refinement is demonstrated for two-dimensional plane-stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  16. Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.

    PubMed

    Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian

    2016-01-20

    This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.

  17. The challenge of estimating the SWOT signal and error spectra over the Ocean and its applications to CalVal and state estimation problems

    NASA Astrophysics Data System (ADS)

    Ubelmann, C.; Gerald, D.

    2016-12-01

    The SWOT data validation will be a first challenge after launch, as the nature of the measurement, in particular the two-dimensionality at short spatial scales, is new in altimetry. While comparison with independent observations may be locally possible, validation of the full signal and error spectrum will be challenging. However, some recent analyses in simulations have shown the possibility to separate the geophysical signals from the spatially coherent instrumental errors in the spectral space, through cross-spectral analysis. These results suggest that, soon after launch, the instrument error can be spectrally separated, providing some validation of and insight into the Ocean energy spectrum, as well as optimal calibrations. Beyond CalVal, such spectral computations will also be essential for producing high-level Ocean estimates (two- and three-dimensional Ocean state reconstructions).

  18. Stress Recovery and Error Estimation for Shell Structures

    NASA Technical Reports Server (NTRS)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  19. Usability Evaluation of a Flight-Deck Airflow Hazard Visualization System

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.

    2004-01-01

    Many aircraft accidents each year are caused by encounters with unseen airflow hazards near the ground, such as vortices, downdrafts, low level wind shear, microbursts, or turbulence from surrounding vegetation or structures near the landing site. These hazards can be dangerous even to airliners; there have been hundreds of fatalities in the United States in the last two decades attributable to airliner encounters with microbursts and low level wind shear alone. However, helicopters are especially vulnerable to airflow hazards because they often have to operate in confined spaces and under operationally stressful conditions (such as emergency search and rescue, military or shipboard operations). Providing helicopter pilots with an augmented-reality display visualizing local airflow hazards may be of significant benefit. However, the form such a visualization might take, and whether it does indeed provide a benefit, had not been studied before our experiment. We recruited experienced military and civilian helicopter pilots for a preliminary usability study to evaluate a prototype augmented-reality visualization system. The study had two goals: first, to assess the efficacy of presenting airflow data in flight; and second, to obtain expert feedback on sample presentations of hazard indicators to refine our design choices. The study addressed the optimal way to provide critical safety information to the pilot, what level of detail to provide, whether to display specific aerodynamic causes or potential effects only, and how to safely and effectively shift the locus of attention during a high-workload task. Three-dimensional visual cues, with varying shape, color, transparency, texture, depth cueing, and use of motion, depicting regions of hazardous airflow, were developed and presented to the pilots. The study results indicated that such a visualization system could be of significant value in improving safety during critical takeoff and landing operations, and also gave clear indications of the best design choices in producing the hazard visual cues.

  20. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    PubMed

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.

  1. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    PubMed Central

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA. PMID:24892059
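
    A minimal Gaussian EDA with elitism, showing the generic loop the abstract describes (sample, select the best individuals, re-estimate the Gaussian, carry elites forward). The update rules, population sizes, and test function below are assumptions and do not reproduce FEGEDA's fast learning rule.

```python
import numpy as np

def gaussian_eda(objective, dim, pop_size=60, n_best=15, n_elite=2,
                 iters=200, seed=0):
    """Minimal Gaussian estimation-of-distribution algorithm with elitism.

    A diagonal Gaussian is repeatedly re-estimated from the best individuals of
    the current population; elite individuals are copied unchanged into the next
    generation (illustrative sketch, not the FEGEDA update rules).
    """
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-5.0, 5.0, dim)
    std = np.full(dim, 2.0)
    elite = None
    for _ in range(iters):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        if elite is not None:
            pop[:n_elite] = elite                     # elitism: keep the best-so-far
        fitness = np.apply_along_axis(objective, 1, pop)
        order = np.argsort(fitness)
        best = pop[order[:n_best]]
        elite = pop[order[:n_elite]]
        mean = best.mean(axis=0)                      # re-estimate the Gaussian model
        std = best.std(axis=0) + 1e-12
    return elite[0], objective(elite[0])

# Example: minimise a shifted sphere function in 10 dimensions
sphere = lambda x: float(np.sum((x - 1.0) ** 2))
x_best, f_best = gaussian_eda(sphere, dim=10)
print(f_best)        # close to 0
```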

  2. Investigating Surface and Near-Surface Bushfire Fuel Attributes: A Comparison between Visual Assessments and Image-Based Point Clouds

    PubMed Central

    Spits, Christine; Wallace, Luke; Reinke, Karin

    2017-01-01

    Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential. PMID:28425957

  3. Two-dimensional echocardiographic estimates of left atrial function in healthy dogs and dogs with myxomatous mitral valve disease.

    PubMed

    Dickson, David; Caivano, Domenico; Matos, Jose Novo; Summerfield, Nuala; Rishniw, Mark

    2017-12-01

    To provide reference intervals for 2-dimensional linear and area-based estimates of left atrial (LA) function in healthy dogs and to evaluate the ability of estimates of LA function to differentiate dogs with subclinical myxomatous mitral valve disease (MMVD) and similarly affected dogs with congestive heart failure (CHF). Fifty-two healthy adult dogs, 88 dogs with MMVD of varying severity. Linear and area measurements from 2-dimensional echocardiographs in both right parasternal long and short axis views optimized for the left atrium were used to derive estimates of LA active emptying fraction, passive emptying fraction, expansion index, and total fractional emptying. Differences for each estimate were compared between healthy and MMVD dogs (based on ACVIM classification), and between MMVD dogs with subclinical disease and CHF that had similar LA dimensions. Diagnostic utility at identifying CHF was examined for dogs with subclinical MMVD and CHF. Relationships with bodyweight were assessed. All estimates of LA function decreased with increasing ACVIM stage of mitral valve disease (p<0.05) and showed negative relationships with increasing LA size (all r² values < 0.2), except for LA passive emptying fraction, which did not differ or correlate with LA size (p=0.4). However, no index of LA function identified CHF better than measurements of LA size. Total LA fractional emptying and expansion index showed modest negative correlations with bodyweight. Estimates of LA function worsen with worsening MMVD but fail to discriminate dogs with CHF from those with subclinical MMVD any better than simple estimates of LA size. Copyright © 2017 Elsevier B.V. All rights reserved.
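
    The emptying fractions and expansion index are simple ratios of left-atrial areas; the sketch below uses the commonly quoted area-based definitions, which are assumed here and may differ in detail from the study's derivations.

```python
def la_function_indices(la_max, la_min, la_pre_a):
    """Area-based left-atrial function estimates (commonly used definitions, assumed here).

    la_max   : maximal LA area (end of ventricular systole)
    la_min   : minimal LA area (after atrial contraction)
    la_pre_a : LA area just before atrial contraction (onset of the P wave)
    """
    total_fractional_emptying = (la_max - la_min) / la_max
    expansion_index           = (la_max - la_min) / la_min
    passive_emptying_fraction = (la_max - la_pre_a) / la_max
    active_emptying_fraction  = (la_pre_a - la_min) / la_pre_a
    return dict(total=total_fractional_emptying, expansion=expansion_index,
                passive=passive_emptying_fraction, active=active_emptying_fraction)

# Illustrative values in cm^2
print(la_function_indices(la_max=12.0, la_min=7.0, la_pre_a=9.5))
```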

  4. Localization and separation of acoustic sources by using a 2.5-dimensional circular microphone array.

    PubMed

    Bai, Mingsian R; Lai, Chang-Sheng; Wu, Po-Chen

    2017-07-01

    Circular microphone arrays (CMAs) are sufficient in many immersive audio applications because azimuthal angles of sources are considered more important than the elevation angles in those occasions. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for some applications which involves three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on the top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively. The direction of arrival (DOA) is estimated from the product of two array output signals. In the separation stage, Tikhonov regularization and convex optimization are employed to extract the source amplitudes on the basis of the estimated DOA. The extracted signals from two arrays are further processed by the normalized least-mean-square algorithm with the internal iteration to yield the source signal with improved quality. To validate the 2.5-D CMA experimentally, a three-dimensionally printed circular array comprised of a 24-element CMA and an eight-element LLA is constructed. Objective perceptual evaluation of speech quality test and a subjective listening test are also undertaken.
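
    A sketch of the localization stage under simple assumptions (narrowband far-field model, a single source, and the product of the two delay-and-sum power maps rather than of the raw output signals): the circular array pins down the azimuth while the vertical, log-spaced line array pins down the elevation. The geometry, frequency, and noise level are invented.

```python
import numpy as np

c, f = 343.0, 2000.0                      # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c

# Geometry: 24-element circular array (radius 0.1 m) plus an 8-element vertical
# line array with logarithmic spacing along the z-axis.
n_c, r = 24, 0.10
circ = np.stack([r * np.cos(2 * np.pi * np.arange(n_c) / n_c),
                 r * np.sin(2 * np.pi * np.arange(n_c) / n_c),
                 np.zeros(n_c)], axis=1)
n_l = 8
line = np.stack([np.zeros(n_l), np.zeros(n_l), np.geomspace(0.02, 0.30, n_l)], axis=1)

def steering(positions, az_deg, el_deg):
    az, el = np.radians(az_deg), np.radians(el_deg)
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    return np.exp(1j * k * positions @ u)

# Simulated narrowband snapshots from one source at azimuth 40 deg, elevation 25 deg
rng = np.random.default_rng(4)
n_snap = 200
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
az0, el0 = 40.0, 25.0
x_c = np.outer(steering(circ, az0, el0), s) \
    + 0.05 * (rng.standard_normal((n_c, n_snap)) + 1j * rng.standard_normal((n_c, n_snap)))
x_l = np.outer(steering(line, az0, el0), s) \
    + 0.05 * (rng.standard_normal((n_l, n_snap)) + 1j * rng.standard_normal((n_l, n_snap)))

# Delay-and-sum power maps on a 2-D grid; the DOA estimate is the peak of their product.
az_grid = np.arange(0.0, 360.0, 1.0)
el_grid = np.arange(0.0, 90.0, 1.0)
P = np.zeros((el_grid.size, az_grid.size))
for i, el in enumerate(el_grid):
    for j, az in enumerate(az_grid):
        w_c = steering(circ, az, el) / n_c
        w_l = steering(line, az, el) / n_l          # depends only on elevation
        p_c = np.mean(np.abs(w_c.conj() @ x_c) ** 2)
        p_l = np.mean(np.abs(w_l.conj() @ x_l) ** 2)
        P[i, j] = p_c * p_l

i_el, i_az = np.unravel_index(np.argmax(P), P.shape)
print(az_grid[i_az], el_grid[i_el])                 # ~40, ~25
```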

  5. Vibration isolation technology: Sensitivity of selected classes of experiments to residual accelerations

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.

    1991-01-01

    Work was completed on all aspects of the following tasks: order of magnitude estimates; thermo-capillary convection - two-dimensional (fixed planar surface); thermo-capillary convection - three-dimensional and axisymmetric; liquid bridge/floating zone sensitivity; transport in closed containers; interaction: design and development stages; interaction: testing flight hardware; and reporting. Results are included in the Appendices.

  6. Application of the coherent anomaly method to percolation

    NASA Astrophysics Data System (ADS)

    Takayasu, Misako; Takayasu, Hideki

    1988-03-01

    Applying the coherent anomaly method (CAM) to site percolation problems, we estimate the percolation threshold pc and critical exponents. We obtain pc=0.589, β=0.140, γ=2.426 on the two-dimensional square lattice. These values are in good agreement with the values already known. We also investigate higher-dimensional cases by this method.

  7. Application of the Coherent Anomaly Method to Percolation

    NASA Astrophysics Data System (ADS)

    Takayasu, Misako; Takayasu, Hideki

    Applying the coherent anomaly method (CAM) to site percolation problems, we estimate the percolation threshold pc and critical exponents. We obtain pc = 0.589, β = 0.140, γ = 2.426 on the two-dimensional square lattice. These values are in good agreement with the values already known. We also investigate higher-dimensional cases by this method.

  8. Reliability and validity of food portion size estimation from images using manual flexible digital virtual meshes

    USDA-ARS?s Scientific Manuscript database

    The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...

  9. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  10. Three-dimensional representation of curved nanowires.

    PubMed

    Huang, Z; Dikin, D A; Ding, W; Qiao, Y; Chen, X; Fridman, Y; Ruoff, R S

    2004-12-01

    Nanostructures, such as nanowires, nanotubes and nanocoils, can be described in many cases as quasi one-dimensional curved objects projecting in three-dimensional space. A parallax method to construct the correct three-dimensional geometry of such one-dimensional nanostructures is presented. A series of scanning electron microscope images was acquired at different view angles, thus providing a set of image pairs that were used to generate three-dimensional representations using a matlab program. An error analysis as a function of the view angle between the two images is presented and discussed. As an example application, the importance of knowing the true three-dimensional shape of boron nanowires is demonstrated; without the nanowire's correct length and diameter, mechanical resonance data cannot provide an accurate estimate of Young's modulus.
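
    Under an orthographic-projection assumption with a known tilt between the two views, the out-of-plane coordinate follows from simple parallax algebra, as sketched below on a synthetic helical wire; registration of the image coordinates to the rotation axis and all numbers are assumed.

```python
import numpy as np

def depth_from_parallax(x1, x2, tilt_deg):
    """Recover the out-of-plane coordinate z from two projections of the same point.

    Assumes an (approximately) orthographic projection, a rotation by tilt_deg about
    the image y-axis between the two views, and image coordinates already registered
    to a common origin on the rotation axis:
        x1 = x
        x2 = x * cos(t) + z * sin(t)
    """
    t = np.radians(tilt_deg)
    x = np.asarray(x1, dtype=float)
    z = (np.asarray(x2, dtype=float) - x * np.cos(t)) / np.sin(t)
    return x, z

# Synthetic helical nanowire sampled along its length (illustrative only)
s = np.linspace(0, 4 * np.pi, 200)
x_true, y_true, z_true = 0.2 * np.cos(s), s / (4 * np.pi), 0.2 * np.sin(s)
tilt = 20.0
x1 = x_true
x2 = x_true * np.cos(np.radians(tilt)) + z_true * np.sin(np.radians(tilt))
x_rec, z_rec = depth_from_parallax(x1, x2, tilt)

# True 3-D arc length vs. the length measured from a single 2-D projection
pts3d = np.column_stack([x_rec, y_true, z_rec])
pts2d = np.column_stack([x1, y_true])
length3d = np.sum(np.linalg.norm(np.diff(pts3d, axis=0), axis=1))
length2d = np.sum(np.linalg.norm(np.diff(pts2d, axis=0), axis=1))
print(length3d, length2d)     # the 2-D projection underestimates the true length
```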

  11. Changes in risk of immediate adverse reactions to iodinated contrast media by repeated administrations in patients with hepatocellular carcinoma.

    PubMed

    Fujiwara, Naoto; Tateishi, Ryosuke; Akahane, Masaaki; Taguri, Masataka; Minami, Tatsuya; Mikami, Shintaro; Sato, Masaya; Uchino, Koji; Uchino, Kouji; Enooku, Kenichiro; Kondo, Yuji; Asaoka, Yoshinari; Yamashiki, Noriyo; Goto, Tadashi; Shiina, Shuichiro; Yoshida, Haruhiko; Ohtomo, Kuni; Koike, Kazuhiko

    2013-01-01

    To elucidate whether repeated exposures to iodinated contrast media increase the risk of adverse reaction, we retrospectively reviewed 1,861 patients with hepatocellular carcinoma who visited the authors' institution, a tertiary referral center, between 2004 and 2008. We analyzed the cumulative probability of adverse reactions and associated risk factors. We categorized all symptoms into hypersensitivity reactions, physiologic reactions, and other reactions, according to the American College of Radiology guidelines, and evaluated each category as an event. We estimated the association between the hazard of adverse reactions and the number of cumulative exposures to contrast media. We also evaluated subsequent contrast media injections and adverse reactions. There were 23,684 contrast media injections in 1,729 patients. One hundred and thirty-two patients were excluded because they were given no contrast media during the study period. Adverse reactions occurred in 196 (0.83%) patients. The cumulative incidence at the 10th, 20th, and 30th examinations was 7.9%, 15.2%, and 24.1%, respectively. Presence of renal impairment was found to be one of the risk factors for adverse reactions. The estimated hazard of overall adverse reaction gradually decreased until around the 10th exposure and rose with subsequent exposures. The estimated hazard of hypersensitivity showed a V-shaped change with the cumulative number of exposures. The estimated hazard of physiologic reaction tended to decrease, while that of other reactions tended to increase. A second adverse reaction was more severe than the initial reaction in only one of the 130 patients receiving subsequent injections. Repeated exposures to iodinated contrast media increase the risk of adverse reaction.

  12. Seismic waves in 3-D: from mantle asymmetries to reliable seismic hazard assessment

    NASA Astrophysics Data System (ADS)

    Panza, Giuliano F.; Romanelli, Fabio

    2014-10-01

    A global cross-section of the Earth parallel to the tectonic equator (TE) path, the great circle representing the equator of net lithosphere rotation, shows a difference in shear wave velocities between the western and eastern flanks of the three major oceanic rift basins. The low-velocity layer in the upper asthenosphere, at a depth range of 120 to 200 km, is assumed to represent the decoupling between the lithosphere and the underlying mantle. Along the TE-perturbed (TE-pert) path, a ubiquitous low-velocity zone (LVZ), about 1,000-km-wide and 100-km-thick, occurs in the asthenosphere. The existence of the TE-pert is a necessary prerequisite for the existence of a continuous global flow within the Earth. Ground-shaking scenarios were constructed using a scenario-based method for seismic hazard analysis (NDSHA), using realistic and duly validated synthetic time series, and generating a data bank of several thousands of seismograms that account for source, propagation, and site effects. In accordance with basic self-organized criticality concepts, NDSHA permits the integration of available information provided by the most updated seismological, geological, geophysical, and geotechnical databases for the site of interest, as well as advanced physical modeling techniques, to provide a reliable and robust background for the development of a design basis for cultural heritage and civil infrastructures. Estimates of seismic hazard obtained using the NDSHA and standard probabilistic approaches are compared for the Italian territory, and a case-study is discussed. In order to enable a reliable estimation of the ground motion response to an earthquake, three-dimensional velocity models have to be considered, resulting in a new, very efficient, analytical procedure for computing the broadband seismic wave-field in a 3-D anelastic Earth model.

  13. A Fast Estimation Algorithm for Two-Dimensional Gravity Data (GEOFAST),

    DTIC Science & Technology

    1979-11-15

    to a wide class of problems (Refs. 9 and 17). The major inhibitor to the widespread application of optimal gravity data processing is the severe... extends directly to two dimensions. Define the n1n2 × n1n2 diagonal window matrix W as the Kronecker product of two one-dimensional windows, W = W1 ⊗ W2 (B...). Inversion of Separable Matrices: consider the linear system y = Tx (B.3-1), where T is block Toeplitz of dimension n1n2 × n1n2. Its frequency domain...

  14. Fractal Dimensionality of Pore and Grain Volume of a Siliciclastic Marine Sand

    NASA Astrophysics Data System (ADS)

    Reed, A. H.; Pandey, R. B.; Lavoie, D. L.

    Three-dimensional (3D) spatial distributions of pore and grain volumes were determined from high-resolution computer tomography (CT) images of resin-impregnated marine sands. Using a linear gradient extrapolation method, cubic three-dimensional samples were constructed from two-dimensional CT images. Image porosity (0.37) was found to be consistent with the estimate of porosity by water weight loss technique (0.36). Scaling of the pore volume (Vp) with the linear size (L), Vp ~ L^D, provides the fractal dimensionalities of the pore volume (D = 2.74 ± 0.02) and grain volume (D = 2.90 ± 0.02), typical for sedimentary materials.
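
    The scaling relation Vp ~ L^D can be illustrated with a simple log-log fit over nested cubic subvolumes of a 3D binary image. The sketch below (a toy example on a structureless random field, not the CT reconstruction used in the study) assumes only NumPy.

    ```python
    import numpy as np

    def volume_scaling_dimension(binary_volume, sizes):
        """Fit V(L) ~ L**D over nested cubic subvolumes of side L (in voxels)."""
        vols = [binary_volume[:L, :L, :L].sum() for L in sizes]
        D, logC = np.polyfit(np.log(sizes), np.log(vols), 1)   # slope is the dimension D
        return D

    # Toy example: a random "pore" field with 37% porosity and no real structure,
    # so the fitted dimension should come out close to 3.
    rng = np.random.default_rng(1)
    pores = rng.random((128, 128, 128)) < 0.37
    print(volume_scaling_dimension(pores, sizes=[8, 16, 32, 64, 128]))
    ```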

  15. A conceptual design study for a two-dimensional, electronically scanned thinned array radiometer

    NASA Technical Reports Server (NTRS)

    Mutton, Philip; Chromik, Christopher C.; Dixon, Iain; Statham, Richard B.; Stillwagen, Frederic H.; Vontheumer, Alfred E.; Sasamoto, Washito A.; Garn, Paul A.; Cosgrove, Patrick A.; Ganoe, George G.

    1993-01-01

    A conceptual design for the Two-Dimensional, Electronically Steered Thinned Array Radiometer (ESTAR) is described. This instrument is a synthetic aperture microwave radiometer that operates in the L-band frequency range for the measurement of soil moisture and ocean salinity. Two auxiliary instruments, an 8-12 micron, scanning infrared radiometer and a 0.4-1.0 micron, charge coupled device (CCD) video camera, are included to provide data for sea surface temperature measurements and spatial registration of targets, respectively. The science requirements were defined by Goddard Space Flight Center. The instrument and spacecraft configurations are described for missions using the Pegasus and Taurus launch vehicles. The analyses and design trades described include: estimations of size, mass and power, instrument viewing coverage, mechanical design trades, structural and thermal analyses, data and communications performance assessments, and cost estimation.

  16. Estimation and Validation of δ18O Global Distribution with Rayleigh-type Two-Dimensional Isotope Circulation Model

    NASA Astrophysics Data System (ADS)

    Yoshimura, K.; Oki, T.; Ohte, N.; Kanae, S.; Ichiyanagi, K.

    2004-12-01

    A simple water isotope circulation model on a global scale that includes a Rayleigh equation and the use of "realistic" external meteorological forcings estimates short-term variability of precipitation δ18O. The results are validated by Global Network of Isotopes in Precipitation (GNIP) monthly observations and by daily observations at three sites in Thailand. This good agreement highlights the importance of large scale transport and mixing of vapor masses as a control factor for spatial and temporal variability of precipitation isotopes, rather than in-cloud micro processes. It also indicates the usefulness of the model and the isotopes observation databases for evaluation of two-dimensional atmospheric water circulation fields in forcing datasets. In this regard, two offline simulations for 1978-1993 with major reanalyses, i.e. NCEP and ERA15, were implemented, and the results show that, over Europe ERA15 better matched observations at both monthly and interannual time scales, mainly owing to better precipitation fields in ERA15, while in the tropics both produced similarly accurate isotopic fields. The isotope analyses diagnose accuracy of two-dimensional water circulation fields in datasets with a particular focus on precipitation processes.
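
    The core ingredient of such a model is Rayleigh distillation. The sketch below evaluates the standard Rayleigh relation for the δ18O of the remaining vapour as condensation proceeds; the initial composition and fractionation factor are illustrative values, not those used in the study.

    ```python
    def rayleigh_delta(delta0, f, alpha):
        """
        delta (per mil) of the remaining vapour when a fraction f of it is left,
        for an equilibrium fractionation factor alpha (liquid/vapour):
            R / R0 = f**(alpha - 1),  with  R = (delta/1000 + 1) * R_standard
        """
        return (1000.0 + delta0) * f ** (alpha - 1.0) - 1000.0

    # Illustrative values: initial vapour at -12 per mil, alpha = 1.0094 (roughly 20 C)
    for f in (1.0, 0.8, 0.5, 0.2):
        print(f, round(rayleigh_delta(-12.0, f, 1.0094), 2))
    ```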

  17. Use of Electrical Conductivity Logging to Characterize the Geological Context of Releases at UST Sites

    EPA Science Inventory

    Risk is the combination of hazard and exposure. Risk characterization at UST release sites has traditionally emphasized hazard (presence of residual fuel) with little attention to exposure. Exposure characterization is often limited to a one-dimensional model such as the RBCA equa...

  18. Three-dimensional pantograph for use in hazardous environments

    NASA Technical Reports Server (NTRS)

    Cowfer, C. D.; Wagner, H. A.

    1970-01-01

    A material measurement device is used with radioactive probes, which can be approached only to a distance of 3 feet. The tracer-following unit is capable of precisely controlled movement in the X-Y-Z planes. The pantograph is usable in industrial processes involving chemical corrosives, poisons, and bacteriological hazards, as well as nuclear applications.

  19. Robotic vehicle with multiple tracked mobility platforms

    DOEpatents

    Salton, Jonathan R [Albuquerque, NM; Buttz, James H [Albuquerque, NM; Garretson, Justin [Albuquerque, NM; Hayward, David R [Wetmore, CO; Hobart, Clinton G [Albuquerque, NM; Deuel, Jr., Jamieson K.

    2012-07-24

    A robotic vehicle having two or more tracked mobility platforms that are mechanically linked together with a two-dimensional coupling, thereby forming a composite vehicle of increased mobility. The robotic vehicle is operative in hazardous environments and can be capable of semi-submersible operation. The robotic vehicle is capable of remote controlled operation via radio frequency and/or fiber optic communication link to a remote operator control unit. The tracks have a plurality of track-edge scallop cut-outs that allow the tracks to easily grab onto and roll across railroad tracks, especially when crossing the railroad tracks at an oblique angle.

  20. Modeling the combined influence of host dispersal and waterborne fate and transport on pathogen spread in complex landscapes

    PubMed Central

    Lu, Ding; McDowell, Julia Z.; Davis, George M.; Spear, Robert C.; Remais, Justin V.

    2012-01-01

    Environmental models, often applied to questions on the fate and transport of chemical hazards, have recently become important in tracing certain environmental pathogens to their upstream sources of contamination. These tools, such as first order decay models applied to contaminants in surface waters, offer promise for quantifying the fate and transport of pathogens with multiple environmental stages and/or multiple hosts, in addition to those pathogens whose environmental stages are entirely waterborne. Here we consider the fate and transport capabilities of the human schistosome Schistosoma japonicum, which exhibits two waterborne stages and is carried by an amphibious intermediate snail host. We present experimentally-derived dispersal estimates for the intermediate snail host and fate and transport estimates for the passive downstream diffusion of cercariae, the waterborne, human-infective parasite stage. Using a one dimensional advective transport model exhibiting first-order decay, we simulate the added spatial reach and relative increase in cercarial concentrations that dispersing snail hosts contribute to downstream sites. Simulation results suggest that snail dispersal can substantially increase the concentrations of cercariae reaching downstream locations, relative to no snail dispersal, effectively putting otherwise isolated downstream sites at increased risk of exposure to cercariae from upstream sources. The models developed here can be applied to other infectious diseases with multiple life-stages and hosts, and have important implications for targeted ecological control of disease spread. PMID:23162675
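
    A minimal sketch of the transport component, assuming steady-state one-dimensional advection with first-order decay, C(x) = C0·exp(−kx/v): moving the effective source closer to a downstream site (as dispersing snail hosts do) raises the concentration that arrives there. All parameter values are illustrative, not the experimentally derived estimates from the paper.

    ```python
    import numpy as np

    def downstream_concentration(c0, x_m, velocity_m_per_s, decay_per_hour):
        """Steady-state concentration after advection over x_m with first-order decay."""
        travel_h = x_m / velocity_m_per_s / 3600.0
        return c0 * np.exp(-decay_per_hour * travel_h)

    c0, v, k = 100.0, 0.2, 0.5        # cercariae/L at source, m/s, 1/h (illustrative)
    x = 2000.0                        # distance to the downstream site, m
    no_dispersal = downstream_concentration(c0, x, v, k)
    with_dispersal = downstream_concentration(c0, x - 500.0, v, k)   # snails moved 500 m downstream
    print(no_dispersal, with_dispersal, with_dispersal / no_dispersal)
    ```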

  1. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research directions in the fields of digital signal processing and modern control and estimation theory are discussed. Stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing are considered in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the disciplines.

  2. Estimation of uncertainty for contour method residual stress measurements

    DOE PAGES

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).

  3. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    Most direction-of-arrival (DOA) estimation work has focused on a two-dimensional (2D) scenario in which only the azimuth angle needs to be estimated, but many practical situations require a three-dimensional treatment. Estimating both azimuth and elevation angles with high accuracy and low complexity is therefore of interest. We present the theoretical and practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, to perform field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.
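
    The AML algorithm itself is not reproduced here; as a simpler stand-in, the sketch below performs a conventional narrowband delay-and-sum grid search over azimuth and elevation for an arbitrary 3D microphone array, which illustrates joint azimuth-elevation estimation. The array geometry, frequency, and snapshot matrix are assumed inputs.

    ```python
    import numpy as np

    def steering_vector(mic_xyz, az, el, freq, c=343.0):
        """Narrowband steering vector for a far-field plane wave from (az, el), in radians."""
        u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
        delays = mic_xyz @ u / c                        # seconds, relative to the array origin
        return np.exp(-2j * np.pi * freq * delays)

    def doa_grid_search(snapshots, mic_xyz, freq, n_az=180, n_el=45):
        """Return (azimuth, elevation) maximizing delay-and-sum output power.
        snapshots: complex array of shape (n_mics, n_snapshots)."""
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
        best, best_dir = -np.inf, None
        for az in np.linspace(-np.pi, np.pi, n_az):
            for el in np.linspace(0.0, np.pi / 2, n_el):
                a = steering_vector(mic_xyz, az, el, freq)
                p = np.real(a.conj() @ R @ a)
                if p > best:
                    best, best_dir = p, (az, el)
        return best_dir
    ```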

  4. Comments on potential geologic and seismic hazards affecting coastal Ventura County, California

    USGS Publications Warehouse

    Ross, Stephanie L.; Boore, David M.; Fisher, Michael A.; Frankel, Arthur D.; Geist, Eric L.; Hudnut, Kenneth W.; Kayen, Robert E.; Lee, Homa J.; Normark, William R.; Wong, Florence L.

    2004-01-01

    This report examines the regional seismic and geologic hazards that could affect proposed liquefied natural gas (LNG) facilities in coastal Ventura County, California. Faults throughout this area are thought to be capable of producing earthquakes of magnitude 6.5 to 7.5, which could produce surface fault offsets of as much as 15 feet. Many of these faults are sufficiently well understood to be included in the current generation of the National Seismic Hazard Maps; others may become candidates for inclusion in future revisions as research proceeds. Strong shaking is the primary hazard that causes damage from earthquakes and this area is zoned with a high level of shaking hazard. The estimated probability of a magnitude 6.5 or larger earthquake (comparable in size to the 2003 San Simeon quake) occurring in the next 30 years within 30 miles of Platform Grace is 50-60%; for Cabrillo Port, the estimate is a 35% likelihood. Combining these probabilities of earthquake occurrence with relationships that give expected ground motions yields the estimated seismic-shaking hazard. In parts of the project area, the estimated shaking hazard is as high as along the San Andreas Fault. The combination of long-period basin waves and LNG installations with large long-period resonances potentially increases this hazard.

  5. Structural estimation of a principal-agent model: moral hazard in medical insurance.

    PubMed

    Vera-Hernández, Marcos

    2003-01-01

    Despite the importance of principal-agent models in the development of modern economic theory, there are few estimations of these models. I recover the estimates of a principal-agent model and obtain an approximation to the optimal contract. The results show that out-of-pocket payments follow a concave profile with respect to costs of treatment. I estimate the welfare loss due to moral hazard, taking into account income effects. I also propose a new measure of moral hazard based on the conditional correlation between contractible and noncontractible variables.

  6. Conformal mapping in optical biosensor applications.

    PubMed

    Zumbrum, Matthew E; Edwards, David A

    2015-09-01

    Optical biosensors are devices used to investigate surface-volume reaction kinetics. Current mathematical models for reaction dynamics rely on the assumption of unidirectional flow within these devices. However, new devices, such as the Flexchip, include a geometry that introduces two-dimensional flow, complicating the depletion of the volume reactant. To account for this, a previous mathematical model is extended to include two-dimensional flow, and the Schwarz-Christoffel mapping is used to relate the physical device geometry to that for a device with unidirectional flow. Mappings for several Flexchip dimensions are considered, and the ligand depletion effect is investigated for one of these mappings. Estimated rate constants are produced for simulated data to quantify the effect of including two-dimensional flow in the mathematical model.

  7. Shearlet-based measures of entropy and complexity for two-dimensional patterns

    NASA Astrophysics Data System (ADS)

    Brazhe, Alexey

    2018-06-01

    New spatial entropy and complexity measures for two-dimensional patterns are proposed. The approach is based on the notion of disequilibrium and is built on statistics of directional multiscale coefficients of the fast finite shearlet transform. Shannon entropy and Jensen-Shannon divergence measures are employed. Both local and global spatial complexity and entropy estimates can be obtained, thus allowing for spatial mapping of complexity in inhomogeneous patterns. The algorithm is validated in numerical experiments with a gradually decaying periodic pattern and Ising surfaces near critical state. It is concluded that the proposed algorithm can be instrumental in describing a wide range of two-dimensional imaging data, textures, or surfaces, where an understanding of the level of order or randomness is desired.
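
    The shearlet transform itself is omitted here, but the entropy-disequilibrium construction can be sketched on any array of transform coefficients: form a normalized histogram, take its Shannon entropy, and multiply by the Jensen-Shannon divergence from the uniform distribution. The bin count and the use of coefficient magnitudes are assumptions of this sketch, not details taken from the paper.

    ```python
    import numpy as np

    def shannon_entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def jensen_shannon(p, q):
        m = 0.5 * (p + q)
        return shannon_entropy(m) - 0.5 * shannon_entropy(p) - 0.5 * shannon_entropy(q)

    def entropy_complexity(coeffs, bins=64):
        """Normalized entropy H and complexity C = H * JS(p, uniform) of coefficient magnitudes."""
        counts, _ = np.histogram(np.abs(coeffs).ravel(), bins=bins)
        p = counts / counts.sum()
        u = np.full(bins, 1.0 / bins)
        H = shannon_entropy(p) / np.log2(bins)          # normalized to [0, 1]
        return H, H * jensen_shannon(p, u)

    rng = np.random.default_rng(0)
    print(entropy_complexity(rng.normal(size=(256, 256))))   # disordered pattern
    print(entropy_complexity(np.ones((256, 256))))           # fully ordered pattern
    ```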

  8. De Haas-van Alphen effect of a two-dimensional ultracold atomic gas

    NASA Astrophysics Data System (ADS)

    Farias, B.; Furtado, C.

    2016-01-01

    In this paper, we show how the ultracold atom analogue of the two-dimensional de Haas-van Alphen effect in electronic condensed matter systems can be induced by optical fields in a neutral atomic system. The interaction between the suitable spatially varying laser fields and tripod-type trapped atoms generates a synthetic magnetic field which leads the particles to organize themselves in Landau levels. Initially, with the atomic gas in a regime of lowest Landau level, we display the oscillatory behaviour of the atomic energy and its derivative with respect to the effective magnetic field (B) as a function of 1/B. Furthermore, we estimate the area of the Fermi circle of the two-dimensional atomic gas.

  9. Integration of Satellite-Derived Cloud Phase, Cloud Top Height, and Liquid Water Path into an Operational Aircraft Icing Nowcasting System

    NASA Technical Reports Server (NTRS)

    Haggerty, Julie; McDonough, Frank; Black, Jennifer; Landott, Scott; Wolff, Cory; Mueller, Steven; Minnis, Patrick; Smith, William, Jr.

    2008-01-01

    Operational products used by the U.S. Federal Aviation Administration to alert pilots of hazardous icing provide nowcast and short-term forecast estimates of the potential for the presence of supercooled liquid water and supercooled large droplets. The Current Icing Product (CIP) system employs basic satellite-derived information, including a cloud mask and cloud top temperature estimates, together with multiple other data sources to produce a gridded, three-dimensional, hourly depiction of icing probability and severity. Advanced satellite-derived cloud products developed at the NASA Langley Research Center (LaRC) provide a more detailed description of cloud properties (primarily at cloud top) compared to the basic satellite-derived information used currently in CIP. Cloud hydrometeor phase, liquid water path, cloud effective temperature, and cloud top height as estimated by the LaRC algorithms are integrated into the CIP fuzzy logic scheme, and a confidence value is determined. Examples of CIP products before and after the integration of the LaRC satellite-derived products will be presented at the conference.

  10. [The concept of risk and its estimation].

    PubMed

    Zocchetti, C; Della Foglia, M; Colombi, A

    1996-01-01

    The concept of risk, in relation to human health, is a topic of primary interest for occupational health professionals. New legislation recently established in Italy (626/94), according to European Community directives in the field of Preventive Medicine, has called attention to this topic, and in particular to risk assessment and evaluation. Motivated by this context and by the impression that the concept of risk is frequently misunderstood, the present paper has two aims: the identification of the different meanings of the term "risk" in the new Italian legislation and the critical discussion of some commonly used definitions; and the proposal of a general definition, with the specification of a mathematical expression for quantitative risk estimation. The term risk (and risk estimation, assessment, or evaluation) has mainly referred to three different contexts: hazard identification, exposure assessment, and adverse health effects occurrence. Unfortunately, there are contexts in the legislation in which it is difficult to identify the true meaning of the term. This might cause equivocal interpretations and erroneous applications of the law because hazard evaluation, exposure assessment, and adverse health effects identification are completely different topics that require integrated but distinct approaches to risk management. As far as a quantitative definition of risk is concerned, we suggest an algorithm which connects the three basic risk elements (hazard, exposure, adverse health effects) by means of their probabilities of occurrence: the probability of being exposed (to a definite dose) given that a specific hazard is present (Pr(e|p)), and the probability of occurrence of an adverse health effect as a consequence of that exposure (Pr(d|e)). Using these quantitative components, risk can be defined as a sequence of measurable events that starts with hazard identification and terminates with disease occurrence; therefore, the following formal definition of risk is proposed: the probability of occurrence, in a given period of time, of an adverse health effect as a consequence of the existence of a hazard. In formula: R(d|p) = Pr(e|p) x Pr(d|e). While Pr(e|p) (exposure given hazard) must be evaluated in the situation under study, two alternatives exist for the estimation of the occurrence of adverse health effects (Pr(d|e)): a "direct" estimation of the damage (Pr(d|e)) through formal epidemiologic studies conducted in the situation under observation; and an "indirect" estimation of Pr(d|e) using information taken from the scientific literature (epidemiologic evaluations, dose-response relationships, extrapolations, ...). Both conditions are presented along with their respective advantages, disadvantages, and uncertainties. The usefulness of the proposed algorithm is discussed with respect to commonly used applications of risk assessment in occupational medicine; the relevance of time for risk estimation (in terms of duration of observation, duration of exposure, and latency of effect) is briefly explained; and how the proposed algorithm takes into account (in terms of prevention and public health) both the etiologic relevance of the exposure and the consequences of exposure removal is highlighted.
As a last comment, it is suggested that the diffuse application of good work practices (technical, behavioral, organizational, ...), or the exhaustive use of check lists, can be relevant in terms of improvement of prevention efficacy, but does not represent any quantitative procedure of risk assessment which, in any circumstance, must be considered the elective approach to adverse health effect prevention.
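
    A tiny worked example of the proposed formula, with purely illustrative probabilities:

    ```python
    # Illustrative numbers only: probability of exposure given the hazard is present,
    # and probability of the adverse health effect given that exposure.
    p_exposure_given_hazard = 0.30      # Pr(e|p), e.g. from a workplace assessment
    p_effect_given_exposure = 0.02      # Pr(d|e), e.g. from the epidemiologic literature

    risk = p_exposure_given_hazard * p_effect_given_exposure   # R(d|p) = Pr(e|p) x Pr(d|e)
    print(risk)   # 0.006: probability of the adverse effect over the reference period
    ```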

  11. Low-temperature specific heat of the quasi-two-dimensional charge-density wave compound KMo6O17

    NASA Astrophysics Data System (ADS)

    Wang, Junfeng; Xiong, Rui; Yin, Di; Li, Changzhen; Tang, Zheng; Wang, Ququan; Shi, Jing; Wang, Yue; Wen, Haihu

    2006-05-01

    The low-temperature specific heat (Cp) of the quasi-two-dimensional charge-density wave (CDW) compound KMo6O17 has been studied by a relaxation method from 2 to 48 K under zero and 12 T magnetic fields. The results show that no specific heat anomaly is found at 16 K under either zero or 12 T magnetic fields, although an anomaly is clearly observed in the resistivity and magnetoresistance measurements. From the data between 2 and 4 K, the density of states at the Fermi level is estimated as 0.2 eV⁻¹ per molecule and the Debye temperature is extracted to be 418 K. A bump in Cp/T³ is found between 4 and 48 K, centered around 12.5-15 K, indicating that phason excitations contribute to the total specific heat similarly as in quasi-one-dimensional CDW conductors. Using a modified Debye model, a pinning frequency of 0.73 THz for KMo6O17 is estimated from the phason contribution.
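
    The standard low-temperature analysis behind these numbers is the fit Cp = γT + βT³, from which the Debye temperature follows as θ_D = (12π⁴nR/5β)^(1/3), with n atoms per formula unit (24 for KMo6O17). The sketch below demonstrates the fit on synthesized placeholder data, not the measured curve.

    ```python
    import numpy as np

    R_GAS = 8.314          # J / (mol K)
    N_ATOMS = 24           # atoms per KMo6O17 formula unit

    def debye_temperature(T, Cp):
        """
        Fit Cp = gamma*T + beta*T**3 on low-T data (Cp/T vs T**2 is linear),
        then theta_D = (12 * pi**4 * n * R / (5 * beta))**(1/3).
        T in K, Cp in J/(mol K).
        """
        beta, gamma = np.polyfit(T**2, Cp / T, 1)        # slope = beta, intercept = gamma
        theta_d = (12 * np.pi**4 * N_ATOMS * R_GAS / (5 * beta)) ** (1.0 / 3.0)
        return gamma, beta, theta_d

    # Placeholder data between 2 and 4 K, synthesized with theta_D ~ 418 K:
    T = np.linspace(2.0, 4.0, 20)
    beta_true = 12 * np.pi**4 * N_ATOMS * R_GAS / (5 * 418.0**3)
    Cp = 2e-3 * T + beta_true * T**3
    print(debye_temperature(T, Cp))
    ```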

  12. Estimation of ballistic block landing energy during 2014 Mount Ontake eruption

    NASA Astrophysics Data System (ADS)

    Tsunematsu, Kae; Ishimine, Yasuhiro; Kaneko, Takayuki; Yoshimoto, Mitsuhiro; Fujii, Toshitsugu; Yamaoka, Koshun

    2016-05-01

    The 2014 Mount Ontake eruption started just before noon on September 27, 2014. It killed 58 people, and five are still missing (as of January 1, 2016). The casualties were mainly caused by the impact of ballistic blocks around the summit area. It is necessary to know the magnitude of the block velocity and energy to construct a hazard map of ballistic projectiles and design effective shelters and mountain huts. The ejection velocities of the ballistic projectiles were estimated by comparing the observed distribution of the ballistic impact craters on the ground with simulated distributions of landing positions under various sets of conditions. A three-dimensional numerical multiparticle ballistic model adapted to account for topographic effect was used to estimate the ejection angles. From these simulations, we have obtained an ejection angle of γ = 20° from vertical to horizontal and α = 20° from north to east. With these ejection angle conditions, the ejection speed was estimated to be between 145 and 185 m/s for a previously obtained range of drag coefficients of 0.62-1.01. The order of magnitude of the mean landing energy obtained using our numerical simulation was 10⁴ J.
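
    A much simpler point-mass model than the authors' 3D multiparticle code still shows how ejection speed and drag translate into landing energy. The sketch below integrates a trajectory with quadratic drag over flat ground; the mass, block size, and air density are illustrative choices of the right order, not values from the study.

    ```python
    import numpy as np

    def landing_energy(v0=165.0, angle_deg=70.0, mass=5.0, diameter=0.15,
                       cd=0.8, rho_air=0.9, dt=0.01):
        """Integrate a drag-affected ballistic block until it returns to launch height."""
        area = np.pi * (diameter / 2.0) ** 2
        g = np.array([0.0, -9.81])
        v = v0 * np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])
        x = np.array([0.0, 0.0])
        while True:
            drag = -0.5 * rho_air * cd * area * np.linalg.norm(v) * v / mass
            v = v + (g + drag) * dt
            x = x + v * dt
            if x[1] < 0.0:                  # back at (flat) ground level
                break
        return x[0], 0.5 * mass * np.linalg.norm(v) ** 2   # range (m), kinetic energy (J)

    # For these illustrative parameters the landing energy comes out of order 10^4 J.
    print(landing_energy())
    ```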

  13. Incompressible Deformation Estimation Algorithm (IDEA) from Tagged MR Images

    PubMed Central

    Liu, Xiaofeng; Abd-Elmoniem, Khaled Z.; Stone, Maureen; Murano, Emi Z.; Zhuo, Jiachen; Gullapalli, Rao P.; Prince, Jerry L.

    2013-01-01

    Measuring the three-dimensional motion of muscular tissues, e.g., the heart or the tongue, using magnetic resonance (MR) tagging is typically carried out by interpolating the two-dimensional motion information measured on orthogonal stacks of images. The incompressibility of muscle tissue is an important constraint on the reconstructed motion field and can significantly help to counter the sparsity and incompleteness of the available motion information. Previous methods utilizing this fact produced incompressible motions with limited accuracy. In this paper, we present an incompressible deformation estimation algorithm (IDEA) that reconstructs a dense representation of the three-dimensional displacement field from tagged MR images and the estimated motion field is incompressible to high precision. At each imaged time frame, the tagged images are first processed to determine components of the displacement vector at each pixel relative to the reference time. IDEA then applies a smoothing, divergence-free, vector spline to interpolate velocity fields at intermediate discrete times such that the collection of velocity fields integrate over time to match the observed displacement components. Through this process, IDEA yields a dense estimate of a three-dimensional displacement field that matches our observations and also corresponds to an incompressible motion. The method was validated with both numerical simulation and in vivo human experiments on the heart and the tongue. PMID:21937342

  14. Documentation for the 2008 Update of the United States National Seismic Hazard Maps

    USGS Publications Warehouse

    Petersen, Mark D.; Frankel, Arthur D.; Harmsen, Stephen C.; Mueller, Charles S.; Haller, Kathleen M.; Wheeler, Russell L.; Wesson, Robert L.; Zeng, Yuehua; Boyd, Oliver S.; Perkins, David M.; Luco, Nicolas; Field, Edward H.; Wills, Chris J.; Rukstales, Kenneth S.

    2008-01-01

    The 2008 U.S. Geological Survey (USGS) National Seismic Hazard Maps display earthquake ground motions for various probability levels across the United States and are applied in seismic provisions of building codes, insurance rate structures, risk assessments, and other public policy. This update of the maps incorporates new findings on earthquake ground shaking, faults, seismicity, and geodesy. The resulting maps are derived from seismic hazard curves calculated on a grid of sites across the United States that describe the frequency of exceeding a set of ground motions. The USGS National Seismic Hazard Mapping Project developed these maps by incorporating information on potential earthquakes and associated ground shaking obtained from interaction in science and engineering workshops involving hundreds of participants, review by several science organizations and State surveys, and advice from two expert panels. The National Seismic Hazard Maps represent our assessment of the 'best available science' in earthquake hazards estimation for the United States (maps of Alaska and Hawaii as well as further information on hazard across the United States are available on our Web site at http://earthquake.usgs.gov/research/hazmaps/).

  15. Modeling a Glacial Lake Outburst Flood Process Chain: The Case of Lake Palcacocha and Huaraz, Peru

    NASA Astrophysics Data System (ADS)

    Chisolm, Rachel; Somos-Valenzuela, Marcelo; Rivas Gomez, Denny; McKinney, Daene C.; Portocarrero Rodriguez, Cesar

    2016-04-01

    One of the consequences of recent glacier recession in the Cordillera Blanca, Peru, is the risk of Glacial Lake Outburst Floods (GLOFs) from lakes that have formed at the base of retreating glaciers. GLOFs are often triggered by avalanches falling into glacial lakes, initiating a chain of processes that may culminate in significant inundation and destruction downstream. This paper presents simulations of all of the processes involved in a potential GLOF originating from Lake Palcacocha, the source of a previous catastrophic GLOF on December 13, 1941, which killed about 1800 people in the city of Huaraz, Peru. The chain of processes simulated here includes: (1) avalanches above the lake; (2) lake dynamics resulting from the avalanche impact, including wave generation, propagation, and run-up across lakes; (3) terminal moraine overtopping and dynamic moraine erosion simulations to determine the possibility of breaching; (4) flood propagation along downstream valleys; and (5) inundation of populated areas. The results of each process feed into simulations of subsequent processes in the chain, finally resulting in estimates of inundation in the city of Huaraz. The results of the inundation simulations were converted into flood intensity and hazard maps (based on an intensity-likelihood matrix) that may be useful for city planning and regulation. Three avalanche events with volumes ranging from 0.5-3 × 10⁶ m³ were simulated, and two scenarios of 15 m and 30 m lake lowering were simulated to assess the potential of mitigating the hazard level in Huaraz. For all three avalanche events, three-dimensional hydrodynamic models show large waves generated in the lake from the impact resulting in overtopping of the damming-moraine. Despite very high discharge rates (up to 63.4 × 10³ m³/s), the erosion from the overtopping wave did not result in failure of the damming-moraine when simulated with a hydro-morphodynamic model using excessively conservative soil characteristics that provide very little erosion resistance. With the current lake level, all three avalanche events result in inundation in Huaraz, and the resulting hazard map shows a total affected area of 2.01 km², most of which is in the high-hazard category. Lowering the lake has the potential to reduce the affected area by up to 35%, resulting in a smaller portion of the inundated area in the high-hazard category.

  16. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.

  17. Alcohol use disorders and hazardous drinking among undergraduates at English universities.

    PubMed

    Heather, Nick; Partington, Sarah; Partington, Elizabeth; Longstaff, Fran; Allsop, Susan; Jankowski, Mark; Wareham, Helen; St Clair Gibson, Alan

    2011-01-01

    To report on alcohol use disorders and hazardous drinking from a survey of university students in England in 2008-2009. A cross-sectional survey using the Alcohol Use Disorders Identification Test (AUDIT) was carried out in a purposive sample of 770 undergraduates from seven universities across England. Sixty-one per cent of the sample (65% men; 58% women) scored positive (8+) on the AUDIT, comprising 40% hazardous drinkers, 11% harmful drinkers and 10% with probable dependence. There were large and significant differences in mean AUDIT scores between the universities taking part in the survey. Two universities in the North of England showed a significantly higher combined mean AUDIT score than two universities in the Midlands which in turn showed a significantly higher mean AUDIT score than three universities in the South. When the effects of university attended were extracted in a binary logistic regression analysis, independent significant predictors of AUDIT positive status were younger age, 'White' ethnicity and both on-campus and off-campus term-time student accommodation. Undergraduates at some universities in England show very high levels of alcohol-related risk and harm. University authorities should estimate the level of hazardous drinking and alcohol use disorders among students at their institutions and take action to reduce risk and harm accordingly. Research is needed using nationally representative samples to estimate the prevalence of alcohol risk and harm in the UK student population and to determine the future course of drinking problems among students currently affected.

  18. An approach to trial design and analysis in the era of non-proportional hazards of the treatment effect.

    PubMed

    Royston, Patrick; Parmar, Mahesh K B

    2014-08-07

    Most randomized controlled trials with a time-to-event outcome are designed and analysed under the proportional hazards assumption, with a target hazard ratio for the treatment effect in mind. However, the hazards may be non-proportional. We address how to design a trial under such conditions, and how to analyse the results. We propose to extend the usual approach, a logrank test, to also include the Grambsch-Therneau test of proportional hazards. We test the resulting composite null hypothesis using a joint test for the hazard ratio and for time-dependent behaviour of the hazard ratio. We compute the power and sample size for the logrank test under proportional hazards, and from that we compute the power of the joint test. For the estimation of relevant quantities from the trial data, various models could be used; we advocate adopting a pre-specified flexible parametric survival model that supports time-dependent behaviour of the hazard ratio. We present the mathematics for calculating the power and sample size for the joint test. We illustrate the methodology in real data from two randomized trials, one in ovarian cancer and the other in treating cellulitis. We show selected estimates and their uncertainty derived from the advocated flexible parametric model. We demonstrate in a small simulation study that when a treatment effect either increases or decreases over time, the joint test can outperform the logrank test in the presence of both patterns of non-proportional hazards. Those designing and analysing trials in the era of non-proportional hazards need to acknowledge that a more complex type of treatment effect is becoming more common. Our method for the design of the trial retains the tools familiar in the standard methodology based on the logrank test, and extends it to incorporate a joint test of the null hypothesis with power against non-proportional hazards. For the analysis of trial data, we propose the use of a pre-specified flexible parametric model that can represent a time-dependent hazard ratio if one is present.
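
    The flexible parametric modelling and the joint test are not reproduced here; the sketch below only simulates the kind of non-proportional pattern at issue, a piecewise-exponential model in which the treatment halves the hazard for the first 12 months and has no effect afterwards, and confirms it by computing period-specific occurrence/exposure rate ratios. All rates and sample sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def piecewise_exp(n, base_rate, hr_early, hr_late, change_t):
        """Event times with hazard base_rate*hr_early before change_t and base_rate*hr_late after."""
        t = rng.exponential(1.0 / (base_rate * hr_early), n)
        late = t > change_t
        # re-draw the tail beyond change_t with the late-period hazard (memoryless property)
        t[late] = change_t + rng.exponential(1.0 / (base_rate * hr_late), late.sum())
        return t

    def window_rate(t, lo, hi):
        """Occurrence/exposure rate within [lo, hi): events divided by person-time."""
        person_time = (np.clip(t, lo, hi) - lo).sum()
        events = ((t >= lo) & (t < hi)).sum()
        return events / person_time

    base = 0.05                                          # events per month, control arm
    control = piecewise_exp(500, base, 1.0, 1.0, 12.0)
    treated = piecewise_exp(500, base, 0.5, 1.0, 12.0)   # effect only in the first 12 months

    for lo, hi in [(0.0, 12.0), (12.0, 60.0)]:
        print((lo, hi), window_rate(treated, lo, hi) / window_rate(control, lo, hi))
    ```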

  19. Observation of entanglement witnesses for orbital angular momentum states

    NASA Astrophysics Data System (ADS)

    Agnew, M.; Leach, J.; Boyd, R. W.

    2012-06-01

    Entanglement witnesses provide an efficient means of determining the level of entanglement of a system using the minimum number of measurements. Here we demonstrate the observation of two-dimensional entanglement witnesses in the high-dimensional basis of orbital angular momentum (OAM). In this case, the number of potentially entangled subspaces scales as d(d - 1)/2, where d is the dimension of the space. The choice of OAM as a basis is relevant as each subspace is not necessarily maximally entangled, thus providing the necessary state for certain tests of nonlocality. The expectation value of the witness gives an estimate of the state of each two-dimensional subspace belonging to the d-dimensional Hilbert space. These measurements demonstrate the degree of entanglement and therefore the suitability of the resulting subspaces for quantum information applications.

  20. The limits on the usefulness of erosion hazard ratings

    Treesearch

    R. M. Rice; P. D. Gradek

    1984-01-01

    Although erosion-hazard ratings are often used to guide forest practices, those used in California from 1974 to 1982 have been inadequate for estimating erosion potential. To improve the erosion-hazard rating procedure, separate estimating equations were used for different situations. The ratings were partitioned according to yarding method, erosional process, and...

  1. Simulation program for estimating statistical power of Cox's proportional hazards model assuming no specific distribution for the survival time.

    PubMed

    Akazawa, K; Nakamura, T; Moriguchi, S; Shimada, M; Nose, Y

    1991-07-01

    Small sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of regression coefficients, covariate structure, censoring pattern and possibly baseline hazard functions. Therefore, it would be difficult to construct a formula or table to calculate the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program, written in SAS/IML, described in this paper uses Monte-Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
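
    The SAS/IML program is not reproduced here, but the same Monte-Carlo idea can be sketched in a few lines: simulate two arms, apply censoring, test with the logrank statistic (which is the partial-likelihood score test of Cox's model for a single binary covariate), and report the rejection rate. Exponential event times and purely administrative censoring are simplifying assumptions of this sketch.

    ```python
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(7)

    def logrank_chi2(time, event, group):
        """Two-sample logrank statistic (score test of the Cox model for a binary covariate)."""
        o_minus_e, var = 0.0, 0.0
        for t in np.unique(time[event]):
            at_risk = time >= t
            d = (event & (time == t)).sum()                   # events at t, both groups
            n, n1 = at_risk.sum(), (at_risk & group).sum()    # at risk: total, group 1
            d1 = (event & (time == t) & group).sum()
            o_minus_e += d1 - d * n1 / n
            if n > 1:
                var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        return o_minus_e**2 / var

    def power(hr, n_per_arm=100, rate=0.1, cens_time=24.0, alpha=0.05, n_sim=500):
        rejections = 0
        for _ in range(n_sim):
            t = np.concatenate([rng.exponential(1 / rate, n_per_arm),
                                rng.exponential(1 / (rate * hr), n_per_arm)])
            event = t <= cens_time                   # administrative censoring only
            time = np.minimum(t, cens_time)
            group = np.repeat([False, True], n_per_arm)
            rejections += chi2.sf(logrank_chi2(time, event, group), 1) < alpha
        return rejections / n_sim

    print(power(hr=0.6))
    ```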

  2. Probabilistic estimation of long-term volcanic hazard under evolving tectonic conditions in a 1 Ma timeframe

    NASA Astrophysics Data System (ADS)

    Jaquet, O.; Lantuéjoul, C.; Goto, J.

    2017-10-01

    Risk assessments in relation to the siting of potential deep geological repositories for radioactive wastes demand the estimation of long-term tectonic hazards such as volcanicity and rock deformation. Owing to their tectonic situation, such evaluations concern many industrial regions around the world. For sites near volcanically active regions, a prevailing source of uncertainty is related to volcanic hazard. For specific situations, in particular in relation to geological repository siting, the requirements for the assessment of volcanic and tectonic hazards have to be expanded to 1 million years. At such time scales, tectonic changes are likely to influence volcanic hazard and therefore a particular stochastic model needs to be developed for the estimation of volcanic hazard. The concepts and theoretical basis of the proposed model are given and a methodological illustration is provided using data from the Tohoku region of Japan.

  3. Development of a Probabilistic Tsunami Hazard Analysis in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toshiaki Sakai; Tomoyoshi Takeda; Hiroshi Soraoka

    2006-07-01

    For tsunami assessment, as for seismic design, it is meaningful to evaluate phenomena beyond the design basis: even once a design basis tsunami height is set, the actual tsunami height may exceed it because of uncertainties regarding the tsunami phenomena. Probabilistic tsunami risk assessment consists of estimating the tsunami hazard and the fragility of structures and executing a system analysis. In this report, we apply a method for probabilistic tsunami hazard analysis (PTHA). We introduce a logic tree approach to estimate tsunami hazard curves (relationships between tsunami height and probability of exceedance) and present an example for Japan. Examples of tsunami hazard curves are illustrated, and uncertainty in the tsunami hazard is displayed by 5-, 16-, 50-, 84- and 95-percentile and mean hazard curves. The result of the PTHA will be used for quantitative assessment of the tsunami risk for important facilities located in coastal areas. Tsunami hazard curves are reasonable input data for structural and system analysis; however, the evaluation method for estimating the fragility of structures and the procedure for system analysis are still being developed. (authors)
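
    Once the logic-tree branches have been run, combining them into mean and fractile hazard curves is a weighted-quantile exercise, as in the sketch below; the branch curves and weights shown are invented for illustration.

    ```python
    import numpy as np

    def fractile_curves(branch_curves, weights, fractiles=(0.05, 0.16, 0.50, 0.84, 0.95)):
        """
        branch_curves: (n_branches, n_heights) annual exceedance probabilities.
        weights:       logic-tree branch weights, summing to 1.
        Returns the weighted mean curve and the weighted fractile curves.
        """
        weights = np.asarray(weights, dtype=float)
        mean = weights @ branch_curves
        out = np.empty((len(fractiles), branch_curves.shape[1]))
        for j in range(branch_curves.shape[1]):          # one tsunami height at a time
            order = np.argsort(branch_curves[:, j])
            cum = np.cumsum(weights[order])
            for i, f in enumerate(fractiles):
                idx = min(np.searchsorted(cum, f), len(cum) - 1)
                out[i, j] = branch_curves[order, j][idx]
        return mean, out

    # Three illustrative branches (exceedance probability vs tsunami height):
    branches = np.array([[1e-2, 3e-3, 5e-4, 2e-5],
                         [2e-2, 8e-3, 2e-3, 1e-4],
                         [5e-3, 1e-3, 1e-4, 5e-6]])
    mean, fracs = fractile_curves(branches, weights=[0.5, 0.3, 0.2])
    print(mean)
    print(fracs)
    ```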

  4. On Railroad Tank Car Puncture Performance: Part II - Estimating Metrics

    DOT National Transportation Integrated Search

    2016-04-12

    This paper is the second in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perfor...

  5. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    PubMed

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
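
    A minimal sketch of the stabilized weights themselves, using a single scikit-learn logistic regression as the learner; in the paper this single model would be replaced by the ensemble (super learner or EL) prediction of the treatment probabilities. The data and variable names are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def stabilized_weights(X, treatment):
        """
        sw_i = Pr(A = a_i) / Pr(A = a_i | X_i).
        The denominator comes from a logistic model here; an ensemble learner
        would supply these predicted probabilities instead.
        """
        denom_model = LogisticRegression(max_iter=1000).fit(X, treatment)
        p_treat = denom_model.predict_proba(X)[:, 1]
        denom = np.where(treatment == 1, p_treat, 1.0 - p_treat)
        p_marginal = treatment.mean()
        num = np.where(treatment == 1, p_marginal, 1.0 - p_marginal)
        return num / denom

    # Toy data: two confounders influencing treatment assignment.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 2))
    treatment = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))
    sw = stabilized_weights(X, treatment)
    print(sw.mean())      # stabilized weights should average close to 1
    ```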

  7. Uncertainty of water type-specific hazardous copper concentrations derived with biotic ligand models.

    PubMed

    Vijver, Martina G; De Koning, Arjan; Peijnenburg, Willie J G M

    2008-11-01

    One of the aims of the Water Framework Directive is to derive Europe-wide environmental quality standards that are scientifically based and protective of surface waters. Accounting for water type-specific bioavailability corrections presents challenges and opportunities for metals research. In this study, we present generally applicable approaches for tiered risk assessment of chemicals for prospective use. The objective of the present study was to derive water type-specific dissolved copper criteria for Dutch surface waters. The intent was to show the utility of accounting for bioavailability by using biotic ligand models (BLMs) and two different ways of extrapolating these BLMs in order to obtain reliable bioavailability-corrected species sensitivity distributions. Water type-specific criteria estimations were generated for six different water quality conditions. Average hazard concentrations as calculated using the BLMs and the two alternate normalization scenarios varied significantly among the different water types, from 5.6 to 73.6 microg/L. Water types defined as large rivers, sandy springs, and acid ponds were most sensitive for Cu. Streams and brooks had the highest hazard concentrations. The two different options examined for toxicity data normalization did impact the calculated hazard concentrations for each water type.

  8. Sherwood correlation for dissolution of pooled NAPL in porous media

    NASA Astrophysics Data System (ADS)

    Aydin Sarikurt, Derya; Gokdemir, Cagri; Copty, Nadim K.

    2017-11-01

    The rate of interphase mass transfer from non-aqueous phase liquids (NAPLs) entrapped in the subsurface into the surrounding mobile aqueous phase is commonly expressed in terms of Sherwood (Sh) correlations that are expressed as a function of flow and porous media properties. Because of the lack of precise methods for the estimation of the interfacial area separating the NAPL and aqueous phases, most studies have opted to use modified Sherwood expressions that lump the interfacial area into the interphase mass transfer coefficient. To date, there are only two studies in the literature that have developed non-lumped Sherwood correlations; however, these correlations have undergone limited validation. In this paper controlled dissolution experiments from pooled NAPL were conducted. The immobile NAPL mass is placed at the bottom of a flow cell filled with porous media with water flowing horizontally on top. Effluent aqueous phase concentrations were measured for a wide range of aqueous phase velocities and for two different porous media. To interpret the experimental results, a two-dimensional pore network model of the NAPL dissolution kinetics and aqueous phase transport was developed. The observed effluent concentrations were then used to compute best-fit mass transfer coefficients. Comparison of the effluent concentrations computed with the two-dimensional pore network model to those estimated with one-dimensional analytical solutions indicates that the analytical model which ignores the transport in the lateral direction can lead to under-estimation of the mass transfer coefficient. Based on system parameters and the estimated mass transfer coefficients, non-lumped Sherwood correlations were developed and compared to previously published data. The developed correlations, which are a significant improvement over currently available correlations that are associated with large uncertainties, can be incorporated into future modeling studies requiring non-lumped Sh expressions.
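
    A generic sketch of how a non-lumped Sherwood correlation of the common power-law form Sh = a·Re^b·Sc^c is used: compute Re and Sc from system properties, evaluate Sh, and convert to a mass transfer coefficient k = Sh·D_m/d50. The coefficients a, b, c below are placeholders, not the values fitted in this work.

    ```python
    def mass_transfer_coefficient(velocity, d50, D_m=8.0e-10, mu=1.0e-3, rho=1000.0,
                                  a=1.0, b=0.6, c=0.5):
        """
        k (m/s) from a power-law Sherwood correlation Sh = a * Re**b * Sc**c,
        with Re = rho * v * d50 / mu and Sc = mu / (rho * D_m), then k = Sh * D_m / d50.
        a, b, c are placeholder coefficients, not the fitted values from the paper.
        """
        Re = rho * velocity * d50 / mu
        Sc = mu / (rho * D_m)
        Sh = a * Re**b * Sc**c
        return Sh * D_m / d50

    # Illustrative conditions: pore velocity of 1 m/day in a medium sand (d50 = 0.5 mm)
    print(mass_transfer_coefficient(velocity=1.0 / 86400, d50=5.0e-4))
    ```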

  9. Estimating the Proportion of Childhood Cancer Cases and Costs Attributable to the Environment in California.

    PubMed

    Nelson, Lauren; Valle, Jhaqueline; King, Galatea; Mills, Paul K; Richardson, Maxwell J; Roberts, Eric M; Smith, Daniel; English, Paul

    2017-05-01

    To estimate the proportion of cases and costs of the most common cancers among children aged 0 to 14 years (leukemia, lymphoma, and brain or central nervous system tumors) that were attributable to preventable environmental pollution in California in 2013. We conducted a literature review to identify preventable environmental hazards associated with childhood cancer. We combined risk estimates with California-specific exposure prevalence estimates to calculate hazard-specific environmental attributable fractions (EAFs). We combined hazard-specific EAFs to estimate EAFs for each cancer and calculated an overall EAF. Estimated economic costs included annual (indirect and direct medical) and lifetime costs. Hazards associated with childhood cancer risks included tobacco smoke, residential exposures, and parental occupational exposures. Estimated EAFs for leukemia, lymphoma, and brain or central nervous system cancer were 21.3% (range = 11.7%-30.9%), 16.1% (range = 15.0%-17.2%), and 2.0% (range = 1.7%-2.2%), respectively. The combined EAF was 15.1% (range = 9.4%-20.7%), representing $18.6 million (range = $11.6 to $25.5 million) in annual costs and $31 million in lifetime costs. Reducing environmental hazards and exposures in California could substantially reduce the human burden of childhood cancer and result in significant annual and lifetime savings.
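
    The attributable-fraction arithmetic can be sketched directly: Levin's formula turns an exposure prevalence and relative risk into a hazard-specific EAF, and independent hazard-specific fractions are commonly combined as 1 − Π(1 − EAF_i). Whether the study used exactly this combination rule is an assumption here, and the prevalences and relative risks below are illustrative only.

    ```python
    def levin_eaf(prevalence, relative_risk):
        """Environmental attributable fraction for one hazard (Levin's formula)."""
        excess = prevalence * (relative_risk - 1.0)
        return excess / (excess + 1.0)

    def combined_eaf(fractions):
        """One common way to combine independent hazard-specific fractions."""
        remaining = 1.0
        for f in fractions:
            remaining *= 1.0 - f
        return 1.0 - remaining

    # Illustrative inputs (not the California estimates): two hazards, each with an
    # assumed exposure prevalence and relative risk for a given childhood cancer.
    hazards = [(0.20, 1.4), (0.10, 1.6)]
    eafs = [levin_eaf(p, rr) for p, rr in hazards]
    print(eafs, combined_eaf(eafs))
    ```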

  10. An Approach to Addressing Selection Bias in Survival Analysis

    PubMed Central

    Carlin, Caroline S.; Solid, Craig A.

    2014-01-01

    This work proposes a frailty model that accounts for non-random treatment assignment in survival analysis. Using Monte Carlo simulation, we found that estimated treatment parameters from our proposed endogenous selection survival model (esSurv) closely parallel the consistent two-stage residual inclusion (2SRI) results, while offering computational and interpretive advantages. The esSurv method greatly enhances computational speed relative to 2SRI by eliminating the need for bootstrapped standard errors, and generally results in smaller standard errors than those estimated by 2SRI. In addition, esSurv explicitly estimates the correlation of unobservable factors contributing to both treatment assignment and the outcome of interest, providing an interpretive advantage over the residual parameter estimate in the 2SRI method. Comparisons with commonly used propensity score methods and with a model that does not account for non-random treatment assignment show clear bias in these methods that is not mitigated by increased sample size. We illustrate using actual dialysis patient data comparing mortality of patients with mature arteriovenous grafts for venous access to mortality of patients with grafts placed but not yet ready for use at the initiation of dialysis. We find strong evidence of endogeneity (with estimate of correlation in unobserved factors ρ̂ = 0.55), and estimate a mature-graft hazard ratio of 0.197 in our proposed method, with a similar 0.173 hazard ratio using 2SRI. The 0.630 hazard ratio from a frailty model without a correction for the non-random nature of treatment assignment illustrates the importance of accounting for endogeneity. PMID:24845211
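
    For readers unfamiliar with the 2SRI comparator, the sketch below shows a minimal two-stage residual inclusion workflow for a binary treatment and a survival outcome, assuming the statsmodels and lifelines Python packages and hypothetical column names. It does not implement the authors' esSurv model, and in practice the 2SRI standard errors would be bootstrapped, as the abstract notes.

```python
# Two-stage residual inclusion (2SRI) sketch for a binary treatment and a
# survival outcome, with hypothetical column names. This illustrates the
# comparator method described in the abstract, not the authors' esSurv model.
import statsmodels.api as sm
from lifelines import CoxPHFitter

def two_stage_residual_inclusion(df, instrument, treatment, duration, event, covariates):
    # Stage 1: model treatment assignment with the instrument and covariates.
    X1 = sm.add_constant(df[[instrument] + covariates])
    stage1 = sm.Logit(df[treatment], X1).fit(disp=False)
    df = df.copy()
    df["stage1_resid"] = df[treatment] - stage1.predict(X1)

    # Stage 2: Cox model including the treatment and the first-stage residual.
    cph = CoxPHFitter()
    cph.fit(df[[duration, event, treatment, "stage1_resid"] + covariates],
            duration_col=duration, event_col=event)
    return cph

# Usage (hypothetical data frame and column names):
# cph = two_stage_residual_inclusion(data, "instrument_z", "mature_graft",
#                                    "time", "death", ["age", "diabetes"])
# print(cph.summary)   # hazard ratio for mature_graft, adjusted via 2SRI
```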

  11. Analytical computation of three-dimensional synthetic seismograms by Modal Summation: method, validation and applications

    NASA Astrophysics Data System (ADS)

    La Mura, Cristina; Gholami, Vahid; Panza, Giuliano F.

    2013-04-01

    In order to enable realistic earthquake hazard assessment and reliable estimation of the ground motion response to an earthquake, three-dimensional velocity models have to be considered. The propagation of seismic waves in complex, laterally varying 3D layered structures is a complicated process, and analytical solutions of the elastodynamic equations for such media are not known. The most common approaches to the formal description of seismic wavefields in such complex structures are methods based on direct numerical solutions of the elastodynamic equations, e.g. the finite-difference and finite-element methods, and approximate asymptotic methods. In this work, we present an innovative methodology for computing synthetic seismograms, complete with the main direct, refracted and converted phases and surface waves, in three-dimensional anelastic models, based on the combination of the Modal Summation technique with Asymptotic Ray Theory in the framework of the WKBJ approximation. The three-dimensional models are constructed using a set of vertically heterogeneous sections (1D structures) juxtaposed on a regular grid. The distribution of these sections in the grid is chosen to fulfill the requirement of weak lateral inhomogeneity, so that the condition of applicability of the WKBJ approximation is satisfied, i.e. the lateral gradient of the parameters characterizing the 1D structure has to be small with respect to the prevailing wavelength. The new method has been validated by comparing synthetic seismograms with the available records of three earthquakes in three different regions: the Kanto basin (Japan), shaken by the 1990 Odawara earthquake (Mw = 5.1); Romanian territory, affected by the 30 May 1990 Vrancea intermediate-depth earthquake (Mw = 6.9); and Iranian territory, affected by the 26 December 2003 Bam earthquake (Mw = 6.6). Besides being a useful tool for seismic hazard assessment and seismic risk reduction, the method is highly efficient: once the study region is identified and the 3D model is constructed, the computation at each station of the three components of the synthetic signal (displacement, velocity, and acceleration) takes less than 3 hours on a 2 GHz CPU.

  12. Hörmander multipliers on two-dimensional dyadic Hardy spaces

    NASA Astrophysics Data System (ADS)

    Daly, J.; Fridli, S.

    2008-12-01

    In this paper we are interested in conditions on the coefficients of a two-dimensional Walsh multiplier operator that imply the operator is bounded on certain of the Hardy type spaces Hp, 0

  13. Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a framework to build a survival/risk bump hunting model with a censored time-to-event response. Our Survival Bump Hunting (SBH) method is based on a recursive peeling procedure that uses a specific survival peeling criterion derived from non/semi-parametric statistics such as the hazard ratio, the log-rank test or the Nelson-Aalen estimator. To optimize the tuning parameter of the model and validate it, we introduce an objective function based on survival or prediction-error statistics, such as the log-rank test and the concordance error rate. We also describe two alternative cross-validation techniques adapted to the joint task of decision-rule making by recursive peeling and survival estimation. Numerical analyses show the importance of replicated cross-validation and the differences between criteria and techniques in both low- and high-dimensional settings. Although several non-parametric survival models exist, none addresses the problem of directly identifying local extrema. We show how SBH efficiently estimates extreme survival/risk subgroups unlike other models. This provides an insight into the behavior of commonly used models and suggests alternatives to be adopted in practice. Finally, our SBH framework was applied to a clinical dataset. In it, we identified subsets of patients characterized by clinical and demographic covariates with a distinct extreme survival outcome, for which tailored medical interventions could be made. An R package PRIMsrc (Patient Rule Induction Method in Survival, Regression and Classification settings) is available on CRAN (Comprehensive R Archive Network) and GitHub. PMID:27034730
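
    A minimal sketch of a single survival peeling step, in the spirit of (but much simpler than) the SBH/PRIMsrc procedure, is given below: each candidate peel removes a small fraction of one covariate's tail, and the peel that maximizes the log-rank statistic between the inside-box and outside-box samples is kept. Column names are hypothetical and the lifelines package is assumed.

```python
# Simplified single peeling step for survival bump hunting. `df` is a pandas
# DataFrame with duration, event, and covariate columns (names hypothetical).
# This is an illustration of the idea, not a reimplementation of SBH/PRIMsrc.
from lifelines.statistics import logrank_test

def peel_once(df, covariates, duration="time", event="status", alpha=0.05):
    best = None
    for var in covariates:
        for side in ("low", "high"):
            q = df[var].quantile(alpha if side == "low" else 1 - alpha)
            inside = df[df[var] >= q] if side == "low" else df[df[var] <= q]
            outside = df.drop(inside.index)
            if len(inside) == 0 or len(outside) == 0:
                continue
            # Peeling criterion: separation of inside-box vs outside-box survival.
            stat = logrank_test(inside[duration], outside[duration],
                                event_observed_A=inside[event],
                                event_observed_B=outside[event]).test_statistic
            if best is None or stat > best[0]:
                best = (stat, var, side, q, inside)
    return best  # (statistic, variable, side, cut point, peeled data frame)

# Repeatedly applying peel_once to the returned inside-box data frame yields a
# sequence of nested boxes (the peeling trajectory), which would then be tuned
# and validated by cross-validation as described in the abstract.
```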

  14. Probabilistic Appraisal of Earthquake Hazard Parameters Deduced from a Bayesian Approach in the Northwest Frontier of the Himalayas

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Tsapanos, T. M.; Bayrak, Yusuf; Koravos, G. Ch.

    2013-03-01

    A straightforward Bayesian statistic is applied in five broad seismogenic source zones of the northwest frontier of the Himalayas to estimate the earthquake hazard parameters (maximum regional magnitude Mmax, β value of the G-R relationship, and seismic activity rate or intensity λ). For this purpose, a reliable earthquake catalogue, homogeneous for MW ≥ 5.0 and complete for the period 1900 to 2010, is compiled. The Hindukush-Pamir Himalaya zone has been further divided into two seismic zones of shallow (h ≤ 70 km) and intermediate depth (h > 70 km) according to the variation of seismicity with depth in the subduction zone. The earthquake hazard parameters estimated by the Bayesian approach are more stable and reliable, with lower standard deviations, than those from other approaches, although the technique is more time consuming. In this study, quantiles of functions of distributions of true and apparent magnitudes for future time intervals of 5, 10, 20, 50 and 100 years are calculated with confidence limits for probability levels of 50, 70 and 90% in all seismogenic source zones. The zones with estimated Mmax greater than 8.0 correspond to the Sulaiman-Kirthar ranges, the Hindukush-Pamir Himalaya and the Himalayan Frontal Thrusts belt, suggesting that these are the most seismically hazardous regions in the examined area. The lowest value of Mmax (6.44) was calculated for the Northern Pakistan and Hazara syntaxis zone, which also has the lowest estimated activity rate (0.0023 events/day) compared to the other zones. The Himalayan Frontal Thrusts belt exhibits the highest expected magnitude (8.01) for the next 100 years at the 90% probability level, indicating that this zone is the most vulnerable to the occurrence of a great earthquake. The results obtained in this study are directly useful for probabilistic seismic hazard assessment in the examined region of the Himalaya.
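
    For contrast with the Bayesian estimates discussed above, the sketch below shows the simple classical estimation of two of the hazard parameters from a catalogue: the activity rate above the completeness magnitude and the Gutenberg-Richter b-value via Aki's maximum-likelihood estimator, using synthetic magnitudes.

```python
# Classical counterpart to two of the Bayesian hazard parameters: activity
# rate above the completeness magnitude Mc and the Gutenberg-Richter b-value
# via Aki's maximum-likelihood estimator, applied to synthetic magnitudes.
import numpy as np

def aki_b_value(magnitudes, m_complete):
    """b = log10(e) / (mean(M) - Mc), for magnitudes at or above completeness Mc."""
    m = np.asarray(magnitudes)
    m = m[m >= m_complete]
    return np.log10(np.e) / (m.mean() - m_complete), m.size

rng = np.random.default_rng(0)
m_c = 5.0
catalogue_years = 110.0                                # e.g. 1900-2010
# Synthetic catalogue consistent with a true b-value of about 1.
mags = m_c + rng.exponential(scale=1.0 / np.log(10), size=400)

b, n = aki_b_value(mags, m_c)
rate = n / catalogue_years                             # events per year above Mc
print(f"b-value ~ {b:.2f}, activity rate ~ {rate:.2f} events/yr (M >= {m_c})")
```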

  15. Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*

    PubMed Central

    Karaivanov, Alexander; Townsend, Robert M.

    2014-01-01

    We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting the combined business and consumption data best. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710

  16. Applying the Land Use Portfolio Model to Estimate Natural-Hazard Loss and Risk - A Hypothetical Demonstration for Ventura County, California

    USGS Publications Warehouse

    Dinitz, Laura B.

    2008-01-01

    With costs of natural disasters skyrocketing and populations increasingly settling in areas vulnerable to natural hazards, society is challenged to better allocate its limited risk-reduction resources. In 2000, Congress passed the Disaster Mitigation Act, amending the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Robert T. Stafford Disaster Relief and Emergency Assistance Act, Pub. L. 93-288, 1988; Federal Emergency Management Agency, 2002, 2008b; Disaster Mitigation Act, 2000), mandating that State, local, and tribal communities prepare natural-hazard mitigation plans to qualify for pre-disaster mitigation grants and post-disaster aid. The Federal Emergency Management Agency (FEMA) was assigned to coordinate and implement hazard-mitigation programs, and it published information about specific mitigation-plan requirements and the mechanisms (through the Hazard Mitigation Grant Program-HMGP) for distributing funds (Federal Emergency Management Agency, 2002). FEMA requires that each community develop a mitigation strategy outlining long-term goals to reduce natural-hazard vulnerability, mitigation objectives and specific actions to reduce the impacts of natural hazards, and an implementation plan for those actions. The implementation plan should explain methods for prioritizing, implementing, and administering the actions, along with a 'cost-benefit review' justifying the prioritization. FEMA, along with the National Institute of Building Sciences (NIBS), supported the development of HAZUS ('Hazards U.S.'), a geospatial natural-hazards loss-estimation tool, to help communities quantify potential losses and to aid in the selection and prioritization of mitigation actions. HAZUS was expanded to a multiple-hazard version, HAZUS-MH, that combines population, building, and natural-hazard science and economic data and models to estimate physical damages, replacement costs, and business interruption for specific natural-hazard scenarios. HAZUS-MH currently performs analyses for earthquakes, floods, and hurricane wind. HAZUS-MH loss estimates, however, do not account for some uncertainties associated with the specific natural-hazard scenarios, such as the likelihood of occurrence within a particular time horizon or the effectiveness of alternative risk-reduction options. Because of the uncertainties involved, it is challenging to make informative decisions about how to cost-effectively reduce risk from natural-hazard events. Risk analysis is one approach that decision-makers can use to evaluate alternative risk-reduction choices when outcomes are unknown. The Land Use Portfolio Model (LUPM), developed by the U.S. Geological Survey (USGS), is a geospatial scenario-based tool that incorporates hazard-event uncertainties to support risk analysis. The LUPM offers an approach to estimate and compare risks and returns from investments in risk-reduction measures. This paper describes and demonstrates a hypothetical application of the LUPM for Ventura County, California, and examines the challenges involved in developing decision tools that provide quantitative methods to estimate losses and analyze risk from natural hazards.
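
    The core arithmetic behind scenario-based risk tools of this kind is an expected-loss calculation combined with a comparison of mitigation options. The sketch below is a generic illustration with made-up numbers; it is not the USGS Land Use Portfolio Model itself.

```python
# Generic scenario-based expected-loss sketch of the kind of arithmetic a
# portfolio-style risk tool performs: expected loss = event probability x
# damage, and the net benefit of a mitigation option is the avoided expected
# loss minus its cost. All numbers are illustrative assumptions.

def expected_loss(p_event, damage):
    return p_event * damage

def net_benefit(p_event, damage, effectiveness, mitigation_cost):
    avoided = expected_loss(p_event, damage) * effectiveness
    return avoided - mitigation_cost

p_event = 0.02          # annual probability of the hazard scenario (assumed)
damage = 50e6           # loss if the scenario occurs, in dollars (assumed)
options = {"retrofit": (0.60, 400_000),          # (effectiveness, annual cost), assumed
           "land-use change": (0.35, 150_000)}

for name, (eff, cost) in options.items():
    print(name, "net annual benefit:", round(net_benefit(p_event, damage, eff, cost)))
```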

  17. Validation of a 30m resolution flood hazard model of the conterminous United States

    NASA Astrophysics Data System (ADS)

    Sampson, C. C.; Wing, O.; Smith, A.; Bates, P. D.; Neal, J. C.

    2017-12-01

    We present a 30m resolution two-dimensional hydrodynamic model of the entire conterminous US that has been used to simulate continent-wide flood extent for ten return periods. The model uses a highly efficient numerical solution of the shallow water equations to simulate fluvial flooding in catchments down to 50 km2 and pluvial flooding in all catchments. We use the US National Elevation Dataset (NED) to determine topography for the model and the US Army Corps of Engineers National Levee Dataset to explicitly represent known flood defences. Return period flows and rainfall intensities are estimated using regionalized frequency analyses. We validate these simulations against the complete catalogue of Federal Emergency Management Agency (FEMA) Special Flood Hazard Area maps. We also compare the results obtained from the NED-based continental model with results from a 90m resolution global hydraulic model built using SRTM terrain and identical boundary conditions. Where the FEMA Special Flood Hazard Areas are based on high quality local models the NED-based continental scale model attains a Hit Rate of 86% and a Critical Success Index (CSI) of 0.59; both are typical of scores achieved when comparing high quality reach-scale models to observed event data. The NED model also consistently outperformed the coarser SRTM model. The correspondence between the continental model and FEMA improves in temperate areas and for basins above 400 km2. Given typical hydraulic modeling uncertainties in the FEMA maps, it is probable that the continental-scale model can replicate them to within error. The continental model covers the entire continental US, compared to only 61% for FEMA, and also maps flooding in smaller watersheds not included in the FEMA coverage. The simulations were performed using computing hardware costing less than $100k, whereas the FEMA flood layers are built from thousands of individual local studies that took several decades to develop at an estimated cost (up to 2013) of $4.5-$7.5bn. The continental model is relatively straightforward to modify and could be re-run under different scenarios, such as climate change. The results show that continental-scale models may now offer sufficient rigor to inform some decision-making needs with far lower cost and greater coverage than traditional patchwork approaches.
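
    The two skill scores quoted above are computed from a cell-by-cell contingency table between the modelled and benchmark flood extents. A minimal sketch of that calculation:

```python
# Binary-pattern skill scores used in the validation: hit rate and critical
# success index (CSI), computed from a cell-by-cell comparison of modelled and
# benchmark (e.g. FEMA) flood extents. The masks below are tiny synthetic examples.
import numpy as np

def flood_skill(model_wet, benchmark_wet):
    model_wet = np.asarray(model_wet, dtype=bool)
    benchmark_wet = np.asarray(benchmark_wet, dtype=bool)
    hits = np.sum(model_wet & benchmark_wet)
    misses = np.sum(~model_wet & benchmark_wet)
    false_alarms = np.sum(model_wet & ~benchmark_wet)
    hit_rate = hits / (hits + misses)
    csi = hits / (hits + misses + false_alarms)
    return hit_rate, csi

model = [[1, 1, 0], [0, 1, 0], [0, 0, 0]]
bench = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
hr, csi = flood_skill(model, bench)
print(f"Hit rate = {hr:.2f}, CSI = {csi:.2f}")
```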

  18. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
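
    To make the setting concrete, the sketch below simulates a single source received by a nominally uniform linear array whose elements are randomly displaced, and then estimates the DOA with plain spectral MUSIC using the nominal geometry. It is a simplified stand-in for the root-MUSIC and Khatri-Rao-root-MUSIC estimators studied in the paper, and the array and noise parameters are assumptions.

```python
# Spectral MUSIC DOA sketch: data are generated with randomly displaced element
# positions, while the estimator assumes the nominal geometry, mimicking the
# position-error problem studied in the abstract. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
M = 8                                    # number of array elements
d = 0.5                                  # nominal spacing in wavelengths
true_doa_deg = 20.0
snapshots, snr_db = 200, 10.0

nominal = d * np.arange(M)                               # geometry assumed by the estimator
perturbed = nominal + rng.normal(0.0, 0.02, M)           # actual (randomly displaced) positions

def steering(theta_deg, pos):
    # Positions are in wavelengths, so the phase factor is 2*pi*pos*sin(theta).
    return np.exp(2j * np.pi * pos * np.sin(np.deg2rad(theta_deg)))

# Simulate snapshots of one plane wave received by the *perturbed* array.
s = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
noise_std = 10 ** (-snr_db / 20)
X = np.outer(steering(true_doa_deg, perturbed), s)
X += noise_std * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape)) / np.sqrt(2)

# Spectral MUSIC using the *nominal* geometry, as the estimator would in practice.
R = X @ X.conj().T / snapshots
_, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-1]                      # noise subspace (one source assumed)

grid = np.linspace(-90.0, 90.0, 1801)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t, nominal)) ** 2 for t in grid]
print("estimated DOA:", grid[int(np.argmax(spectrum))], "deg; true:", true_doa_deg, "deg")
```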

  19. Logarithmic Superdiffusion in Two Dimensional Driven Lattice Gases

    NASA Astrophysics Data System (ADS)

    Krug, J.; Neiss, R. A.; Schadschneider, A.; Schmidt, J.

    2018-03-01

    The spreading of density fluctuations in two-dimensional driven diffusive systems is marginally anomalous. Mode coupling theory predicts that the diffusivity in the direction of the drive diverges with time as (ln t)^{2/3} with a prefactor depending on the macroscopic current-density relation and the diffusion tensor of the fluctuating hydrodynamic field equation. Here we present the first numerical verification of this behavior for a particular version of the two-dimensional asymmetric exclusion process. Particles jump strictly asymmetrically along one of the lattice directions and symmetrically along the other, and an anisotropy parameter p governs the ratio between the two rates. Using a novel massively parallel coupling algorithm that strongly reduces the fluctuations in the numerical estimate of the two-point correlation function, we are able to accurately determine the exponent of the logarithmic correction. In addition, the variation of the prefactor with p provides a stringent test of mode coupling theory.

  20. Two-dimensional ground-water flow model of the Cretaceous aquifer system of Lee County and vicinity, Mississippi

    USGS Publications Warehouse

    Kernodle, John Michael

    1981-01-01

    A two-dimensional ground-water flow model of the Eutaw-McShan and Gordo aquifers in the area of Lee County, Miss., was successfully calibrated and verified using data from six long-term observation wells and two intensive studies of areal water levels. The water levels computed by the model were found to be most sensitive to changes in simulated aquifer hydraulic conductivity and to changes in head in the overlying Coffee Sand aquifer. The two-dimensional model performed reasonably well in simulating the aquifer system except possibly in southern Lee County and southward where a clay bed at the top of the Gordo Formation partially isolated the Gordo from the overlying Eutaw-McShan aquifer. The verified model was used to determine theoretical aquifer response to increased ground-water withdrawal to the year 2000. Two estimated rates of increase and five possible well field locations were examined. (USGS)

  1. Real-time two-dimensional temperature imaging using ultrasound.

    PubMed

    Liu, Dalong; Ebbini, Emad S

    2009-01-01

    We present a system for real-time 2D imaging of temperature change in tissue media using pulse-echo ultrasound. The frontend of the system is a SonixRP ultrasound scanner with a research interface that gives us the capability of controlling the beam sequence and accessing radio frequency (RF) data in real-time. The beamformed RF data is streamed to the backend of the system, where it is processed using a two-dimensional temperature estimation algorithm running on the graphics processing unit (GPU). The estimated temperature is displayed in real-time, providing feedback that can be used for real-time control of the heating source. We have verified our system with an elastography tissue-mimicking phantom and in vitro porcine heart tissue; excellent repeatability and sensitivity were demonstrated.

  2. Control-surface hinge-moment calculations for a high-aspect-ratio supercritical wing

    NASA Technical Reports Server (NTRS)

    Perry, B., III

    1978-01-01

    The hinge moments, at selected flight conditions, resulting from deflecting two trailing edge control surfaces (one inboard and one midspan) on a high aspect ratio, swept, fuel conservative wing with a supercritical airfoil are estimated. Hinge moment results obtained from procedures which employ a recently developed transonic analysis are given. In this procedure a three dimensional inviscid transonic aerodynamics computer program is combined with a two dimensional turbulent boundary layer program in order to obtain an interacted solution. These results indicate that trends of the estimated hinge moment as a function of deflection angle are similar to those from experimental hinge moment measurements made on wind tunnel models with swept supercritical wings tested at similar values of free stream Mach number and angle of attack.

  3. Control-surface hinge-moment calculations for a high-aspect-ratio supercritical wing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, B.I.

    1978-09-01

    The hinge moments, at selected flight conditions, resulting from deflecting two trailing edge control surfaces (one inboard and one midspan) on a high aspect ratio, swept, fuel conservative wing with a supercritical airfoil are estimated. Hinge moment results obtained from procedures which employ a recently developed transonic analysis are given. In this procedure a three dimensional inviscid transonic aerodynamics computer program is combined with a two dimensional turbulent boundary layer program in order to obtain an interacted solution. These results indicate that trends of the estimated hinge moment as a function of deflection angle are similar to those from experimental hinge moment measurements made on wind tunnel models with swept supercritical wings tested at similar values of free stream Mach number and angle of attack.

  4. Validation of a 30 m resolution flood hazard model of the conterminous United States

    NASA Astrophysics Data System (ADS)

    Wing, Oliver E. J.; Bates, Paul D.; Sampson, Christopher C.; Smith, Andrew M.; Johnson, Kris A.; Erickson, Tyler A.

    2017-09-01

    This paper reports the development of a ~30 m resolution two-dimensional hydrodynamic model of the conterminous U.S. using only publicly available data. The model employs a highly efficient numerical solution of the local inertial form of the shallow water equations which simulates fluvial flooding in catchments down to 50 km2 and pluvial flooding in all catchments. Importantly, we use the U.S. Geological Survey (USGS) National Elevation Dataset to determine topography; the U.S. Army Corps of Engineers National Levee Dataset to explicitly represent known flood defenses; and global regionalized flood frequency analysis to characterize return period flows and rainfalls. We validate these simulations against the complete catalogue of Federal Emergency Management Agency (FEMA) Special Flood Hazard Area (SFHA) maps and detailed local hydraulic models developed by the USGS. Where the FEMA SFHAs are based on high-quality local models, the continental-scale model attains a hit rate of 86%. This correspondence improves in temperate areas and for basins above 400 km2. Against the higher quality USGS data, the average hit rate reaches 92% for the 1 in 100 year flood, and 90% for all flood return periods. Given typical hydraulic modeling uncertainties in the FEMA maps and USGS model outputs (e.g., errors in estimating return period flows), it is probable that the continental-scale model can replicate both to within error. The results show that continental-scale models may now offer sufficient rigor to inform some decision-making needs with dramatically lower cost and greater coverage than approaches based on a patchwork of local studies.

  5. A Local Agreement Pattern Measure Based on Hazard Functions for Survival Outcomes

    PubMed Central

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K.

    2017-01-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have been focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this paper, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotical normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. PMID:28724196

  6. A local agreement pattern measure based on hazard functions for survival outcomes.

    PubMed

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K

    2018-03-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have been focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this article, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotical normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. © 2017, The International Biometric Society.

  7. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
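
    Discrete hazard models of the kind discussed above are commonly fitted by expanding subjects into person-period records and running a binary regression. The sketch below shows the logit-link (continuation-ratio-type) version with hypothetical data and a simplified linear baseline-hazard term; it does not implement the paper's censoring-robust estimator.

```python
# Discrete-time hazard model via person-period expansion and logistic regression.
# Column names and data are hypothetical; this is the standard textbook approach,
# not the censoring-robust estimator proposed in the paper.
import pandas as pd
import statsmodels.api as sm

def to_person_period(df, time_col="time", event_col="event"):
    """Expand one row per subject into one row per subject-period."""
    rows = []
    for _, r in df.iterrows():
        for t in range(1, int(r[time_col]) + 1):
            rows.append({"period": t, "treat": r["treat"],
                         "fail": int(t == r[time_col] and r[event_col] == 1)})
    return pd.DataFrame(rows)

# Hypothetical subject-level data with discrete failure/censoring times.
subjects = pd.DataFrame({"time":  [3, 5, 2, 4, 1, 5],
                         "event": [1, 0, 1, 1, 1, 0],
                         "treat": [1, 0, 0, 1, 0, 1]})

pp = to_person_period(subjects)
# Baseline hazard approximated with a linear period term for brevity; period
# dummies would be the more flexible standard choice.
X = sm.add_constant(pp[["period", "treat"]].astype(float))
fit = sm.Logit(pp["fail"].astype(float), X).fit(disp=False)
print(fit.params)   # the "treat" coefficient is the conditional log odds (hazard) ratio
```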

  8. Pairwise comparisons and visual perceptions of equal area polygons.

    PubMed

    Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R

    2009-02-01

    Studies related to visual perception have been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error when respondents estimated areas against the unit square was 25.75%. The error decreased substantially, to 5.51%, when the shapes were compared to one another in pairs. This gain of 20.24% in the two-dimensional experiment was substantially better than the 11.78% gain reported in previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.

  9. A Model-Based Approach for Visualizing the Dimensional Structure of Ordered Successive Categories Preference Data

    ERIC Educational Resources Information Center

    DeSarbo, Wayne S.; Park, Joonwook; Scott, Crystal J.

    2008-01-01

    A cyclical conditional maximum likelihood estimation procedure is developed for the multidimensional unfolding of two- or three-way dominance data (e.g., preference, choice, consideration) measured on ordered successive category rating scales. The technical description of the proposed model and estimation procedure are discussed, as well as the…

  10. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  11. Changes in Risk of Immediate Adverse Reactions to Iodinated Contrast Media by Repeated Administrations in Patients with Hepatocellular Carcinoma

    PubMed Central

    Fujiwara, Naoto; Tateishi, Ryosuke; Akahane, Masaaki; Taguri, Masataka; Minami, Tatsuya; Mikami, Shintaro; Sato, Masaya; Uchino, Kouji; Enooku, Kenichiro; Kondo, Yuji; Asaoka, Yoshinari; Yamashiki, Noriyo; Goto, Tadashi; Shiina, Shuichiro; Yoshida, Haruhiko; Ohtomo, Kuni; Koike, Kazuhiko

    2013-01-01

    Background: To elucidate whether repeated exposures to iodinated contrast media increase the risk of adverse reaction. Materials and Methods: We retrospectively reviewed 1,861 patients with hepatocellular carcinoma who visited the authors' institution, a tertiary referral center, between 2004 and 2008. We analyzed the cumulative probability of adverse reactions and risk factors. We categorized all symptoms into hypersensitivity reactions, physiologic reactions, and other reactions, according to the American College of Radiology guidelines, and evaluated each category as an event. We estimated the association between the hazard for adverse reactions and the number of cumulative exposures to contrast media. We also evaluated subsequent contrast media injections and adverse reactions. Results: There were 23,684 contrast media injections in 1,729 patients. One hundred and thirty-two patients were excluded because they were given no contrast media during the study period. Adverse reactions occurred in 196 (0.83%) patients. The cumulative incidence at the 10th, 20th, and 30th examination was 7.9%, 15.2%, and 24.1%, respectively. Presence of renal impairment was found to be one of the risk factors for adverse reactions. The estimated hazard of overall adverse reaction gradually decreased until around the 10th exposure and rose with subsequent exposures. The estimated hazard of hypersensitivity showed a V-shaped change with the cumulative number of exposures. The estimated hazard of physiologic reaction tended to decrease, and that of other reactions tended to increase. A second adverse reaction was more severe than the initial one in only one of 130 patients receiving subsequent injections. Conclusion: Repeated exposures to iodinated contrast media increase the risk of adverse reaction. PMID:24098420

  12. Gender differences in sex-related alcohol expectancies in young adults from a peri-urban area in Lima, Peru.

    PubMed

    Gálvez-Buccollini, Juan A; Paz-Soldán, Valerie A; Herrera, Phabiola M; DeLea, Suzanne; Gilman, Robert H

    2009-06-01

    To estimate the effect of sex-related alcohol expectancies (SRAE) on hazardous drinking prevalence and examine gender differences in reporting SRAE. Trained research assistants administered part of a questionnaire to 393 men and 400 women between 18 and 30 years old from a peri-urban shantytown in Lima, Peru. The remaining questions were self-administered. Two measuring instruments, one testing for hazardous drinking and one for SRAE, were used. Multivariate data analysis was performed using logistic regression. Based on odds ratios adjusted for socio-demographic variables (age, marital status, education, and employment status) (n = 793), men with one or two SRAE and men with three or more SRAE were 2.3 (95% confidence interval (CI) = 1.4-3.8; p = 0.001) and 3.9 (95% CI = 2.1-7.3; p < 0.001) times more likely than men with no SRAE, respectively, to be hazardous drinkers. Reporting of SRAE was significantly higher in men than in women. In a shantytown in Lima, SRAE is associated with hazardous drinking among men, but not among women, and reporting of SRAE differs by gender.

  13. [Study on the influence of bioclogging on permeability of saturated porous media by experiments and models].

    PubMed

    Yang, Jing; Ye, Shu-jun; Wu, Ji-chun

    2011-05-01

    This paper studied the influence of bioclogging on the permeability of saturated porous media. Laboratory hydraulic tests were conducted in a two-dimensional C190 sand-filled cell (55 cm wide x 45 cm high x 1.28 cm thick) to investigate the growth of mixed microorganisms (KB-1) and the influence of biofilm on the permeability of saturated porous media under nutrient-rich conditions. Biomass distributions in the water and on the sand in the cell were measured by protein analysis. The biofilm distribution on the sand was observed by confocal laser scanning microscopy. Permeability was measured by hydraulic tests. The biomass levels measured in the water and on the sand increased with time and were highest at the bottom of the cell, where the biofilm on the sand was thicker. The results of the hydraulic tests demonstrated that the permeability after biofilm growth was, on average, about 12% of the initial value. To investigate the spatial distribution of permeability in the two-dimensional cell, three models (Taylor, Seki, and Clement) were used to calculate the permeability of porous media with biofilm growth. Taylor's model predicted reductions in permeability of 2-5 orders of magnitude, Clement's model predicted 3%-98% of the initial value, and Seki's model could not be applied in this study. In conclusion, biofilm growth can clearly decrease the permeability of two-dimensional saturated porous media; however, the reduction was much less than that estimated under one-dimensional conditions. Additionally, for two-dimensional saturated porous media under nutrient-rich conditions, Seki's model could not be applied, Taylor's model predicted larger reductions than observed, and the results of Clement's model were closest to the results of the hydraulic tests.

  14. Inference for High-dimensional Differential Correlation Matrices.

    PubMed

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
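
    A minimal sketch of hard-thresholding a differential correlation matrix D = R1 - R2 is shown below. The paper's procedure uses entry-wise, data-driven (adaptive) thresholds; here a single universal threshold of order sqrt(log p / n) is used purely for illustration.

```python
# Hard-thresholding of a differential correlation matrix D = R1 - R2 with a
# simple universal threshold. This is a simplified stand-in for the adaptive
# (entry-wise) thresholding procedure analyzed in the paper.
import numpy as np

def differential_correlation(X1, X2, c=2.0):
    n1, p = X1.shape
    n2, _ = X2.shape
    D = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
    tau = c * np.sqrt(np.log(p) / min(n1, n2))     # universal threshold (simplified)
    D_hat = np.where(np.abs(D) >= tau, D, 0.0)
    np.fill_diagonal(D_hat, 0.0)                   # diagonal of R1 - R2 is zero anyway
    return D_hat

rng = np.random.default_rng(2)
X1 = rng.normal(size=(100, 50))                    # two groups with no true difference
X2 = rng.normal(size=(100, 50))
print("nonzero differential entries:", np.count_nonzero(differential_correlation(X1, X2)))
```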

  15. The cost of karst subsidence and sinkhole collapse in the United States compared with other natural hazards

    USGS Publications Warehouse

    Weary, David J.

    2015-01-01

    Rocks with potential for karst formation are found in all 50 states. Damage due to karst subsidence and sinkhole collapse is a natural hazard of national scope. Repair of damage to buildings, highways, and other infrastructure represents a significant national cost. Sparse and incomplete data show that the average cost of karst-related damages in the United States over the last 15 years is estimated to be at least $300,000,000 per year and the actual total is probably much higher. This estimate is lower than the estimated annual costs for other natural hazards; flooding, hurricanes and cyclonic storms, tornadoes, landslides, earthquakes, or wildfires, all of which average over $1 billion per year. Very few state organizations track karst subsidence and sinkhole damage mitigation costs; none occurs at the Federal level. Many states discuss the karst hazard in their State hazard mitigation plans, but seldom include detailed reports of subsidence incidents or their mitigation costs. Most State highway departments do not differentiate karst subsidence or sinkhole collapse from other road repair costs. Amassing of these data would raise the estimated annual cost considerably. Information from insurance organizations about sinkhole damage claims and payouts is also not readily available. Currently there is no agency with a mandate for developing such data. If a more realistic estimate could be made, it would illuminate the national scope of this hazard and make comparison with costs of other natural hazards more realistic.

  16. Estimation in a semi-Markov transformation model

    PubMed Central

    Dabrowska, Dorota M.

    2012-01-01

    Multi-state models provide a common tool for analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe evolution of a disease and assume that a patient may move among a finite number of states representing different phases in the disease progression. Several authors developed extensions of the proportional hazards model for analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583

  17. Depression and incident dementia. An 8-year population-based prospective study.

    PubMed

    Luppa, Melanie; Luck, Tobias; Ritschel, Franziska; Angermeyer, Matthias C; Villringer, Arno; Riedel-Heller, Steffi G

    2013-01-01

    The aim of the study was to investigate the impact of depression (categorical diagnosis; major depression, MD) and depressive symptoms (dimensional diagnosis and symptom patterns) on incident dementia in the German general population. Within the Leipzig Longitudinal Study of the Aged (LEILA 75+), a representative sample of 1,265 individuals aged 75 years and older were interviewed every 1.5 years over 8 years (mean observation time 4.3 years; mean number of visits 4.2). Cox proportional hazards and binary logistic regressions were used to estimate the effect of baseline depression and depressive symptoms on incident dementia. The incidence of dementia was 48 per 1,000 person-years (95% confidence interval (CI) 45-51). Depressive symptoms (hazard ratio HR 1.03, 95% CI 1.01-1.05), and in particular mood-related symptoms (HR 1.08, 95% CI 1.03-1.14), showed a significant impact on the incidence of dementia only in univariate analysis, not after adjustment for cognitive and functional impairment. MD showed a significant impact on the incidence of dementia only in the Cox proportional hazards regression, not in the binary logistic regression models. Thus, the present study, using different diagnostic measures of depression, found no clear significant association between depression and incident dementia. Further in-depth investigation would help to understand the nature of depression in the context of incident dementia.

  18. Seismic hazard maps for Haiti

    USGS Publications Warehouse

    Frankel, Arthur; Harmsen, Stephen; Mueller, Charles; Calais, Eric; Haase, Jennifer

    2011-01-01

    We have produced probabilistic seismic hazard maps of Haiti for peak ground acceleration and response spectral accelerations that include the hazard from the major crustal faults, subduction zones, and background earthquakes. The hazard from the Enriquillo-Plantain Garden, Septentrional, and Matheux-Neiba fault zones was estimated using fault slip rates determined from GPS measurements. The hazard from the subduction zones along the northern and southeastern coasts of Hispaniola was calculated from slip rates derived from GPS data and the overall plate motion. Hazard maps were made for a firm-rock site condition and for a grid of shallow shear-wave velocities estimated from topographic slope. The maps show substantial hazard throughout Haiti, with the highest hazard in Haiti along the Enriquillo-Plantain Garden and Septentrional fault zones. The Matheux-Neiba Fault exhibits high hazard in the maps for 2% probability of exceedance in 50 years, although its slip rate is poorly constrained.

  19. Experimental Detection and Characterization of Void using Time-Domain Reflection Wave

    NASA Astrophysics Data System (ADS)

    Zahari, M. N. H.; Madun, A.; Dahlan, S. H.; Joret, A.; Zainal Abidin, M. H.; Mohammad, A. H.; Omar, A. H.

    2018-04-01

    Recent technological advances in engineering have brought significant improvements in performance and precision. One of those improvements is in geophysical studies for underground detection. The reflection method has been demonstrated in previous studies to be able to detect and locate subsurface anomalies, including voids. Conventional methods involve field testing over limited areas only, which may leave void positions undiscovered. Problems arise when voids are not recognised at an early stage, causing hazards and increased costs and potentially leading to serious accidents and structural damage. Therefore, to achieve greater certainty in site investigation, a dynamic approach needs to be implemented. To better estimate and characterize the anomaly signal, an air-filled void was modelled in experimental testing at the site. Robust, low-cost detection and characterization of voids using the reflection method is proposed to improve void detectability and characterization. The results show two-dimensional and three-dimensional analyses of the void based on reflection data, with a P-wave velocity of 454.54 m/s.
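
    The depth of a reflector such as the top of an air-filled void follows from the two-way travel time and the wave velocity, z = v t / 2. Using the P-wave velocity reported above and an assumed echo time for illustration:

```python
# Basic relation behind time-domain reflection sounding: a reflector at depth z
# produces an echo at two-way travel time t = 2 z / v. The echo time below is
# an assumed value for illustration; the velocity is the one quoted in the abstract.
v_p = 454.54          # P-wave velocity [m/s], from the abstract
t_echo = 0.0132       # assumed two-way travel time of the reflected arrival [s]
depth = v_p * t_echo / 2.0
print(f"estimated reflector depth ~ {depth:.2f} m")
```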

  20. The contribution of synchrotron X-ray computed microtomography to understanding volcanic processes.

    PubMed

    Polacci, Margherita; Mancini, Lucia; Baker, Don R

    2010-03-01

    A series of computed microtomography experiments is reported, performed using a third-generation synchrotron radiation source on volcanic rocks from various active hazardous volcanoes in Italy and other volcanic areas of the world. The applied technique allowed the internal structure of the investigated material to be accurately imaged at the micrometre scale, three-dimensional views of the investigated samples to be produced, and three-dimensional quantitative measurements of textural features to be made. The geometry of the vesicle (gas-filled void) network in volcanic products of both basaltic and trachytic compositions was a particular focus, as vesicle textures are directly linked to the dynamics of volcano degassing. This investigation provided novel insights into modes of gas exsolution, transport and loss in magmas that were not recognized in previous studies using solely conventional two-dimensional imaging techniques. The results of this study are important for understanding the behaviour of volcanoes and can be combined with other geoscience disciplines to forecast their future activity.

  1. Kriging and local polynomial methods for blending satellite-derived and gauge precipitation estimates to support hydrologic early warning systems

    USGS Publications Warehouse

    Verdin, Andrew; Funk, Christopher C.; Rajagopalan, Balaji; Kleiber, William

    2016-01-01

    Robust estimates of precipitation in space and time are important for efficient natural resource management and for mitigating natural hazards. This is particularly true in regions with developing infrastructure and regions that are frequently exposed to extreme events. Gauge observations of rainfall are sparse but capture the precipitation process with high fidelity. Due to its high resolution and complete spatial coverage, satellite-derived rainfall data are an attractive alternative in data-sparse regions and are often used to support hydrometeorological early warning systems. Satellite-derived precipitation data, however, tend to underrepresent extreme precipitation events. Thus, it is often desirable to blend spatially extensive satellite-derived rainfall estimates with high-fidelity rain gauge observations to obtain more accurate precipitation estimates. In this research, we use two different methods, namely, ordinary kriging and k-nearest neighbor local polynomials, to blend rain gauge observations with the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates in data-sparse Central America and Colombia. The utility of these methods in producing blended precipitation estimates at pentadal (five-day) and monthly time scales is demonstrated. We find that these blending methods significantly improve the satellite-derived estimates and are competitive in their ability to capture extreme precipitation.
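
    A minimal ordinary kriging sketch with an assumed exponential semivariogram is given below to illustrate how sparse gauge observations are interpolated to an unsampled location. The variogram parameters, coordinates, and rainfall values are placeholders, and the study's actual blending with the satellite field (e.g. as a background or covariate) is omitted.

```python
# Ordinary kriging sketch with an assumed exponential semivariogram. All
# parameters, gauge locations, and rainfall values are placeholders; the
# satellite-gauge blending step of the study is not represented here.
import numpy as np

def exp_variogram(h, sill=1.0, range_km=150.0, nugget=0.05):
    """Assumed exponential semivariogram (placeholder parameters)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-np.asarray(h) / range_km))

def ordinary_kriging(xy_obs, z_obs, xy_target):
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    d_tgt = np.linalg.norm(xy_obs - xy_target, axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d_obs)
    np.fill_diagonal(A[:n, :n], 0.0)      # gamma(0) = 0, so the predictor honors the data
    A[n, n] = 0.0                         # Lagrange-multiplier corner of the OK system
    b = np.append(exp_variogram(d_tgt), 1.0)
    w = np.linalg.solve(A, b)             # kriging weights plus Lagrange multiplier
    return float(w[:n] @ z_obs)

# Hypothetical gauge locations (km) and pentadal rainfall totals (mm).
gauges_xy = np.array([[0.0, 0.0], [80.0, 10.0], [40.0, 60.0], [120.0, 90.0]])
gauges_mm = np.array([12.0, 30.0, 22.0, 5.0])
target_xy = np.array([60.0, 40.0])
print("kriged estimate at target:",
      round(ordinary_kriging(gauges_xy, gauges_mm, target_xy), 1), "mm")
```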

  2. Assessment of turbulent flow effects on the vessel wall using four-dimensional flow MRI.

    PubMed

    Ziegler, Magnus; Lantz, Jonas; Ebbers, Tino; Dyverfeldt, Petter

    2017-06-01

    To explore the use of MR-estimated turbulence quantities for the assessment of turbulent flow effects on the vessel wall. Numerical velocity data for two patient-derived models were obtained using computational fluid dynamics (CFD) for two physiological flow rates. Four-dimensional (4D) flow MRI measurements were simulated at three different spatial resolutions and used to investigate the estimation of turbulent wall shear stress (tWSS) using the intravoxel standard deviation (IVSD) of velocity and the turbulent kinetic energy (TKE) estimated near the vessel wall. Accurate estimation of tWSS using the IVSD is limited by the spatial resolution achievable with 4D flow MRI. TKE estimated near the wall has a strong linear relationship to the tWSS (mean R2 = 0.84). Near-wall TKE estimates from MR simulations have good agreement with the CFD-derived ground truth (mean R2 = 0.90). Maps of near-wall TKE have strong visual correspondence to tWSS. Near-wall estimation of TKE permits assessment of relative maps of tWSS, but direct estimation of tWSS is challenging due to limitations in spatial resolution. Assessment of tWSS and near-wall TKE may open new avenues for analysis of different pathologies. Magn Reson Med 77:2310-2319, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
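
    The turbulent kinetic energy referred to above is computed from the three velocity standard deviations that 4D flow MRI estimates through the IVSD. A minimal sketch with assumed near-wall values:

```python
# Turbulent kinetic energy per unit volume from the three velocity standard
# deviations (the quantities 4D flow MRI estimates via the intravoxel standard
# deviation). The numerical values below are assumed for illustration only.
rho_blood = 1060.0                              # blood density [kg/m^3]
sigma_u, sigma_v, sigma_w = 0.15, 0.10, 0.08    # near-wall velocity std devs [m/s], assumed

tke = 0.5 * rho_blood * (sigma_u**2 + sigma_v**2 + sigma_w**2)   # [J/m^3 = Pa]
print(f"near-wall TKE ~ {tke:.1f} Pa")
```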

  3. Two-dimensional grid-free compressive beamforming.

    PubMed

    Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli

    2017-08-01

    Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes DOAs of sources to lie on a grid. Its performance degrades due to basis mismatch when the assumption is not satisfied. To overcome this limitation for the measurement with plane microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite programming is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on alternating direction method of multipliers is presented to solve the positive semidefinite programming. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.

  4. Zeldovich pancakes in observational data are cold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brinckmann, Thejs; Lindholmer, Mikkel; Hansen, Steen

    The present day universe consists of galaxies, galaxy clusters, one-dimensional filaments and two-dimensional sheets or pancakes, all of which combine to form the cosmic web. The so-called "Zeldovich pancakes" are very difficult to observe, because their overdensity is only slightly greater than the average density of the universe. Falco et al. [1] presented a method to identify Zeldovich pancakes in observational data, and these were used as a tool for estimating the mass of galaxy clusters. Here we expand and refine that observational detection method. We study two pancakes on scales of 10 Mpc, identified from spectroscopically observed galaxies near the Coma cluster, and compare with twenty numerical pancakes. We find that the observed structures have velocity dispersions of about 100 km/sec, which is relatively low compared to typical groups and filaments. These velocity dispersions are consistent with those found for the numerical pancakes. We also confirm that the identified structures are in fact two-dimensional structures. Finally, we estimate the stellar to total mass ratio of the observational pancakes to be 2 × 10^-4, within one order of magnitude, which is smaller than that of clusters of galaxies.

  5. Job Loss, Unemployment and the Incidence of Hazardous Drinking during the Late 2000s Recession in Europe among Adults Aged 50-64 Years.

    PubMed

    Bosque-Prous, Marina; Espelt, Albert; Sordo, Luis; Guitart, Anna M; Brugal, M Teresa; Bravo, Maria J

    2015-01-01

    To estimate the incidence of hazardous drinking in middle-aged people during an economic recession and ascertain whether individual job loss and contextual changes in unemployment influence the incidence rate in that period. Longitudinal study based on two waves of the SHARE project (Survey of Health, Ageing and Retirement in Europe). Individuals aged 50-64 years from 11 European countries, who were not hazardous drinkers at baseline (n = 7,615), were selected for this study. We estimated the cumulative incidence of hazardous drinking (≥40g and ≥20g of pure alcohol on average in men and women, respectively) between 2006 and 2012. Furthermore, in the statistical analysis, multilevel Poisson regression models with robust variance were fitted to obtain risk ratios (RR) and their 95% confidence intervals (95%CI). Over a 6-year period, 505 subjects became hazardous drinkers, with a cumulative incidence of 6.6 per 100 persons between 2006 and 2012 (95%CI:6.1-7.2). Age [RR = 1.02 (95%CI:1.00-1.04)] and becoming unemployed [RR = 1.55 (95%CI:1.08-2.23)] were independently associated with higher risk of becoming a hazardous drinker. Conversely, having poorer self-perceived health was associated with lower risk of becoming a hazardous drinker [RR = 0.75 (95%CI:0.60-0.95)]. At country-level, an increase in the unemployment rate during the study period [RR = 1.32 (95%CI:1.17-1.50)] and greater increases in the household disposable income [RR = 0.97 (95%CI:0.95-0.99)] were associated with risk of becoming a hazardous drinker. Job loss among middle-aged individuals during the economic recession was positively associated with becoming a hazardous drinker. Changes in country-level variables were also related to this drinking pattern.

  6. Influence of Crown Biomass Estimators and Distribution on Canopy Fuel Characteristics in Ponderosa Pine Stands of the Black Hills

    Treesearch

    Tara Keyser; Frederick Smith

    2009-01-01

    Two determinants of crown fire hazard are canopy bulk density (CBD) and canopy base height (CBH). The Fire and Fuels Extension to the Forest Vegetation Simulator (FFE-FVS) is a model that predicts CBD and CBH. Currently, FFE-FVS accounts for neither geographic variation in tree allometries nor the nonuniform distribution of crown mass when one is estimating CBH and CBD...

  7. Evaluation and operationalization of a novel forest detrainment modeling approach for computational snow avalanche simulation

    NASA Astrophysics Data System (ADS)

    Teich, M.; Feistl, T.; Fischer, J.; Bartelt, P.; Bebi, P.; Christen, M.; Grêt-Regamey, A.

    2013-12-01

    Two-dimensional avalanche simulation software operating in three-dimensional terrain is widely used for hazard zoning and engineering to predict runout distances and impact pressures of snow avalanche events. Mountain forests are an effective biological protection measure; however, the protective capacity of forests to decelerate or even to stop avalanches that start within forested areas or directly above the treeline is seldom considered in this context. In particular, runout distances of small- to medium-scale avalanches are strongly influenced by the structural conditions of forests in the avalanche path. This varying decelerating effect has rarely been addressed or implemented in avalanche simulation. We present an evaluation and operationalization of a novel forest detrainment modeling approach implemented in the avalanche simulation software RAMMS. The new approach accounts for the effect of forests in the avalanche path by detraining mass, which leads to a deceleration and runout shortening of avalanches. The extracted avalanche mass caught behind trees stops immediately and, therefore, is instantly subtracted from the flow, and the momentum of the stopped mass is removed from the total momentum of the avalanche flow. This relationship is parameterized by the empirical detrainment coefficient K [Pa], which accounts for the braking power of different forest types per unit area. To define K as a function of specific forest characteristics, we simulated 40 well-documented small- to medium-scale avalanches, which released in and ran through forests, with varying K-values. Manually comparing two-dimensional simulation results with one-dimensional field observations for a large number of avalanche events and simulations is, however, time-consuming and rather subjective. In order to process simulation results in a comprehensive and standardized way, we used a recently developed automatic evaluation and comparison method that defines runout distances based on a pressure-based runout indicator in an avalanche-path-dependent coordinate system. Statistically analyzing and comparing observed and simulated runout distances revealed values for K suitable to simulate the combined influence of four forest characteristics on avalanche runout: forest type, crown coverage, vertical structure and surface roughness; e.g., values for K were higher for dense spruce and mixed spruce-beech forests compared to open larch forests at the upper treeline. Considering forest structural conditions within avalanche simulation will considerably improve current applications of avalanche simulation tools in mountain forest and natural hazard management. Furthermore, we show that an objective and standardized evaluation of two-dimensional simulation results is essential for a successful evaluation and further calibration of avalanche models in general.

  8. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    PubMed

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision making. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, few methods have been proposed in the literature for estimating the determinant of a high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating the high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of a high-dimensional covariance matrix. Finally, from the perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
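
    The abstract above compares eight estimators without prescribing one; as a hedged stand-in, the sketch below contrasts the log-determinant of the raw sample covariance with that of a Ledoit-Wolf shrinkage estimate when the dimension is close to the sample size. The simulated data and the choice of Ledoit-Wolf are assumptions for illustration only.

    ```python
    # Sketch: log-determinant of a high-dimensional covariance matrix, comparing
    # the raw sample covariance with a Ledoit-Wolf shrinkage estimate.
    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(0)
    n, p = 120, 100                      # sample size close to the dimension
    X = rng.standard_normal((n, p))      # true covariance = identity, log|Sigma| = 0

    sample_cov = np.cov(X, rowvar=False)
    _, logdet_sample = np.linalg.slogdet(sample_cov)   # unstable when p ~ n

    lw_cov = LedoitWolf().fit(X).covariance_
    _, logdet_lw = np.linalg.slogdet(lw_cov)

    print("sample covariance log-det:", logdet_sample)  # typically far from 0
    print("Ledoit-Wolf log-det:      ", logdet_lw)      # much closer to 0
    ```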

  9. Hazardous waste management and weight-based indicators--the case of Haifa Metropolis.

    PubMed

    Elimelech, E; Ayalon, O; Flicstein, B

    2011-01-30

    The quantity control of hazardous waste in Israel relies primarily on the Environmental Services Company (ESC) reports. With limited management tools, the Ministry of Environmental Protection (MoEP) has no applicable methodology to confirm or monitor the actual amounts of hazardous waste produced by various industrial sectors. The main goal of this research was to develop a method for estimating the amounts of hazardous waste produced by various sectors. In order to achieve this goal, sector-specific indicators were tested on three hazardous-waste-producing sectors in the Haifa Metropolis: petroleum refineries, dry cleaners, and public hospitals. The findings reveal poor practice of hazardous waste management in the dry cleaning sector and in the public hospitals sector. Large discrepancies were found in the dry cleaning sector between the quantities of hazardous waste reported and the corresponding indicator estimates. Furthermore, a lack of documentation on hospitals' pharmaceutical and chemical waste production volume was observed. Only in the case of the petroleum refineries was the reported amount consistent with the estimate. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. 2D VARIABLY SATURATED FLOWS: PHYSICAL SCALING AND BAYESIAN ESTIMATION

    EPA Science Inventory

    A novel dimensionless formulation for water flow in two-dimensional variably saturated media is presented. It shows that scaling physical systems requires conservation of the ratio between capillary forces and gravity forces. A direct result of this finding is that for two phys...

  11. New Hybrid Algorithms for Estimating Tree Stem Diameters at Breast Height Using a Two Dimensional Terrestrial Laser Scanner

    PubMed Central

    Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli

    2015-01-01

    In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained, which are optimized by an arithmetic means method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, the proposed method significantly improves the accuracy of tree diameter estimation and effectively reduces the calculation time. Moreover, the experimental results indicate that this method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
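
    A hedged sketch of the general idea (not the authors' exact hybrid algorithm): fit a circle to a 2-D trunk point cluster with an algebraic initial guess, then refine it with the Levenberg-Marquardt method via scipy; the synthetic half-arc point cloud stands in for a 2DTLS scan.

    ```python
    # Sketch: estimate trunk diameter (DBH) by circle fitting a 2-D point cluster,
    # algebraic (Kasa) initial guess refined by Levenberg-Marquardt.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    theta = rng.uniform(0, np.pi, 200)                 # scanner sees ~half the trunk
    true_c, true_r = np.array([2.0, 5.0]), 0.15        # centre [m], radius [m]
    pts = true_c + true_r * np.c_[np.cos(theta), np.sin(theta)]
    pts += rng.normal(0, 0.005, pts.shape)             # range noise

    # Algebraic fit: solve x^2 + y^2 = 2a x + 2b y + c in a least-squares sense.
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    (a0, b0, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
    r0 = np.sqrt(c0 + a0 ** 2 + b0 ** 2)

    # Geometric refinement with Levenberg-Marquardt.
    def residuals(p):
        xc, yc, r = p
        return np.hypot(pts[:, 0] - xc, pts[:, 1] - yc) - r

    fit = least_squares(residuals, x0=[a0, b0, r0], method="lm")
    xc, yc, r = fit.x
    print(f"estimated DBH: {2 * r:.3f} m (true {2 * true_r:.3f} m)")
    ```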

  12. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing-oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low SNR threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the Least Mean Square (LMS)-based adaptive filter's performance for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection, leading to more accurate estimation of wind speed and thus a better assessment of the windshear hazard.
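
    To make the adaptive-filtering idea concrete, the following is a hedged sketch of a basic LMS canceller that predicts and subtracts a narrowband clutter-like component from a noisy series; the synthetic signals, filter order, and step size are illustrative assumptions rather than the dissertation's radar configuration.

    ```python
    # Sketch: LMS adaptive filter used as a one-step predictor to suppress a
    # strong sinusoidal "clutter" component and expose a weak broadband signal.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 4000
    t = np.arange(n)
    clutter = 5.0 * np.sin(2 * np.pi * 0.05 * t)       # strong narrowband clutter
    weather = 0.3 * rng.standard_normal(n)             # weak signal of interest
    x = clutter + weather

    order, mu = 16, 1e-4                               # filter length, step size
    w = np.zeros(order)
    err = np.zeros(n)
    for k in range(order, n):
        u = x[k - order:k][::-1]                       # most recent samples first
        y = w @ u                                      # predict the predictable (clutter) part
        err[k] = x[k] - y                              # residual ~ weather + noise
        w += 2 * mu * err[k] * u                       # LMS weight update

    print("input power:   ", np.var(x[order:]))
    print("residual power:", np.var(err[order:]))      # clutter largely removed
    ```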

  13. An alternative to FASTSIM for tangential solution of the wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Sichani, Matin Sh.; Enblom, Roger; Berg, Mats

    2016-06-01

    In most rail vehicle dynamics simulation packages, the tangential solution of the wheel-rail contact is obtained by means of Kalker's FASTSIM algorithm. While 5-25% error is expected for creep force estimation, the errors of shear stress distribution, needed for wheel-rail damage analysis, may rise above 30% due to the parabolic traction bound. Therefore, a novel algorithm named FaStrip is proposed as an alternative to FASTSIM. It is based on the strip theory, which extends the two-dimensional rolling contact solution to three-dimensional contacts. To form FaStrip, the original strip theory is amended to obtain accurate estimations for any contact ellipse size, and it is combined with a numerical algorithm to handle spin. The comparison between the two algorithms shows that using FaStrip improves the accuracy of the estimated shear stress distribution and the creep force estimation in all studied cases. In combined lateral creepage and spin cases, for instance, the error in force estimation is reduced from 18% to less than 2%. The estimation of the slip velocities in the slip zone, needed for wear analysis, is also studied. Since FaStrip is as fast as FASTSIM, it can be an alternative for the tangential solution of the wheel-rail contact in simulation packages.

  14. Multi Hazard Assessment: The Azores Archipelagos (PT) case

    NASA Astrophysics Data System (ADS)

    Aifantopoulou, Dorothea; Boni, Giorgio; Cenci, Luca; Kaskara, Maria; Kontoes, Haris; Papoutsis, Ioannis; Paralikidis, Sideris; Psichogyiou, Christina; Solomos, Stavros; Squicciarino, Giuseppe; Tsouni, Alexia; Xerekakis, Themos

    2016-04-01

    The COPERNICUS EMS Risk & Recovery Mapping (RRM) activity offers services to support efficient design and implementation of mitigation measures and recovery planning based on EO data exploitation. The Azores Archipelagos case was realized in the context of the FWC 259811 Copernicus EMS RRM, and provides potential impact information for a number of natural disasters. The analysis identified population and assets at risk (infrastructures and environment). The risk assessment was based on hazard and vulnerability of structural elements, road network characteristics, etc. Integration of different hazards and risks was accounted for in establishing the necessary first response/first aid infrastructure. EO data (Pleiades and WV-2) were used to establish detailed background information, common for the assessment of the whole of the risks. A qualitative flood hazard level was established through a "Flood Susceptibility Index" that accounts for upstream drainage area and local slope along the drainage network (Manfreda et al. 2014). Indicators, representing different vulnerability typologies, were accounted for. The risk was established through intersecting hazard and vulnerability (risk-specific lookup table). Probabilistic seismic hazard maps (PGA) were obtained by applying the Cornell (1968) methodology as implemented in CRISIS2007 (Ordaz et al. 2007). The approach relied on the identification of potential sources, the assessment of earthquake recurrence and magnitude distribution, the selection of a ground motion model, and the mathematical model to calculate seismic hazard. Lava eruption areas and a volcanic activity-related coefficient were established through available historical data. Lava flow paths and their convergence were estimated by applying a cellular-automata-based Lava Flow Hazard numerical model (Gestur Leó Gislason, 2013). The Landslide Hazard Index of NGI (Norwegian Geotechnical Institute) for heavy rainfall (100-year extreme monthly rainfall) and earthquake (475-year return period) was used. Topography, lithology, soil moisture and LU/LC were also accounted for. Soil erosion risk was assessed through the empirical model RUSLE (Renard et al. 1991b). Rainfall erosivity, topography and vegetation cover are the main parameters which were used for predicting the proneness to soil loss. Expected maximum tsunami wave heights were estimated for a specific earthquake scenario at designated forecast points along the coasts. Deformation at the source was calculated by utilizing the Okada code (Okada, 1985). Tsunami wave generation and propagation are based on the SWAN model (JRC/IPSC modification). To estimate the wave height at the forecast points, the Green's Law function was used (JRC Tsunami Analysis Tool). Storm tracks' historical data indicate a return period of 17/41 years for the H1/H2 hurricane categories, respectively. NOAA WAVEWATCH III model hindcast reanalysis was used to estimate the maximum significant wave height (wind and swell) along the coastline during two major storms. The associated storm-surge risk assessment also accounted for the coastline morphology. Seven empirical (independent) indicators were used to express the erosion susceptibility of the coasts. Each indicator is evaluated according to a semi-quantitative score that represents low, medium, and high levels of erosion risk or impact. The estimation of the coastal erosion hazard was derived by aggregating the indicators on a grid scale.
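
    One component above, the RUSLE soil-loss estimate, is simple enough to show directly; the sketch below multiplies the standard RUSLE factors A = R·K·LS·C·P on a toy raster. All factor values are invented placeholders, not the Azores inputs.

    ```python
    # Sketch: RUSLE soil-loss estimate A = R * K * LS * C * P on a toy raster.
    import numpy as np

    rng = np.random.default_rng(3)
    shape = (4, 4)                              # tiny grid for illustration
    R = np.full(shape, 900.0)                   # rainfall erosivity
    K = rng.uniform(0.02, 0.05, shape)          # soil erodibility
    LS = rng.uniform(0.5, 6.0, shape)           # slope length-steepness factor
    C = rng.uniform(0.01, 0.3, shape)           # cover-management factor
    P = np.ones(shape)                          # support-practice factor (none)

    A = R * K * LS * C * P                      # annual soil loss [t/ha/yr]
    print(f"mean annual soil loss: {A.mean():.1f} t/ha/yr")
    print(f"high-risk cells (> 10 t/ha/yr): {int((A > 10).sum())}")
    ```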

  15. Remote rainfall sensing for landslide hazard analysis

    USGS Publications Warehouse

    Wieczorek, Gerald F.; McWreath, Harry; Davenport, Clay

    2001-01-01

    Methods of assessing landslide hazards and providing warnings are becoming more advanced as remote sensing of rainfall provides more detailed temporal and spatial data on rainfall distribution. Two recent landslide disasters are examined noting the potential for using remotely sensed rainfall data for landslide hazard analysis. For the June 27, 1995, storm in Madison County, Virginia, USA, National Weather Service WSR-88D Doppler radar provided rainfall estimates based on a relation between cloud reflectivity and moisture content on a 1 sq. km. resolution every 6 minutes. Ground-based measurements of rainfall intensity and precipitation total, in addition to landslide timing and distribution, were compared with the radar-derived rainfall data. For the December 14-16, 1999, storm in Vargas State, Venezuela, infrared sensing from the GOES-8 satellite of cloud top temperatures provided the basis for NOAA/NESDIS rainfall estimates on a 16 sq. km. resolution every 30 minutes. These rainfall estimates were also compared with ground-based measurements of rainfall and landslide distribution. In both examples, the remotely sensed data either overestimated or underestimated ground-based values by up to a factor of 2. The factors that influenced the accuracy of rainfall data include spatial registration and map projection, as well as prevailing wind direction, cloud orientation, and topography.

  16. B-value and slip rate sensitivity analysis for PGA value in Lembang fault and Cimandiri fault area

    NASA Astrophysics Data System (ADS)

    Pratama, Cecep; Ito, Takeo; Meilano, Irwan; Nugraha, Andri Dian

    2017-07-01

    We examine the contribution of slip rate and b-value to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi and Bandung using PSHA (Probabilistic Seismic Hazard Analysis). We observe that crustal faults have the largest influence on the hazard estimate. A Monte Carlo approach has been developed to assess the sensitivity. Uncertainty and the coefficient of variation from the slip rate and b-value in the Lembang and Cimandiri Fault areas have been calculated. We observe that seismic hazard estimates are sensitive to fault slip rate and b-value, with uncertainties of 0.25 g and 0.1-0.2 g, respectively. For specific sites, we found seismic hazard estimates of 0.49 ± 0.13 g with a COV of 27% and 0.39 ± 0.05 g with a COV of 13% for Sukabumi and Bandung, respectively.
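
    As a hedged sketch of the Monte Carlo sensitivity idea (not the authors' PSHA implementation), the code below propagates uncertainty in the Gutenberg-Richter b-value and the fault slip rate into the annual rate of exceeding a magnitude threshold and reports the coefficient of variation; the distributions, the linear slip-rate scaling, and all numbers are illustrative assumptions.

    ```python
    # Sketch: Monte Carlo sensitivity of an earthquake-rate estimate to slip-rate
    # and b-value uncertainty, using a Gutenberg-Richter recurrence model.
    import numpy as np

    rng = np.random.default_rng(4)
    n_sim = 20000

    # Illustrative assumptions (not the Lembang/Cimandiri inputs):
    slip_rate = rng.normal(2.0, 0.5, n_sim).clip(0.1)      # mm/yr
    b_value = rng.normal(1.0, 0.1, n_sim)
    m_min, m_ref = 5.0, 6.5

    # Crude assumption: the rate of M >= m_min events scales linearly with slip
    # rate (2 mm/yr taken as 0.05 events/yr), then Gutenberg-Richter scaling.
    rate_mmin = 0.05 * slip_rate / 2.0
    rate_mref = rate_mmin * 10.0 ** (-b_value * (m_ref - m_min))

    mean, std = rate_mref.mean(), rate_mref.std()
    print(f"rate of M>={m_ref}: {mean:.4e} /yr, COV = {std / mean:.0%}")
    print(f"simulations exceeding the 1/500-yr rate: {(rate_mref > 1 / 500).mean():.0%}")
    ```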

  17. Comparison of laser anemometer measurements and theory in an annular turbine cascade with experimental accuracy determined by parameter estimation

    NASA Technical Reports Server (NTRS)

    Goldman, L. J.; Seasholtz, R. G.

    1982-01-01

    Experimental measurements of the velocity components in the blade-to-blade (axial-tangential) plane were obtained in an axial-flow turbine stator passage and were compared with calculations from three turbomachinery computer programs. The theoretical results were calculated from a quasi three dimensional inviscid code, a three dimensional inviscid code, and a three dimensional viscous code. Parameter estimation techniques and a particle dynamics calculation were used to assess the accuracy of the laser measurements, providing a rational basis for comparison of the experimental and theoretical results. The general agreement of the experimental data with the results from the two inviscid computer codes indicates the usefulness of these calculation procedures for turbomachinery blading. The comparison with the viscous code, while generally reasonable, was not as good as for the inviscid codes.

  18. Earthquake Hazard Mitigation Using a Systems Analysis Approach to Risk Assessment

    NASA Astrophysics Data System (ADS)

    Legg, M.; Eguchi, R. T.

    2015-12-01

    The earthquake hazard mitigation goal is to reduce losses due to severe natural events. The first step is to conduct a Seismic Risk Assessment consisting of (1) hazard estimation, (2) vulnerability analysis, and (3) exposure compilation. Seismic hazards include ground deformation, shaking, and inundation. The hazard estimation may be probabilistic or deterministic. Probabilistic Seismic Hazard Assessment (PSHA) is generally applied to site-specific risk assessments, but may involve large areas as in a National Seismic Hazard Mapping program. Deterministic hazard assessments are needed for geographically distributed exposure such as lifelines (infrastructure), but may be important for large communities. Vulnerability evaluation includes quantification of fragility for construction or components including personnel. Exposure represents the existing or planned construction, facilities, infrastructure, and population in the affected area. Risk (expected loss) is the product of the quantified hazard, vulnerability (damage algorithm), and exposure, which may be used to prepare emergency response plans, retrofit existing construction, or guide community planning to avoid hazards. The risk estimate provides data needed to acquire earthquake insurance to assist with effective recovery following a severe event. Earthquake Scenarios used in Deterministic Risk Assessments provide detailed information on where hazards may be most severe, which system components are most susceptible to failure, and the combined effects of a severe earthquake on the whole system or community. Casualties (injuries and death) have been the primary factor in defining building codes for seismic-resistant construction. Economic losses may be equally significant factors that can influence proactive hazard mitigation. Large urban earthquakes may produce catastrophic losses due to a cascading of effects often missed in PSHA. Economic collapse may ensue if damaged workplaces, disruption of utilities, and resultant loss of income produce widespread default on payments. With increased computational power and more complete inventories of exposure, Monte Carlo methods may provide more accurate estimation of severe losses and the opportunity to increase resilience of vulnerable systems and communities.
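
    The risk definition used above (expected loss as the product of hazard, vulnerability, and exposure) can be written down directly; the sketch below applies it to a toy asset inventory, with all probabilities, damage ratios, and values invented.

    ```python
    # Sketch: expected annual loss = hazard probability x vulnerability (damage
    # ratio) x exposure (replacement value), summed over a toy asset inventory.
    assets = [
        # (name,           annual hazard prob., damage ratio given event, value in $M)
        ("hospital",       0.01,                0.30,                     120.0),
        ("water pipeline", 0.02,                0.50,                      40.0),
        ("bridge",         0.01,                0.60,                      75.0),
        ("housing block",  0.01,                0.20,                     200.0),
    ]

    total = 0.0
    for name, p_event, damage_ratio, value in assets:
        expected_loss = p_event * damage_ratio * value
        total += expected_loss
        print(f"{name:<15s} expected annual loss: ${expected_loss:6.2f} M")

    print(f"{'portfolio':<15s} expected annual loss: ${total:6.2f} M")
    ```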

  19. High-speed autofocusing of a cell using diffraction pattern

    NASA Astrophysics Data System (ADS)

    Oku, Hiromasa; Ishikawa, Masatoshi; Theodorus; Hashimoto, Koichi

    2006-05-01

    This paper proposes a new autofocusing method for observing cells under a transmission illumination. The focusing method uses a quick and simple focus estimation technique termed “depth from diffraction,” which is based on a diffraction pattern in a defocused image of a biological specimen. Since this method can estimate the focal position of the specimen from only a single defocused image, it can easily realize high-speed autofocusing. To demonstrate the method, it was applied to continuous focus tracking of a swimming paramecium, in combination with two-dimensional position tracking. Three-dimensional tracking of the paramecium for 70 s was successfully demonstrated.

  20. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses basic survival model estimation to obtain the average predicted failure time of a lamp. The estimate is for a parametric model, the General Composite Hazard Rate Model. The underlying random time variable is modeled with an exponential distribution, which has a constant hazard function, as the baseline. We discuss an example of survival model estimation for a composite hazard function using this exponential baseline. The model is estimated by fitting its parameters through the construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for this type of lamp. By grouping the data into several intervals, computing the average failure value in each interval, and calculating the average failure time of the model for each interval, the p-value obtained from the test result is 0.3296.
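
    For the exponential baseline mentioned above, the estimation step reduces to a one-parameter fit; the sketch below estimates the rate from simulated failure times, derives the survival function, and reports the mean time to failure. The data are simulated and the paper's composite-hazard extension is not reproduced.

    ```python
    # Sketch: exponential survival model (constant hazard) fitted to failure times;
    # the MLE of the rate is n / sum(t), and the mean failure time is 1 / rate.
    import numpy as np

    rng = np.random.default_rng(5)
    true_rate = 1 / 8000.0                         # failures per hour (illustrative)
    t = rng.exponential(1 / true_rate, size=300)   # observed lamp failure times [h]

    rate_hat = len(t) / t.sum()                    # MLE of the exponential rate
    mean_ttf = 1 / rate_hat                        # mean time to failure

    def survival(x):
        """Estimated survival function S(x) = exp(-rate * x)."""
        return np.exp(-rate_hat * x)

    print(f"estimated hazard rate: {rate_hat:.3e} per hour")
    print(f"mean failure time:     {mean_ttf:.0f} hours")
    print(f"P(lamp survives 10000 h) = {survival(10000.0):.2f}")
    ```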

  1. Dispersive estimates for rational symbols and local well-posedness of the nonzero energy NV equation. II

    NASA Astrophysics Data System (ADS)

    Kazeykina, Anna; Muñoz, Claudio

    2018-04-01

    We continue our study on the Cauchy problem for the two-dimensional Novikov-Veselov (NV) equation, integrable via the inverse scattering transform for the two-dimensional Schrödinger operator at a fixed energy parameter. This work is concerned with the more involved case of a positive energy parameter. For the solution of the linearized equation we derive smoothing and Strichartz estimates by combining new estimates for two different frequency regimes, extending our previous results for the negative energy case [18]. The low frequency regime, which our previous result was not able to treat, is studied in detail. At non-low frequencies we also derive improved smoothing estimates with gain of almost one derivative. Then we combine the linear estimates with a Fourier decomposition method and X^{s,b} spaces to obtain local well-posedness of NV at positive energy in H^s, s > 1/2. Our result implies, in particular, that at least for s > 1/2, NV does not change its behavior from semilinear to quasilinear as energy changes sign, in contrast to the closely related Kadomtsev-Petviashvili equations. As a complement to our LWP results, we also provide some new explicit solutions of NV at zero energy, generalizations of the lump solutions, which exhibit new and nonstandard long time behavior. In particular, these solutions blow up in infinite time in L^2.

  2. Estimating the functional dimensionality of neural representations.

    PubMed

    Ahlheim, Christiane; Love, Bradley C

    2018-06-07

    Recent advances in multivariate fMRI analysis stress the importance of information inherent to voxel patterns. Key to interpreting these patterns is estimating the underlying dimensionality of neural representations. Dimensions may correspond to psychological dimensions, such as length and orientation, or involve other coding schemes. Unfortunately, the noise structure of fMRI data inflates dimensionality estimates and thus makes it difficult to assess the true underlying dimensionality of a pattern. To address this challenge, we developed a novel approach to identify brain regions that carry reliable task-modulated signal and to derive an estimate of the signal's functional dimensionality. We combined singular value decomposition with cross-validation to find the best low-dimensional projection of a pattern of voxel responses at the single-subject level. Goodness of the low-dimensional reconstruction is measured as the Pearson correlation with a test set, which allows testing for the significance of the low-dimensional reconstruction across participants. Using hierarchical Bayesian modeling, we derive the best estimate and associated uncertainty of the underlying dimensionality across participants. We validated our method on simulated data of varying underlying dimensionality, showing that the recovered dimensionalities closely match the true dimensionalities. We then applied our method to three published fMRI data sets, all involving processing of visual stimuli. The results highlight three possible applications of estimating the functional dimensionality of neural data. Firstly, it can aid evaluation of model-based analyses by revealing which areas express reliable, task-modulated signal that could be missed by specific models. Secondly, it can reveal functional differences across brain regions. Thirdly, knowing the functional dimensionality allows assessing task-related differences in the complexity of neural patterns. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
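
    A hedged sketch of the SVD-plus-cross-validation idea (the hierarchical Bayesian pooling across participants is omitted): split a voxel-response matrix into two independent noisy measurements, reconstruct the first at each candidate rank, and keep the rank whose reconstruction correlates best with the second; the simulated dimensions are assumptions.

    ```python
    # Sketch: estimate functional dimensionality by combining SVD with
    # cross-validation over an independent test measurement of the same pattern.
    import numpy as np

    rng = np.random.default_rng(6)
    n_conditions, n_voxels, true_rank = 12, 200, 4

    # Shared low-rank signal measured twice with independent noise (train / test).
    signal = rng.standard_normal((n_conditions, true_rank)) @ \
             rng.standard_normal((true_rank, n_voxels))
    train = signal + rng.standard_normal(signal.shape)
    test = signal + rng.standard_normal(signal.shape)

    U, s, Vt = np.linalg.svd(train, full_matrices=False)

    def corr_at_rank(k):
        recon = (U[:, :k] * s[:k]) @ Vt[:k]        # rank-k reconstruction of train
        return np.corrcoef(recon.ravel(), test.ravel())[0, 1]

    scores = [corr_at_rank(k) for k in range(1, n_conditions + 1)]
    best_k = int(np.argmax(scores)) + 1
    print("estimated dimensionality:", best_k, "(true rank:", true_rank, ")")
    ```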

  3. Blanket activation and afterheat for the Compact Reversed-Field Pinch Reactor

    NASA Astrophysics Data System (ADS)

    Davidson, J. W.; Battat, M. E.

    A detailed assessment has been made of the activation and afterheat for a Compact Reversed-Field Pinch Reactor (CRFPR) blanket using a two-dimensional model that included the limiter, the vacuum ducts, and the manifolds and headers for cooling the limiter and the first and second walls. Region-averaged, multigroup fluxes and prompt gamma-ray/neutron heating rates were calculated using the two-dimensional, discrete-ordinates code TRISM. Activation and depletion calculations were performed with the code FORIG using one-group cross sections generated with the TRISM region-averaged fluxes. Afterheat calculations were performed for regions near the plasma, i.e., the limiter, first wall, etc. assuming a 10-day irradiation. Decay heats were computed for decay periods up to 100 minutes. For the activation calculations, the irradiation period was taken to be one year and blanket activity inventories were computed for decay times to 4 x 10 years. These activities were also calculated as the toxicity-weighted biological hazard potential (BHP).

  4. Using multi-level data to estimate the effect of an 'alcogenic' environment on hazardous alcohol consumption in the former Soviet Union.

    PubMed

    Murphy, Adrianna; Roberts, Bayard; Ploubidis, George B; Stickley, Andrew; McKee, Martin

    2014-05-01

    The purpose of this study was to assess whether alcohol-related community characteristics act collectively to influence individual-level alcohol consumption in the former Soviet Union (fSU). Using multi-level data from nine countries in the fSU we conducted a factor analysis of seven alcohol-related community characteristics. The association between any latent factors underlying these characteristics and two measures of hazardous alcohol consumption was then analysed using a population average regression modelling approach. Our factor analysis produced one factor with an eigenvalue >1 (EV=1.28), which explained 94% of the variance. This factor was statistically significantly associated with increased odds of CAGE problem drinking (OR=1.40 (1.08-1.82)). The estimated association with EHD was not statistically significant (OR=1.10 (0.85-1.44)). Our findings suggest that a high number of beer, wine and spirit advertisements and high alcohol outlet density may work together to create an 'alcogenic' environment that encourages hazardous alcohol consumption in the fSU. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Seismic hazards at Kilauea and Mauna Loa volcanoes, Hawaii

    NASA Astrophysics Data System (ADS)

    Klein, Fred W.

    1994-04-01

    A significant seismic hazard exists in south Hawaii from large tectonic earthquakes that can reach magnitude 8 and intensity XII. This paper quantifies the hazard by estimating the horizontal peak ground acceleration (PGA) in south Hawaii which occurs with a 90% probability of not being exceeded during exposure times from 10 to 250 years. The largest earthquakes occur beneath active, unbuttressed and mobile flanks of volcanoes in their shield-building stage. The flanks are compressed and pushed laterally by rift zone intrusions. The largest earthquakes are thus not directly caused by volcanic activity. Historic earthquakes (since 1823) and the best Hawaiian Volcano Observatory catalog (since 1970) under the south side of the island define linear frequency-magnitude distributions that imply average recurrence intervals for M greater than 5.5 earthquakes of 3.4-5 years, for M greater than 7 events of 29-44 years, and for M greater than 8 earthquakes of 120-190 years. These estimated recurrences are compatible with the 107-year interval between the two major April 2, 1868 (M ≈ 7.9) and November 29, 1975 (M = 7.2) earthquakes. Frequency-magnitude distributions define the activity levels of 19 different seismic source zones for probabilistic ground motion estimations. The available measurements of PGA (33 from 7 moderate earthquakes) are insufficient to define a new attenuation curve. We use the Boore et al. (1993) curve shifted upward by a factor of 1.2 to fit Hawaiian data. Amplifications at sites on volcanic ash or unconsolidated soil are about two times those at hard lava sites. On a map for a 50-year exposure time with a 90% probability of not being exceeded, the peak ground accelerations are 1.0 g on Kilauea's and Mauna Loa's mobile south flanks and 0.9 g in the Kaoiki seismic zone. This hazard from strong ground shaking is comparable to that near the San Andreas Fault in California or the subduction zone in the Gulf of Alaska.

  6. A Diffraction Method of Study of Thermal Quasiorder in a Finite Two-Dimensional Harmonic Lattice

    NASA Astrophysics Data System (ADS)

    Aranda, P.; Croset, B.

    1995-09-01

    Due to the non-existence of long-range order, the diffraction peaks of 2D solids are considered to have a power-law shape q_p^{η-2}. Taking into account the finite-size effects and calculating the powder average, we show that this power-law behaviour appears only for high q_p and then for very small intensities. It is therefore quite difficult and hazardous to characterise the quasiorder by using this asymptotic behaviour. Although the shape of the central part of the peak cannot be used to characterise the quasiorder, we show that, for a fairly good resolution, it is possible to determine η using this central part. This determination can be done irrespective of the other details of the system by comparing the peak width to its value at low temperature, i.e., at low values of η. By using two diffraction peaks, we propose the simple relation η(Q_{B_1})/Q_{B_1}^2 = η(Q_{B_2})/Q_{B_2}^2 as a check of the two-dimensional quasiorder.

  7. Exposure to Flood Hazards in Miami and Houston: Are Hispanic Immigrants at Greater Risk than Other Social Groups?

    PubMed Central

    Maldonado, Alejandra; Collins, Timothy W.; Grineski, Sara E.; Chakraborty, Jayajit

    2016-01-01

    Although numerous studies have been conducted on the vulnerability of marginalized groups in the environmental justice (EJ) and hazards fields, analysts have tended to lump people together in broad racial/ethnic categories without regard for substantial within-group heterogeneity. This paper addresses that limitation by examining whether Hispanic immigrants are disproportionately exposed to risks from flood hazards relative to other racial/ethnic groups (including US-born Hispanics), adjusting for relevant covariates. Survey data were collected for 1283 adult householders in the Houston and Miami Metropolitan Statistical Areas (MSAs) and flood risk was estimated using their residential presence/absence within federally-designated 100-year flood zones. Generalized estimating equations (GEE) with binary logistic specifications that adjust for county-level clustering were used to analyze (separately) and compare the Houston (N = 546) and Miami (N = 560) MSAs in order to clarify determinants of household exposure to flood risk. GEE results in Houston indicate that Hispanic immigrants have the greatest likelihood, and non-Hispanic Whites the least likelihood, of residing in a 100-year flood zone. Miami GEE results contrastingly reveal that non-Hispanic Whites have a significantly greater likelihood of residing in a flood zone when compared to Hispanic immigrants. These divergent results suggest that human-flood hazard relationships have been structured differently between the two MSAs, possibly due to the contrasting role that water-based amenities have played in urbanization within the two study areas. Future EJ research and practice should differentiate between Hispanic subgroups based on nativity status and attend to contextual factors influencing environmental risk disparities. PMID:27490561

  9. First Volcanological-Probabilistic Pyroclastic Density Current and Fallout Hazard Map for Campi Flegrei and Somma Vesuvius Volcanoes.

    NASA Astrophysics Data System (ADS)

    Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.

    2005-05-01

    Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and of the evaluation of the probability of each type of event, matrices of input parameters for numerical simulation have been constructed. The multi-dimensional input matrices include the main parameters controlling pyroclast transport and deposition, as well as the set of possible eruptive vents used in the simulation program. Probabilistic hazard maps provide, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events per thousand years are typical of most areas within a range of ca. 10 km around the volcanoes, including Naples. The results provide constraints for the emergency plans in the Neapolitan area.

  10. A post hoc evaluation of a sample size re-estimation in the Secondary Prevention of Small Subcortical Strokes study.

    PubMed

    McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S

    2016-10-01

    The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020: HR (95% confidence interval) 0.92 (0.72, 1.2), p = 0.48; n = 2500: HR (95% confidence interval) 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020: HR (95% confidence interval) 0.81 (0.63, 1.0), p = 0.089; n = 2500: HR (95% confidence interval) 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.
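
    To illustrate the kind of comparison reported above (the SPS3 data themselves are not reproduced here), the sketch below fits a Cox proportional hazards model with the lifelines package on simulated time-to-event data for the full and the truncated cohort and prints the two hazard ratios; the column names and effect size are invented.

    ```python
    # Sketch: compare Cox proportional-hazards estimates for a full cohort (n=3020)
    # versus a smaller cohort (n=2500) on simulated time-to-event data.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(7)
    n = 3020
    treatment = rng.binomial(1, 0.5, n)             # 0 = mono, 1 = dual (hypothetical)
    baseline_hazard = 0.03
    time = rng.exponential(1 / (baseline_hazard * np.exp(-0.1 * treatment)))
    event = time < 8.0                              # administrative censoring at 8 years
    df = pd.DataFrame({"time": np.minimum(time, 8.0),
                       "event": event.astype(int),
                       "treatment": treatment})

    for n_used in (2500, 3020):
        cph = CoxPHFitter().fit(df.iloc[:n_used], duration_col="time", event_col="event")
        hr = np.exp(cph.params_["treatment"])
        lo, hi = np.exp(cph.confidence_intervals_.loc["treatment"])
        print(f"n={n_used}: HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```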

  11. Ignition and combustion characteristics of metallized propellants, phase 2

    NASA Technical Reports Server (NTRS)

    Mueller, D. C.; Turns, S. R.

    1994-01-01

    Experimental and analytical investigations focusing on aluminum/hydrocarbon gel droplet secondary atomization and its effects on gel-fueled rocket engine performance are being conducted. A single laser sheet sizing/velocimetry diagnostic technique, which should eliminate sizing bias in the data collection process, has been designed and constructed to overcome limitations of the two-color forward-scatter technique used in previous work. Calibration of this system is in progress and the data acquisition/validation code is being written. Narrow-band measurements of radiant emission, discussed in previous reports, will be used to determine if aluminum ignition has occurred in a gel droplet. A one-dimensional model of a gel-fueled rocket combustion chamber, described in earlier reports, has been exercised in conjunction with a two-dimensional, two-phase nozzle code to predict the performance of an aluminum/hydrocarbon fueled engine. Estimated secondary atomization effects on propellant burnout distance, condensed particle radiation losses to the chamber walls, and nozzle two phase flow losses are also investigated. Calculations indicate that only modest secondary atomization is required to significantly reduce propellant burnout distances, aluminum oxide residual size, and radiation heat losses. Radiation losses equal to approximately 2-13 percent of the energy released during combustion were estimated, depending on secondary atomization intensity. A two-dimensional, two-phase nozzle code was employed to estimate radiation and nozzle two phase flow effects on overall engine performance. Radiation losses yielded a one percent decrease in engine Isp. Results also indicate that secondary atomization may have less effect on two-phase losses than it does on propellant burnout distance and no effect if oxide particle coagulation and shear induced droplet breakup govern oxide particle size. Engine Isp was found to decrease from 337.4 to 293.7 seconds as gel aluminum mass loading was varied from 0-70 wt percent. Engine Isp efficiencies, accounting for radiation and two phase flow effects, on the order of 0.946 were calculated for a 60 wt percent gel, assuming a fragmentation ratio of five.

  12. Proportional exponentiated link transformed hazards (ELTH) models for discrete time survival data with application

    PubMed Central

    Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook

    2015-01-01

    Discrete survival data are routinely encountered in many fields of study including behavioral science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
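
    As a hedged illustration of discrete-time hazard modelling with an explicit link choice (a plain complementary log-log link, not the exponentiated-link ELTH family of the paper), the sketch below expands simulated discrete survival times into person-period records and fits a binomial GLM; the data and covariate are invented.

    ```python
    # Sketch: discrete-time survival analysis by expanding subjects into
    # person-period rows and fitting a binomial GLM with a cloglog link.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    n, max_period = 500, 10
    x = rng.binomial(1, 0.5, n)                      # a single binary covariate

    rows = []
    for i in range(n):
        for t in range(1, max_period + 1):
            # True discrete hazard rises with time and with x (illustrative).
            h = 1 - np.exp(-np.exp(-3.0 + 0.25 * t + 0.7 * x[i]))
            event = rng.random() < h
            rows.append({"id": i, "period": t, "x": x[i], "event": int(event)})
            if event:
                break                                # stop follow-up at the event
    pp = pd.DataFrame(rows)

    model = smf.glm("event ~ period + x", data=pp,
                    family=sm.families.Binomial(link=sm.families.links.CLogLog()))
    res = model.fit()
    print(res.params)                                # coefficients on the cloglog scale
    ```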

  13. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space-resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  14. A time-dependent probabilistic seismic-hazard model for California

    USGS Publications Warehouse

    Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.

    2000-01-01

    For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.

  15. Going beyond the flood insurance rate map: insights from flood hazard map co-production

    NASA Astrophysics Data System (ADS)

    Luke, Adam; Sanders, Brett F.; Goodrich, Kristen A.; Feldman, David L.; Boudreau, Danielle; Eguiarte, Ana; Serrano, Kimberly; Reyes, Abigail; Schubert, Jochen E.; AghaKouchak, Amir; Basolo, Victoria; Matthew, Richard A.

    2018-04-01

    Flood hazard mapping in the United States (US) is deeply tied to the National Flood Insurance Program (NFIP). Consequently, publicly available flood maps provide essential information for insurance purposes, but they do not necessarily provide relevant information for non-insurance aspects of flood risk management (FRM) such as public education and emergency planning. Recent calls for flood hazard maps that support a wider variety of FRM tasks highlight the need to deepen our understanding about the factors that make flood maps useful and understandable for local end users. In this study, social scientists and engineers explore opportunities for improving the utility and relevance of flood hazard maps through the co-production of maps responsive to end users' FRM needs. Specifically, two-dimensional flood modeling produced a set of baseline hazard maps for stakeholders of the Tijuana River valley, US, and Los Laureles Canyon in Tijuana, Mexico. Focus groups with natural resource managers, city planners, emergency managers, academia, non-profit, and community leaders refined the baseline hazard maps by triggering additional modeling scenarios and map revisions. Several important end user preferences emerged, such as (1) legends that frame flood intensity both qualitatively and quantitatively, and (2) flood scenario descriptions that report flood magnitude in terms of rainfall, streamflow, and its relation to an historic event. Regarding desired hazard map content, end users' requests revealed general consistency with mapping needs reported in European studies and guidelines published in Australia. However, requested map content that is not commonly produced included (1) standing water depths following the flood, (2) the erosive potential of flowing water, and (3) pluvial flood hazards, or flooding caused directly by rainfall. We conclude that the relevance and utility of commonly produced flood hazard maps can be most improved by illustrating pluvial flood hazards and by using concrete reference points to describe flooding scenarios rather than exceedance probabilities or frequencies.

  16. Doubly Robust and Efficient Estimation of Marginal Structural Models for the Hazard Function

    PubMed Central

    Zheng, Wenjing; Petersen, Maya; van der Laan, Mark

    2016-01-01

    In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997), are well-established and widely used tools to account for time-varying confounding. In particular, a MSM can be used to specify the intervention-specific counterfactual hazard function, i.e. the hazard for the outcome of a subject in an ideal experiment where he/she was assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using the Inverse Probability Weighted estimation (IPTW, van der Laan and Petersen (2007), Robins et al. (2000b), Robins (1999), Robins et al. (2008)). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes. In this paper, we present a pooled targeted maximum likelihood estimator (TMLE, van der Laan and Rubin (2006)) for MSM for the hazard function under longitudinal dynamic treatment regimes. The proposed estimator is semiparametric efficient and doubly robust, hence offers bias reduction and efficiency gain over the incumbent IPTW estimator. Moreover, the substitution principle rooted in the TMLE potentially mitigates the sensitivity to large treatment weights in IPTW. We compare the performance of the proposed estimator with the IPTW and a non-targeted substitution estimator in a simulation study. PMID:27227723
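
    The IPTW estimator discussed above can be sketched compactly for a point-treatment simplification (the paper's longitudinal version uses cumulative products of per-period weights and is not reproduced); the code estimates stabilized inverse-probability-of-treatment weights and fits a weighted pooled logistic model as a discrete-time approximation to the hazard MSM. All variables and data are invented.

    ```python
    # Sketch: stabilized IPT weights + weighted pooled logistic regression as a
    # discrete-time approximation to a marginal structural hazard model
    # (point-treatment simplification of the longitudinal estimator).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(9)
    n, periods = 800, 6
    L = rng.standard_normal(n)                                  # baseline confounder
    pA = 1 / (1 + np.exp(-(-0.3 + 0.8 * L)))                    # treatment depends on L
    A = rng.binomial(1, pA)

    # Treatment model -> stabilized weights sw = P(A) / P(A | L).
    ps = smf.logit("A ~ L", data=pd.DataFrame({"A": A, "L": L})).fit(disp=0).predict()
    marginal = A.mean()
    sw = np.where(A == 1, marginal / ps, (1 - marginal) / (1 - ps))

    # Expand to person-period data with an outcome that depends on A and L.
    rows = []
    for i in range(n):
        for t in range(1, periods + 1):
            h = 1 / (1 + np.exp(-(-3.5 - 0.5 * A[i] + 0.6 * L[i] + 0.1 * t)))
            event = rng.random() < h
            rows.append({"event": int(event), "A": A[i], "t": t, "w": sw[i]})
            if event:
                break
    pp = pd.DataFrame(rows)

    # Weighted pooled logistic model: a discrete-time hazard MSM for treatment A.
    msm = smf.glm("event ~ A + t", data=pp, family=sm.families.Binomial(),
                  freq_weights=pp["w"]).fit()
    print(np.exp(msm.params["A"]))        # approximate marginal hazard-odds ratio for A
    ```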

  17. Validation of Temperature Histories for Structural Steel Welds Using Estimated Heat-Affected-Zone Edges

    DTIC Science & Technology

    2016-10-12

    used parametrically for inverse thermal analysis of welds corresponding to other welding processes whose process conditions are within similar...regimes. The present study applies an inverse thermal analysis procedure that uses three-dimensional constraint conditions whose two-dimensional...

  18. Synthesis and identification of three-dimensional faces from image(s) and three-dimensional generic models

    NASA Astrophysics Data System (ADS)

    Liu, Zexi; Cohen, Fernand

    2017-11-01

    We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, turning a difficult two-dimensional (2-D) face recognition problem into a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under good illumination conditions, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.

  19. Ashkin-Teller criticality and weak first-order behavior of the phase transition to a fourfold degenerate state in two-dimensional frustrated Ising antiferromagnets

    NASA Astrophysics Data System (ADS)

    Liu, R. M.; Zhuo, W. Z.; Chen, J.; Qin, M. H.; Zeng, M.; Lu, X. B.; Gao, X. S.; Liu, J.-M.

    2017-07-01

    We study the thermal phase transition of the fourfold degenerate phases (the plaquette and single-stripe states) in the two-dimensional frustrated Ising model on the Shastry-Sutherland lattice using Monte Carlo simulations. The critical Ashkin-Teller-like behavior is identified both in the plaquette phase region and the single-stripe phase region. The four-state Potts critical end points differentiating the continuous transitions from the first-order ones are estimated based on finite-size-scaling analyses. Furthermore, a similar behavior of the transition to the fourfold single-stripe phase is also observed in the anisotropic triangular Ising model. Thus, this work clearly demonstrates that the transitions to the fourfold degenerate states of two-dimensional Ising antiferromagnets exhibit similar transition behavior.

  20. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable with respect to the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of Kalman filter parameters in non-Gaussian noise. Furthermore, having a low-dimensional model feature, the proposed neural network-based speech enhancement algorithm has a much faster speed than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm can produce a good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
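
    To make the Kalman filtering part concrete (leaving aside the recurrent-network parameter estimation that is the paper's contribution), here is a hedged sketch of a Kalman filter for a signal modeled as an AR(2) process observed in additive white noise; the AR coefficients and noise variances are assumed for illustration.

    ```python
    # Sketch: Kalman filter for an AR(2) signal in additive white noise,
    # using the standard state-space form of an autoregressive process.
    import numpy as np

    rng = np.random.default_rng(10)
    n = 2000
    a1, a2 = 1.2, -0.5                    # assumed AR(2) coefficients (stable)
    q, r = 0.05, 0.5                      # process and measurement noise variances

    # Simulate the clean AR(2) signal and its noisy observation.
    s = np.zeros(n)
    for k in range(2, n):
        s[k] = a1 * s[k - 1] + a2 * s[k - 2] + rng.normal(0, np.sqrt(q))
    y = s + rng.normal(0, np.sqrt(r), n)

    # State-space model with state x_k = [s_k, s_{k-1}].
    F = np.array([[a1, a2], [1.0, 0.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.array([[q, 0.0], [0.0, 0.0]])
    R = np.array([[r]])

    x, P = np.zeros(2), np.eye(2)
    s_hat = np.zeros(n)
    for k in range(n):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the noisy observation y[k].
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (y[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        s_hat[k] = x[0]

    print("noisy input MSE:   ", np.mean((y - s) ** 2))
    print("Kalman output MSE: ", np.mean((s_hat - s) ** 2))
    ```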

  1. Final Report: Seismic Hazard Assessment at the PGDP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhinmeng

    2007-06-01

    Selecting a level of seismic hazard at the Paducah Gaseous Diffusion Plant for policy considerations and engineering design is not an easy task because it not only depends on seismic hazard, but also on seismic risk and other related environmental, social, and economic issues. Seismic hazard is the main focus. There is no question that there are seismic hazards at the Paducah Gaseous Diffusion Plant because of its proximity to several known seismic zones, particularly the New Madrid Seismic Zone. The issues in estimating seismic hazard are (1) the methods being used and (2) difficulty in characterizing the uncertainties of seismic sources, earthquake occurrence frequencies, and ground-motion attenuation relationships. This report summarizes how input data were derived, which methodologies were used, and what the hazard estimates at the Paducah Gaseous Diffusion Plant are.

  2. Safety assessment of plant varieties using transcriptomics profiling and a one-class classifier.

    PubMed

    van Dijk, Jeroen P; de Mello, Carla Souza; Voorhuijzen, Marleen M; Hutten, Ronald C B; Arisi, Ana Carolina Maisonnave; Jansen, Jeroen J; Buydens, Lutgarde M C; van der Voet, Hilko; Kok, Esther J

    2014-10-01

    An important part of the current hazard identification of novel plant varieties is comparative targeted analysis of the novel and reference varieties. Comparative analysis will become much more informative with unbiased analytical approaches, e.g. omics profiling. Data analysis estimating the similarity of new varieties to a reference baseline class of known safe varieties would subsequently greatly facilitate hazard identification. Further biological and eventually toxicological analysis would then only be necessary for varieties that fall outside this reference class. For this purpose, a one-class classifier tool was explored to assess and classify transcriptome profiles of potato (Solanum tuberosum) varieties in a model study. Profiles of six different varieties, grown at two locations and harvested in two years, with biological and technical replication, were used to build the model. Two scenarios were applied, representing evaluation of a 'different' variety and of a 'similar' variety. Within the model, higher class distances resulted for the 'different' test set than for the 'similar' test set. The present study may contribute to a more global hazard identification of novel plant varieties. Copyright © 2014 Elsevier Inc. All rights reserved.
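
    The abstract does not specify the classifier in detail; as a generic illustration of the one-class idea, the sketch below fits a One-Class SVM to PCA-reduced profiles of a hypothetical reference class and flags new profiles that fall outside it. All data and settings are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Hypothetical transcriptome matrix: rows = samples of known safe reference
# varieties, columns = gene expression values.
rng = np.random.default_rng(42)
reference_profiles = rng.normal(size=(48, 5000))   # stand-in reference data
new_variety_profiles = rng.normal(size=(6, 5000))  # variety to be assessed

# Reduce dimensionality before fitting the one-class model, then flag
# profiles that fall outside the reference class (-1 = outside).
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),
                    OneClassSVM(nu=0.05, kernel='rbf', gamma='scale'))
clf.fit(reference_profiles)
labels = clf.predict(new_variety_profiles)
print('samples flagged as different from the reference class:', int((labels == -1).sum()))
```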

  3. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
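
    A minimal sketch of the dimensionality-reduction idea, assuming PyWavelets: a rainfall series is decomposed with a discrete wavelet transform and only the largest coefficients are retained, giving a low-dimensional representation of the kind that could be inferred from streamflow. The wavelet family, level, and retention fraction are illustrative, not the study's settings.

```python
import numpy as np
import pywt

# Hypothetical daily rainfall series (mm/day) for a catchment.
rng = np.random.default_rng(1)
rain = np.maximum(0.0, rng.gamma(shape=0.3, scale=8.0, size=1024) - 1.0)

# Multilevel discrete wavelet transform of the series.
coeffs = pywt.wavedec(rain, wavelet='db4', level=5)
flat, slices = pywt.coeffs_to_array(coeffs)

# Keep roughly the largest 5% of coefficients: a low-dimensional representation
# that could be estimated from streamflow instead of the full series.
keep = int(0.05 * flat.size)
threshold = np.sort(np.abs(flat))[-keep]
flat_reduced = np.where(np.abs(flat) >= threshold, flat, 0.0)

# Reconstruct an approximate rainfall series from the reduced coefficients.
rain_approx = pywt.waverec(
    pywt.array_to_coeffs(flat_reduced, slices, output_format='wavedec'),
    wavelet='db4')
print('retained coefficients:', keep, 'of', flat.size)
```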

  4. Chemical Distances for Percolation of Planar Gaussian Free Fields and Critical Random Walk Loop Soups

    NASA Astrophysics Data System (ADS)

    Ding, Jian; Li, Li

    2018-05-01

    We initiate the study on chemical distances of percolation clusters for level sets of two-dimensional discrete Gaussian free fields as well as loop clusters generated by two-dimensional random walk loop soups. One of our results states that the chemical distance between two macroscopic annuli away from the boundary for the random walk loop soup at the critical intensity is of dimension 1 with positive probability. Our proof method is based on an interesting combination of a theorem of Makarov, isomorphism theory, and an entropic repulsion estimate for Gaussian free fields in the presence of a hard wall.

  6. Wake Vortex Prediction Models for Decay and Transport Within Stratified Environments

    NASA Astrophysics Data System (ADS)

    Switzer, George F.; Proctor, Fred H.

    2002-01-01

    This paper proposes two simple models to predict vortex transport and decay. The models are determined empirically from results of three-dimensional large eddy simulations, and are applicable to wake vortices out of ground effect and not subjected to environmental winds. The large eddy simulation results assume a range of ambient turbulence and stratification levels. The models and the results from the large eddy simulations support the hypothesis that the decay of the vortex hazard is decoupled from its change in descent rate.

  7. A novel post-processing scheme for two-dimensional electrical impedance tomography based on artificial neural networks

    PubMed Central

    2017-01-01

    Objective: Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods: In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion: Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance: This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856
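
    As a schematic of the post-processing idea (not the authors' network or training data), the sketch below trains a small multilayer perceptron to map linear reconstructions to reference conductivity images, so that at test time a linear solve is refined by the network; the synthetic images and network settings are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training pairs: each row is a flattened 16x16 conductivity image.
# 'linear_recs' stands in for images produced by a linear EIT solver and
# 'true_imgs' for the reference distributions used as training targets.
rng = np.random.default_rng(7)
n_pixels = 16 * 16
true_imgs = rng.uniform(0.5, 2.0, size=(2000, n_pixels))
linear_recs = true_imgs + 0.2 * rng.standard_normal((2000, n_pixels))

# The post-processing network: learns a mapping from linear solution to image.
post_net = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200, random_state=0)
post_net.fit(linear_recs, true_imgs)

# At reconstruction time, run the linear solver first, then refine with the net.
refined = post_net.predict(linear_recs[:1]).reshape(16, 16)
```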

  8. Incorporating High-Dimensional Exposure Modelling into Studies of Air Pollution and Health.

    PubMed

    Liu, Yi; Shaddick, Gavin; Zidek, James V

    2017-01-01

    Performing studies on the risks of environmental hazards to human health requires accurate estimates of exposures that might be experienced by the populations at risk. Often there will be missing data, and in many epidemiological studies the locations and times of exposure measurements and health data do not match. To a large extent this will be due to the health and exposure data having arisen from completely different data sources and not as the result of a carefully designed study, leading to problems of both 'change of support' and 'misaligned data'. In such cases, a direct comparison of the exposure and health outcome is often not possible without an underlying model to align the two in the spatial and temporal domains. The Bayesian approach provides the natural framework for such models; however, the large amounts of data that can arise from environmental networks mean that inference using Markov Chain Monte Carlo might not be computationally feasible in this setting. Here we adapt the integrated nested Laplace approximation to implement spatio-temporal exposure models. We also propose methods for the integration of large-scale exposure models and health analyses. It is important that any model structure allows the correct propagation of uncertainty from the predictions of the exposure model through to the estimates of risk and associated confidence intervals. The methods are demonstrated using a case study of the levels of black smoke in the UK, measured over several decades, and respiratory mortality.

  9. Use of the PARC code to estimate the off-design transonic performance of an over/under turboramjet nozzle

    NASA Technical Reports Server (NTRS)

    Lam, David W.

    1995-01-01

    The transonic performance of a dual-throat, single-expansion-ramp nozzle (SERN) was investigated with a PARC computational fluid dynamics (CFD) code, an external flow Navier-Stokes solver. The nozzle configuration was from a conceptual Mach 5 cruise aircraft powered by four air-breathing turboramjets. Initial test cases used the two-dimensional version of PARC in Euler mode to investigate the effect of geometric variation on transonic performance. Additional cases used the two-dimensional version in viscous mode and the three-dimensional version in both Euler and viscous modes. Results of the analysis indicate low nozzle performance and a highly three-dimensional nozzle flow at transonic conditions. In another comparative study using the PARC code, a single-throat SERN configuration for which experimental data were available at transonic conditions was used to validate the results of the over/under turboramjet nozzle.

  10. Cluster Analysis and Gaussian Mixture Estimation of Correlated Time-Series by Means of Multi-dimensional Scaling

    NASA Astrophysics Data System (ADS)

    Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi

    We investigate cross-correlations between typical Japanese stocks collected through the Yahoo!Japan website ( http://finance.yahoo.co.jp/ ). By making use of multi-dimensional scaling (MDS) for the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to each stock. To cluster these data points, we fit the data set with a mixture of Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to parameters in the mixture, we attempt to specify the best possible mixture of Gaussians. It might be naturally assumed that all the two-dimensional data points of stocks shrink into a single small region when some economic crisis takes place. The justification of this assumption is numerically checked for the empirical Japanese stock data, for instance, those around 11 March 2011.
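
    A minimal sketch of the clustering step, assuming scikit-learn: Gaussian mixtures with an increasing number of components are fitted to hypothetical two-dimensional MDS coordinates and the model minimizing the AIC is retained.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical 2-D MDS coordinates, one point per stock.
rng = np.random.default_rng(3)
points = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(40, 2)),
                    rng.normal([2.0, 1.0], 0.4, size=(60, 2))])

# Fit mixtures with an increasing number of components and keep the one that
# minimizes the Akaike Information Criterion.
best_model, best_aic = None, np.inf
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, covariance_type='full',
                          n_init=5, random_state=0).fit(points)
    aic = gmm.aic(points)
    if aic < best_aic:
        best_model, best_aic = gmm, aic

print('selected number of clusters:', best_model.n_components)
labels = best_model.predict(points)   # cluster assignment for each stock
```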

  11. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  12. An effective solution to the nonlinear, nonstationary Navier-Stokes equations for two dimensions

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.

    1975-01-01

    A sequence of approximate solutions for the nonlinear, nonstationary Navier-Stokes equations for a two-dimensional domain, from which explicit error estimates and rates of convergence are obtained, is described. This sequence of approximate solutions is based primarily on the Newton-Kantorovich method.

  13. Estimating dead wood during national forest inventories: a review of inventory methodologies and suggestions for harmonization.

    PubMed

    Woodall, Christopher W; Rondeux, Jacques; Verkerk, Pieter J; Ståhl, Göran

    2009-10-01

    Efforts to assess forest ecosystem carbon stocks, biodiversity, and fire hazards have spurred the need for comprehensive assessments of forest ecosystem dead wood (DW) components around the world. Currently, information regarding the prevalence, status, and methods of DW inventories occurring in the world's forested landscapes is scattered. The goal of this study is to describe the status, DW components measured, sample methods employed, and DW component thresholds used by national forest inventories that currently inventory DW around the world. Study results indicate that most countries do not inventory forest DW. Globally, we estimate that about 13% of countries inventory DW using a diversity of sample methods and DW component definitions. A common feature among DW inventories was that most countries had only recently begun DW inventories and employed very low sample intensities. There are major hurdles to harmonizing national forest inventories of DW: differences in population definitions, lack of clarity on sample protocols/estimation procedures, and sparse availability of inventory data/reports. Increasing database/estimation flexibility, developing common dimensional thresholds of DW components, publishing inventory procedures/protocols, releasing inventory data/reports to international peer review, and increasing communication (e.g., workshops) among countries inventorying DW are suggestions forwarded by this study to increase DW inventory harmonization.

  14. Two antenna, two pass interferometric synthetic aperture radar

    DOEpatents

    Martinez, Ana; Doerry, Armin W.; Bickel, Douglas L.

    2005-06-28

    A multi-antenna, multi-pass IFSAR mode utilizing data driven alignment of multiple independent passes can combine the scaling accuracy of a two-antenna, one-pass IFSAR mode with the height-noise performance of a one-antenna, two-pass IFSAR mode. A two-antenna, two-pass IFSAR mode can accurately estimate the larger antenna baseline from the data itself and reduce height-noise, allowing for more accurate information about target ground position locations and heights. The two-antenna, two-pass IFSAR mode can use coarser IFSAR data to estimate the larger antenna baseline. Multi-pass IFSAR can be extended to more than two (2) passes, thereby allowing true three-dimensional radar imaging from stand-off aircraft and satellite platforms.

  15. Considerations on the determination of the limit of detection and the limit of quantification in one-dimensional and comprehensive two-dimensional gas chromatography.

    PubMed

    Krupčík, Ján; Májek, Pavel; Gorovenko, Roman; Blaško, Jaroslav; Kubinec, Robert; Sandra, Pat

    2015-05-29

    Methods based on the blank signal as proposed by the IUPAC procedure and on the signal-to-noise ratio (S/N) as listed in the ISO-11843-1 norm for determination of the limit of detection (LOD) and quantitation (LOQ) in one-dimensional capillary gas chromatography (1D-GC) and comprehensive two-dimensional capillary gas chromatography (GC×GC) are described in detail and compared for both techniques. Flame ionization detection was applied and the variables were the data acquisition frequency and, for GC×GC, also the modulation time. It was found that the LOD and LOQ estimated according to IUPAC might be successfully used for the 1D-GC-FID method. Moreover, LOD and LOQ decrease with decrease of the data acquisition frequency (DAF). For GC×GC-FID, estimation of LOD by IUPAC gave poor reproducibility of results, while for LOQ the reproducibility was acceptable (within ±10% rel.). The LOD and LOQ determined by the S/N concept for both the 1D-GC-FID and GC×GC-FID methods are ca. three times higher than the values estimated by the standard deviation of the blank. Since the distribution pattern of modulated peaks for any analyte separated by GC×GC is random and cannot be predicted, LOQ and LOD may vary within 30% for a 3 s modulation time. Concerning sensitivity, comparison of 1D-GC-FID at 2 Hz and GC×GC-FID at 50 Hz shows a ca. 5 times enhancement of sensitivity in the modulated signal output. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING

    PubMed Central

    Saegusa, Takumi; Wellner, Jon A.

    2013-01-01

    We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools are developed including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559

  17. A spectral clustering search algorithm for predicting shallow landslide size and location

    Treesearch

    Dino Bellugi; David G. Milledge; William E. Dietrich; Jim A. McKean; J. Taylor Perron; Erik B. Sudderth; Brian Kazian

    2015-01-01

    The potential hazard and geomorphic significance of shallow landslides depend on their location and size. Commonly applied one-dimensional stability models do not include lateral resistances and cannot predict landslide size. Multi-dimensional models must be applied to specific geometries, which are not known a priori, and testing all possible geometries is...

  18. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

    Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.

  19. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

    Background: Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results: We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions: We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171

  20. Use of Citizen Science and Social Media to Improve Wind Hazard and Damage Characterization

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Meidani, H.

    2017-12-01

    Windstorm losses are significant in the U.S. annually and cause damage worldwide. A large percentage of losses are caused by localized events (e.g., tornadoes). In order to better mitigate these losses, improvement is needed in understanding the hazard characteristics and physical damage. However, due to the small-scale nature of these events, the resolution of the dedicated measuring network does not capture most occurrences. As a result, damage-based assessments are sometimes used to gauge intensity. These damage assessments often suffer from a lack of available manpower, an inability to arrive at the scene rapidly, and difficulty accessing a damaged site. The use and rapid dissemination of social media, the power of crowds engaged in scientific endeavors, and the public's awareness of their vulnerabilities point to a paradigm shift in how hazards can be sensed in a rapid manner. In this way, `human-sensor' data has the potential to radically improve fundamental understanding of hazards and disasters and resolve some of the existing challenges in wind hazard and damage characterization. Data from social media outlets such as Twitter have been used to aid in damage assessments from hazards such as floods and earthquakes; however, the reliability and uncertainty of participatory sensing have been questioned and have been called the `biggest challenge' for its sustained use. This research proposes to investigate the efficacy of both citizen science applications and social media data to represent wind hazards and associated damage. Research has focused on a two-phase approach: 1) having citizen scientists perform their own `damage survey' (i.e., questionnaire) with known damage to assess uncertainty in estimation, and 2) downloading and analyzing social media text and imagery streams to ascertain the possibility of performing `unstructured damage surveys'. Early results have shown that the untrained public can estimate tornado damage levels in residential structures with some accuracy. In addition, valuable windstorm hazard and damage information in both text and imagery can be extracted and archived from Twitter in an automated fashion. Information extracted from these sources will feed into advances in hazard and disaster modeling, social-cognitive theories of human behavior and decision-making for hazard mitigation.

  1. Determinants of birth intervals in Kerala: an application of Cox's hazard model.

    PubMed

    Nair, S N

    1996-01-01

    "The present study is an attempt to delineate the differences in the patterns and determinants of birth intervals which appear highly relevant in a transitional population such as Kerala [India]. In this country two comparable surveys, with a period difference of 20 years, were conducted. The study tries to estimate the effects of socio-economic, demographic and proximate variables using Cox's proportional hazard model. For the former data-set, socio-economic variables have [a] significant effect on birth intervals, while for the latter data proximate variables are the significant determinants of birth intervals." (SUMMARY IN ITA AND FRE) excerpt

  2. Robustness of survival estimates for radio-marked animals

    USGS Publications Warehouse

    Bunck, C.M.; Chen, C.-L.

    1992-01-01

    Telemetry techniques are often used to study the survival of birds and mammals, particularly when mark-recapture approaches are unsuitable. Both parametric and nonparametric methods to estimate survival have been developed or modified from other applications. An implicit assumption in these approaches is that the probability of re-locating an animal with a functioning transmitter is one. A Monte Carlo study was conducted to determine the bias and variance of the Kaplan-Meier estimator and of an estimator based on the assumption of constant hazard, and to evaluate the performance of the two-sample tests associated with each. Modifications of each estimator which allow a re-location probability of less than one are described and evaluated. Generally, the unmodified estimators were biased but had lower variance. At low sample sizes all estimators performed poorly. Under the null hypothesis, the distribution of all test statistics reasonably approximated the null distribution when survival was low but not when it was high. The power of the two-sample tests was similar.
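
    A minimal sketch of the estimators and tests being evaluated, assuming the lifelines package: a Kaplan-Meier fit for one group of radio-marked animals and a log-rank two-sample comparison between groups. The tracking data are invented, and the sketch assumes every animal is re-located, which is the very assumption the study relaxes.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented tracking data: days until death (event = 1) or censoring (event = 0)
# for two groups of radio-marked animals.
days_a  = [12, 30, 45, 45, 60, 88, 90, 90]
event_a = [1,  1,  0,  1,  1,  0,  0,  1]
days_b  = [8,  15, 20, 34, 40, 52, 70, 90]
event_b = [1,  1,  1,  1,  0,  1,  1,  0]

kmf = KaplanMeierFitter()
kmf.fit(days_a, event_observed=event_a, label='group A')
print(kmf.survival_function_.tail())

# Two-sample comparison analogous to the tests evaluated in the study.
result = logrank_test(days_a, days_b, event_observed_A=event_a, event_observed_B=event_b)
print('log-rank p-value:', result.p_value)
```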

  3. Attitude Estimation or Quaternion Estimation?

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2003-01-01

    The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.

  4. Analysis of noise pollution in an andesite quarry with the use of simulation studies and evaluation indices.

    PubMed

    Kosała, Krzysztof; Stępień, Bartłomiej

    2016-01-01

    This paper presents the verification of two partial indices proposed for the evaluation of continuous and impulse noise pollution in quarries. These indices, together with the sound power of machines index and the noise hazard index at the workstation, are components of the global index of assessment of noise hazard in the working environment of a quarry. This paper shows the results of acoustic tests carried out in an andesite quarry. Noise generated by machines and from performed blasting works was investigated. On the basis of acoustic measurements carried out in real conditions, the sound power levels of machines and the phenomenon of explosion were determined and, based on the results, three-dimensional models of acoustic noise propagation in the quarry were developed. To assess the degree of noise pollution in the area of the quarry, the continuous and impulse noise indices were used.

  5. Thermoacoustic energy effects in electrical arcs.

    PubMed

    Capelli-Schellpfeffer, M; Miller, G H; Humilier, M

    1999-10-30

    Electrical arcs commonly occur in electrical injury incidents. Historically, safe work distances from an energized surface along with personal barrier protection have been employee safety strategies used to minimize electrical arc hazard exposures. Here, the two-dimensional computational simulation of an electrical arc explosion is reported using color graphics to depict the temperature and acoustic force propagation across the geometry of a hypothetical workroom during a time from 0 to 50 ms after the arc initiation. The theoretical results are compared to the experimental findings of staged tests involving a mannequin worker monitored for electrical current flow, temperature, and pressure, and reported data regarding neurologic injury thresholds. This report demonstrates a credible link between electrical explosions and the risk for pressure (acoustic) wave trauma. Our ultimate goal is to protect workers through the design and implementation of preventive strategies that properly account for all electrical arc-induced hazards, including electrical, thermal, and acoustic effects.

  6. Estimating the Tradeoff Between Risk Protection and Moral Hazard with a Nonlinear Budget Set Model of Health Insurance*

    PubMed Central

    Kowalski, Amanda E.

    2015-01-01

    Insurance induces a tradeoff between the welfare gains from risk protection and the welfare losses from moral hazard. Empirical work traditionally estimates each side of the tradeoff separately, potentially yielding mutually inconsistent results. I develop a nonlinear budget set model of health insurance that allows for both simultaneously. Nonlinearities in the budget set arise from deductibles, coinsurance rates, and stoplosses that alter moral hazard as well as risk protection. I illustrate the properties of my model by estimating it using data on employer sponsored health insurance from a large firm. PMID:26664035

  7. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation.

    PubMed

    Baghaie, Ahmadreza; Pahlavan Tafti, Ahmad; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun

    2017-01-01

    The Scanning Electron Microscope (SEM), one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has gained extensive attention since its emergence. However, the acquired micrographs still remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third dimension by use of multi-view image acquisition of the microscopic samples, combined with pre/post-processing steps including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching, and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of the highly complex microscopic samples.
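
    As a generic stand-in for the nonlocal optical flow used by the authors, the sketch below computes Farneback dense optical flow between two rectified SEM views with OpenCV and treats the horizontal flow component as disparity; the file names and the depth constant are placeholders.

```python
import cv2
import numpy as np

# Two SEM micrographs of the same sample from slightly different viewpoints
# (file names are placeholders for a rectified image pair).
img_left = cv2.imread('sem_view_1.png', cv2.IMREAD_GRAYSCALE)
img_right = cv2.imread('sem_view_2.png', cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow between the two rectified views; the positional
# arguments are pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(img_left, img_right, None,
                                    0.5, 4, 21, 5, 7, 1.5, 0)
disparity = flow[..., 0]            # horizontal flow acts as stereo disparity

# For a rectified pair, depth is inversely proportional to |disparity|; the
# constant (focal length times baseline) is set to 1.0 purely for illustration.
depth = 1.0 / (np.abs(disparity) + 1e-3)
```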

  8. Analysis of the phase transition in the two-dimensional Ising ferromagnet using a Lempel-Ziv string-parsing scheme and black-box data-compression utilities

    NASA Astrophysics Data System (ADS)

    Melchert, O.; Hartmann, A. K.

    2015-02-01

    In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L²=128² for different system temperatures T. The latter were chosen from an interval enclosing the critical point Tc of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how well the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
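
    A simplified sketch of the Lempel-Ziv idea: an LZ76-style parsing of a binarized single-spin time series, with the phrase count normalized to an entropy-rate estimate in bits per symbol. The parsing searches only the strict prefix and omits refinements the paper may use.

```python
import numpy as np

def lz76_phrases(s):
    """Number of phrases in a simplified LZ76 parsing: each new phrase is the
    shortest substring starting at the current position that is not found in
    the already-parsed prefix."""
    i, n, phrases = 0, len(s), 0
    while i < n:
        length = 1
        while i + length <= n and s[i:i + length] in s[:i]:
            length += 1
        phrases += 1
        i += length
    return phrases

def lz_entropy_rate(spins):
    """Entropy-rate estimate (bits/symbol) from the phrase count c: c * log2(n) / n."""
    s = ''.join('1' if x > 0 else '0' for x in spins)
    n = len(s)
    return lz76_phrases(s) * np.log2(n) / n

# Toy usage: a single-spin time series, fully ordered (low T) vs. random (high T).
rng = np.random.default_rng(0)
print('ordered sequence:', lz_entropy_rate(np.ones(4096)))                   # near 0
print('random sequence :', lz_entropy_rate(rng.choice([-1, 1], size=4096)))  # near 1
```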

  9. Sampling design for groundwater solute transport: Tests of methods and analysis of Cape Cod tracer test data

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.

    1991-01-01

    Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.

  10. Evaluating large-scale propensity score performance through real-world and synthetic data experiments.

    PubMed

    Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A

    2018-06-22

    Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
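
    A minimal sketch of the L1-regularized propensity score, assuming scikit-learn: a lasso-penalized logistic regression of treatment on all baseline covariates, with the penalty doing the variable selection; the simulated covariates and the regularization strength are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated cohort: rows = patients, columns = binary baseline covariates
# (e.g. diagnosis/drug/procedure codes); 'treated' is the exposure indicator.
rng = np.random.default_rng(11)
X = rng.binomial(1, 0.1, size=(5000, 200)).astype(float)
treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2.0))))

# Lasso-penalized logistic regression: all covariates enter the model and the
# L1 penalty performs the variable selection, in contrast to the univariate
# covariate screen of the hdPS.
ps_model = LogisticRegression(penalty='l1', solver='liblinear', C=0.1, max_iter=1000)
ps_model.fit(X, treated)
propensity = ps_model.predict_proba(X)[:, 1]

print('covariates retained by the penalty:', int((ps_model.coef_ != 0).sum()))
```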

  11. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    PubMed

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

    In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals with reasonable computation cost.

  12. Detrending moving average algorithm for multifractals

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

    The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, generalizing the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. It is found that the backward MFDMA algorithm also outperforms multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
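
    A simplified sketch of the backward (θ=0) detrending moving average: the profile of the series is detrended with a backward moving average and q-th order fluctuation functions are computed pointwise, omitting the segment-wise averaging of the full MFDMA; the scales and q values are illustrative.

```python
import numpy as np

def backward_dma_fluctuations(x, q_values, scales):
    """Pointwise backward (theta = 0) DMA fluctuation functions F_q(n)."""
    y = np.cumsum(x - np.mean(x))                      # profile of the series
    Fq = np.zeros((len(q_values), len(scales)))
    for j, n in enumerate(scales):
        kernel = np.ones(n) / n                        # backward moving average
        trend = np.convolve(y, kernel, mode='full')[:len(y)]
        resid = (y - trend)[n - 1:]                    # keep full-window residuals only
        for i, q in enumerate(q_values):
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(resid ** 2)))
            else:
                Fq[i, j] = np.mean(np.abs(resid) ** q) ** (1.0 / q)
    return Fq

# Toy usage: uncorrelated noise should give generalized Hurst exponents h(q) near 0.5.
rng = np.random.default_rng(2)
x = rng.standard_normal(2 ** 14)
scales = np.unique(np.logspace(1, 3, 20).astype(int))
q_values = [-2, 0, 2]
Fq = backward_dma_fluctuations(x, q_values, scales)
h = [np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0] for i in range(len(q_values))]
print('estimated h(q) for q = -2, 0, 2:', np.round(h, 2))
```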

  13. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    NASA Astrophysics Data System (ADS)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

    The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal magma chamber. Contemporary time-variable hazard can be estimated from the time variable slip rate estimated from the evolving GPS velocity field.

  14. A comparison of two- and three-dimensional stochastic models of regional solute movement

    USGS Publications Warehouse

    Shapiro, A.M.; Cvetkovic, V.D.

    1990-01-01

    Recent models of solute movement in porous media that are based on a stochastic description of the porous medium properties have been dedicated primarily to a three-dimensional interpretation of solute movement. In many practical problems, however, it is more convenient and consistent with measuring techniques to consider flow and solute transport as an areal, two-dimensional phenomenon. The physics of solute movement, however, is dependent on the three-dimensional heterogeneity in the formation. A comparison of two- and three-dimensional stochastic interpretations of solute movement in a porous medium having a statistically isotropic hydraulic conductivity field is investigated. To provide an equitable comparison between the two- and three-dimensional analyses, the stochastic properties of the transmissivity are defined in terms of the stochastic properties of the hydraulic conductivity. The variance of the transmissivity is shown to be significantly reduced in comparison to that of the hydraulic conductivity, and the transmissivity is spatially correlated over larger distances. These factors influence the two-dimensional interpretations of solute movement by underestimating the longitudinal and transverse growth of the solute plume in comparison to its description as a three-dimensional phenomenon. Although this analysis is based on small perturbation approximations and the special case of a statistically isotropic hydraulic conductivity field, it casts doubt on the use of a stochastic interpretation of the transmissivity in describing regional scale movement. However, by assuming the transmissivity to be the vertical integration of the hydraulic conductivity field at a given position, the stochastic properties of the hydraulic conductivity can be estimated from the stochastic properties of the transmissivity and applied to obtain a more accurate interpretation of solute movement. © 1990 Kluwer Academic Publishers.

  15. Inference for High-dimensional Differential Correlation Matrices *

    PubMed Central

    Cai, T. Tony; Zhang, Anru

    2015-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with the breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed. PMID:26500380

  16. Mechanism of Superconductivity in Quasi-Two-Dimensional Organic Conductor β-(BDA-TTP) Salts

    NASA Astrophysics Data System (ADS)

    Nonoyama, Yoshito; Maekawa, Yukiko; Kobayashi, Akito; Suzumura, Yoshikazu; Ito, Hiroshi

    2008-09-01

    We investigate theoretically the superconductivity of two-dimensional organic conductors, β-(BDA-TTP)2SbF6 and β-(BDA-TTP)2AsF6, to understand the role of the spin and charge fluctuations. The transition temperature is estimated by applying random phase approximation to an extended Hubbard model wherein realistic transfer energies are estimated by extended Hückel calculation. We find a gapless superconducting state with a dxy-like symmetry, which is consistent with the experimental results obtained by specific heat and scanning tunneling microscope. In the present model with an effectively half-filled triangular lattice, spin fluctuation competes with charge fluctuation as a mechanism of pairing interaction since both fluctuations have the same characteristic momentum q=(π,0) for V being smaller than U. This is in contrast to a model with a quarter-filled square lattice, wherein both fluctuations contribute cooperatively to pairing interaction due to fluctuations having different characteristic momenta. The resultant difference in the superconductivity of these two materials is also discussed.

  17. Structure and stability of genetic variance-covariance matrices: A Bayesian sparse factor analysis of transcriptional variation in the three-spined stickleback.

    PubMed

    Siren, J; Ovaskainen, O; Merilä, J

    2017-10-01

    The genetic variance-covariance matrix (G) is a quantity of central importance in evolutionary biology due to its influence on the rate and direction of multivariate evolution. However, the predictive power of empirically estimated G-matrices is limited for two reasons. First, phenotypes are high-dimensional, whereas traditional statistical methods are tuned to estimate and analyse low-dimensional matrices. Second, the stability of G to environmental effects and over time remains poorly understood. Using Bayesian sparse factor analysis (BSFG) designed to estimate high-dimensional G-matrices, we analysed levels of variation and covariation in 10,527 expressed genes in a large (n = 563) half-sib breeding design of three-spined sticklebacks subject to two temperature treatments. We found significant differences in the structure of G between the treatments: heritabilities and evolvabilities were higher in the warm than in the low-temperature treatment, suggesting more and faster opportunity to evolve in warm (stressful) conditions. Furthermore, comparison of G and its phenotypic equivalent P revealed the latter is a poor substitute for the former. Most strikingly, the results suggest that the expected impact of G on evolvability, as well as the similarity among G-matrices, may depend strongly on the number of traits included in the analyses. In our results, the inclusion of only a few traits in the analyses leads to underestimation of the differences between the G-matrices and of their predicted impacts on evolution. While the results highlight the challenges involved in estimating G, they also illustrate that by enabling the estimation of large G-matrices, the BSFG method can improve predicted evolutionary responses to selection. © 2017 John Wiley & Sons Ltd.

  18. Extended Kalman Filter framework for forecasting shoreline evolution

    USGS Publications Warehouse

    Long, Joseph; Plant, Nathaniel G.

    2012-01-01

    A shoreline change model incorporating both long- and short-term evolution is integrated into a data assimilation framework that uses sparse observations to generate an updated forecast of shoreline position and to estimate unobserved geophysical variables and model parameters. Application of the assimilation algorithm provides quantitative statistical estimates of combined model-data forecast uncertainty which is crucial for developing hazard vulnerability assessments, evaluation of prediction skill, and identifying future data collection needs. Significant attention is given to the estimation of four non-observable parameter values and separating two scales of shoreline evolution using only one observable morphological quantity (i.e. shoreline position).
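
    A generic extended Kalman filter predict/update cycle is sketched below to make the assimilation step concrete; it is not the USGS implementation, and the state-transition and observation functions, their Jacobians, and the noise covariances are placeholders supplied by the caller.

      import numpy as np

      def ekf_step(x, P, z, f, F, h, H, Q, R):
          """One extended-Kalman-filter cycle (generic sketch, not the USGS code).

          x, P : prior state estimate and covariance
          z    : new (possibly sparse) observation vector
          f, F : state-transition function and its Jacobian
          h, H : observation function and its Jacobian
          Q, R : process- and measurement-noise covariances
          """
          # predict
          x_pred = f(x)
          F_k = F(x)
          P_pred = F_k @ P @ F_k.T + Q
          # update with the new shoreline observation
          H_k = H(x_pred)
          S = H_k @ P_pred @ H_k.T + R
          K = P_pred @ H_k.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - h(x_pred))
          P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
          return x_new, P_new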

  19. Statistical analysis of the uncertainty related to flood hazard appraisal

    NASA Astrophysics Data System (ADS)

    Notaro, Vincenza; Freni, Gabriele

    2015-12-01

    The estimation of flood hazard frequency statistics for an urban catchment is of great interest in practice. It provides the evaluation of potential flood risk and related damage and supports decision making for flood risk management. Flood risk is usually defined as a function of the probability that a system deficiency can cause flooding (hazard) and the expected damage due to the flooding magnitude (damage), taking into account both the exposure and the vulnerability of the goods at risk. The expected flood damage can be evaluated by an a priori estimation of potential damage caused by flooding or by interpolating real damage data. With regard to flood hazard appraisal, several procedures propose to identify a hazard indicator (HI), such as flood depth or the combination of flood depth and velocity, and to assess the flood hazard for the analyzed area by comparing the HI variables with user-defined threshold values or curves (penalty curves or matrixes). However, flooding data are usually unavailable or piecemeal, preventing a reliable flood hazard analysis; therefore hazard analysis is often performed by means of mathematical simulations aimed at evaluating water levels and flow velocities over the catchment surface. As a result, a great part of the uncertainties intrinsic to flood risk appraisal can be related to the hazard evaluation, due to the uncertainty inherent in modeling results and to the subjectivity of the user-defined hazard thresholds applied to link flood depth to a hazard level. In the present work, a statistical methodology is proposed for evaluating and reducing the uncertainties connected with hazard level estimation. The methodology has been applied to a real urban watershed as a case study.

  20. An overall strategy based on regression models to estimate relative survival and model the effects of prognostic factors in cancer survival studies.

    PubMed

    Remontet, L; Bossard, N; Belot, A; Estève, J

    2007-05-10

    Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled until 10 years of follow-up using parametric continuous functions. Six models including cubic regression splines were considered, and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of the mortality hazard and allowed us to deal with sparse data while taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could also be obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.
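
    The Poisson-regression route mentioned at the end of the abstract can be sketched with statsmodels: a smooth hazard over split follow-up intervals is fitted as a Poisson model with a log person-time offset, and AIC is compared across candidate spline bases. This is a simplified illustration that omits the expected (background) mortality component which makes the full model a relative-survival model; the column names, interval split, and bs(t, df=4) basis are assumptions.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Follow-up is assumed already split into intervals, each row holding the
      # interval midpoint t, the number of deaths d and the person-years py.
      df = pd.DataFrame({
          "t":  np.repeat(np.arange(0.5, 10.5, 1.0), 3),
          "d":  np.random.default_rng(1).poisson(5, 30),
          "py": np.full(30, 100.0),
      })

      # Poisson likelihood with a log person-time offset reproduces the hazard
      # likelihood; bs(t, df=4) is an illustrative cubic regression spline basis.
      model = smf.glm("d ~ bs(t, df=4)", data=df,
                      family=sm.families.Poisson(),
                      offset=np.log(df["py"]))
      fit = model.fit()
      print(fit.aic)   # AIC can be compared across candidate spline models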

  1. Design of a New Concentration Series for the Orthogonal Sample Design Approach and Estimation of the Number of Reactions in Chemical Systems.

    PubMed

    Shi, Jiajia; Liu, Yuhai; Guo, Ran; Li, Xiaopei; He, Anqi; Gao, Yunlong; Wei, Yongju; Liu, Cuige; Zhao, Ying; Xu, Yizhuang; Noda, Isao; Wu, Jinguang

    2015-11-01

    A new concentration series is proposed for the construction of a two-dimensional (2D) synchronous spectrum for orthogonal sample design analysis to probe intermolecular interaction between solutes dissolved in the same solutions. The obtained 2D synchronous spectrum possesses the following two properties: (1) cross peaks in the 2D synchronous spectra can be used to reflect intermolecular interaction reliably, since interference portions that have nothing to do with intermolecular interaction are completely removed, and (2) the two-dimensional synchronous spectrum produced can effectively avoid accidental collinearity. Hence, the correct number of nonzero eigenvalues can be obtained so that the number of chemical reactions can be estimated. In a real chemical system, noise present in one-dimensional spectra may also produce nonzero eigenvalues. To get the correct number of chemical reactions, we classified nonzero eigenvalues into significant nonzero eigenvalues and insignificant nonzero eigenvalues. Significant nonzero eigenvalues can be identified by inspecting the pattern of the corresponding eigenvector with the help of the Durbin-Watson statistic. As a result, the correct number of chemical reactions can be obtained from significant nonzero eigenvalues. This approach provides a solid basis to obtain insight into subtle spectral variations caused by intermolecular interaction.
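
    A hedged numpy sketch of the eigenvalue-counting step: a synchronous 2D spectrum is built from a series of 1D spectra and its nonzero eigenvalues are screened. The normalization and the crude 1% cut-off below are illustrative only; the paper instead inspects eigenvector patterns with the Durbin-Watson statistic.

      import numpy as np

      def synchronous_spectrum(A):
          """A: (m, n) matrix of m one-dimensional spectra over n wavenumbers."""
          A_c = A - A.mean(axis=0)                 # dynamic (mean-centred) spectra
          return A_c.T @ A_c / (A.shape[0] - 1)    # (n, n) synchronous spectrum

      # Hypothetical series of 6 spectra measured on 400 wavenumber points
      rng = np.random.default_rng(0)
      A = rng.normal(size=(6, 400))

      Phi = synchronous_spectrum(A)
      eigvals = np.linalg.eigvalsh(Phi)[::-1]      # descending order

      # Crude significance screen: keep eigenvalues well above the noise floor.
      significant = eigvals[eigvals > 0.01 * eigvals[0]]
      print(len(significant))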

  2. A one- and two-layer model for estimating evapotranspiration with remotely sensed surface temperature and ground-based meteorological data over partial canopy cover

    NASA Technical Reports Server (NTRS)

    Kustas, William P.; Choudhury, Bhaskar J.; Kunkel, Kenneth E.

    1989-01-01

    Surface-air temperature differences are commonly used in a bulk resistance equation for estimating sensible heat flux (H), which is inserted in the one-dimensional energy balance equation to solve for the latent heat flux (LE) as a residual. Serious discrepancies between estimated and measured LE have been observed for partial-canopy-cover conditions, which are mainly attributed to inappropriate estimates of H. To improve the estimates of H over sparse canopies, one- and two-layer resistance models that account for some of the factors causing poor agreement are developed. The utility of the two models is tested with remotely sensed and micrometeorological data for a furrowed cotton field with 20 percent cover and a dry soil surface. It is found that the one-layer model performs better than the two-layer model when a theoretical bluff-body correction for heat transfer is used instead of an empirical adjustment; otherwise, the two-layer model is better.
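
    The one-layer bulk-resistance calculation described above can be written in a few lines; the sketch below computes H from the surface-air temperature difference and LE as the energy-balance residual. All numerical values (resistance, fluxes, temperatures) are placeholders, not data from the cited cotton-field experiment.

      # One-layer bulk-resistance sketch: H = rho*cp*(Ts - Ta)/ra, LE = Rn - G - H
      RHO_CP = 1200.0          # volumetric heat capacity of air, J m-3 K-1 (approx.)

      def sensible_heat(Ts, Ta, ra):
          return RHO_CP * (Ts - Ta) / ra

      def latent_heat_residual(Rn, G, H):
          return Rn - G - H

      # Illustrative numbers for a sparse canopy at midday (assumptions)
      Ts, Ta = 318.0, 305.0    # radiometric surface and air temperature, K
      ra = 30.0                # aerodynamic resistance, s m-1
      Rn, G = 550.0, 120.0     # net radiation and soil heat flux, W m-2

      H = sensible_heat(Ts, Ta, ra)
      LE = latent_heat_residual(Rn, G, H)
      print(round(H), round(LE))   # W m-2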

  3. Perception of potential nuclear disaster: the relation of likelihood and consequence estimates of risk.

    PubMed

    Mehta, M D; Simpson-Housley, P

    1994-12-01

    This study examined the correlations of ratings of expectation of a future disaster in a nuclear power plant and estimation of its consequences in a random sample of 150 adults who lived within two kilometers of a nuclear power plant. Analysis suggested a significant positive but low relation. This finding indicates that risk perception might be explored using constellations of beliefs and attitudes toward hazards without invoking personality characteristics like trait anxiety or demographic variables such as gender.

  4. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    PubMed

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, by adopting a two-layer approach (capillary fringe and vadose zone) as employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be up to two orders of magnitude higher than the ones estimated by the numerical model. In short, the model proposed in this work can represent an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.

  5. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high-dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting. Thus, only non-standard, computationally intensive procedures based on simulating the marginal likelihood have so far been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high-dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high-dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
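
    To illustrate the computational point, the sketch below evaluates a rectangle probability of a correlated Gaussian vector with scipy's multivariate normal CDF, which is the kind of high-dimensional integral that remains after the transformation described in the abstract. The dimension, exchangeable correlation and thresholds are placeholders, and this is not the authors' likelihood code.

      import numpy as np
      from scipy.stats import multivariate_normal

      # P(U_1 <= u_1, ..., U_d <= u_d) for correlated normals, evaluated via the
      # multivariate normal CDF instead of explicit d-dimensional quadrature.
      d = 15
      rho = 0.4
      cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)   # exchangeable correlation
      upper = np.zeros(d)                                   # thresholds (placeholder)

      p = multivariate_normal.cdf(upper, mean=np.zeros(d), cov=cov)
      print(p)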

  6. Generalized moments expansion applied to the two-dimensional S= 1 /2 Heisenberg model

    NASA Astrophysics Data System (ADS)

    Mancini, Jay D.; Murawski, Robert K.; Fessatidis, Vassilios; Bowen, Samuel P.

    2005-12-01

    In this work we derive a generalized moments expansion (GMX), to third order, of which the well-established connected moments expansion and the alternate moments expansion are shown to be special cases. We discuss the benefits of the GMX with respect to the avoidance of singularities which are known to plague such moments methods. We then apply the GMX to estimate the ground-state energy of the two-dimensional S=1/2 Heisenberg square lattice and compare these results to those of both spin-wave theory and the linked-cluster expansion.
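
    For orientation, the standard connected-moments construction that the GMX generalizes is recalled below in LaTeX (written from memory as a textbook form; the paper's own third-order GMX terms are not reproduced). Here |phi> is a trial state and H the lattice Hamiltonian.

      % Moments and connected moments of H with respect to a trial state |phi>
      \mu_n = \langle \phi | H^n | \phi \rangle, \qquad I_1 = \mu_1, \qquad
      I_{n+1} = \mu_{n+1} - \sum_{k=0}^{n-1} \binom{n}{k}\, I_{k+1}\, \mu_{n-k}

      % Lowest-order connected-moments estimate of the ground-state energy, CMX(2)
      E_0 \approx I_1 - \frac{I_2^2}{I_3}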

  7. Force balance on two-dimensional superconductors with a single moving vortex

    NASA Astrophysics Data System (ADS)

    Chung, Chun Kit; Arahata, Emiko; Kato, Yusuke

    2014-03-01

    We study forces on two-dimensional superconductors with a single moving vortex based on a recent fully self-consistent calculation of DC conductivity in an s-wave superconductor (E. Arahata and Y. Kato, arXiv:1310.0566). By considering momentum balance of the whole liquid, we attempt to identify various contributions to the total transverse force on the vortex. This provides an estimation of the effective Magnus force based on the quasiclassical theory generalized by Kita [T. Kita, Phys. Rev. B, 64, 054503 (2001)], which allows for the Hall effect in vortex states.

  8. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis

    PubMed Central

    Gong, Xiajing; Hu, Meng

    2018-01-01

    Abstract Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
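
    As a hedged baseline for the comparison described above, the sketch below fits a Cox proportional hazards model with lifelines on simulated time-to-event data and reports the concordance index; the simulated covariates are placeholders and the ML competitor (for example, a random survival forest) is not shown.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      # Simulated time-to-event data with two covariates (purely illustrative)
      rng = np.random.default_rng(0)
      n = 500
      x1, x2 = rng.normal(size=n), rng.normal(size=n)
      hazard = np.exp(0.7 * x1 - 0.5 * x2)
      T = rng.exponential(1.0 / hazard)
      C = rng.exponential(2.0, size=n)                  # censoring times
      df = pd.DataFrame({"time": np.minimum(T, C),
                         "event": (T <= C).astype(int),
                         "x1": x1, "x2": x2})

      cph = CoxPHFitter()
      cph.fit(df, duration_col="time", event_col="event")
      print(cph.concordance_index_)   # concordance index used as the benchmark metric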

  9. An Inertial Dual-State State Estimator for Precision Planetary Landing with Hazard Detection and Avoidance

    NASA Technical Reports Server (NTRS)

    Bishop, Robert H.; DeMars, Kyle; Trawny, Nikolas; Crain, Tim; Hanak, Chad; Carson, John M.; Christian, John

    2016-01-01

    The navigation filter architecture successfully deployed on the Morpheus flight vehicle is presented. The filter was developed as a key element of the NASA Autonomous Landing and Hazard Avoidance Technology (ALHAT) project and over the course of 15 free flights was integrated into the Morpheus vehicle, operations, and flight control loop. Flight testing was completed by demonstrating autonomous hazard detection and avoidance, integration of altimeter, surface relative velocity (velocimeter), and hazard relative navigation (HRN) measurements into the onboard dual-state inertial estimator Kalman filter software, and landing within 2 meters of the vertical testbed GPS-based navigation solution at the safe landing site target. Morpheus followed a trajectory that included an ascent phase followed by a partial descent-to-landing, although the proposed filter architecture is applicable to more general planetary precision entry, descent, and landings. The main new contribution is the incorporation of a sophisticated hazard relative navigation sensor, originally intended to locate safe landing sites, into the navigation system and its employment as a navigation sensor. The formulation of a dual-state inertial extended Kalman filter was designed to address the precision planetary landing problem when viewed as a rendezvous problem with an intended landing site. For the required precision navigation system that is capable of navigating along a descent-to-landing trajectory to a precise landing, the impact of attitude errors on the translational state estimation is included in a fully integrated navigation structure in which translation state estimation is combined with attitude state estimation. The map tie errors are estimated as part of the process, thereby creating a dual-state filter implementation. Also, the filter is implemented using inertial states rather than states relative to the target. External measurements include an altimeter, a velocimeter, a star camera, a terrain relative navigation sensor, and a hazard relative navigation sensor providing information regarding hazards on a map generated on-the-fly.

  10. Association between GFR Estimated by Multiple Methods at Dialysis Commencement and Patient Survival

    PubMed Central

    Wong, Muh Geot; Pollock, Carol A.; Cooper, Bruce A.; Branley, Pauline; Collins, John F.; Craig, Jonathan C.; Kesselhut, Joan; Luxton, Grant; Pilmore, Andrew; Harris, David C.

    2014-01-01

    Summary Background and objectives The Initiating Dialysis Early and Late study showed that planned early or late initiation of dialysis, based on the Cockcroft and Gault estimation of GFR, was associated with identical clinical outcomes. This study examined the association of all-cause mortality with estimated GFR at dialysis commencement, which was determined using multiple formulas. Design, setting, participants, & measurements Initiating Dialysis Early and Late trial participants were stratified into tertiles according to the estimated GFR measured by Cockcroft and Gault, Modification of Diet in Renal Disease, or Chronic Kidney Disease-Epidemiology Collaboration formula at dialysis commencement. Patient survival was determined using multivariable Cox proportional hazards model regression. Results Only Initiating Dialysis Early and Late trial participants who commenced on dialysis were included in this study (n=768). A total of 275 patients died during the study. After adjustment for age, sex, racial origin, body mass index, diabetes, and cardiovascular disease, no significant differences in survival were observed between estimated GFR tertiles determined by Cockcroft and Gault (lowest tertile adjusted hazard ratio, 1.11; 95% confidence interval, 0.82 to 1.49; middle tertile hazard ratio, 1.29; 95% confidence interval, 0.96 to 1.74; highest tertile reference), Modification of Diet in Renal Disease (lowest tertile hazard ratio, 0.88; 95% confidence interval, 0.63 to 1.24; middle tertile hazard ratio, 1.20; 95% confidence interval, 0.90 to 1.61; highest tertile reference), and Chronic Kidney Disease-Epidemiology Collaboration equations (lowest tertile hazard ratio, 0.93; 95% confidence interval, 0.67 to 1.27; middle tertile hazard ratio, 1.15; 95% confidence interval, 0.86 to 1.54; highest tertile reference). Conclusion Estimated GFR at dialysis commencement was not significantly associated with patient survival, regardless of the formula used. However, a clinically important association cannot be excluded, because observed confidence intervals were wide. PMID:24178976

  11. Atmospheric Carbon Dioxide and the Global Carbon Cycle: The Key Uncertainties

    DOE R&D Accomplishments Database

    Peng, T. H.; Post, W. M.; DeAngelis, D. L.; Dale, V. H.; Farrell, M. P.

    1987-12-01

    The biogeochemical cycling of carbon between its sources and sinks determines the rate of increase in atmospheric CO2 concentrations. The observed increase in atmospheric CO2 content is less than the estimated release from fossil fuel consumption and deforestation. This discrepancy can be explained by interactions between the atmosphere and other global carbon reservoirs such as the oceans and the terrestrial biosphere, including soils. Undoubtedly, the oceans have been the most important sinks for CO2 produced by man. But the physical, chemical, and biological processes of oceans are complex and, therefore, credible estimates of CO2 uptake can probably only come from mathematical models. Unfortunately, one- and two-dimensional ocean models do not allow for enough CO2 uptake to accurately account for known releases. Thus, they produce higher concentrations of atmospheric CO2 than was historically the case. More complex three-dimensional models, while currently being developed, may make better use of existing tracer data than do one- and two-dimensional models and will also incorporate climate feedback effects to provide a more realistic view of ocean dynamics and CO2 fluxes. The inability of current models to accurately estimate oceanic uptake of CO2 creates one of the key uncertainties in predictions of atmospheric CO2 increases and climate responses over the next 100 to 200 years.

  12. Coordinated pre-preemption of traffic signals to enhance railroad grade crossing safety in urban areas and estimation of train impacts to arterial travel time delay : [technical summary].

    DOT National Transportation Integrated Search

    2014-01-01

    Rail lines present two major challenges to the roadways they intersect: potential for collisions and increased congestion. In addition, congestion can contribute to collision hazards when drivers are impatient or vehicles are prevented from clea...

  13. Pose determination of a blade implant in three dimensions from a single two-dimensional radiograph.

    PubMed

    Toti, Paolo; Barone, Antonio; Marconcini, Simone; Menchini-Fabris, Giovanni Battista; Martuscelli, Ranieri; Covani, Ugo

    2018-05-01

    The aim of the study was to introduce a mathematical method to estimate the correct pose of a blade implant by evaluating the radiographic features obtained from a single two-dimensional image. Blade-form implant bed preparation was performed using the piezosurgery device, and placement was attained with the use of a magnetic mallet. The pose determination of the blade was described by means of three consecutive rotations defined by three angles of orientation (triplet φ, θ and ψ). Retrospective analysis of periapical radiographs was performed. This method was used to compare the implant correction factor (axial length along the marker, i.e. the implant structure) with the angular correction factor (a trigonometric function of the triplet). The accuracy of the method was tested by generating two-dimensional radiographic simulations of the blades, which were then compared with the images of the implants as appearing on the real radiographs. Two patients had to be excluded from further evaluation because the values of the estimated pose angles showed too wide a range for good standardization of serial radiographs: the intrapatient range from baseline to the 1-year survey exceeded a threshold determined by the clinicians (30°). The linear dependence between the implant (CF°) and angular (CF^) correction factors was estimated by a robust linear regression, yielding the following coefficients: slope, 0.908; intercept, -0.092; and coefficient of determination, 0.924. The absolute error in accuracy was -0.29 ± 4.35, 0.23 ± 3.81 and 0.64 ± 1.18°, respectively, for the angles φ, θ and ψ. The present theoretical and experimental study established the possibility of determining, a posteriori, a unique triplet of angles (φ, θ and ψ) which describes the pose of a blade from a single two-dimensional radiograph, and of suggesting a method to detect cases in which the standardized geometric projection failed. The angular correction of the bone level yielded results very close to those obtained with an internal marker related to the implant length.
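
    A minimal sketch of composing three consecutive rotations into a pose matrix and reading off the foreshortened (projected) implant length. The axis order (x, then y, then z), the unrotated implant axis, and the assumption that the radiograph plane is the xy-plane are illustrative conventions, not necessarily those of the paper.

      import numpy as np

      def rot_x(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

      def rot_y(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

      def rot_z(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

      def pose_matrix(phi, theta, psi):
          """Three consecutive rotations (assumed x-y-z sequence)."""
          return rot_z(psi) @ rot_y(theta) @ rot_x(phi)

      # Apparent axial length of an implant of true length L on the image plane
      L = 10.0                                     # mm, hypothetical implant length
      R = pose_matrix(np.radians(5), np.radians(12), np.radians(3))
      axis_image = R @ np.array([0.0, 1.0, 0.0])   # implant axis assumed in-plane when unrotated
      apparent_length = L * np.linalg.norm(axis_image[:2])
      print(apparent_length)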

  14. Reliable two-dimensional phase unwrapping method using region growing and local linear estimation.

    PubMed

    Zhou, Kun; Zaitsev, Maxim; Bao, Shanglian

    2009-10-01

    In MRI, phase maps can provide useful information about parameters such as field inhomogeneity, velocity of blood flow, and the chemical shift between water and fat. As phase is defined in the (-pi,pi] range, however, phase wraps often occur, which complicates image analysis and interpretation. This work presents a two-dimensional phase unwrapping algorithm that uses quality-guided region growing and local linear estimation. The quality map employs the variance of the second-order partial derivatives of the phase as the quality criterion. Phase information from unwrapped neighboring pixels is used to predict the correct phase of the current pixel using a linear regression method. The algorithm was tested on both simulated and real data, and is shown to successfully unwrap phase images that are corrupted by noise and have rapidly changing phase. (c) 2009 Wiley-Liss, Inc.
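
    A hedged numpy/scipy sketch of the quality criterion named above (local variance of second-order partial differences of the wrapped phase); the wrapping of differences, the padding and the 3x3 window are simplifications of the published algorithm, which additionally performs quality-guided region growing.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def wrap(p):
          """Wrap values into (-pi, pi]."""
          return (p + np.pi) % (2 * np.pi) - np.pi

      def quality_map(phase, size=3):
          """Lower values indicate more reliable pixels: local variance of
          second-order wrapped phase differences (simplified criterion)."""
          d2x = wrap(np.diff(phase, n=2, axis=1))
          d2y = wrap(np.diff(phase, n=2, axis=0))
          # pad back to the original shape so the two maps can be combined
          d2x = np.pad(d2x, ((0, 0), (1, 1)), mode="edge")
          d2y = np.pad(d2y, ((1, 1), (0, 0)), mode="edge")
          return (uniform_filter(d2x**2, size) - uniform_filter(d2x, size)**2 +
                  uniform_filter(d2y**2, size) - uniform_filter(d2y, size)**2)

      phase = np.angle(np.exp(1j * np.random.default_rng(0).normal(size=(64, 64))))
      print(quality_map(phase).shape)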

  15. A BDDC Algorithm with Deluxe Scaling for Three-Dimensional H (curl) Problems

    DOE PAGES

    Dohrmann, Clark R.; Widlund, Olof B.

    2015-04-28

    In our paper, we present and analyze a BDDC algorithm for a class of elliptic problems in the three-dimensional H(curl) space. Compared with existing results, our condition number estimate requires fewer assumptions and also involves two fewer powers of log(H/h), making it consistent with optimal estimates for other elliptic problems. Here, H/h is the maximum of Hi/hi over all subdomains, where Hi and hi are the diameter and the smallest element diameter for the subdomain Ωi. The analysis makes use of two recent developments. The first is our new approach to averaging across the subdomain interfaces, while the second is a new technical tool that allows arguments involving trace classes to be avoided. Furthermore, numerical examples are presented to confirm the theory and demonstrate the importance of the new averaging approach in certain cases.

  16. Folding Properties of Two-Dimensional Deployable Membrane Using FEM Analyses

    NASA Astrophysics Data System (ADS)

    Satou, Yasutaka; Furuya, Hiroshi

    Folding FEM analyses are presented to examine the folding properties of a two-dimensional deployable membrane for a precise deployment simulation. A fold model of the membrane is proposed by dividing the wrapping fold process into two regions, the folded state and the transient process. The cross-section of the folded state is assumed to be a repeating structure, and analytical procedures for the repeating structure are constructed. To investigate the mechanical properties of the crease in detail, the bending stiffness is considered in the FEM analyses. As results of the FEM analyses, the configuration of the membrane and the contact force exerted by the adjacent membrane are obtained quantitatively for an arbitrary layer pitch. The possible occurrence of plastic deformation is estimated using the von Mises stress in the crease. The FEM results are compared with one-dimensional approximation analyses to evaluate these results.

  17. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different. While the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite-dimensional approximations for the Hessians are constructed and preconditioners for gradient-based methods are derived from these approximate Hessians.

  18. Study of the hydrodynamics of the formation of flows caused by the interaction of a shock wave with two-dimensional density perturbations on the Iskra-5 laser facility

    NASA Astrophysics Data System (ADS)

    Babanov, A. V.; Barinov, M. A.; Barinov, S. P.; Garanin, R. V.; Zhidkov, N. V.; Kalmykov, N. A.; Kovalenko, V. P.; Kokorin, S. N.; Pinegin, A. V.; Solomatina, E. Yu.; Solomatin, I. I.; Suslov, N. A.

    2017-03-01

    The hydrodynamics of the flow formation due to the interaction of a shock wave with two-dimensional density perturbations is experimentally investigated on the Iskra-5 laser facility. Shadow images of a jet arising as a result of the impact of a shock wave (formed by a soft X-ray pulse from a target-illuminator) on a flat aluminium target with a blind cylindrical cavity are recorded in experiments with point-like X-ray backlighting having a photon energy of ~4.5 keV. The sizes and mass of the jet ejected from the aluminium cavity by this shock wave are estimated. The experimental data are compared with the results of numerical simulation of the jet formation and dynamics according to the two-dimensional MID-ND2D code.

  19. Landslide-Generated Waves in a Dam Reservoir: The Effects of Landslide Rheology and Initial Submergence

    NASA Astrophysics Data System (ADS)

    Yavari Ramsheh, S.; Ataie-Ashtiani, B.

    2017-12-01

    Recent studies revealed that landslide-generated waves (LGWs) impose the largest tsunami hazard on our shorelines, although earthquake-generated waves (EGWs) occur more often. Also, EGWs are commonly followed by a large number of landslide hazards. Dam reservoirs are particularly vulnerable to landslide events due to being located in mountainous areas. Accurate estimation of such hazards and their destructive consequences helps authorities to reduce their risks by constructive measures. In this regard, a two-layer two-phase Coulomb mixture flow (2LCMFlow) model is applied to investigate the effects of landslide characteristics on LGWs for a real-sized simplification of the Maku dam reservoir, located in the north of Iran. A sensitivity analysis is performed on the role of landslide rheological and constitutive parameters and its initial submergence in LGW characteristics and formation patterns. The numerical results show that for a subaerial (SAL), a semi-submerged (SSL), and a submarine landslide (SML) with the same initial geometry, the SSLs can create the largest wave crest, up to 60% larger than SALs, for dense material. However, SMLs generally create the largest wave troughs and SALs travel the maximum runout distances beneath the water. Regarding the two-phase (solid-liquid) nature of the landslide, when interstitial water is isolated from the water layer along the water/landslide interface, a LGW with an up to 30% higher wave crest can be created. In this condition, increasing the pore water pressure within the granular layer results in an up to 35% higher wave trough and a 40% lower wave crest at the same time. These results signify the importance of an appropriate description of the two-phase nature and rheological behavior of landslides in the accurate estimation of LGWs, which demands further numerical, physical, and field studies of such phenomena.

  20. Calibrating a Rainfall-Runoff and Routing Model for the Continental United States

    NASA Astrophysics Data System (ADS)

    Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.

    2014-12-01

    Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop country-wide inundation maps for different return periods, a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff are then input to a two-dimensional inundation model, which produces the flood maps. In order to obtain realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach, which includes additional snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach. Both models were calibrated using the multiobjective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period from 1979-2013. Additional gauges having no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and shows the difficulty of simulating areas with sinks, such as karstic areas, and dry areas. Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24 (1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method), J. Hydr. Research, 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on evolutionary computation, 6(2), 182-197.
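
    A minimal sketch of routing a hydrograph through one reach with the classic Muskingum recursion; the Muskingum-Cunge variant used in the study additionally derives K and X from reach geometry and wave celerity, which is not shown. Parameter values and the inflow series are placeholders.

      import numpy as np

      def muskingum_route(inflow, K=3.0, X=0.2, dt=1.0, q0=None):
          """Route an inflow hydrograph through one reach (classic Muskingum form).

          K (storage constant, hours) and X (weighting factor) are assumed here;
          Muskingum-Cunge would compute them from channel hydraulics.
          """
          denom = 2 * K * (1 - X) + dt
          c1 = (dt - 2 * K * X) / denom
          c2 = (dt + 2 * K * X) / denom
          c3 = (2 * K * (1 - X) - dt) / denom
          out = np.empty_like(inflow, dtype=float)
          out[0] = inflow[0] if q0 is None else q0
          for t in range(1, len(inflow)):
              out[t] = c1 * inflow[t] + c2 * inflow[t - 1] + c3 * out[t - 1]
          return out

      inflow = np.array([10, 30, 80, 120, 90, 60, 40, 25, 15, 10], dtype=float)
      print(muskingum_route(inflow).round(1))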

  1. A new method of fully three dimensional analysis of stress field in the soil layer of a soil-mantled hillslope

    NASA Astrophysics Data System (ADS)

    Wu, Y. H.; Nakakita, E.

    2017-12-01

    Hillslope stability is highly related to stress equilibrium near the top surface of soil-mantled hillslopes. The stress field in a hillslope can also be significantly altered by variable groundwater motion under the influence of rainfall, as well as by different vegetation above and below the slope. Topographic irregularity, biological effects from vegetation, and variable rainfall patterns couple with one another to make the prediction of shallow landslides complicated and difficult. With an increasing tendency toward extreme rainfall, mountainous areas in Japan have suffered more and more shallow landslides. To better assess shallow landslide hazards, we develop a new mechanically based method to estimate the fully three-dimensional stress field in hillslopes. The surface soil layer of the hillslope is modelled as a poroelastic medium, and the tree surcharge on the slope surface is considered as a boundary input of stress forcing. The modelling of groundwater motion is included to alter the effective stress state in the soil layer, and the tree root reinforcement, estimated by allometric equations, is taken into account for its influence on soil strength. The Mohr-Coulomb failure theory is then used for locating possible yielding surfaces, that is, for identifying failure zones. This model is implemented using the finite element method. Finally, we performed a case study of the real event of massive shallow landslides that occurred in Hiroshima in August 2014. The result shows good agreement with the field conditions.
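
    A minimal sketch of the Mohr-Coulomb failure check applied to a candidate slip plane once the effective stresses are known; the added root-reinforcement cohesion term and all numerical values are illustrative assumptions, and the full FEM stress solution of the paper is not reproduced.

      import numpy as np

      def mohr_coulomb_fs(tau, sigma_n, pore_pressure, c_soil, c_root, phi_deg):
          """Factor of safety on a plane: shear strength / shear stress.

          tau, sigma_n  : shear and total normal stress on the plane [kPa]
          pore_pressure : pore water pressure from the groundwater model [kPa]
          c_soil, c_root: soil cohesion and an added root-reinforcement term [kPa]
          phi_deg       : effective friction angle [deg]
          """
          sigma_eff = sigma_n - pore_pressure
          strength = (c_soil + c_root) + sigma_eff * np.tan(np.radians(phi_deg))
          return strength / tau

      # Illustrative near-surface element during heavy rainfall (all values assumed)
      print(mohr_coulomb_fs(tau=18.0, sigma_n=35.0, pore_pressure=12.0,
                            c_soil=5.0, c_root=3.0, phi_deg=32.0))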

  2. Estimates of occupational safety and health impacts resulting from large-scale production of major photovoltaic technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, T.; Ungers, L.; Briggs, T.

    1980-08-01

    The purpose of this study is to estimate, both quantitatively and qualitatively, the worker and societal risks attributable to four photovoltaic cell (solar cell) production processes. Quantitative risk values were determined by use of statistics from the California semiconductor industry. The qualitative risk assessment was performed using a variety of both governmental and private sources of data. The occupational health statistics derived from the semiconductor industry were used to predict injury and fatality levels associated with photovoltaic cell manufacturing. The use of these statistics to characterize the two silicon processes described herein is defensible from the standpoint that many of the same process steps and materials are used in both the semiconductor and photovoltaic industries. These health statistics are less applicable to the gallium arsenide and cadmium sulfide manufacturing processes, primarily because of differences in the materials utilized. Although such differences tend to discourage any absolute comparisons among the four photovoltaic cell production processes, certain relative comparisons are warranted. To facilitate a risk comparison of the four processes, the number and severity of process-related chemical hazards were assessed. This qualitative hazard assessment addresses both the relative toxicity and the exposure potential of substances in the workplace. In addition to the worker-related hazards, estimates of process-related emissions and wastes are also provided.

  3. Modeling a glacial lake outburst flood process chain: the case of Lake Palcacocha and Huaraz, Peru

    NASA Astrophysics Data System (ADS)

    Somos-Valenzuela, Marcelo A.; Chisolm, Rachel E.; Rivas, Denny S.; Portocarrero, Cesar; McKinney, Daene C.

    2016-07-01

    One of the consequences of recent glacier recession in the Cordillera Blanca, Peru, is the risk of glacial lake outburst floods (GLOFs) from lakes that have formed at the base of retreating glaciers. GLOFs are often triggered by avalanches falling into glacial lakes, initiating a chain of processes that may culminate in significant inundation and destruction downstream. This paper presents simulations of all of the processes involved in a potential GLOF originating from Lake Palcacocha, the source of a previously catastrophic GLOF on 13 December 1941, killing about 1800 people in the city of Huaraz, Peru. The chain of processes simulated here includes (1) avalanches above the lake; (2) lake dynamics resulting from the avalanche impact, including wave generation, propagation, and run-up across lakes; (3) terminal moraine overtopping and dynamic moraine erosion simulations to determine the possibility of breaching; (4) flood propagation along downstream valleys; and (5) inundation of populated areas. The results of each process feed into simulations of subsequent processes in the chain, finally resulting in estimates of inundation in the city of Huaraz. The results of the inundation simulations were converted into flood intensity and preliminary hazard maps (based on an intensity-likelihood matrix) that may be useful for city planning and regulation. Three avalanche events with volumes ranging from 0.5 to 3 × 106 m3 were simulated, and two scenarios of 15 and 30 m lake lowering were simulated to assess the potential of mitigating the hazard level in Huaraz. For all three avalanche events, three-dimensional hydrodynamic models show large waves generated in the lake from the impact resulting in overtopping of the damming moraine. Despite very high discharge rates (up to 63.4 × 103 m3 s-1), the erosion from the overtopping wave did not result in failure of the damming moraine when simulated with a hydro-morphodynamic model using excessively conservative soil characteristics that provide very little erosion resistance. With the current lake level, all three avalanche events result in inundation in Huaraz due to wave overtopping, and the resulting preliminary hazard map shows a total affected area of 2.01 km2, most of which is in the high hazard category. Lowering the lake has the potential to reduce the affected area by up to 35 %, resulting in a smaller portion of the inundated area in the high hazard category.

  4. Epidemiology of inflammatory bowel disease among participants of the Millennium Cohort: incidence, deployment-related risk factors, and antecedent episodes of infectious gastroenteritis.

    PubMed

    Porter, C K; Welsh, M; Riddle, M S; Nieh, C; Boyko, E J; Gackstetter, G; Hooper, T I

    2017-04-01

    Crohn's disease (CD) and ulcerative colitis (UC) are two pathotypes of inflammatory bowel disease (IBD) with unique pathology, risk factors and significant morbidity. To estimate incidence and identify IBD risk factors in a US military population, a healthy subset of the US population, using information from the Millennium Cohort Study. Incident IBD was identified from medical encounters from 2001 to 2009 or by self-report. Our primary risk factor of interest, infectious gastroenteritis, was identified from medical encounters and self-reported post-deployment health assessments. Other potential risk factors were assessed using self-reported survey responses and military personnel files. Hazard ratios were estimated using Cox proportional hazards analysis. We estimated 23.2 and 21.9 diagnoses per 100 000 person-years, respectively, for CD and UC. For CD, significant risk factors included [adjusted hazard ratio (aHR), 95% confidence interval]: current smoking (aHR: 2.7, 1.4-5.1), two life stressors (aHR: 2.8, 1.4-5.6) and prior irritable bowel syndrome (aHR: 4.7, 1.5-15.2). There was no significant association with prior infectious gastroenteritis. There was an apparent dose-response relationship between UC risk and an increasing number of life stressors. In addition, antecedent infectious gastroenteritis was associated with almost a three-fold increase in UC risk (aHR: 2.9, 1.4-6.0). Moderate alcohol consumption (aHR: 0.4, 0.2-0.6) was associated with lower UC risk. Stressful conditions and the high risk of infectious gastroenteritis in deployment operations may play a role in the development of IBD in military populations. However, observed differences in risk factors for UC and CD warrant further investigation. © 2017 John Wiley & Sons Ltd.

  5. A 3D Polymer Based Printed Two-Dimensional Laser Scanner

    NASA Astrophysics Data System (ADS)

    Oyman, H. A.; Gokdel, Y. D.; Ferhanoglu, O.; Yalcinkaya, A. D.

    2016-10-01

    A two-dimensional (2D) polymer-based scanning mirror with magnetic actuation is developed for imaging applications. The proposed device consists of a circular suspension holding a rectangular mirror and can generate a 2D scan pattern. Three-dimensional (3D) printing technology, which is used for the implementation of the device, offers added flexibility in controlling the cross-sectional profile as well as the stress distribution compared to traditional planar process technologies. The mirror device is developed with a portable, miniaturized confocal microscope application in mind, delivering 4.5 and 4.8 degrees of optical scan angle at 111 and 267 Hz, respectively. As a result of this mechanical performance, the resulting microscope incorporating the mirror is estimated to accomplish a field of view (FOV) of 350 µm × 350 µm.

  6. Thermal conductivity in one-dimensional nonlinear systems

    NASA Astrophysics Data System (ADS)

    Politi, Antonio; Giardinà, Cristian; Livi, Roberto; Vassalli, Massimo

    2000-03-01

    The thermal conductivity of one-dimensional nonlinear systems typically diverges in the thermodynamic limit whenever momentum is conserved (i.e. in the absence of interactions with an external substrate). Evidence comes from detailed studies of Fermi-Pasta-Ulam and diatomic Toda chains. Here, we discuss the first example of a one-dimensional system obeying Fourier's law: a chain of coupled rotators. Numerical estimates of the thermal conductivity obtained by simulating a chain in contact with two thermal baths at different temperatures are found to be consistent with those based on linear response theory. The dynamics of the Fourier modes provides direct evidence of energy diffusion. The finiteness of the conductivity is traced back to the occurrence of phase jumps. Our conclusions are confirmed by the analysis of two variants of the rotator model.

  7. Additive hazards regression and partial likelihood estimation for ecological monitoring data across space.

    PubMed

    Lin, Feng-Chang; Zhu, Jun

    2012-01-01

    We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.
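
    As a hedged, much-simplified stand-in for the additive hazards framework described above, the sketch below fits Aalen's additive model with lifelines on simulated monitoring-style data; the paper's partial-likelihood estimator with spatio-temporal autoregression is not implemented here, and all data and column names are placeholders.

      import numpy as np
      import pandas as pd
      from lifelines import AalenAdditiveFitter

      # Simulated monitoring data: time to colonization with two covariates
      rng = np.random.default_rng(2)
      n = 300
      x1, x2 = rng.normal(size=n), rng.binomial(1, 0.4, size=n)
      T = rng.exponential(1.0 / (0.1 + 0.05 * (x1 > 0) + 0.1 * x2))
      C = rng.uniform(1, 15, size=n)                    # censoring times
      df = pd.DataFrame({"time": np.minimum(T, C),
                         "event": (T <= C).astype(int),
                         "x1": x1, "x2": x2})

      aaf = AalenAdditiveFitter(coef_penalizer=0.1)     # small ridge term for stability
      aaf.fit(df, duration_col="time", event_col="event")
      print(aaf.cumulative_hazards_.head())             # time-varying cumulative coefficients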

  8. Smoothing spline ANOVA frailty model for recurrent event data.

    PubMed

    Du, Pang; Jiang, Yihua; Wang, Yuedong

    2011-12-01

    Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data. © 2011, The International Biometric Society.

  9. Linear and nonlinear variable selection in competing risks data.

    PubMed

    Ren, Xiaowei; Li, Shanshan; Shen, Changyu; Yu, Zhangsheng

    2018-06-15

    The subdistribution hazard model for competing risks data has been applied extensively in clinical research. Variable selection methods for linear effects in competing risks data have been studied in the past decade. There is no existing work on the selection of potential nonlinear effects for the subdistribution hazard model. We propose a two-stage procedure to select the linear and nonlinear covariate(s) simultaneously and estimate the selected covariate effect(s). We use a spectral decomposition approach to distinguish the linear and nonlinear parts of each covariate and adaptive LASSO to select each of the two components. Extensive numerical studies are conducted to demonstrate that the proposed procedure can achieve good selection accuracy in the first stage and small estimation biases in the second stage. The proposed method is applied to analyze a cardiovascular disease data set with competing death causes. Copyright © 2018 John Wiley & Sons, Ltd.

  10. MODELS TO ESTIMATE VOLATILE ORGANIC HAZARDOUS AIR POLLUTANT EMISSIONS FROM MUNICIPAL SEWER SYSTEMS

    EPA Science Inventory

    Emissions from municipal sewers are usually omitted from hazardous air pollutant (HAP) emission inventories. This omission may result from a lack of appreciation for the potential emission impact and/or from inadequate emission estimation procedures. This paper presents an analys...

  11. Injuries Associated with Specific Motor Vehicle Hazards: Radiators, Batteries, Power Windows, and Power Roofs

    DOT National Transportation Integrated Search

    1997-07-01

    This report provides estimates of the numbers of persons injured as a result of hazards involving four specific motor vehicle components: radiators, batteries, power windows, and power roofs. The injury estimates are based upon data from the Co...

  12. Estimating Angle-of-Arrival and Time-of-Flight for Multipath Components Using WiFi Channel State Information.

    PubMed

    Ahmed, Afaz Uddin; Arablouei, Reza; Hoog, Frank de; Kusy, Branislav; Jurdak, Raja; Bergmann, Neil

    2018-05-29

    Channel state information (CSI) collected during WiFi packet transmissions can be used for localization of commodity WiFi devices in indoor environments with multipath propagation. To this end, the angle of arrival (AoA) and time of flight (ToF) for all dominant multipath components need to be estimated. A two-dimensional (2D) version of the multiple signal classification (MUSIC) algorithm has been shown to solve this problem using 2D grid search, which is computationally expensive and is therefore not suited for real-time localisation. In this paper, we propose using a modified matrix pencil (MMP) algorithm instead. Specifically, we show that the AoA and ToF estimates can be found independently of each other using the one-dimensional (1D) MMP algorithm and the results can be accurately paired to obtain the AoA-ToF pairs for all multipath components. Thus, the 2D estimation problem reduces to running 1D estimation multiple times, substantially reducing the computational complexity. We identify and resolve the problem of degenerate performance when two or more multipath components have the same AoA. In addition, we propose a packet aggregation model that uses the CSI data from multiple packets to improve the performance under noisy conditions. Simulation results show that our algorithm achieves two orders of magnitude reduction in the computational time over the 2D MUSIC algorithm while achieving similar accuracy. High accuracy and low computation complexity of our approach make it suitable for applications that require location estimation to run on resource-constrained embedded devices in real time.
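
    A bare-bones 1D matrix pencil sketch is given below to illustrate the kind of estimation the abstract describes: the complex poles of a sum of exponentials are recovered from CSI-like samples and converted to delays. The subcarrier spacing, delays, absence of noise filtering, and the omission of the authors' AoA-ToF pairing step are all simplifying assumptions; this is not the paper's MMP algorithm.

      import numpy as np

      def matrix_pencil(x, M, L=None):
          """Estimate M complex poles z_k from samples x[n] = sum_k a_k * z_k**n.

          Build the Hankel data matrix, split it into the shifted pair (Y0, Y1)
          and take the dominant eigenvalues of pinv(Y0) @ Y1.  No SVD-based
          noise filtering is applied in this sketch.
          """
          N = len(x)
          L = L or N // 2                      # pencil parameter, M <= L <= N - M
          Y = np.array([x[i:i + L + 1] for i in range(N - L)])
          Y0, Y1 = Y[:, :-1], Y[:, 1:]
          lam = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
          return lam[np.argsort(-np.abs(lam))][:M]

      # Example: two delays seen across 30 subcarriers spaced 312.5 kHz apart
      df_hz, taus = 312.5e3, np.array([30e-9, 80e-9])
      n = np.arange(30)
      x = sum(np.exp(-2j * np.pi * df_hz * tau * n) for tau in taus)
      z = matrix_pencil(x, M=2)
      tof = -np.angle(z) / (2 * np.pi * df_hz)     # recover the ToFs (seconds)
      print(np.sort(tof) * 1e9)                    # approximately [30, 80] ns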

  13. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    NASA Astrophysics Data System (ADS)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized due to their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally by their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on site-specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially interpolating 100-year dew point values rather than a more gauge-based approach. Site-specific reviews demonstrated that both issues had the potential to lower the PMP estimate significantly by affecting the in-place and transposed moisture maximization value and, in turn, the final controlling storm for a given basin size and PMP estimate.

  14. Time frequency requirements for radio interferometric earth physics

    NASA Technical Reports Server (NTRS)

    Thomas, J. B.; Fliegel, H. F.

    1973-01-01

    Two systems of VLBI (Very Long Baseline Interferometry) are now applicable to earth physics: an intercontinental baseline system using antennas of the NASA Deep Space Network, now observing at one-month intervals to determine UT1 for spacecraft navigation; and a shorter baseline system called ARIES (Astronomical Radio Interferometric Earth Surveying), to be used to measure crustal movement in California for earthquake hazard estimation. On the basis of experience with the existing DSN system, a careful study has been made to estimate the time and frequency requirements of both the improved intercontinental system and of ARIES. Requirements for the two systems are compared and contrasted.

  15. Numerical Model of Flame Spread Over Solids in Microgravity: A Supplementary Tool for Designing a Space Experiment

    NASA Technical Reports Server (NTRS)

    Shih, Hsin-Yi; Tien, James S.; Ferkul, Paul (Technical Monitor)

    2001-01-01

    The recently developed numerical model of concurrent-flow flame spread over thin solids has been used as a simulation tool to help the design of a space experiment. The two-dimensional and three-dimensional, steady forms of the compressible Navier-Stokes equations with chemical reactions are solved. With the coupled multi-dimensional solver for radiative heat transfer, the model is capable of answering a number of questions regarding the experiment concept and the hardware design. In this paper, the capabilities of the numerical model are demonstrated by providing guidance for several experimental design issues. The test matrix and operating conditions of the experiment are estimated through the modeling results. Three-dimensional calculations are made to simulate the flame-spreading experiment with a realistic hardware configuration. The computed detailed flame structures provide insight for the data collection. In addition, the heating load and the requirements for product exhaust cleanup in the flow tunnel are estimated with the model. We anticipate that using this simulation tool will enable a more efficient and successful space experiment to be conducted.

  16. Probabilistic hazard assessment for skin sensitization potency by dose–response modeling using feature elimination instead of quantitative structure–activity relationships

    PubMed Central

    McKim, James M.; Hartung, Thomas; Kleensang, Andre; Sá-Rocha, Vanessa

    2016-01-01

    Supervised learning methods promise to improve integrated testing strategies (ITS), but must be adjusted to handle high dimensionality and dose–response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combine skin sensitization data sets, such as weight of evidence, fail due to problems in information redundancy and high dimensionality. The problem is further amplified when potency information (dose/response) of hazards would be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large high-quality data sets. We curated such a data set and combined a recursive variable selection algorithm to evaluate the information available through in silico, in chemico and in vitro assays. Chemical similarity alone could not cluster chemicals' potency, and in vitro models consistently ranked high in recursive feature elimination. This allows reducing the number of tests included in an ITS. Next, we analyzed with a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e. the monotonous connection between local lymph node assay and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although balanced accuracy improvement may seem small, this obscures the actual improvement in misclassifications as the dose-informed hidden Markov model strongly reduced "false-negatives" (i.e. extreme sensitizers as non-sensitizer) on all data sets. PMID:26046447

  17. Probabilistic hazard assessment for skin sensitization potency by dose-response modeling using feature elimination instead of quantitative structure-activity relationships.

    PubMed

    Luechtefeld, Thomas; Maertens, Alexandra; McKim, James M; Hartung, Thomas; Kleensang, Andre; Sá-Rocha, Vanessa

    2015-11-01

    Supervised learning methods promise to improve integrated testing strategies (ITS), but must be adjusted to handle high dimensionality and dose-response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combine skin sensitization data sets, such as weight of evidence, fail due to problems in information redundancy and high dimensionality. The problem is further amplified when potency information (dose/response) of hazards is to be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large high-quality data sets. We curated such a data set and combined a recursive variable selection algorithm to evaluate the information available through in silico, in chemico and in vitro assays. Chemical similarity alone could not cluster chemicals' potency, and in vitro models consistently ranked high in recursive feature elimination. This allows reducing the number of tests included in an ITS. Next, we analyzed the data with a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e. the monotonic relationship between local lymph node assay class and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although the balanced accuracy improvement may seem small, this obscures the actual improvement in misclassifications, as the dose-informed hidden Markov model strongly reduced "false-negatives" (i.e. extreme sensitizers classified as non-sensitizers) on all data sets. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Potential postwildfire debris-flow hazards: a prewildfire evaluation for the Sandia and Manzano Mountains and surrounding areas, central New Mexico

    USGS Publications Warehouse

    Tillery, Anne C.; Haas, Jessica R.; Miller, Lara W.; Scott, Joe H.; Thompson, Matthew P.

    2014-01-01

    Wildfire can drastically increase the probability of debris flows, a potentially hazardous and destructive form of mass wasting, in landscapes that have otherwise been stable throughout recent history. Although there is no way to know the exact location, extent, and severity of wildfire, or the subsequent rainfall intensity and duration before it happens, probabilities of fire and debris-flow occurrence for different locations can be estimated with geospatial analysis and modeling efforts. The purpose of this report is to provide information on which watersheds might constitute the most serious potential debris-flow hazards in the event of a large-scale wildfire and subsequent rainfall in the Sandia and Manzano Mountains. Potential probabilities and estimated volumes of postwildfire debris flows in the unburned Sandia and Manzano Mountains and surrounding areas were estimated using empirical debris-flow models developed by the U.S. Geological Survey in combination with fire behavior and burn probability models developed by the U.S. Department of Agriculture Forest Service. The locations of the greatest debris-flow hazards correlate with the areas of steepest slopes and simulated crown-fire behavior. The four subbasins with the highest computed debris-flow probabilities (greater than 98 percent) were all in the Manzano Mountains, two flowing east and two flowing west. Estimated volumes in sixteen subbasins were greater than 50,000 cubic meters, and most of these were in the central Manzanos and the west-facing slopes of the Sandias. Five subbasins on the west-facing slopes of the Sandia Mountains, four of which have downstream reaches that lead into the outskirts of the City of Albuquerque, are among subbasins in the 98th percentile of integrated relative debris-flow hazard rankings. The bulk of the remaining subbasins in the 98th percentile of integrated relative debris-flow hazard rankings are located along the highest and steepest slopes of the Manzano Mountains. One of the subbasins is several miles upstream from the community of Tajique and another is several miles upstream from the community of Manzano, both on the eastern slopes of the Manzano Mountains. This prewildfire assessment approach is valuable to resource managers because the analysis of the debris-flow threat is made before a wildfire occurs, which facilitates prewildfire management, planning, and mitigation. In northern New Mexico, widespread watershed restoration efforts are being carried out to safeguard vital watersheds against the threat of catastrophic wildfire. This study was initiated to help select ideal locations for the restoration efforts that could have the best return on investment.
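
    The USGS empirical postwildfire debris-flow models referenced above take a logistic form, mapping basin, burn, and rainfall predictors to an occurrence probability. The sketch below shows only that general structure; the predictor set and every coefficient are hypothetical placeholders, not the calibrated models used in the report.

```python
# Minimal sketch of a logistic-form postwildfire debris-flow probability model.
# The predictor names and all coefficients below are hypothetical placeholders,
# not the calibrated USGS model used in the report.
import math

def debris_flow_probability(pct_steep_slope, pct_burned_high_severity,
                            clay_content_pct, storm_rainfall_mm,
                            coeffs=(-0.7, 0.03, 0.04, -0.02, 0.07)):
    """Return P(debris flow) from a linear predictor passed through a logistic link."""
    b0, b1, b2, b3, b4 = coeffs
    x = (b0
         + b1 * pct_steep_slope
         + b2 * pct_burned_high_severity
         + b3 * clay_content_pct
         + b4 * storm_rainfall_mm)
    return 1.0 / (1.0 + math.exp(-x))

# Example subbasin: steep, severely burned, hit by an intense storm
print(round(debris_flow_probability(60, 45, 10, 25), 2))
```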

  19. ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation

    NASA Technical Reports Server (NTRS)

    Richardson, A. O.

    1996-01-01

    This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILIAD performed best when queries were short (one-to-three keywords) and were made up of rare, unambiguous words. In such cases as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format that was not so rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman filter for estimating motion parameters of a three-dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values representing X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated, comprising the six deviations and their time rates of change. Values for the state transition matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initializing the error covariance matrix was pointed out.
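
    The second part of the record describes a standard linear Kalman filter over a 12-dimensional state (six pose deviations and their rates). A minimal sketch of that predict/update cycle is given below; the transition, observation, and noise covariance matrices are generic placeholders rather than the values determined in the report.

```python
# Minimal sketch of a 12-state (6 pose deviations + their rates) constant-velocity
# Kalman filter. F, H, Q, R are generic placeholders, not the report's values.
import numpy as np

dt = 0.1                                   # frame interval, s (placeholder)
n = 6                                      # X, Y, Z, pitch, yaw, roll

F = np.eye(2 * n)                          # state transition: deviation += rate * dt
F[:n, n:] = dt * np.eye(n)
H = np.hstack([np.eye(n), np.zeros((n, n))])   # only the 6 deviations are observed
Q = 1e-4 * np.eye(2 * n)                   # process noise covariance (placeholder)
R = 1e-2 * np.eye(n)                       # observation noise covariance (placeholder)

x = np.zeros(2 * n)                        # state estimate
P = np.eye(2 * n)                          # simple error-covariance initialization

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with one noisy measurement z of the 6 deviations
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2 * n) - K @ H) @ P
    return x, P

z = np.array([0.1, -0.05, 0.02, 0.01, 0.0, -0.01])   # placeholder measurement
x, P = kalman_step(x, P, z)
```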

  20. Evaluation of odometry algorithm performances using a railway vehicle dynamic model

    NASA Astrophysics Data System (ADS)

    Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.

    2012-05-01

    In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; a high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for Hardware in the Loop test rig testing of conventional odometry algorithms and of on-board safety relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from measurements of the wheel angular speed. Two-dimensional models are not suitable for developing solutions such as inertial localisation algorithms (using 3D accelerometers and 3D gyroscopes) or for introducing the Global Positioning System (or similar) or magnetometers. In order to test these algorithms correctly and increase odometry performance, a three-dimensional multibody model of a railway vehicle has been developed, using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful for evaluating the performance of odometry algorithms and of safety relevant on-board subsystems.

  1. Chemometrics comparison of gas chromatography with mass spectrometry and comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry Daphnia magna metabolic profiles exposed to salinity.

    PubMed

    Parastar, Hadi; Garreta-Lara, Elba; Campos, Bruno; Barata, Carlos; Lacorte, Silvia; Tauler, Roma

    2018-06-01

    The performances of gas chromatography with mass spectrometry and of comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry are examined through the comparison of Daphnia magna metabolic profiles. Gas chromatography with mass spectrometry and comprehensive two-dimensional gas chromatography with mass spectrometry were used to compare the concentration changes of metabolites under saline conditions. In this regard, a chemometric strategy based on wavelet compression and multivariate curve resolution-alternating least squares is used to compare the performances of gas chromatography with mass spectrometry and comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry for the untargeted metabolic profiling of Daphnia magna in control and salinity-exposed samples. Examination of the results confirmed that comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry outperformed gas chromatography with mass spectrometry for the detection of metabolites in D. magna samples. The peak areas of multivariate curve resolution-alternating least squares resolved elution profiles in every sample analyzed by comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry were arranged in a new data matrix that was then modeled by partial least squares discriminant analysis. The control and salt-exposed daphnid samples were discriminated, and the most relevant metabolites were estimated using variable importance in projection and selectivity ratio values. Salinity de-regulated 18 metabolites from metabolic pathways involved in protein translation, transmembrane cell transport, carbon metabolism, secondary metabolism, glycolysis, and osmoregulation. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
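
    Multivariate curve resolution-alternating least squares factors each chromatographic data matrix D into non-negative elution profiles C and spectra S, with D ≈ C Sᵀ. The bare-bones sketch below shows only that alternating scheme on placeholder data; it omits the wavelet compression, additional constraints, and GC×GC data handling used in the study.

```python
# Bare-bones MCR-ALS sketch: D (elution time x m/z) ~= C @ S.T with non-negativity,
# enforced here simply by clipping after each least-squares step. Illustration only;
# the data matrix is a random placeholder.
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    C = np.abs(rng.normal(size=(D.shape[0], n_components)))      # initial profiles
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)    # spectra
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)  # elution profiles
    return C, S

D = np.abs(np.random.default_rng(1).normal(size=(300, 80)))      # placeholder data matrix
C, S = mcr_als(D, n_components=3)
print(C.shape, S.shape)   # (300, 3) (80, 3)
```

    Peak areas for the subsequent partial least squares discriminant analysis would then be taken as column sums of the resolved elution profiles.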

  2. Assessing the Impacts of Flooding Caused by Extreme Rainfall Events Through a Combined Geospatial and Numerical Modeling Approach

    NASA Astrophysics Data System (ADS)

    Santillan, J. R.; Amora, A. M.; Makinano-Santillan, M.; Marqueso, J. T.; Cutamora, L. C.; Serviano, J. L.; Makinano, R. M.

    2016-06-01

    In this paper, we present a combined geospatial and two-dimensional (2D) flood modeling approach to assess the impacts of flooding due to extreme rainfall events. We developed and applied this approach in the Tago River Basin in the province of Surigao del Sur in Mindanao, Philippines, an area which suffered great damage from flooding caused by Tropical Storms Lingling and Jangmi in 2014. The geospatial component of the approach involves extraction of several layers of information, such as detailed topography/terrain and man-made features (buildings, roads, bridges) from 1-m spatial resolution LiDAR Digital Surface and Terrain Models (DSM/DTMs), and recent land cover from Landsat 7 ETM+ and Landsat 8 OLI images. We then used these layers as inputs in developing a Hydrologic Engineering Center Hydrologic Modeling System (HEC-HMS)-based hydrologic model, and a hydraulic model based on the 2D module of the latest version of HEC River Analysis System (HEC-RAS), to dynamically simulate and map the depth and extent of flooding due to extreme rainfall events. The extreme rainfall events used in the simulation represent six hypothetical events with return periods of 2, 5, 10, 25, 50, and 100 years. For each event, maximum flood depth maps were generated from the simulations, and these maps were further transformed into hazard maps by categorizing the flood depth into low, medium, and high hazard levels. Using both the flood hazard maps and the layers of information extracted from remotely sensed datasets in spatial overlay analysis, we were then able to estimate and assess the impacts of these flooding events on buildings, roads, bridges, and land cover. Results of the assessments revealed increases in the number of buildings, roads, and bridges, and in the area of land cover, exposed to various flood hazards as rainfall events become more extreme. The wealth of information generated from the flood impact assessment can be very useful to the local government units and the concerned communities within the Tago River Basin as an aid in determining in advance which infrastructure (buildings, roads, and bridges) and land cover can be affected by different extreme rainfall flood scenarios.
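
    The hazard-mapping step described above amounts to reclassifying a maximum flood-depth grid into low, medium, and high levels. The sketch below illustrates that reclassification with NumPy; the depth thresholds are hypothetical placeholders, since the paper does not state them here.

```python
# Minimal sketch: reclassify a maximum flood-depth grid (metres) into hazard levels.
# The 0.5 m and 1.5 m thresholds are hypothetical placeholders, not the study's values.
import numpy as np

depth = np.array([[0.0, 0.3, 0.9],
                  [1.2, 2.4, 0.1],
                  [0.0, 0.6, 3.1]])        # max simulated depth per cell (placeholder)

hazard = np.digitize(depth, bins=[0.01, 0.5, 1.5])   # 0=dry, 1=low, 2=medium, 3=high
counts = {label: int((hazard == k).sum())
          for k, label in enumerate(["dry", "low", "medium", "high"])}
print(counts)
```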

  3. Heat as a tracer to estimate dissolved organic carbon flux from a restored wetland

    USGS Publications Warehouse

    Burow, K.R.; Constantz, J.; Fujii, R.

    2005-01-01

    Heat was used as a natural tracer to characterize shallow ground water flow beneath a complex wetland system. Hydrogeologic data were combined with measured vertical temperature profiles to constrain a series of two-dimensional, transient simulations of ground water flow and heat transport using the model code SUTRA (Voss 1990). The measured seasonal temperature signal reached depths of 2.7 m beneath the pond. Hydraulic conductivity was varied in each of the layers in the model in a systematic manual calibration of the two-dimensional model to obtain the best fit to the measured temperature and hydraulic head. Results of a series of representative best-fit simulations represent a range in hydraulic conductivity values that had the best agreement between simulated and observed temperatures and that resulted in simulated pond seepage values within 1 order of magnitude of pond seepage estimated from the water budget. Resulting estimates of ground water discharge to an adjacent agricultural drainage ditch were used to estimate potential dissolved organic carbon (DOC) loads resulting from the restored wetland. Estimated DOC loads ranged from 45 to 1340 g C/(m2 year), which is higher than estimated DOC loads from surface water. In spite of the complexity in characterizing ground water flow in peat soils, using heat as a tracer provided a constrained estimate of subsurface flow from the pond to the agricultural drainage ditch. Copyright © 2005 National Ground Water Association.

  4. Thermodynamics of Yukawa fluids near the one-component-plasma limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrapak, Sergey A.; Aix-Marseille-Université, CNRS, Laboratoire PIIM, UMR 7345, 13397 Marseille Cedex 20; Semenov, Igor L.

    Thermodynamics of weakly screened (near the one-component-plasma limit) Yukawa fluids in two and three dimensions is analyzed in detail. It is shown that the thermal component of the excess internal energy of these fluids, when expressed in terms of the properly normalized coupling strength, exhibits the scaling pertinent to the corresponding one-component-plasma limit (the scalings differ considerably between the two- and three-dimensional situations). This provides us with a simple and accurate practical tool to estimate thermodynamic properties of weakly screened Yukawa fluids. Particular attention is paid to the two-dimensional fluids, for which several important thermodynamic quantities are calculated to illustrate the application of the approach.

  5. A methodology for physically based rockfall hazard assessment

    NASA Astrophysics Data System (ADS)

    Crosta, G. B.; Agliardi, F.

    Rockfall hazard assessment is not simple to achieve in practice and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity, and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at a regional and local scale, both along linear features and within exposed areas. An objective approach based on three-dimensional matrixes providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrixes has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to constrain assessment criteria that are as objective as possible.

  6. Acoustic source localization in mixed field using spherical microphone arrays

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Wang, Tong

    2014-12-01

    Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrent relation of spherical harmonics is independent of the far-field and near-field mode strengths. Therefore, it is used to develop a spherical estimation of signal parameters via rotational invariance technique (ESPRIT)-like approach to estimate the directions of arrival (DOAs) of both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance in localizing far-field, near-field, and mixed-field sources.
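
    The second stage relies on scanning a one-dimensional MUSIC spectrum. As a generic illustration of the subspace idea (not the spherical harmonics domain formulation of the paper), the sketch below runs narrowband MUSIC on a uniform linear array with placeholder signals.

```python
# Generic narrowband MUSIC sketch for a uniform linear array, to illustrate the
# subspace idea behind the paper's 1-D spectrum search. This is NOT the spherical
# harmonics formulation; array geometry and signals are placeholders.
import numpy as np

rng = np.random.default_rng(0)
M, d, n_snapshots = 8, 0.5, 200                 # sensors, spacing (wavelengths), snapshots
true_doas = np.deg2rad([-20.0, 35.0])

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_doas)                         # M x K steering matrix
sig = rng.normal(size=(2, n_snapshots)) + 1j * rng.normal(size=(2, n_snapshots))
X = A @ sig + 0.1 * (rng.normal(size=(M, n_snapshots)) +
                     1j * rng.normal(size=(M, n_snapshots)))

R = X @ X.conj().T / n_snapshots                # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-2]                            # noise subspace (K = 2 sources)

grid = np.deg2rad(np.linspace(-90, 90, 721))
proj = En.conj().T @ steering(grid)             # project grid steering vectors
p_music = 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

peaks = [i for i in range(1, len(grid) - 1)
         if p_music[i] > p_music[i - 1] and p_music[i] > p_music[i + 1]]
top2 = sorted(peaks, key=lambda i: p_music[i])[-2:]
print(sorted(np.rad2deg(grid[top2])))           # approximately [-20, 35]
```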

  7. Sex Estimation from Human Cranium: Forensic and Anthropological Interest of Maxillary Sinus Volumes.

    PubMed

    Radulesco, Thomas; Michel, Justin; Mancini, Julien; Dessi, Patrick; Adalian, Pascal

    2018-05-01

    Sex estimation is a key objective of forensic science. We aimed to establish whether maxillary sinus volumes (MSV) could assist in estimating an individual's sex. One hundred and three CT scans were included. MSV were determined using three-dimensional reconstructions. Two observers performed three-dimensional MSV reconstructions using the same methods. Intra- and interobserver reproducibility were statistically compared using the intraclass correlation coefficient (ICC) (α = 5%). Both intra- and interobserver reproducibility were perfect regarding MSV; both ICCs were 100%. There were no significant differences between right and left MSV (p = 0.083). No correlation was found between age and MSV (p > 0.05). We demonstrated the existence of sexual dimorphism in MSV (p < 0.001) and showed that MSV measurements gave a 68% rate of correct allocations to sex group. MSV measurements could be useful to support sex estimation in forensic medicine. © 2017 American Academy of Forensic Sciences.

  8. Numerical estimation of structure constants in the three-dimensional Ising conformal field theory through Markov chain uv sampler

    NASA Astrophysics Data System (ADS)

    Herdeiro, Victor

    2017-09-01

    Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] introduced a numerical recipe, dubbed uv sampler, offering precise estimations of the conformal field theory (CFT) data of the planar two-dimensional (2D) critical Ising model. It made use of scale invariance emerging at the critical point in order to sample finite sublattice marginals of the infinite plane Gibbs measure of the model by producing holographic boundary distributions. The main ingredient of the Markov chain Monte Carlo sampler is the invariance under dilation. This paper presents a generalization to higher dimensions with the critical 3D Ising model. This leads to numerical estimations of a subset of the CFT data—scaling weights and structure constants—through fitting of measured correlation functions. The results are shown to agree with the recent most precise estimations from numerical bootstrap methods [Kos, Poland, Simmons-Duffin, and Vichi, J. High Energy Phys. 08 (2016) 036, 10.1007/JHEP08(2016)036].

  9. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    PubMed

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
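
    A hedged sketch of the kind of Monte Carlo power calculation described is given below, using the lifelines package: non-proportionality is induced by giving the two groups different Weibull shapes, a Cox model is fitted, and a proportional hazards test is applied in each replicate. Sample size, effect size, censoring, and replicate count are arbitrary placeholders, and the specific model-based and martingale-residual tests compared in the paper are not reproduced.

```python
# Hedged sketch (not the paper's exact design): Monte Carlo power of a proportional
# hazards test when the true hazards are non-proportional. Requires lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(42)
n, n_sims, alpha, rejections = 200, 200, 0.05, 0

for _ in range(n_sims):
    group = rng.integers(0, 2, size=n)
    # different Weibull shapes -> hazard ratio changes over time (PH violated)
    t = np.where(group == 0,
                 rng.weibull(1.0, size=n),
                 0.8 * rng.weibull(2.0, size=n))
    censor = rng.uniform(0, 2.0, size=n)
    df = pd.DataFrame({"T": np.minimum(t, censor),
                       "E": (t <= censor).astype(int),
                       "x": group})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    res = proportional_hazard_test(cph, df, time_transform="rank")
    if float(np.atleast_1d(res.p_value)[0]) < alpha:
        rejections += 1

print("estimated power:", rejections / n_sims)
```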

  10. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model

    PubMed Central

    Austin, Peter C.

    2017-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694

  11. Size effect in thermoelectric materials

    NASA Astrophysics Data System (ADS)

    Mao, Jun; Liu, Zihang; Ren, Zhifeng

    2016-12-01

    Thermoelectric applications have attracted increasing interest recently due to their capability of converting waste heat into electricity without hazardous emissions. Materials with enhanced thermoelectric performance have been reported in the past two decades. The revival of research on thermoelectric materials began in the early 1990s when the size effect was first considered. Low-dimensional materials with exceptionally high thermoelectric figure of merit (ZT) have been presented, breaking the long-standing limit of ZT near unity. The idea of the size effect in thermoelectric materials even inspired the later nanostructuring and band engineering strategies, which effectively enhanced the thermoelectric performance of bulk materials. In this overview, the size effect in low-dimensional thermoelectric materials is reviewed. We first discuss the quantum confinement effect on carriers, including the enhancement of the electronic density of states, the semimetal-to-semiconductor transition, and carrier pocket engineering. Then, the effect of assumptions on theoretical calculations is presented. Finally, the effect of phonon confinement and interface scattering on lattice thermal conductivity is discussed.

  12. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
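
    The two-dimensional subproblems referred to above each involve fitting a small number of parameters of a Vohradský-type rate equation. As a minimal, hedged illustration, the sketch below fits two parameters (k1, k2) of a single-gene equation dz/dt = k1*sigmoid(u(t)) - k2*z(t) by least squares with SciPy; the regulatory input and the synthetic time series are placeholders, and the paper's full decomposition scheme is not reproduced.

```python
# Minimal sketch: fit two parameters (k1, k2) of a single-gene Vohradsky-type rate
# equation dz/dt = k1*sigmoid(u(t)) - k2*z(t) by 2-D least squares. The regulatory
# input u(t) and the synthetic expression series are placeholders.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 10, 50)
u = np.sin(t)                                   # placeholder regulatory input
sig = 1.0 / (1.0 + np.exp(-u))

# synthetic "observed" expression generated with k1=2.0, k2=0.5 plus noise
z = np.zeros_like(t)
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    z[i] = z[i - 1] + dt * (2.0 * sig[i - 1] - 0.5 * z[i - 1])
z_obs = z + np.random.default_rng(0).normal(0, 0.02, size=z.shape)

dzdt = np.gradient(z_obs, t)                    # crude derivative estimate

def sse(params):
    k1, k2 = params
    return np.sum((dzdt - (k1 * sig - k2 * z_obs)) ** 2)

fit = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
print(fit.x)                                    # approximately [2.0, 0.5]
```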

  13. Fifty-year flood-inundation maps for Nacaome, Honduras

    USGS Publications Warehouse

    Kresch, David L.; Mastin, M.C.; Olsen, T.D.

    2002-01-01

    After the devastating floods caused by Hurricane Mitch in 1998, maps of the areas and depths of 50-year-flood inundation at 15 municipalities in Honduras were prepared as a tool for agencies involved in reconstruction and planning. This report, which is one in a series of 15, presents maps of areas in the municipality of Nacaome that would be inundated by 50-year floods on Rio Nacaome, Rio Grande, and Rio Guacirope. Geographic Information System (GIS) coverages of the flood inundation are available on a computer in the municipality of Nacaome as part of the Municipal GIS project and on the Internet at the Flood Hazard Mapping Web page (http://mitchnts1.cr.usgs.gov/projects/floodhazard.html). These coverages allow users to view the flood inundation in much more detail than is possible using the maps in this report. Water-surface elevations for 50-year-floods on Rio Nacaome, Rio Grande, and Rio Guacirope at Nacaome were computed using HEC-RAS, a one-dimensional, steady-flow, step-backwater computer program. The channel and floodplain cross sections used in HEC-RAS were developed from an airborne light-detection-and-ranging (LIDAR) topographic survey of the area and ground surveys at two bridges. The estimated 50-year-flood discharge for Rio Nacaome at Nacaome, 5,040 cubic meters per second, was computed as the drainage-area-adjusted weighted average of two independently estimated 50-year-flood discharges for the gaging station Rio Nacaome en Las Mercedes, located about 13 kilometers upstream from Nacaome. One of the discharges, 4,549 cubic meters per second, was estimated from a frequency analysis of the 16 years of peak-discharge record for the gage, and the other, 1,922 cubic meters per second, was estimated from a regression equation that relates the 50-year-flood discharge to drainage area and mean annual precipitation. The weighted-average of the two discharges is 3,770 cubic meters per second. The 50-year-flood discharges for Rio Grande, 3,890 cubic meters per second, and Rio Guacirope, 1,080 cubic meters per second, were also computed by adjusting the weighted-average 50-year-flood discharge for the Rio Nacaome en Las Mercedes gaging station for the difference in drainage areas between the gage and these river reaches.

  14. Child labor still with us after all these years.

    PubMed Central

    Landrigan, P J; McCammon, J B

    1997-01-01

    Child labor is a major threat to the health of children in the United States. The U.S. Department of Labor estimates that more than four million children are legally employed and that another one to two million are employed under illegal, often exploitative conditions. Across the United States, child labor accounts for 20,000 workers' compensation claims, 200,000 injuries, thousands of cases of permanent disability, and more than 70 deaths each year. Agriculture and newspaper delivery are the two most hazardous areas of employment for children and adolescents. Poverty, massive immigration, and relaxation in enforcement of Federal child labor law are the three factors principally responsible for the last two decades' resurgence of child labor in the United States. Control of the hazards of child labor will require a combination of strategies including vigorous enforcement, education, and public health surveillance. PMID:10822472

  15. An audit strategy for progression-free survival

    PubMed Central

    Dodd, Lori E.; Korn, Edward L.; Freidlin, Boris; Gray, Robert; Bhattacharya, Suman

    2010-01-01

    Summary In randomized clinical trials, the use of potentially subjective endpoints has led to frequent use of blinded independent central review (BICR) and event adjudication committees to reduce possible bias in treatment effect estimators based on local evaluations (LE). In oncology trials, progression-free survival (PFS) is one such endpoint. PFS requires image interpretation to determine whether a patient’s cancer has progressed, and BICR has been advocated to reduce the potential for endpoints to be biased by knowledge of treatment assignment. There is current debate, however, about the value of such reviews with time-to-event outcomes like PFS. We propose a BICR audit strategy as an alternative to a complete-case BICR to provide assurance of the presence of a treatment effect. We develop an auxiliary-variable estimator of the log-hazard ratio that is more efficient than simply using the audited (i.e., sampled) BICR data for estimation. Our estimator incorporates information from the LE on all the cases and the audited BICR cases, and is an asymptotically unbiased estimator of the log-hazard ratio from BICR. The estimator offers considerable efficiency gains that improve as the correlation between LE and BICR increases. A two-stage auditing strategy is also proposed and evaluated through simulation studies. The method is applied retrospectively to a large oncology trial that had a complete-case BICR, showing the potential for efficiency improvements. PMID:21210772

  16. Prioritizing hazardous pollutants in two Nigerian water supply schemes: a risk-based approach.

    PubMed

    Etchie, Ayotunde T; Etchie, Tunde O; Adewuyi, Gregory O; Krishnamurthi, Kannan; Saravanadevi, S; Wate, Satish R

    2013-08-01

    To rank pollutants in two Nigerian water supply schemes according to their effect on human health using a risk-based approach. Hazardous pollutants in drinking-water in the study area were identified from a literature search and selected pollutants were monitored from April 2010 to December 2011 in catchments, treatment works and consumer taps. The disease burden due to each pollutant was estimated in disability-adjusted life years (DALYs) using data on the pollutant's concentration, exposure to the pollutant, the severity of its health effects and the consumer population. The pollutants identified were microbial organisms, cadmium, cobalt, chromium, copper, iron, manganese, nickel, lead and zinc. All were detected in the catchments but only cadmium, cobalt, chromium, manganese and lead exceeded World Health Organization (WHO) guideline values after water treatment. Post-treatment contamination was observed. The estimated disease burden was greatest for chromium in both schemes, followed in decreasing order by cadmium, lead, manganese and cobalt. The total disease burden of all pollutants in the two schemes was 46 000 and 9500 DALYs per year or 0.14 and 0.088 DALYs per person per year, respectively, much higher than the WHO reference level of 1 × 10⁻⁶ DALYs per person per year. For each metal, the disease burden exceeded the reference level and was comparable with that due to microbial contamination reported elsewhere in Africa. The estimated disease burden of metal contamination of two Nigerian water supply systems was high. It could best be reduced by protection of water catchment and pretreatment by electrocoagulation.
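
    The burden calculation described above multiplies a per-person annual risk of illness by a severity weight (DALYs per case) and the exposed population, then compares the per-person burden against the WHO reference level. The sketch below shows that arithmetic; all numeric inputs are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of a DALY-based burden comparison for one pollutant and one scheme.
# All numeric inputs are hypothetical placeholders, not values from the study.
def annual_daly_burden(risk_per_person_year, dalys_per_case, population):
    """Total DALYs/year = per-person annual risk of illness x DALYs per case x population."""
    return risk_per_person_year * dalys_per_case * population

population = 100_000                      # consumers served (placeholder)
burden = annual_daly_burden(risk_per_person_year=2e-3,   # placeholder
                            dalys_per_case=5.0,          # placeholder severity x duration
                            population=population)
per_person = burden / population
print(f"{burden:.0f} DALYs/year, {per_person:.2e} DALYs/person/year")
print("exceeds WHO reference (1e-6)?", per_person > 1e-6)
```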

  17. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

    This paper essentially focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation-step (E-step) and the maximization (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternatively executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and performance comparison. The verification of the proposed method is demonstrated with simulations. PMID:29617323

  18. Methodology for prediction of rip currents using a three-dimensional numerical, coupled, wave current model

    USGS Publications Warehouse

    Voulgaris, George; Kumar, Nirnimesh; Warner, John C.; Leatherman, Stephen; Fletemeyer, John

    2011-01-01

    Rip currents constitute one of the most common hazards in the nearshore, threatening the lives of the unaware public that makes recreational use of the coastal zone. Society responds to this danger through a number of measures that include: (a) the deployment of trained lifeguards; (b) public education related to the hidden hazards of the nearshore; and (c) the establishment of warning systems.

  19. Exploring the Differences Between the European (SHARE) and the Reference Italian Seismic Hazard Models

    NASA Astrophysics Data System (ADS)

    Visini, F.; Meletti, C.; D'Amico, V.; Rovida, A.; Stucchi, M.

    2014-12-01

    The recent release of the probabilistic seismic hazard assessment (PSHA) model for Europe by the SHARE project (Giardini et al., 2013, www.share-eu.org) raises questions about the comparison between its results for Italy and the official Italian seismic hazard model (MPS04; Stucchi et al., 2011) adopted by the building code. The goal of such a comparison is to identify the main input elements that produce the differences between the two models. It is worthwhile to remark that each PSHA is realized with the data and knowledge available at the time of its release. Therefore, even if a new model provides estimates significantly different from previous ones, that does not mean that the old models are wrong, but rather that the current knowledge has changed and improved substantially. Looking at the hazard maps with 10% probability of exceedance in 50 years (adopted as the standard input in the Italian building code), the SHARE model shows increased expected values with respect to the MPS04 model, up to 70% for PGA. However, looking in detail at all output parameters of both models, we observe a different behaviour for other spectral accelerations. In fact, for spectral periods greater than 0.3 s, the current reference PSHA for Italy proposes higher values than the SHARE model for many and large areas. This observation suggests that this behaviour could not be due to a different definition of seismic sources and relevant seismicity rates; it mainly appears to be the result of adopting recent ground-motion prediction equations (GMPEs), which estimate higher values for PGA and for accelerations with periods below 0.3 s, and lower values for longer periods, with respect to old GMPEs. Another important set of tests consisted of separately analysing the PSHA results obtained with the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed seismicity model), whereas MPS04 only uses area sources. Results seem to confirm the strong impact of the new generation of GMPEs on the seismic hazard estimates. Giardini D. et al., 2013. Seismic Hazard Harmonization in Europe (SHARE): Online Data Resource, doi:10.12686/SED-00000001-SHARE. Stucchi M. et al., 2011. Seismic Hazard Assessment (2003-2009) for the Italian Building Code. Bull. Seismol. Soc. Am. 101, 1885-1911.

  20. Assessment on the leakage hazard of landfill leachate using three-dimensional excitation-emission fluorescence and parallel factor analysis method.

    PubMed

    Pan, Hongwei; Lei, Hongjun; Liu, Xin; Wei, Huaibin; Liu, Shufang

    2017-09-01

    A large number of simple and informal landfills exist in developing countries, which pose tremendous soil and groundwater pollution threats. Early warning and monitoring of landfill leachate pollution status is of great importance. However, there is a shortage of affordable and effective tools and methods. In this study, a soil column experiment was performed to simulate the pollution status of leachate using three-dimensional excitation-emission fluorescence (3D-EEMF) and parallel factor analysis (PARAFAC) models. Sum of squared residuals (SSR) and principal component analysis (PCA) were used to determine the optimal number of components for PARAFAC. A one-way analysis of variance showed that the component scores of the soil column leachate were significantly influenced by landfill leachate (p<0.05). Therefore, the ratio of the component scores of the soil under the landfill to that of natural soil could be used to evaluate the leakage status of landfill leachate. Furthermore, a hazard index (HI) and a hazard evaluation standard were established. A case study of the Kaifeng landfill indicated a low hazard (level 5) by the use of HI. In summary, HI is presented as a tool to evaluate landfill pollution status and for the guidance of municipal solid waste management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. FDA-iRISK--a comparative risk assessment system for evaluating and ranking food-hazard pairs: case studies on microbial hazards.

    PubMed

    Chen, Yuhuan; Dennis, Sherri B; Hartnett, Emma; Paoli, Greg; Pouillot, Régis; Ruthman, Todd; Wilson, Margaret

    2013-03-01

    Stakeholders in the system of food safety, in particular federal agencies, need evidence-based, transparent, and rigorous approaches to estimate and compare the risk of foodborne illness from microbial and chemical hazards and the public health impact of interventions. FDA-iRISK (referred to here as iRISK), a Web-based quantitative risk assessment system, was developed to meet this need. The modeling tool enables users to assess, compare, and rank the risks posed by multiple food-hazard pairs at all stages of the food supply system, from primary production, through manufacturing and processing, to retail distribution and, ultimately, to the consumer. Using standard data entry templates, built-in mathematical functions, and Monte Carlo simulation techniques, iRISK integrates data and assumptions from seven components: the food, the hazard, the population of consumers, process models describing the introduction and fate of the hazard up to the point of consumption, consumption patterns, dose-response curves, and health effects. Beyond risk ranking, iRISK enables users to estimate and compare the impact of interventions and control measures on public health risk. iRISK provides estimates of the impact of proposed interventions in various ways, including changes in the mean risk of illness and burden of disease metrics, such as losses in disability-adjusted life years. Case studies for Listeria monocytogenes and Salmonella were developed to demonstrate the application of iRISK for the estimation of risks and the impact of interventions for microbial hazards. iRISK was made available to the public at http://irisk.foodrisk.org in October 2012.
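
    At its core, this kind of quantitative ranking runs a Monte Carlo simulation of exposure through a dose-response relationship for each food-hazard pair and compares the resulting mean risks. The sketch below illustrates that idea with an exponential dose-response on two hypothetical pairs; all parameters are placeholders and do not reproduce iRISK's data-entry templates or process models.

```python
# Minimal Monte Carlo sketch of ranking food-hazard pairs by mean risk of illness,
# using an exponential dose-response P(ill | dose) = 1 - exp(-r * dose). All
# parameters (contamination levels, serving sizes, r values) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                          # Monte Carlo iterations

def mean_risk(log10_conc_mean, log10_conc_sd, serving_g, r):
    conc = 10 ** rng.normal(log10_conc_mean, log10_conc_sd, size=N)   # CFU/g at consumption
    dose = conc * serving_g
    return np.mean(1.0 - np.exp(-r * dose))

pairs = {
    "food A / Listeria-like hazard":   mean_risk(-4.0, 1.0, 50.0, 1e-12),
    "food B / Salmonella-like hazard": mean_risk(-5.0, 1.5, 30.0, 1e-9),
}
for name, risk in sorted(pairs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: mean risk per serving = {risk:.2e}")
```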

  2. Mission hazard assessment for STARS Mission 1 (M1) in the Marshall Islands area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Outka, D.E.; LaFarge, R.A.

    1993-07-01

    A mission hazard assessment has been performed for the Strategic Target System Mission 1 (known as STARS M1) for hazards due to potential debris impact in the Marshall Islands area. The work was performed at Sandia National Laboratories as a result of discussions with Kwajalein Missile Range (KMR) safety officers. The STARS M1 rocket will be launched from the Kauai Test Facility (KTF), Hawaii, and deliver two payloads to within the viewing range of sensors located on the Kwajalein Atoll. The purpose of this work has been to estimate upper bounds for expected casualty rates and impact probability for the Marshall Islands areas which adjoin the STARS M1 instantaneous impact point (IIP) trace. This report documents the methodology and results of the analysis.

  3. A Comparison of Peak Electric Fields and GICs in the Pacific Northwest Using 1-D and 3-D Conductivity

    NASA Astrophysics Data System (ADS)

    Gannon, J. L.; Birchfield, A. B.; Shetye, K. S.; Overbye, T. J.

    2017-11-01

    Geomagnetically induced currents (GICs) are a result of the changing magnetic fields during a geomagnetic disturbance interacting with the deep conductivity structures of the Earth. When assessing GIC hazard, it is a common practice to use layer-cake or one-dimensional conductivity models to approximate deep Earth conductivity. In this paper, we calculate the electric field and estimate GICs induced in the long lines of a realistic system model of the Pacific Northwest, using the traditional 1-D models, as well as 3-D models represented by Earthscope's Electromagnetic transfer functions. The results show that the peak electric field during a given event has considerable variation across the analysis region in the Pacific Northwest, but the 1-D physiographic approximations may accurately represent the average response of an area, although corrections are needed. Rotations caused by real deep Earth conductivity structures greatly affect the direction of the induced electric field. This effect may be just as, or more, important than peak intensity when estimating GICs induced in long bulk power system lines.
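
    The underlying calculation convolves the geomagnetic variation with an impedance, i.e. E(ω) = Z(ω)·H(ω) in the frequency domain. The sketch below does this for the simplest 1-D case, a uniform half-space impedance Z = sqrt(iωμ₀ρ), on a synthetic storm signal; the resistivity and the signal are placeholders, and a 3-D estimate would instead use measured impedance tensors such as the Earthscope transfer functions.

```python
# Sketch of the 1-D (uniform half-space) magnetotelluric convolution used to estimate
# a geoelectric field from a magnetic time series: E_x(w) = Z(w) * H_y(w), with
# Z(w) = sqrt(i*w*mu0*rho). Resistivity and the synthetic storm signal are placeholders.
import numpy as np

mu0 = 4e-7 * np.pi
rho = 100.0                                   # half-space resistivity, ohm-m (placeholder)
dt = 60.0                                     # 1-minute sampling, s
t = np.arange(0, 24 * 3600, dt)

# synthetic B_y variation (nT): decaying storm-like oscillation (placeholder)
By_nT = 200 * np.sin(2 * np.pi * t / 3600.0) * np.exp(-t / (6 * 3600.0))
Hy = By_nT * 1e-9 / mu0                       # A/m

f = np.fft.rfftfreq(len(t), d=dt)             # Hz
w = 2 * np.pi * f
Z = np.sqrt(1j * w * mu0 * rho)               # ohm (Z[0] = 0 handles the DC term)

Ex = np.fft.irfft(Z * np.fft.rfft(Hy), n=len(t))   # V/m
print("peak |E| =", 1e6 * np.max(np.abs(Ex)), "mV/km")
```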

  4. Hazardous materials incident costs : estimating the costs of the March 25, 2004, tanker truck crash in Bridgeport, Connecticut

    DOT National Transportation Integrated Search

    2004-08-01

    Significant variations in the reporting of hazardous materials incident costs are illustrated using a case study of the March 2004 crash of a fuel tanker truck on Interstate 95 in Bridgeport, Connecticut. Three separate cost estimates are presented, ...

  5. A resolution measure for three-dimensional microscopy

    PubMed Central

    Chao, Jerry; Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

    2009-01-01

    A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramer-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined. PMID:20161040

  6. Surface Fractal Analysis for Estimating the Fracture Energy Absorption of Nanoparticle Reinforced Composites

    PubMed Central

    Pramanik, Brahmananda; Tadepalli, Tezeswi; Mantena, P. Raju

    2012-01-01

    In this study, the fractal dimensions of failure surfaces of vinyl ester based nanocomposites are estimated using two classical methods, Vertical Section Method (VSM) and Slit Island Method (SIM), based on the processing of 3D digital microscopic images. Self-affine fractal geometry has been observed in the experimentally obtained failure surfaces of graphite platelet reinforced nanocomposites subjected to quasi-static uniaxial tensile and low velocity punch-shear loading. Fracture energy and fracture toughness are estimated analytically from the surface fractal dimensionality. Sensitivity studies show an exponential dependency of fracture energy and fracture toughness on the fractal dimensionality. Contribution of fracture energy to the total energy absorption of these nanoparticle reinforced composites is demonstrated. For the graphite platelet reinforced nanocomposites investigated, surface fractal analysis has depicted the probable ductile or brittle fracture propagation mechanism, depending upon the rate of loading. PMID:28817017
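
    As a generic illustration of estimating a fractal dimension from a binary boundary image, the sketch below uses box counting, a standard related estimator; it is not the Vertical Section or Slit Island procedure applied in the study, and the image is a synthetic placeholder.

```python
# Generic box-counting sketch for estimating the fractal dimension of a binary
# boundary image. Illustration only; not the VSM or SIM procedure used in the study.
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())      # boxes containing boundary pixels
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# placeholder "boundary" image: a noisy diagonal trace
rng = np.random.default_rng(0)
img = np.zeros((256, 256), dtype=bool)
ii = np.arange(256)
img[ii, np.clip(ii + rng.integers(-8, 9, size=256), 0, 255)] = True
print("estimated fractal dimension:", round(box_count_dimension(img), 2))
```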

  7. Using strain rates to forecast seismic hazards

    USGS Publications Warehouse

    Evans, Eileen

    2017-01-01

    One essential component in forecasting seismic hazards is observing the gradual accumulation of tectonic strain along faults before this strain is suddenly released in earthquakes. Typically, seismic hazard models are based on geologic estimates of slip rates along faults and historical records of seismic activity, neither of which records actively accumulating strain. But this strain can be estimated by geodesy: the precise measurement of tiny position changes of Earth’s surface, obtained from GPS, interferometric synthetic aperture radar (InSAR), or a variety of other instruments.


  8. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography.

    PubMed

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar

    2009-08-25

    Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpsons (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. To date, only sparse data are available comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 +/- 3.7% and -0.2 +/- 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP, and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF using both 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards smaller variability using TP in comparison to 2D; this was, however, not statistically significant.
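
    The comparison rests on correlation and bias (mean and standard deviation of the paired differences). The sketch below computes those agreement statistics on hypothetical paired EF readings; the numbers are placeholders, not study data.

```python
# Minimal sketch of the agreement statistics used (Pearson r and Bland-Altman-style
# bias +/- SD of differences) on hypothetical paired EF readings (placeholders).
import numpy as np

rng = np.random.default_rng(0)
ef_3d = rng.uniform(25, 65, size=30)                 # quantitative RT3DE EF (%)
ef_eyeball = ef_3d + rng.normal(-0.5, 3.0, size=30)  # visual 2D estimate (%)

r = np.corrcoef(ef_3d, ef_eyeball)[0, 1]
diff = ef_eyeball - ef_3d
print(f"r = {r:.2f}, bias = {diff.mean():+.1f} +/- {diff.std(ddof=1):.1f} %")
```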

  9. Deep belief networks for false alarm rejection in forward-looking ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Becker, John; Havens, Timothy C.; Pinar, Anthony; Schulz, Timothy J.

    2015-05-01

    Explosive hazards are one of the most deadly threats in modern conflicts. The U.S. Army is interested in a reliable way to detect these hazards at range. A promising way of accomplishing this task is using a forward-looking ground-penetrating radar (FLGPR) system. Recently, the Army has been testing a system that utilizes both L-band and X-band radar arrays on a vehicle mounted platform. Using data from this system, we sought to improve the performance of a constant false-alarm-rate (CFAR) prescreener through the use of a deep belief network (DBN). DBNs have also been shown to perform exceptionally well at generalized anomaly detection. They combine unsupervised pre-training with supervised fine-tuning to generate low-dimensional representations of high-dimensional input data. We seek to take advantage of these two properties by training a DBN on the features of the CFAR prescreener's false alarms (FAs) and then use that DBN to separate FAs from true positives. Our analysis shows that this method improves the detection statistics significantly. By training the DBN on a combination of image features, we were able to significantly increase the probability of detection while maintaining a nominal number of false alarms per square meter. Our research shows that DBNs are a good candidate for improving detection rates in FLGPR systems.
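
    A deep belief network stacks restricted Boltzmann machines, combining unsupervised pre-training with supervised fine-tuning. As a compact stand-in for that pretrain-then-classify pattern, the sketch below feeds a single scikit-learn BernoulliRBM layer into a logistic regression on placeholder alarm features; it is a simplification, not the full DBN or the FLGPR feature set used in the study.

```python
# Compact stand-in for the pretrain-then-classify idea: an unsupervised BernoulliRBM
# feature layer feeding a logistic regression. A real DBN stacks several RBMs and
# fine-tunes them jointly; the synthetic "prescreener alarm features" are placeholders.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
n_alarms, n_features = 1000, 40
X = rng.normal(size=(n_alarms, n_features))          # image features of CFAR alarms
y = rng.integers(0, 2, size=n_alarms)                # 1 = true target, 0 = false alarm

model = Pipeline([
    ("scale", MinMaxScaler()),                       # RBM expects values in [0, 1]
    ("rbm", BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```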

  10. Sparse kernel methods for high-dimensional survival data.

    PubMed

    Evers, Ludger; Messow, Claudia-Martina

    2008-07-15

    Sparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques, however, are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model only depends on the covariates through inner products, it can be 'kernelized'. The kernelized proportional hazards model, however, yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, depending only on a small fraction of the training data. We propose two methods. One is based on a geometric idea, where, akin to support vector classification, the margin between the failed observation and the observations currently at risk is maximised. The other approach is based on obtaining a sparse model by adding observations one after another, akin to the Import Vector Machine (IVM). Data examples studied suggest that both methods can outperform competing approaches. Software is available under the GNU Public License as an R package and can be obtained from the first author's website http://www.maths.bris.ac.uk/~maxle/software.html.

  11. Investigations of simulated aircraft flight through thunderstorm outflows

    NASA Technical Reports Server (NTRS)

    Frost, W.; Crosby, B.

    1978-01-01

    The effects of wind shear on aircraft flying through thunderstorm gust fronts were investigated. A computer program was developed to solve the two-dimensional, nonlinear equations of aircraft motion, including wind shear. The procedure described and documented accounts for the spatial and temporal variations encountered by the aircraft within the flow regime. Flight paths and the control inputs necessary to maintain specified trajectories were analyzed for aircraft having the characteristics of DC-8, B-747, augmentor wing STOL, and DHC-6 aircraft. From the analysis, an attempt was made to find criteria for reducing the hazards associated with landing through thunderstorm gust fronts.
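
    A greatly simplified, hedged sketch of this kind of simulation is given below: a two-dimensional point-mass aircraft is integrated through a prescribed horizontal wind-shear layer with SciPy. The tanh gust-front profile, the aircraft constants, and the fixed open-loop controls are placeholders and are far simpler than the documented program.

```python
# Minimal point-mass sketch of 2-D flight through a horizontal wind-shear layer.
# The tanh gust-front wind profile, aircraft constants, and fixed (open-loop)
# controls are placeholders, far simpler than the simulation described in the report.
import numpy as np
from scipy.integrate import solve_ivp

m, S, g, rho = 150_000.0, 360.0, 9.81, 1.225    # mass (kg), wing area (m^2) - placeholders
CL, CD, T = 1.0, 0.08, 220_000.0                # fixed lift/drag coefficients and thrust

def wind_x(x):
    # headwind switching to tailwind across a gust front (placeholder profile)
    return 15.0 * np.tanh((x - 3000.0) / 500.0)

def rhs(t, y):
    x, h, u, w = y                              # ground-frame position and velocity
    vax, vaz = u - wind_x(x), w                 # air-relative velocity components
    Va = np.hypot(vax, vaz)
    q = 0.5 * rho * Va**2 * S
    L, D = q * CL, q * CD
    du = ((T - D) * vax / Va - L * vaz / Va) / m
    dw = ((T - D) * vaz / Va + L * vax / Va) / m - g
    return [u, w, du, dw]

sol = solve_ivp(rhs, (0, 60), [0.0, 400.0, 80.0, 0.0], max_step=0.1)
print("altitude change over 60 s:", round(sol.y[1, -1] - 400.0, 1), "m")
```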

  12. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann

    1993-01-01

    A general solution adaptive scheme-based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  13. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  14. On the Development of a Deterministic Three-Dimensional Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Rockell, Candice; Tweed, John

    2011-01-01

    Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
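
    The sketch below illustrates the Neumann-series idea in a discretized toy setting: the transport equation is written as phi = q + K phi and approximated by the first few terms of the series, with the remainder gauged against a direct solve. The matrix K is a stand-in contraction operator, not the GRNTRN scattering kernel.

```python
import numpy as np

def neumann_series_solution(K, q, n_terms):
    """Approximate phi solving phi = q + K @ phi by the truncated
    Neumann series  phi ~ sum_{n=0}^{n_terms-1} K^n q.
    K is a stand-in for the discretized transport/scattering operator;
    the series converges when the spectral radius of K is below 1."""
    phi = np.zeros_like(q)
    term = q.copy()
    for _ in range(n_terms):
        phi += term
        term = K @ term
    return phi

# Toy usage with a small contraction operator.
rng = np.random.default_rng(1)
K = 0.2 * rng.random((5, 5))                  # spectral radius well below 1
q = rng.random(5)
phi3 = neumann_series_solution(K, q, 3)       # "first three terms"
phi_exact = np.linalg.solve(np.eye(5) - K, q)
print("remainder after three terms:", np.linalg.norm(phi_exact - phi3))
```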

  15. Application of seismic interpretation in the development of Jerneh Field, Malay Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusoff, Z.

    1994-07-01

    Development of the Jerneh gas field has been significantly aided by the use of 3-D and site survey seismic interpretations. The two aspects that have been of particular importance are identification of sea-floor and near-surface safety hazards for safe platform installation/development drilling and mapping of reservoirs/hydrocarbons within gas-productive sands of the Miocene groups B, D, and E. Choice of platform location as well as casing design requires detailed analysis of sea-floor and near-surface safety hazards. At Jerneh, sea-floor pockmarks, near-surface high amplitudes, distributary channels, and minor faults were recognized as potential operational safety hazards. The integration of conventional 3-D and site survey seismic data enabled comprehensive understanding of the occurrence and distribution of potential hazards to platform installation and development well drilling. Three-dimensional seismic interpretation has been instrumental not only in the field structural definition but also in recognition of reservoir trends and hydrocarbon distribution. Additional gas reservoirs were identified by their DHI characteristics and subsequently confirmed by development wells. The innovative use of seismic attribute mapping techniques has been very important in defining both fluid and reservoir distribution in groups B and D. Integration of 3-D seismic data and well-log interpretations has helped in optimal field development, including the planning of well locations and drilling sequence.

  16. A strategic assessment of crown fire hazard in Montana: potential effectiveness and costs of hazard reduction treatments.

    Treesearch

    Carl E. Fiedler; Charles E. Keegan; Christopher W. Woodall; Todd A. Morgan

    2004-01-01

    Estimates of crown fire hazard are presented for existing forest conditions in Montana by density class, structural class, forest type, and landownership. Three hazard reduction treatments were evaluated for their effectiveness in treating historically fire-adapted forests (ponderosa pine (Pinus ponderosa Dougl. ex Laws.), Douglas-fir (...

  17. Volcano collapse promoted by hydrothermal alteration and edifice shape, Mount Rainier, Washington

    USGS Publications Warehouse

    Reid, M.E.; Sisson, T.W.; Brien, D.L.

    2001-01-01

    Catastrophic collapses of steep volcano flanks threaten many populated regions, and understanding factors that promote collapse could save lives and property. Large collapses of hydrothermally altered parts of Mount Rainier have generated far-traveled debris flows; future flows would threaten densely populated parts of the Puget Sound region. We evaluate edifice collapse hazards at Mount Rainier using a new three-dimensional slope stability method incorporating detailed geologic mapping and subsurface geophysical imaging to determine distributions of strong (fresh) and weak (altered) rock. Quantitative three-dimensional slope stability calculations reveal that sizeable flank collapse (>0.1 km3) is promoted by voluminous, weak, hydrothermally altered rock situated high on steep slopes. These conditions exist only on Mount Rainier's upper west slope, consistent with the Holocene debris-flow history. Widespread alteration on lower flanks or concealed in regions of gentle slope high on the edifice does not greatly facilitate collapse. Our quantitative stability assessment method can also provide useful hazard predictions using reconnaissance geologic information and is a potentially rapid and inexpensive new tool for aiding volcano hazard assessments.

  18. An initial investigation of multidimensional flow and transverse mixing characteristics of the Ohio River near Cincinnati, Ohio

    USGS Publications Warehouse

    Holtschlag, David J.

    2009-01-01

    Two-dimensional hydrodynamic and transport models were applied to a 34-mile reach of the Ohio River from Cincinnati, Ohio, upstream to Meldahl Dam near Neville, Ohio. The hydrodynamic model was based on the generalized finite-element hydrodynamic code RMA2 to simulate depth-averaged velocities and flow depths. The generalized water-quality transport code RMA4 was applied to simulate the transport of vertically mixed, water-soluble constituents that have a density similar to that of water. Boundary conditions for hydrodynamic simulations included water levels at the U.S. Geological Survey water-level gaging station near Cincinnati, Ohio, and flow estimates based on a gate rating at Meldahl Dam. Flows estimated on the basis of the gate rating were adjusted with limited flow-measurement data to more nearly reflect current conditions. An initial calibration of the hydrodynamic model was based on data from acoustic Doppler current profiler surveys and water-level information. These data provided flows, horizontal water velocities, water levels, and flow depths needed to estimate hydrodynamic parameters related to channel resistance to flow and eddy viscosity. Similarly, dye concentration measurements from two dye-injection sites on each side of the river were used to develop initial estimates of transport parameters describing mixing and dye-decay characteristics needed for the transport model. A nonlinear regression-based approach was used to estimate parameters in the hydrodynamic and transport models. Parameters describing channel resistance to flow (Manning’s “n”) were estimated in areas of deep and shallow flows as 0.0234 and 0.0275, respectively. The estimated RMA2 Peclet number, which is used to dynamically compute eddy-viscosity coefficients, was 38.3, which is in the range of 15 to 40 that is typically considered appropriate. Resulting hydrodynamic simulations explained 98.8 percent of the variability in depth-averaged flows, 90.0 percent of the variability in water levels, 93.5 percent of the variability in flow depths, and 92.5 percent of the variability in velocities. Estimates of the water-quality-transport-model parameters describing turbulent mixing characteristics converged to different values for the two dye-injection reaches. For the Big Indian Creek dye-injection study, an RMA4 Peclet number of 37.2 was estimated, which was within the recommended range of 15 to 40, and similar to the RMA2 Peclet number. The estimated dye-decay coefficient was 0.323. Simulated dye concentrations explained 90.2 percent of the variations in measured dye concentrations for the Big Indian Creek injection study. For the dye-injection reach starting downstream from Twelvemile Creek, however, an RMA4 Peclet number of 173 was estimated, which is far outside the recommended range. Simulated dye concentrations were similar to measured concentration distributions at the first four transects downstream from the dye-injection site that were considered vertically mixed. Farther downstream, however, simulated concentrations did not match the attenuation of maximum concentrations or cross-channel transport of dye that were measured. The difficulty of determining a consistent RMA4 Peclet number was related to the two-dimensional model assumption that velocity distributions are closely approximated by their depth-averaged values. Analysis of velocity data showed significant variations in velocity direction with depth in channel reaches with curvature.
Channel irregularities (including curvatures, depth irregularities, and shoreline variations) apparently produce transverse currents that affect the distribution of constituents, but are not fully accounted for in a two-dimensional model. The two-dimensional flow model, using channel resistance to flow parameters of 0.0234 and 0.0275 for deep and shallow areas, respectively, and an RMA2 Peclet number of 38.3, and the RMA4 transport model with a Peclet number of 37.2, may have utility for emergency-planning purposes. Emergency-response efforts would be enhanced by continuous streamgaging records downstream from Meldahl Dam, real-time water-quality monitoring, and three-dimensional modeling. Decay coefficients are constituent specific.
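
    As a simplified analogue of the regression-based calibration described above, the sketch below fits zone-dependent Manning's "n" values to synthetic velocity observations with a nonlinear least-squares solver. It uses the wide-channel Manning relation as a stand-in forward model; the actual study calibrates the RMA2/RMA4 models themselves, and the depths, slope, and noise level here are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def manning_velocity(n, depth_m, slope):
    # Wide-channel approximation: hydraulic radius ~ flow depth (SI units).
    return (1.0 / n) * depth_m ** (2.0 / 3.0) * np.sqrt(slope)

# Hypothetical observation set: three "deep" and three "shallow" locations.
depth = np.array([9.0, 8.5, 7.8, 2.1, 1.8, 1.5])   # m
is_deep = depth > 4.0
slope = 1.0e-4
rng = np.random.default_rng(2)
v_obs = manning_velocity(np.where(is_deep, 0.0234, 0.0275), depth, slope)
v_obs = v_obs * (1.0 + 0.02 * rng.normal(size=depth.size))  # synthetic "ADCP" data

def residuals(params):
    n_deep, n_shallow = params
    n = np.where(is_deep, n_deep, n_shallow)
    return manning_velocity(n, depth, slope) - v_obs

fit = least_squares(residuals, x0=[0.03, 0.03], bounds=([0.01, 0.01], [0.1, 0.1]))
print("estimated n (deep, shallow):", fit.x)
```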

  19. Estimation of object motion parameters from noisy images.

    PubMed

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consist of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one-dimensional images of a two-dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
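
    A minimal sketch of the recursive element is given below: an iterated extended Kalman filter tracking a constant-velocity point observed through a one-dimensional central projection u = x/z, which reproduces the weak observability of depth noted above. The dynamics, noise levels, and initialization are illustrative choices, not the authors' model.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, dt],
              [0, 0,  0, 1]], dtype=float)   # constant-velocity motion in (x, z)
Q = 1e-6 * np.eye(4)                          # small process noise
R = np.array([[1e-4]])                        # image-plane noise variance

def h(s):                                     # 1-D central projection
    return np.array([s[0] / s[2]])

def H_jac(s):
    x, _, z, _ = s
    return np.array([[1.0 / z, 0.0, -x / z**2, 0.0]])

def iekf_step(s, P, u_meas, n_iter=3):
    s_pred = F @ s
    P_pred = F @ P @ F.T + Q
    s_upd = s_pred.copy()
    for _ in range(n_iter):                   # relinearize about the current estimate
        H = H_jac(s_upd)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        innov = u_meas - h(s_upd) - H @ (s_pred - s_upd)
        s_upd = s_pred + (K @ innov).ravel()
    P_upd = (np.eye(4) - K @ H) @ P_pred
    return s_upd, P_upd

# Synthetic trajectory and noisy 1-D image measurements.
rng = np.random.default_rng(3)
truth = np.array([1.0, 0.05, 10.0, -0.1])
s_est = np.array([0.8, 0.0, 12.0, 0.0])
P = np.diag([1.0, 0.1, 10.0, 0.1])
for _ in range(20):
    truth = F @ truth
    u = h(truth) + 0.01 * rng.normal(size=1)
    s_est, P = iekf_step(s_est, P, u)
print("true state:", truth, "\nestimate  :", s_est)
```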

  20. Two-Dimensional Echocardiography Estimates of Fetal Ventricular Mass throughout Gestation.

    PubMed

    Aye, Christina Y L; Lewandowski, Adam James; Ohuma, Eric O; Upton, Ross; Packham, Alice; Kenworthy, Yvonne; Roseman, Fenella; Norris, Tess; Molloholli, Malid; Wanyonyi, Sikolia; Papageorghiou, Aris T; Leeson, Paul

    2017-08-12

    Two-dimensional (2D) ultrasound quality has improved in recent years. Quantification of cardiac dimensions is important to screen and monitor certain fetal conditions. We assessed the feasibility and reproducibility of fetal ventricular measures using 2D echocardiography, reported normal ranges in our cohort, and compared estimates to other modalities. Mass and end-diastolic volume were estimated by manual contouring in the four-chamber view using TomTec Image Arena 4.6 in end diastole. Nomograms were created from smoothed centiles of measures, constructed using fractional polynomials after log transformation. The results were compared to those of previous studies using other modalities. A total of 294 scans from 146 fetuses from 15+0 to 41+6 weeks of gestation were included. Seven percent of scans were unanalysable and intraobserver variability was good (intraclass correlation coefficients for left and right ventricular mass 0.97 [0.87-0.99] and 0.99 [0.95-1.0], respectively). Mass and volume increased exponentially, showing good agreement with 3D mass estimates up to 28 weeks of gestation, after which our measurements were in better agreement with neonatal cardiac magnetic resonance imaging. There was good agreement with 4D volume estimates for the left ventricle. Current state-of-the-art 2D echocardiography platforms provide accurate, feasible, and reproducible fetal ventricular measures across gestation, and in certain circumstances may be the modality of choice. © 2017 S. Karger AG, Basel.

  1. Estimation of human damage and economic loss of buildings for the worst-credible scenario of tsunami inundation in the city of Augusta, Italy

    NASA Astrophysics Data System (ADS)

    Pagnoni, Gianluca; Tinti, Stefano

    2017-04-01

    The city of Augusta is located in the southern part of the eastern coast of Sicily. The Italian tsunami catalogue and paleo-tsunami surveys indicate that at least 7 tsunami events affected the bay of Augusta in the last 4,000 years, two of which are associated with earthquakes (1169 and 1693) that destroyed the city. For these reasons Augusta has been chosen in the project ASTARTE as a test site for the study of issues related to tsunami hazard and risk. In the last two years we studied hazard through a worst-case credible scenario approach and carried out vulnerability and damage analysis for buildings. In this work, we integrate that research and estimate the damage to people and the economic loss of buildings due to structural damage. As regards inundation, we assume both uniform inundation levels (bath-tub hypothesis) and inundation data resulting from the worst-case scenario elaborated for the area by Armigliato et al. (2015). Human damage is calculated in three steps using the method introduced by Pagnoni et al. (2016), following the work by Terrier et al. (2012) and by Koshimura et al. (2009). First, we use census data to estimate the number of people present in each residential building affected by inundation; second, based on water column depth and building type, we evaluate the level of damage to people; third, we provide an estimate of fatalities. The economic loss is computed for two types of buildings (residential and trade-industrial) by using data on inundation and data from the real estate market. This study was funded by the EU Project ASTARTE - "Assessment, STrategy And Risk Reduction for Tsunamis in Europe", Grant 603839, 7th FP (ENV.2013.6.4-3).
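
    The three-step human-damage calculation can be sketched as below. The occupancy figures and the depth-dependent fatality rates are hypothetical placeholders for illustration only; they are not the fragility relations of Koshimura et al. (2009) or the values applied to Augusta.

```python
# Illustrative sketch of the three-step estimate: census occupancy,
# depth-based damage level, then fatalities.  All numbers are made up.
buildings = [
    # (residents from census, inundation depth at the building in metres)
    (4, 0.3),
    (6, 1.2),
    (3, 2.5),
    (5, 0.0),   # outside the inundated area
]

def fatality_rate(depth_m):
    """Step 2: map water-column depth to a (hypothetical) damage level for people."""
    if depth_m <= 0.0:
        return 0.0
    if depth_m < 1.0:
        return 0.05
    if depth_m < 2.0:
        return 0.3
    return 0.8

exposed = sum(n for n, d in buildings if d > 0.0)             # step 1
fatalities = sum(n * fatality_rate(d) for n, d in buildings)  # steps 2-3
print(f"exposed residents: {exposed}, estimated fatalities: {fatalities:.1f}")
```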

  2. Smoking, health knowledge, and anti-smoking campaigns: an empirical study in Taiwan.

    PubMed

    Hsieh, C R; Yen, L L; Liu, J T; Lin, C J

    1996-02-01

    This paper uses a measure of health knowledge of smoking hazards to investigate the determinants of health knowledge and its effect on smoking behavior. In our analysis, two equations are estimated: smoking participation and health knowledge. The simultaneity problem in estimating smoking behavior and health knowledge is also considered. Overall, the estimated results suggest that anti-smoking campaigns have a significantly positive effect on the public's health knowledge, and this health knowledge, in turn, has a significantly negative effect on smoking participation. The health knowledge elasticities of smoking participation are -0.48 and -0.56 for all adults and adult males, respectively.

  3. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  4. Numerically simulated two-dimensional auroral double layers

    NASA Technical Reports Server (NTRS)

    Borovsky, J. E.; Joyce, G.

    1983-01-01

    A magnetized 2 1/2-dimensional particle-in-cell system which is periodic in one direction and bounded by reservoirs of Maxwellian plasma in the other is used to numerically simulate electrostatic plasma double layers. For the cases of both oblique and two-dimensional double layers, the present results indicate periodic instability, Debye length rather than gyroradii scaling, and low frequency electrostatic turbulence together with electron beam-excited electrostatic electron-cyclotron waves. Estimates are given for the thickness of auroral double layers, as well as the separations within multiple auroral arcs. Attention is given to the temporal modulation of accelerated beams, and the possibilities for ion precipitation and ion conic production by the double layer are hypothesized. Simulations which include the atmospheric backscattering of electrons imply the action of an ionospheric sheath which accelerates ionospheric ions upward.

  5. Pair creation, motion, and annihilation of topological defects in two-dimensional nematic liquid crystals

    NASA Astrophysics Data System (ADS)

    Cortese, Dario; Eggers, Jens; Liverpool, Tanniemola B.

    2018-02-01

    We present a framework for the study of disclinations in two-dimensional active nematic liquid crystals and topological defects in general. The order tensor formalism is used to calculate exact multiparticle solutions of the linearized static equations inside a planar uniformly aligned state, in which the total charge has to vanish. Topological charge conservation then requires that there is always an equal number of q = 1/2 and q = -1/2 charges. Starting from a set of hydrodynamic equations, we derive a low-dimensional dynamical system for the parameters of the static solutions, which describes the motion of a half-disclination pair or of several pairs. Within this formalism, we model defect production and annihilation, as observed in experiments. Our dynamics also provide an estimate for the critical density at which production and annihilation rates are balanced.

  6. A thermal analysis of a spirally wound battery using a simple mathematical model

    NASA Technical Reports Server (NTRS)

    Evans, T. I.; White, R. E.

    1989-01-01

    A two-dimensional thermal model for spirally wound batteries has been developed. The governing equation of the model is the energy balance. Convective and insulated boundary conditions are used, and the equations are solved using a finite element code called TOPAZ2D. The finite element mesh is generated using a preprocessor to TOPAZ2D called MAZE. The model is used to estimate temperature profiles within a spirally wound D-size cell. The model is applied to the lithium/thionyl chloride cell because of the thermal management problems that this cell exhibits. Simplified one-dimensional models are presented that can be used to predict best and worst temperature profiles. The two-dimensional model is used to predict the regions of maximum temperature within the spirally wound cell. Normal discharge as well as thermal runaway conditions are investigated.

  7. The Kirkendall and Frenkel effects during 2D diffusion process

    NASA Astrophysics Data System (ADS)

    Wierzba, Bartek

    2014-11-01

    A two-dimensional approach for inter-diffusion and void generation is presented, and the evolution and growth of voids are discussed. The approach is based on the bi-velocity (Darken) method, which combines the Darken and Brenner concepts that the volume velocity is essential in defining the local material velocity in a multi-component mixture at non-equilibrium. The model is formulated for arbitrary multi-component two-dimensional systems. It is shown that void growth is due to the drift velocity and vacancy migration. The radius of a void can be easily estimated. The distributions of (1) components, (2) vacancies, and (3) void radius over distance are presented.
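
    For orientation, the sketch below evaluates the classical one-dimensional Darken relations that the bi-velocity description builds on: the interdiffusion coefficient X_A D_B + X_B D_A and the Kirkendall drift velocity (D_A - D_B) dX_A/dx. The diffusivities and composition profile are invented, and the full two-dimensional, multi-component treatment of the paper is not reproduced here.

```python
import numpy as np

# One-dimensional illustration of the classical Darken relations.
D_A, D_B = 2.0e-14, 0.5e-14           # intrinsic diffusivities, m^2/s (made up)
x = np.linspace(0.0, 1.0e-4, 201)     # position, m
X_A = 0.5 * (1.0 - np.tanh((x - 5.0e-5) / 1.0e-5))   # mole fraction of A
X_B = 1.0 - X_A

D_inter = X_A * D_B + X_B * D_A       # Darken interdiffusion coefficient
dXA_dx = np.gradient(X_A, x)
v_drift = (D_A - D_B) * dXA_dx        # Kirkendall (drift) velocity

print("max interdiffusion coefficient [m^2/s]:", D_inter.max())
print("peak drift velocity magnitude [m/s]:", np.abs(v_drift).max())
```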

  8. Correlation techniques and measurements of wave-height statistics

    NASA Technical Reports Server (NTRS)

    Guthart, H.; Taylor, W. C.; Graf, K. A.; Douglas, D. G.

    1972-01-01

    Statistical measurements of wave height fluctuations have been made in a wind wave tank. The power spectral density function of temporal wave height fluctuations evidenced second-harmonic components and an f to the minus 5th power law decay beyond the second harmonic. The observations of second harmonic effects agreed very well with a theoretical prediction. From the wave statistics, surface drift currents were inferred and compared to experimental measurements with satisfactory agreement. Measurements were made of the two dimensional correlation coefficient at 15 deg increments in angle with respect to the wind vector. An estimate of the two-dimensional spatial power spectral density function was also made.
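
    A present-day analogue of the spectral estimate would use Welch's method, as sketched below: a synthetic wave-height record is shaped to roll off near f^-5 at high frequency and the fitted log-log slope is checked. The signal is artificial and stands in for the wind-wave-tank data.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 600, 1.0 / fs)
rng = np.random.default_rng(4)

# Shape white noise so its PSD falls off roughly as f^-5 above 2 Hz.
spectrum = np.fft.rfft(rng.normal(size=t.size))
f = np.fft.rfftfreq(t.size, 1.0 / fs)
shape = np.ones_like(f)
mask = f > 2.0
shape[mask] = (f[mask] / 2.0) ** (-2.5)       # amplitude ~ f^-2.5  ->  PSD ~ f^-5
eta = np.fft.irfft(spectrum * shape, n=t.size)

f_w, pxx = welch(eta, fs=fs, nperseg=4096)    # Welch PSD estimate
band = (f_w > 3.0) & (f_w < 20.0)
slope = np.polyfit(np.log(f_w[band]), np.log(pxx[band]), 1)[0]
print(f"fitted high-frequency PSD slope: {slope:.2f} (expect about -5)")
```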

  9. 40 CFR 261.142 - Cost estimate.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Financial Requirements for Management of Excluded Hazardous Secondary... hazardous waste, and the potential cost of closing the facility as a treatment, storage, and disposal...

  10. 40 CFR 261.142 - Cost estimate.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Financial Requirements for Management of Excluded Hazardous Secondary... hazardous waste, and the potential cost of closing the facility as a treatment, storage, and disposal...

  11. 40 CFR 261.142 - Cost estimate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Financial Requirements for Management of Excluded Hazardous Secondary... hazardous waste, and the potential cost of closing the facility as a treatment, storage, and disposal...

  12. On the mixing time in the Wang-Landau algorithm

    NASA Astrophysics Data System (ADS)

    Fadeeva, Marina; Shchur, Lev

    2018-01-01

    We present preliminary results of an investigation of the properties of the Markov random walk in the energy space generated by the Wang-Landau probability. We build the transition matrix in the energy space (TMES) using the exact density of states for the one-dimensional and two-dimensional Ising models. The spectral gap of the TMES is inversely proportional to the mixing time of the Markov chain. We estimate numerically the dependence of the mixing time on the lattice size and extract the mixing exponent.
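
    The gap-to-mixing-time relation can be sketched generically as below: for a row-stochastic transition matrix, the mixing time is estimated as the inverse of one minus the second-largest eigenvalue magnitude. The lazy random walk used here is only a stand-in; it is not the TMES built from the Ising density of states.

```python
import numpy as np

def mixing_time_estimate(T):
    """Estimate the mixing time of a row-stochastic transition matrix T
    as 1 / spectral gap, with the gap taken as one minus the
    second-largest eigenvalue magnitude."""
    mags = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    gap = 1.0 - mags[1]
    return 1.0 / gap

# Stand-in transition matrix on a 1-D "energy ladder" (a lazy random walk).
n = 16
T = np.zeros((n, n))
for i in range(n):
    T[i, i] = 0.5
    T[i, max(i - 1, 0)] += 0.25
    T[i, min(i + 1, n - 1)] += 0.25

print("estimated mixing time:", mixing_time_estimate(T))
```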

  13. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
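
    In the simplest scalar setting, the single-batch idea reduces to the sketch below: innovations have variance equal to the sum of the forecast- and observation-error variances, so one batch of simultaneous innovations with a known observation-error variance yields a maximum-likelihood estimate of the forecast-error variance. This is a deliberately stripped-down stand-in for the regional covariance parameters estimated in the adaptive OI/SKF experiments.

```python
import numpy as np

# Toy single-batch estimate: d = y - H x_f has variance sigma_f^2 + sigma_o^2.
rng = np.random.default_rng(5)
sigma_f2_true, sigma_o2 = 4.0, 1.0
n_obs = 2000                       # far more observations than tunable parameters
innovations = rng.normal(scale=np.sqrt(sigma_f2_true + sigma_o2), size=n_obs)

# ML estimate of the total innovation variance is the mean square; subtract
# the known observation-error variance (floored at zero).
sigma_f2_hat = max(np.mean(innovations**2) - sigma_o2, 0.0)
print(f"estimated forecast-error variance: {sigma_f2_hat:.2f} (true {sigma_f2_true})")
```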

  14. Rockfall hazard and risk assessment in the Yosemite Valley, California, USA

    USGS Publications Warehouse

    Guzzetti, F.; Reichenbach, P.; Wieczorek, G.F.

    2003-01-01

    Rock slides and rock falls are the most frequent types of slope movements in Yosemite National Park, California. In historical time (1857-2002) 392 rock falls and rock slides have been documented in the valley, and some of them have been mapped in detail. We present the results of an attempt to assess rock fall hazards in the Yosemite Valley. Spatial and temporal aspects of rock fall hazard are considered. A detailed inventory of slope movements covering the 145-year period from 1857 to 2002 is used to determine the frequency-volume statistics of rock falls and to estimate the annual frequency of rock falls, providing the temporal component of rock fall hazard. The extent of the areas potentially subject to rock fall hazards in the Yosemite Valley was obtained using STONE, a physically-based rock fall simulation computer program. The software computes 3-dimensional rock fall trajectories starting from a digital elevation model (DEM), the location of rock fall release points, and maps of the dynamic rolling friction coefficient and of the coefficients of normal and tangential energy restitution. For each DEM cell the software calculates the number of rock falls passing through the cell, the maximum rock fall velocity and the maximum flying height. For the Yosemite Valley, a DEM with a ground resolution of 10 × 10 m was prepared using topographic contour lines from the U.S. Geological Survey 1:24 000-scale maps. Rock fall release points were identified as DEM cells having a slope steeper than 60°, an assumption based on the location of historical rock falls. Maps of the normal and tangential energy restitution coefficients and of the rolling friction coefficient were produced from a surficial geologic map. The availability of historical rock falls mapped in detail allowed us to check the computer program performance and to calibrate the model parameters. Visual and statistical comparison of the model results with the mapped rock falls confirmed the accuracy of the model. The model results are compared with a previous map of rockfall talus and with a geomorphic assessment of rock fall hazard based on potential energy referred to as a shadow angle approach, recently completed for the Yosemite Valley. The model results are then used to identify the roads and trails more subject to rock fall hazard. Of the 166.5 km of roads and trails in the Yosemite Valley 31.2% were found to be potentially subject to rock fall hazard, of which 14% are subject to very high hazard. © European Geosciences Union 2003.
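
    The release-point rule quoted above (cells steeper than 60 degrees) can be sketched directly from a DEM, as below. The synthetic surface and grid are invented; only the slope threshold follows the text.

```python
import numpy as np

cell = 10.0                                   # grid spacing, m
x = np.arange(0, 500, cell)
y = np.arange(0, 500, cell)
X, Y = np.meshgrid(x, y)
# Synthetic DEM: gentle ramp plus a steep bump standing in for a cliff face.
dem = 0.2 * X + 400.0 * np.exp(-((X - 250) ** 2 + (Y - 250) ** 2) / 2e4)

dz_dy, dz_dx = np.gradient(dem, cell)         # rows vary along y, columns along x
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
release = slope_deg > 60.0                    # flag candidate release cells
print("release cells:", int(release.sum()), "of", release.size)
```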

  15. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    PubMed

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
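
    One plausible reading of a constrained ℓ1 formulation is sketched below: minimize the ℓ1 norm of the weights subject to an infinity-norm constraint tying them to the sample moments, recast as a linear program. This Dantzig-selector-style toy is an assumption for illustration, not the paper's exact LPO specification.

```python
import numpy as np
from scipy.optimize import linprog

def l1_constrained_portfolio(Sigma, mu, delta):
    """Minimize ||w||_1 subject to ||Sigma @ w - mu||_inf <= delta,
    written as a linear program in (w_plus, w_minus) with w = w_plus - w_minus.
    A hypothetical sketch, not the LPO estimator of the paper."""
    p = len(mu)
    c = np.ones(2 * p)                                    # objective: sum of |w_i|
    A_ub = np.block([[Sigma, -Sigma], [-Sigma, Sigma]])   # two-sided constraint
    b_ub = np.concatenate([mu + delta, delta - mu])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p), method="highs")
    return res.x[:p] - res.x[p:]

# Toy usage with a small simulated covariance matrix and mean vector.
rng = np.random.default_rng(7)
p = 8
G = rng.normal(size=(p, p))
Sigma = G @ G.T / p + 0.1 * np.eye(p)
mu = rng.normal(scale=0.05, size=p)
w = l1_constrained_portfolio(Sigma, mu, delta=0.02)
print("nonzero weights:", np.count_nonzero(np.abs(w) > 1e-8), "of", p)
```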

  16. Influence of 2D and 3D view on performance and time estimation in minimal invasive surgery.

    PubMed

    Blavier, A; Nyssen, A S

    2009-11-01

    This study aimed to evaluate the impact of two-dimensional (2D) and three-dimensional (3D) images on time performance and time estimation during a surgical motor task. A total of 60 subjects without any surgical experience (nurses) and 20 expert surgeons performed a fine surgical task with a new laparoscopic technology (da Vinci robotic system). The 80 subjects were divided into two groups, one using the 3D view option and the other using the 2D view option. We measured time performance and asked subjects to verbally estimate their time performance. Our results showed faster performance in 3D than in 2D view for novice subjects, while the performance in 2D and 3D was similar in the expert group. We obtained a significant interaction between time performance and time evaluation: in the 2D condition, all subjects accurately estimated their time performance, while they overestimated it in the 3D condition. Our results emphasise the role of 3D in improving performance and the contradictory feeling about time evaluation in 2D and 3D. This finding is discussed in regard to the retrospective paradigm and suggests that 2D and 3D images are differently processed and memorised.

  17. ESPRIT-Like Two-Dimensional DOA Estimation for Monostatic MIMO Radar with Electromagnetic Vector Received Sensors under the Condition of Gain and Phase Uncertainties and Mutual Coupling

    PubMed Central

    Zhang, Yongshun; Zheng, Guimei; Feng, Cunqian; Tang, Jun

    2017-01-01

    In this paper, we focus on the problem of two-dimensional direction of arrival (2D-DOA) estimation for monostatic MIMO radar with electromagnetic vector received sensors (MIMO-EMVSs) under the condition of gain and phase uncertainties (GPU) and mutual coupling (MC). GPU would spoil the invariance property of the EMVSs in MIMO-EMVSs, so the effective ESPRIT algorithm cannot be used directly. We therefore put forward a C-SPD ESPRIT-like algorithm. It estimates the 2D-DOA and polarization station angle (PSA) based on the instrumental sensors method (ISM). The C-SPD ESPRIT-like algorithm can obtain good angle estimation accuracy without knowing the GPU. Furthermore, it can be applied to an arbitrary array configuration and has low complexity because it avoids the angle-searching procedure. When MC and GPU exist together between the elements of EMVSs, in order to make our algorithm feasible, we derive a class of separated electromagnetic vector receivers and give the S-SPD ESPRIT-like algorithm. It can solve the problem of GPU and MC efficiently, and the array configuration can be arbitrary. The effectiveness of our proposed algorithms is verified by the simulation results. PMID:29072588

  18. ESPRIT-Like Two-Dimensional DOA Estimation for Monostatic MIMO Radar with Electromagnetic Vector Received Sensors under the Condition of Gain and Phase Uncertainties and Mutual Coupling.

    PubMed

    Zhang, Dong; Zhang, Yongshun; Zheng, Guimei; Feng, Cunqian; Tang, Jun

    2017-10-26

    In this paper, we focus on the problem of two-dimensional direction of arrival (2D-DOA) estimation for monostatic MIMO radar with electromagnetic vector received sensors (MIMO-EMVSs) under the condition of gain and phase uncertainties (GPU) and mutual coupling (MC). GPU would spoil the invariance property of the EMVSs in MIMO-EMVSs, so the effective ESPRIT algorithm cannot be used directly. We therefore put forward a C-SPD ESPRIT-like algorithm. It estimates the 2D-DOA and polarization station angle (PSA) based on the instrumental sensors method (ISM). The C-SPD ESPRIT-like algorithm can obtain good angle estimation accuracy without knowing the GPU. Furthermore, it can be applied to an arbitrary array configuration and has low complexity because it avoids the angle-searching procedure. When MC and GPU exist together between the elements of EMVSs, in order to make our algorithm feasible, we derive a class of separated electromagnetic vector receivers and give the S-SPD ESPRIT-like algorithm. It can solve the problem of GPU and MC efficiently, and the array configuration can be arbitrary. The effectiveness of our proposed algorithms is verified by the simulation results.

  19. Analysis of Maneuvering Targets with Complex Motions by Two-Dimensional Product Modified Lv's Distribution for Quadratic Frequency Modulation Signals.

    PubMed

    Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu

    2017-06-21

    For targets with complex motion, such as ships fluctuating with oceanic waves and high maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities which need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD), is proposed for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using the fast Fourier transform (FFT) and complex multiplication. The principle, cross terms, anti-noise performance, and computational complexity of the method are analyzed in the paper. Compared to the other three representative methods, the 2D-PMLVD can achieve better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.

  20. Water Sensation During Passive Propulsion for Expert and Nonexpert Swimmers.

    PubMed

    Kusanagi, Kenta; Sato, Daisuke; Hashimoto, Yasuhiro; Yamada, Norimasa

    2017-06-01

    This study determined whether expert swimmers, compared with nonexperts, have superior movement perception and physical sensations of propulsion in water. Expert (national level competitors, n = 10) and nonexpert (able to swim 50 m in > 3 styles, n = 10) swimmers estimated distance traveled in water with their eyes closed. Both groups indicated their subjective physical sensations in the water. For each of two trials, two-dimensional coordinates were obtained from video recordings using the two-dimensional direct linear transformation method for calculating changes in speed. The mean absolute error of the difference between the actual and estimated distance traveled in the water was significantly lower for expert swimmers (0.90 ± 0.71 m) compared with nonexpert swimmers (3.85 ± 0.84 m). Expert swimmers described the sensation of propulsion in water in cutaneous terms as the "sense of flow" and sensation of "skin resistance." Therefore, expert swimmers appear to have a superior sense of distance during their movement in the water compared with that of nonexpert swimmers. In addition, expert swimmers may have a better perception of movement in water. We propose that expert swimmers integrate sensations and proprioceptive senses, enabling them to better perceive and estimate distance moved through water.
