Effects of variability in probable maximum precipitation patterns on flood losses
NASA Astrophysics Data System (ADS)
Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul
2018-05-01
The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or falls below selected uncertainty factors in flood loss estimation, and whether the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution against other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure and vulnerability, contribute markedly to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in the analyses of probable maximum flood losses.
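As an illustration of the superposition effect described in this abstract, the following minimal Python sketch (not the authors' coupled model chain; the sub-catchment exposures, lag times, hydrograph shape and damage function are all invented) shows how ensemble members with an identical basin-total rainfall can still produce very different basin-wide losses:

```python
import numpy as np

rng = np.random.default_rng(42)

n_sub = 4                                     # hypothetical sub-catchments
exposure = np.array([10., 80., 30., 200.])    # assumed exposed value per sub-catchment
lag_h = np.array([2, 5, 8, 12])               # assumed travel times to the basin outlet (h)

def basin_loss(rain_share, total_rain=300.0):
    """Toy model: each sub-catchment's local peak scales with its rainfall share;
    the outlet peak depends on how the lagged peaks superimpose; losses use an
    assumed convex damage function of the local peak."""
    local_peak = total_rain * rain_share
    t = np.arange(0, 48)
    # superimpose crude triangular hydrographs at the outlet
    outlet = sum(p * np.clip(1 - np.abs(t - lag) / 6, 0, None)
                 for p, lag in zip(local_peak, lag_h))
    local_loss = exposure * (local_peak / total_rain) ** 2
    return outlet.max(), local_loss.sum()

# ensemble of spatial rainfall patterns, all with the same basin-total rainfall
shares = rng.dirichlet(np.ones(n_sub), size=1000)
peaks, losses = np.array([basin_loss(s) for s in shares]).T
print(f"outlet peak range:  {peaks.min():.0f} to {peaks.max():.0f}")
print(f"basin loss range:   {losses.min():.1f} to {losses.max():.1f} (same total rainfall)")
print(f"corr(peak, loss) =  {np.corrcoef(peaks, losses)[0, 1]:.2f}")
```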
14 CFR 440.7 - Determination of maximum probable loss.
Code of Federal Regulations, 2010 CFR
2010-01-01
... determine the maximum probable loss (MPL) from covered claims by a third party for bodily injury or property... licensee, or permittee, if interagency consultation may delay issuance of the MPL determination. (c... after the MPL determination is issued. Any change in financial responsibility requirements as a result...
NASA Astrophysics Data System (ADS)
Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.
2003-04-01
Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of critical points, as historical hail loss data are usually only available from some events while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damages are different from footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. Estimating the return period of losses calculated by moving historically derived footprints over a portfolio is difficult. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceedance frequency curve, which can be used to derive the PML.
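The final step described above, combining a scenario loss set with event return periods into an exceedance frequency curve and reading off the PML at the 1-in-250-year level, can be sketched as follows (the event losses and frequencies are made-up numbers, not the Swiss hail data):

```python
import numpy as np

# hypothetical event set: scenario loss (M CHF) and annual occurrence frequency (1/yr)
losses = np.array([  5,  12,  30,  45,   80,  120,   200,   350], dtype=float)
freqs  = np.array([0.9, 0.5, 0.2, 0.1, 0.05, 0.02, 0.008, 0.003])

# exceedance frequency: total annual frequency of events with loss >= x
order = np.argsort(losses)[::-1]
loss_sorted = losses[order]
exceed_freq = np.cumsum(freqs[order])        # descending losses -> cumulative frequency

# PML at a 250-year return period: loss whose exceedance frequency equals 1/250
target = 1.0 / 250.0
pml = np.interp(target, exceed_freq, loss_sorted)
print(f"PML(250 yr) ~ {pml:.0f} M CHF")
```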
NASA Astrophysics Data System (ADS)
Hwang, Eunju; Kim, Kyung Jae; Roijers, Frank; Choi, Bong Dae
In the centralized polling mode in IEEE 802.16e, a base station (BS) polls mobile stations (MSs) for bandwidth reservation in one of three polling modes: unicast, multicast, or broadcast polling. In unicast polling, the BS polls each individual MS to allow it to transmit a bandwidth request packet. This paper presents an analytical model for the unicast polling of bandwidth requests in IEEE 802.16e networks over a Gilbert-Elliott error channel. We derive the probability distribution for the delay of bandwidth requests due to wireless transmission errors and find the loss probability of request packets due to a finite number of retransmission attempts. Using the delay distribution and the loss probability, we optimize the number of polling slots within a frame and the maximum retransmission number while satisfying QoS on the total loss probability, which combines two losses: packet loss due to exceeding the maximum number of retransmissions and delay outage loss due to the maximum tolerable delay bound. In addition, we obtain the utilization of polling slots, defined as the ratio of the number of polling slots used for the MS's successful transmissions to the total number of polling slots used by the MS over a long run time. Analytical results are shown to match well with simulation results. Numerical results give examples of the optimal number of polling slots within a frame and the optimal maximum retransmission number depending on delay bounds, the number of MSs, and the channel conditions.
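A minimal Monte Carlo sketch of the loss mechanism analyzed above, a bandwidth request sent over a two-state Gilbert-Elliott channel with a bounded number of retransmission attempts, is shown below; the channel parameters and the retransmission limit are assumptions, and the sketch simulates rather than reproduces the paper's analytical model:

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed Gilbert-Elliott parameters (not from the paper)
p_gb, p_bg = 0.1, 0.3          # transition probabilities good->bad, bad->good per frame
per_good, per_bad = 0.01, 0.5  # packet error rate in the good / bad state
R_MAX = 4                      # maximum number of (re)transmission attempts

def one_request():
    """Return (delay_in_attempts, lost) for a single bandwidth request."""
    state = 'G' if rng.random() < p_bg / (p_gb + p_bg) else 'B'   # stationary start
    for attempt in range(1, R_MAX + 1):
        per = per_good if state == 'G' else per_bad
        if rng.random() > per:
            return attempt, False          # request got through
        # channel evolves one frame before the retransmission
        if state == 'G' and rng.random() < p_gb:
            state = 'B'
        elif state == 'B' and rng.random() < p_bg:
            state = 'G'
    return R_MAX, True                     # all attempts failed -> request lost

trials = [one_request() for _ in range(100_000)]
delays = np.array([d for d, lost in trials if not lost])
loss_prob = np.mean([lost for _, lost in trials])
print(f"loss probability ~ {loss_prob:.4f}")
print(f"mean delay of delivered requests ~ {delays.mean():.2f} attempts")
```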
A quantitative method for risk assessment of agriculture due to climate change
NASA Astrophysics Data System (ADS)
Dong, Zhiqiang; Pan, Zhihua; An, Pingli; Zhang, Jingting; Zhang, Jun; Pan, Yuying; Huang, Lei; Zhao, Hui; Han, Guolin; Wu, Dong; Wang, Jialin; Fan, Dongliang; Gao, Lin; Pan, Xuebiao
2018-01-01
Climate change has greatly affected agriculture. Agriculture faces increasing risks because of its sensitivity and vulnerability to climate change. Scientific assessment of climate change-induced agricultural risks could help to deal actively with climate change and ensure food security. However, quantitative assessment of risk is a difficult issue. Here, based on the IPCC assessment reports, a quantitative method for risk assessment of agriculture due to climate change is proposed. Risk is described as the product of the degree of loss and its probability of occurrence. The degree of loss can be expressed by the yield change amplitude. The probability of occurrence can be calculated by the new concept of climate change effect-accumulated frequency (CCEAF). Specific steps of this assessment method are suggested. The method is shown to be feasible and practical using spring wheat in Wuchuan County of Inner Mongolia as a test case. The results show that the fluctuation of spring wheat yield increased with the warming and drying climatic trend in Wuchuan County. For the maximum temperature increase of 88.3%, the maximum yield decrease and its probability were 3.5% and 64.6%, respectively, and the risk was 2.2%. For the maximum precipitation decrease of 35.2%, the maximum yield decrease and its probability were 14.1% and 56.1%, respectively, and the risk was 7.9%. For the combined impacts of temperature and precipitation, the maximum yield decrease and its probability were 17.6% and 53.4%, respectively, and the risk increased to 9.4%. If appropriate adaptation strategies are not adopted, the degree of loss from the negative impacts of multiple climatic factors and its probability of occurrence will both increase, and the risk will grow accordingly.
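A quick check of the reported figures against the stated definition (risk = degree of loss x probability of occurrence), with small differences attributable to rounding in the abstract:

```python
# Risk = degree of loss x probability of occurrence (values quoted in the abstract)
cases = {
    "temperature (max +88.3%)":   (3.5, 64.6),
    "precipitation (max -35.2%)": (14.1, 56.1),
    "combined impacts":           (17.6, 53.4),
}
for name, (loss_pct, prob_pct) in cases.items():
    risk = loss_pct / 100 * prob_pct / 100 * 100
    print(f"{name}: risk ~ {risk:.1f}%")
# prints ~2.3%, ~7.9%, ~9.4%; the abstract reports 2.2%, 7.9% and 9.4%
```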
The returns and risks of investment portfolio in stock market crashes
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Long, Chao; Chen, Xiao-Dan
2015-06-01
The returns and risks of an investment portfolio in stock market crashes are investigated using a theoretical model based on a modified Heston model with a cubic nonlinearity, proposed by Spagnolo and Valenti. By numerically simulating the probability density function of returns and the mean escape time of the model, the results indicate that: (i) the maximum stability of returns is associated with the maximum dispersion of the investment portfolio and an optimal stop-loss position; (ii) the maximum risk is associated with the worst dispersion of the investment portfolio, and the risks of the investment portfolio are enhanced by increasing the stop-loss position. In addition, good agreement between the theoretical results and real market data is found in the behaviors of the probability density function and the mean escape time.
DOT National Transportation Integrated Search
1997-07-22
The Commercial Space Launch Act requires that all commercial licensees : demonstrate financial responsibility to compensate for the maximum probable : loss (MPL) from claims by a third party for death, bodily injury, or property : damage or loss resu...
Code of Federal Regulations, 2010 CFR
2010-01-01
... payload, including type (e.g., telecommunications, remote sensing), propellants, and hazardous components... description of any payload, including type (e.g., telecommunications, remote sensing), propellants, and...
Analytic saddlepoint approximation for ionization energy loss distributions
NASA Astrophysics Data System (ADS)
Sjue, S. K. L.; George, R. N.; Mathews, D. G.
2017-09-01
We present a saddlepoint approximation for ionization energy loss distributions, valid for arbitrary relativistic velocities of the incident particle 0 < v / c < 1 , provided that ionizing collisions are still the dominant energy loss mechanism. We derive a closed form solution closely related to Moyal's distribution. This distribution is intended for use in simulations with relatively low computational overhead. The approximation generally reproduces the Vavilov most probable energy loss and full width at half maximum to better than 1% and 10%, respectively, with significantly better agreement as Vavilov's κ approaches 1.
Analytic saddlepoint approximation for ionization energy loss distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjue, Sky K. L.; George, Jr., Richard Neal; Mathews, David Gregory
Here, we present a saddlepoint approximation for ionization energy loss distributions, valid for arbitrary relativistic velocities of the incident particle 0 < v/c < 1, provided that ionizing collisions are still the dominant energy loss mechanism. We derive a closed form solution closely related to Moyal’s distribution. This distribution is intended for use in simulations with relatively low computational overhead. The approximation generally reproduces the Vavilov most probable energy loss and full width at half maximum to better than 1% and 10%, respectively, with significantly better agreement as Vavilov’s κ approaches 1.
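Since the closed form is reported to be closely related to Moyal's distribution, the following sketch evaluates the most probable value and full width at half maximum of the standard (unit-scale) Moyal density numerically; it does not attempt the paper's mapping from physical beam parameters to this reduced variable:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import moyal        # available in SciPy >= 1.1

# Standard Moyal density; its mode (most probable value) is at x = 0
f = moyal.pdf
half_max = f(0) / 2

# half-maximum crossings on either side of the mode, found by root bracketing
left = brentq(lambda x: f(x) - half_max, -3, 0)
right = brentq(lambda x: f(x) - half_max, 0, 6)
print("most probable value of the standard Moyal density: 0")
print(f"FWHM of the standard Moyal density ~ {right - left:.3f}")
```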
Estimation of Multinomial Probabilities.
1978-11-01
1971) and Alam (1978) have shown that the maximum likelihood estimator is admissible with respect to the quadratic loss. Steinhaus (1957) and Trybula...appear). Johnson, B. Mck. (1971). On admissible estimators for certain fixed sample binomial populations. Ann. Math. Statist. 92, 1579-1587. Steinhaus , H
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of the loss of load probability (LOLP), many probabilistic methods use simplified LOLP formulations to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
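For readers unfamiliar with LOLP, a minimal capacity-outage-table sketch is given below; the unit capacities, forced outage rates and load are invented, and the simplified LOLP formulations discussed in the paper are not reproduced:

```python
import numpy as np

# assumed generating units: (capacity in MW, forced outage rate)
units = [(200, 0.05), (200, 0.05), (150, 0.04), (100, 0.02), (50, 0.02)]
load = 520.0                    # assumed system load (MW)

# capacity-outage probability table built by discrete convolution over units
cap_total = sum(c for c, _ in units)
caps = np.array([0.0])          # possible total outage capacities
probs = np.array([1.0])         # P(outage = caps[i])
for c, q in units:
    new = {}
    for cap, p in zip(caps, probs):
        new[cap] = new.get(cap, 0.0) + p * (1 - q)        # unit available
        new[cap + c] = new.get(cap + c, 0.0) + p * q      # unit on forced outage
    caps = np.array(sorted(new))
    probs = np.array([new[k] for k in caps])

# LOLP: probability that available capacity falls below the load
available = cap_total - caps
lolp = probs[available < load].sum()
print(f"installed capacity = {cap_total} MW, LOLP ~ {lolp:.5f}")
```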
1980-03-01
recommended guidelines, the Spillway Design Flood (SDF) ranges between the 1/2-PMF (Probable Maximum Flood) and PMF. Since the dam is near the lower end of...overtopping. A breach analysis indicates that failure under 1/2-PMF conditions would probably not lead to increased property damage or loss of life at...
Wood, Chris C; Gross, Mart R
2008-02-01
Conservation biologists mostly agree on the need to identify and protect biodiversity below the species level but have not yet resolved the best approach. We addressed 2 issues relevant to this debate. First, we distinguished between the abstract goal of preserving the maximum amount of unique biodiversity and the pragmatic goal of minimizing the loss of ecological goods and services given that further loss of biodiversity seems inevitable. Second, we distinguished between the scientific task of assessing extinction risk and the normative task of choosing targets for protection. We propose that scientific advice on extinction risk be given at the smallest meaningful scale: the elemental conservation unit (ECU). An ECU is a demographically isolated population whose probability of extinction over the time scale of interest (say 100 years) is not substantially affected by natural immigration from other populations. Within this time frame, the loss of an ECU would be irreversible without human intervention. Society's decision to protect an ECU ought to reflect human values that have social, economic, and political dimensions. Scientists can best inform this decision by providing advice about the probability that an ECU will be lost and the ecological and evolutionary consequences of that loss in a form that can be integrated into landscape planning. The ECU approach provides maximum flexibility to decision makers and ensures that the scientific task of assessing extinction risk informs, but remains distinct from, the normative social challenge of setting conservation targets.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
NASA Technical Reports Server (NTRS)
Bell, V. L.
1980-01-01
The potential damage to electrical equipment caused by the release of carbon fibers from burning commercial airliners is assessed in terms of annual expected costs and maximum losses at low probabilities of occurrence. A materials research program to provide alternate or modified composite materials for aircraft structures is reviewed.
Multivariate flood risk assessment: reinsurance perspective
NASA Astrophysics Data System (ADS)
Ghizzoni, Tatiana; Ellenrieder, Tobias
2013-04-01
For insurance and re-insurance purposes the knowledge of the spatial characteristics of fluvial flooding is fundamental. The probability of simultaneous flooding at different locations during one event and the associated severity and losses have to be estimated in order to assess premiums and for accumulation control (Probable Maximum Loss calculation). Therefore, the identification of a statistical model able to describe the multivariate joint distribution of flood events at multiple locations is necessary. In this context, copulas can be viewed as alternative tools for dealing with multivariate simulations, as they allow the dependence structures of random vectors to be formalized. An application of copula functions for flood scenario generation is presented for Australia (Queensland, New South Wales and Victoria), where 100,000 possible flood scenarios covering approximately 15,000 years were simulated.
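A minimal sketch of copula-based scenario generation in the spirit described above, here with a Gaussian copula and assumed GEV margins for three hypothetical gauges (the actual study's copula family, margins and parameters are not given in the abstract):

```python
import numpy as np
from scipy.stats import norm, genextreme

rng = np.random.default_rng(1)

# assumed marginal GEV distributions of annual peak flow at 3 gauges (m^3/s)
margins = [genextreme(c=-0.10, loc=300, scale=80),
           genextreme(c=-0.15, loc=150, scale=40),
           genextreme(c=-0.05, loc=500, scale=120)]

# assumed spatial dependence: correlation matrix of the Gaussian copula
R = np.array([[1.0, 0.6, 0.4],
              [0.6, 1.0, 0.5],
              [0.4, 0.5, 1.0]])

n_years = 15_000
z = rng.multivariate_normal(np.zeros(3), R, size=n_years)   # correlated normals
u = norm.cdf(z)                                              # copula sample in [0,1]^3
flows = np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(margins)])

# probability that all three gauges exceed their local 50-year flow in the same event
thresholds = np.array([m.ppf(1 - 1/50) for m in margins])
p_joint = np.mean((flows > thresholds).all(axis=1))
print(f"P(simultaneous 50-yr exceedance at all gauges) ~ {p_joint:.4f}")
```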
Estimating extreme losses for the Florida Public Hurricane Model—part II
NASA Astrophysics Data System (ADS)
Gulati, Sneh; George, Florence; Hamid, Shahid
2018-02-01
Rising global temperatures are leading to an increase in the number of extreme events and losses (http://www.epa.gov/climatechange/science/indicators/). Accurate estimation of these extreme losses is critical to insurance companies seeking to protect themselves against them. In a previous paper, Gulati et al. (2014) discussed probable maximum loss (PML) estimation for the Florida Public Hurricane Loss Model (FPHLM) using parametric and nonparametric methods. In this paper, we investigate the use of semi-parametric methods to do the same. Detailed analysis of the data shows that the annual losses from the FPHLM do not tend to be very heavy tailed, and therefore neither the popular Hill estimator nor the moment estimator works well. However, Pickands' estimator with a threshold around the 84th percentile provides a good fit for the extreme quantiles of the losses.
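For reference, Pickands' estimator can be computed from order statistics as sketched below; the loss sample and the way the threshold (here roughly the 84th percentile) is translated into the order-statistic index are illustrative assumptions:

```python
import numpy as np

def pickands_xi(sample, k):
    """Pickands' estimator of the extreme-value index from the k-th,
    2k-th and 4k-th largest order statistics (requires 4k <= n)."""
    x = np.sort(sample)[::-1]                     # descending order statistics
    return (1 / np.log(2)) * np.log((x[k - 1] - x[2 * k - 1]) /
                                    (x[2 * k - 1] - x[4 * k - 1]))

# illustrative data only: a light-tailed annual-loss sample (the abstract reports
# the FPHLM losses are not very heavy tailed)
rng = np.random.default_rng(7)
losses = rng.gamma(shape=2.0, scale=1e8, size=1000)

# take roughly the top 16% of the sample (threshold near the 84th percentile)
k = int(round(0.16 * len(losses) / 4))            # 4k order statistics span ~the top 16%
xi = pickands_xi(losses, k)
print(f"Pickands' xi estimate: {xi:.3f} (values near zero indicate a light tail)")
```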
Macro-economic assessment of flood risk in Italy under current and future climate
NASA Astrophysics Data System (ADS)
Carrera, Lorenzo; Koks, Elco; Mysiak, Jaroslav; Aerts, Jeroen; Standardi, Gabriele
2014-05-01
This paper explores an integrated methodology for assessing the direct and indirect costs of fluvial flooding to estimate current and future fluvial flood risk in Italy. Our methodology combines a Geographic Information System spatial approach with a general economic equilibrium approach using a downscaled, modified version of a Computable General Equilibrium model at NUTS2 scale. Given the level of uncertainty in the behavior of disaster-affected economies, the simulation considers a wide range of business recovery periods. We calculate expected annual losses for each NUTS2 region and exceedance probability curves to determine probable maximum losses. Given a certain acceptable level of risk, we describe the conditions of flood protection and business recovery periods under which losses are contained within this limit. Because direct costs are an overestimation of stock losses while indirect costs represent the macro-economic effects, our results have different policy meanings. While the former is relevant for post-disaster recovery, the latter is more relevant for public policy issues, particularly for cost-benefit analysis and resilience assessment.
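The two loss metrics mentioned above, expected annual losses and probable maximum losses from an exceedance probability curve, can be computed as in the following sketch with a hypothetical EP curve (the integration simply truncates beyond the last tabulated point):

```python
import numpy as np

# hypothetical exceedance-probability (EP) curve for one region:
# annual probability that the loss exceeds the given amount (M EUR)
loss  = np.array([  0,  50, 100,  250,  500, 1000,  2500], dtype=float)
p_exc = np.array([1.0, 0.2, 0.1, 0.04, 0.02, 0.01, 0.004])

# expected annual loss = area under the EP curve (integral of P(L > x) dx)
eal = np.trapz(p_exc, loss)
print(f"expected annual loss ~ {eal:.0f} M EUR")

# probable maximum loss at an "acceptable" 1-in-200-year level
pml_200 = np.interp(0.005, p_exc[::-1], loss[::-1])
print(f"PML(200 yr) ~ {pml_200:.0f} M EUR")
```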
NASA Astrophysics Data System (ADS)
Florian, Ehmele; Michael, Kunz
2016-04-01
Several major flood events occurred in Germany in the past 15-20 years, especially in the eastern parts along the rivers Elbe and Danube. Examples include the major floods of 2002 and 2013 with an estimated loss of about 2 billion Euros each. The last major flood events in the State of Baden-Württemberg in southwest Germany occurred in the years 1978 and 1993/1994 along the rivers Rhine and Neckar with an estimated total loss of about 150 million Euros (converted) each. Flood hazard originates from a combination of different meteorological, hydrological and hydraulic processes. Currently there is no defined methodology available for evaluating and quantifying the flood hazard and related risk for larger areas or whole river catchments instead of single gauges. In order to estimate the probable maximum loss for higher return periods (e.g. 200 years, PML200), a stochastic model approach is designed, since observational data are limited in time and space. In our approach, precipitation is linearly composed of three elements: background precipitation, orographically induced precipitation, and a convectively driven part. We use the linear theory of orographic precipitation formation for the stochastic precipitation model (SPM), which is based on fundamental statistics of relevant atmospheric variables. For an adequate number of historic flood events, the corresponding atmospheric conditions and parameters are determined in order to calculate a probability density function (pdf) for each variable. This method encompasses all theoretically possible scenarios, including those that have not yet occurred. This work is part of the FLORIS-SV (FLOod RISk Sparkassen Versicherung) project and establishes the first step of a complete modelling chain of the flood risk. On the basis of the generated stochastic precipitation event set, hydrological and hydraulic simulations will be performed to estimate discharge and water level. The resulting stochastic flood event set will be used to quantify the flood risk and to estimate the probable maximum loss (e.g. PML200) for a given property (buildings, industry) portfolio.
Losses in chopper-controlled DC series motors
NASA Technical Reports Server (NTRS)
Hamilton, H. B.
1982-01-01
Motors for electric vehicle (EV) applications must have different features from dc motors designed for industrial applications. The EV motor application is characterized by the following requirements: (1) the need for the highest possible efficiency from light load to overload, for maximum EV range; (2) large short-time overload capability (the ratio of peak to average power varies from 5/1 in heavy city traffic to 3/1 in suburban driving situations); and (3) operation from power supply voltage levels of 84 to 144 volts (probably 120 volts maximum). A test facility utilizing a dc generator as a substitute for a battery pack was designed and utilized. Criteria for the design of such a facility are presented. Two motors, differing in design detail, commercially available for EV use were tested. Measured losses are discussed, as are waveforms and their harmonic content, the measurements of resistance and inductance, EV motor/chopper application criteria, and motor design considerations.
Blocking Losses With a Photon Counter
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Piazzolla, Sabino
2012-01-01
It was not known how to accurately assess losses in a communications link due to photodetector blocking, a phenomenon wherein a detector is rendered inactive for a short time after the detection of a photon. When used to detect a communications signal, blocking leads to losses relative to an ideal detector, which may be measured as a reduction in the communications rate for a given received signal power, or an increase in the signal power required to support the same communications rate. This work involved characterizing blocking losses for single detectors and arrays of detectors. Blocking may be mitigated by spreading the signal intensity over an array of detectors, reducing the count rate on any one detector. A simple approximation was made to the blocking loss as a function of the probability that a detector is unblocked at a given time, essentially treating the blocking probability as a scaling of the detection efficiency. An exact statistical characterization was derived for a single detector, and an approximation for multiple detectors. This allowed derivation of several accurate approximations to the loss. Methods were also derived to account for a rise time in recovery, and non-uniform illumination due to diffraction and atmospheric distortion of the phase front. It was assumed that the communications signal is intensity modulated and received by an array of photon-counting photodetectors. For the purpose of this analysis, it was assumed that the detectors are ideal, in that they produce a signal that allows one to reproduce the arrival times of electrons, produced either as photoelectrons or from dark noise, exactly. For single detectors, the performance of the maximum-likelihood (ML) receiver in blocking is illustrated, as well as that of a maximum-count (MC) receiver that, when receiving a pulse-position-modulated (PPM) signal, selects the symbol corresponding to the slot with the largest electron count. Whereas the MC receiver saturates at high count rates, the ML receiver may not. The loss in capacity, symbol-error-rate (SER), and count-rate were numerically computed. It was shown that the capacity and symbol-error-rate losses track, whereas the count-rate loss does not generally reflect the SER or capacity loss, as the slot-statistics at the detector output are no longer Poisson. It is also shown that the MC receiver loss may be accurately predicted for dead times on the order of a slot.
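A rough numerical illustration of the mitigation-by-spreading argument, treating the unblocked probability as a scaling of detection efficiency for a non-paralyzable dead time (an assumed simplification, not the exact statistical characterization derived in the work):

```python
import numpy as np

def unblocked_fraction(count_rate, dead_time, n_detectors):
    """Approximate fraction of time a detector in an array is unblocked,
    assuming Poisson arrivals split evenly over the array and a simple
    non-paralyzable dead time (an assumption, not the paper's exact model)."""
    rate_per_det = count_rate / n_detectors
    return 1.0 / (1.0 + rate_per_det * dead_time)

signal_rate = 5e7        # detected photons per second (assumed)
dead_time = 50e-9        # seconds a detector stays blocked after a count (assumed)

for n in (1, 4, 16, 64):
    eta = unblocked_fraction(signal_rate, dead_time, n)
    print(f"{n:3d} detectors: efficiency scaling ~ {eta:.3f} "
          f"({-10 * np.log10(eta):.2f} dB blocking loss)")
```

Spreading the same count rate over more detectors pushes the per-detector rate-dead-time product down, which is the mechanism the abstract describes.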
Karl, J Bradley; Medders, Lorilee A; Maroney, Patrick F
2016-06-01
We examine whether the risk characterization estimated by catastrophic loss projection models is sensitive to the revelation of new information regarding risk type. We use commercial loss projection models from two widely employed modeling firms to estimate the expected hurricane losses of Florida Atlantic University's building stock, both including and excluding secondary information regarding hurricane mitigation features that influence damage vulnerability. We then compare the results of the models without and with this revealed information and find that the revelation of additional, secondary information influences modeled losses for the windstorm-exposed university building stock, primarily evidenced by meaningful percent differences in the loss exceedance output indicated after secondary modifiers are incorporated in the analysis. Secondary risk characteristics for the data set studied appear to have substantially greater impact on probable maximum loss estimates than on average annual loss estimates. While it may be intuitively expected for catastrophe models to indicate that secondary risk characteristics hold value for reducing modeled losses, the finding that the primary value of secondary risk characteristics is in reduction of losses in the "tail" (low probability, high severity) events is less intuitive, and therefore especially interesting. Further, we address the benefit-cost tradeoffs that commercial entities must consider when deciding whether to undergo the data collection necessary to include secondary information in modeling. Although we assert the long-term benefit-cost tradeoff is positive for virtually every entity, we acknowledge short-term disincentives to such an effort. © 2015 Society for Risk Analysis.
The returns and risks of investment portfolio in a financial market
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2014-07-01
The returns and risks of an investment portfolio in a financial system were investigated by constructing a theoretical model based on the Heston model. After the theoretical model and the portfolio were calculated and analyzed, we find the following: (i) the statistical properties (i.e., the probability distribution, the variance and the loss rate of the equity portfolio return) of the simulation results of the theoretical model and of the real financial data obtained from the Dow Jones Industrial Average are in good agreement; (ii) the maximum dispersion of the investment portfolio is associated with the maximum stability of the equity portfolio return and minimal investment risk; (iii) an increase of the investment period and a worst investment period are associated with a decrease of stability of the equity portfolio return and a maximum investment risk, respectively.
Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.
Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen
In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
The Effect of Ice Formations on Propeller Performance
NASA Technical Reports Server (NTRS)
Neel, C. B., Jr.; Bright, L. G.
1950-01-01
Measurements of propeller efficiency loss due to ice formation are supplemented by an analysis to establish the magnitude of efficiency losses to be anticipated during flight in icing conditions. The measurements were made during flight in natural icing conditions, whereas the analysis consisted of an investigation of changes in blade-section aerodynamic characteristics caused by ice formation and the resulting propeller efficiency changes. Agreement in the order of magnitude of efficiency losses to be expected is obtained between measured and analytical results. The results indicate that, in general, efficiency losses can be expected to be less than 10 percent, whereas maximum losses, which will be encountered only rarely, may be as high as 15 or 20 percent. Reported losses larger than 15 or 20 percent, based on reductions in airplane performance, are probably due to ice accretions on other parts of the airplane. Blade-element theory is used in the analytical treatment, and calculations are made to show the degree to which the aerodynamic characteristics of a blade section must be altered to produce various propeller efficiency losses. The effects of ice accretions on airfoil-section characteristics at subcritical speeds and their influence on drag-divergence Mach number are examined, and the attendant maximum efficiency losses are computed. The effect of kinetic heating on the radial extent of ice formation is considered, and its influence on the required length of blade heating shoes is discussed. It is demonstrated how the efficiency loss resulting from an icing encounter is influenced by the decisions of the pilot in adjusting the engine and propeller controls.
Assessment of the reliability of standard automated perimetry in regions of glaucomatous damage.
Gardiner, Stuart K; Swanson, William H; Goren, Deborah; Mansberger, Steven L; Demirel, Shaban
2014-07-01
Visual field testing uses high-contrast stimuli in areas of severe visual field loss. However, retinal ganglion cells saturate with high-contrast stimuli, suggesting that the probability of detecting perimetric stimuli may not increase indefinitely as contrast increases. Driven by this concept, this study examines the lower limit of perimetric sensitivity for reliable testing by standard automated perimetry. Evaluation of a diagnostic test. A total of 34 participants with moderate to severe glaucoma; mean deviation at their last clinic visit averaged -10.90 dB (range, -20.94 to -3.38 dB). A total of 75 of the 136 locations tested had a perimetric sensitivity of ≤ 19 dB. Frequency-of-seeing curves were constructed at 4 nonadjacent visual field locations by the Method of Constant Stimuli (MOCS), using 35 stimulus presentations at each of 7 contrasts. Locations were chosen a priori and included at least 2 with glaucomatous damage but a sensitivity of ≥ 6 dB. Cumulative Gaussian curves were fit to the data, first assuming a 5% false-negative rate and subsequently allowing the asymptotic maximum response probability to be a free parameter. The strength of the relation (R²) between perimetric sensitivity (mean of last 2 clinic visits) and MOCS sensitivity (from the experiment) for all locations with perimetric sensitivity within ± 4 dB of each selected value, at 0.5 dB intervals. Bins centered at sensitivities ≥ 19 dB always had R² > 0.1. All bins centered at sensitivities ≤ 15 dB had R² < 0.1, an indication that sensitivities are unreliable. No consistent conclusions could be drawn between 15 and 19 dB. At 57 of the 81 locations with perimetric sensitivity <19 dB, including 49 of the 63 locations ≤ 15 dB, the fitted asymptotic maximum response probability was <80%, consistent with the hypothesis of response saturation. At 29 of these locations the asymptotic maximum was <50%, and so contrast sensitivity (50% response rate) is undefined. Clinical visual field testing may be unreliable when visual field locations have sensitivity below approximately 15 to 19 dB because of a reduction in the asymptotic maximum response probability. Researchers and clinicians may have difficulty detecting worsening sensitivity in these visual field locations, and this difficulty may occur commonly in patients with glaucoma with moderate to severe glaucomatous visual field loss. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
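A sketch of the curve-fitting step described above, a cumulative Gaussian frequency-of-seeing curve with the asymptotic maximum response probability left free (here with a small fixed false-positive rate; the contrasts and response counts are invented, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def foS(contrast_db, mu, sigma, p_max, fp_rate=0.05):
    """Frequency-of-seeing curve: cumulative Gaussian in contrast with a small
    fixed false-positive rate and a free asymptotic maximum response probability."""
    return fp_rate + (p_max - fp_rate) * norm.cdf((contrast_db - mu) / sigma)

# hypothetical MOCS data: 7 contrasts, 35 presentations each, saturating near 0.6
contrasts = np.array([4, 8, 12, 16, 20, 24, 28], dtype=float)
n_pres = 35
seen = np.array([2, 4, 9, 14, 18, 20, 21])
p_seen = seen / n_pres

popt, _ = curve_fit(foS, contrasts, p_seen,
                    p0=[15.0, 4.0, 0.8],
                    bounds=([0, 0.5, 0.05], [40, 20, 1.0]))
mu, sigma, p_max = popt
print(f"fitted asymptotic maximum response probability ~ {p_max:.2f}")
if p_max < 0.5:
    print("contrast sensitivity (50% response rate) undefined at this location")
```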
Outgassing Data for Selecting Spacecraft Materials
NASA Technical Reports Server (NTRS)
Campbell, William A., Jr.; Scialdone, John J.
1993-01-01
This tenth compilation of outgassing data of materials intended for spacecraft use supersedes Reference Publication 1124, Revision 2, November 1990. The data were obtained at the Goddard Space Flight Center (GSFC), utilizing equipment developed at Stanford Research Institute (SRI) under contract to the Jet Propulsion Laboratory (JPL). SRI personnel developed an apparatus for determining the mass loss in vacuum and for collecting the outgassed products. The outgassing data have been presented in three different ways in order to facilitate material selection. In Section A, the materials are divided by category into the 18 probable uses, such as adhesives, greases, paints, potting compounds, and so forth. In Section B, all the materials contained in Section A are listed in alphabetical order by the manufacturer's identification. In Section C, the only materials listed are those having 'Total Mass Loss' (TML) and Collected Volatile Condensable Materials (CVCM) equal to or lower than a maximum 1.0 percent TML and a maximum 0.10 percent CVCM. These are grouped by use, as in Section A.
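The Section C screening criterion translates directly into a simple filter; the material entries below are illustrative only:

```python
# Screening a candidate materials list against the Section C acceptance limits
# (<= 1.0 % TML and <= 0.10 % CVCM). The entries below are made up for illustration.
materials = [
    {"name": "Adhesive A", "tml": 0.45, "cvcm": 0.02},
    {"name": "Grease B",   "tml": 2.30, "cvcm": 0.35},
    {"name": "Paint C",    "tml": 0.95, "cvcm": 0.09},
]

acceptable = [m for m in materials if m["tml"] <= 1.0 and m["cvcm"] <= 0.10]
for m in acceptable:
    print(f"{m['name']}: TML {m['tml']}%, CVCM {m['cvcm']}%  -> Section C candidate")
```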
Hydrometeorological Report No. 39: Probable Maximum Precipitation in the Hawaiian Islands (1963)
Hydrometeorological Report No. 41: Probable Maximum and TVA Precipitation over the Tennessee River Basin above Chattanooga (1965)
Hydrometeorological Report No. 46: Probable Maximum Precipitation, Mekong River Basin (1970)
2024 Unmanned Undersea Warfare Concept
2013-06-01
mine. Assumptions are that the high-tech mine would have a 400-meter range that spans 360 degrees, a 90% probability of detecting a HVU, and a 30...motor volume – The electric propulsion motor is assumed to be 0.127 cubic meters; a common figure of 24" x 18" x 18" is assumed. This size will allow...regard to propagation loss is assumed to be 400 Hz. Using Excel spreadsheet modeling, the maximum range is determined by finding that range resulting in
The internal consistency of the standard gamble: tests after adjusting for prospect theory.
Oliver, Adam
2003-07-01
This article reports a study that tests whether the internal consistency of the standard gamble can be improved upon by incorporating loss weighting and probability transformation parameters in the standard gamble valuation procedure. Five alternatives to the standard EU formulation are considered: (1) probability transformation within an EU framework; and, within a prospect theory framework, (2) loss weighting and full probability transformation, (3) no loss weighting and full probability transformation, (4) loss weighting and no probability transformation, and (5) loss weighting and partial probability transformation. Of the five alternatives, only the prospect theory formulation with loss weighting and no probability transformation offers an improvement in internal consistency over the standard EU valuation procedure.
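For orientation, the sketch below contrasts the expected-utility reading of a standard-gamble indifference probability with a probability-transformed reading, using the Tversky-Kahneman (1992) weighting function as one common parameterization; the article's exact specifications, including its loss-weighting variants, may differ:

```python
import numpy as np

def w_tk(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function (one common
    parameterization; the article's exact specification may differ)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# An elicited standard-gamble indifference probability p means the respondent is
# indifferent between the health state for sure and a gamble giving full health
# with probability p (and death otherwise).
for p in (0.6, 0.8, 0.95):
    u_eu = p            # expected-utility valuation of the health state
    u_pt = w_tk(p)      # valuation with probability transformation only
    print(f"p = {p:.2f}:  EU value = {u_eu:.2f},  weighted value = {u_pt:.2f}")
```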
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft decision or hard decision, maximum likelihood or bounded distance, are discussed. Error performance for codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was found that the difference in performance between the suboptimum multi-stage soft decision maximum likelihood decoding of a modulation code and the single stage optimum decoding of the overall code is very small, only a fraction of a dB loss in SNR at a probability of an incorrect decoding for a block of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Generation of a Catalogue of European Windstorms
NASA Astrophysics Data System (ADS)
Varino, Filipa; Baptiste Granier, Jean; Bordoy, Roger; Arbogast, Philippe; Joly, Bruno; Riviere, Gwendal; Fandeur, Marie-Laure; Bovy, Henry; Mitchell-Wallace, Kirsten; Souch, Claire
2016-04-01
The probability of multiple windstorm events within a year is crucial to any (re)insurance company writing European wind business. Indeed, the volatility of losses is enhanced by the clustering of storms (cyclone families), as occurred in early 1990 (Daria, Vivian, Wiebke), December 1999 (Lothar, Martin) and December 2015 (Desmond, Eva, Frank), among others. In order to track winter extratropical cyclones, we use the maximum relative vorticity at 850 hPa from the newly released long-term ERA-20C reanalysis of the ECMWF, covering the beginning of the 20th century until 2010. We develop an automatic procedure to define events. We then quantify the severity of each storm using loss and meteorological indices at country and Europe-wide level. Validation against market losses for the period 1970-2010 is undertaken before considering the severity and frequency of European windstorms for the 110-year period.
Maximum predictive power and the superposition principle
NASA Technical Reports Server (NTRS)
Summhammer, Johann
1994-01-01
In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
NASA Astrophysics Data System (ADS)
Sabarish, R. Mani; Narasimhan, R.; Chandhru, A. R.; Suribabu, C. R.; Sudharsan, J.; Nithiyanantham, S.
2017-05-01
In the design of irrigation and other hydraulic structures, evaluating the magnitude of extreme rainfall for a specific probability of occurrence is of much importance. The capacity of such structures is usually designed to cater to the probability of occurrence of extreme rainfall during its lifetime. In this study, an extreme value analysis of rainfall for Tiruchirapalli City in Tamil Nadu was carried out using 100 years of rainfall data. Statistical methods were used in the analysis. The best-fit probability distribution was evaluated for 1, 2, 3, 4 and 5 days of continuous maximum rainfall. The goodness of fit was evaluated using the Chi-square test. The results of the goodness-of-fit tests indicate that the log-Pearson type III distribution is the overall best fit for the 1-day maximum rainfall and the consecutive 2-, 3-, 4-, 5- and 6-day maximum rainfall series of Tiruchirapalli. For reliability, the forecast maximum rainfalls for the selected return periods are compared with the results of the plotting-position method.
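A sketch of the fitting and goodness-of-fit procedure described above, log-Pearson type III fitted to an illustrative synthetic annual-maximum series rather than the Tiruchirapalli record, with a chi-square test on equal-probability classes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# illustrative 100-year series of annual 1-day maximum rainfall (mm)
annual_max = rng.gumbel(loc=90, scale=30, size=100)

# log-Pearson type III: fit a Pearson III distribution to log10 of the data
logs = np.log10(annual_max)
skew, loc, scale = stats.pearson3.fit(logs)

# chi-square goodness of fit with equal-probability classes
k = 8
edges = stats.pearson3.ppf(np.linspace(0, 1, k + 1), skew, loc, scale)
edges[0], edges[-1] = logs.min() - 1, logs.max() + 1   # finite outer edges
observed, _ = np.histogram(logs, bins=edges)
expected = np.full(k, len(logs) / k)
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = k - 1 - 3                                        # classes - 1 - fitted parameters
p_value = stats.chi2.sf(chi2, dof)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")

# 100-year return level: quantile with an exceedance probability of 1/100
q100 = 10 ** stats.pearson3.ppf(1 - 1 / 100, skew, loc, scale)
print(f"estimated 100-year 1-day maximum rainfall ~ {q100:.0f} mm")
```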
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1991-01-01
In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a probability of an incorrect decoding for a block of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake
NASA Astrophysics Data System (ADS)
Durukal, E.; Sesetyan, K.; Erdik, M.
2009-04-01
The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with approximately one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the building losses incurred in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, the purchase of larger re-insurance covers and the development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would instead be calculated directly on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one would be the elimination of the claim processing period, which would certainly be a major difficulty for the expected low-frequency/high-intensity loss case of Istanbul.
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
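The idea of the formula, the maximum Pc achieved by a likelihood-ratio observer for arbitrary distributions, can be approximated by Monte Carlo as sketched below (in Python rather than the article's MATLAB); the Gaussian case provides an analytic check:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def max_pc_2afc(signal_rvs, noise_rvs, signal_pdf, noise_pdf, n=200_000):
    """Monte Carlo estimate of the maximum proportion correct of an ideal
    (maximum-likelihood) observer in 2-interval 2AFC, for arbitrary signal
    and noise distributions given as samplers and densities."""
    xs, xn = signal_rvs(n), noise_rvs(n)
    with np.errstate(divide="ignore"):
        lr_s = signal_pdf(xs) / noise_pdf(xs)   # likelihood ratio, signal interval
        lr_n = signal_pdf(xn) / noise_pdf(xn)   # likelihood ratio, noise interval
    return np.mean(lr_s > lr_n) + 0.5 * np.mean(lr_s == lr_n)

# check against the Gaussian equal-variance case, where Pc_max = Phi(d'/sqrt(2))
d_prime = 1.5
pc_mc = max_pc_2afc(lambda n: rng.normal(d_prime, 1, n),
                    lambda n: rng.normal(0, 1, n),
                    lambda x: norm.pdf(x, d_prime, 1),
                    lambda x: norm.pdf(x, 0, 1))
print(f"Monte Carlo Pc ~ {pc_mc:.3f}, analytic = {norm.cdf(d_prime / np.sqrt(2)):.3f}")

# a non-Gaussian example: uniform internal noise with a 0.5 mean shift
pc_uniform = max_pc_2afc(lambda n: rng.uniform(0.5, 2.5, n),
                         lambda n: rng.uniform(0.0, 2.0, n),
                         lambda x: ((x >= 0.5) & (x <= 2.5)) / 2.0,
                         lambda x: ((x >= 0.0) & (x <= 2.0)) / 2.0)
print(f"Pc for shifted uniform noise ~ {pc_uniform:.3f}")
```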
Rapidly assessing the probability of exceptionally high natural hazard losses
NASA Astrophysics Data System (ADS)
Gollini, Isabella; Rougier, Jonathan
2014-05-01
One of the objectives in catastrophe modeling is to assess the probability distribution of losses for a specified period, such as a year. From the point of view of an insurance company, the whole of the loss distribution is interesting, and valuable in determining insurance premiums. But the shape of the right-hand tail is critical, because it impinges on the solvency of the company. A simple measure of the risk of insolvency is the probability that the annual loss will exceed the company's current operating capital. Imposing an upper limit on this probability is one of the objectives of the EU Solvency II directive. If a probabilistic model is supplied for the loss process, then this tail probability can be computed, either directly, or by simulation. This can be a lengthy calculation for complex losses. Given the inevitably subjective nature of quantifying loss distributions, computational resources might be better used in a sensitivity analysis. This requires either a quick approximation to the tail probability or an upper bound on the probability, ideally a tight one. We present several different bounds, all of which can be computed nearly instantly from a very general event loss table. We provide a numerical illustration, and discuss the conditions under which the bound is tight. Although we consider the perspective of insurance and reinsurance companies, exactly the same issues concern the risk manager, who is typically very sensitive to large losses.
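In the same spirit (though not the specific bounds of the paper), quick moment-based bounds on the tail probability can be computed from an event loss table under a compound-Poisson assumption and compared with simulation; all table entries below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# a very small "event loss table": annual occurrence rate and loss per event (M EUR)
rates  = np.array([0.8, 0.3, 0.08, 0.02, 0.005])
losses = np.array([  5,  20,   80,  300,  1200], dtype=float)

capital = 600.0                        # operating capital of interest

# moments of the annual aggregate loss S under a compound-Poisson assumption
mean_S = np.sum(rates * losses)
var_S  = np.sum(rates * losses**2)     # fixed per-event loss -> E[X^2] = loss^2

markov   = mean_S / capital                                # P(S > c) <= E[S]/c
cantelli = var_S / (var_S + (capital - mean_S)**2)         # one-sided Chebyshev bound

# brute-force check by simulating annual event counts
n_years = 200_000
counts = rng.poisson(rates, size=(n_years, len(rates)))
S = counts @ losses
p_sim = np.mean(S > capital)

print(f"Markov bound   P(S > {capital:.0f}) <= {markov:.4f}")
print(f"Cantelli bound P(S > {capital:.0f}) <= {cantelli:.4f}")
print(f"simulated      P(S > {capital:.0f}) ~  {p_sim:.5f}")
```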
Dietary counseling adherence during tuberculosis treatment: A longitudinal study.
Bacelo, Adriana Costa; do Brasil, Pedro Emmanuel Alvarenga Americano; Cople-Rodrigues, Cláudia Dos Santos; Ingebourg, Georg; Paiva, Eliane; Ramalho, Andrea; Rolla, Valeria Cavalcanti
2017-02-01
The World Health Organization (WHO) recommends the use of dietary counseling to overcome malnutrition in patients with tuberculosis, with or without HIV; however, the response to nutritional treatment depends on the patient's adherence to nutritional counseling. The aim was to identify the degree of adherence to dietary counseling and the predictors of adherence among patients undergoing tuberculosis treatment. This was an observational prospective follow-up study conducted in adults being treated for tuberculosis, with or without HIV. Self-reported adherence and 24-h diet recall were checked. Diet counseling according to the WHO strategy was offered at each visit for all patients. The endpoint was adherence to the recommended dietary allowance (RDA) and total calories consumed during tuberculosis treatment. Data were mainly analyzed with marginal models to estimate adjusted trajectories. Sixty-eight patients were included in the study. The maximum probability of consuming total calories of at least one RDA was 80%. Adherence to dietary counseling was low regardless of HIV infection. The negative determinants of adherence were the presence of loss of appetite and nausea/vomiting. For patients with loss of appetite and nausea/vomiting, the probability of consuming total calories of at least one RDA is less than 20% at any time. Loss of appetite and nausea/vomiting are highly prevalent and were the main causes of non-adherence to dietary counseling. Copyright © 2016 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
Consumptive use and resulting leach-field water budget of a mountain residence
Stannard, David; Paul, William T.; Laws, Roy; Poeter, Eileen P.
2010-01-01
Consumptive use of water in a dispersed rural community has important implications for maximum housing density and its effects on sustainability of groundwater withdrawals. Recent rapid growth in Colorado, USA has stressed groundwater supplies in some areas, thereby increasing scrutiny of approximate methods developed there more than 30 years ago to estimate consumptive use that are still used today. A foothills residence was studied during a 2-year period to estimate direct and indirect water losses. Direct losses are those from evaporation inside the home, plus any outdoor use. Indirect loss is evapotranspiration (ET) from the residential leach-field in excess of ET from the immediately surrounding terrain. Direct losses were 18.7% of water supply to the home, substantially larger than estimated historically in Colorado. A new approach was developed to estimate indirect loss, using chamber methods together with the Penman–Monteith model. Indirect loss was only 0.9% of water supply, but this value probably was anomalously low due to a recurring leach-field malfunction. Resulting drainage beneath the leach-field was 80.4% of water supply. Guidelines are given to apply the same methodology at other sites and combine results with a survey of leach-fields in an area to obtain more realistic average values of ET losses.
Busanello, Marcos; de Freitas, Larissa Nazareth; Winckler, João Pedro Pereira; Farias, Hiron Pereira; Dos Santos Dias, Carlos Tadeu; Cassoli, Laerte Dagher; Machado, Paulo Fernando
2017-01-01
Payment programs based on milk quality (PPBMQ) are used in several countries around the world as an incentive to improve milk quality. One of the principal milk parameters used in such programs is the bulk tank somatic cell count (BTSCC). In this study, using data from an average of 37,000 farms per month in Brazil where milk was analyzed, BTSCC data were divided into different payment classes based on milk quality. Then, descriptive and graphical analyses were performed. The probability of a change to a worse payment class was calculated, future BTSCC values were predicted using time series models, and financial losses due to the failure to reach the maximum bonus for the payment based on milk quality were simulated. In Brazil, the mean BTSCC has remained high in recent years, without a tendency to improve. The probability of changing to a worse payment class was strongly affected by both the BTSCC average and BTSCC standard deviation for classes 1 and 2 (1000-200,000 and 201,000-400,000 cells/mL, respectively) and only by the BTSCC average for classes 3 and 4 (401,000-500,000 and 501,000-800,000 cells/mL, respectively). The time series models indicated that at some point in the year, farms would not remain in their current class and would accrue financial losses due to payments based on milk quality. The BTSCC for Brazilian dairy farms has not recently improved. The probability of a class change to a worse class is a metric that can aid in decision-making and stimulate farmers to improve milk quality. A time series model can be used to predict the future value of the BTSCC, making it possible to estimate financial losses and to show, moreover, that financial losses occur in all classes of the PPBMQ because the farmers do not remain in the best payment class in all months.
Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method
NASA Astrophysics Data System (ADS)
Pei-Jui, Wu; Hwa-Lung, Yu
2016-04-01
Heavy rainfall from typhoons is the main driver of natural disasters in Taiwan, causing significant losses of human lives and property. On average, 3.5 typhoons strike Taiwan every year, and Typhoon Morakot in 2009 was among the most severe to impact Taiwan in recorded history. Because the duration, path and intensity of a typhoon also affect the temporal and spatial rainfall pattern in a specific region, identifying the characteristics of typhoon rainfall types is advantageous when estimating rainfall amounts. This study developed a rainfall prediction model that can be divided into three parts. First, extended empirical orthogonal functions (EEOF) are used to classify the typhoon events, decomposing the standardized rainfall pattern of all stations for each typhoon event into EOFs and principal components (PCs), so that typhoon events which vary similarly in time and space are grouped into similar typhoon types. Next, according to this classification, probability density functions (PDFs) are constructed in space and time by means of the multivariate maximum entropy method using the first to fourth statistical moments, giving the probability at each station and each time. Finally, the Bayesian Maximum Entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall predictions and could support government typhoon disaster prevention.
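The EOF step described above can be illustrated with a small, purely synthetic sketch (my own, not the authors' code): a station-by-time rainfall matrix is decomposed by singular value decomposition into spatial EOF patterns and temporal principal components; all array sizes and values here are assumed for illustration.

```python
# Hypothetical sketch: EOF decomposition of a typhoon-rainfall matrix via SVD.
# Rows = rain-gauge stations, columns = hourly time steps; all values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(shape=2.0, scale=3.0, size=(20, 72))   # 20 stations x 72 hours

anom = rain - rain.mean(axis=1, keepdims=True)          # remove station means
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

eofs = U                      # spatial patterns (EOFs), one column per mode
pcs = np.diag(s) @ Vt         # principal components (temporal amplitudes)
explained = s**2 / np.sum(s**2)

print("variance explained by first three EOF modes:", explained[:3].round(3))
```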
Long-distance quantum key distribution with imperfect devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo Piparo, Nicoló; Razavi, Mohsen
2014-12-04
Quantum key distribution over probabilistic quantum repeaters is addressed. We compare, under practical assumptions, two such schemes in terms of their secure key generation rate per memory, R_QKD. The two schemes under investigation are the one proposed by Duan et al. in [Nat. 414, 413 (2001)] and that of Sangouard et al. proposed in [Phys. Rev. A 76, 050301 (2007)]. We consider various sources of imperfections in the latter protocol, such as a nonzero double-photon probability for the source, dark count per pulse, channel loss and inefficiencies in photodetectors and memories, to find the rate for different nesting levels. We determine the maximum value of the double-photon probability beyond which it is not possible to share a secret key anymore. We find the crossover distance for up to three nesting levels. We finally compare the two protocols.
Analysis of the variation of the 0°C isothermal altitude during rainfall events
NASA Astrophysics Data System (ADS)
Zeimetz, Fränz; Garcìa, Javier; Schaefli, Bettina; Schleiss, Anton J.
2016-04-01
In numerous countries of the world (USA, Canada, Sweden, Switzerland,…), the dam safety verifications for extreme floods are realized by referring to the so called Probable Maximum Flood (PMF). According to the World Meteorological Organization (WMO), this PMF is determined based on the PMP (Probable Maximum Precipitation). The PMF estimation is performed with a hydrological simulation model by routing the PMP. The PMP-PMF simulation is normally event based; therefore, if no further information is known, the simulation needs assumptions concerning the initial soil conditions such as saturation or snow cover. In addition, temperature series are also of interest for the PMP-PMF simulations. Temperature values can not only be deduced from temperature measurement but also using the temperature gradient method, the 0°C isothermal altitude can lead to temperature estimations on the ground. For practitioners, the usage of the isothermal altitude for referring to temperature is convenient and simpler because one value can give information over a large region under the assumption of a certain temperature gradient. The analysis of the evolution of the 0°C isothermal altitude during rainfall events is aimed here and based on meteorological soundings from the two sounding stations Payerne (CH) and Milan (I). Furthermore, hourly rainfall and temperature data are available from 110 pluviometers spread over the Swiss territory. The analysis of the evolution of the 0°C isothermal altitude is undertaken for different precipitation durations based on the meteorological measurements mentioned above. The results show that on average, the isothermal altitude tends to decrease during the rainfall events and that a correlation between the duration of the altitude loss and the duration of the rainfall exists. A significant difference in altitude loss is appearing when the soundings from Payerne and Milan are compared.
Spatial modeling for estimation of earthquakes economic loss in West Java
NASA Astrophysics Data System (ADS)
Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma
2017-07-01
Indonesia has a high vulnerability to earthquakes. Its low adaptive capacity can turn an earthquake into a disaster that warrants serious concern, which is why risk management should be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active faults: the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and also the Megathrust subduction zone. This research estimates the value of earthquake economic loss from several sources in West Java. The economic loss is calculated using the HAZUS method, whose components are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier to access by presenting it as a distribution map rather than only in tabular form. As a result, West Java could suffer an economic loss of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703.00 IDR, estimated from six earthquake sources with maximum possible magnitudes. However, this estimate corresponds to a worst-case earthquake occurrence and is probably over-estimated.
A performance-based approach to landslide risk analysis
NASA Astrophysics Data System (ADS)
Romeo, R. W.
2009-04-01
An approach to risk assessment based on a probabilistic analysis of the performance of structures threatened by landslides is shown and discussed. The risk is a possible loss due to the occurrence of a potentially damaging event. Analytically, the risk is the probability convolution of hazard, which defines the frequency of occurrence of the event (i.e., the demand), and fragility, which defines the capacity of the system to withstand the event given its characteristics (i.e., severity) and those of the exposed goods (vulnerability), that is: Risk = P(D >= d | S, V). The inequality sets a damage (or loss) threshold beyond which the system's performance is no longer met. Therefore a consistent approach to risk assessment should: 1) adopt a probabilistic model which takes into account all the uncertainties of the involved variables (capacity and demand); 2) follow a performance approach based on given loss or damage thresholds. The proposed method belongs to the category of semi-empirical ones: the theoretical component is given by the probabilistic capacity-demand model; the empirical component is given by the observed statistical behaviour of structures damaged by landslides. Only two landslide properties are required: the areal extent and the type (or kinematism). All other properties required to determine the severity of landslides (such as depth, speed and frequency) are derived via probabilistic methods. The severity (or intensity) of landslides, in terms of kinetic energy, is the demand of resistance; the resistance capacity is given by the cumulative distribution functions of the limit state performance (fragility functions) assessed via damage surveys and card compilation. The investigated limit states are aesthetic (of nominal concern alone), functional (interruption of service) and structural (economic and social losses). The damage probability is the probabilistic convolution of hazard (the probability mass function of the frequency of occurrence of given severities) and vulnerability (the probability that a limit state performance is reached, given a certain severity). Then, for each landslide, all the exposed goods (structures and infrastructures) within the landslide area and within a buffer (representative of the maximum extension of the landslide given a reactivation) are counted. The risk is the product of the damage probability and the ratio of the exposed goods of each landslide to the whole assets exposed to the same type of landslides. Since the risk is computed numerically and by the same procedure applied to all landslides, it is free from any subjective assessment such as those implied in qualitative methods.
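The convolution of hazard and fragility described above can be sketched numerically; the severity classes, hazard probabilities and fragility values below are invented for illustration and are not the paper's calibrated model.

```python
# Illustrative sketch: combining a hazard probability mass function over landslide
# severity with a fragility curve to obtain the damage probability for one limit state.
import numpy as np

severity = np.array([1.0, 2.0, 3.0, 4.0])         # kinetic-energy classes (arbitrary units)
hazard_pmf = np.array([0.60, 0.25, 0.10, 0.05])   # P(S = s), sums to 1
fragility = np.array([0.05, 0.30, 0.70, 0.95])    # P(limit state reached | S = s)

p_damage = np.sum(hazard_pmf * fragility)         # discrete convolution over severities
print(f"P(functional limit state exceeded) = {p_damage:.3f}")
```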
Estimation from incomplete multinomial data. Ph.D. Thesis - Harvard Univ.
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
The vector of multinomial cell probabilities was estimated from incomplete data, incomplete in that it contains partially classified observations. Each such partially classified observation was observed to fall in one of two or more selected categories but was not classified further into a single category. The data were assumed to be incomplete at random. The estimation criterion was minimization of risk for quadratic loss. The estimators were the classical maximum likelihood estimate, the Bayesian posterior mode, and the posterior mean. An approximation was developed for the posterior mean. The Dirichlet, the conjugate prior for the multinomial distribution, was assumed for the prior distribution.
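For the maximum likelihood estimator mentioned above, an EM-style iteration is one standard way to handle partially classified multinomial counts; the sketch below is my own minimal illustration (not the thesis code), with assumed counts and an assumed set of admissible categories for the partially classified group.

```python
# Minimal EM sketch for the ML estimate of multinomial cell probabilities when some
# observations are only known to fall in a subset of categories (all counts invented).
import numpy as np

full_counts = np.array([30.0, 20.0, 10.0])   # fully classified observations per cell
partial_n = 15                               # observations known only to be in cell 0 or 1
partial_mask = np.array([1.0, 1.0, 0.0])     # admissible cells for the partial group

p = np.ones(3) / 3
for _ in range(200):
    # E-step: allocate the partially classified counts to their admissible cells
    w = p * partial_mask
    expected = full_counts + partial_n * w / w.sum()
    # M-step: re-estimate cell probabilities from the completed counts
    p_new = expected / expected.sum()
    if np.max(np.abs(p_new - p)) < 1e-10:
        p = p_new
        break
    p = p_new

print("ML estimate of cell probabilities:", p.round(4))
```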
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows the use of the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
NASA Astrophysics Data System (ADS)
Dewi Ratih, Iis; Sutijo Supri Ulama, Brodjol; Prastuti, Mike
2018-03-01
Value at Risk (VaR) is one of the statistical methods used to measure market risk by estimating the worst loss over a given time period at a given level of confidence. The accuracy of this measure is very important in determining the amount of capital that a company must hold to cope with possible losses, because greater risk implies greater losses to be faced with a certain probability. For this reason, VaR calculation is of particular concern to researchers and practitioners of the stock market, with the aim of obtaining more accurate estimates. In this research, a risk analysis of four banking-sector stocks, Bank Rakyat Indonesia, Bank Mandiri, Bank Central Asia and Bank Negara Indonesia, is carried out. Stock returns are expected to be influenced by exogenous variables, namely the ICI and the exchange rate. Therefore, in this research, stock risk is estimated using the VaR ARMAX-GARCHX method. Calculating the VaR value with the ARMAX-GARCHX approach using a window of 500 gives more accurate results. Overall, Bank Central Asia was the only bank that had the estimated maximum loss at the 5% quantile.
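As a much simpler point of comparison than the ARMAX-GARCHX model used in the study, the sketch below computes a rolling historical-simulation VaR at the 5% level on synthetic returns, using the same window length of 500; it is illustrative only and not the authors' method.

```python
# Simplified illustration: rolling-window historical-simulation VaR at the 5% level
# on synthetic daily returns (not the ARMAX-GARCHX model of the study).
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(loc=0.0005, scale=0.02, size=1500)   # synthetic daily stock returns

window = 500                                              # matches the study's window 500
var_5pct = np.array([np.quantile(returns[t - window:t], 0.05)
                     for t in range(window, len(returns))])

violations = returns[window:] < var_5pct
print(f"observed violation rate: {violations.mean():.3f} (target 0.05)")
```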
[Chemical Loss of Volatile Organic Compounds and Its Impact on the Formation of Ozone in Shanghai].
Wang, Hong-li
2015-09-01
The spatial characterization of ozone (O3) and its precursors was studied based on field measurements in urban and rural areas of Shanghai during the summer of 2014. The chemical loss of volatile organic compounds (VOCs) was estimated by a parameterization method. The mixing ratio of VOCs was 20 × 10^(-9) in the urban area and 17 × 10^(-9) in the western rural area during the measurements. The average values of the maximum incremental reactivity were comparable in urban and rural areas, namely 5.0 mol·mol-1 (O3/VOCs). By contrast, the chemical loss of VOCs was 8.3 × 10^(-9) in the western rural area, about twice that in the urban area. The larger chemical loss of VOCs was probably one of the important reasons for the higher O3 concentration in the western rural area, while regional transport might be an important reason for the variation of O3 in the eastern coastal rural area. The chemical loss of VOCs showed good agreement with the local formation of O3 in both urban and rural areas, suggesting a similar efficiency of O3 formation from the chemical loss of VOCs. Within the chemical loss, aromatics and alkenes are the dominant VOC species in the atmospheric chemistry, accounting for more than 90%. The diurnal profile of VOC chemical loss matched well with the production of O3 with a one-hour lag.
Acute bilateral leg amputation following combat injury in UK servicemen.
Penn-Barwell, J G; Bennett, P M; Kay, A; Sargeant, I D
2014-07-01
This study aims to characterise the injuries and surgical management of British servicemen sustaining bilateral lower limb amputations. The UK Military Trauma Registry was searched for all cases of primary bilateral lower limb amputation sustained between March 2004 and March 2010. Amputations were excluded if they occurred more than 7 days after injury or if they were at the ankle or more distal. There were 1694 UK military patients injured or killed during this six-year study period. Forty-three of these (2.8%) were casualties with bilateral lower limb amputations. All casualties were men with a mean age of 25.1 years (SD 4.3): all were injured in Afghanistan by Improvised Explosive Devices (IEDs). Six casualties were in vehicles when they were injured with the remaining 37 (80%) patrolling on foot. The mean New Injury Severity Score (NISS) was 48.2 (SD 13.2): four patients had a maximum score of 75. The mean TRISS probability of survival was 60% (SD 39.4), with 18 having a survival probability of less than 50% i.e. unexpected survivors. The most common amputation pattern was bilateral trans-femoral (TF) amputations, which was seen in 25 patients (58%). Nine patients also lost an upper limb (triple amputation): no patients survived loss of all four limbs. In retained upper limbs extensive injuries to the hands and forearms were common, including loss of digits. Six patients (14%) sustained an open pelvic fracture. Perineal/genital injury was a feature in 19 (44%) patients, ranging from unilateral orchidectomy to loss of genitalia and permanent requirement for colostomy and urostomy. The mean requirement for blood products was 66 units (SD 41.7). The maximum transfusion was 12 units of platelets, 94 packed red cells, 8 cryoprecipitate, 76 units of fresh frozen plasma and 3 units of fresh whole blood, a total of 193 units of blood products. Our findings detail the severe nature of these injuries together with the massive surgical and resuscitative efforts required to firstly keep patients alive and secondly reconstruct and prepare them for rehabilitation. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Why does Japan use the probability method to set design flood?
NASA Astrophysics Data System (ADS)
Nakamura, S.; Oki, T.
2015-12-01
A design flood is a hypothetical flood used to make flood prevention plans. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: for the Tone River, the biggest river in Japan, it is 1 in 200 years, for the Shinano River 1 in 150 years, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharges and the probability is 1 in 1250 years (in the freshwater section). On the other hand, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. This raises the questions: "What is the reason why the method varies among countries?" or "Why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method was used until World War 2; however, it was changed to the probability method after the war because of limitations of the historical maximum method under the specific socio-economic situation: (1) budget limitations due to the war and the GHQ occupation, and (2) historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) that attacked Japan, broke the records of historical maximum discharge in main rivers, and made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take account of the socio-economic situation in the design flood, and they applied them to Japanese rivers in 1958. The probability method was adopted in Japan to adapt to the specific socio-economic and natural situation during the confusion after the war.
A Coupled Natural-Human Modeling of the Land Loss Probability in the Mississippi River Delta
NASA Astrophysics Data System (ADS)
Cai, H.; Lam, N.; Zou, L.
2017-12-01
The Mississippi River Delta (MRD) is one of the most environmentally threatened areas in the United States. The area has been suffering substantial land loss during the past decades. Land loss in the MRD has been a subject of intense research by many researchers from multiple disciplines, aiming at mitigating the land loss process and its potential damage. A majority of land loss projections were derived solely from the natural processes, such as sea level rise, regional subsidence, and reduced sediment flows. However, sufficient evidence has shown that land loss in the MRD also relates to human-induced factors such as land fragmentation, neighborhood effects, urbanization, energy industrialization, and marine transportation. How to incorporate both natural and human factors into the land loss modeling remains a huge challenge. Using a coupled natural-human (CNH) approach can help uncover the complex mechanism of land loss in the MRD and provide more accurate spatiotemporal projections of land loss patterns and probability. This study uses quantitative approaches to investigate the relationships between land loss and a wide range of socio-ecological variables in the MRD. A model of land loss probability based on selected socio-ecological variables and their neighborhood effects will be derived through variogram and regression analyses. Then, we will simulate the land loss probability and patterns under different scenarios such as sea-level rise, changes in storm frequency and strength, and changes in population to evaluate the sustainability of the MRD. The outcome of this study will be a layer of pixels with information on the probability of land-water conversion. Knowledge gained from this study will provide valuable insights into the optimal mitigation strategies of land loss prevention and restoration and help build long-term sustainability in the Mississippi River Delta.
Complexity, information loss, and model building: from neuro- to cognitive dynamics
NASA Astrophysics Data System (ADS)
Arecchi, F. Tito
2007-06-01
A scientific problem described within a given code is mapped onto a corresponding computational problem. We call (algorithmic) complexity the bit length of the shortest instruction which solves the problem. Deterministic chaos in general affects a dynamical system, making the corresponding problem experimentally and computationally heavy, since one must reset the initial conditions at a rate higher than that of information loss (Kolmogorov entropy). One can control chaos by adding to the system new degrees of freedom (information swapping: information lost by chaos is replaced by that arising from the new degrees of freedom). This implies a change of code, or a new augmented model. Within a single code, changing hypotheses is equivalent to fixing different sets of control parameters, each with a different a-priori probability, to be then confirmed and transformed into an a-posteriori probability via Bayes' theorem. Sequential application of Bayes' rule is nothing else than the Darwinian strategy in evolutionary biology. The sequence is a steepest-ascent algorithm, which stops once maximum probability has been reached. At this point the hypothesis exploration stops. By changing code (and hence the set of relevant variables) one can start again to formulate new classes of hypotheses. We call semantic complexity the number of accessible scientific codes, or models, that describe a situation. It is however a fuzzy concept, insofar as this number changes due to the interaction of the operator with the system under investigation. These considerations are illustrated with reference to a cognitive task, starting from synchronization of neuron arrays in a perceptual area and tracing the putative path toward model building.
Errors in Seismic Hazard Assessment are Creating Huge Human Losses
NASA Astrophysics Data System (ADS)
Bela, J.
2015-12-01
The current practice of representing earthquake hazards to the public based upon their perceived likelihood or probability of occurrence is proven now by the global record of actual earthquakes to be not only erroneous and unreliable, but also too deadly! Earthquake occurrence is sporadic, and therefore assumptions of earthquake frequency and return period are not only misleading, but also categorically false. More than 700,000 people have now lost their lives (2000-2011), wherein 11 of the world's deadliest earthquakes have occurred in locations where probability-based seismic hazard assessments had predicted only low seismic hazard. Unless seismic hazard assessment and the setting of minimum earthquake design safety standards for buildings and bridges are based on a more realistic deterministic recognition of "what can happen" rather than on what mathematical models suggest is "most likely to happen", such huge future human losses can only be expected to continue! The actual earthquake events that did occur were at or near the maximum potential-size event that either had already occurred in the past or was geologically known to be possible. Haiti's M7 earthquake of 2010 (with > 222,000 fatalities) meant the dead could not even be buried with dignity. Japan's catastrophic Tohoku earthquake of 2011, a M9 megathrust earthquake, unleashed a tsunami that not only obliterated coastal communities along the northern Japanese coast, but also claimed > 20,000 lives. This tsunami flooded nuclear reactors at Fukushima, causing 4 explosions and 3 reactors to melt down. But while this history of huge human losses due to erroneous and misleading seismic hazard estimates, despite its wrenching pain, cannot be unlived, if faced with courage and a more realistic deterministic estimate of "what is possible", it need not be lived again. An objective testing of the results of global probability-based seismic hazard maps against real occurrences has never been done by the GSHAP team, even though the obvious inadequacy of the GSHAP map could have been established in the course of a simple check before project completion. The doctrine of "PSHA exceptionalism" that created the maps can only be expunged by carefully examining the facts, which unfortunately include huge human losses!
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
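For the special case without path-dependent constraints, the maximum-entropy random walk on a graph has a well-known closed form, P_ij = A_ij ψ_j / (λ ψ_i), with stationary distribution proportional to ψ_i², where λ and ψ are the leading eigenvalue and eigenvector of the adjacency matrix; the sketch below illustrates this baseline case on a small invented graph and does not reproduce the paper's constrained derivation.

```python
# Unconstrained maximum-entropy random walk (MERW) on a small undirected graph.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # adjacency matrix of a connected graph

eigvals, eigvecs = np.linalg.eigh(A)
lam = eigvals[-1]                              # Perron (largest) eigenvalue
psi = np.abs(eigvecs[:, -1])                   # corresponding positive eigenvector

P = A * psi[None, :] / (lam * psi[:, None])    # MERW transition matrix
pi = psi**2 / np.sum(psi**2)                   # stationary distribution (not Boltzmann-like)

print("rows sum to one:", np.allclose(P.sum(axis=1), 1.0))
print("stationarity check:", np.allclose(pi @ P, pi))
```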
Leue, Anja; Cano Rodilla, Carmen; Beauducel, André
2015-01-01
Individuals typically evaluate whether their performance and the obtained feedback match. Previous research has shown that feedback negativity (FN) depends on outcome probability and feedback valence. It is, however, less clear to what extent previous effects of outcome probability on FN depend on self-evaluations of response correctness. Therefore, we investigated the effects of outcome probability on FN amplitude in a simple go/no-go task that allowed for the self-evaluation of response correctness. We also investigated effects of performance incompatibility and feedback valence. In a sample of N = 22 participants, outcome probability was manipulated by means of precues, feedback valence by means of monetary feedback, and performance incompatibility by means of feedback that induced a match versus mismatch with individuals' performance. We found that the 100% outcome probability condition induced a more negative FN following no-loss than the 50% outcome probability condition. The FN following loss was more negative in the 50% compared to the 100% outcome probability condition. Performance-incompatible loss resulted in a more negative FN than performance-compatible loss. Our results indicate that the self-evaluation of the correctness of responses should be taken into account when the effects of outcome probability and expectation mismatch on FN are investigated. PMID:26783525
A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis
NASA Astrophysics Data System (ADS)
Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann
2017-04-01
The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only a few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and subsequently the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for the entire study area.
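The last step, turning a long synthetic series of event damages into an expected annual damage and an exceedance curve, can be sketched as follows; the event frequencies and damage distribution are assumed purely for illustration and are not the study's data.

```python
# Schematic only: expected annual damage and an empirical exceedance curve from a
# synthetic event-damage series (all distributions and parameters are assumed).
import numpy as np

rng = np.random.default_rng(2)
n_years = 10000
events_per_year = rng.poisson(0.8, size=n_years)             # synthetic flood-event counts
annual_damage = np.array([rng.lognormal(mean=14, sigma=1.2, size=k).sum()
                          for k in events_per_year])          # total damage per year

ead = annual_damage.mean()                                    # expected annual damage
sorted_d = np.sort(annual_damage)[::-1]
exceed_prob = np.arange(1, n_years + 1) / (n_years + 1)       # empirical exceedance probability

d_100 = np.interp(0.01, exceed_prob, sorted_d)                # ~100-year annual loss
print(f"expected annual damage: {ead:,.0f}; 100-year loss: {d_100:,.0f}")
```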
Detection of hail signatures from single-polarization C-band radar reflectivity
NASA Astrophysics Data System (ADS)
Kunz, Michael; Kugel, Petra I. S.
2015-02-01
Five different criteria that estimate hail signatures from single-polarization radar data are statistically evaluated over a 15-year period by categorical verification against loss data provided by a building insurance company. The criteria consider different levels or thresholds of radar reflectivity, some of them complemented by estimates of the 0 °C level or cloud top temperature. Applied to reflectivity data from a single C-band radar in southwest Germany, it is found that all criteria are able to reproduce most of the past damage-causing hail events. However, the criteria substantially overestimate hail occurrence by up to 80%, mainly due to the verification process using damage data. Best results in terms of highest Heidke Skill Score HSS or Critical Success Index CSI are obtained for the Hail Detection Algorithm (HDA) and the Probability of Severe Hail (POSH). Radar-derived hail probability shows a high spatial variability with a maximum on the lee side of the Black Forest mountains and a minimum in the broad Rhine valley.
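The two skill scores named above are computed from a 2x2 contingency table of hits (a), false alarms (b), misses (c) and correct negatives (d); the sketch below uses invented counts to show the standard formulas CSI = a/(a+b+c) and HSS = 2(ad-bc)/[(a+c)(c+d)+(a+b)(b+d)].

```python
# Categorical verification scores from an illustrative 2x2 contingency table.
a, b, c, d = 42, 30, 8, 920   # hits, false alarms, misses, correct negatives (invented)

csi = a / (a + b + c)                                                # Critical Success Index
hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))  # Heidke Skill Score

print(f"CSI = {csi:.2f}, HSS = {hss:.2f}")
```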
Probability Discounting of Gains and Losses: Implications for Risk Attitudes and Impulsivity
ERIC Educational Resources Information Center
Shead, N. Will; Hodgins, David C.
2009-01-01
Sixty college students performed three discounting tasks: probability discounting of gains, probability discounting of losses, and delay discounting of gains. Each task used an adjusting-amount procedure, and participants' choices affected the amount and timing of their remuneration for participating. Both group and individual discounting…
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
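A generic sketch of the kind of model described above, fitted on synthetic data rather than the published equations: a maximum likelihood logistic regression relating winter flow to the probability that summer flow falls below a drought threshold; variable names, units and coefficients are all assumptions.

```python
# Hedged sketch: logistic regression of a summer drought indicator on winter flow.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
winter_flow = rng.lognormal(mean=3.0, sigma=0.5, size=300)        # assumed winter mean flow
p_true = 1.0 / (1.0 + np.exp(-(3.0 - 0.15 * winter_flow)))        # assumed "true" relation
below_threshold = rng.random(300) < p_true                        # summer drought indicator

model = LogisticRegression().fit(winter_flow.reshape(-1, 1), below_threshold)
prob = model.predict_proba([[15.0]])[0, 1]
print(f"P(summer flow below drought threshold | winter flow = 15) ~ {prob:.2f}")
```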
Decisions under risk in Parkinson's disease: preserved evaluation of probability and magnitude.
Sharp, Madeleine E; Viswanathan, Jayalakshmi; McKeown, Martin J; Appel-Cresswell, Silke; Stoessl, A Jon; Barton, Jason J S
2013-11-01
Unmedicated Parkinson's disease patients tend to be risk-averse while dopaminergic treatment causes a tendency to take risks. While dopamine agonists may result in clinically apparent impulse control disorders, treatment with levodopa also causes shift in behaviour associated with an enhanced response to rewards. Two important determinants in decision-making are how subjects perceive the magnitude and probability of outcomes. Our objective was to determine if patients with Parkinson's disease on or off levodopa showed differences in their perception of value when making decisions under risk. The Vancouver Gambling task presents subjects with a choice between one prospect with larger outcome and a second with higher probability. Eighteen age-matched controls and eighteen patients with Parkinson's disease before and after levodopa were tested. In the Gain Phase subjects chose between one prospect with higher probability and another with larger reward to maximize their gains. In the Loss Phase, subjects played to minimize their losses. Patients with Parkinson's disease, on or off levodopa, were similar to controls when evaluating gains. However, in the Loss Phase before levodopa, they were more likely to avoid the prospect with lower probability but larger loss, as indicated by the steeper slope of their group psychometric function (t(24) = 2.21, p = 0.04). Modelling with prospect theory suggested that this was attributable to a 28% overestimation of the magnitude of loss, rather than an altered perception of its probability. While pre-medicated patients with Parkinson's disease show risk-aversion for large losses, patients on levodopa have normal perception of magnitude and probability for both loss and gain. The finding of accurate and normally biased decisions under risk in medicated patients with PD is important because it indicates that, if there is indeed anomalous risk-seeking behaviour in such a cohort, it may derive from abnormalities in components of decision making that are separate from evaluations of size and probability. © 2013 Elsevier Ltd. All rights reserved.
7 CFR 762.106 - Preferred and certified lender programs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... writing why the excessive loss rate is beyond their control; (B) The lender provides a written plan that...) The Agency determines that exceeding the maximum PLP loss rate standard was beyond the control of the... eligible lender under § 762.105; (2) Have a lender loss rate not in excess of the maximum CLP loss rate...
NASA Astrophysics Data System (ADS)
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series, based on maximum rank with minimum value of the test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A2) and the Chi-square test (X2), were employed. The overall best-fit probability distribution was identified from the highest combined score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomalies can pose a climatic threat to the sustainability of agricultural production, and thus adequate adaptation and mitigation measures are needed.
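The fit-and-return-period workflow described above can be sketched with synthetic annual MDR data: fit a candidate distribution, check it with a Kolmogorov-Smirnov test, and read return-period values off the fitted quantile function at non-exceedance probability 1 - 1/T; the data and the choice of the normal model below are assumptions for illustration only.

```python
# Hedged sketch: fit a distribution to synthetic annual maximum daily rainfall,
# test the fit, and compute rainfall depths for selected return periods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mdr = stats.norm.rvs(loc=75, scale=30, size=29, random_state=rng).clip(min=5)  # 29 years, mm

params = stats.norm.fit(mdr)                        # candidate model (normal, as for annual MDR)
ks_stat, ks_p = stats.kstest(mdr, "norm", args=params)

return_periods = np.array([2, 5, 10, 20, 25])
quantiles = stats.norm.ppf(1 - 1 / return_periods, *params)

print(f"K-S statistic = {ks_stat:.3f} (p = {ks_p:.2f})")
for T, q in zip(return_periods, quantiles):
    print(f"{T:>3}-yr maximum daily rainfall ~ {q:.0f} mm")
```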
NASA Astrophysics Data System (ADS)
Mignan, Arnaud; Landtwing, Delano; Mena, Banu; Wiemer, Stefan
2013-04-01
A project to exploit the geothermal potential of the crystalline rocks below the city of Basel, Switzerland, was abandoned in recent years due to unacceptable risk associated with increased seismic activity during and following hydraulic stimulation. The largest induced earthquake (Mw = 3.2, 8 December 2006) was widely felt by the local population and provoked slight non-structural damage to buildings. Here we present a probabilistic risk assessment analysis for the 2006 Basel EGS project, including uncertainty linked to the following parameters: induced seismicity forecast model, maximum magnitude, intensity prediction equation, site amplification or not, vulnerability index and cost function. Uncertainty is implemented using a logic tree composed of a total of 324 branches. Exposure is defined from the Basel area building stock of Baisch et al. (2009) (SERIANEX study). We first generate deterministic loss curves, defined as the insured value loss (IVL) as a function of earthquake magnitude. We calibrate the vulnerability curves for low EMS-98 intensities (using the input parameters fixed in the SERIANEX study) such that we match the real loss value, which has been estimated at 3 million CHF (lower than the paid value) for the Mw = 3.2 event. Coupling the deterministic loss curves with seismic hazard curves using the short-term earthquake risk (STEER) method, we obtain site-specific probabilistic loss curves (PLC, i.e., probability of exceeding a given IVL) for the 79 settlements considered. We then integrate over the different PLCs to calculate the most probable IVL. Based on the proposed logic tree, we find considerable variations in the most probable IVL, with lower values for the 6-day injection period than for the first 6 days of the post-injection period. This difference is due to a b-value significantly lower in the second period than in the first one, yielding a higher likelihood of larger earthquakes in the post-injection phase. Based on tornado diagrams, we show that the variability in the most probable IVL is mostly due to the choice of the vulnerability index, followed by the choice of whether or not to include site amplification. The choice of the cost function comes in third place. Based on these results, we finally provide guidelines for decision-making. To the best of our knowledge, this study is the first to consider uncertainties at the hazard and risk levels in a systematic way in the context of induced seismicity. The proposed method is transferable to other EGS projects as well as to earthquake sequences triggered by wastewater disposal, carbon capture and sequestration.
Silicon Framework Allotropes for Li-ion and Na-ion Batteries: New Insight for a Reversible Capacity.
NASA Astrophysics Data System (ADS)
Marzouk, Asma; Soto, Fernando; Burgos, Juan; Balbuena, Perla; El-Mellouhi, Fadwa
Silicon has the capacity to host a large amount of Li, which makes it an attractive anode material despite suffering from a swelling problem that leads to irreversible capacity loss. The possibility of easy extraction of Na atoms from Si24Na4 inspired us to adopt Si24 as an anode material for lithium-ion and sodium-ion batteries. Using DFT, we evaluate the specific capacity and the intercalation potential of the Si24 allotrope. Enhanced capacities are sought by designing a new silicon allotrope. We demonstrate that these Si24 allotropes show a negligible volume expansion and conserve their periodic structures after the maximum insertion/extraction of the ions, which is crucial to prevent capacity loss during cycling. DFT and ab-initio molecular dynamics (AIMD) studies give insights into the most probable surface adsorption and reaction sites, lithiation and sodiation, as well as the initial stages of SEI formation and ionic diffusion. Qatar National Research Fund (QNRF) (NPRP 7-162-2-077).
7 CFR 762.129 - Percent of guarantee and maximum loss.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Percent of guarantee and maximum loss. 762.129 Section 762.129 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY, DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS GUARANTEED FARM LOANS § 762.129 Percent of guarantee and maximum...
Comparison of estimators of standard deviation for hydrologic time series
Tasker, Gary D.; Gilroy, Edward J.
1982-01-01
Unbiasing factors as a function of serial correlation, ρ, and sample size, n, for the sample standard deviation of a lag-one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag-one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ, s = [(1/(n − 1)) Σ (x_i − x̄)²]^(1/2). The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
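A small Monte Carlo illustration of the bias discussed above (my own sketch, with an assumed ρ and n): for a lag-one autoregressive process with positive serial correlation, the usual sample standard deviation s tends to underestimate σ.

```python
# Monte Carlo check of the downward bias of s for an AR(1) process (assumed rho, n).
import numpy as np

rng = np.random.default_rng(5)
rho, sigma, n, reps = 0.6, 1.0, 20, 20000
innov_sd = sigma * np.sqrt(1 - rho**2)          # so the marginal std of x equals sigma

s_values = np.empty(reps)
for r in range(reps):
    x = np.empty(n)
    x[0] = rng.normal(0, sigma)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(0, innov_sd)
    s_values[r] = x.std(ddof=1)

print(f"E[s] ~ {s_values.mean():.3f} versus sigma = {sigma}")   # expect E[s] < sigma
```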
Probabilistic description of probable maximum precipitation
NASA Astrophysics Data System (ADS)
Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin
2017-04-01
Probable maximum precipitation (PMP) is the key parameter used to estimate the probable maximum flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, the so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
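The moisture-maximization step on which the probabilistic description builds can be sketched in its textbook form, scaling an observed storm depth by the ratio of maximum to observed precipitable water; all numbers below are assumed, and this is not the CanRCM4 analysis itself.

```python
# Textbook moisture-maximization step (illustration only, assumed values).
storm_depth_mm = 180.0    # observed 24-h storm precipitation (assumed)
pw_storm_mm = 45.0        # precipitable water during the storm (assumed)
pw_max_mm = 70.0          # climatological maximum precipitable water for that site/season (assumed)

maximization_ratio = pw_max_mm / pw_storm_mm
pmp_estimate_mm = storm_depth_mm * maximization_ratio
print(f"moisture-maximized depth: {pmp_estimate_mm:.0f} mm (ratio {maximization_ratio:.2f})")
```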
Variation of Probable Maximum Precipitation in Brazos River Basin, TX
NASA Astrophysics Data System (ADS)
Bhatia, N.; Singh, V. P.
2017-12-01
The Brazos River basin, the second-largest river basin by area in Texas, generates the highest amount of flow volume of any river in a given year in Texas. With its headwaters located at the confluence of the Double Mountain and Salt forks in Stonewall County, the third-longest flowline of the Brazos River traverses narrow valleys in the rolling topography of west Texas and flows through rugged terrain in the mainly featureless plains of central Texas before its confluence with the Gulf of Mexico. Along its major flow network, the river basin covers six different climate regions characterized by the National Oceanic and Atmospheric Administration (NOAA) on the basis of similar attributes of vegetation, temperature, humidity, rainfall, and seasonal weather changes. Our previous research on Texas climatology illustrated intensified precipitation regimes, which tend to result in extreme flood events. Such events have caused huge losses of lives and infrastructure in the Brazos River basin. Therefore, a region-specific investigation is required for analyzing precipitation regimes along the geographically diverse river network. Owing to the topographical and hydroclimatological variations along the flow network, the 24-hour Probable Maximum Precipitation (PMP) was estimated for different hydrologic units along the river network, using the revised Hershfield's method devised by Lan et al. (2017). The method incorporates the use of a standardized variable describing the maximum deviation from the average of a sample scaled by the standard deviation of the sample. The hydrometeorological literature identifies this method as more reasonable and consistent with the frequency equation. With respect to the calculation of the stable data size required for statistically reliable results, this study also quantified the uncertainty associated with PMP values in different hydrologic units. The corresponding range of return periods of the PMPs in different hydrologic units was further evaluated using the inverse CDF functions of the most appropriate probability distributions. The analysis will aid regional water boards in designing hydraulic structures, such as dams, spillways, and levees, and in identifying and implementing prevention and control mechanisms for extreme flood events resulting from the PMPs.
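A hedged sketch of a classical Hershfield-type statistical estimate, X_PMP = mean + K_m * std, where K_m is the largest standardized deviation of the annual maximum series (computed here with the largest value withheld from the mean and standard deviation); the synthetic data below do not implement the revised method of Lan et al. (2017).

```python
# Hershfield-type PMP estimate for one hydrologic unit, on synthetic annual maxima.
import numpy as np

rng = np.random.default_rng(6)
annual_max_24h = rng.gumbel(loc=90, scale=25, size=60)     # 60 years of 24-h annual maxima, mm

x_max = annual_max_24h.max()
rest = np.sort(annual_max_24h)[:-1]                        # series with the largest value withheld
k_m = (x_max - rest.mean()) / rest.std(ddof=1)             # standardized largest deviation

pmp_24h = annual_max_24h.mean() + k_m * annual_max_24h.std(ddof=1)
print(f"K_m = {k_m:.2f}, 24-h PMP estimate = {pmp_24h:.0f} mm")
```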
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and the conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
Loss Aversion in the Classroom: A Nudge towards a Better Grade?
ERIC Educational Resources Information Center
Grijalva, Therese; Koford, Brandon C.; Parkhurst, Gregory
2018-01-01
Using data from 499 students over 12 sections, 2 courses, and 3 instructors, we estimate the effect of loss aversion on the probability of turning in extra credit assignments and the effect on the overall grade. Regression results indicate no effect of loss aversion on the probability of turning in extra credit assignments and no effect on a…
DISCOUNTING OF DELAYED AND PROBABILISTIC LOSSES OVER A WIDE RANGE OF AMOUNTS
Green, Leonard; Myerson, Joel; Oliveira, Luís; Chang, Seo Eun
2014-01-01
The present study examined delay and probability discounting of hypothetical monetary losses over a wide range of amounts (from $20 to $500,000) in order to determine how amount affects the parameters of the hyperboloid discounting function. In separate conditions, college students chose between immediate payments and larger, delayed payments and between certain payments and larger, probabilistic payments. The hyperboloid function accurately described both types of discounting, and amount of loss had little or no systematic effect on the degree of discounting. Importantly, the amount of loss also had little systematic effect on either the rate parameter or the exponent of the delay and probability discounting functions. The finding that the parameters of the hyperboloid function remain relatively constant across a wide range of amounts of delayed and probabilistic loss stands in contrast to the robust amount effects observed with delayed and probabilistic rewards. At the individual level, the degree to which delayed losses were discounted was uncorrelated with the degree to which probabilistic losses were discounted, and delay and probability loaded on two separate factors, similar to what is observed with delayed and probabilistic rewards. Taken together, these findings argue that although delay and probability discounting involve fundamentally different decision-making mechanisms, nevertheless the discounting of delayed and probabilistic losses share an insensitivity to amount that distinguishes it from the discounting of delayed and probabilistic gains. PMID:24745086
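The hyperboloid discounting function V = A / (1 + kX)^s mentioned above, where X is the delay (or the odds against the outcome, (1 - p)/p, for probability discounting), can be fitted to indifference points with a standard nonlinear least-squares routine; the data points below are invented for illustration and are not the study's data.

```python
# Fitting the hyperboloid discounting function to invented indifference points.
import numpy as np
from scipy.optimize import curve_fit

amount = 100.0                                              # nominal amount of the delayed loss
delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)    # delays in days
indiff = np.array([95, 88, 74, 60, 52, 45], dtype=float)    # invented indifference points

def hyperboloid(x, k, s):
    """V = amount / (1 + k*x)**s; for probability discounting, x = (1 - p) / p."""
    return amount / (1.0 + k * x) ** s

(k_hat, s_hat), _ = curve_fit(hyperboloid, delays, indiff, p0=[0.01, 1.0])
print(f"estimated k = {k_hat:.4f}, s = {s_hat:.2f}")
```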
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
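For the unconstrained continuous case, the closed form referred to above is the generalized Gaussian family; the expressions below are a standard result written out from a direct calculation (my own notation, to be checked against the report), and they make the stated straight-line relationship between maximum differential entropy and the logarithm of the Lp norm explicit.

```latex
% Maximum-entropy density for a real-valued X with fixed L_p norm (unconstrained case),
% and its differential entropy; constants are a sketch from a direct calculation.
\[
  f_p(x) = \frac{p}{2\,\alpha\,\Gamma(1/p)}
           \exp\!\left[-\left(\frac{|x|}{\alpha}\right)^{p}\right],
  \qquad
  \alpha = p^{1/p}\,\lVert X\rVert_p ,
\]
\[
  h_{\max} = \frac{1}{p} + \ln\!\bigl(2\,\Gamma(1+1/p)\,p^{1/p}\bigr) + \ln \lVert X\rVert_p ,
\]
% i.e. the maximum differential entropy is linear in the logarithm of the L_p norm,
% with unit slope, consistent with the straight-line relationship described above.
```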
Ulusoy, Nuran
2017-01-01
The aim of this study was to evaluate the effects of two endocrown designs and computer-aided design/manufacturing (CAD/CAM) materials on the stress distribution and failure probability of restorations applied to a severely damaged endodontically treated maxillary first premolar tooth (MFP). Two designs, without and with 3 mm intraradicular extensions, namely endocrown (E) and modified endocrown (ME), were modeled on a 3D finite element (FE) model of the MFP. Vitablocks Mark II (VMII), Vita Enamic (VE), and Lava Ultimate (LU) CAD/CAM materials were used for each type of design. von Mises and maximum principal stress values were evaluated, and the Weibull function was incorporated with the FE analysis to calculate the long-term failure probability. Regarding the stresses that occurred in enamel, for each group of materials, the ME restoration design transmitted less stress than the endocrown. During normal occlusal function, the overall failure probability was minimum for ME with VMII. The ME restoration design with VE was the best restorative option for premolar teeth with extensive loss of coronal structure under high occlusal loads. Therefore, the ME design could be a favorable treatment option for MFPs with a missing palatal cusp. Among the CAD/CAM materials tested, VMII and VE were found to be more tooth-friendly than LU. PMID:29119108
A discrimination method for the detection of pneumonia using chest radiograph.
Noor, Norliza Mohd; Rijal, Omar Mohd; Yunus, Ashari; Abu-Bakar, S A R
2010-03-01
This paper presents a statistical method for the detection of lobar pneumonia when using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures which is then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The result of this study recommends the detection of pneumonia by constructing probability ellipsoids or discriminant function using maximum energy and maximum column sum energy texture measures where misclassification probabilities were less than 0.15. 2009 Elsevier Ltd. All rights reserved.
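Since the classification step rests on fitting bivariate normals to two texture measures and comparing discriminant scores, a minimal sketch of that step may help; the feature values below are hypothetical placeholders, and the wavelet texture extraction and the orthogonal transform Q(2) described in the paper are not reproduced:

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class(features):
    """Fit a bivariate normal to an (n, 2) array of texture features."""
    return np.mean(features, axis=0), np.cov(features, rowvar=False)

def discriminant(x, mean, cov, prior=0.5):
    """Quadratic discriminant score: Gaussian log-likelihood plus log prior."""
    return multivariate_normal.logpdf(x, mean=mean, cov=cov) + np.log(prior)

# Hypothetical (max energy, max column-sum energy) features per region of interest.
rng = np.random.default_rng(0)
normal_X = rng.multivariate_normal([1.0, 1.2], [[0.04, 0.01], [0.01, 0.05]], 50)
pneum_X  = rng.multivariate_normal([1.6, 1.9], [[0.06, 0.02], [0.02, 0.07]], 50)

m0, c0 = fit_class(normal_X)
m1, c1 = fit_class(pneum_X)

x_new = np.array([1.5, 1.8])
label = discriminant(x_new, m1, c1) > discriminant(x_new, m0, c0)
print("classified as", "pneumonia" if label else "normal")
```

Misclassification probabilities can then be estimated by applying the same scores to held-out regions of interest, analogous to the probability-ellipsoid evaluation reported in the abstract.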
Climatology of damage-causing hailstorms over Germany
NASA Astrophysics Data System (ADS)
Kunz, M.; Puskeiler, M.; Schmidberger, M.
2012-04-01
In several regions of Central Europe, such as southern Germany, Austria, Switzerland, and northern Italy, hailstorms often cause substantial damage to buildings, crops, or automobiles on the order of several million EUR. In the federal state of Baden-Württemberg, for example, most of the insured damage to buildings is caused by large hailstones. Due to both their local-scale extent and insufficient direct monitoring systems, hail swaths are not captured accurately and uniquely by a single observation system. Remote-sensing systems such as radars are able to detect convection signals in a basic way, but they lack the ability to discern a clear relation between measured intensity and hail on the ground. These shortcomings hamper statistical analysis of hail probability and intensity. Hail modelling is thus a major challenge for the insurance industry. Within the project HARIS-CC (Hail Risk and Climate Change), different meteorological observations are combined (3D / 2D radar, lightning, satellite and radiosounding data) to obtain a comprehensive picture of the hail climatology over Germany. The various approaches were tested and calibrated with loss data from different insurance companies between 2005 and 2011. Best results are obtained by considering the vertical distance between the 0°C level of the atmosphere and the echo top height estimated from 3D reflectivity data from the radar network of the German Weather Service (DWD). Additionally, frequency, intensity, width, and length of hail swaths are determined by applying a cell tracking algorithm to the 3D radar data (TRACE3D; Handwerker, 2002). The hailstorm tracks identified are merged with loss data using a geographical information system (GIS) to verify damage-causing hail on the ground. Evaluating the hailstorm climatology revealed that hail probability exhibits high spatial variability even over short distances. An important issue is the spatial pattern of hail occurrence, which is considered to be due to orographic modification of the flow. It is found that hail probability downstream of the low mountain ranges of Germany is strongly controlled by the Froude number. In the case of low Froude number flow, a convergence zone may develop downstream of the mountains, which may lead to the triggering or intensification of deep convection. Based on the results obtained, a hail loss model will be created for the insurance market to convert the observed hail parameters into monetary terms, for example, mean loss or maximum loss. Such a model will make it possible to quantify the hail risk for a given return period at the local scale or to assess worst-case scenarios.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Dorazio, R.M.; Rago, P.J.
1991-01-01
We simulated mark–recapture experiments to evaluate a method for estimating fishing mortality and migration rates of populations stratified at release and recovery. When fish released in two or more strata were recovered from different recapture strata in nearly the same proportions, conditional recapture probabilities were estimated outside the [0, 1] interval. The maximum likelihood estimates tended to be biased and imprecise when the patterns of recaptures produced extremely "flat" likelihood surfaces. Absence of bias was not guaranteed, however, in experiments where recapture rates could be estimated within the [0, 1] interval. Inadequate numbers of tag releases and recoveries also produced biased estimates, although the bias was easily detected by the high sampling variability of the estimates. A stratified tag–recapture experiment with sockeye salmon (Oncorhynchus nerka) was used to demonstrate procedures for analyzing data that produce biased estimates of recapture probabilities. An estimator was derived to examine the sensitivity of recapture rate estimates to assumed differences in natural and tagging mortality, tag loss, and incomplete reporting of tag recoveries.
Amplifying Dynamic Nuclear Polarization of Frozen Solutions by Incorporating Dielectric Particles
2014-01-01
There is currently great interest in understanding the limits on NMR signal enhancements provided by dynamic nuclear polarization (DNP), and in particular if the theoretical maximum enhancements can be achieved. We show that over a 2-fold improvement in cross-effect DNP enhancements can be achieved in MAS experiments on frozen solutions by simply incorporating solid particles into the sample. At 9.4 T and ∼105 K, enhancements up to εH = 515 are obtained in this way, corresponding to 78% of the theoretical maximum. We also underline that degassing of the sample is important to achieve highest enhancements. We link the amplification effect to the dielectric properties of the solid material, which probably gives rise to scattering, diffraction, and amplification of the microwave field in the sample. This is substantiated by simulations of microwave propagation. A reduction in sample heating at a given microwave power also likely occurs due to reduced dielectric loss. Simulations indicate that the microwave field (and thus the DNP enhancement) is inhomogeneous in the sample, and we deduce that in these experiments between 5 and 10% of the solution actually yields the theoretical maximum signal enhancement of 658. The effect is demonstrated for a variety of particles added to both aqueous and organic biradical solutions. PMID:25285480
Kim, Yang-Hyun; Ahn, Kyung-Sik; Cho, Kyung-Hwan; Kang, Chang Ho; Cho, Sung Bum; Han, Kyungdo; Rho, Yong-Kyun; Park, Yong-Gyu
2017-08-01
This study aimed to examine average height loss and the relationship between height loss and socioeconomic status (SES) among the elderly in South Korea. Data were obtained from the Korean National Health and Nutrition Examination Survey 2008-2010. A total of 5265 subjects (2818 men and 2447 women) were included. Height loss was calculated as the difference between the subject's self-reported maximum adult height and their measured current height. The height loss values were divided into quartiles (Q1-Q4) for men and women. SES was determined using a self-reported questionnaire for education level, family income, and occupation. Height loss was associated with SES in all age groups, and mean height loss increased with age. In the relationship between education level and maximum height loss (Q4), men with ≤6, 7-9, or 10-12 years of education had higher odds ratios for the prevalence of height loss (Q4) than men with the highest education level (≥13 years). With regard to the relationship between the income level and height loss (Q4), the subjects with the lowest income had a higher prevalence of maximum height loss (Q4) than the subjects with the highest income (odds ratios = 2.03 in men and 1.94 in women). Maximum height loss (Q4) was more prevalent in men and women with a low SES and less prevalent in men with a high SES than in men with a middle SES. Height loss (Q4) was associated with education level in men and with income level (especially low income) in men and women. Height loss was also associated with a low SES in men and women.
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and to determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with an R program. The results are then displayed in tables to facilitate the comparison.
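The comparison rests on the bias and MSE of the competing estimators; a minimal Monte Carlo sketch for the maximum likelihood estimator alone is given below (the Bayes estimators under the precautionary, entropy, and L1 loss functions are not reproduced, and the sketch assumes NumPy's scale parameterization of the Rayleigh distribution):

```python
import numpy as np

def rayleigh_mle(x):
    """MLE of the Rayleigh scale parameter: sigma_hat = sqrt(sum(x^2) / (2n))."""
    return np.sqrt(np.mean(x**2) / 2.0)

def bias_and_mse(sigma=2.0, n=30, reps=10_000, seed=1):
    """Estimate bias and MSE of the MLE by repeated sampling."""
    rng = np.random.default_rng(seed)
    est = np.array([rayleigh_mle(rng.rayleigh(scale=sigma, size=n)) for _ in range(reps)])
    return est.mean() - sigma, np.mean((est - sigma) ** 2)

bias, mse = bias_and_mse()
print(f"bias={bias:.4f}, MSE={mse:.4f}")
```

The same loop, with the estimator swapped out, is the natural way to tabulate the bias/MSE comparison the abstract describes.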
Maximum powers of low-loss series-shunt FET RF switches
NASA Astrophysics Data System (ADS)
Yang, Z.; Hu, X.; Yang, J.; Simin, G.; Shur, M.; Gaska, R.
2009-02-01
A low-loss, high-power single pole single throw (SPST) monolithic RF switch based on AlGaN/GaN heterojunction field effect transistors (HFETs) demonstrates an insertion loss and isolation of 0.15 dB and 45.9 dB at 0.5 GHz, and 0.23 dB and 34.3 dB at 2 GHz. Maximum switching powers are estimated to be +47 dBm or higher. Factors determining the maximum switching powers are analyzed. Design principles to obtain equally high switching powers in the ON and OFF states are developed.
Further studies of particle acceleration in cassiopeia A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chevalier, R.A.; Oegerle, W.R.; Scott, J.S.
We have further investigated models for statistical particle acceleration in the supernova remnant Cas A. Simple (three-parameter) models involving continuous second order Fermi acceleration and variable relativistic particle injection can reproduce the observed radio properties of Cas A, including the low-frequency flux anomaly first noted by Erickson and Perley. Models dominated by adiabatic expansion losses are preferable to those dominated by particle escape. The gain time determined from these models agrees well with that predicted from the hydrodynamic situation in Cas A. A model predicting the high-frequency nonthermal spectrum of Cas A indicates that the spectrum turns down in the optical regime due to synchrotron losses. The maximum relativistic particle energy content of Cas A was probably about several times 10^49-10^50 ergs, which can be compared with an estimated initial kinetic energy in the range 0.24 to 1.0 x 10^52 ergs. If relativistic particles can escape from Cas A, their spectra will have certain characteristics: the electron spectrum will have a turnover due to synchrotron losses and the proton spectrum will have a cutoff due to the particle gyroradii becoming larger than the sizes of the magnetic scattering centers. The observed bend in the galactic cosmic ray spectrum could be due to energy losses within the source remnant itself instead of losses incurred during propagation through the Galaxy. We also comment on other models for the relativistic electron content of Cas A.
Parallel Low-Loss Measurement of Multiple Atomic Qubits
NASA Astrophysics Data System (ADS)
Kwon, Minho; Ebert, Matthew F.; Walker, Thad G.; Saffman, M.
2017-11-01
We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state dependent fluorescence detection in a dipole trap array of five sites. The presence of atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve mean state detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is <2 % and the initial hyperfine state is preserved with >98 % probability.
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Vincent, Brett T.
1993-01-01
The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of this data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket reentry analysis between the known damage level/flight environment relationships and the predicted flight environment.
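The damage-probability assessment described above amounts to reading exceedance probabilities off an empirical distribution; a minimal sketch, assuming a hypothetical array of actual-to-predicted maximum dynamic pressure ratios rather than the NASA flight data set itself:

```python
import numpy as np

# Hypothetical ratios of actual to predicted re-entry maximum dynamic pressure.
ratios = np.array([0.92, 1.05, 0.98, 1.21, 0.88, 1.10, 1.02, 0.95, 1.30, 1.07])

def exceedance_probability(ratios, threshold):
    """Empirical probability that the actual qmax exceeds threshold * predicted qmax."""
    return np.mean(ratios > threshold)

print(exceedance_probability(ratios, 1.2))  # P(actual > 1.2 x predicted)
```

Pairing such exceedance probabilities with the flight-environment levels at which body damage was observed gives the kind of heating-damage probability estimate the paper reports.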
THE INFLUENCE OF ENDOTOXIN ADMINISTRATION ON THE NUTRITIONAL REQUIREMENTS OF MICE
Dubos, René; Costello, Richard; Schaedler, Russell W.
1965-01-01
Albino mice lose weight within 24 hours following administration of bacterial endotoxin. The initial weight loss is proportional to the dose of endotoxin injected only when this dose is very small. The loss during the 1st day reaches a maximum with 10 to 30 µg of endotoxin; larger doses increase the duration of the overall effect. The rate at which mice regain weight after administration of endotoxin is markedly influenced by the composition of the diet. Recovery was rapid and complete within a few days when the animals were fed commercial pellets or a semisynthetic diet containing casein. In contrast, recovery was slow and incomplete when wheat gluten was used instead of casein in the diet. The deleterious effect of the gluten diet was less marked in older than in younger animals, probably because the latter have less exacting nutritional requirements. It was postulated that the failure of endotoxin-treated mice to regain weight when fed the gluten diet was due to the fact that this protein is low in certain amino acids. In fact, rapid and complete recovery from the weight loss uniformly occurred when the gluten diet was supplemented with proper amounts of lysine and threonine. The composition of the diet did not influence the extent of the initial loss of weight caused by endotoxin, nor did it prevent the animals from developing tolerance to this substance. PMID:5322368
Effects of Statewide Job Losses on Adolescent Suicide-Related Behaviors
Ananat, Elizabeth Oltmans; Gibson-Davis, Christina M.
2014-01-01
Objectives. We investigated the impact of statewide job loss on adolescent suicide-related behaviors. Methods. We used 1997 to 2009 data from the Youth Risk Behavior Survey and the Bureau of Labor Statistics to estimate the effects of statewide job loss on adolescents’ suicidal ideation, suicide attempts, and suicide plans. Probit regression models controlled for demographic characteristics, state of residence, and year; samples were divided according to gender and race/ethnicity. Results. Statewide job losses during the year preceding the survey increased girls’ probability of suicidal ideation and suicide plans and non-Hispanic Black adolescents’ probability of suicidal ideation, suicide plans, and suicide attempts. Job losses among 1% of a state’s working-age population increased the probability of girls and Blacks reporting suicide-related behaviors by 2 to 3 percentage points. Job losses did not affect the suicide-related behaviors of boys, non-Hispanic Whites, or Hispanics. The results were robust to the inclusion of other state economic characteristics. Conclusions. As are adults, adolescents are affected by economic downturns. Our findings show that statewide job loss increases adolescent girls’ and non-Hispanic Blacks’ suicide-related behaviors. PMID:25122027
Effects of delay and probability combinations on discounting in humans
Cox, David J.; Dallery, Jesse
2017-01-01
To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n = 212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n = 98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. PMID:27498073
A study of electric transmission lines for use on the lunar surface
NASA Technical Reports Server (NTRS)
Gaustad, Krista L.; Gordon, Lloyd B.; Weber, Jennifer R.
1994-01-01
Candidate sources of electrical power for a lunar base include solar/chemical systems, nuclear power with static conversion, and nuclear power with dynamic conversion. The transmission of power via transmission lines is more practical than power beaming or superconducting lines because of its low cost and reliable, proven technology. Transmission lines must have minimum mass, maximum efficiency, and the ability to operate reliably in the lunar environment. The transmission line design includes conductor material, insulator material, conductor geometry, conductor configuration, line location, waveform, phase selection, and frequency. This presentation outlines the design. Liquid and gaseous dielectrics are undesirable for long term use in the lunar vacuum due to a high probability of loss. Thus, insulation for a high voltage transmission line will most likely be solid dielectric or vacuum insulation.
Risk Decision Making Model for Reservoir Floodwater Resources Utilization
NASA Astrophysics Data System (ADS)
Huang, X.
2017-12-01
Floodwater resources utilization (FRU) can alleviate the shortage of water resources, but it carries risks. In order to utilize floodwater resources safely and efficiently, it is necessary to study the risk of reservoir FRU. In this paper, the risk rate of exceeding the design flood water level and the risk rate of exceeding the safety discharge are estimated. Based on the principle of minimum risk and maximum benefit of FRU, a multi-objective risk decision making model for FRU is constructed. Probability theory and mathematical statistics are used to calculate the risk rate; the C-D production function method and emergy analysis are used to calculate the risk benefit; the risk loss is related to the flood inundation area and the loss per unit area; and the multi-objective decision making problem of the model is solved by the constraint method. Taking the Shilianghe Reservoir in Jiangsu Province as an example, the optimal equilibrium solution for FRU of the Shilianghe Reservoir is found by using the risk decision making model, and the validity and applicability of the model are verified.
Prospect theory reflects selective allocation of attention.
Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph
2018-02-01
There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
Scaling behavior for random walks with memory of the largest distance from the origin
NASA Astrophysics Data System (ADS)
Serva, Maurizio
2013-11-01
We study a one-dimensional random walk with memory. The behavior of the walker is modified with respect to the simple symmetric random walk only when he or she is at the maximum distance ever reached from his or her starting point (home). In this case, having the choice to move farther or to move closer, the walker decides with different probabilities. If the probability of a forward step is higher than the probability of a backward step, the walker is bold, otherwise he or she is timorous. We investigate the asymptotic properties of this bold-timorous random walk, showing that the scaling behavior varies continuously from subdiffusive (timorous) to superdiffusive (bold). The scaling exponents are fully determined with a new mathematical approach based on a decomposition of the dynamics in active journeys (the walker is at the maximum distance) and lazy journeys (the walker is not at the maximum distance).
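A minimal simulation sketch of the walk described (the bias applies only when the walker sits at the running maximum distance from home; p > 1/2 corresponds to the bold case and p < 1/2 to the timorous case, while the analytical determination of the scaling exponents in the paper is not reproduced):

```python
import numpy as np

def walk(n_steps, p_forward, seed=0):
    """1D random walk whose step is biased only at the maximum distance ever reached."""
    rng = np.random.default_rng(seed)
    x, max_dist = 0, 0
    for _ in range(n_steps):
        if max_dist > 0 and abs(x) == max_dist:
            step = 1 if rng.random() < p_forward else -1
            x += step * np.sign(x)          # forward = away from home
        else:
            x += rng.choice((-1, 1))        # ordinary symmetric step
        max_dist = max(max_dist, abs(x))
    return max_dist

# A bold walker (p > 1/2) should reach larger maximum distances than a timorous one.
print(walk(100_000, 0.7), walk(100_000, 0.3))
```

Repeating the simulation over many walk lengths and fitting the growth of the maximum distance gives a numerical estimate of the scaling exponent discussed in the abstract.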
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But, is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How do changes in the watershed or climate over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
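A minimal sketch of the method-of-moments fit described, assuming an array of unimpaired annual peak flows and using SciPy's Pearson Type III distribution on the log-transformed data (regional skew weighting, outlier screening, and the other Bulletin 17B/17C adjustments used in US practice are omitted):

```python
import numpy as np
from scipy import stats

def lp3_quantile(annual_peaks, return_period):
    """Log-Pearson Type III flood quantile via method of moments on log10 flows."""
    logq = np.log10(np.asarray(annual_peaks, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    p_nonexceed = 1.0 - 1.0 / return_period
    return 10 ** stats.pearson3.ppf(p_nonexceed, skew, loc=mean, scale=std)

# Hypothetical annual maximum flows (e.g., cfs) at a gaged site.
peaks = [1200, 950, 2300, 1800, 760, 3100, 1450, 990, 2050, 1680,
         870, 2600, 1320, 1100, 1950]
print(lp3_quantile(peaks, 100))   # approximate 1-in-100-year flood estimate
```

Bootstrapping the sample before refitting is one simple way to display the sampling uncertainty that the abstract argues must accompany any such estimate.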
[Hygienic estimation of combined influence of noise and infrasound on the organism of military men].
Akhmetzianov, I M; Zinkin, V N; Petreev, I V; Dragan, S P
2011-11-01
Hygienic assessment of the combined influence of noise and infrasound on the organism of military personnel. The combined influence of noise and infrasound is accompanied by a substantial increase in the risk of developing neurosensory deafness and hypertensive illness. Under combined exposure to noise and infrasound with the spectral maximum in the audible range, the development of neurosensory deafness predominates, and the probability of developing ear pathology exceeds the values established by ISO 1999:1990. If the spectral maximum falls in the infrasonic range, the development of hypertensive illness is the more probable outcome.
Water level dynamics in wetlands and nesting success of Black Terns in Maine
Gilbert, A.T.; Servello, F.A.
2005-01-01
The Black Tern (Chlidonias niger) nests in freshwater wetlands that are prone to water level fluctuations, and nest losses to flooding are common. We examined temporal patterns in water levels at six sites with Black Tern colonies in Maine and determined probabilities of flood events and associated nest loss at Douglas Pond, the location of the largest breeding colony. Daily precipitation data from weather stations and water flow data from a flow gauge below Douglas Pond were obtained for 1960-1999. Information on nest losses from three floods at Douglas Pond in 1997-1999 was used to characterize small (6% nest loss), medium (56% nest loss) and large (94% nest loss) flood events, and we calculated probabilities of these three levels of flooding occurring at Douglas Pond using historic water level data. Water levels generally decreased gradually during the nesting season at colony sites, except at Douglas Pond where water levels fluctuated substantially in response to rain events. Annual probabilities of small, medium, and large flood events were 68%, 35%, and 13% for nests initiated during 23 May-12 July, with similar probabilities for early (23 May-12 June) and late (13 June-12 July) periods. An index of potential nest loss indicated that medium floods at Douglas Pond had the greatest potential effect on nest success because they occurred relatively frequently and inundated large proportions of nests. Nest losses at other colonies were estimated to be approximately 30% of those at Douglas Pond. Nest losses to flooding appear to be common for the Black Tern in Maine and related to spring precipitation patterns, but ultimate effects on breeding productivity are uncertain.
Influence of maneuverability on helicopter combat effectiveness
NASA Technical Reports Server (NTRS)
Falco, M.; Smith, R.
1982-01-01
A computational procedure employing a stochastic learning method in conjunction with dynamic simulation of helicopter flight and weapon system operation was used to derive helicopter maneuvering strategies. The derived strategies maximize either survival or kill probability and are in the form of a feedback control based upon threat visual or warning system cues. Maneuverability parameters implicit in the strategy development include maximum longitudinal acceleration and deceleration, maximum sustained and transient load factor turn rate at forward speed, and maximum pedal turn rate and lateral acceleration at hover. Results are presented in terms of probability of kill for all combat initial conditions for two threat categories.
Power loss in open cavity diodes and a modified Child-Langmuir law
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biswas, Debabrata; Kumar, Raghwendra; Puri, R.R.
Diodes used in most high power devices are inherently open. It is shown that under such circumstances, there is a loss of electromagnetic radiation leading to a lower critical current as compared to closed diodes. The power loss can be incorporated in the standard Child-Langmuir framework by introducing an effective potential. The modified Child-Langmuir law can be used to predict the maximum power loss for a given plate separation and potential difference as well as the maximum transmitted current for this power loss. The effectiveness of the theory is tested numerically.
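For reference, the classical Child-Langmuir limit that the modified law builds on can be stated explicitly; the effective-potential correction for radiative power loss introduced in the paper is not reproduced here:

```latex
\[
J_{\mathrm{CL}} \;=\; \frac{4\,\varepsilon_0}{9}\,\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}},
\]
```

where V is the potential difference across the gap and d is the plate separation; per the abstract, the open-cavity result amounts to replacing V with a lower effective potential, which lowers the critical current relative to a closed diode.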
Häusler, Alexander Niklas; Oroz Artigas, Sergio; Trautner, Peter; Weber, Bernd
2016-01-01
People differ in the way they approach and handle choices with unsure outcomes. In this study, we demonstrate that individual differences in the neural processing of gains and losses relates to attentional differences in the way individuals search for information in gambles. Fifty subjects participated in two independent experiments. Participants first completed an fMRI experiment involving financial gains and losses. Subsequently, they performed an eye-tracking experiment on binary choices between risky gambles, each displaying monetary outcomes and their respective probabilities. We find that individual differences in gain and loss processing relate to attention distribution. Individuals with a stronger reaction to gains in the ventromedial prefrontal cortex paid more attention to monetary amounts, while a stronger reaction in the ventral striatum to losses was correlated with an increased attention to probabilities. Reaction in the posterior cingulate cortex to losses was also found to correlate with an increased attention to probabilities. Our data show that individual differences in brain activity and differences in information search processes are closely linked.
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides computed by the differences in the investigated nucleotide sequences. However, the MP method is often criticized as it only counts the substitutions observable at the current time and all the unobservable substitutions that really occur in the evolutionary history are omitted. In order to take into account the unobservable substitutions, some substitution models have been established and they are now widely used in the DM and ML methods but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees and the reconstructed trees in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
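As background to the MP criterion discussed above, a minimal sketch of Fitch's small-parsimony count for one nucleotide site on a fixed binary tree; this counts only the observable substitutions, and the probability-tree/substitution-model extension proposed by the authors is not reproduced:

```python
def fitch_count(tree, states):
    """Return (state set, substitution count) for one site on a rooted binary tree.

    tree  : nested tuples of leaf names, e.g. (("A", "B"), ("C", "D"))
    states: dict mapping leaf name -> observed nucleotide at this site
    """
    if isinstance(tree, str):                     # leaf node
        return {states[tree]}, 0
    left_set, left_cost = fitch_count(tree[0], states)
    right_set, right_cost = fitch_count(tree[1], states)
    common = left_set & right_set
    if common:                                    # intersection: no extra substitution
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

tree = (("A", "B"), ("C", "D"))
site = {"A": "G", "B": "G", "C": "T", "D": "G"}
print(fitch_count(tree, site)[1])                 # minimum number of substitutions: 1
```

Summing this count over all sites and minimizing over tree topologies is the classical MP search that the probability phylogenetic tree model augments with a substitution model.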
Schecklmann, Martin; Vielsmeier, Veronika; Steffens, Thomas; Landgrebe, Michael; Langguth, Berthold; Kleinjung, Tobias
2012-01-01
Background Different mechanisms have been proposed to be involved in tinnitus generation, among them reduced lateral inhibition and homeostatic plasticity. On a perceptual level these different mechanisms should be reflected by the relationship between the individual audiometric slope and the perceived tinnitus pitch. Whereas some studies found the tinnitus pitch corresponding to the maximum hearing loss, others stressed the relevance of the edge frequency. This study investigates the relationship between tinnitus pitch and audiometric slope in a large sample. Methodology This retrospective observational study analyzed 286 patients. The matched tinnitus pitch was compared to the frequency of maximum hearing loss and the edge of the audiogram (steepest hearing loss) by t-tests and correlation coefficients. These analyses were performed for the whole group and for sub-groups (uni- vs. bilateral (117 vs. 338 ears), pure-tone vs. narrow-band (340 vs. 115 ears), and low and high audiometric slope (114 vs. 113 ears)). Findings For the right ear, tinnitus pitch was in the same range and correlated significantly with the frequency of maximum hearing loss, but differed from and did not correlate with the edge frequency. For the left ear, similar results were found but the correlation between tinnitus pitch and maximum hearing loss did not reach significance. Sub-group analyses (bi- and unilateral, tinnitus character, slope steepness) revealed identical results except for the sub-group with high audiometric slope which revealed a higher frequency of maximum hearing loss as compared to the tinnitus pitch. Conclusion The study-results confirm a relationship between tinnitus pitch and maximum hearing loss but not to the edge frequency, suggesting that tinnitus is rather a fill-in-phenomenon resulting from homeostatic mechanisms, than the result of deficient lateral inhibition. Sub-group analyses suggest that audiometric steepness and the side of affected ear affect this relationship. Future studies should control for these potential confounding factors. PMID:22529949
Code of Federal Regulations, 2010 CFR
2010-01-01
... external doors and windows are accounted for in the investigation of the probable behavior of the... and windows must be designed to withstand the probable maximum local pressures. [Amdt. 27-11, 41 FR...
Code of Federal Regulations, 2010 CFR
2010-01-01
... external doors and windows are accounted for in the investigation of the probable behavior of the... and windows must be designed to withstand the probable maximum local pressures. [Amdt. 29-12, 41 FR...
Academic decision making and prospect theory.
Mowrer, Robert R; Davidson, William B
2011-08-01
Two studies are reported that investigate the applicability of prospect theory to college students' academic decision making. Exp. 1 failed to provide support for the risk-seeking portion of the fourfold pattern predicted by prospect theory but did find the greater weighting of losses over gains. Using a more sensitive dependent measure, Exp. 2 replicated the gain-loss effect of the first experiment and also found some support for the fourfold pattern in the interaction between probability and gain versus loss. The greatest risk-seeking was found in the high-probability loss condition.
NASA Technical Reports Server (NTRS)
Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.
1960-01-01
The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing- approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
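The two reported probability levels are mutually consistent under a simple Gaussian error model; a quick check (the normal-distribution reading is an assumption, not stated in the original report):

```latex
\[
\mathrm{PE} = 0.6745\,\sigma \approx 36\ \mathrm{ft}
\;\Rightarrow\; \sigma \approx 53\ \mathrm{ft},
\qquad
3\sigma \approx 160\ \mathrm{ft},
\]
```

which agrees closely with the reported maximum probable error of +/- 159 feet for the landing-approach condition.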
46 CFR 28.880 - Hydraulic equipment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... times the system's maximum operating pressure. (c) Each hydraulic system must be equipped with at least... sudden loss of control due to loss of hydraulic system pressure. A system is considered to be fail-safe... catalog number and maximum allowable working pressure. (k) Existing hydraulic piping, nonmetallic hose...
46 CFR 28.880 - Hydraulic equipment.
Code of Federal Regulations, 2013 CFR
2013-10-01
... times the system's maximum operating pressure. (c) Each hydraulic system must be equipped with at least... sudden loss of control due to loss of hydraulic system pressure. A system is considered to be fail-safe... catalog number and maximum allowable working pressure. (k) Existing hydraulic piping, nonmetallic hose...
46 CFR 28.880 - Hydraulic equipment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... times the system's maximum operating pressure. (c) Each hydraulic system must be equipped with at least... sudden loss of control due to loss of hydraulic system pressure. A system is considered to be fail-safe... catalog number and maximum allowable working pressure. (k) Existing hydraulic piping, nonmetallic hose...
46 CFR 28.880 - Hydraulic equipment.
Code of Federal Regulations, 2012 CFR
2012-10-01
... times the system's maximum operating pressure. (c) Each hydraulic system must be equipped with at least... sudden loss of control due to loss of hydraulic system pressure. A system is considered to be fail-safe... catalog number and maximum allowable working pressure. (k) Existing hydraulic piping, nonmetallic hose...
2013-01-01
Background Zirconia materials are known for their optimal aesthetics, but they are brittle, and concerns remain about whether their mechanical properties are sufficient for withstanding the forces exerted in the oral cavity. Therefore, this study compared the maximum deformation and failure forces of titanium implants between titanium-alloy and zirconia abutments under oblique compressive forces in the presence of two levels of marginal bone loss. Methods Twenty implants were divided into Groups A and B, with simulated bone losses of 3.0 and 1.5 mm, respectively. Groups A and B were also each divided into two subgroups with five implants each: (1) titanium implants connected to titanium-alloy abutments and (2) titanium implants connected to zirconia abutments. The maximum deformation and failure forces of each sample were determined using a universal testing machine. The data were analyzed using the nonparametric Mann–Whitney test. Results The mean maximum deformation and failure forces obtained for the subgroups were as follows: A1 (simulated bone loss of 3.0 mm, titanium-alloy abutment) = 540.6 N and 656.9 N, respectively; A2 (simulated bone loss of 3.0 mm, zirconia abutment) = 531.8 N and 852.7 N; B1 (simulated bone loss of 1.5 mm, titanium-alloy abutment) = 1070.9 N and 1260.2 N; and B2 (simulated bone loss of 1.5 mm, zirconia abutment) = 907.3 N and 1182.8 N. The maximum deformation force differed significantly between Groups B1 and B2 but not between Groups A1 and A2. The failure force did not differ between Groups A1 and A2 or between Groups B1 and B2. The maximum deformation and failure forces differed significantly between Groups A1 and B1 and between Groups A2 and B2. Conclusions Based on this experimental study, the maximum deformation and failure forces are lower for implants with a marginal bone loss of 3.0 mm than for those with a loss of 1.5 mm. Zirconia abutments can withstand physiological occlusal forces applied in the anterior region. PMID:23688204
Recyclable amplification for single-photon entanglement from photon loss and decoherence
NASA Astrophysics Data System (ADS)
Zhou, Lan; Chen, Ling-Quan; Zhong, Wei; Sheng, Yu-Bo
2018-01-01
We put forward a highly efficient recyclable single-photon assisted amplification protocol, which can protect single-photon entanglement (SPE) from photon loss and decoherence. Making use of quantum nondemolition detection gates constructed with the help of cross-Kerr nonlinearity, our protocol has some attractive advantages. First, the parties can recover less-entangled SPE to be maximally entangled SPE, and reduce photon loss simultaneously. Second, if the protocol fails, the parties can repeat the protocol to reuse some discarded items, which can increase the success probability. Third, when the protocol is successful, they can similarly repeat the protocol to further increase the fidelity of the SPE. Thereby, our protocol provides a possible way to obtain high entanglement, high fidelity and high success probability simultaneously. In particular, our protocol shows higher success probability in the practical high photon loss channel. Based on the above features, our amplification protocol has potential for future application in long-distance quantum communication.
Andreassen, Bettina K; Myklebust, Tor Å; Haug, Erik S
2017-02-01
Reports from cancer registries often lack clinically relevant information, which would be useful in estimating the prognosis of individual patients with urothelial carcinoma of the urinary bladder (UCB). This article presents estimates of crude probabilities of death due to UCB and the expected loss of lifetime stratified by patient characteristics. In Norway, 10,332 patients were diagnosed with UCB between 2001 and 2010. The crude probabilities of death due to UCB were estimated, stratified by gender, age and T stage, using flexible parametric survival models. Based on these models, the loss in expectation of lifetime due to UCB was also estimated for the different strata. There is large variation in the estimated crude probabilities of death due to UCB (from 0.03 to 0.76 within 10 years since diagnosis) depending on age, gender and T stage. Furthermore, the expected loss of life expectancy is more than a decade for younger patients with muscle-invasive UCB and between a few months and 5 years for non-muscle-invasive UCB. The suggested framework leads to clinically relevant prognostic risk estimates for individual patients diagnosed with UCB and to the consequences in terms of expected loss of lifetime. The published probability tables can be used in clinical practice for risk communication.
Building Loss Estimation for Earthquake Insurance Pricing
NASA Astrophysics Data System (ADS)
Durukal, E.; Erdik, M.; Sesetyan, K.; Demircioglu, M. B.; Fahjan, Y.; Siyahi, B.
2005-12-01
After the 1999 earthquakes in Turkey several changes in the insurance sector took place. A compulsory earthquake insurance scheme was introduced by the government. The reinsurance companies increased their rates. Some even suspended operations in the market. And, most important, the insurance companies realized the importance of portfolio analysis in shaping their future market strategies. The paper describes an earthquake loss assessment methodology that can be used for insurance pricing and portfolio loss estimation and that is based on our working experience in the insurance market. The basic ingredients are probabilistic and deterministic regional site-dependent earthquake hazard, regional building inventory (and/or portfolio), building vulnerabilities associated with typical construction systems in Turkey and estimations of building replacement costs for different damage levels. Probable maximum and average annualized losses are estimated as the result of the analysis. There is a two-level earthquake insurance system in Turkey, the effect of which is incorporated in the algorithm: the national compulsory earthquake insurance scheme and the private earthquake insurance system. To buy private insurance one has to be covered by the national system, which has limited coverage. As a demonstration of the methodology we look at the case of Istanbul and use its building inventory data instead of a portfolio. A state-of-the-art time-dependent earthquake hazard model that portrays the increased earthquake expectancies in Istanbul is used. Intensity and spectral displacement based vulnerability relationships are incorporated in the analysis. In particular we look at the uncertainty in the loss estimations that arises from the vulnerability relationships, and at the effect of the implemented repair cost ratios.
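As generic catastrophe-model bookkeeping (not necessarily the exact formulation used in the paper), the two headline metrics can be written, for a stochastic event set with annual occurrence rates λ_i and portfolio losses L_i, as:

```latex
\[
\mathrm{AAL} \;=\; \sum_i \lambda_i\, \mathbb{E}[L_i],
\qquad
F(x) \;=\; \sum_{i:\,L_i > x} \lambda_i,
\qquad
\mathrm{PML}_T \;=\; F^{-1}\!\left(\tfrac{1}{T}\right),
\]
```

where AAL is the average annualized loss, F(x) is the loss exceedance frequency curve, and PML_T is the probable maximum loss read off at the chosen return period T.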
Peterman, W E; Semlitsch, R D
2014-10-01
Many patterns observed in ecology, such as species richness, life history variation, habitat use, and distribution, have physiological underpinnings. For many ectothermic organisms, temperature relationships shape these patterns, but for terrestrial amphibians, water balance may supersede temperature as the most critical physiologically limiting factor. Many amphibian species have little resistance to water loss, which restricts them to moist microhabitats, and may significantly affect foraging, dispersal, and courtship. Using plaster models as surrogates for terrestrial plethodontid salamanders (Plethodon albagula), we measured water loss under ecologically relevant field conditions to estimate the duration of surface activity time across the landscape. Surface activity time was significantly affected by topography, solar exposure, canopy cover, maximum air temperature, and time since rain. Spatially, surface activity times were highest in ravine habitats and lowest on ridges. Surface activity time was a significant predictor of salamander abundance, as well as a predictor of successful recruitment; the probability of a juvenile salamander occupying an area with high surface activity time was two times greater than an area with limited predicted surface activity. Our results suggest that survival, recruitment, or both are demographic processes that are affected by water loss and the ability of salamanders to be surface-active. Results from our study extend our understanding of plethodontid salamander ecology, emphasize the limitations imposed by their unique physiology, and highlight the importance of water loss to spatial population dynamics. These findings are timely for understanding the effects that fluctuating temperature and moisture conditions predicted for future climates will have on plethodontid salamanders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes Law, and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B. The initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0. And when the second event is smaller than the first event the maximum dependence is less than 1, as defined by Bayes Law. As such, alternative dependence equations are provided along with a look-up table defining the maximum and maximum negative dependence given the probability of two events. Conclusions: THERP dependence has been used ubiquitously for decades, and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
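A minimal sketch contrasting the discrete THERP dependence levels with the full probabilistic bounds on the conditional probability of a second failure given a first. The THERP formulas follow the commonly cited NUREG/CR-1278 conditional-probability equations (an assumption about the exact form used by the authors), while the bounds follow directly from the Frechet-Hoeffding limits on a joint probability:

```python
def therp_conditional(p_b, level):
    """THERP conditional probability of event B given A at a discrete dependence level."""
    formulas = {
        "zero":     p_b,
        "low":      (1 + 19 * p_b) / 20,
        "moderate": (1 + 6 * p_b) / 7,
        "high":     (1 + p_b) / 2,
        "complete": 1.0,
    }
    return formulas[level]

def frechet_conditional_bounds(p_a, p_b):
    """Min/max of P(B|A) permitted by probability theory (Frechet-Hoeffding bounds)."""
    joint_max = min(p_a, p_b)                 # maximum possible overlap
    joint_min = max(0.0, p_a + p_b - 1.0)     # minimum possible overlap
    return joint_min / p_a, joint_max / p_a

p_a, p_b = 1e-3, 1e-3
print(therp_conditional(p_b, "high"))         # ~0.5005
print(frechet_conditional_bounds(p_a, p_b))   # (0.0, 1.0): full range, incl. negative dependence
```

For rare events the probabilistic bounds span essentially the whole unit interval, which is the paper's point that the fixed THERP levels cover only part of the admissible (and only the positive) dependence range.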
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Managing risk with chance-constrained programming
Michael Bevers; Brian Kent
2007-01-01
Reducing catastrophic fire risk is an important objective of many fuel treatment programs (Kent et al. 2003; Machlis et al. 2002; USDA/USDI 2001a). In practice, risk reductions can be accomplished by lowering the probability of a given loss to forest fires, the amount of probable loss, or both. Forest fire risk objectives are seldom quantified, however, making it...
Age, Loss Minimization, and the Role of Probability for Decision-Making.
Best, Ryan; Freund, Alexandra M
2018-04-05
Older adults are stereotypically considered to be risk averse compared to younger age groups, although meta-analyses on age and the influence of gain/loss framing on risky choices have not found empirical evidence for age differences in risk-taking. The current study extends the investigation of age differences in risk preference by including analyses on the effect of the probability of a risky option on choices in gain versus loss situations. Participants (n = 130 adults aged 19-80 years) chose between a certain option and a risky option of varying probability in gain- and loss-framed gambles with actual monetary outcomes. Only younger adults displayed an overall framing effect. Younger and older adults responded differently to probability fluctuations depending on the framing condition. Older adults were more likely to choose the risky option as the likelihood of avoiding a larger loss increased and as the likelihood of a larger gain decreased. Younger adults responded with the opposite pattern: they were more likely to choose the risky option as the likelihood of a larger gain increased and as the likelihood of avoiding a (slightly) larger loss decreased. Results suggest that older adults are more willing to select a risky option when it increases the likelihood that larger losses be avoided, whereas younger adults are more willing to select a risky option when it allows for slightly larger gains. This finding supports expectations based on theoretical accounts of goal orientation shifting away from securing gains in younger adulthood towards maintenance and avoiding losses in older adulthood. Findings are also discussed in respect to the affective enhancement perspective and socioemotional selectivity theory. © 2018 S. Karger AG, Basel.
Scale-Invariant Transition Probabilities in Free Word Association Trajectories
Costa, Martin Elias; Bonomo, Flavia; Sigman, Mariano
2009-01-01
Free word association has been used as a vehicle to understand the organization of human thoughts. The original studies relied mainly on qualitative assertions, yielding the widely intuitive notion that trajectories of word associations are structured, yet considerably more random than organized linguistic text. Here we set out to determine a precise characterization of this space, generating a large number of word association trajectories in a web-implemented game. We embedded the trajectories in the graph of word co-occurrences from a linguistic corpus. To constrain possible transport models we measured the memory loss and the cycling probability. These two measures could not be reconciled by a bounded diffusive model, since the cycling probability was very high (16% of order-2 cycles), implying a majority of short-range associations, whereas the memory loss was very rapid (converging to the asymptotic value in ∼7 steps), which, in turn, forced a high fraction of long-range associations. We show that memory loss and cycling probabilities of free word association trajectories can be simultaneously accounted for by a model in which transitions are determined by a scale invariant probability distribution. PMID:19826622
Unambiguous discrimination between linearly dependent equidistant states with multiple copies
NASA Astrophysics Data System (ADS)
Zhang, Wen-Hai; Ren, Gang
2018-07-01
Linearly independent quantum states can be unambiguously discriminated, but linearly dependent ones cannot. For linearly dependent quantum states, however, if C copies of the single states are available, then they may form linearly independent states, and can be unambiguously discriminated. We consider unambiguous discrimination among N = D + 1 linearly dependent states given that C copies are available and that the single copies span a D-dimensional space with equal inner products. The maximum unambiguous discrimination probability is derived for all C with equal a priori probabilities. For this classification of the linearly dependent equidistant states, our result shows that if C is even then adding a further copy fails to increase the maximum discrimination probability.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
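A minimal sketch of the first estimator discussed above (a Gaussian kernel density estimate with an automatically chosen scaling factor); the sample and the bandwidth rule are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)    # hypothetical random sample

# Kernel estimator; the kernel scaling factor (bandwidth) is chosen automatically
# from the sample via Scott's rule, one common data-driven choice.
kde = gaussian_kde(sample, bw_method="scott")
grid = np.linspace(-4, 4, 401)
density = kde(grid)                                  # estimated probability density
print(np.trapz(density, grid))                       # integrates to approximately 1
```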
Pneumatic Control Device for the Pershing 2 Adaption Kit
1979-03-14
...stem forward force to maintain a pressure seal (this, versus a 16- to 25-pound maximum reverse force component due to pressure). In all probability, initial... Gas Generator Compatibility Test Report. Requirements: The requirements for the Pershing II, Phase I...
Markiewicz, Łukasz; Kubińska, Elżbieta
2015-01-01
This paper aims to provide insight into information processing differences between hot and cold risk taking decision tasks within a single domain. Decision theory defines risky situations using at least three parameters: outcome one (often a gain) with its probability and outcome two (often a loss) with a complementary probability. Although a rational agent should consider all of the parameters, s/he could potentially narrow their focus to only some of them, particularly when explicit Type 2 processes do not have the resources to override implicit Type 1 processes. Here we investigate differences in risky situation parameters' influence on hot and cold decisions. Although previous studies show lower information use in hot than in cold processes, they do not provide decision weight changes and therefore do not explain whether this difference results from worse concentration on each parameter of a risky situation (probability, gain amount, and loss amount) or from ignoring some parameters. Two studies were conducted, with participants performing the Columbia Card Task (CCT) in either its Cold or Hot version. In the first study, participants also performed the Cognitive Reflection Test (CRT) to monitor their ability to override Type 1 processing cues (implicit processes) with Type 2 explicit processes. Because hypothesis testing required comparison of the relative importance of risky situation decision weights (gain, loss, probability), we developed a novel way of measuring information use in the CCT by employing a conjoint analysis methodology. Across the two studies, results indicated that in the CCT Cold condition decision makers concentrate on each information type (gain, loss, probability), but in the CCT Hot condition they concentrate mostly on a single parameter: probability of gain/loss. We also show that an individual's CRT score correlates with information use propensity in cold but not hot tasks. Thus, the affective dimension of hot tasks inhibits correct information processing, probably because it is difficult to engage Type 2 processes in such circumstances. Individuals' Type 2 processing abilities (measured by the CRT) assist greater use of information in cold tasks but do not help in hot tasks.
Markiewicz, Łukasz; Kubińska, Elżbieta
2015-01-01
Objective: This paper aims to provide insight into information processing differences between hot and cold risk taking decision tasks within a single domain. Decision theory defines risky situations using at least three parameters: outcome one (often a gain) with its probability and outcome two (often a loss) with a complementary probability. Although a rational agent should consider all of the parameters, s/he could potentially narrow their focus to only some of them, particularly when explicit Type 2 processes do not have the resources to override implicit Type 1 processes. Here we investigate differences in risky situation parameters' influence on hot and cold decisions. Although previous studies show lower information use in hot than in cold processes, they do not provide decision weight changes and therefore do not explain whether this difference results from worse concentration on each parameter of a risky situation (probability, gain amount, and loss amount) or from ignoring some parameters. Methods: Two studies were conducted, with participants performing the Columbia Card Task (CCT) in either its Cold or Hot version. In the first study, participants also performed the Cognitive Reflection Test (CRT) to monitor their ability to override Type 1 processing cues (implicit processes) with Type 2 explicit processes. Because hypothesis testing required comparison of the relative importance of risky situation decision weights (gain, loss, probability), we developed a novel way of measuring information use in the CCT by employing a conjoint analysis methodology. Results: Across the two studies, results indicated that in the CCT Cold condition decision makers concentrate on each information type (gain, loss, probability), but in the CCT Hot condition they concentrate mostly on a single parameter: probability of gain/loss. We also show that an individual's CRT score correlates with information use propensity in cold but not hot tasks. Thus, the affective dimension of hot tasks inhibits correct information processing, probably because it is difficult to engage Type 2 processes in such circumstances. Individuals' Type 2 processing abilities (measured by the CRT) assist greater use of information in cold tasks but do not help in hot tasks. PMID:26635652
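The estimation of decision weights from choices can be approximated as follows (a hedged sketch only: a logistic regression of the risky choice on standardized gain, loss, and probability attributes, fitted to synthetic data; the papers' actual conjoint analysis methodology and CCT data are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
gain = rng.uniform(10, 30, n)            # hypothetical per-card gain amounts
loss = rng.uniform(250, 750, n)          # hypothetical loss amounts
p_loss = rng.uniform(0.03, 0.30, n)      # hypothetical loss probabilities

# Synthetic choices: the risky option is taken more often for high gain, low loss, low p_loss
logit = 0.15 * gain - 0.004 * loss - 8.0 * p_loss + rng.normal(0, 1, n)
choice = (logit > 0).astype(int)

X = StandardScaler().fit_transform(np.column_stack([gain, loss, p_loss]))
model = LogisticRegression().fit(X, choice)

# Relative importance of each attribute, analogous to decision weights
weights = np.abs(model.coef_[0]) / np.abs(model.coef_[0]).sum()
print(dict(zip(["gain", "loss", "p_loss"], weights.round(2))))
```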
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
50 CFR 648.100 - Catch quotas and other restrictions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... at least a 50-percent probability of success, a fishing mortality rate (F) that produces the maximum... probability of success, that the F specified in paragraph (a) of this section will not be exceeded: (1... necessary to ensure, with at least a 50-percent probability of success, that the applicable specified F will...
Classification of change detection and change blindness from near-infrared spectroscopy signals
NASA Astrophysics Data System (ADS)
Tanaka, Hirokazu; Katura, Takusige
2011-08-01
Using a machine-learning classification algorithm applied to near-infrared spectroscopy (NIRS) signals, we classify a success (change detection) or a failure (change blindness) in detecting visual changes for a change-detection task. Five subjects perform a change-detection task, and their brain activities are continuously monitored. A support-vector-machine algorithm is applied to classify the change-detection and change-blindness trials, and correct classification probability of 70-90% is obtained for four subjects. Two types of temporal shapes in classification probabilities are found: one exhibiting a maximum value after the task is completed (postdictive type), and another exhibiting a maximum value during the task (predictive type). As for the postdictive type, the classification probability begins to increase immediately after the task completion and reaches its maximum in about the time scale of neuronal hemodynamic response, reflecting a subjective report of change detection. As for the predictive type, the classification probability shows an increase at the task initiation and is maximal while subjects are performing the task, predicting the task performance in detecting a change. We conclude that decoding change detection and change blindness from NIRS signal is possible and argue some future applications toward brain-machine interfaces.
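A minimal sketch of the classification step described above, using a support-vector machine with probability outputs on stand-in features (the actual NIRS feature extraction and subject data are not available here and are replaced by synthetic arrays):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 120, 24
X = rng.normal(size=(n_trials, n_channels))     # stand-in NIRS features per trial
y = rng.integers(0, 2, size=n_trials)           # 1 = change detection, 0 = change blindness
X[y == 1] += 0.8                                # inject a separable signal for illustration

clf = SVC(kernel="linear", probability=True)    # Platt scaling yields class probabilities
acc = cross_val_score(clf, X, y, cv=5).mean()   # correct classification probability (accuracy)
print(f"cross-validated accuracy: {acc:.2f}")
```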
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
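For concreteness, a minimal sketch (with hypothetical elicited fractiles, not the paper's examples) of the fractile-constrained maximum entropy density: it is piecewise uniform, i.e. flat over each fractile interval and discontinuous at the fractiles, as noted above:

```python
import numpy as np

def fmed_density(fractiles, cum_probs, x):
    """Maximum entropy density subject to fractile constraints (FMED).

    Between consecutive fractiles the density is flat, with height equal to the
    probability mass of the interval divided by the width of the interval.
    """
    fractiles, cum_probs, x = map(np.asarray, (fractiles, cum_probs, x))
    heights = np.diff(cum_probs) / np.diff(fractiles)
    idx = np.clip(np.searchsorted(fractiles, x, side="right") - 1, 0, len(heights) - 1)
    inside = (x >= fractiles[0]) & (x <= fractiles[-1])
    return np.where(inside, heights[idx], 0.0)

# Hypothetical elicited 10th, 50th, 90th percentiles plus assumed lower/upper bounds
fractiles = [0.0, 2.0, 5.0, 11.0, 20.0]
cum_probs = [0.0, 0.10, 0.50, 0.90, 1.0]
print(fmed_density(fractiles, cum_probs, np.array([1.0, 4.0, 15.0])))
```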
Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.
2004-01-01
Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
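The two probability models can be illustrated with a short sketch (the event counts below are hypothetical; the study's actual 90-year record per location is not reproduced):

```python
import numpy as np

record_years = 90            # length of the historical landslide record
n_landslides = 20            # hypothetical number of landslides at one hillslope cell
n_years_with_event = 14      # hypothetical number of years with at least one landslide

# Poisson model: annual exceedance probability of one or more future landslides
rate = n_landslides / record_years
p_poisson = 1.0 - np.exp(-rate)

# Binomial model: annual exceedance probability of a year with one or more landslides,
# estimated directly from the fraction of years in which at least one landslide occurred
p_binomial = n_years_with_event / record_years

print(round(p_poisson, 3), round(p_binomial, 3))
```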
NASA Technical Reports Server (NTRS)
Galvas, M. R.
1972-01-01
Centrifugal compressor performance was examined analytically to determine optimum geometry for various applications as characterized by specific speed. Seven specific losses were calculated for various combinations of inlet tip-exit diameter ratio, inlet hub-tip diameter ratio, blade exit backsweep, and inlet-tip absolute tangential velocity for solid body prewhirl. The losses considered were inlet guide vane loss, blade loading loss, skin friction loss, recirculation loss, disk friction loss, vaneless diffuser loss, and vaned diffuser loss. Maximum total efficiencies ranged from 0.497 to 0.868 for a specific speed range of 0.257 to 1.346. Curves of rotor exit absolute flow angle, inlet tip-exit diameter ratio, inlet hub-tip diameter ratio, head coefficient and blade exit backsweep are presented over a range of specific speeds for various inducer tip speeds to permit rapid selection of optimum compressor size and shape for a variety of applications.
Retinal Image Quality During Accommodation
López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.
2013-01-01
Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
Retinal image quality during accommodation.
López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N
2013-07-01
We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
April 23, 1983 tornado at the Savannah River Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, A.J.
1983-07-01
Just before 8:00 p.m. on Saturday, April 23, 1983, a small (F1) tornado touched ground in Jackson, South Carolina and traveled northeast for several miles, passing just northwest of the SRP 700-A Area. The tornado uprooted or snapped many large trees in Jackson, and damaged several homes and buildings, including the loss of an entire roof from one store. After it passed through Jackson, the tornado damaged pine forests on the SRP border. Based on the Fujita tornado intensity scale and observed damage, the maximum winds in the tornado were probably 100 to 150 mph. Several swaths in the forested area, each several hundred yards long, were almost denuded (80 to 90% uprooting or snapping of trees). As the tornado approached A Area, it appeared to be weakening, with maximum tree losses of 30 to 50%. The A-Area meteorological tower measured winds of 62 mph before the wind sensor was blown off the tower. Damage to A Area was small, although several trailers lost windows and pieces of roofing, one trailer was overturned, and at least one small shed was demolished. The tornado continued to the northeast where it died out over the SRP forest after felling trees for several more miles. Inspection of the rest of the SRP site from a helicopter showed that no other tornados hit SRP during the April 23 storm, although other tornados hit parts of South Carolina and Georgia. It was the first known occurrence of a tornado at SRP since 1976.
The Significance of the Record Length in Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Senarath, S. U.
2013-12-01
Of all of the potential natural hazards, flood is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by using an appropriate probability density function that fits the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
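A minimal sketch of the fitting step described above, using a Pearson type III distribution fitted to the log-transformed annual maxima (synthetic flows stand in for observed records; the record length can be shortened to explore the errors this study discusses):

```python
import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(0)
annual_max = rng.lognormal(mean=5.0, sigma=0.5, size=30)   # synthetic 30-year record (m^3/s)

# Log-Pearson III: fit a Pearson type III distribution to log10 of the annual maxima
log_q = np.log10(annual_max)
skew, loc, scale = pearson3.fit(log_q)

return_period = 100.0                                       # years
q_100 = 10 ** pearson3.ppf(1.0 - 1.0 / return_period, skew, loc=loc, scale=scale)
print(f"estimated 100-year flood: {q_100:.0f} m^3/s")
```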
Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-03-01
A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Libenson, B. N., E-mail: libenson-b@yandex.ru
2011-10-15
The probability of single characteristic energy loss of a fast electron in a reflection experiment has been calculated. Unlike many works concerning this subject, the bremsstrahlung of bulk plasmons in the non-Cherenkov ranges of frequencies and wavevectors of a plasmon has been taken into account. The contributions to the probability of single loss and to the shape of the spectral line from a quantum correction that is due to the interference of elastic and inelastic electron scattering events have been determined. The probability has been calculated in the kinetic approximation for the relative permittivity, where the short-wavelength range of the plasmon spectrum is correctly taken into account. In view of these circumstances, the expression for the mean free path of the electron with respect to the emission of a bulk plasmon that was obtained by Pines [D. Pines, Elementary Excitations in Solids (Benjamin, New York, 1963)] has been refined. The coherence length of the fast electron in the medium-energy range under consideration has been estimated. The shape of the spectral line of energy losses in the non-Cherenkov frequency range has been determined. It has been shown that the probability of the single emission of the bulk plasmon does not completely correspond to Poisson statistics.
NASA Astrophysics Data System (ADS)
Tan, Elcin
A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the physically possible upper limits of precipitation due to climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results of 42 historical extreme precipitation events demonstrate that the 72-hr basin averaged probable maximum precipitation is 21.72 inches for the exceedance probability of 0.5 percent. On the other hand, the current operational PMP estimation for the American River Watershed is 28.57 inches as published in the hydrometeorological report no. 59 and a previous PMP value was 31.48 inches as published in the hydrometeorological report no. 36. According to the exceedance probability analyses of this proposed method, the exceedance probabilities of these two estimations correspond to 0.036 percent and 0.011 percent, respectively.
The Effects of Framing, Reflection, Probability, and Payoff on Risk Preference in Choice Tasks.
Kühberger; Schulte-Mecklenbeck; Perner
1999-06-01
A meta-analysis of Asian-disease-like studies is presented to identify the factors which determine risk preference. First the confoundings between probability levels, payoffs, and framing conditions are clarified in a task analysis. Then the role of framing, reflection, probability, type, and size of payoff is evaluated in a meta-analysis. It is shown that bidirectional framing effects exist for gains and for losses. Presenting outcomes as gains tends to induce risk aversion, while presenting outcomes as losses tends to induce risk seeking. Risk preference is also shown to depend on the size of the payoffs, on the probability levels, and on the type of good at stake (money/property vs human lives). In general, higher payoffs lead to increasing risk aversion. Higher probabilities lead to increasing risk aversion for gains and to increasing risk seeking for losses. These findings are confirmed by a subsequent empirical test. Shortcomings of existing formal theories, such as prospect theory, cumulative prospect theory, venture theory, and Markowitz's utility theory, are identified. It is shown that it is not probabilities or payoffs, but the framing condition, which explains most variance. These findings are interpreted as showing that no linear combination of formally relevant predictors is sufficient to capture the essence of the framing phenomenon. Copyright 1999 Academic Press.
A Connection Admission Control Method for Web Server Systems
NASA Astrophysics Data System (ADS)
Satake, Shinsuke; Inai, Hiroshi; Saito, Tomoya; Arai, Tsuyoshi
Most browsers establish multiple connections and download files in parallel to reduce the response time. On the other hand, a web server limits the total number of connections to prevent itself from being overloaded. That could decrease the response time, but would increase the loss probability, i.e., the probability that a newly arriving client is rejected. This paper proposes a connection admission control method which accepts only one connection from a newly arriving client when the number of connections exceeds a threshold, but accepts new multiple connections when the number of connections is less than the threshold. Our method is aimed at reducing the response time by allowing as many clients as possible to establish multiple connections, while also reducing the loss probability. To reduce the time web server administrators spend examining an adequate threshold, we introduce a procedure which approximately calculates the loss probability for a given threshold. Via simulation, we validate the approximation and show the effectiveness of the admission control.
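The paper's own approximation is not reproduced here, but the following sketch computes a loss probability for a threshold policy from a small continuous-time Markov chain, under assumptions of our own: Poisson client arrivals, exponentially distributed connection holding times, m parallel connections granted below the threshold and one above it:

```python
import numpy as np

def loss_probability(lam, mu, capacity, threshold, m=4):
    """Loss probability (probability that an arriving client is rejected) for a
    threshold-based admission policy, via a birth-death-like CTMC sketch.

    States count active connections. A client opens m parallel connections while
    fewer than `threshold` connections are active, otherwise a single connection;
    it is rejected if the needed connections do not fit within `capacity`.
    Each open connection finishes at rate mu.
    """
    n_states = capacity + 1
    Q = np.zeros((n_states, n_states))
    blocked = np.zeros(n_states, dtype=bool)
    for n in range(n_states):
        need = m if n < threshold else 1
        if n + need <= capacity:
            Q[n, n + need] += lam        # admit the arriving client
        else:
            blocked[n] = True            # arrivals in this state are rejected
        if n > 0:
            Q[n, n - 1] += n * mu        # one of the n active connections ends
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Stationary distribution: pi Q = 0 with probabilities summing to 1
    A = np.vstack([Q.T, np.ones(n_states)])
    b = np.zeros(n_states + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return float(pi[blocked].sum())      # PASTA: arrivals see the stationary state

print(loss_probability(lam=3.0, mu=1.0, capacity=20, threshold=12))
```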
Effects of breastfeeding on postpartum weight loss among U.S. women
Jarlenski, Marian P.; Bennett, Wendy L.; Bleich, Sara N.; Barry, Colleen L.; Stuart, Elizabeth A.
2014-01-01
Objective To evaluate the effects of breastfeeding on maternal weight loss in the 12 months postpartum among U.S. women. Methods Using data from a national cohort of U.S. women conducted in 2005-2007 (N=2,102), we employed propensity scores to match women who breastfed exclusively and non-exclusively for at least three months to comparison women who had not breastfed or breastfed for less than three months. Outcomes included postpartum weight loss at 3, 6, 9, and 12 months postpartum; and the probability of returning to pre-pregnancy body mass index (BMI) category and the probability of returning to pre-pregnancy weight. Results Compared to women who did not breastfeed or breastfed non-exclusively, exclusive breastfeeding for at least 3 months resulted in 3.2 pounds (95% CI: 1.4, 4.7) greater weight loss at 12 months postpartum; a 6.0-percentage-point increase (95% CI: 2.3, 9.7) in the probability of returning to the same or lower BMI category postpartum; and a 6.1-percentage-point increase (95% CI: 1.0, 11.3) in the probability of returning to pre-pregnancy weight or lower postpartum. Non-exclusive breastfeeding did not significantly affect any outcomes. Conclusion Our study provides evidence that exclusive breastfeeding for at least three months has a small effect on postpartum weight loss among U.S. women. PMID:25284261
Effects of breastfeeding on postpartum weight loss among U.S. women.
Jarlenski, Marian P; Bennett, Wendy L; Bleich, Sara N; Barry, Colleen L; Stuart, Elizabeth A
2014-12-01
The aim of this study is to evaluate the effects of breastfeeding on maternal weight loss in the 12 months postpartum among U.S. women. Using data from a national cohort of U.S. women conducted in 2005-2007 (N=2,102), we employed propensity scores to match women who breastfed exclusively and non-exclusively for at least three months to comparison women who had not breastfed or breastfed for less than three months. Outcomes included postpartum weight loss at 3, 6, 9, and 12 months postpartum; and the probability of returning to pre-pregnancy body mass index (BMI) category and the probability of returning to pre-pregnancy weight. Compared to women who did not breastfeed or breastfed non-exclusively, exclusive breastfeeding for at least 3 months resulted in 3.2 pounds (95% CI: 1.4, 4.7) greater weight loss at 12 months postpartum; a 6.0-percentage-point increase (95% CI: 2.3, 9.7) in the probability of returning to the same or lower BMI category postpartum; and a 6.1-percentage-point increase (95% CI: 1.0, 11.3) in the probability of returning to pre-pregnancy weight or lower postpartum. Non-exclusive breastfeeding did not significantly affect any outcomes. Our study provides evidence that exclusive breastfeeding for at least three months has a small effect on postpartum weight loss among U.S. women. Copyright © 2014 Elsevier Inc. All rights reserved.
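The matching design used in both versions of this study can be sketched as follows (synthetic data; a logistic-regression propensity score with nearest-neighbour matching is one standard implementation, not necessarily the authors' exact procedure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                                           # stand-in covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X @ [0.5, -0.3, 0.2])))      # 1 = exclusive breastfeeding >= 3 months
weight_loss = 10 + 3.2 * treat + X @ [1.0, -0.5, 0.3] + rng.normal(0, 4, n)

# Propensity score: probability of treatment given covariates
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Match each treated woman to the control woman with the nearest propensity score
treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

att = (weight_loss[treated] - weight_loss[matched_control]).mean()
print(f"matched estimate of effect on 12-month weight loss: {att:.1f} lb")
```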
Reward Expectation Modulates Feedback-Related Negativity and EEG Spectra
Cohen, Michael X; Elger, Christian E.; Ranganath, Charan
2007-01-01
The ability to evaluate outcomes of previous decisions is critical to adaptive decision-making. The feedback-related negativity (FRN) is an event-related potential (ERP) modulation that distinguishes losses from wins, but little is known about the effects of outcome probability on these ERP responses. Further, little is known about the frequency characteristics of feedback processing, for example, event-related oscillations and phase synchronizations. Here, we report an EEG experiment designed to address these issues. Subjects engaged in a probabilistic reinforcement learning task in which we manipulated, across blocks, the probability of winning and losing to each of two possible decision options. Behaviorally, all subjects quickly adapted their decision-making to maximize rewards. ERP analyses revealed that the probability of reward modulated neural responses to wins, but not to losses. This was seen both across blocks as well as within blocks, as learning progressed. Frequency decomposition via complex wavelets revealed that EEG responses to losses, compared to wins, were associated with enhanced power and phase coherence in the theta frequency band. As in the ERP analyses, power and phase coherence values following wins but not losses were modulated by reward probability. Some findings between ERP and frequency analyses diverged, suggesting that these analytic approaches provide complementary insights into neural processing. These findings suggest that the neural mechanisms of feedback processing may differ between wins and losses. PMID:17257860
Ren, Shuai; Cai, Maolin; Shi, Yan; Xu, Weiqing; Zhang, Xiaohua Douglas
2018-03-01
Bronchial diameter is a key parameter that affects the respiratory treatment of mechanically ventilated patients. In this paper, to reveal the influence of bronchial diameter on the airflow dynamics of pressure-controlled mechanically ventilated patients, a new respiratory system model is presented that combines multigeneration airways with lungs. Furthermore, experiments and simulation studies to verify the model are performed. Finally, through the simulation study, it can be determined that in airway generations 2 to 7, when the diameter is reduced to half of the original value, the maximum air pressure (maximum air pressure in lungs) decreases by nearly 16%, the maximum flow decreases by nearly 30%, and the total airway pressure loss (sum of each generation pressure drop) is more than 5 times the original value. Moreover, in airway generations 8 to 16, with increasing diameter, the maximum air pressure, maximum flow, and total airway pressure loss remain almost constant. When the diameter is reduced to half of the original value, the maximum air pressure decreases by 3%, the maximum flow decreases by nearly 5%, and the total airway pressure loss increases by 200%. The study creates a foundation for improvement in respiratory disease diagnosis and treatment. Copyright © 2017 John Wiley & Sons, Ltd.
DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.
2012-01-01
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694
Veda, Supriya; Platel, Kalpana; Srinivasan, Krishnapura
2008-09-24
Four common food acidulants--amchur, lime, tamarind, and kokum--and two antioxidant spices--turmeric and onion--were examined for their influence on the bioaccessibility of beta-carotene from two fleshy and two leafy vegetables. Amchur and lime generally enhanced the bioaccessibility of beta-carotene from these test vegetables in many instances. Such an improved bioaccessibility was evident in both raw and heat-processed vegetables. The effect of lime juice was generally more pronounced than that of amchur. Turmeric significantly enhanced the bioaccessibility of beta-carotene from all of the vegetables tested, especially when heat-processed. Onion enhanced the bioaccessibility of beta-carotene from pressure-cooked carrot and amaranth leaf and from open-pan-boiled pumpkin and fenugreek leaf. Lime juice and the antioxidant spices turmeric and onion minimized the loss of beta-carotene during heat processing of the vegetables. In the case of antioxidant spices, improved bioaccessibility of beta-carotene from heat-processed vegetables is attributable to their role in minimizing the loss of this provitamin. Lime juice, which enhanced the bioaccessibility of this provitamin from both raw and heat-processed vegetables, probably exerted this effect by some other mechanism in addition to minimizing the loss of beta-carotene. Thus, the presence of food acidulants (lime juice/amchur) and antioxidant spices (turmeric/onion) proved to be advantageous in the context of deriving maximum beta-carotene from the vegetable sources.
DeWeber, Jefferson T; Wagner, Tyler
2018-06-01
Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our study demonstrates that even relatively small differences in the definitions of climate metrics can result in very different projections and reveal high uncertainty in predicted climate change effects. © 2018 John Wiley & Sons Ltd.
DeWeber, Jefferson T.; Wagner, Tyler
2018-01-01
Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species’ distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our study demonstrates that even relatively small differences in the definitions of climate metrics can result in very different projections and reveal high uncertainty in predicted climate change effects.
NASA Astrophysics Data System (ADS)
Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping
2015-05-01
It is important to study the effects of pedestrian crossing behaviors on traffic flow in order to address the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that accounts for the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as the vehicle density increases; when the vehicle braking probability is low, emergency braking of vehicles occurs easily and results in large fluctuations of the saturated flow; the saturated flow decreases slightly with increasing pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitation of pedestrians when deciding whether to back up; and the maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as this probability increases and becoming approximately zero when the probability exceeds 0.5. The simulations show that the influence of frequent crossing behavior on vehicle flow is substantial; the vehicle flow decreases and rapidly enters a seriously congested state as the pedestrian generation probability increases.
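The vehicle part of such a model builds on the standard NaSch update rules, which can be sketched as below (the paper's pedestrian generation, backing, and conflict rules are omitted; this is only the base TCA step under assumed parameter values):

```python
import numpy as np

def nasch_step(pos, vel, road_len, v_max, p_brake, rng):
    """One parallel Nagel-Schreckenberg update on a circular road.

    Rules: accelerate by 1 up to v_max, slow down to the gap ahead, brake
    randomly with probability p_brake, then move. Positions are cell indices.
    """
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len      # empty cells to the car ahead
    vel = np.minimum(vel + 1, v_max)                    # 1. acceleration
    vel = np.minimum(vel, gaps)                         # 2. deceleration (no collisions)
    brake = rng.random(vel.size) < p_brake
    vel = np.where(brake, np.maximum(vel - 1, 0), vel)  # 3. random braking
    pos = (pos + vel) % road_len                        # 4. movement
    return pos, vel

rng = np.random.default_rng(0)
road_len, density = 200, 0.15
pos = rng.choice(road_len, size=int(density * road_len), replace=False)
vel = np.zeros(pos.size, dtype=int)
for _ in range(500):
    pos, vel = nasch_step(pos, vel, road_len, v_max=5, p_brake=0.3, rng=rng)
print("mean flow ~", vel.mean() * density)              # vehicles per cell per time step
```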
Flood protection diversification to reduce probabilities of extreme losses.
Zhou, Qian; Lambert, James H; Karvetski, Christopher W; Keisler, Jeffrey M; Linkov, Igor
2012-11-01
Recent catastrophic losses because of floods require developing resilient approaches to flood risk protection. This article assesses how diversification of a system of coastal protections might decrease the probabilities of extreme flood losses. The study compares the performance of portfolios each consisting of four types of flood protection assets in a large region of dike rings. A parametric analysis suggests conditions in which diversifications of the types of included flood protection assets decrease extreme flood losses. Increased return periods of extreme losses are associated with portfolios where the asset types have low correlations of economic risk. The effort highlights the importance of understanding correlations across asset types in planning for large-scale flood protection. It allows explicit integration of climate change scenarios in developing flood mitigation strategy. © 2012 Society for Risk Analysis.
A Deterministic Approach to Active Debris Removal Target Selection
NASA Astrophysics Data System (ADS)
Lidtke, A.; Lewis, H.; Armellin, R.
2014-09-01
Many decisions, with widespread economic, political and legal consequences, are being considered based on space debris simulations that show that Active Debris Removal (ADR) may be necessary as concerns about the sustainability of spaceflight increase. The debris environment predictions are based on low-accuracy ephemerides and propagators. This raises doubts not only about the accuracy of those prognoses themselves but also about the potential ADR target lists that are produced. Target selection is considered highly important, as removal of many objects will increase the overall mission cost. Selecting the most likely candidates as soon as possible would be desirable, as it would enable accurate mission design and allow thorough evaluation of in-orbit validations, which are likely to occur in the near future, before any large investments are made and implementations realized. One of the primary factors that should be used in ADR target selection is the accumulated collision probability of every object. A conjunction detection algorithm, based on the smart sieve method, has been developed. Another algorithm is then applied to the found conjunctions to compute the maximum and true probabilities of collisions taking place. The entire framework has been verified against the Conjunction Analysis Tools in AGI's Systems Toolkit, and a relative probability error smaller than 1.5% has been achieved in the final maximum collision probability. Two target lists are produced based on the ranking of the objects according to the probability that they will take part in any collision over the simulated time window. These probabilities are computed using the maximum probability approach, which is time-invariant, and using estimates of the true collision probability computed with covariance information. The top-priority targets are compared, and the impacts of the data accuracy and its decay are highlighted. General conclusions regarding the importance of Space Surveillance and Tracking for the purpose of ADR are also drawn, and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
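The ranking criterion described above (the accumulated probability that an object takes part in any collision over the simulated window) can be sketched as follows, assuming the per-conjunction probabilities are independent:

```python
import numpy as np

def accumulated_collision_probability(per_conjunction_probs):
    """Probability that an object is involved in at least one collision, given
    its per-conjunction collision probabilities (assumed independent)."""
    p = np.asarray(per_conjunction_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# Hypothetical maximum collision probabilities of one object's conjunctions
print(accumulated_collision_probability([1e-4, 5e-5, 2e-4, 1e-3]))
```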
Seismic Risk Assessment and Loss Estimation for Tbilisi City
NASA Astrophysics Data System (ADS)
Tsereteli, Nino; Alania, Victor; Varazanashvili, Otar; Gugeshashvili, Tengiz; Arabidze, Vakhtang; Arevadze, Nika; Tsereteli, Emili; Gaphrindashvili, Giorgi; Gventcadze, Alexander; Goguadze, Nino; Vephkhvadze, Sophio
2013-04-01
The proper assessment of seismic risk is of crucial importance for the protection of society and for sustainable economic development of the city, as it is an essential part of seismic hazard reduction. Estimating seismic risk and losses is a complicated task. There is always a deficiency of knowledge on the real seismic hazard, local site effects, the inventory of elements at risk, and infrastructure vulnerability, especially in developing countries. Recently, great efforts were made in the frame of the EMME (Earthquake Model for the Middle East Region) project, where work packages WP1, WP2, WP3 and WP4 addressed gaps related to seismic hazard assessment and vulnerability analysis. Finally, in the frame of work package WP5 "City Scenario", additional work in this direction was carried out, including detailed investigation of local site conditions and of the active fault (3D) beneath Tbilisi. For the estimation of economic losses, an algorithm was prepared taking into account the obtained inventory. The long-term usage of buildings is very complex; it relates to the reliability and durability of buildings. The long-term usage and durability of a building are determined by the concept of depreciation. Depreciation of an entire building is calculated by summing the products of the individual construction units' depreciation rates and the corresponding value of these units within the building. This method of calculation is based on the assumption that depreciation is proportional to the building's (construction's) useful life. We used this methodology to create a matrix which provides a way to evaluate the depreciation rates of buildings of different types and construction periods and to determine their corresponding value. Finally, losses were estimated resulting from shaking with 10%, 5% and 2% exceedance probability in 50 years. Losses resulting from a scenario earthquake (an earthquake with the maximum possible magnitude) were also estimated.
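The depreciation rule described above can be written as a one-line weighted sum (a hedged sketch; the actual matrix of rates by building type and construction period is not reproduced, and the unit names and values below are hypothetical):

```python
def building_depreciation(unit_rates, unit_values):
    """Depreciation of an entire building: sum over construction units of the
    unit's depreciation rate times its value, expressed per unit of total value."""
    total_value = sum(unit_values)
    weighted = sum(rate * value for rate, value in zip(unit_rates, unit_values))
    return weighted / total_value

# Hypothetical construction units: foundation, walls, roof, finishes
print(building_depreciation([0.30, 0.45, 0.60, 0.80], [25.0, 40.0, 15.0, 20.0]))
```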
The impact of clustering of extreme European windstorm events on (re)insurance market portfolios
NASA Astrophysics Data System (ADS)
Mitchell-Wallace, Kirsten; Alvarez-Diaz, Teresa
2010-05-01
Traditionally the occurrence of windstorm loss events in Europe has been considered independent. However, a number of significant losses close in space and time indicates that this assumption may need to be revised. Under particular atmospheric conditions, multiple loss-causing cyclones can occur in succession, affecting similar geographic regions and, therefore, insurance markets. A notable example is that of Lothar and Martin in France in December 1999. Although the existence of cyclone families is well known to meteorologists, there has been limited research into the occurrence of serial windstorms. However, climate modelling research is now providing the ability to explore the physical drivers of clustering and to improve understanding of the hazard aspect of catastrophe modelling. While analytics tools, including catastrophe models, may incorporate assumptions regarding the influence of dependency through statistical means, the most recent research outputs provide a new strand of information with the potential to re-assess the probabilistic loss potential in light of clustering and to provide an additional view on probable maximum losses for windstorm-exposed portfolios across regions such as Northwest Europe. There is, however, a need to test these new techniques within operational (re)insurance applications. This paper provides an overview of the most current clustering research, including the 2009 paper by Vitolo et al., in relation to reinsurance risk modelling, and assesses the potential impact of such additional information on the overall risk assessment process. We examine the consequences of the serial clustering of extra-tropical cyclones demonstrated by Vitolo et al. (2009) from the perspective of a large European reinsurer, examining potential implications for pricing, accumulation, and capital adequacy.
Probability of stress-corrosion fracture under random loading
NASA Technical Reports Server (NTRS)
Yang, J. N.
1974-01-01
The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under stationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
Duels where both marksmen ’home’ or ’zero in’ on one another are here considered, and the effect of this on the win probability is determined. It is...leads to win probabilities that can be straightforwardly evaluated. Maximum-likelihood estimation of the hit probability and homing from field data is outlined. The solutions of the duels are displayed as contour maps. (Author)
A mass reconstruction technique for a heavy resonance decaying to τ + τ -
NASA Astrophysics Data System (ADS)
Xia, Li-Gang
2016-11-01
For a resonance decaying to τ + τ -, it is difficult to reconstruct its mass accurately because of the presence of neutrinos in the decay products of the τ leptons. If the resonance is heavy enough, we show that its mass can be well determined by the momentum component of the τ decay products perpendicular to the velocity of the τ lepton, p ⊥, and the mass of the visible/invisible decay products, m vis/inv, for τ decaying to hadrons/leptons. By sampling all kinematically allowed values of p ⊥ and m vis/inv according to their joint probability distributions determined by the MC simulations, the mass of the mother resonance is assumed to lie at the position with the maximal probability. Since p ⊥ and m vis/inv are invariant under the boost in the τ lepton direction, the joint probability distributions are independent upon the τ’s origin. Thus this technique is able to determine the mass of an unknown resonance with no efficiency loss. It is tested using MC simulations of the physics processes pp → Z/h(125)/h(750) + X → ττ + X at 13 TeV. The ratio of the full width at half maximum and the peak value of the reconstructed mass distribution is found to be 20%-40% using the information of missing transverse energy. Supported by General Financial Grant from the China Postdoctoral Science Foundation (2015M581062)
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
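As a concrete illustration of the classic MaxEnt point-probability calculation described above, the sketch below solves the textbook die problem (maximize entropy subject to a fixed mean) through its exponential-family form, and then propagates a Gaussian uncertainty on the constraint value by simple sampling. The mean of 4.5 and the standard deviation are hypothetical, and this is only a schematic stand-in for the generalized MaxEnt density derived in the paper.

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)  # die faces

def maxent_probs(target_mean):
    """Classic MaxEnt solution p_i proportional to exp(lam * x_i) with the mean fixed."""
    f = lambda lam: (np.exp(lam * x) @ x) / np.exp(lam * x).sum() - target_mean
    lam = brentq(f, -10.0, 10.0)
    p = np.exp(lam * x)
    return p / p.sum()

print(maxent_probs(4.5))  # point probabilities for an exactly known constraint

# Uncertain constraint: treat the observed mean as Gaussian and propagate it,
# yielding a spread of MaxEnt distributions rather than a single point estimate.
rng = np.random.default_rng(0)
samples = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.1, 500)])
print(samples.mean(axis=0), samples.std(axis=0))
```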
Modulations of stratospheric ozone by volcanic eruptions
NASA Technical Reports Server (NTRS)
Blanchette, Christian; Mcconnell, John C.
1994-01-01
We have used a time series of aerosol surface area based on the measurements of Hofmann to investigate the modulation of total column ozone caused by the perturbation to gas phase chemistry by the reaction N2O5(gas) + H2O(aero) yields 2HNO3(gas) on the surface of stratospheric aerosols. We have tested a range of values for its reaction probability, gamma = 0.02, 0.13, and 0.26, which we compared to unperturbed homogeneous chemistry. Our analysis spans the period from Jan. 1974 to Oct. 1994. The results suggest that if lower values of gamma are the norm, then we would expect larger ozone losses for highly enhanced aerosol content than for larger values of gamma. The ozone layer is more sensitive to the magnitude of the reaction probability under background conditions than during volcanically active periods. For most conditions, the conversion of NO2 to HNO3 is saturated for reaction probabilities in the range of laboratory measurements, but is only absolutely saturated following major volcanic eruptions, when the heterogeneous loss dominates the losses of N2O5. The ozone loss due to this heterogeneous reaction increases with increasing chlorine load. Total ozone losses calculated are comparable to ozone losses reported from TOMS and Dobson data.
Prospect theory in the health domain: a quantitative assessment.
Attema, Arthur E; Brouwer, Werner B F; l'Haridon, Olivier
2013-12-01
It is well-known that expected utility (EU) has empirical deficiencies. Cumulative prospect theory (CPT) has developed as an alternative with more descriptive validity. However, CPT's full function had not yet been quantified in the health domain. This paper is therefore the first to simultaneously measure utility of life duration, probability weighting, and loss aversion in this domain. We observe loss aversion and risk aversion for gains and losses, which for gains can be explained by probabilistic pessimism. Utility for gains is almost linear. For losses, we find less weighting of probability 1/2 and concave utility. This contrasts with the common finding of convex utility for monetary losses. However, CPT was proposed to explain choices among lotteries involving monetary outcomes. Life years are arguably very different from monetary outcomes and need not generate convex utility for losses. Moreover, utility of life duration reflects discounting, causing concave utility. Copyright © 2013 Elsevier B.V. All rights reserved.
Ladar range image denoising by a nonlocal probability statistics algorithm
NASA Astrophysics Data System (ADS)
Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi
2013-01-01
According to the characteristics of range images from coherent ladar and on the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are identified by block matching and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real 8-gray-scale range image of coherent ladar are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-abnormality noise and Gaussian noise in range images of coherent ladar are effectively suppressed by NLPS.
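The following sketch mimics the core NLPS step on a 2-D image array: block matching gathers the most similar patches in a search window, and the centre pixel is replaced by the gray value of maximum probability (the histogram mode) within that group. Patch, window and group sizes are illustrative, and no claim is made that this reproduces the authors' exact implementation.

```python
import numpy as np

def nlps_denoise(img, patch=3, search=7, n_similar=16, bins=32):
    """Minimal sketch of the nonlocal probability statistics (NLPS) idea."""
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    padded = np.pad(img.astype(float), pad, mode="reflect")
    lo, hi = float(img.min()), float(img.max()) + 1e-9
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - half_p:ci + half_p + 1, cj - half_p:cj + half_p + 1]
            dists, centres = [], []
            # Block matching: compare every patch in the search window with the reference
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    blk = padded[ci + di - half_p:ci + di + half_p + 1,
                                 cj + dj - half_p:cj + dj + half_p + 1]
                    dists.append(np.sum((blk - ref) ** 2))
                    centres.append(padded[ci + di, cj + dj])
            group = np.asarray(centres)[np.argsort(dists)[:n_similar]]
            # Estimate the pixel by the gray value with maximum probability in the group
            hist, edges = np.histogram(group, bins=bins, range=(lo, hi))
            k = int(np.argmax(hist))
            out[i, j] = 0.5 * (edges[k] + edges[k + 1])
    return out
```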
NASA Technical Reports Server (NTRS)
Ungar, Eugene K.
2014-01-01
The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared observation experiments. The experiments carry sensors cooled to liquid helium (LHe) temperatures. A question arose regarding the heat input and peak pressure that would result from a sudden loss of the dewar vacuum insulation. Owing to concerns about the adequacy of dewar pressure relief in the event of a sudden loss of the dewar vacuum insulation, the SOFIA Program engaged the NASA Engineering and Safety Center (NESC). This report summarizes and assesses the experiments that have been performed to measure the heat flux into LHe dewars following a sudden vacuum insulation failure, describes the physical limits of heat input to the dewar, and provides an NESC recommendation for the wall heat flux that should be used to assess the sudden loss of vacuum insulation case. This report also assesses the methodology used by the SOFIA Program to predict the maximum pressure that would occur following a loss of vacuum event.
NASA Astrophysics Data System (ADS)
Reveillere, A. R.; Bertil, D. B.; Douglas, J. D.; Grisanti, L. G.; Lecacheux, S. L.; Monfort, D. M.; Modaressi, H. M.; Müller, H. M.; Rohmer, J. R.; Sedan, O. S.
2012-04-01
In France, risk assessments for natural hazards are usually carried out separately and decision makers lack comprehensive information. Moreover, since the cause of the hazard (e.g. meteorological, geological) and the physical phenomenon that causes damage (e.g. inundation, ground shaking) may be fundamentally different, the quantitative comparison of single risk assessments that were not conducted in a compatible framework is not straightforward. Comprehensive comparative risk assessments exist in a few other countries. For instance, the Risk Map Germany project has developed and applied a methodology for quantitatively comparing the risk of relevant natural hazards at various scales (city, state) in Germany. The present ongoing work applies a similar methodology to the Pointe-à-Pitre urban area, which represents more than half of the population of Guadeloupe, an overseas region in the French West Indies. Relevant hazards as well as hazard intensity levels differ from continental Europe, which will lead to different conclusions. The French West Indies are prone to a large number of hazards, among which hurricanes, volcanic eruptions and earthquakes dominate. Hurricanes cause damage through three phenomena: wind, heavy rainfall and storm surge, the latter having had a preeminent role during the largest historical event in 1928. Seismic risk is characterized by many induced phenomena, among which earthquake shocks dominate. This study proposes a comparison of earthquake and cyclonic storm surge risks. Losses corresponding to hazard intensities having the same probability of occurrence are calculated. They are quantified in a common loss unit, chosen to be the direct economic loss. Intangible or indirect losses are not considered. The methodology therefore relies on (i) a probabilistic hazard assessment, (ii) a loss ratio estimation for the exposed elements and (iii) an economic estimation of these assets. Storm surge hazard assessment is based on the selection of relevant historical cyclones and on the simulation of the associated wave and cyclonic surge. The combined local sea elevations, called "set-up", are then fitted with a statistical distribution in order to obtain their return-period characteristics. Several run-ups are then extracted, the inundation areas are calculated and the relative losses of the affected assets are deduced. The Probabilistic Seismic Hazard Assessment and the exposed elements' location and seismic vulnerability result from past public risk assessment studies. The loss estimations are computed for several return periods, expressed as the percentage of buildings in a given EMS-98 damage state per grid block, which is then converted into a loss ratio. In parallel, an asset estimation is conducted. It is mainly focused on private housing, but it considers some major public infrastructures as well. The final outcome of this work is a direct economic loss-frequency plot for earthquake and storm surge. The Probable Maximum Loss and the Average Annual Loss are derived from this risk curve. In addition, different sources of uncertainty are identified through the loss estimation process. The full propagation of these uncertainties can provide a confidence interval that can be assigned to the risk curve, and we show how such additional information can be useful for risk comparison.
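As a pointer to how such a risk curve is typically exploited, the sketch below interpolates a Probable Maximum Loss for a chosen return period and integrates the exceedance-frequency curve to obtain an Average Annual Loss. The loss-frequency points are purely hypothetical and are not results of the Pointe-à-Pitre study.

```python
import numpy as np

# Hypothetical annual exceedance frequencies (1/yr) and direct economic losses (EUR)
freq = np.array([0.1, 0.02, 0.01, 0.002, 0.001])
loss = np.array([5e6, 4e7, 9e7, 3e8, 5e8])

# Probable Maximum Loss read off the curve at a chosen return period (here 500 years)
pml_500 = np.interp(1.0 / 500, freq[::-1], loss[::-1])

# Average Annual Loss: area under the exceedance-frequency curve versus loss
# (trapezoidal approximation restricted to the tabulated points)
aal = np.trapz(freq, loss)

print(f"PML(500 yr) = {pml_500:,.0f} EUR, AAL = {aal:,.0f} EUR/yr")
```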
Vulnerability of manned spacecraft to crew loss from orbital debris penetration
NASA Technical Reports Server (NTRS)
Williamsen, J. E.
1994-01-01
Orbital debris growth threatens the survival of spacecraft systems from impact-induced failures. Whereas the probability of debris impact and spacecraft penetration may currently be calculated, another parameter of great interest to safety engineers is the probability that debris penetration will cause actual spacecraft or crew loss. Quantifying the likelihood of crew loss following a penetration allows spacecraft designers to identify those design features and crew operational protocols that offer the highest improvement in crew safety for available resources. Within this study, a manned spacecraft crew survivability (MSCSurv) computer model is developed that quantifies the conditional probability of losing one or more crew members, P(sub loss/pen), following the remote likelihood of an orbital debris penetration into an eight module space station. Contributions to P(sub loss/pen) are quantified from three significant penetration-induced hazards: pressure wall rupture (explosive decompression), fragment-induced injury, and 'slow' depressurization. Sensitivity analyses are performed using alternate assumptions for hazard-generating functions, crew vulnerability thresholds, and selected spacecraft design and crew operations parameters. These results are then used to recommend modifications to the spacecraft design and expected crew operations that quantitatively increase crew safety from orbital debris impacts.
NASA Astrophysics Data System (ADS)
Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick
2016-06-01
Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
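A bare-bones version of the moisture-maximization step described above might look like the following: each candidate snowstorm's water equivalent is scaled by the ratio of the (monthly, possibly non-stationary) maximum precipitable water to the storm's own precipitable water, and the PMSA is the largest maximized event. Every numeric value here is invented for illustration.

```python
import numpy as np

def moisture_maximize(event_swe_mm, event_pw_mm, max_pw_mm):
    """Scale a storm's snow water equivalent by the precipitable-water ratio."""
    return event_swe_mm * (max_pw_mm / event_pw_mm)

# Hypothetical candidate snowstorms (snow water equivalent, precipitable water)
events_swe = np.array([35.0, 52.0, 41.0])      # mm
events_pw = np.array([8.0, 11.0, 9.5])         # mm
monthly_max_pw = np.array([14.0, 15.5, 13.0])  # mm, for the month of each event

pmsa = np.max(moisture_maximize(events_swe, events_pw, monthly_max_pw))
print(f"PMSA estimate: {pmsa:.1f} mm")
```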
HIV Patients Drop Out in Indonesia: Associated Factors and Potential Productivity Loss.
Siregar, Adiatma Ym; Pitriyan, Pipit; Wisaksana, Rudi
2016-07-01
This study reports various factors associated with a higher probability of HIV patient drop out, and the potential productivity loss due to HIV patients dropping out. We analyzed data on 658 HIV patients from a database in a main referral hospital in Bandung city, West Java, Indonesia from 2007 to 2013. First, we utilized probit regression analysis and included, among others, the following variables: patients' status (active or dropped out), CD4 cell count, TB and opportunistic infection (OI), work status, sex, history of injecting drugs, and support from family and peers. Second, we used the drop out data from our database and the CD4 cell count decline rate from another study to estimate the productivity loss due to HIV patients dropping out. Lower CD4 cell count was associated with a higher probability of drop out. Support from family/peers, living with family, and being diagnosed with TB were associated with a lower probability of drop out. The productivity loss at the national level due to treatment drop out (consequently, due to CD4 cell count decline) can reach US$365 million (using the average wage). First, as lower CD4 cell count was associated with a higher probability of drop out, we recommend optimizing early ARV initiation at a higher CD4 cell count, involving scaling up HIV services at the community level. Second, family/peer support should be further emphasized to further ensure treatment success. Third, dropping out from ART will result in a relatively large productivity loss.
Coaxial tube array space transmission line characterization
NASA Technical Reports Server (NTRS)
Switzer, Colleen A.; Bents, David J.
1987-01-01
The coaxial tube array tether/transmission line used to connect an SP-100 nuclear power system to the space station was characterized over the range of reactor-to-platform separation distances of 1 to 10 km. Characterization was done with respect to array performance, physical dimensions and masses. Using a fixed design procedure, a family of designs was generated for the same power level (300 kWe), power loss (1.5 percent), and meteoroid survival probability (99.5 percent over 10 yr). To differentiate between vacuum insulated and gas insulated lines, two different maximum values of the E field were considered: 20 kV/cm (appropriate to vacuum insulation) and 50 kV/cm (compressed SF6). Core conductor, tube, bumper, standoff, spacer and bumper support dimensions, and masses were also calculated. The results of the characterization show mainly how transmission line size and mass scale with reactor-to-platform separation distance.
A rapid loss of stripes: the evolutionary history of the extinct quagga
Leonard, Jennifer A; Rohland, Nadin; Glaberman, Scott; Fleischer, Robert C; Caccone, Adalgisa; Hofreiter, Michael
2005-01-01
Twenty years ago, the field of ancient DNA was launched with the publication of two short mitochondrial (mt) DNA sequences from a single quagga (Equus quagga) museum skin, an extinct South African equid (Higuchi et al. 1984 Nature 312, 282–284). This was the first extinct species from which genetic information was retrieved. The DNA sequences of the quagga showed that it was more closely related to zebras than to horses. However, quagga evolutionary history is far from clear. We have isolated DNA from eight quaggas and a plains zebra (subspecies or phenotype Equus burchelli burchelli). We show that the quagga displayed little genetic diversity and very recently diverged from the plains zebra, probably during the penultimate glacial maximum. This emphasizes the importance of Pleistocene climate changes for phylogeographic patterns in African as well as Holarctic fauna. PMID:17148190
Early and late mammalian responses to heavy charged particles
NASA Technical Reports Server (NTRS)
Ainsworth, E. J.
1986-01-01
This overview summarizes murine results on acute lethality responses, inactivation of marrow CFU-S and intestinal microcolonies, testes weight loss, life span shortening, and posterior lens opacification in mice irradiated with heavy charged particles. RBE-LET relationships for these mammalian responses are compared with results from in vitro studies. The trend is that the maximum RBE for in vivo responses tends to be lower and occurs at a lower LET than for inactivation of V79 and T-1 cells in culture. Based on inactivation cross sections, the response of CFU-S in vivo conforms to expectations from earlier studies with prokaryotic systems and mammalian cells in culture. Effects of heavy ions are compared with fission spectrum neutrons, and the results are consistent with the interpretation that RBEs are lower than for fission neutrons at about the same LET, probably due to differences in track structure.
A novel approach for cognitive radio application in 2.4-GHz ISM band
NASA Astrophysics Data System (ADS)
Das, Deepa; Das, Susmita
2017-05-01
This paper addresses the issues of incorporating cognitive radio (CR) technology in the 2.4-GHz industrial, scientific and medical band. The objective of allowing coexisting systems in this unlicensed band is to opportunistically share the underutilised spectrum so as to improve spectrum usage efficiency. Hence, proper evaluation of the spectrum occupancy is one of the important tasks. Therefore, we adopt a double-threshold-based detection scheme to differentiate the sub-bands with respect to their occupancy statistics while satisfying the target miss-detection probability. Further, an adaptive power allocation to the CR user (CRU) is proposed for maximising the system throughput under constraints on the interference to the co-existing systems and on the maximum transmission power. We consider path loss in the channel modelling between the CRUs. Our proposed approach is investigated using real measurement data collected in the Swearingen Engineering Center, University of South Carolina, Columbia, SC, USA.
Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23
NASA Technical Reports Server (NTRS)
Willson, Robert M.; Hathaway, David H.
2004-01-01
On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 months following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (+/-4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (+/-4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, William C.
This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real-life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.
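The averaging idea can be illustrated numerically with a stand-in effective-area function: for each flight heading, project the facility width normal to the flight path, pad it and multiply by a skid length, then average over all headings. The function below is not the DOE-standard formula and all dimensions are hypothetical; it only shows why the radial average is smaller than the single-heading maximum.

```python
import numpy as np

def effective_area(theta, length=30.0, width=20.0, wingspan=35.0, skid=60.0):
    """Illustrative effective impact area (m^2) for heading theta (radians):
    projected facility width plus wingspan, times skid length, plus the roof
    footprint. A stand-in expression, not the standard derivation."""
    projected = length * np.abs(np.sin(theta)) + width * np.abs(np.cos(theta))
    return (projected + wingspan) * skid + length * width

thetas = np.linspace(0.0, 2.0 * np.pi, 3601)
areas = effective_area(thetas)
radially_averaged = np.trapz(areas, thetas) / (2.0 * np.pi)
print(radially_averaged, areas.max(), radially_averaged < areas.max())
```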
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal levels (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal levels (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
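A compact sketch of a Poisson maximum-likelihood fit in the spirit described above (not the authors' code): minimize the negative Poisson log-likelihood of a parametric model instead of χ², which remains valid at very low count levels. The Gaussian-line-plus-background model and every number are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

x = np.linspace(-5, 5, 60)

def model(params, x):
    # Gaussian line on a flat background (illustrative fit function)
    amp, centre, width, bg = params
    return bg + amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

counts = rng.poisson(model([8.0, 0.5, 1.2, 1.0], x))  # simulated low-count data

def neg_log_likelihood(params):
    mu = model(params, x)
    if np.any(mu <= 0):
        return np.inf
    # Poisson log-likelihood, dropping the data-only ln(n!) term
    return np.sum(mu - counts * np.log(mu))

fit = minimize(neg_log_likelihood, x0=[5.0, 0.0, 1.0, 0.5], method="Nelder-Mead")
print(fit.x)
```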
[In the absence of early orthodontic treatment, is there a loss of chance?].
Béry, A
2006-06-01
Chance is the probability that something will happen, and, in this sense, the loss of chance can be defined as the damage resulting from the disappearance of the probability of a favorable outcome (the contrary being the non-realization of the risk). This is autonomous damage that should be differentiated from final damage. Moral damage is a notion very close to the loss of chance, even though it rests on the indemnification of a final damage from an affection or illness. This article deals with these matters: an insufficient amount of information as the cause of final damage or of the loss of chance, the loss of chance being a function of the deficit of information. In this sense, can the failure to begin early, appropriate dento-facial orthopedic treatment be considered a loss of chance for the child?
Popescu, F; Jaslow, C R; Kutteh, W H
2018-04-01
Will the addition of 24-chromosome microarray analysis on miscarriage tissue combined with the standard American Society for Reproductive Medicine (ASRM) evaluation for recurrent miscarriage explain most losses? Over 90% of patients with recurrent pregnancy loss (RPL) will have a probable or definitive cause identified when combining genetic testing on miscarriage tissue with the standard ASRM evaluation for recurrent miscarriage. RPL is estimated to occur in 2-4% of reproductive age couples. A probable cause can be identified in approximately 50% of patients after an ASRM recommended workup including an evaluation for parental chromosomal abnormalities, congenital and acquired uterine anomalies, endocrine imbalances and autoimmune factors including antiphospholipid syndrome. Single-center, prospective cohort study that included 100 patients seen in a private RPL clinic from 2014 to 2017. All 100 women had two or more pregnancy losses, a complete evaluation for RPL as defined by the ASRM, and miscarriage tissue evaluated by 24-chromosome microarray analysis after their second or subsequent miscarriage. Frequencies of abnormal results for evidence-based diagnostic tests considered definite or probable causes of RPL (karyotyping for parental chromosomal abnormalities, and 24-chromosome microarray evaluation for products of conception (POC); pelvic sonohysterography, hysterosalpingogram, or hysteroscopy for uterine anomalies; immunological tests for lupus anticoagulant and anticardiolipin antibodies; and blood tests for thyroid stimulating hormone (TSH), prolactin and hemoglobin A1c) were evaluated. We excluded cases where there was maternal cell contamination of the miscarriage tissue or if the ASRM evaluation was incomplete. A cost analysis for the evaluation of RPL was conducted to determine whether a proposed procedure of 24-chromome microarray evaluation followed by an ASRM RPL workup (for those RPL patients who had a normal 24-chromosome microarray evaluation) was more cost-efficient than conducting ASRM RPL workups on RPL patients followed by 24-chromosome microarray analysis (for those RPL patients who had a normal RPL workup). A definite or probable cause of pregnancy loss was identified in the vast majority (95/100; 95%) of RPL patients when a 24-chromosome pair microarray evaluation of POC testing is combined with the standard ASRM RPL workup evaluation at the time of the second or subsequent loss. The ASRM RPL workup identified an abnormality and a probable explanation for pregnancy loss in only 45/100 or 45% of all patients. A definite abnormality was identified in 67/100 patients or 67% when initial testing was performed using 24-chromosome microarray analyses on the miscarriage tissue. Only 5/100 (5%) patients, who had a euploid loss and a normal ASRM RPL workup, had a pregnancy loss without a probable or definitive cause identified. All other losses were explained by an abnormal 24-chromosome microarray analysis of the miscarriage tissue, an abnormal finding of the RPL workup, or a combination of both. Results from the cost analysis indicated that an initial approach of using a 24-chromosome microarray analysis on miscarriage tissue resulted in a 50% savings in cost to the health care system and to the patient. This is a single-center study on a small group of well-characterized women with RPL. There was an incomplete follow-up on subsequent pregnancy outcomes after evaluation, however this should not affect our principal results. 
The maternal age of patients varied from 26 to 45 years old. More aneuploid pregnancy losses would be expected in older women, particularly over the age of 35 years old. Evaluation of POC using 24-chromosome microarray analysis adds significantly to the ASRM recommended evaluation of RPL. Genetic evaluation on miscarriage tissue obtained at the time of the second and subsequent pregnancy losses should be offered to all couples with two or more consecutive pregnancy losses. The combination of a genetic evaluation on miscarriage tissue with an evidence-based evaluation for RPL will identify a probable or definitive cause in over 90% of miscarriages. No funding was received for this study and there are no conflicts of interest to declare. Not applicable.
Offerman, Theo; Palley, Asa B
2016-01-01
Strictly proper scoring rules are designed to truthfully elicit subjective probabilistic beliefs from risk neutral agents. Previous experimental studies have identified two problems with this method: (i) risk aversion causes agents to bias their reports toward the probability of 1/2, and (ii) for moderate beliefs agents simply report 1/2. Applying a prospect theory model of risk preferences, we show that loss aversion can explain both of these behavioral phenomena. Using the insights of this model, we develop a simple off-the-shelf probability assessment mechanism that encourages loss-averse agents to report true beliefs. In an experiment, we demonstrate the effectiveness of this modification in both eliminating uninformative reports and eliciting true probabilistic beliefs.
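To make the mechanics concrete, here is a tiny sketch of a strictly proper quadratic (Brier-type) scoring rule showing that a risk-neutral agent maximizes expected score by reporting the true belief; it does not implement the authors' modified, loss-aversion-robust mechanism, only the baseline elicitation idea.

```python
import numpy as np

def quadratic_score(report, outcome):
    """Strictly proper quadratic (Brier-type) scoring rule, rescaled to [0, 1]."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, belief):
    return belief * quadratic_score(report, 1) + (1 - belief) * quadratic_score(report, 0)

belief = 0.7
reports = np.linspace(0, 1, 101)
best = reports[np.argmax([expected_score(r, belief) for r in reports])]
print(best)  # approximately 0.7: truthful reporting maximizes expected score
```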
It's all about gains: Risk preferences in problem gambling.
Ring, Patrick; Probst, Catharina C; Neyse, Levent; Wolff, Stephan; Kaernbach, Christian; van Eimeren, Thilo; Camerer, Colin F; Schmidt, Ulrich
2018-06-07
Problem gambling is a serious socioeconomic problem involving high individual and social costs. In this article, we study the risk preferences of problem gamblers, including their risk attitudes in the gain and loss domains, their weighting of probabilities, and their degree of loss aversion. Our findings indicate that problem gamblers are systematically more risk taking and less sensitive toward changes in probabilities in the gain domain only. Neither their risk attitudes in the loss domain nor their degree of loss aversion are significantly different from the controls. Additional evidence for a similar degree of sensitivity toward negative outcomes is gained from skin conductance data, a psychophysiological marker for emotional arousal, in a threat-of-shock task. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The Priority Heuristic: Making Choices Without Trade-Offs
Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph
2010-01-01
Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, we generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (i) Allais' paradox, (ii) risk aversion for gains if probabilities are high, (iii) risk seeking for gains if probabilities are low (lottery tickets), (iv) risk aversion for losses if probabilities are low (buying insurance), (v) risk seeking for losses if probabilities are high, (vi) certainty effect, (vii) possibility effect, and (viii) intransitivities. We test how accurately the heuristic predicts people's choices, compared to previously proposed heuristics and three modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. PMID:16637767
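The following sketch encodes one simple reading of the priority heuristic for two-outcome gambles in the gain domain: reasons are examined in the order minimum gain, probability of the minimum gain, maximum gain, and examination stops once a difference exceeds an aspiration level (one tenth of the maximum gain for outcomes, 0.1 for probabilities). The representation, aspiration levels and example gambles are illustrative, not the authors' code.

```python
def priority_heuristic(a, b):
    """Choose between two gain gambles, each given as
    (minimum outcome, probability of the minimum, maximum outcome)."""
    min_a, pmin_a, max_a = a
    min_b, pmin_b, max_b = b
    outcome_aspiration = 0.1 * max(max_a, max_b)
    if abs(min_a - min_b) >= outcome_aspiration:
        return "A" if min_a > min_b else "B"          # higher minimum gain wins
    if abs(pmin_a - pmin_b) >= 0.1:
        return "A" if pmin_a < pmin_b else "B"        # lower chance of the minimum wins
    return "A" if max_a > max_b else "B"              # otherwise, higher maximum gain wins

# Example: a sure 3000 versus 4000 with probability 0.8 (else 0)
print(priority_heuristic((3000, 1.0, 3000), (0, 0.2, 4000)))  # predicts the sure 3000
```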
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
Noise thresholds for optical quantum computers.
Dawson, Christopher M; Haselgrove, Henry L; Nielsen, Michael A
2006-01-20
In this Letter we numerically investigate the fault-tolerant threshold for optical cluster-state quantum computing. We allow both photon loss noise and depolarizing noise (as a general proxy for all local noise), and obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible for photon loss probabilities < 3 x 10^-3, and for depolarization probabilities < 10^-4.
NASA Astrophysics Data System (ADS)
Afzali, Arezoo; Mottaghitalab, Vahid; Seyyed Afghahi, Seyyed Salman; Jafarian, Mojtaba; Atassi, Yomen
2017-11-01
The current investigation focuses on the electromagnetic properties of nonwoven fabric coated with a BaFe12O19 (BHF)/MWCNTs/PANi nanocomposite in the X and Ku bands. The BHF/MWCNTs and BHF/MWCNTs/PANi nanocomposites are prepared using sol-gel and in-situ polymerization methods, respectively. The absorbent fabric was prepared by applying 40 wt% of the BHF/MWCNTs/PANi nanocomposite in silicone resin onto nonwoven fabric via a roller coating technique. X-ray diffraction (XRD), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM) and vector network analysis (VNA) are used to investigate the microstructural, magnetic and electromagnetic features of the composite and the absorber fabric, respectively. The microscopic images of the fabric coated with the magnetic nanocomposite show a homogeneous layer of nanoparticles on the fabric surface. The maximum reflection loss of the binary nanocomposite BHF/MWCNTs was measured to be about -28.50 dB at 11.72 GHz with a 1.7 GHz bandwidth (RL < -10 dB) in the X band. Moreover, in the Ku band, the maximum reflection loss is -29.66 dB at 15.78 GHz with a 3.2 GHz bandwidth. The ternary nanocomposite BHF/MWCNTs/PANi exhibits broadband absorption over a wide range of the X band, with a maximum reflection loss of -36.2 dB at 10.2 GHz with a 1.5 GHz bandwidth, and in the Ku band it reaches a maximum reflection loss of -37.65 dB at 12.84 GHz with a 2.43 GHz bandwidth. This result reflects the synergistic effect of the different components with different loss mechanisms. Due to the presence of PANi in the structure of the nanocomposite, the amount of absorption increases considerably. The absorber fabric exhibits a maximum reflection loss of -24.2 dB at 11.6 GHz with a 4 GHz bandwidth in the X band, while in the Ku band the absorber fabric has its maximum absorption at 16.88 GHz, at about -24.34 dB with a 6 GHz bandwidth. The results therefore indicate that the coated fabric samples provide an appreciable maximum absorption of more than 99% in the X and Ku bands, which can be attributed to the presence of the carbon and polyaniline structures in the composite material.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
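A one-line numeric illustration of the high-SNR approximation discussed above, using a hypothetical code length, minimum distance and block error rate:

```python
def bit_error_estimate(block_error_prob, d_min, n):
    """High-SNR approximation for systematic encoding: P_b ~ (d_H / N) * P_s."""
    return (d_min / n) * block_error_prob

# Hypothetical (N, K) = (128, 64) code with d_H = 22 and a block error rate of 1e-4
print(bit_error_estimate(1e-4, 22, 128))
```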
Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis
NASA Astrophysics Data System (ADS)
George, E., Chan, K.
2012-09-01
Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions: - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)? - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1x10^-6. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for pairs of large objects may be inadequate. These relationships may also form the basis of an important metric for catalog maintenance by defining the maximum allowable covariance size for effective conjunction analysis. The application of these techniques promises to greatly improve the efficiency and completeness of conjunction analysis.
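The kind of relationship described above can be explored numerically with a simplified, isotropic version of the encounter-plane integral: collision probability is the 2-D Gaussian probability mass falling inside the combined hard-body disc, and scanning the miss distance gives the maximum possible Pc for a given covariance size. The circular covariance, object sizes and grid resolution below are illustrative simplifications of the general covariance-based formulation.

```python
import numpy as np

def collision_probability(miss, sigma, radius, n=200):
    """Integrate an isotropic 2-D Gaussian (std dev sigma, centred at the
    miss distance) over a disc of the combined hard-body radius."""
    xs = np.linspace(-radius, radius, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= radius**2
    pdf = np.exp(-((X - miss) ** 2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    cell = (xs[1] - xs[0]) ** 2
    return float(np.sum(pdf[inside]) * cell)

sigma, radius = 200.0, 10.0  # metres, illustrative
misses = np.linspace(0.0, 5 * sigma, 100)
pcs = [collision_probability(m, sigma, radius) for m in misses]
print(max(pcs), misses[int(np.argmax(pcs))])  # maximum possible Pc over miss distance
```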
Simpson, David J.; Robinson, Simon P.
1984-01-01
Leaves from spinach (Spinacia oleracea L. cv Hybrid 102) plants grown in Mn-deficient nutrient solution were characterized by chlorosis, lowered chlorophyll a/b ratio and reduced electron transport. There were characteristic changes in room temperature fluorescence induction kinetics with increased initial yield (Fo) and decreased variable fluorescence (Fv). The fluorescence yield after the maximum fell rapidly to a level below Fo. The shape of the rise from Fo to the maximum was altered and the size of photosystem II units increased, as measured by half-rise time of Fv in the presence of 3-(3,4-dichlorophenyl)-1,1-dimethylurea. The Mn-deficient leaves were harvested before necrosis, when thin section electron microscopy revealed no disorganization of the thylakoid system. Thylakoid membranes were examined by freeze-fracture electron microscopy. The effect of Mn-deficiency was the specific loss of three-quarters of the particles from the endoplasmic fracture face of appressed thylakoids (EFs). Mn-deficient leaves were restored to near normal 2 days after application of exogenous Mn to the nutrient solution. It is concluded that the loss of most, but not all, functional photosystem II reaction centers from grana, with no alteration in light-harvesting complex or photosystem I, is responsible for the fluorescence and functional properties observed. The response of thylakoids to Mn deficiency shows that there is a fundamental difference in composition and function of stacked and unstacked endoplasmic fracture particles. The stacked endoplasmic fracture particle probably contains, in close association, the photosystem II reaction center and also the Mn-containing polypeptide, the 3-(3,4-dichlorophenyl)-1,1-dimethylurea-binding protein, and all electron transport components in between. PMID:16663491
Lim, Grace; Horowitz, Jeanne M; Berggruen, Senta; Ernst, Linda M; Linn, Rebecca L; Hewlett, Bradley; Kim, Jennifer; Chalifoux, Laurie A; McCarthy, Robert J
2016-11-01
To evaluate the hypothesis that assigning grades to magnetic resonance imaging (MRI) findings of suspected placenta accreta will correlate with hemorrhagic outcomes. We chose a single-center, retrospective, observational design. Nulliparous or multiparous women who had antenatal placental MRI performed at a tertiary level academic hospital were included. Cases with antenatal placental MRI were included and compared with cases without MRI performed. Two radiologists assigned a probability score for accreta to each study. Estimated blood loss and transfusion requirements were compared among groups by the Kruskal-Wallis H test. Thirty-five cases had placental MRI performed. MRI performance was associated with higher blood loss compared with the non-MRI group (2600 [1400-4500]mL vs 900[600-1500]mL, P<.001). There was no difference in estimated blood loss (P=.31) or transfusion (P=.57) among the MRI probability groups. In cases of suspected placenta accreta, probability scores for antenatal placental MRI may not be associated with increasing degrees of hemorrhage. Continued research is warranted to determine the effectiveness of assigning probability scores for antenatal accreta imaging studies, combined with clinical indices of suspicion, in assisting with antenatal multidisciplinary team planning for operative management of this morbid condition. Copyright © 2016 Elsevier Inc. All rights reserved.
Rendon, Samuel H.; Ashworth, Chad E.; Smith, S. Jerrod
2012-01-01
Dams provide beneficial functions such as flood control, recreation, and reliable water supplies, but they also entail risk: dam breaches and resultant floods can cause substantial property damage and loss of life. The State of Oklahoma requires each owner of a high-hazard dam, which the Federal Emergency Management Agency defines as dams for which failure or misoperation probably will cause loss of human life, to develop an emergency action plan specific to that dam. Components of an emergency action plan are to simulate a flood resulting from a possible dam breach and map the resulting downstream flood-inundation areas. The resulting flood-inundation maps can provide valuable information to city officials, emergency managers, and local residents for planning the emergency response if a dam breach occurs. Accurate topographic data are vital for developing flood-inundation maps. This report presents results of a cooperative study by the city of Lawton, Oklahoma, and the U.S. Geological Survey (USGS) to model dam-breach scenarios at Lakes Ellsworth and Lawtonka near Lawton and to map the potential flood-inundation areas of such dam breaches. To assist the city of Lawton with completion of the emergency action plans for Lakes Ellsworth and Lawtonka Dams, the USGS collected light detection and ranging (lidar) data that were used to develop a high-resolution digital elevation model and a 1-foot contour elevation map for the flood plains downstream from Lakes Ellsworth and Lawtonka. This digital elevation model and field measurements, streamflow-gaging station data (USGS streamflow-gaging station 07311000, East Cache Creek near Walters, Okla.), and hydraulic values were used as inputs for the dynamic (unsteady-flow) model, Hydrologic Engineering Center's River Analysis System (HEC-RAS). The modeled flood elevations were exported to a geographic information system to produce flood-inundation maps. Water-surface profiles were developed for a 75-percent probable maximum flood scenario and a sunny-day dam-breach scenario, as well as for maximum flood-inundation elevations and flood-wave arrival times for selected bridge crossings. Some areas of concern near the city of Lawton, if a dam breach occurs at Lakes Ellsworth or Lawtonka, include water treatment plants, wastewater treatment plants, recreational areas, and community-services offices.
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower apriori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
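To illustrate the contrast drawn above between second-order and higher-order dimension reduction, the sketch below applies PCA and FastICA (scikit-learn) to a stand-in, non-Gaussian sample cloud; it is not the orbit-determination particle filter itself, and the sample geometry is invented purely for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)

# Stand-in for particle-filter samples of a 6-D state: a banana-shaped,
# strongly non-Gaussian cloud in the first two coordinates.
n = 5000
theta = rng.normal(0.0, 0.3, n)
r = 7000.0 + rng.normal(0.0, 5.0, n)
samples = np.column_stack([
    r * np.cos(theta), r * np.sin(theta),
    rng.normal(size=(n, 4)) @ np.diag([1.0, 0.5, 0.2, 0.1]),
])

pca = PCA(n_components=3).fit(samples)
ica = FastICA(n_components=3, random_state=0).fit(samples)

# PCA keeps directions of maximum variance (second-order statistics only);
# ICA seeks statistically independent, non-Gaussian directions, which is why
# it can retain higher-order structure of the particle cloud.
print(pca.explained_variance_ratio_)
print(ica.components_.shape)
```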
NASA Astrophysics Data System (ADS)
Okolelova, Ella; Shibaeva, Marina; Shalnev, Oleg
2018-03-01
The article analyses risks in high-rise construction in terms of investment value, taking into account the maximum probable loss in case of a risk event. The authors scrutinized the risks of high-rise construction in regions with various geographic, climatic and socio-economic conditions that may influence the project environment. Risk classification is presented in general terms and includes aggregated characteristics of risks common to many regions. Cluster analysis tools, which allow considering generalized groups of risks depending on their qualitative and quantitative features, were used in order to model the influence of the risk factors on the implementation of the investment project. For convenience of further calculations, each type of risk is assigned a separate code with the number of the cluster and the subtype of risk. This approach and the coding of risk factors make it possible to build a risk matrix, which greatly facilitates the task of determining the degree of impact of the risks. The authors clarified and expanded the concept of price risk, which is defined as the expected value of the event; this extends the capabilities of the model and allows estimating an interval for the probability of occurrence as well as using other probabilistic methods of calculation.
Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments
Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.
2004-01-01
The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimations at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
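For orientation, a minimal method-of-moments LP3 quantile calculation in log space with a Bulletin 17B-style weighting of station and regional skew is sketched below; the flows, the regional skew and its weight are invented, and the sketch includes neither EMA, historical information nor any low-outlier treatment.

```python
import numpy as np
from scipy.stats import pearson3

# Hypothetical annual peak flows (m^3/s); fit a log-Pearson type 3 distribution
flows = np.array([120., 95., 310., 150., 80., 430., 210., 175., 60., 260.,
                  140., 390., 115., 505., 90., 230., 185., 75., 330., 160.])
logq = np.log10(flows)

mean, std = logq.mean(), logq.std(ddof=1)
n = len(logq)
station_skew = (n / ((n - 1) * (n - 2))) * np.sum(((logq - mean) / std) ** 3)

# Blend station skew with a regional skew; the regional value and the weight
# here are purely illustrative (Bulletin 17B weights by mean square error).
regional_skew, w = 0.1, 0.5
skew = w * station_skew + (1 - w) * regional_skew

q100 = 10 ** pearson3.ppf(0.99, skew, loc=mean, scale=std)  # 100-year flood quantile
print(q100)
```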
Beam, Craig A; MacCallum, Colleen; Herold, Kevan C; Wherrett, Diane K; Palmer, Jerry; Ludvigsson, Johnny
2017-01-01
GAD is a major target of the autoimmune response that occurs in type 1 diabetes mellitus. Randomised controlled clinical trials of a GAD + alum vaccine in human participants have so far given conflicting results. In this study, we sought to see whether a clearer answer to the question of whether GAD65 has an effect on C-peptide could be reached by combining individual-level data from the randomised controlled trials using Bayesian meta-analysis to estimate the probability of a positive biological effect (a reduction in C-peptide loss compared with placebo approximately 1 year after the GAD vaccine). We estimate that there is a 98% probability that 20 μg GAD with alum administered twice yields a positive biological effect. The effect is probably a 15-20% reduction in the loss of C-peptide at approximately 1 year after treatment. This translates to an annual expected loss of between -0.250 and -0.235 pmol/ml in treated patients compared with an expected 2 h AUC loss of -0.294 pmol/ml at 1 year for untreated newly diagnosed patients. The biological effect of this vaccination should be developed further in order to reach clinically desirable reductions in insulin loss in patients recently diagnosed with type 1 diabetes.
A study into the loss of lock of the space telescope fine guidance sensor
NASA Technical Reports Server (NTRS)
Polites, M. E.
1983-01-01
The results of a study into the loss of lock phenomenon associated with the Space Telescope Fine Guidance Sensor (FGS) are documented. The primary cause of loss of lock has been found to be a combination of cosmic ray spikes and photon noise due to a 14.5 Mv star. The probability of maintaining lock versus time is estimated both for the baseline FGS design and with parameter changes in the FGS firmware which will improve the probability of maintaining lock. The parameters varied are changeable in-flight from the ground and hence do not impact the design of the FGS hardware.
NASA Technical Reports Server (NTRS)
Wood, B. J.; Ablow, C. M.; Wise, H.
1973-01-01
For a number of candidate materials of construction for the dual air density explorer satellites the rate of oxygen atom loss by adsorption, surface reaction, and recombination was determined as a function of surface and temperature. Plain aluminum and anodized aluminum surfaces exhibit a collisional atom loss probability alpha .01 in the temperature range 140 - 360 K, and an initial sticking probability. For SiO coated aluminum in the same temperature range, alpha .001 and So .001. Atom-loss on gold is relatively rapid alpha .01. The So for gold varies between 0.25 and unity in the temperature range 360 - 140 K.
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Rhodes, Jean; Chan, Christian; Paxson, Christina; Rouse, Cecilia Elena; Waters, Mary; Fussell, Elizabeth
2010-04-01
The purpose of this study was to document changes in mental and physical health among 392 low-income parents exposed to Hurricane Katrina and to explore how hurricane-related stressors and loss relate to post-Katrina well-being. The prevalence of probable serious mental illness doubled, and nearly half of the respondents exhibited probable posttraumatic stress disorder. Higher levels of hurricane-related loss and stressors were generally associated with worse health outcomes, controlling for baseline sociodemographic and health measures. Higher baseline resources predicted fewer hurricane-associated stressors, but the consequences of stressors and loss were similar regardless of baseline resources. Adverse health consequences of Hurricane Katrina persisted for a year or more and were most severe for those experiencing the most stressors and loss. Long-term health and mental health services are needed for low-income disaster survivors, especially those who experience disaster-related stressors and loss.
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task, labeling each voxel with the class of maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into the likelihood probability and an MRF into the prior probability. Because MAP estimation is NP-hard, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
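To make the MAP formulation above concrete, the following minimal sketch labels voxels of a toy 2D slice by minimizing a unary-plus-pairwise energy. It deliberately simplifies the paper's method: a Gaussian intensity likelihood stands in for the sparse-representation likelihood, a Potts penalty for the MRF prior, and iterated conditional modes (ICM) for graph cuts; all names and numbers are illustrative.

```python
# Minimal sketch: MAP voxel labeling with a Potts MRF prior, solved by ICM.
# The paper combines a sparse-representation likelihood with graph cuts; here
# a Gaussian likelihood and ICM stand in as simple, runnable placeholders.
import numpy as np

def unary_neg_log_likelihood(img, means, sigmas):
    """-log p(intensity | class) for each voxel and class (Gaussian model)."""
    img = img[..., None]
    return 0.5 * ((img - means) / sigmas) ** 2 + np.log(sigmas)

def icm_map_labeling(img, means, sigmas, beta=1.0, n_iter=5):
    """Approximate MAP labels for a 2D image slice under a Potts prior."""
    unary = unary_neg_log_likelihood(img, np.asarray(means), np.asarray(sigmas))
    labels = unary.argmin(axis=-1)                # initialize with ML labels
    K = unary.shape[-1]
    for _ in range(n_iter):
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                # Potts penalty: beta for every neighbor with a different label
                nbrs = [labels[x, y] for x, y in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < img.shape[0] and 0 <= y < img.shape[1]]
                energy = [unary[i, j, k] + beta * sum(k != n for n in nbrs)
                          for k in range(K)]
                labels[i, j] = int(np.argmin(energy))
    return labels

# Toy example: two "tissue" classes in a noisy 32x32 slice.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32), dtype=int)
truth[8:24, 8:24] = 1
img = rng.normal(loc=np.where(truth == 1, 2.0, 0.0), scale=0.8)
seg = icm_map_labeling(img, means=[0.0, 2.0], sigmas=[0.8, 0.8], beta=1.5)
print("voxel agreement with truth:", (seg == truth).mean())
```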
Yield loss assessment due to Alternaria blight and its management in linseed.
Singh, R B; Singh, H K; Parmar, Arpita
2014-04-01
Field experiments were conducted during 2010-11 and 2011-12 to assess the yield losses due to Alternaria blight disease caused by Alternaria lini and A. linicola in recently released cultivars and their management through the integration of Trichoderma viride, fungicides and plant extract. Disease severity on leaves varied from 41.07% (Parvati) to 65.01% (Chambal), while bud damage ranged from 23.56% (Shekhar) to 46.12% (T-397) across cultivars. The maximum yield loss of 58.44% was recorded in cultivar Neelum, followed by Parvati (55.56%), Meera (55.56%) and Chambal (51.72%), while the minimum loss was recorded in Kiran (19.99%) and Jeevan (22.22%). The minimum mean disease severity (19.47%) with maximum disease control (69.74%) was recorded with seed treatment (ST) with vitavax power (2 g kg(-1) seed) + 2 foliar sprays (FS) of Saaf (a mixture of carbendazim + mancozeb) at 0.2%, followed by ST with Trichoderma viride (4 g kg(-1) seed) + 2 FS of Saaf (0.2%). Minimum bud damage (13.75%) with maximum control (60.94%) was recorded with ST with vitavax power + 2 FS of propiconazole (0.2%). The maximum mean seed yield (1440 kg ha(-1)), with maximum net return (Rs. 15352/ha) and benefit-cost ratio (1:11.04), was obtained with ST with vitavax power + 2 FS of neem leaf extract, followed by ST with vitavax power + 2 FS of Saaf (1378 kg ha(-1)).
Treur, Maarten; Heeg, Bart; Möller, Hans-Jürgen; Schmeding, Annette; van Hout, Ben
2009-02-18
As schizophrenia patients are typically suspicious of, or hostile to, changes, they may be reluctant to accept generic substitution, possibly affecting compliance. This may counteract drug cost savings through reduced symptom control and increased hospitalization risk. Although compliance losses following generic substitution have not been quantified so far, the possible health-economic consequences can be estimated. The current study aims to do so by considering the case of risperidone in Germany. An existing DES model was adapted to compare staying on branded risperidone with generic substitution. Differences include the probability of non-compliance and medication costs. The incremental probability of non-compliance after generic substitution was varied between 2.5% and 10%, while generic medication costs were assumed to be 40% lower. The effect of medication price was assessed, as well as the effect of applying compliance losses to all treatment settings. The probability of staying on branded risperidone being cost-effective was calculated for various outcomes of a hypothetical study that would investigate non-compliance following generic substitution of risperidone. If the incremental probability of non-compliance after generic substitution is 2.5%, 5.0%, 7.5% and 10%, respectively, the incremental effects of staying on branded risperidone are 0.004, 0.007, 0.011 and 0.015 Quality Adjusted Life Years (QALYs), and the incremental costs are €757, €343, -€123 and -€554, respectively. Benefits of staying on branded risperidone include improved symptom control and fewer hospitalizations. If generic substitution results in a 5.2% higher probability of non-compliance, the model predicts staying on branded risperidone to be cost-effective (NICE threshold of 30,000 per QALY gained). Compliance losses of more than 6.9% make branded risperidone the dominant alternative. Results are sensitive to the treatment settings in which compliance loss is applied and to the price of generic risperidone. The probability that staying on branded risperidone is cost-effective would increase with larger compliance differences and more patients included in the hypothetical study. The model predicts that it is cost-effective to keep a patient with schizophrenia in Germany on branded risperidone instead of switching him/her to generic risperidone (assuming a 40% reduction in medication costs) if the incremental probability of becoming non-compliant after generic substitution exceeds 5.2%.
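As a rough illustration of the cost-effectiveness logic above, the sketch below computes incremental cost-effectiveness ratios (ICERs) from the incremental costs and QALYs quoted in the abstract and compares them with the 30,000-per-QALY threshold cited there; the scenario mapping is an assumption for illustration, not the DES model itself.

```python
# Minimal sketch: incremental cost-effectiveness ratios (ICERs) from the
# incremental costs and QALYs reported in the abstract, compared against a
# willingness-to-pay threshold of 30,000 per QALY (as quoted above).
scenarios = {          # extra non-compliance after substitution -> (dCost, dQALY)
    0.025: (757.0, 0.004),
    0.050: (343.0, 0.007),
    0.075: (-123.0, 0.011),
    0.100: (-554.0, 0.015),
}
THRESHOLD = 30_000.0   # per QALY gained

for p, (d_cost, d_qaly) in scenarios.items():
    if d_cost <= 0 and d_qaly > 0:
        verdict = "dominant (cheaper and more effective)"
    else:
        icer = d_cost / d_qaly
        verdict = f"ICER = {icer:,.0f}/QALY -> " + (
            "cost-effective" if icer <= THRESHOLD else "not cost-effective")
    print(f"non-compliance +{p:.1%}: {verdict}")
```

Consistent with the abstract, branded risperidone becomes cost-effective between the 5.0% and 7.5% scenarios and dominant above roughly 7%.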
40 CFR 258.14 - Seismic impact zones.
Code of Federal Regulations, 2012 CFR
2012-07-01
... horizontal acceleration in lithified earth material for the site. The owner or operator must place the... greater probability that the maximum horizontal acceleration in lithified earth material, expressed as a percentage of the earth's gravitational pull (g), will exceed 0.10g in 250 years. (2) Maximum horizontal...
NASA Astrophysics Data System (ADS)
Pérez-Sánchez, Julio; Senent-Aparicio, Javier
2017-08-01
Dry spells are an essential concept of drought climatology that clearly defines the semiarid Mediterranean environment, and their consequences are a defining feature for an ecosystem so vulnerable with regard to water. The present study was conducted to characterize rainfall drought in the Segura River basin in eastern Spain, an area marked by the strong seasonality of these latitudes. A daily precipitation data set was utilized for 29 weather stations over a period of 20 years (1993-2013). Furthermore, four sets of dry spell lengths (complete series, monthly maximum, seasonal maximum, and annual maximum) were used and fitted for all the weather stations with the following probability distribution functions: Burr, Dagum, error, generalized extreme value, generalized logistic, generalized Pareto, Gumbel Max, inverse Gaussian, Johnson SB, Log-Logistic, Log-Pearson 3, Triangular, Weibull, and Wakeby. Only the series of annual maximum spells offered a good fit for all the weather stations, with the Wakeby distribution giving the best result (mean Kolmogorov-Smirnov p value of 0.9424 at the 0.2 significance level). Maps of dry spell duration for return periods of 2, 5, 10, and 25 years reveal a northeast-southeast gradient, with increasingly long periods of rainfall below 0.1 mm in the eastern third of the basin, near the Mediterranean slope.
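For readers wanting to reproduce the return-level logic, the sketch below fits an extreme-value distribution to annual-maximum dry-spell lengths and derives return levels. The study's best-fitting Wakeby distribution is not available in scipy, so the generalized extreme value distribution (also among the candidates tested) is used instead, on synthetic data.

```python
# Minimal sketch: fit a GEV to annual-maximum dry-spell lengths and derive
# return levels. The study's best fit was Wakeby (not in scipy), so the GEV,
# also among the candidate distributions, is used here on synthetic data.
from scipy import stats

# 20 synthetic "annual maximum dry spell" values (days), one per year.
annual_max_spell = stats.genextreme.rvs(c=-0.1, loc=35, scale=8,
                                        size=20, random_state=1)

shape, loc, scale = stats.genextreme.fit(annual_max_spell)

# Goodness of fit (Kolmogorov-Smirnov, as in the study).
ks = stats.kstest(annual_max_spell, "genextreme", args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}, p value = {ks.pvalue:.3f}")

# Dry-spell length expected to be exceeded once every T years on average.
for T in (2, 5, 10, 25):
    level = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc, scale)
    print(f"{T:>2}-year return level: {level:.1f} days")
```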
[Signal loss in magnetic resonance imaging caused by intraoral anchored dental magnetic materials].
Blankenstein, F H; Truong, B; Thomas, A; Schröder, R J; Naumann, M
2006-08-01
To measure the maximum extent of the signal loss areas in the center of the susceptibility artifacts generated by ferromagnetic dental magnet attachments using three different sequences in 1.5 and 3.0 Tesla MRI. Five different pieces of standard dental magnet attachments with volumes of 6.5 to 31.4 mm(3) were used: a NdFeB magnet with an open magnetic field, a NdFeB magnet with a closed magnetic field, a SmCo magnet with an open magnetic field, a stainless steel keeper (AUM-20) and a PdCo piece. The attachments were placed between two cylindrical phantoms and examined in 1.5 and 3.0 Tesla MRI using gradient echo and T1- and T2-weighted spin echo sequences. We measured the maximum extent of the generated signal loss areas parallel and perpendicular to the direction of B0. In gradient echo images the artifacts were substantially larger and symmetrically distributed around the object. The areas of total signal loss were mushroom-like, with a maximum extent of 7.4 to 9.7 cm parallel to B0 and 6.7 to 7.4 cm perpendicular to B0. In spin echo images the signal loss areas were markedly smaller, but not centered. The maximum values ranged between 4.9 and 7.2 cm (parallel to B0) and 3.6 and 7.0 cm (perpendicular to B0). The different ferromagnetic attachments had no clinically relevant influence on the signal loss in either 1.5 T or 3.0 T MRI. Ferromagnetic materials used in dentistry are not intraorally standardized. To ensure that the area of interest is not affected by the described artifacts, the following maximum extents of the signal loss area should be assumed: a radius of up to 7 cm in 1.5 and 3.0 T MRI for T1 and T2 sequences, and a radius of up to 10 cm for T2* sequences. To decide whether magnet attachments have to be removed before MR imaging, physicians should consider both the intact retention of the keepers and the safety distance between the ferromagnetic objects and the area of interest.
Koottathape, Natthavoot; Takahashi, Hidekazu; Finger, Werner J; Kanehira, Masafumi; Iwasaki, Naohiko; Aoyagi, Yujin
2012-06-01
Although attritive and abrasive wear of recent composite resins has been substantially reduced, in vitro wear testing with reasonably realistic simulation devices and quantitative determination of the resulting wear is still needed. Three-dimensional scanning methods are frequently used for this purpose. The aim of this trial was to compare the maximum depth of wear and volume loss of composite samples evaluated with a contact profilometer and with a non-contact CCD camera imaging system. Twenty-three random composite specimens with wear traces produced in a ball-on-disc sliding device, using poppy seed slurry and PMMA suspension as third-body media, were evaluated with the contact profilometer (TalyScan 150, Taylor Hobson LTD, Leicester, UK) and with the digital CCD microscope (VHX1000, KEYENCE, Osaka, Japan). The target parameters were maximum depth of wear and volume loss. Results: the measurement time needed per specimen with the non-contact CCD method was almost three hours less than that with the contact method. Both the maximum depth of wear and the volume loss data recorded with the two methods were linearly correlated (r(2) > 0.97; p < 0.01). The contact scanning method and the non-contact CCD method are equally suitable for determining the maximum depth of wear and volume loss of abraded composite resins.
Al-Samman, A. M.; Rahman, T. A.; Azmi, M. H.; Hindia, M. N.; Khan, I.; Hanafi, E.
2016-01-01
This paper presents an experimental characterization of millimeter-wave (mm-wave) channels in the 6.5 GHz, 10.5 GHz, 15 GHz, 19 GHz, 28 GHz and 38 GHz frequency bands in an indoor corridor environment. More than 4,000 power delay profiles were measured across the bands using an omnidirectional transmitter antenna and a highly directional horn receiver antenna for both co- and cross-polarized antenna configurations. This paper develops a new path-loss model that accounts for frequency attenuation with distance, termed the frequency attenuation (FA) path-loss model, by introducing a frequency-dependent attenuation factor. The large-scale path loss was characterized based on both new and well-known path-loss models. A general and less complex method is also proposed to estimate the cross-polarization discrimination (XPD) factor of the close-in reference distance with XPD (CIX) and ABG with XPD (ABGX) path-loss models, avoiding the computational complexity of the minimum mean square error (MMSE) approach. Moreover, small-scale parameters such as root mean square (RMS) delay spread, mean excess (MN-EX) delay, dispersion factors and maximum excess (MAX-EX) delay parameters were used to characterize the multipath channel dispersion. Multiple statistical distributions for RMS delay spread were also investigated. The results show that our proposed models are simpler and more physically-based than other well-known models. The path-loss exponents for all studied models are smaller than that of the free-space model by values in the range of 0.1 to 1.4 for all measured frequencies. The RMS delay spread values varied between 0.2 ns and 13.8 ns, and the dispersion factor values were less than 1 for all measured frequencies. The exponential and Weibull probability distribution models best fit the RMS delay spread empirical distribution for all of the measured frequencies in all scenarios. PMID:27654703
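As background for the path-loss modelling above, the following sketch fits the standard close-in (CI) reference-distance model, one of the well-known baselines mentioned in the abstract (not the proposed FA model), to synthetic corridor-like measurements; all numbers are illustrative.

```python
# Minimal sketch: close-in (CI) reference-distance path-loss model,
#   PL(d) = FSPL(f, d0) + 10 n log10(d / d0) + shadowing,
# with the path-loss exponent n fitted by least squares. This is one of the
# well-known baseline models mentioned above, not the paper's FA model, and
# the "measurements" below are synthetic.
import numpy as np

C = 3e8          # speed of light, m/s
D0 = 1.0         # close-in reference distance, m

def fspl_db(freq_hz, d_m):
    """Free-space path loss in dB."""
    return 20 * np.log10(4 * np.pi * d_m * freq_hz / C)

def fit_ci_exponent(freq_hz, distances_m, measured_pl_db):
    """Least-squares estimate of the CI path-loss exponent n."""
    excess = measured_pl_db - fspl_db(freq_hz, D0)
    x = 10 * np.log10(distances_m / D0)
    return float(np.sum(x * excess) / np.sum(x * x))

# Synthetic indoor-corridor-like data at 28 GHz: true n = 1.6, 3 dB shadowing.
rng = np.random.default_rng(2)
d = np.linspace(2, 30, 40)
pl = fspl_db(28e9, D0) + 10 * 1.6 * np.log10(d / D0) + rng.normal(0, 3, d.size)

n_hat = fit_ci_exponent(28e9, d, pl)
print(f"fitted CI path-loss exponent n = {n_hat:.2f}")
```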
Sufficient Statistics for Divergence and the Probability of Misclassification
NASA Technical Reports Server (NTRS)
Quirein, J.
1972-01-01
One particular aspect of the feature selection problem is considered, which results from the transformation x = Bz, where B is a k by n matrix of rank k and k ≤ n. It is shown that in general, such a transformation results in a loss of information. In terms of the divergence, this is equivalent to the fact that the average divergence computed using the variable x is less than or equal to the average divergence computed using the variable z. A loss of information in terms of the probability of misclassification is shown to be equivalent to the fact that the probability of misclassification computed using variable x is greater than or equal to the probability of misclassification computed using variable z. First, the necessary facts relating k-dimensional and n-dimensional integrals are derived. Then the mentioned results about the divergence and probability of misclassification are derived. Finally it is shown that if no information is lost (in x = Bz) as measured by the divergence, then no information is lost as measured by the probability of misclassification.
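The inequality stated above can be checked numerically for Gaussian classes: the sketch below computes the symmetric (Jeffreys) divergence in z-space and after an arbitrary rank-k compression x = Bz; the class parameters and B are made-up examples.

```python
# Minimal sketch: the divergence between two Gaussian classes can only
# decrease (or stay equal) under a compression x = B z with B a k-by-n
# matrix of rank k, k <= n. Class parameters and B below are arbitrary.
import numpy as np

def jeffreys_divergence(m0, S0, m1, S1):
    """Symmetric (Jeffreys) divergence between N(m0, S0) and N(m1, S1)."""
    d = m1 - m0
    S0i, S1i = np.linalg.inv(S0), np.linalg.inv(S1)
    return 0.5 * (np.trace(S1i @ S0) + np.trace(S0i @ S1) - 2 * len(m0)
                  + d @ (S0i + S1i) @ d)

rng = np.random.default_rng(3)
n, k = 5, 2
m0, m1 = np.zeros(n), rng.normal(size=n)
A0, A1 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
S0, S1 = A0 @ A0.T + np.eye(n), A1 @ A1.T + np.eye(n)   # valid covariances

B = rng.normal(size=(k, n))                              # rank-k compression

div_z = jeffreys_divergence(m0, S0, m1, S1)
div_x = jeffreys_divergence(B @ m0, B @ S0 @ B.T, B @ m1, B @ S1 @ B.T)
print(f"divergence in z-space: {div_z:.3f}")
print(f"divergence in x-space: {div_x:.3f}  (never larger)")
```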
Maximum likelihood estimation for the double-count method with independent observers
Manly, Bryan F.J.; McDonald, Lyman L.; Garner, Gerald W.
1996-01-01
Data collected under a double-count protocol during line transect surveys were analyzed using new maximum likelihood methods combined with Akaike's information criterion to provide estimates of the abundance of polar bear (Ursus maritimus Phipps) in a pilot study off the coast of Alaska. Visibility biases were corrected by modeling the detection probabilities using logistic regression functions. Independent variables that influenced the detection probabilities included perpendicular distance of bear groups from the flight line and the number of individuals in the groups. A series of models were considered which vary from (1) the simplest, where the probability of detection was the same for both observers and was not affected by either distance from the flight line or group size, to (2) models where probability of detection is different for the two observers and depends on both distance from the transect and group size. Estimation procedures are developed for the case when additional variables may affect detection probabilities. The methods are illustrated using data from the pilot polar bear survey and some recommendations are given for design of a survey over the larger Chukchi Sea between Russia and the United States.
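For orientation, the sketch below shows the simplest special case of double-observer estimation, with constant detection probabilities and closed-form maximum likelihood estimates; the paper's models additionally let detection depend on distance and group size via logistic regression, and the counts here are hypothetical.

```python
# Minimal sketch: double-observer (two independent observers) abundance
# estimation in the simplest case of constant detection probabilities.
# Closed-form MLEs; the paper's models additionally let detection depend on
# distance and group size through logistic regression.
def double_observer_mle(x1_only, x2_only, x_both):
    """x1_only: groups seen only by observer 1; x2_only: only by 2; x_both: by both."""
    p1 = x_both / (x2_only + x_both)          # P(obs 1 detects | obs 2 detected)
    p2 = x_both / (x1_only + x_both)          # P(obs 2 detects | obs 1 detected)
    seen = x1_only + x2_only + x_both
    p_any = 1.0 - (1.0 - p1) * (1.0 - p2)     # P(detected by at least one)
    n_hat = seen / p_any                      # estimated number of groups present
    return p1, p2, n_hat

# Hypothetical counts from a survey strip.
p1, p2, n_hat = double_observer_mle(x1_only=18, x2_only=12, x_both=45)
print(f"p1 = {p1:.2f}, p2 = {p2:.2f}, estimated groups = {n_hat:.1f}")
```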
Critical parameters for sterilization of oil palm fruit by microwave irradiation
NASA Astrophysics Data System (ADS)
Sarah, Maya; Taib, M. R.
2017-08-01
A study to evaluate the critical parameters of microwave irradiation for sterilizing oil palm fruit was carried out at power densities of 560 to 1120 W/kg. The critical parameters are important to ensure that moisture loss during sterilization exceeds the critical moisture (Mc) but remains below the maximum moisture (Mmax). The critical moisture in this study was determined from the dielectric loss factor of heated oil palm fruits at 2450 MHz, obtained from the slope of the dielectric loss factor versus moisture loss curve. The Mc was used to indicate the critical temperature (Tc) and critical time (tc) for microwave sterilization. To ensure moisture loss above the critical value but not exceeding the maximum value, the time-temperature combinations for sterilization of oil palm fruits by microwave irradiation ranged from 6 min at 75°C to 17 min at 82°C.
Stellar Contrails in Quasi-stellar Objects: The Origin of Broad Absorption Lines
NASA Astrophysics Data System (ADS)
Scoville, Nick; Norman, Colin
1995-10-01
Active galactic nuclei (AGNs) and quasars often exhibit infrared excesses at λ = 2-10 microns attributable to thermal dust emission. In this paper we propose that this hot dust is supplied by circumstellar mass loss from evolved stars in the nuclear star cluster. The physics of the mass-loss dust, specifically the evaporation temperature, is a critical parameter in determining the accretion rate of mass-loss material onto the central AGN. For standard interstellar dust grains with an evaporation temperature of 1800 K the dust is destroyed inside a radius of 1 pc from a central luminosity source of 5 × 10 Lsun. The mass-loss material inside 1 pc will therefore have a lower radiation pressure efficiency and accrete inward. Outside this critical radius, dust may survive, and the mass loss is accelerated outward owing to the high radiation pressure efficiency of the dust mixed with the gas. The outflowing material will consist of discrete trails of debris shed by the individual mass-loss stars, and we suggest that these trails produce the broad absorption lines (BALs) seen in 5%-10% of QSOs. The model accounts naturally for the maximum outflow velocities seen in the BALs (~30,000 km s^-1 and varying as L^1/4) since this maximum terminal velocity occurs for matter originating at the inner edge of the radiative equilibrium dust survival zone. Although the radiation pressure acts on the dust, individual grains will be highly charged (Z ~ 10^3+), and the grains are therefore strongly coupled to the gas through the ambient magnetic fields. Numerical hydrodynamic calculations were done to follow the evolution of mass-loss material. As the orbiting debris is driven outward by radiation pressure, the trail forms a spiral with initially high pitch angle (~85°). The trails are compressed into thin ribbons in the radial direction initially by the radiation pressure gradients due to absorption within the trail. After reaching > 10^4 km s^-1 radial velocity, the compression can be maintained by ram pressure due to an ambient gas of modest density (~10^2 cm^-3). Each of the stellar contrails will have mean column density ~10^19-10^21 cm^-2, volume density ~10^8-10^9 cm^-3, and thickness 10^11-10^12 cm along the line of sight to the AGN corresponding to parameters deduced from observations of the BAL clouds. Assuming minimal expansion perpendicular to the line of sight at the speed of sound, the width of the trails is 10^15-10^16 cm, or 10^2-10^3 times the line-of-sight depth. Since the UV-emitting accretion disk probably has a radius of about 2 × 10^16 cm, a single trail will only partially cover the continuum, but for the column densities quoted above the observed absorption lines (e.g., C IV) will be optically thick with τ > 10. Since the contrails are nearly radial just after leaving the star when the maximum outward acceleration occurs, a large range of velocities (~4000 km s^-1) will be seen in absorption of the QSO light from each trail, and only a few disk-crossing trails are needed to account for the full width of broad absorption line troughs.
De-identifying a public use microdata file from the Canadian national discharge abstract database
2011-01-01
Background: The Canadian Institute for Health Information (CIHI) collects hospital discharge abstract data (DAD) from Canadian provinces and territories. There are many demands for the disclosure of this data for research and analysis to inform policy making. To expedite the disclosure of data for some of these purposes, the construction of a DAD public use microdata file (PUMF) was considered. Such purposes include: confirming some published results, providing broader feedback to CIHI to improve data quality, training students and fellows, providing an easily accessible data set for researchers to prepare for analyses on the full DAD data set, and serving as a large health data set for computer scientists and statisticians to evaluate analysis and data mining techniques. The objective of this study was to measure the probability of re-identification for records in a PUMF, and to de-identify a national DAD PUMF consisting of 10% of records. Methods: Plausible attacks on a PUMF were evaluated. Based on these attacks, the 2008-2009 national DAD was de-identified. A new algorithm was developed to minimize the amount of suppression while maximizing the precision of the data. The acceptable threshold for the probability of correct re-identification of a record was set at between 0.04 and 0.05. Information loss was measured in terms of the extent of suppression and entropy. Results: Two different PUMF files were produced, one with geographic information, and one with no geographic information but more clinical information. At a threshold of 0.05, the maximum proportion of records with the diagnosis code suppressed was 20%, but these suppressions represented only 8-9% of all values in the DAD. Our suppression algorithm has less information loss than a more traditional approach to suppression. Smaller regions, patients with longer stays, and age groups that are infrequently admitted to hospitals tend to be the ones with the highest rates of suppression. Conclusions: The strategies we used to maximize data utility and minimize information loss can result in a PUMF that would be useful for the specific purposes noted earlier. However, to create a more detailed file with less information loss suitable for more complex health services research, the risk would need to be mitigated by requiring the data recipient to commit to a data sharing agreement. PMID:21861894
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
ERIC Educational Resources Information Center
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
Rank-k Maximal Statistics for Divergence and Probability of Misclassification
NASA Technical Reports Server (NTRS)
Decell, H. P., Jr.
1972-01-01
A technique is developed for selecting from n-channel multispectral data some k combinations of the n-channels upon which to base a given classification technique so that some measure of the loss of the ability to distinguish between classes, using the compressed k-dimensional data, is minimized. Information loss in compressing the n-channel data to k channels is taken to be the difference in the average interclass divergences (or probability of misclassification) in n-space and in k-space.
Ion-thruster propellant utilization
NASA Technical Reports Server (NTRS)
Kaufman, H. R.
1971-01-01
The evaluation and understanding of maximum propellant utilization, with mercury used as the propellant are presented. The primary-electron region in the ion chamber of a bombardment thruster is analyzed at maximum utilization. The results of this analysis, as well as experimental data from a range of ion-chamber configurations, show a nearly constant loss rate for unionized propellant at maximum utilization over a wide range of total propellant flow rate. The discharge loss level of 1000 eV/ion was used as a definition of maximum utilization, but the exact level of this definition has no effect on the qualitative results and little effect on the quantitative results. There are obvious design applications for the results of this investigation, but the results are particularly significant whenever efficient throttled operation is required.
NASA Astrophysics Data System (ADS)
Rouhani, Hassan; Leconte, Robert
2018-06-01
Climate change will affect precipitation and flood regimes. It is anticipated that the Probable Maximum Precipitation (PMP) and Probable Maximum Flood (PMF) will be modified in a changing climate. This paper aims to quantify and analyze climate change influences on PMP and PMF in three watersheds with different climatic conditions across the province of Québec, Canada. Output data from the Canadian Regional Climate Model (CRCM) was used to estimate PMP and Probable Maximum Snow Accumulation (PMSA) in future climate projections, which was then used to force the SWAT hydrological model to estimate PMF. PMP and PMF values were estimated for two time horizons each spanning 30 years: 1961-1990 (recent past) and 2041-2070 (future). PMP and PMF were separately analyzed for two seasons: summer-fall and spring. Results show that PMF in the watershed located in southern Québec would remain unchanged in the future horizon, but the trend for the watersheds located in the northeastern and northern areas of the province is an increase of up to 11%.
Williams, M S; Ebel, E D; Cao, Y
2013-01-01
The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment applications. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often times not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples that are collected with unequal probabilities of selection. A weighted maximum likelihood estimation framework is proposed for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
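A minimal sketch of the weighting idea, under assumptions not taken from the paper (a lognormal concentration model and a size-biased sampling scheme), is given below: each observation's log-likelihood contribution is weighted by its inverse selection probability, and the weighted fit is compared with the naive unweighted one.

```python
# Minimal sketch: weighted (pseudo-) maximum likelihood for data collected
# with unequal selection probabilities. Each observation's log-likelihood is
# weighted by its inverse selection probability. The lognormal model and the
# sampling scheme below are hypothetical, not the paper's data.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
pop = rng.lognormal(mean=1.0, sigma=0.5, size=50_000)   # "true" concentrations

# Size-biased sampling: higher concentrations are more likely to be selected.
sel_prob = pop / pop.sum() * 500
keep = rng.random(pop.size) < sel_prob
sample, w = pop[keep], 1.0 / sel_prob[keep]             # inverse-probability weights

def neg_weighted_loglik(theta, x, w):
    mu, log_sigma = theta
    return -np.sum(w * stats.lognorm.logpdf(x, s=np.exp(log_sigma), scale=np.exp(mu)))

res = optimize.minimize(neg_weighted_loglik, x0=[0.0, 0.0], args=(sample, w))
mu_w, sigma_w = res.x[0], np.exp(res.x[1])

mu_u, sigma_u = np.log(sample).mean(), np.log(sample).std()  # unweighted MLE
print(f"weighted   MLE: mu = {mu_w:.2f}, sigma = {sigma_w:.2f}")
print(f"unweighted MLE: mu = {mu_u:.2f}, sigma = {sigma_u:.2f}  (biased upward)")
```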
Dopaminergic Drug Effects on Probability Weighting during Risky Decision Making.
Ojala, Karita E; Janssen, Lieneke K; Hashemi, Mahur M; Timmer, Monique H M; Geurts, Dirk E M; Ter Huurne, Niels P; Cools, Roshan; Sescousse, Guillaume
2018-01-01
Dopamine has been associated with risky decision-making, as well as with pathological gambling, a behavioral addiction characterized by excessive risk-taking behavior. However, the specific mechanisms through which dopamine might act to foster risk-taking and pathological gambling remain elusive. Here we test the hypothesis that this might be achieved, in part, via modulation of subjective probability weighting during decision making. Human healthy controls (n = 21) and pathological gamblers (n = 16) played a decision-making task involving choices between sure monetary options and risky gambles both in the gain and loss domains. Each participant played the task twice, either under placebo or the dopamine D2/D3 receptor antagonist sulpiride, in a double-blind counterbalanced design. A prospect theory modelling approach was used to estimate subjective probability weighting and sensitivity to monetary outcomes. Consistent with prospect theory, we found that participants presented a distortion in the subjective weighting of probabilities, i.e., they overweighted low probabilities and underweighted moderate to high probabilities, both in the gain and loss domains. Compared with placebo, sulpiride attenuated this distortion in the gain domain. Across drugs, the groups did not differ in their probability weighting, although gamblers consistently underweighted losing probabilities in the placebo condition. Overall, our results reveal that dopamine D2/D3 receptor antagonism modulates the subjective weighting of probabilities in the gain domain, in the direction of more objective, economically rational decision making.
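As an illustration of the probability-weighting distortion described above, the sketch below evaluates a one-parameter weighting function of the Tversky-Kahneman form; the abstract does not state which parameterization the authors fitted, so the functional form and the gamma values are assumptions.

```python
# Minimal sketch: a one-parameter probability weighting function of the
# Tversky-Kahneman form, w(p) = p^g / (p^g + (1-p)^g)^(1/g). With g < 1 it
# overweights small and underweights moderate-to-large probabilities, the
# distortion described above. The abstract does not say which parametric form
# the authors fitted, so this is purely illustrative.
import numpy as np

def tk_weight(p, gamma):
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1.0 / gamma)

probs = np.array([0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99])
for gamma in (0.6, 1.0):            # 0.6: distorted; 1.0: objective weighting
    w = tk_weight(probs, gamma)
    print(f"gamma={gamma}: " + ", ".join(f"w({p:.2f})={x:.2f}" for p, x in zip(probs, w)))
```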
Winter habitat selection of mule deer before and during development of a natural gas field
Sawyer, H.; Nielson, R.M.; Lindzey, F.; McDonald, L.L.
2006-01-01
Increased levels of natural gas exploration, development, and production across the Intermountain West have created a variety of concerns for mule deer (Odocoileus hemionus) populations, including direct habitat loss to road and well-pad construction and indirect habitat losses that may occur if deer use declines near roads or well pads. We examined winter habitat selection patterns of adult female mule deer before and during the first 3 years of development in a natural gas field in western Wyoming. We used global positioning system (GPS) locations collected from a sample of adult female mule deer to model relative frequency or probability of use as a function of habitat variables. Model coefficients and predictive maps suggested mule deer were less likely to occupy areas in close proximity to well pads than those farther away. Changes in habitat selection appeared to be immediate (i.e., year 1 of development), and no evidence of well-pad acclimation occurred through the course of the study; rather, mule deer selected areas farther from well pads as development progressed. Lower predicted probabilities of use within 2.7 to 3.7 km of well pads suggested indirect habitat losses may be substantially larger than direct habitat losses. Additionally, some areas classified as high probability of use by mule deer before gas field development changed to areas of low use following development, and others originally classified as low probability of use were used more frequently as the field developed. If areas with high probability of use before development were those preferred by the deer, observed shifts in their distribution as development progressed were toward less-preferred and presumably less-suitable habitats.
NASA Astrophysics Data System (ADS)
Caldarelli, Stefano; Catalano, Donata; Di Bari, Lorenzo; Lumetti, Marco; Ciofalo, Maurizio; Alberto Veracini, Carlo
1994-07-01
The dipolar couplings observed by NMR spectroscopy of solutes in nematic solvents (LX-NMR) are used to build up the maximum entropy (ME) probability distribution function of the variables describing the orientational and internal motion of the molecule. The ME conformational distributions of 2,2'- and 3,3'-dithiophene and 2,2':5',2″-terthiophene (α-terthienyl) thus obtained are compared with the results of previous studies. The 2,2'- and 3,3'-dithiophene molecules exhibit equilibria among cisoid and transoid forms; the probability maxima correspond to planar and twisted conformers for 2,2'- and 3,3'-dithiophene, respectively. 2,2':5',2″-Terthiophene has two internal degrees of freedom; the ME approach indicates that the trans, trans and cis, trans planar conformations are the most probable. The correlation between the two intramolecular rotations is also discussed.
Method and device for landing aircraft dependent on runway occupancy time
NASA Technical Reports Server (NTRS)
Ghalebsaz Jeddi, Babak (Inventor)
2012-01-01
A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.
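A minimal Monte Carlo sketch of the underlying trade-off is given below: assumed distributions for runway occupancy time and for the landing time interval implied by a target separation yield an estimated go-around probability and attempt-to-land rate. The distributions and numbers are hypothetical and not taken from the patent.

```python
# Minimal Monte Carlo sketch of the trade-off behind the patent: given assumed
# distributions for runway occupancy time (ROT) and for the landing time
# interval (LTI) implied by a target separation, estimate how often the trail
# aircraft would arrive before the lead clears the runway (i.e. a go-around
# would be needed). All distributions and numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
N = 200_000

def go_around_probability(mean_lti_s, sd_lti_s=8.0, mean_rot_s=50.0, sd_rot_s=6.0):
    rot = rng.normal(mean_rot_s, sd_rot_s, N)   # lead's runway occupancy time
    lti = rng.normal(mean_lti_s, sd_lti_s, N)   # time between successive arrivals
    return float(np.mean(lti < rot))            # trail arrives before runway is clear

for target_lti in (60, 70, 80, 90):             # larger separation -> lower risk,
    p = go_around_probability(target_lti)       # but fewer landings per hour
    print(f"target LTI {target_lti:3d} s: go-around probability ~ {p:.3f}, "
          f"attempt rate ~ {3600 / target_lti:.0f} landings/h")
```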
Short-Run Effects of Parental Job Loss on Children's Academic Achievement
ERIC Educational Resources Information Center
Stevens, Ann Huff; Schaller, Jessamyn
2011-01-01
We study the relationship between parental job loss and children's academic achievement using data on job loss and grade retention from the 1996, 2001, and 2004 panels of the Survey of Income and Program Participation. We find that a parental job loss increases the probability of children's grade retention by 0.8 percentage points, or around 15%.…
The priority heuristic: making choices without trade-offs.
Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph
2006-04-01
Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, the authors generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (a) the Allais paradox, (b) risk aversion for gains if probabilities are high, (c) risk seeking for gains if probabilities are low (e.g., lottery tickets), (d) risk aversion for losses if probabilities are low (e.g., buying insurance), (e) risk seeking for losses if probabilities are high, (f) the certainty effect, (g) the possibility effect, and (h) intransitivities. The authors test how accurately the heuristic predicts people's choices, compared with previously proposed heuristics and 3 modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. ((c) 2006 APA, all rights reserved).
Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David
2003-06-01
To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss) and, for each condition, the number of test locations classified as defective (suprathreshold strategies) and with pattern deviation probability less than 5% (full-threshold strategy), was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss. Both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss. The conventional suprathreshold perimetry also underestimated the defect area. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
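The estimation idea can be illustrated with a small linear program: the sketch below recovers a transport plan between hypothetical counts at three locations under a squared Euclidean cost and row-normalizes it into transition probabilities; locations and counts are invented for illustration.

```python
# Minimal sketch: estimate transition probabilities between spatial locations
# from counts at two time points by solving an optimal transport problem with
# squared Euclidean cost (a linear program). Locations and counts below are
# hypothetical.
import numpy as np
from scipy.optimize import linprog

coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])   # 3 colony locations
n_t1 = np.array([100.0, 50.0, 30.0])                        # counts at time 1
n_t2 = np.array([70.0, 60.0, 50.0])                         # counts at time 2

k = len(coords)
cost = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)  # squared distances

# Decision variable: flow f[i, j] (flattened row-major), minimizing total cost,
# subject to row sums = n_t1 and column sums = n_t2.
A_rows = np.kron(np.eye(k), np.ones(k))        # sum_j f[i, j] = n_t1[i]
A_cols = np.kron(np.ones(k), np.eye(k))        # sum_i f[i, j] = n_t2[j]
res = linprog(cost.ravel(),
              A_eq=np.vstack([A_rows, A_cols]),
              b_eq=np.concatenate([n_t1, n_t2]),
              bounds=(0, None), method="highs")

plan = res.x.reshape(k, k)
transition_probs = plan / plan.sum(axis=1, keepdims=True)   # row-normalize
print(np.round(transition_probs, 3))
```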
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to understanding the underlying geological processes and to adequately assessing the mechanical effects of Es on the differential settlement of large continuous structure foundations. Such analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precision and multiple sources of uncertainty. Single CPT samplings were modeled as rational probability density curves using maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF of the CPT positions and of the potential value at the prediction point were built from borehole experiments; then, after numerical integration over the CPT probability density curves, the posterior probability density curve at the prediction point was calculated within the Bayesian reverse interpolation framework. The results of the Gaussian sequential stochastic simulation and Bayesian methods were compared. The differences between modeling single CPT samplings as normal distributions and as simulated probability density curves based on maximum entropy theory are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, while more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization of a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.
Pregnancy loss history at first parity and selected adverse pregnancy outcomes.
Ahrens, Katherine A; Rossen, Lauren M; Branum, Amy M
2016-07-01
To evaluate the association between pregnancy loss history and adverse pregnancy outcomes. Pregnancy history was captured during a computer-assisted personal interview for 21,277 women surveyed in the National Survey of Family Growth (1995-2013). History of pregnancy loss (<20 weeks) at first parity was categorized in three ways: number of losses, maximum gestational age of loss(es), and recency of last pregnancy loss. We estimated risk ratios for a composite measure of selected adverse pregnancy outcomes (preterm, stillbirth, or low birthweight) at first parity and in any future pregnancy, separately, using predicted margins from adjusted logistic regression models. At first parity, compared with having no loss, having 3+ previous pregnancy losses (adjusted risk ratio (aRR) = 1.66 [95% CI = 1.13, 2.43]), a maximum gestational age of loss(es) at ≥10 weeks (aRR = 1.28 [1.04, 1.56]) or having experienced a loss 24+ months ago (aRR = 1.36 [1.10, 1.68]) were associated with increased risks of adverse pregnancy outcomes. For future pregnancies, only having a history of 3+ previous pregnancy losses at first parity was associated with increased risks (aRR = 1.97 [1.08, 3.60]). Number, gestational age, and recency of pregnancy loss at first parity were associated with adverse pregnancy outcomes in U.S. women. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Jaiswal, P.; van Westen, C. J.; Jetten, V.
2010-06-01
A quantitative approach for landslide risk assessment along transportation lines is presented and applied to a road and a railway alignment in the Nilgiri hills in southern India. The method allows estimating the direct risk affecting the alignments, vehicles and people, and the indirect risk resulting from the disruption of economic activities. The data required for the risk estimation were obtained from historical records. A total of 901 landslides initiating from cut slopes along the railway and road alignments were catalogued. The landslides were grouped into three magnitude classes based on landslide type, volume, scar depth, run-out distance, etc., and their probability of occurrence was obtained using a frequency-volume distribution. Hazard for a given return period, expressed as the number of landslides of a given magnitude class per kilometre of cut slopes, was obtained using the Gumbel distribution and the probability of landslide magnitude. In total, 18 specific hazard scenarios were generated using the three magnitude classes and six return periods (1, 3, 5, 15, 25, and 50 years). The assessment of the vulnerability of the road and railway line was based on damage records, whereas the vulnerability of different types of vehicles and people was assessed subjectively from limited historical incidents. Direct specific loss for the alignments (railway line and road) and vehicles (train, bus, lorry, car and motorbike) was expressed in monetary value (US$), and direct specific loss of life of commuters was expressed as an annual probability of death. Indirect specific loss (US$) derived from traffic interruption was evaluated considering alternative driving routes, and includes losses resulting from additional fuel consumption, additional travel cost, loss of income to local business, and loss of revenue to the railway department. The results indicate that the total loss, including both direct and indirect loss, for return periods from 1 to 50 years varies from US$ 90 840 to US$ 779 500, and the average annual total loss was estimated as US$ 35 000. The annual probability of death for the person most at risk travelling in a bus, lorry, car, motorbike or train is less than 10^-4 per annum in all the time periods considered. The detailed estimation of direct and indirect risk will facilitate the development of landslide risk mitigation and management strategies for transportation lines in the study area.
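One common way to annualize scenario losses of the kind described above is to integrate the loss-exceedance curve; the sketch below does this with hypothetical return periods and losses (not the study's figures), so it illustrates the bookkeeping rather than reproduces the US$ 35 000 estimate.

```python
# Minimal sketch: annualizing scenario losses. Each return period T gives an
# annual exceedance probability 1/T; the average annual loss is the area under
# the loss-exceedance curve (trapezoidal rule). Return periods and losses below
# are hypothetical, not the study's figures, and contributions from events more
# frequent than the shortest return period are ignored (truncation).
import numpy as np

return_periods = np.array([10.0, 25.0, 50.0, 100.0])        # years
scenario_loss = np.array([1.0e5, 2.5e5, 4.0e5, 6.0e5])      # loss per scenario

exceed_prob = 1.0 / return_periods            # annual probability of exceedance
order = np.argsort(exceed_prob)               # integrate over increasing probability
x, y = exceed_prob[order], scenario_loss[order]
aal = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))    # trapezoidal area
print(f"average annual loss ~ {aal:,.0f} (same currency as the scenario losses)")
```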
Probability concepts in quality risk management.
Claycamp, H Gregg
2012-01-01
Essentially any concept of risk is built on fundamental concepts of chance, likelihood, or probability. Although risk is generally a probability of loss of something of value, given that a risk-generating event will occur or has occurred, it is ironic that the quality risk management literature and guidelines on quality risk management tools are relatively silent on the meaning and uses of "probability." The probability concept is typically applied by risk managers as a combination of frequency-based calculation and a "degree of belief" meaning of probability. Probability as a concept that is crucial for understanding and managing risk is discussed through examples from the most general, scenario-defining and ranking tools that use probability implicitly to more specific probabilistic tools in risk management. A rich history of probability in risk management applied to other fields suggests that high-quality risk management decisions benefit from the implementation of more thoughtful probability concepts in both risk modeling and risk management. Essentially any concept of risk is built on fundamental concepts of chance, likelihood, or probability. Although "risk" generally describes a probability of loss of something of value, given that a risk-generating event will occur or has occurred, it is ironic that the quality risk management literature and guidelines on quality risk management methodologies and respective tools focus on managing severity but are relatively silent on the in-depth meaning and uses of "probability." Pharmaceutical manufacturers are expanding their use of quality risk management to identify and manage risks to the patient that might occur in phases of the pharmaceutical life cycle from drug development to manufacture, marketing to product discontinuation. A probability concept is typically applied by risk managers as a combination of data-based measures of probability and a subjective "degree of belief" meaning of probability. Probability as a concept that is crucial for understanding and managing risk is discussed through examples from the most general, scenario-defining and ranking tools that use probability implicitly to more specific probabilistic tools in risk management.
NASA Technical Reports Server (NTRS)
Frank, H. A.; Uchiyama, A. A.
1973-01-01
Water vapor loss rates were determined from simulated and imperfectly sealed alkaline cells in the vacuum environment. The observed rates were found to be in agreement with a semi-empirical equation employed in vacuum technology. Results thereby give support for using this equation for the prediction of loss rates of battery gases and vapors to the aerospace environment. On this basis it was shown how the equation can be applied to the solution of many heretofore unresolved questions regarding leaks in batteries. Among these are the maximum permissible leak size consistent with a given cell life or conversely the maximum life consistent with a given leak size. It was also shown that loss rates of these cells in the terrestrial environment are several orders of magnitude less than the corresponding loss rates in the aerospace environment.
Swanson, Eric
2012-09-01
Bupivacaine levels have not been measured in cosmetic surgery patients to establish safety. Blood loss has been underestimated using the small volumes present in the aspirate. The proportion of wetting solution removed by liposuction has not been reliably ascertained. To remedy these deficiencies, a prospective study was undertaken among 322 consecutive patients presenting for superwet ultrasonic liposuction and/or abdominoplasty, and other combined procedures, using infusions containing 0.05% lidocaine (liposuction) and/or 0.025% bupivacaine (abdominoplasty) with 1:500,000 epinephrine. Plasma levels of lidocaine, bupivacaine, and epinephrine were studied in a subset of 76 consecutive patients, including hourly intraoperative samples in 39 consecutive patients. Anesthetic levels were also measured in 12 consecutive patients during the 24-hour period after infusion. The maximum lidocaine dose was 3243 mg and the maximum level was 2.10 μg/ml. The maximum bupivacaine dose was 550 mg and the maximum level was 0.81 μg/ml. No clinical toxicity was encountered. Estimated blood loss from liposuction was 217.5 cc + 187 cc/liter of aspirate (r = 0.65). Abdominoplasty added 290 cc of blood loss, on average. The mean proportion of wetting solution removed by liposuction was 9.8 percent. Bupivacaine may be safely used in cosmetic surgery. A concentration of 1:500,000 epinephrine is safe and effective when administered as part of a wetting solution that is limited to less than 5 liters. Estimated blood loss is higher than previous estimates based on lipocrits. Combination procedures are safe.
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
ERIC Educational Resources Information Center
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.
Code of Federal Regulations, 2014 CFR
2014-10-01
... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...
Laser damage metrology in biaxial nonlinear crystals using different test beams
NASA Astrophysics Data System (ADS)
Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille
2008-01-01
Laser damage measurements in nonlinear optical crystals, in particular in biaxial crystals, may be influenced by several effects specific to these materials or greatly enhanced in them. Before discussion of these effects, we address the topic of error bar determination for probability measurements. Error bars for the damage probabilities are important because nonlinear crystals are often small and expensive, so only a few sites are used for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that possibly modify the maximum intensity in a biaxial nonlinear crystal are: focusing aberration, walk-off and self-focusing. Depending on focusing conditions, propagation direction, polarization of the light and the position of the focus point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing are briefly reviewed for completeness. Finally, parasitic second harmonic generation may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
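The flow diagram referred to above is not reproduced in this abstract. For the situation it describes, a damage probability estimated from k damaged sites out of only n tested sites, a standard way to attach error bars at a chosen confidence level is the exact (Clopper-Pearson) binomial interval. The sketch below implements that standard interval in Python; the function name and example numbers are illustrative assumptions, not taken from the paper.

    # Exact (Clopper-Pearson) confidence interval for a damage probability
    # estimated from k damaged sites out of n tested sites. This is a standard
    # binomial interval, not necessarily the procedure used in the paper.
    from scipy.stats import beta

    def damage_probability_interval(k, n, confidence=0.95):
        """Return (p_hat, lower, upper) for k damage events out of n shots."""
        alpha = 1.0 - confidence
        lower = 0.0 if k == 0 else beta.ppf(alpha / 2.0, k, n - k + 1)
        upper = 1.0 if k == n else beta.ppf(1.0 - alpha / 2.0, k + 1, n - k)
        return k / n, lower, upper

    # With only 10 sites per fluence, the interval is wide: 2 damaged sites out
    # of 10 gives p_hat = 0.2 but a 95% interval of roughly (0.03, 0.56).
    print(damage_probability_interval(2, 10))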
Mara, Duncan
2011-06-01
The maximum additional burden of water- and wastewater-related disease of 10(-6) disability-adjusted life year (DALY) loss per person per year (pppy), used in the WHO Drinking-water Quality Guidelines and the WHO Guidelines for Wastewater Use in Agriculture, is based on US EPA's acceptance of a 70-year lifetime waterborne cancer risk of 10(-5) per person, equivalent to an annual risk of 1.4x10(-7) per person, which is four orders of magnitude lower than the actual all-cancer incidence in the USA in 2009 of 1.8x10(-3) pppy. A maximum additional burden of 10(-4) DALY loss pppy would reduce this risk to a more cost-effective, but still low, risk of 1.4x10(-5) pppy. It would increase the DALY loss pppy in low- and middle-income countries due to diarrhoeal diseases from the current level of 0.0119 pppy to 0.0120 pppy, and that due to ascariasis from 0.0026 pppy to 0.0027 pppy, but neither increase is of public-health significance. It is therefore recommended that the maximum additional burden of disease from these activities be increased to a DALY loss of 10(-4) pppy as this provides an adequate margin of public-health safety in relation to waterborne-cancer deaths, diarrhoeal disease and ascariasis in all countries.
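The unit conversions behind these figures can be checked with one-line arithmetic; the sketch below simply reproduces the numbers quoted in the abstract.

    # Reproducing the risk arithmetic quoted in the abstract.
    lifetime_risk = 1e-5           # accepted 70-year waterborne cancer risk per person
    annual_risk = lifetime_risk / 70.0
    print(annual_risk)             # ~1.4e-7 per person per year

    all_cancer_incidence = 1.8e-3  # reported US all-cancer incidence in 2009, pppy
    print(all_cancer_incidence / annual_risk)  # ~1.3e4, i.e. about four orders of magnitude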
Inconvenient Truth or Convenient Fiction? Probable Maximum Precipitation and Nonstationarity
NASA Astrophysics Data System (ADS)
Nielsen-Gammon, J. W.
2017-12-01
According to the inconvenient truth that Probable Maximum Precipitation (PMP) represents a non-deterministic, statistically very rare event, future changes in PMP involve a complex interplay between future frequencies of storm type, storm morphology, and environmental characteristics, many of which are poorly constrained by global climate models. On the other hand, according to the convenient fiction that PMP represents an estimate of the maximum possible precipitation that can occur at a given location, as determined by storm maximization and transposition, the primary climatic driver of PMP change is simply a change in maximum moisture availability. Increases in boundary-layer and total-column moisture have been observed globally, are anticipated from basic physical principles, and are robustly projected to continue by global climate models. Thus, using the same techniques that are used within the PMP storm maximization process itself, future PMP values may be projected. The resulting PMP trend projections are qualitatively consistent with observed trends of extreme rainfall within Texas, suggesting that in this part of the world the inconvenient truth is congruent with the convenient fiction.
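The moisture-maximization step mentioned above amounts to scaling a storm's rainfall by the ratio of maximum precipitable water to the value observed with the storm. A toy sketch of how a projected increase in maximum precipitable water would propagate into a PMP estimate follows; all numbers are invented placeholders, and the roughly 7% per kelvin Clausius-Clapeyron scaling is only an assumed illustration.

    # Toy moisture-maximization scaling in the spirit of the abstract's argument.
    # All numbers below are invented placeholders, not data from the study.
    def scaled_pmp(pmp_historical_mm, pw_max_historical_mm, pw_max_future_mm):
        """Scale a historical PMP estimate by the ratio of maximum precipitable water."""
        return pmp_historical_mm * (pw_max_future_mm / pw_max_historical_mm)

    # Assume ~7% more column moisture per kelvin of warming, applied for +2 K:
    print(scaled_pmp(pmp_historical_mm=1000.0,
                     pw_max_historical_mm=75.0,
                     pw_max_future_mm=75.0 * 1.07**2))   # ~1145 mm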
Long-term spectroscopic monitoring of the Luminous Blue Variable AG Carinae
NASA Astrophysics Data System (ADS)
Stahl, O.; Jankovics, I.; Kovács, J.; Wolf, B.; Schmutz, W.; Kaufer, A.; Rivinius, Th.; Szeifert, Th.
2001-08-01
We have extensively monitored the Luminous Blue Variable AG Car (HD 94910) spectroscopically. Our data cover the years 1989 to 1999. In this period, the star underwent almost a full S Dor cycle from visual minimum to maximum and back. Over several seasons, up to four months of almost daily spectra are available. Our data cover most of the visual spectral range with a high spectral resolution (λ/Δλ ~ 20 000). This allows us to investigate the variability in many lines on time scales from days to years. The strongest variability occurs on a time scale of years. Qualitatively, the variations can be understood as changes of the effective temperature and radius, which are in phase with the optical light curve. Quantitatively, there are several interesting deviations from this behaviour, however. The Balmer lines show P Cygni profiles and have their maximum strength (both in equivalent width and line flux) after the peak of the optical light curve, at the descending branch of the light curve. The line-width during maximum phase is smaller than during minimum, but it has a local maximum close to the peak of the visual light curve. We derive mass-loss rates over the cycle from the Hα line and find the highest mass loss rates (log Ṁ/(M⊙ yr⁻¹) ~ -3.8, about a factor of five higher than in the minimum, where we find log Ṁ/(M⊙ yr⁻¹) ~ -4.5) after the visual maximum. Line-splitting is very commonly observed, especially on the rise to maximum and on the descending branch from maximum. The components are very long-lived (years) and are probably unrelated to similar-looking line-splitting events in normal supergiants. Small apparent accelerations of the components are observed. The change in radial velocity could be due to successive narrowing of the components, with the absorption disappearing at small expansion velocities first. In general, the line-splitting is more likely the result of missing absorption at intermediate velocities than of excess absorption at the velocities of the components. The HeI lines and other lines which form deep in the atmosphere show the most peculiar variations. The HeI lines show a central absorption with variable blue- and red-shifted emission components. Due to the variations of the emission components, the HeI lines can change their line profile from a normal P Cyg profile to an inverse P Cyg profile or double-peak emission. In addition, very broad (±1500 km s⁻¹) emission wings are seen at the strongest HeI lines of AG Car. At some phases, a blue-shifted absorption is also present. The central absorption of the HeI lines is blue-shifted before and red-shifted after maximum. Possibly, we directly see the expansion and contraction of the photosphere. If this explanation is correct, the velocity of the continuum-forming layer is not dominated by expansion but is only slightly oscillating around the systemic velocity. Based on observations collected at the European Southern Observatory at La Silla, Chile.
McCarthy, Peter M.
2006-01-01
The Yellowstone River is very important in a variety of ways to the residents of southeastern Montana; however, it is especially vulnerable to spilled contaminants. In 2004, the U.S. Geological Survey, in cooperation with Montana Department of Environmental Quality, initiated a study to develop a computer program to rapidly estimate instream travel times and concentrations of a potential contaminant in the Yellowstone River using regression equations developed in 1999 by the U.S. Geological Survey. The purpose of this report is to describe these equations and their limitations, describe the development of a computer program to apply the equations to the Yellowstone River, and provide detailed instructions on how to use the program. This program is available online at [http://pubs.water.usgs.gov/sir2006-5057/includes/ytot.xls]. The regression equations provide estimates of instream travel times and concentrations in rivers where little or no contaminant-transport data are available. Equations were developed and presented for the most probable flow velocity and the maximum probable flow velocity. These velocity estimates can then be used to calculate instream travel times and concentrations of a potential contaminant. The computer program was developed so estimation equations for instream travel times and concentrations can be solved quickly for sites along the Yellowstone River between Corwin Springs and Sidney, Montana. The basic types of data needed to run the program are spill data, streamflow data, and data for locations of interest along the Yellowstone River. Data output from the program includes spill location, river mileage at specified locations, instantaneous discharge, mean-annual discharge, drainage area, and channel slope. Travel times and concentrations are provided for estimates of the most probable velocity of the peak concentration and the maximum probable velocity of the peak concentration. Verification of estimates of instream travel times and concentrations for the Yellowstone River requires information about the flow velocity throughout the 520 mi of river in the study area. Dye-tracer studies would provide the best data about flow velocities and would provide the best verification of instream travel times and concentrations estimated from this computer program; however, data from such studies does not currently (2006) exist and new studies would be expensive and time-consuming. An alternative approach used in this study for verification of instream travel times is based on the use of flood-wave velocities determined from recorded streamflow hydrographs at selected mainstem streamflow-gaging stations along the Yellowstone River. The ratios of flood-wave velocity to the most probable velocity for the base flow estimated from the computer program are within the accepted range of 2.5 to 4.0 and indicate that flow velocities estimated from the computer program are reasonable for the Yellowstone River. The ratios of flood-wave velocity to the maximum probable velocity are within a range of 1.9 to 2.8 and indicate that the maximum probable flow velocities estimated from the computer program, which corresponds to the shortest travel times and maximum probable concentrations, are conservative and reasonable for the Yellowstone River.
ERIC Educational Resources Information Center
Stevens, Ann Huff; Schaller, Jessamyn
2009-01-01
We study the relationship between parental job loss and children's academic achievement using data on job loss and grade retention from the 1996, 2001, and 2004 panels of the Survey of Income and Program Participation. We find that a parental job loss increases the probability of children's grade retention by 0.8 percentage points, or around 15…
NASA Technical Reports Server (NTRS)
Edmonds, L. D.
2016-01-01
Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.
Crop production and economic loss due to wind erosion in hot arid ecosystem of India
NASA Astrophysics Data System (ADS)
Santra, Priyabrata; Moharana, P. C.; Kumar, Mahesh; Soni, M. L.; Pandey, C. B.; Chaudhari, S. K.; Sikka, A. K.
2017-10-01
Wind erosion is a severe land degradation process in hot arid western India and affects the agricultural production system. It affects crop yield directly by damaging the crops through abrasion, burial, dust deposition etc. and indirectly by reducing soil fertility. In this study, an attempt was made to quantify the indirect impact of the wind erosion process on crop production loss and the associated economic loss in the hot arid ecosystem of India. Soil loss due to wind erosion was observed to range from a minimum of 1.3 t ha⁻¹ to a maximum of 83.3 t ha⁻¹, depending on severity. Yield loss due to wind erosion was highest for groundnut (Arachis hypogea) (5-331 kg ha⁻¹ yr⁻¹) and lowest for moth bean (Vigna aconitifolia) (1-93 kg ha⁻¹ yr⁻¹). For pearl millet (Pennisetum glaucum), which covers a major portion of arable lands in western Rajasthan, the yield loss was 3-195 kg ha⁻¹ yr⁻¹. Economic loss was higher for groundnut and clusterbean (Cyamopsis tetragonoloba) than for the other crops, which are about
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.
Maximizing the Detection Probability of Kilonovae Associated with Gravitational Wave Observations
NASA Astrophysics Data System (ADS)
Chan, Man Leong; Hu, Yi-Ming; Messenger, Chris; Hendry, Martin; Heng, Ik Siong
2017-01-01
Estimates of the source sky location for gravitational wave signals are likely to span areas of up to hundreds of square degrees or more, making it very challenging for most telescopes to search for counterpart signals in the electromagnetic spectrum. To boost the chance of successfully observing such counterparts, we have developed an algorithm that optimizes the number of observing fields and their corresponding time allocations by maximizing the detection probability. As a proof-of-concept demonstration, we optimize follow-up observations targeting kilonovae using telescopes including the CTIO-Dark Energy Camera, Subaru-HyperSuprimeCam, Pan-STARRS, and the Palomar Transient Factory. We consider three simulated gravitational wave events with 90% credible error regions spanning areas from ∼30 deg² to ∼300 deg². Assuming a source at 200 Mpc, we demonstrate that to obtain a maximum detection probability, there is an optimized number of fields for any particular event that a telescope should observe. To inform future telescope design studies, we present the maximum detection probability and corresponding number of observing fields for a combination of limiting magnitudes and fields of view over a range of parameters. We show that for large gravitational wave error regions, telescope sensitivity rather than field of view is the dominating factor in maximizing the detection probability.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Monasson, Rémi
2015-09-01
The maximum entropy principle (MEP) is a very useful working hypothesis in a wide variety of inference problems, ranging from biological to engineering tasks. To better understand the reasons for the success of MEP, we propose a statistical-mechanical formulation to treat the space of probability distributions constrained by the measures of (experimental) observables. In this paper we first review the results of a detailed analysis of the simplest case of randomly chosen observables. In addition, we investigate by numerical and analytical means the case of smooth observables, which is of practical relevance. Our preliminary results are presented and discussed with respect to the efficiency of the MEP.
Effect of Conflict Resolution Maneuver Execution Delay on Losses of Separation
NASA Technical Reports Server (NTRS)
Cone, Andrew C.
2010-01-01
This paper examines uncertainty in the maneuver execution delay for data linked conflict resolution maneuvers. This uncertainty could cause the previously cleared primary conflict to reoccur or a secondary conflict to appear. Results show that the likelihood of a primary conflict reoccurring during a horizontal conflict resolution maneuver increases with larger initial turn-out angles and with shorter times until loss of separation. There is also a significant increase in the probability of a primary conflict reoccurring when the time until loss falls under three minutes. Increasing horizontal separation by an additional 1.5 nmi lowers the risk, but does not completely eliminate it. Secondary conflicts were shown to have a small probability of occurring in all tested configurations.
Monte Carlo simulation of single accident airport risk profile
NASA Technical Reports Server (NTRS)
1979-01-01
A computer simulation model was developed for estimating the potential economic impacts of a carbon fiber release upon facilities within an 80 kilometer radius of a major airport. The model simulated the possible range of release conditions and the resulting dispersion of the carbon fibers. Each iteration of the model generated a specific release scenario, which would cause a specific amount of dollar loss to the surrounding community. By repeated iterations, a risk profile was generated, showing the probability distribution of losses from one accident. Using accident probability estimates, the risk profile for annual losses was derived. The mechanics of the simulation model, the required input data, and the risk profiles generated for the 26 large hub airports are described.
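The report's release and dispersion models are not reproduced here, but the mechanics of building a single-accident risk profile by repeated sampling can be sketched with a placeholder loss model; everything below except the general structure (sample conditions, compute loss, tabulate exceedance probabilities, annualize with an accident frequency) is invented.

    # Minimal sketch of a Monte Carlo single-accident risk profile.
    # The loss model and all parameters are placeholders, not the report's models.
    import numpy as np

    rng = np.random.default_rng(0)
    n_iterations = 10_000

    # Each iteration: sample release conditions and convert them to a dollar loss.
    release_mass = rng.lognormal(mean=2.0, sigma=1.0, size=n_iterations)  # kg of fiber
    dispersion = rng.uniform(0.1, 1.0, size=n_iterations)                 # weather proxy
    losses = 50_000.0 * release_mass * dispersion                         # dollars (toy)

    # Risk profile: probability that a single accident exceeds a given loss.
    loss_grid = np.logspace(4, 8, 50)
    exceedance = np.array([(losses > x).mean() for x in loss_grid])

    # Annualized risk: scale by an assumed accident frequency (here 0.02 per year).
    print("expected annual loss:", 0.02 * losses.mean())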
A figure of merit for AMTEC electrodes
NASA Technical Reports Server (NTRS)
Underwood, M. L.; Williams, R. M.; Jeffries-Nakamura, B.; Ryan, M. A.
1991-01-01
As a method to compare the results of alkali metal thermoelectric converter (AMTEC) electrode performance measured under different conditions, an AMTEC figure of merit called ZA is proposed. This figure of merit is the ratio of the experimental maximum power for an electrode to a calculated maximum power density as determined from a recently published electrode performance model. The calculation of a maximum power density assumes that certain loss terms in the electrode can be reduced to essentially zero by improved cell design and construction, and that the electrochemical exchange current is determined from a standard value. Other losses in the electrode are considered inherent to the electrode performance. Thus, these terms remain in the determination of the calculated maximum power. A value of ZA near one, then, indicates an electrode performance near the maximum possible performance. The primary limitation of this calculation is that the small electrode effect cannot be included. This effect leads to anomalously high values of ZA. Thus, the electrode area should be reported along with the figure of merit.
External auditory exostoses and hearing loss in the Shanidar 1 Neandertal
2017-01-01
The Late Pleistocene Shanidar 1 older adult male Neandertal is known for the crushing fracture of his left orbit with a probable reduction in vision, the loss of his right forearm and hand, and evidence of an abnormal gait, as well as probable diffuse idiopathic skeletal hyperostosis. He also exhibits advanced external auditory exostoses in his left auditory meatus and larger ones with complete bridging across the porus in the right meatus (both Grade 3). These growths indicate at least unilateral conductive hearing loss (CHL), a serious sensory deprivation for a Pleistocene hunter-gatherer. This condition joins the meatal atresia of the Middle Pleistocene Atapuerca-SH Cr.4 in providing evidence of survival with conductive hearing loss (and hence serious sensory deprivation) among these Pleistocene humans. The presence of CHL in these fossils thereby reinforces the paleobiological and archeological evidence for supporting social matrices among these Pleistocene foraging peoples. PMID:29053746
An information processing view of framing effects: the role of causal schemas in decision making.
Jou, J; Shanteau, J; Harris, R J
1996-01-01
People prefer a sure gain to a probable larger gain when the two choices are presented from a gain perspective, but a probable larger loss to a sure loss when the objectively identical choices are presented from a loss perspective. Such reversals of preference due to the context of the problem are known as framing effects. In the present study, schema activation and subjects' interpretations of the problems were examined as sources of the framing effects. Results showed that such effects could be eliminated by introducing into a problem a causal schema that provided a rationale for the reciprocal relationship between the gains and the losses. Moreover, when subjects were freed from framing they were consistently risk seeking in decisions about human life, but risk averse in decisions about property. Irrationality in choice behaviors and the ecological implication of framing effects are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain where bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity to find a radiated power probability density function.
Dopaminergic Drug Effects on Probability Weighting during Risky Decision Making
Timmer, Monique H. M.; ter Huurne, Niels P.
2018-01-01
Dopamine has been associated with risky decision-making, as well as with pathological gambling, a behavioral addiction characterized by excessive risk-taking behavior. However, the specific mechanisms through which dopamine might act to foster risk-taking and pathological gambling remain elusive. Here we test the hypothesis that this might be achieved, in part, via modulation of subjective probability weighting during decision making. Human healthy controls (n = 21) and pathological gamblers (n = 16) played a decision-making task involving choices between sure monetary options and risky gambles both in the gain and loss domains. Each participant played the task twice, either under placebo or the dopamine D2/D3 receptor antagonist sulpiride, in a double-blind counterbalanced design. A prospect theory modelling approach was used to estimate subjective probability weighting and sensitivity to monetary outcomes. Consistent with prospect theory, we found that participants presented a distortion in the subjective weighting of probabilities, i.e., they overweighted low probabilities and underweighted moderate to high probabilities, both in the gain and loss domains. Compared with placebo, sulpiride attenuated this distortion in the gain domain. Across drugs, the groups did not differ in their probability weighting, although gamblers consistently underweighted losing probabilities in the placebo condition. Overall, our results reveal that dopamine D2/D3 receptor antagonism modulates the subjective weighting of probabilities in the gain domain, in the direction of more objective, economically rational decision making. PMID:29632870
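The phrase "overweighted low probabilities and underweighted moderate to high probabilities" refers to the nonlinear probability-weighting function of prospect theory. The one-parameter form below (Tversky and Kahneman, 1992) is a common choice and is shown only to illustrate the distortion; the study's exact parameterization is not given in the abstract, so the gamma values here are made up.

    # Illustrative one-parameter probability-weighting function (Tversky & Kahneman, 1992).
    # The paper's exact parameterization may differ; the gamma values are invented.
    import numpy as np

    def weight(p, gamma):
        """Subjective decision weight for an objective probability p."""
        return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

    p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
    print(weight(p, gamma=0.6))   # distorted: low p overweighted, high p underweighted
    print(weight(p, gamma=1.0))   # gamma = 1 recovers objective (linear) weighting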
Janiszewska-Olszowska, Joanna; Tandecka, Katarzyna; Szatkiewicz, Tomasz; Sporniak-Tutak, Katarzyna; Grocholewicz, Katarzyna
2014-09-10
This study presents a new method for direct, quantitative analysis of the enamel surface and measures adhesive remnants and enamel loss resulting from debonding molar tubes. Buccal surfaces of fifteen extracted human molars were directly scanned with an optic blue-light 3D scanner to the nearest 2 μm. After 20 s of etching, molar tubes were bonded; after 24 h of storage in 0.9% saline, they were debonded. Then 3D scanning was repeated. Superimposition and comparison were then performed, and shape alterations of the entire objects were analyzed using specialized computer software. Residual adhesive heights as well as enamel loss depths were obtained for the entire buccal surfaces. Residual adhesive volume and enamel loss volume were calculated for every tooth. The maximum height of adhesive remaining on the enamel surface was 0.76 mm, and the volume on particular teeth ranged from 0.047 mm³ to 4.16 mm³. The median adhesive remnant volume was 0.988 mm³. Mean depths of enamel loss for particular teeth ranged from 0.0076 mm to 0.0416 mm. The highest maximum depth of enamel loss was 0.207 mm. The median volume of enamel loss was 0.104 mm³, and the maximum volume was 1.484 mm³. Blue-light 3D scanning is able to provide direct, precise scans of the enamel surface, which can be superimposed in order to calculate shape alterations. Debonding molar tubes leaves a certain amount of adhesive remnants on the enamel; however, the interface fracture pattern varies for particular teeth, and areas of enamel loss are present as well.
Maximum and minimum return losses from a passive two-port network terminated with a mismatched load
NASA Technical Reports Server (NTRS)
Otoshi, T. Y.
1993-01-01
This article presents an analytical method for determining the exact distance a load is required to be offset from a passive two-port network to obtain maximum or minimum return losses from the terminated two-port network. Equations are derived in terms of two-port network S-parameters and load reflection coefficient. The equations are useful for predicting worst-case performances of some types of networks that are terminated with offset short-circuit loads.
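The article derives the offset distance in closed form; the underlying relation can also be explored numerically. The sketch below computes the input reflection coefficient of a two-port terminated in an offset mismatched load and sweeps the round-trip offset phase to locate the maximum and minimum return loss; the S-parameters and load reflection coefficient are invented examples, not values from the article.

    # Numerical sweep of load offset phase for a two-port terminated in a
    # mismatched load. The S-parameters and load magnitude are invented examples.
    import numpy as np

    S11, S12, S21, S22 = 0.1 + 0.05j, 0.95 + 0.0j, 0.95 + 0.0j, 0.08 - 0.02j
    gamma_load = 0.3                    # |reflection coefficient| of the offset load

    theta = np.linspace(0.0, 2.0 * np.pi, 3601)   # round-trip phase 2*beta*l of the offset
    gamma_L = gamma_load * np.exp(-1j * theta)
    gamma_in = S11 + (S12 * S21 * gamma_L) / (1.0 - S22 * gamma_L)

    return_loss_db = -20.0 * np.log10(np.abs(gamma_in))
    print("max return loss %.2f dB at phase %.3f rad"
          % (return_loss_db.max(), theta[return_loss_db.argmax()]))
    print("min return loss %.2f dB at phase %.3f rad"
          % (return_loss_db.min(), theta[return_loss_db.argmin()]))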
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
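A sketch of the criterion being maximized may make this concrete. Up to an additive constant, the published relative log-posterior for M equal-width bins given N data points with bin counts n_k is N ln M + ln Γ(M/2) − M ln Γ(1/2) − ln Γ(N + M/2) + Σ_k ln Γ(n_k + 1/2); the code below evaluates that formula directly and is not the optBINS package itself.

    # Relative log-posterior for M equal-width bins (piecewise-constant density,
    # multinomial likelihood, non-informative prior), written from the published
    # formula; this sketch is not the optBINS code itself.
    import numpy as np
    from scipy.special import gammaln

    def log_posterior_bins(data, m):
        n = len(data)
        counts, _ = np.histogram(data, bins=m)
        return (n * np.log(m)
                + gammaln(m / 2.0) - m * gammaln(0.5) - gammaln(n + m / 2.0)
                + np.sum(gammaln(counts + 0.5)))

    rng = np.random.default_rng(1)
    data = rng.normal(size=1000)
    m_values = np.arange(2, 101)
    log_post = [log_posterior_bins(data, m) for m in m_values]
    print("optimal number of bins:", m_values[int(np.argmax(log_post))])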
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
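Independently of the transform-inversion machinery, the classical-risk-model ruin probability can be cross-checked by brute-force simulation of the Cramér-Lundberg surplus process over a long finite horizon, since ruin can only occur at claim epochs. The sketch below does this for exponential claims, for which a closed-form answer exists for comparison; all parameters are illustrative, and this rough check is not the paper's method.

    # Crude finite-horizon Monte Carlo check of the ruin probability in the
    # classical (Cramer-Lundberg) risk model. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    u0, c = 10.0, 1.2            # initial surplus, premium rate
    lam, mu = 1.0, 1.0           # Poisson claim intensity, exponential claim-size mean
    horizon, n_paths = 500.0, 20_000

    ruined = 0
    for _ in range(n_paths):
        t, aggregate_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)          # next claim arrival
            if t > horizon:
                break
            aggregate_claims += rng.exponential(mu)
            if u0 + c * t - aggregate_claims < 0.0:  # ruin can only occur at claim epochs
                ruined += 1
                break
    print("simulated ruin probability:", ruined / n_paths)

    # Closed form for exponential claims: psi(u) = exp(-theta*u/((1+theta)*mu))/(1+theta)
    theta = c / (lam * mu) - 1.0
    print("exact value:", np.exp(-theta * u0 / ((1.0 + theta) * mu)) / (1.0 + theta))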
NASA Astrophysics Data System (ADS)
Cajiao Vélez, F.; Kamiński, J. Z.; Krajewska, K.
2018-04-01
High-energy photoionization driven by short, circularly polarized laser pulses is studied in the framework of the relativistic strong-field approximation. The saddle-point analysis of the integrals defining the probability amplitude is used to determine the general properties of the probability distributions. Additionally, an approximate solution to the saddle-point equation is derived. This leads to the concept of the three-dimensional spiral of life in momentum space, around which the ionization probability distribution is maximum. We demonstrate that such a spiral is also obtained from a classical treatment.
Postan, A
1987-03-01
The dynamics of a pulsed laser spot covering an optical aperture of a receiver is analyzed. This analysis includes the influence of diffraction, jitter, atmospheric absorption and scattering, and atmospheric turbulence. A simple expression for the probability of response of the receiver illuminated by the laser spot is derived. It is found that this probability would not always increase as the laser beam divergence decreases. Moreover, this probability has an optimum (maximum) with respect to the laser beam divergence or rather with respect to the diameter of the transmitting optics.
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
Regional interdisciplinary paleoflood approach to assess extreme flood potential
Jarrett, Robert D.; Tomlinson, Edward M.
2000-01-01
In the past decade, there has been a growing interest of dam safety officials to incorporate a risk‐based analysis for design‐flood hydrology. Extreme or rare floods, with probabilities in the range of about 10−3 to 10−7 chance of occurrence per year, are of continuing interest to the hydrologic and engineering communities for purposes of planning and design of structures such as dams [National Research Council, 1988]. The National Research Council stresses that as much information as possible about floods needs to be used for evaluation of the risk and consequences of any decision. A regional interdisciplinary paleoflood approach was developed to assist dam safety officials and floodplain managers in their assessments of the risk of large floods. The interdisciplinary components included documenting maximum paleofloods and a regional analyses of contemporary extreme rainfall and flood data to complement a site‐specific probable maximum precipitation study [Tomlinson and Solak, 1997]. The cost‐effective approach, which can be used in many other hydrometeorologic settings, was applied to Elkhead Reservoir in Elkhead Creek (531 km2) in northwestern Colorado; the regional study area was 10,900 km2. Paleoflood data using bouldery flood deposits and noninundation surfaces for 88 streams were used to document maximum flood discharges that have occurred during the Holocene. Several relative dating methods were used to determine the age of paleoflood deposits and noninundation surfaces. No evidence of substantial flooding was found in the study area. The maximum paleoflood of 135 m3 s−1 for Elkhead Creek is about 13% of the site‐specific probable maximum flood of 1020 m3 s−1. Flood‐frequency relations using the expected moments algorithm, which better incorporates paleoflood data, were developed to assess the risk of extreme floods. Envelope curves encompassing maximum rainfall (181 sites) and floods (218 sites) were developed for northwestern Colorado to help define maximum contemporary and Holocene flooding in Elkhead Creek and in a regional frequency context. Study results for Elkhead Reservoir were accepted by the Colorado State Engineer for dam safety certification.
Fukunaga, Rena; Brown, Joshua W.; Bogg, Tim
2012-01-01
The inferior frontal gyrus/anterior insula (IFG/AI) and anterior cingulate cortex (ACC) are key regions involved in risk appraisal during decision making, but accounts of how these regions contribute to decision-making under risk remain contested. To help clarify the roles of these and other related regions, we used a modified version of the Balloon Analogue Risk Task (Lejuez et al., 2002) to distinguish between decision-making and feedback-related processes when participants decided to pursue a gain as the probability of loss increased parametrically. Specifically, we set out to test whether ACC and IFG/AI regions correspond to loss-aversion at the time of decision making in a way that is not confounded with either reward-seeking or infrequency effects. When participants chose to discontinue inflating the balloon (win option), we observed greater ACC and mainly bilateral IFG/AI activity at the time of decision as the probability of explosion increased, consistent with increased loss-aversion but inconsistent with an infrequency effect. In contrast, we found robust vmPFC activity when participants chose to continue inflating the balloon (risky option), consistent with reward-seeking. However, in the cingulate and mainly bilateral IFG regions, BOLD activation decreased when participants chose to inflate the balloon as the probability of explosion increased, findings consistent with a reduced loss-aversion signal. Our results highlight the existence of distinct reward-seeking and loss-averse signals during decision-making, as well as the importance of distinguishing decision and feedback signals. PMID:22707378
Even under Obama's Plan, Pell Grants Trail Tuition
ERIC Educational Resources Information Center
Field, Kelly
2009-01-01
Making Pell Grants an entitlement and tying the maximum award to a measure of inflation, as President Obama has proposed, would probably yield larger awards and stop the cycle of shortfalls that have plagued the program. The president's plan, which would index the maximum award to the Consumer Price Index (CPI) plus one percentage point, probably…
Loss of thermal refugia near equatorial range limits.
Lima, Fernando P; Gomes, Filipa; Seabra, Rui; Wethey, David S; Seabra, Maria I; Cruz, Teresa; Santos, António M; Hilbish, Thomas J
2016-01-01
This study examines the importance of thermal refugia along the majority of the geographical range of a key intertidal species (Patella vulgata Linnaeus, 1758) on the Atlantic coast of Europe. We asked whether differences between sun-exposed and shaded microhabitats were responsible for differences in physiological stress and ecological performance and examined the availability of refugia near equatorial range limits. Thermal differences between sun-exposed and shaded microhabitats are consistently associated with differences in physiological performance, and the frequency of occurrence of high temperatures is most probably limiting the maximum population densities supported at any given place. Topographical complexity provides thermal refugia throughout most of the distribution range, although towards the equatorial edges the magnitude of the amelioration provided by shaded microhabitats is largely reduced. Importantly, the limiting effects of temperature, rather than being related to latitude, seem to be tightly associated with microsite variability, which therefore is likely to have profound effects on the way local populations (and consequently species) respond to climatic changes. © 2015 John Wiley & Sons Ltd.
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko
2012-01-01
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) from Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggested that optimization of randomization design is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
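To make the two trade-off measures concrete, the sketch below simulates the big stick design with imbalance tolerance b and records the maximum absolute imbalance and the correct-guess probability for a guesser who always picks the currently under-allocated arm; the simulation settings are mine and are not taken from the paper.

    # Maximum absolute imbalance and correct-guess (CG) probability for the big
    # stick design with imbalance tolerance b; guessing strategy: pick the arm
    # with fewer allocations so far. Settings are illustrative only.
    import numpy as np

    def simulate_bsd(n_subjects, b, n_trials=5000, seed=3):
        rng = np.random.default_rng(seed)
        max_imbalances, correct_guesses = [], 0
        for _ in range(n_trials):
            imbalance, worst = 0, 0                 # imbalance = N_A - N_B
            for _ in range(n_subjects):
                guess_a = imbalance < 0 or (imbalance == 0 and rng.random() < 0.5)
                if abs(imbalance) >= b:             # forced assignment to the lagging arm
                    assign_a = imbalance < 0
                else:
                    assign_a = rng.random() < 0.5
                correct_guesses += (guess_a == assign_a)
                imbalance += 1 if assign_a else -1
                worst = max(worst, abs(imbalance))
            max_imbalances.append(worst)
        return np.mean(max_imbalances), correct_guesses / (n_trials * n_subjects)

    print(simulate_bsd(n_subjects=50, b=3))   # (mean maximum |imbalance|, CG probability)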
Kinetic aspects of chain growth in Fischer-Tropsch synthesis.
Filot, Ivo A W; Zijlstra, Bart; Broos, Robin J P; Chen, Wei; Pestman, Robert; Hensen, Emiel J M
2017-04-28
Microkinetics simulations are used to investigate the elementary reaction steps that control chain growth in the Fischer-Tropsch reaction. Chain growth in the FT reaction on stepped Ru surfaces proceeds via coupling of CH and CR surface intermediates. Essential to the growth mechanism are C-H dehydrogenation and C hydrogenation steps, whose kinetic consequences have been examined by formulating two novel kinetic concepts, the degree of chain-growth probability control and the thermodynamic degree of chain-growth probability control. For Ru the CO conversion rate is controlled by the removal of O atoms from the catalytic surface. The temperature of maximum CO conversion rate is higher than the temperature to obtain maximum chain-growth probability. Both maxima are determined by Sabatier behavior, but the steps that control chain-growth probability are different from those that control the overall rate. Below the optimum for obtaining long hydrocarbon chains, the reaction is limited by the high total surface coverage: in the absence of sufficient vacancies the CHCHR → CCHR + H reaction is slowed down. Beyond the optimum in chain-growth probability, CHCR + H → CHCHR and OH + H → H2O limit the chain-growth process. The thermodynamic degree of chain-growth probability control emphasizes the critical role of the H and free-site coverage and shows that at high temperature, chain depolymerization contributes to the decreased chain-growth probability. That is to say, during the FT reaction chain growth is much faster than chain depolymerization, which ensures high chain-growth probability. The chain-growth rate is also fast compared to chain-growth termination and the steps that control the overall CO conversion rate, which are O removal steps for Ru.
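The practical meaning of the chain-growth probability α is fixed by the standard Anderson-Schulz-Flory relation between α and the hydrocarbon product spectrum; the sketch below shows only that textbook relation, not the paper's microkinetics or its degree-of-control analysis, and the α values are arbitrary.

    # Anderson-Schulz-Flory weight fractions implied by a chain-growth
    # probability alpha (textbook relation, not the paper's microkinetics).
    import numpy as np

    def asf_weight_fractions(alpha, n_max=50):
        n = np.arange(1, n_max + 1)
        return n, n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

    n, w = asf_weight_fractions(alpha=0.85)
    print("C5+ weight fraction: %.2f" % w[4:].sum())   # high alpha -> long chains
    n, w = asf_weight_fractions(alpha=0.60)
    print("C5+ weight fraction: %.2f" % w[4:].sum())   # lower alpha -> lighter products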
Wu, Zhiwei; He, Hong S; Liu, Zhihua; Liang, Yu
2013-06-01
Fuel load is often used to prioritize stands for fuel reduction treatments. However, wildfire size and intensity are not only related to fuel loads but also to a wide range of other spatially related factors such as topography, weather and human activity. In prioritizing fuel reduction treatments, we propose using burn probability to account for the effects of spatially related factors that can affect wildfire size and intensity. Our burn probability incorporated fuel load, ignition probability, and spread probability (spatial controls to wildfire) at a particular location across a landscape. Our goal was to assess differences in reducing wildfire size and intensity using fuel-load and burn-probability based treatment prioritization approaches. Our study was conducted in a boreal forest in northeastern China. We derived a fuel load map from a stand map and a burn probability map based on historical fire records and potential wildfire spread pattern. The burn probability map was validated using historical records of burned patches. We then simulated 100 ignitions and six fuel reduction treatments to compare fire size and intensity under two approaches of fuel treatment prioritization. We calibrated and validated simulated wildfires against historical wildfire data. Our results showed that fuel reduction treatments based on burn probability were more effective at reducing simulated wildfire size, mean and maximum rate of spread, and mean fire intensity, but less effective at reducing maximum fire intensity across the burned landscape than treatments based on fuel load. Thus, contributions from both fuels and spatially related factors should be considered for each fuel reduction treatment. Published by Elsevier B.V.
Rao, Uma; Sidhartha, Tanuj; Harker, Karen R.; Bidesi, Anup S.; Chen, Li-Ann; Ernst, Monique
2010-01-01
Purpose The goal of the study was to assess individual differences in risk-taking behavior among adolescents in the laboratory. A second aim was to evaluate whether the laboratory-based risk-taking behavior is associated with other behavioral and psychological measures associated with risk-taking behavior. Methods Eighty-two adolescents with no personal history of psychiatric disorder completed a computerized decision-making task, the Wheel of Fortune (WOF). By offering choices between clearly defined probabilities and real monetary outcomes, this task assesses risk preferences when participants are confronted with potential rewards and losses. The participants also completed a variety of behavioral and psychological measures associated with risk-taking behavior. Results Performance on the task varied based on the probability and anticipated outcomes. In the winning sub-task, participants selected low probability-high magnitude reward (high-risk choice) less frequently than high probability-low magnitude reward (low-risk choice). In the losing sub-task, participants selected low probability-high magnitude loss more often than high probability-low magnitude loss. On average, the selection of probabilistic rewards was optimal and similar to performance in adults. There were, however, individual differences in performance, and one-third of the adolescents made high-risk choice more frequently than low-risk choice while selecting a reward. After controlling for sociodemographic and psychological variables, high-risk choice on the winning task predicted “real-world” risk-taking behavior and substance-related problems. Conclusions These findings highlight individual differences in risk-taking behavior. Preliminary data on face validity of the WOF task suggest that it might be a valuable laboratory tool for studying behavioral and neurobiological processes associated with risk-taking behavior in adolescents. PMID:21257113
Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery
NASA Technical Reports Server (NTRS)
Le Vie, Lisa R.
2016-01-01
In-flight loss of control is the primary cause of fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time, and recovery time, and whether that input was correct or incorrect. Other metrics included are: the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading are reviewed as well.
How are flood risk estimates affected by the choice of return-periods?
NASA Astrophysics Data System (ADS)
Ward, P. J.; de Moel, H.; Aerts, J. C. J. H.
2011-12-01
Flood management is more and more adopting a risk based approach, whereby flood risk is the product of the probability and consequences of flooding. One of the most common approaches in flood risk assessment is to estimate the damage that would occur for floods of several exceedance probabilities (or return periods), to plot these on an exceedance probability-loss curve (risk curve) and to estimate risk as the area under the curve. However, there is little insight into how the selection of the return-periods (which ones and how many) used to calculate risk actually affects the final risk calculation. To gain such insights, we developed and validated an inundation model capable of rapidly simulating inundation extent and depth, and dynamically coupled this to an existing damage model. The method was applied to a section of the River Meuse in the southeast of the Netherlands. Firstly, we estimated risk based on a risk curve using yearly return periods from 2 to 10 000 yr (€ 34 million p.a.). We found that the overall risk is greatly affected by the number of return periods used to construct the risk curve, with over-estimations of annual risk between 33% and 100% when only three return periods are used. In addition, binary assumptions on dike failure can have a large effect (a factor two difference) on risk estimates. Also, the minimum and maximum return period considered in the curve affects the risk estimate considerably. The results suggest that more research is needed to develop relatively simple inundation models that can be used to produce large numbers of inundation maps, complementary to more complex 2-D-3-D hydrodynamic models. It also suggests that research into flood risk could benefit by paying more attention to the damage caused by relatively high probability floods.
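The sensitivity to the chosen return periods comes directly from how risk is computed: the expected annual damage is the area under the exceedance probability-loss curve, and a curve supported on only a few return periods interpolates (and may overestimate) the damage between them. A minimal sketch with invented damage figures:

    # Expected annual damage as the area under the exceedance-probability/loss
    # curve (trapezoidal rule). All damage figures are invented for illustration.
    import numpy as np

    def expected_annual_damage(return_periods, damages):
        p = 1.0 / np.asarray(return_periods, dtype=float)   # exceedance probabilities
        d = np.asarray(damages, dtype=float)
        order = np.argsort(p)
        p, d = p[order], d[order]
        return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(p)))

    # Dense curve (many return periods) vs. a sparse three-point curve.
    dense_T = [2, 5, 10, 25, 50, 100, 250, 500, 1000, 10000]
    dense_D = [0, 0, 0, 0, 0, 10, 200, 300, 400, 500]        # million EUR
    sparse_T = [10, 100, 1000]
    sparse_D = [0, 10, 400]

    print(expected_annual_damage(dense_T, dense_D))    # reference estimate
    print(expected_annual_damage(sparse_T, sparse_D))  # the three-point curve overestimates it here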
Evaluation of Loss Due to Storm Surge Disasters in China Based on Econometric Model Groups.
Jin, Xue; Shi, Xiaoxia; Gao, Jintian; Xu, Tongbin; Yin, Kedong
2018-03-27
Storm surge has become an important factor restricting the economic and social development of China's coastal regions. In order to improve the scientific judgment of future storm surge damage, a method of model groups is proposed to refine the evaluation of the loss due to storm surges. Due to the relative dispersion and poor regularity of the natural property data (central air pressure at landfall, maximum wind speed, maximum storm water, super warning water level, etc.), storm surge disaster is divided based on eight kinds of storm surge disaster grade division methods combined with storm surge water, hypervigilance tide level, and disaster loss. The storm surge disaster loss measurement model groups consist of eight equations, and six major modules are constructed: storm surge disaster in agricultural loss, fishery loss, human resource loss, engineering facility loss, living facility loss, and direct economic loss. Finally, the support vector machine (SVM) model is used to evaluate the loss and the intra-sample prediction. It is indicated that the equations of the model groups can reflect in detail the relationship between the damage of storm surges and other related variables. Based on a comparison of the original value and the predicted value error, the model groups pass the test, providing scientific support and a decision basis for the early layout of disaster prevention and mitigation.
An inexact risk management model for agricultural land-use planning under water shortage
NASA Astrophysics Data System (ADS)
Li, Wei; Feng, Changchun; Dai, Chao; Li, Yongping; Li, Chunhui; Liu, Ming
2016-09-01
Water resources availability has a significant impact on agricultural land-use planning, especially in a water shortage area such as North China. The random nature of available water resources and other uncertainties in an agricultural system present risk for land-use planning and may lead to undesirable decisions or potential economic loss. In this study, an inexact risk management model (IRM) was developed for supporting agricultural land-use planning and risk analysis under water shortage. The IRM model was formulated through incorporating a conditional value-at-risk (CVaR) constraint into an inexact two-stage stochastic programming (ITSP) framework, and could be used to control uncertainties expressed as not only probability distributions but also as discrete intervals. The measure of risk about the second-stage penalty cost was incorporated into the model so that the trade-off between system benefit and extreme expected loss could be analyzed. The developed model was applied to a case study in the Zhangweinan River Basin, a typical agricultural region facing serious water shortage in North China. Solutions of the IRM model showed that the obtained first-stage land-use target values could be used to reflect decision-makers' opinions on the long-term development plan. The confidence level α and maximum acceptable risk loss β could be used to reflect decision-makers' preference towards system benefit and risk control. The results indicated that the IRM model was useful for reflecting the decision-makers' attitudes toward risk aversion and could help seek cost-effective agricultural land-use planning strategies under complex uncertainties.
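The CVaR constraint at the heart of the IRM formulation has a simple scenario form: for confidence level α, it caps the expected loss in the worst (1 − α) fraction of scenarios at the maximum acceptable risk loss β. The full inexact two-stage program is not reproduced here; the sketch shows only that building block with invented scenario losses.

    # Value-at-risk and conditional value-at-risk of a set of scenario losses.
    # Scenario losses below are invented; in the IRM model a constraint of the
    # form CVaR_alpha(loss) <= beta caps the expected loss in the worst scenarios.
    import numpy as np

    def var_cvar(losses, alpha=0.95):
        losses = np.asarray(losses, dtype=float)
        var = np.quantile(losses, alpha)       # value-at-risk at level alpha
        cvar = losses[losses >= var].mean()    # mean loss in the worst (1 - alpha) tail
        return var, cvar

    rng = np.random.default_rng(4)
    scenario_losses = rng.gamma(shape=2.0, scale=100.0, size=10_000)  # second-stage penalties
    print(var_cvar(scenario_losses, alpha=0.95))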
Probability of Loss of Crew Achievability Studies for NASA's Exploration Systems Development
NASA Technical Reports Server (NTRS)
Boyer, Roger L.; Bigler, Mark A.; Rogers, James H.
2015-01-01
Over the last few years, NASA has been evaluating various vehicle designs for multiple proposed design reference missions (DRM) beyond low Earth orbit in support of its Exploration Systems Development (ESD) programs. This paper addresses several of the proposed missions and the analysis techniques used to assess the key risk metric, probability of loss of crew (LOC). Probability of LOC is a metric used to assess the safety risk as well as a design requirement. These assessments or studies were categorized as LOC achievability studies to help inform NASA management as to what "ball park" estimates of probability of LOC could be achieved for each DRM and were eventually used to establish the corresponding LOC requirements. Given that details of the vehicles and mission are not well known at this time, the ground rules, assumptions, and consistency across the programs become the important basis of the assessments as well as for the decision makers to understand.
Decision-making for risky gains and losses among college students with Internet gaming disorder.
Yao, Yuan-Wei; Chen, Pin-Ru; Li, Song; Wang, Ling-Jiao; Zhang, Jin-Tao; Yip, Sarah W; Chen, Gang; Deng, Lin-Yuan; Liu, Qin-Xue; Fang, Xiao-Yi
2015-01-01
Individuals with Internet gaming disorder (IGD) tend to exhibit disadvantageous risky decision-making not only in their real life but also in laboratory tasks. Decision-making is a complex multifaceted function and different cognitive processes are involved in decision-making for gains and losses. However, the relationship between impaired decision-making and gain versus loss processing in the context of IGD is poorly understood. The main aim of the present study was to separately evaluate decision-making for risky gains and losses among college students with IGD using the Cups task. Additionally, we further examined the effects of outcome magnitude and probability level on decision-making related to risky gains and losses respectively. Sixty college students with IGD and 42 matched healthy controls (HCs) participated. Results indicated that IGD subjects exhibited generally greater risk taking tendencies than HCs. In comparison to HCs, IGD subjects made more disadvantageous risky choices in the loss domain (but not in the gain domain). Follow-up analyses indicated that the impairment was associated with insensitivity to changes in outcome magnitude and probability level for risky losses among IGD subjects. In addition, higher Internet addiction severity scores were associated with the percentage of disadvantageous risky options in the loss domain. These findings emphasize the effect of insensitivity to losses on disadvantageous decisions under risk in the context of IGD, which has implications for future intervention studies.
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
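Because the entry above turns on probability-proportional-to-size (size-biased) selection, a minimal sketch of PPS sampling with replacement and a Hansen-Hurwitz estimate of a population total follows; the size measures and attribute values are synthetic.

```python
# Illustrative sketch of probability-proportional-to-size (PPS) selection:
# each unit's selection probability is proportional to its size measure.
import numpy as np

rng = np.random.default_rng(1)
sizes = rng.lognormal(mean=2.0, sigma=0.8, size=500)   # hypothetical size measures
p = sizes / sizes.sum()                                # selection probabilities

sample_idx = rng.choice(sizes.size, size=20, replace=True, p=p)

# Hansen-Hurwitz estimator of the population total of an attribute y
y = 3.0 * sizes + rng.normal(0, 1, sizes.size)         # attribute correlated with size
total_hat = np.mean(y[sample_idx] / p[sample_idx])
print("estimated total:", total_hat, " true total:", y.sum())
```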
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models; it is an essential step in simulation analysis and stochastic optimization. We adopt a constrained maximum...
Brown, Michelle L.; Donovan, Therese; Schwenk, W. Scott; Theobald, David M.
2014-01-01
Forest loss and fragmentation are among the largest threats to forest-dwelling wildlife species today, and projected increases in human population growth are expected to increase these threats in the next century. We combined spatially-explicit growth models with wildlife distribution models to predict the effects of human development on 5 forest-dependent bird species in Vermont, New Hampshire, and Massachusetts, USA. We used single-species occupancy models to derive the probability of occupancy for each species across the study area in the years 2000 and 2050. Over half a million new housing units were predicted to be added to the landscape. The maximum change in housing density was nearly 30 houses per hectare; however, 30% of the towns in the study area were projected to add less than 1 housing unit per hectare. In the face of predicted human growth, the overall occupancy of each species decreased by as much as 38% (ranging from 19% to 38% declines in the worst-case scenario) in the year 2050. These declines were greater outside of protected areas than within protected lands. Ninety-seven percent of towns experienced some decline in species occupancy within their borders, highlighting the value of spatially-explicit models. The mean decrease in occupancy probability within towns ranged from 3% for hairy woodpecker to 8% for ovenbird and hermit thrush. Reductions in occupancy probability occurred on the perimeters of cities and towns where exurban development is predicted to increase in the study area. This spatial approach to wildlife planning provides data to evaluate trade-offs between development scenarios and forest-dependent wildlife species.
Probable flood predictions in ungauged coastal basins of El Salvador
Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.
2008-01-01
A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.
NASA Astrophysics Data System (ADS)
Yusof, Muhammad Mat; Sulaiman, Tajularipin; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini
2014-12-01
In professional sporting events, rating competitors before the tournament starts is a well-known approach to distinguish the favorite team from the weaker teams. Various methodologies are used to rate competitors. In this paper, we explore four ways to rate competitors: least squares rating, maximum likelihood strength ratio, standing points in a large round-robin simulation, and previous league rank position. The tournament metric used to evaluate the different rating approaches is the tournament outcome characteristics measure, defined as the probability that a particular team in the top 100q pre-tournament rank percentile progresses beyond round R, for all q and R. Based on the simulation results, we found that different rating approaches produce different effects on the teams. For eight teams participating in a knockout with standard seeding, Perak has the highest probability of winning a tournament that uses the least squares rating approach, PKNS has the highest probability of winning under the maximum likelihood strength ratio and the large round-robin simulation approaches, while Perak has the highest probability of winning a tournament using the previous league season approach.
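A minimal sketch of one of the four approaches, a least-squares (Massey-style) rating computed from score differences, is given below; the team names, match results, and normalization are illustrative assumptions, not the paper's data.

```python
# Minimal least-squares (Massey-style) rating sketch from score differences.
import numpy as np

teams = ["Perak", "PKNS", "TeamC", "TeamD"]            # names are illustrative
games = [(0, 1, 2), (1, 2, 1), (2, 3, -1), (0, 3, 3), (1, 3, 2), (0, 2, 1)]
# each tuple: (home index, away index, home score minus away score)

n = len(teams)
M = np.zeros((n, n))
b = np.zeros(n)
for i, j, diff in games:
    M[i, i] += 1; M[j, j] += 1
    M[i, j] -= 1; M[j, i] -= 1
    b[i] += diff; b[j] -= diff

# Massey's fix: replace the last row so the ratings sum to zero (removes singularity)
M[-1, :] = 1.0
b[-1] = 0.0
ratings = np.linalg.solve(M, b)
for t, r in sorted(zip(teams, ratings), key=lambda x: -x[1]):
    print(f"{t}: {r:+.2f}")
```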
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.
Study of High-Efficiency Motors Using Soft Magnetic Cores
NASA Astrophysics Data System (ADS)
Tokoi, Hirooki; Kawamata, Shoichi; Enomoto, Yuji
We have developed a small and highly efficient axial gap motor whose stator core is made of a soft magnetic core. First, the loss sensitivities to various motor design parameters were evaluated using magnetic field analysis. It was found that the pole number and core dimensions had low sensitivity (≤ 2.2 dB) in terms of the total loss, which is the sum of the copper loss and the iron losses in the stator core and the rotor yoke, respectively. From this, we concluded that to improve the motor efficiency, it is essential to reduce the iron loss in the rotor yoke and minimize the other losses. With this in mind, a prototype axial gap motor was manufactured and tested. The motor has four poles and six slots; it is 123 mm in diameter and 47 mm in axial length. The rotor has parallel-magnetized magnets and a rotor yoke made of magnetic steel sheets. The maximum measured motor efficiency is 93%, which roughly agrees with the maximum calculated efficiency of 95%.
NASA Technical Reports Server (NTRS)
Subramanyam, Guru; VanKeuls, Fred W.; Miranda, Felix A.; Canedy, Chadwick L.; Aggarwal, Sanjeev; Venkatesan, Thirumalai; Ramesh, Ramamoorthy
2000-01-01
The correlation of electric field and critical design parameters such as the insertion loss, frequency tunability, return loss, and bandwidth of conductor/ferroelectric/dielectric microstrip tunable K-band microwave filters is discussed in this work. This work is based primarily on barium strontium titanate (BSTO) ferroelectric thin film based tunable microstrip filters for room temperature applications. Two new parameters, which we believe will simplify the evaluation of ferroelectric thin films for tunable microwave filters, are defined. The first of these, called the sensitivity parameter, is defined as the incremental change in center frequency with incremental change in maximum applied electric field (EPEAK) in the filter. The other, the loss parameter, is defined as the incremental or decremental change in insertion loss of the filter with incremental change in maximum applied electric field. At room temperature, the Au/BSTO/LAO microstrip filters exhibited a sensitivity parameter value between 15 and 5 MHz/cm/kV. The loss parameter varied for different bias configurations used for electrically tuning the filter. The loss parameter varied from 0.05 to 0.01 dB/cm/kV at room temperature.
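The two parameters defined above are simple finite differences, as the sketch below illustrates with made-up numbers (the field, frequency, and insertion-loss values are not measured data):

```python
# Finite-difference evaluation of the two filter parameters defined above
# (numbers are illustrative, not measured data).
e_peak = [0.0, 20.0, 40.0, 60.0]          # maximum applied field, kV/cm
f_center = [19.1, 19.4, 19.8, 20.0]       # center frequency, GHz
ins_loss = [5.2, 5.0, 4.8, 4.7]           # insertion loss, dB

for k in range(1, len(e_peak)):
    dE = e_peak[k] - e_peak[k - 1]
    sens = 1e3 * (f_center[k] - f_center[k - 1]) / dE   # MHz per (kV/cm)
    loss_par = (ins_loss[k] - ins_loss[k - 1]) / dE     # dB per (kV/cm)
    print(f"{e_peak[k - 1]:>4.0f} -> {e_peak[k]:>4.0f} kV/cm: "
          f"sensitivity = {sens:5.1f} MHz/(kV/cm), loss parameter = {loss_par:+.3f} dB/(kV/cm)")
```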
Asymptotic properties of a bold random walk
NASA Astrophysics Data System (ADS)
Serva, Maurizio
2014-08-01
In a recent paper we proposed a non-Markovian random walk model with memory of the maximum distance ever reached from the starting point (home). The behavior of the walker is different from the simple symmetric random walk only when she is at this maximum distance, where, having the choice to move either farther or closer, she decides with different probabilities. If the probability of a forward step is higher than the probability of a backward step, the walker is bold and her behavior turns out to be superdiffusive; otherwise she is timorous and her behavior turns out to be subdiffusive. The scaling behavior varies continuously from subdiffusive (timorous) to superdiffusive (bold) according to a single parameter γ ∈ R. We investigate here the asymptotic properties of the bold case in the nonballistic region γ ∈ [0, 1/2], a problem which was left partially unsolved previously. The exact results proved in this paper require new probabilistic tools which rely on the construction of appropriate martingales of the random walk and its hitting times.
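A direct simulation of the walk described above is straightforward; the sketch below steps forward with probability p when the walker sits at her running maximum and symmetrically otherwise, tracking for simplicity only the positive-side maximum, with an illustrative parameterization rather than the paper's γ scaling:

```python
# Direct simulation of the walk described above: at the running maximum the
# walker steps forward with probability p and back with 1 - p; elsewhere the
# walk is simple and symmetric. (For simplicity only the positive-side maximum
# is tracked; the parameterization is illustrative.)
import numpy as np

def bold_walk(steps, p_forward, seed=0):
    rng = np.random.default_rng(seed)
    x, x_max = 0, 0
    trajectory = np.empty(steps, dtype=int)
    for t in range(steps):
        if x == x_max:
            step = 1 if rng.random() < p_forward else -1
        else:
            step = 1 if rng.random() < 0.5 else -1
        x += step
        x_max = max(x_max, x)
        trajectory[t] = x
    return trajectory

traj = bold_walk(100_000, p_forward=0.7)     # p > 1/2: "bold" walker
print("final position:", traj[-1], " max distance reached:", traj.max())
```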
Maximum caliber inference of nonequilibrium processes
NASA Astrophysics Data System (ADS)
Otten, Moritz; Stock, Gerhard
2010-07-01
Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
The Maximums and Minimums of a Polynomial or Maximizing Profits and Minimizing Aircraft Losses.
ERIC Educational Resources Information Center
Groves, Brenton R.
1984-01-01
Plotting a polynomial over the range of real numbers when its derivative contains complex roots is discussed. The polynomials are graphed by calculating the minimums, maximums, and zeros of the function. (MNS)
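A short sketch of the procedure the note describes, locating maxima and minima from the real roots of the derivative, might look as follows (the example cubic is arbitrary):

```python
# Locating maxima and minima of a polynomial from the real roots of its derivative.
import numpy as np
from numpy.polynomial import Polynomial

p = Polynomial([2.0, -9.0, 0.0, 1.0])             # 2 - 9x + x^3 (coefficients low -> high)
crit = p.deriv().roots()                          # roots of the first derivative
real_crit = crit[np.isclose(crit.imag, 0)].real   # keep real critical points only

for x0 in np.sort(real_crit):
    kind = "minimum" if p.deriv(2)(x0) > 0 else "maximum"   # second-derivative test
    print(f"x = {x0:+.4f}: local {kind}, p(x) = {p(x0):+.4f}")
```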
Scalar pair production in a magnetic field in de Sitter universe
NASA Astrophysics Data System (ADS)
Băloi, Mihaela-Andreea; Crucean, Cosmin; Popescu, Diana
2018-05-01
The production of scalar particles by the dipole magnetic field in the de Sitter expanding universe is analyzed. The amplitude and probability of transition are computed using perturbative methods. A graphical study of the transition probability shows that the rate of pair production is important in the early universe. Our results prove that in the process of pair production by the external magnetic field the momentum conservation law is broken. We also found that the probabilities are maximum when the particles are emitted perpendicular to the direction of the magnetic dipole moment. The total probability is computed and analyzed in terms of the angle between the particles' momenta.
A probabilistic strategy for parametric catastrophe insurance
NASA Astrophysics Data System (ADS)
Figueiredo, Rui; Martina, Mario; Stephenson, David; Youngman, Benjamin
2017-04-01
Economic losses due to natural hazards have shown an upward trend since 1980, which is expected to continue. Recent years have seen a growing worldwide commitment towards the reduction of disaster losses. This requires effective management of disaster risk at all levels, a part of which involves reducing financial vulnerability to disasters ex-ante, ensuring that necessary resources will be available following such events. One way to achieve this is through risk transfer instruments. These can be based on different types of triggers, which determine the conditions under which payouts are made after an event. This study focuses on parametric triggers, where payouts are determined by the occurrence of an event exceeding specified physical parameters at a given location, or at multiple locations, or over a region. This type of product offers a number of important advantages, and its adoption is increasing. The main drawback of parametric triggers is their susceptibility to basis risk, which arises when there is a mismatch between triggered payouts and the occurrence of loss events. This is unavoidable in said programmes, as their calibration is based on models containing a number of different sources of uncertainty. Thus, a deterministic definition of the loss event triggering parameters appears flawed. However, often for simplicity, this is the way in which most parametric models tend to be developed. This study therefore presents an innovative probabilistic strategy for parametric catastrophe insurance. It is advantageous as it recognizes uncertainties and minimizes basis risk while maintaining a simple and transparent procedure. A logistic regression model is constructed here to represent the occurrence of loss events based on certain loss index variables, obtained through the transformation of input environmental variables. Flood-related losses due to rainfall are studied. The resulting model is able, for any given day, to issue probabilities of occurrence of loss events. Due to the nature of parametric programmes, it is still necessary to clearly define when a payout is due or not, and so a decision threshold probability above which a loss event is considered to occur must be set, effectively converting the issued probabilities into deterministic binary outcomes. Model skill and value are evaluated over the range of possible threshold probabilities, with the objective of defining the optimal one. The predictive ability of the model is assessed. In terms of value assessment, a decision model is proposed, allowing users to quantify monetarily their expected expenses when different combinations of model event triggering and actual event occurrence take place, directly tackling the problem of basis risk.
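A minimal sketch of the general idea, a logistic model issuing loss-event probabilities that a decision threshold converts into binary payouts, is shown below; the loss index, synthetic event record, and threshold value are assumptions, not the study's calibrated model:

```python
# Sketch of the general idea: a logistic model issues daily loss-event
# probabilities, and a decision threshold converts them into binary payouts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
rain_index = rng.gamma(shape=2.0, scale=15.0, size=1000)        # hypothetical loss index
true_prob = 1.0 / (1.0 + np.exp(-(rain_index - 60.0) / 8.0))
loss_event = rng.random(1000) < true_prob                        # synthetic event record

clf = LogisticRegression().fit(rain_index.reshape(-1, 1), loss_event)
p_event = clf.predict_proba(rain_index.reshape(-1, 1))[:, 1]

threshold = 0.4                                                  # illustrative decision threshold
payout = p_event >= threshold
hits = np.sum(payout & loss_event)
false_alarms = np.sum(payout & ~loss_event)
misses = np.sum(~payout & loss_event)
print(f"hits={hits}, false alarms={false_alarms}, misses={misses}  (basis-risk components)")
```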
Ensrud, Kristine E.; Harrison, Stephanie L.; Cauley, Jane A.; Langsetmo, Lisa; Schousboe, John T.; Kado, Deborah M.; Gourlay, Margaret L.; Lyons, Jennifer G.; Fredman, Lisa; Napoli, Nicolas; Crandall, Carolyn J.; Lewis, Cora E.; Orwoll, Eric S.; Stefanick, Marcia L.; Cawthon, Peggy M.
2017-01-01
To determine the association of weight loss with risk of clinical fractures at the hip, spine and pelvis (central body fractures [CBF]) in older men with and without accounting for the competing risk of mortality, we used data from 4,523 men (mean age 77.5 years). Weight change between baseline and follow-up (mean 4.5 years between examinations) was categorized as moderate loss (loss ≥10%), mild loss (loss 5% to <10%), stable (<5% change) or gain (gain ≥5%). Participants were contacted every 4 months after the follow-up examination to ascertain vital status (deaths verified by death certificates) and ask about fractures (confirmed by radiographic reports). Absolute probability of CBF by weight change category was estimated using traditional Kaplan-Meier method and cumulative incidence function accounting for competing mortality risk. Risk of CBF by weight change category was determined using conventional Cox proportional hazards regression and subdistribution hazards models with death as a competing risk. During an average of 8 years, 337 men (7.5%) experienced CBF and 1,569 (34.7%) died before experiencing this outcome. Among men with moderate weight loss, CBF probability was 6.8% at 5 years and 16.9% at 10 years using Kaplan-Meier vs. 5.7% at 5 years and 10.2% at 10 years using a competing risk approach. Men with moderate weight loss compared with those with stable weight had a 1.6-fold higher adjusted risk of CBF (HR 1.59, 95% CI 1.06–2.38) using Cox models that was substantially attenuated in models accounting for competing mortality risk and no longer significant (subdistribution HR 1.16, 95% CI 0.77–1.75). Results were similar in analyses substituting hip fracture for CBF. Older men with weight loss who survive are at increased risk of CBF, including hip fracture. However, ignoring the competing mortality risk among men with weight loss substantially overestimates their long-term fracture probability and relative fracture risk. PMID:27739103
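The gap between the two estimates reported above comes from treating deaths as ordinary censoring; the synthetic sketch below contrasts the naive 1 − Kaplan-Meier estimate with the Aalen-Johansen cumulative incidence function under invented event-time distributions (not the study's data):

```python
# Synthetic contrast between 1 - Kaplan-Meier and the cumulative incidence
# function (CIF) when a competing risk removes subjects before the event.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
t_frac = rng.exponential(20.0, n)     # hypothetical time to fracture (years)
t_death = rng.exponential(8.0, n)     # hypothetical time to death (competing risk)
t_cens = np.full(n, 10.0)             # administrative censoring at 10 years

time = np.minimum.reduce([t_frac, t_death, t_cens])
event = np.where(t_frac <= np.minimum(t_death, t_cens), 1,
                 np.where(t_death <= t_cens, 2, 0))   # 1=fracture, 2=death, 0=censored

order = np.argsort(time)
time, event = time[order], event[order]
at_risk = n - np.arange(n)            # number still at risk at each ordered time

km_surv, cif, naive = 1.0, 0.0, 1.0
for i in range(n):
    if event[i] == 1:
        cif += km_surv / at_risk[i]             # Aalen-Johansen increment
        naive *= 1.0 - 1.0 / at_risk[i]         # KM treating death as censoring
    if event[i] in (1, 2):
        km_surv *= 1.0 - 1.0 / at_risk[i]       # overall event-free survival

print(f"10-year fracture probability: naive 1-KM = {1.0 - naive:.3f}, CIF = {cif:.3f}")
```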
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10⁷ π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Scaling exponents for ordered maxima
Ben-Naim, E.; Krapivsky, P. L.; Lemons, N. W.
2015-12-22
We study extreme value statistics of multiple sequences of random variables. For each sequence with N variables, independently drawn from the same distribution, the running maximum is defined as the largest variable to date. We compare the running maxima of m independent sequences and investigate the probability S_N that the maxima are perfectly ordered, that is, the running maximum of the first sequence is always larger than that of the second sequence, which is always larger than the running maximum of the third sequence, and so on. The probability S_N is universal: it does not depend on the distribution from which the random variables are drawn. For two sequences, S_N ~ N^(-1/2), and in general, the decay is algebraic, S_N ~ N^(-σ_m), for large N. We analytically obtain the exponent σ_3 ≅ 1.302931 as the root of a transcendental equation. Moreover, the exponents σ_m grow with m, and we show that σ_m ~ m for large m.
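A quick Monte Carlo check of the claimed universality and of the N^(-1/2) decay for m = 2 sequences can be sketched as follows (trial counts and distributions chosen arbitrarily):

```python
# Monte Carlo check of the ordered-maxima probability S_N for m = 2 sequences:
# the decay S_N ~ N^(-1/2) and the distribution-free (universal) behavior.
import numpy as np

rng = np.random.default_rng(4)

def ordered_maxima_prob(N, trials, sampler):
    hits = 0
    for _ in range(trials):
        a = np.maximum.accumulate(sampler(N))   # running maximum of sequence 1
        b = np.maximum.accumulate(sampler(N))   # running maximum of sequence 2
        hits += np.all(a > b)                   # strictly ordered at every step
    return hits / trials

for N in (10, 100, 1000):
    p_unif = ordered_maxima_prob(N, 10000, lambda n: rng.random(n))
    p_exp = ordered_maxima_prob(N, 10000, lambda n: rng.exponential(size=n))
    print(f"N={N:5d}: uniform {p_unif:.4f}, exponential {p_exp:.4f}, N^-1/2 = {N**-0.5:.4f}")
```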
NASA Astrophysics Data System (ADS)
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2018-01-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
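For reference, the statistical PMP estimate that such a frequency-factor envelope feeds is usually written in the Hershfield form; the expression below is that generic formula, not necessarily the paper's exact master equation:

$$\mathrm{PMP}_d \;\approx\; \bar{X}_{n,d} + k_m\, S_{n,d},$$

where $\bar{X}_{n,d}$ and $S_{n,d}$ are the mean and standard deviation of the annual-maximum rainfall series for duration $d$, and $k_m$ is the enveloping frequency factor.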
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
NASA Astrophysics Data System (ADS)
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
NASA Technical Reports Server (NTRS)
Freeman, Hugh B.
1935-01-01
Tests were made in the N.A.C.A. 20-foot wind tunnel on: (1) a wing, of 6.5-foot span, 5.5-foot chord, and 30 percent maximum thickness, fitted with large end plates and (2) a 16-foot span 2.67-foot chord wing of 15 percent maximum thickness to determine the increase in lift obtainable by removing the boundary layer and the power required for the blower. The results of the tests on the stub wing appeared more favorable than previous small-scale tests and indicated that: (1) the suction method was considerably superior to the pressure method, (2) single slots were more effective than multiple slots (where the same pressure was applied to all slots), the slot efficiency increased rapidly for increasing slot widths up to 2 percent of the wing chord and remained practically constant for all larger widths tested, (3) suction pressure and power requirements were quite low (a computation for a light airplane showed that a lift coefficient of 3.0 could be obtained with a suction as low as 2.3 times the dynamic pressure and a power expenditure less than 3 percent of the rated engine power), and (4) the volume of air required to be drawn off was quite high (approximately 0.5 cubic feet per second per unit wing area for an airplane landing at 40 miles per hour with a lift coefficient of 3.0), indicating that considerable duct area must be provided in order to prevent flow losses inside the wing and insure uniform distribution of suction along the span. The results from the tests of the large-span wing were less favorable than those on the stub wing. The reasons for this were probably: (1) the uneven distribution of suction along the span, (2) the flow losses inside the wing, (3) the small radius of curvature of the leading edge of the wing section, and (4) the low Reynolds Number of these tests, which was about one half that of the stub wing. The results showed a large increase in the maximum lift coefficient with an increase in Reynolds Number in the range of the tests. The results of drag tests showed that the profile drag of the wing was reduced and the L/D ratio was increased throughout the range of lift coefficients corresponding to take-off and climb but that the minimum drag was increased. The slot arrangement that is best for low drag is not the same, however, as that for maximum lift.
Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions
2009-03-01
This report (AFIT/GE/ENG/09-23) examines the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.
2010-01-01
Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project
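In the same spirit as the PDA approach described here, a generic Monte Carlo failure-probability calculation propagates random inputs through a physics-style limit state and counts failures; the limit-state function and distributions in the sketch below are invented for illustration:

```python
# Generic Monte Carlo failure-probability sketch in the spirit of a PDA model:
# sample random inputs, evaluate a physics-style limit state, count failures.
# The limit-state function and distributions below are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
burst_pressure = rng.normal(3.2e6, 2.0e5, n)     # capacity, Pa
chamber_pressure = rng.normal(2.4e6, 3.0e5, n)   # demand, Pa

failure = chamber_pressure >= burst_pressure     # limit state g = capacity - demand <= 0
p_fail = failure.mean()
se = np.sqrt(p_fail * (1 - p_fail) / n)          # Monte Carlo standard error
print(f"estimated failure probability: {p_fail:.2e} +/- {se:.1e}")
```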
Liu, Yang; Fu, Lianjie; Wang, Jinling; Zhang, Chunxi
2017-09-25
One of the adverse impacts of scintillation on GNSS signals is the loss of lock status, which can lead to GNSS geometry and visibility reductions that compromise the accuracy and integrity of navigation performance. In this paper the loss of lock based on ionosphere scintillation in this solar maximum phase has been well investigated with respect to both temporal and spatial behaviors, based on GNSS observatory data collected at Weipa (Australia; geographic: 12.45° S, 130.95° E; geomagnetic: 21.79° S, 214.41° E) from 2011 to 2015. Experiments demonstrate that the percentage of occurrence of loss of lock events under ionosphere scintillation is closely related with solar activity and seasonal shifts. Loss of lock behaviors under ionosphere scintillation related to elevation and azimuth angles are statistically analyzed, with some distinct characteristics found. The influences of daytime scintillation and geomagnetic storms on loss of lock have also been discussed in detail. The proposed work is valuable for a deeper understanding of theoretical mechanisms of loss of lock under ionosphere scintillation in global regions, and provides a reference for GNSS applications in certain regions at Australian low latitudes.
NASA Astrophysics Data System (ADS)
Iijima, T.; Naito, H.
2017-04-01
Context. The outburst of the symbiotic recurrent nova V407 Cyg in 2010 has been studied by numerous authors. On the other hand, its spectral variations in the quiescent stage have not been well studied yet. This paper is probably the first report for the relation between the pulsation of the secondary Mira variable and the temperature of the primary hot component for V407 Cyg. Aims: The spectral variation in the post-outburst stage has been monitored to study the properties of this object. In the course of this work, we found some unexpected spectral variations around the light maximum of the secondary Mira variable in 2012. The relation between the mass transfer in the binary system and the pulsation of the secondary Mira variable is studied. Methods: High- and low-resolution optical spectra obtained at the Astronomical Observatories at Asiago were used. The photometric data depend on the database of the VSNET. Results: The secondary Mira variable reached its light maximum in 2012, when an absorption spectrum of a late-M-type giant developed and the emission line of Hδ became stronger than those of Hβ and Hγ, which are typical spectral features of Mira variables at light maxima. On the other hand, intensity ratios to Hβ of the emission lines of He I, He II, [Fe VII], etc., which obviously depended on the temperature of the hot component, rapidly varied around the light maximum. The intensity ratios started to decrease at phase about 0.9 of the periodical light variation of the Mira variable. This phenomenon suggests that the mass transfer rate, as well as the mass accretion rate onto the hot component, decreased according to the contraction of the Mira variable. However, these intensity ratios somewhat recovered just on the light maximum: phase 0.99. There might have occurred a temporal mass loss from the Mira variable at that time. The intensity ratios decreased again after the light maximum, then recovered and returned to the normal level at phase about 0.1. Since the mass transfer rate seems to have been closely related to the pulsation of the secondary component, the mass transfer in this binary system was likely due to a normal Roche-lobe overflow. If this is the case, the orbital period should be shorter than five years. Each of the Na I D1 and D2 lines had five emission and one absorption components around the light maximum. It seems that there were two pairs of mass outflows from the Mira variable with velocities of ± 79 km s-1 and ± 44 km s-1. These velocities were much higher than those of mass loss from usual Mira variables. The reduced spectra (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/600/A96
Numerical optimization using flow equations.
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
Application of Markov chain model to daily maximum temperature for thermal comfort in Malaysia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordin, Muhamad Asyraf bin Che; Hassan, Husna
2015-10-22
The Markov chain's first-order principle has been widely used to model various meteorological fields for prediction purposes. In this study, 14 years (2000-2013) of daily maximum temperature data for Bayan Lepas were used. Earlier studies showed that the outdoor thermal comfort range (TCR) based on the physiologically equivalent temperature (PET) index in Malaysia is less than 34°C, thus the data were classified into two states: a normal state (within the thermal comfort range) and a hot state (above the thermal comfort range). The long-run results show that the probability of the daily temperature exceeding the TCR will be only 2.2%, while the probability of the daily temperature lying within the TCR will be 97.8%.
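The long-run percentages quoted above are the stationary distribution of a two-state chain; the sketch below computes it for placeholder transition probabilities (not the fitted Bayan Lepas values):

```python
# Stationary distribution of a two-state (normal/hot) Markov chain.
# The transition probabilities below are placeholders, not the fitted values.
import numpy as np

P = np.array([[0.98, 0.02],      # normal -> {normal, hot}
              [0.90, 0.10]])     # hot    -> {normal, hot}

eigvals, eigvecs = np.linalg.eig(P.T)            # stationary vector: left eigenvector for eigenvalue 1
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(f"long-run P(normal) = {pi[0]:.3f}, P(hot) = {pi[1]:.3f}")
```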
Measuring Forest Area Loss Over Time Using FIA Plots and Satellite Imagery
Michael L. Hoppus; Andrew J. Lister
2005-01-01
How accurately can FIA plots, scattered at 1 per 6,000 acres, identify often rare forest land loss, estimated at less than 1 percent per year in the Northeast? Here we explore this question mathematically, empirically, and by comparing FIA plot estimates of forest change with satellite image based maps of forest loss. The mathematical probability of exactly estimating...
A new scheduling algorithm to provide proportional QoS in optical burst switching networks
NASA Astrophysics Data System (ADS)
Tan, Wei; Luo, Yunhan; Wang, Sheng; Xu, Du; Pan, Yonghong; Li, Lemin
2005-02-01
A new scheduling algorithm, which aims to provide proportional and controllable QoS in terms of burst loss probability for OBS (optical burst switching) networks, is proposed on the basis of a summary of current QoS schemes in OBS. Performance analyses and comparisons are studied in detail through simulations. The results show that, in the proposed scheme, burst loss probabilities are proportional to the given factors and the QoS performance can be controlled effectively. This scheme will be beneficial to OBS network management and tariff policy making.
NASA Astrophysics Data System (ADS)
Gilmanshin, I. R.; Kirpichnikov, A. P.
2017-09-01
A study of the functioning algorithm of the early-detection module for excessive losses shows that it can be modelled using absorbing Markov chains. Of particular interest are the probability characteristics of the module's functioning algorithm, studied in order to relate the reliability indicators of the individual elements, or the probabilities of occurrence of certain events, to the likelihood of transmitting reliable information. The relations identified during the analysis allow thresholds to be set for the reliability characteristics of the system components.
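The absorbing-chain quantities such a model relies on are the fundamental matrix, the expected time to absorption, and the absorption probabilities; the sketch below computes them for an invented transition structure:

```python
# Standard absorbing Markov chain quantities: fundamental matrix N = (I - Q)^-1,
# expected steps to absorption t = N·1, absorption probabilities B = N·R.
# The example transition structure is invented for illustration.
import numpy as np

# Transient states: 0 = monitoring, 1 = anomaly flagged
# Absorbing states: "loss reported reliably", "information lost"
Q = np.array([[0.80, 0.15],
              [0.30, 0.50]])
R = np.array([[0.04, 0.01],
              [0.15, 0.05]])

N = np.linalg.inv(np.eye(2) - Q)       # fundamental matrix
t = N @ np.ones(2)                     # expected number of steps before absorption
B = N @ R                              # probability of ending in each absorbing state
print("expected steps to absorption:", t)
print("absorption probabilities:\n", B)
```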
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greb, Arthur; Niemi, Kari; O'Connell, Deborah
2013-12-09
Plasma parameters and dynamics in capacitively coupled oxygen plasmas are investigated for different surface conditions. Metastable species concentration, electronegativity, spatial distribution of particle densities as well as the ionization dynamics are significantly influenced by the surface loss probability of metastable singlet delta oxygen (SDO). Simulated surface conditions are compared to experiments in the plasma-surface interface region using phase resolved optical emission spectroscopy. It is demonstrated how in-situ measurements of excitation features can be used to determine SDO surface loss probabilities for different surface materials.
Engdahl, N.B.; Vogler, E.T.; Weissmann, G.S.
2010-01-01
River-aquifer exchange is considered within a transition probability framework along the Rio Grande in Albuquerque, New Mexico, to provide a stochastic estimate of aquifer heterogeneity and river loss. Six plausible hydrofacies configurations were determined using categorized drill core and wetland survey data processed through the TPROGS geostatistical package. A base case homogeneous model was also constructed for comparison. River loss was simulated for low, moderate, and high Rio Grande stages and several different riverside drain stage configurations. Heterogeneity effects were quantified by determining the mean and variance of the K field for each realization compared to the root-mean-square (RMS) error of the observed groundwater head data. Simulation results showed that the heterogeneous models produced smaller estimates of loss than the homogeneous approximation. Differences between heterogeneous and homogeneous model results indicate that the use of a homogeneous K in a regional-scale model may result in an overestimation of loss but comparable RMS error. We find that the simulated river loss is dependent on the aquifer structure and is most sensitive to the volumetric proportion of fines within the river channel. Copyright 2010 by the American Geophysical Union.
ENSO-Based Index Insurance: Approach and Peru Flood Risk Management Application
NASA Astrophysics Data System (ADS)
Khalil, A. F.; Kwon, H.; Lall, U.; Miranda, M. J.; Skees, J. R.
2006-12-01
Index insurance has recently been advocated as a useful risk transfer tool for disaster management situations where rapid fiscal relief is desirable, and where estimating insured losses may be difficult, time consuming, or subject to manipulation and falsification. For climate related hazards, a rainfall or temperature index may be proposed. However, rainfall may be highly spatially variable relative to the gauge network, and in many locations data are inadequate to develop an index due to short time-series and the spatial dispersion of stations. In such cases, it may be helpful to consider a climate proxy index as a regional rainfall index. This is particularly useful if a long record is available for the climate index through an independent source and it is well correlated with the regional rainfall hazard. Here, ENSO related climate indices are explored for use as a proxy to extreme rainfall in one of the departments of Peru -- Piura. The ENSO index insurance product may be purchased by banks or microfinance institutions (MFIs) to aid agricultural damage relief in Peru. Crop losses in the region are highly correlated with floods, but are difficult to assess directly. Beyond agriculture, many other sectors suffer as well. Basic infrastructure is destroyed during the most severe events. This disrupts trade for many micro-enterprises. The reliability and quality of the local rainfall data is variable. Averaging the financial risk across the region is desirable. Some issues with the implementation of the proxy ENSO index are identified and discussed. Specifically, we explore (a) the reliability of the index at different levels of probability of exceedance of maximum seasonal rainfall; (b) the potential for clustering of payoffs; (c) the potential that the index could be predicted with some lead time prior to the flood season; and (d) evidence for climate change or non-stationarity in the flood exceedance probability from the long ENSO record. Finally, prospects for the global application of an ENSO based index insurance product are discussed.
Models for loosely linked gene duplicates suggest lengthy persistence of both copies.
O'Hely, Martin; Wockner, Leesa
2007-06-21
Consider the appearance of a duplicate copy of a gene at a locus linked loosely, if at all, to the locus at which the gene is usually found. If all copies of the gene are subject to non-functionalizing mutations, then two fates are possible: loss of functional copies at the duplicate locus (loss of duplicate expression), or loss of functional copies at the original locus (map change). This paper proposes a simple model to address the probability of map change, the time taken for a map change and/or loss of duplicate expression, and considers where in the spectrum between loss of duplicate expression and map change such a duplicate complex is likely to be found. The findings are: the probability of map change is always half the reciprocal of the population size N, the time for a map change to occur is order NlogN generations, and that there is a marked tendency for duplicates to remain near equi-frequency with the gene at the original locus for a large portion of that time. This is in excellent agreement with simulations.
NASA Technical Reports Server (NTRS)
Weigel, C.; Ball, C. L.
1972-01-01
The performance data were taken at 50,000 rpm, using argon gas. As the Reynolds number was reduced from near design value to 30 percent of design, the maximum efficiency decreased about 1.5 percentage points. Reducing the Reynolds number from 30 percent to approximately 10 percent of design caused the maximum efficiency to decrease another 2.5 percentage points. The variation in loss with Reynolds number is compared with inverse power relation of loss with Reynolds number.
Cusimano, Natalie; Sousa, Aretuza; Renner, Susanne S.
2012-01-01
Background and Aims For 84 years, botanists have relied on calculating the highest common factor for series of haploid chromosome numbers to arrive at a so-called basic number, x. This was done without consistent (reproducible) reference to species relationships and frequencies of different numbers in a clade. Likelihood models that treat polyploidy, chromosome fusion and fission as events with particular probabilities now allow reconstruction of ancestral chromosome numbers in an explicit framework. We have used a modelling approach to reconstruct chromosome number change in the large monocot family Araceae and to test earlier hypotheses about basic numbers in the family. Methods Using a maximum likelihood approach and chromosome counts for 26 % of the 3300 species of Araceae and representative numbers for each of the other 13 families of Alismatales, polyploidization events and single chromosome changes were inferred on a genus-level phylogenetic tree for 113 of the 117 genera of Araceae. Key Results The previously inferred basic numbers x = 14 and x = 7 are rejected. Instead, maximum likelihood optimization revealed an ancestral haploid chromosome number of n = 16, Bayesian inference of n = 18. Chromosome fusion (loss) is the predominant inferred event, whereas polyploidization events occurred less frequently and mainly towards the tips of the tree. Conclusions The bias towards low basic numbers (x) introduced by the algebraic approach to inferring chromosome number changes, prevalent among botanists, may have contributed to an unrealistic picture of ancestral chromosome numbers in many plant clades. The availability of robust quantitative methods for reconstructing ancestral chromosome numbers on molecular phylogenetic trees (with or without branch length information), with confidence statistics, makes the calculation of x an obsolete approach, at least when applied to large clades. PMID:22210850
Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin
2014-03-01
To evaluate and adjust for the verification bias that exists in screening or diagnostic tests, the inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example from cervical cancer screening used to introduce the Compare Tests package in R with which the method can be implemented. Sensitivity and specificity calculated with the traditional method and with maximum likelihood estimation were compared to the results of the inverse-probability weighting method in the randomly sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data in which verification by the gold standard was randomly missing, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially when complex sampling is involved.
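A small synthetic sketch of inverse-probability weighting for verification bias, with verification probabilities that depend on the screening result, is given below; the prevalence, test accuracies, and verification rates are assumptions, not the study's data:

```python
# Inverse-probability-weighted (IPW) sensitivity/specificity sketch when only a
# subset of subjects receives gold-standard verification (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
n = 20000
disease = rng.random(n) < 0.10
test_pos = np.where(disease, rng.random(n) < 0.84, rng.random(n) < 0.14)

# Verification depends on the screening result (the source of verification bias)
p_verify = np.where(test_pos, 0.9, 0.3)
verified = rng.random(n) < p_verify

w = 1.0 / p_verify                                   # inverse-probability weights
v = verified
sens_ipw = np.sum(w[v] * (test_pos[v] & disease[v])) / np.sum(w[v] * disease[v])
spec_ipw = np.sum(w[v] * (~test_pos[v] & ~disease[v])) / np.sum(w[v] * ~disease[v])

sens_naive = np.sum(test_pos[v] & disease[v]) / np.sum(disease[v])
spec_naive = np.sum(~test_pos[v] & ~disease[v]) / np.sum(~disease[v])
print(f"naive (verified only): sens={sens_naive:.3f} spec={spec_naive:.3f}")
print(f"IPW-adjusted:          sens={sens_ipw:.3f} spec={spec_ipw:.3f}  (true: 0.840 / 0.860)")
```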
In Vivo potassium-39 NMR spectra by the Burg maximum-entropy method
NASA Astrophysics Data System (ADS)
Uchiyama, Takanori; Minamitani, Haruyuki
The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra have as good a linearity between peak areas and potassium concentrations as those obtained by fast Fourier transform and give a better estimation of intracellular potassium concentrations. Therefore potassium uptake and loss processes of mung bean root tips are shown to be more clearly traced by the maximum-entropy method.
Risk Analysis of Earth-Rock Dam Failures Based on Fuzzy Event Tree Method
Fu, Xiao; Gu, Chong-Shi; Su, Huai-Zhi; Qin, Xiang-Nan
2018-01-01
Earth-rock dams make up a large proportion of the dams in China, and their failures can induce great risks. In this paper, the risks associated with earth-rock dam failure are analyzed from two aspects: the probability of a dam failure and the resulting life loss. An event tree analysis method based on fuzzy set theory is proposed to calculate the dam failure probability. The life loss associated with dam failure is summarized and refined to be suitable for Chinese dams from previous studies. The proposed method and model are applied to one reservoir dam in Jiangxi province. Both engineering and non-engineering measures are proposed to reduce the risk. The risk analysis of the dam failure has essential significance for reducing dam failure probability and improving dam risk management level. PMID:29710824
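One simplified way to carry fuzzy branch probabilities through an event tree is to represent each branch as a triangular fuzzy number and multiply component-wise along a failure path, as sketched below; this is a common approximation, not necessarily the exact fuzzy arithmetic of the proposed method, and the branch values are invented:

```python
# Simplified fuzzy event-tree sketch: branch probabilities are triangular fuzzy
# numbers (low, mode, high); a failure path multiplies them component-wise
# (a common approximation for products of triangular fuzzy numbers).
# The branch values below are invented for illustration.
import numpy as np

branches = np.array([
    [0.010, 0.020, 0.040],   # flood exceeds design level
    [0.200, 0.300, 0.450],   # overtopping given exceedance
    [0.300, 0.500, 0.700],   # breach given overtopping
])

path = branches.prod(axis=0)          # (low, mode, high) of the path probability
defuzzified = path.sum() / 3.0        # centroid of a triangular fuzzy number
print("fuzzy failure probability (low, mode, high):", path)
print("defuzzified estimate:", defuzzified)
```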
Fukunaga, Rena; Brown, Joshua W; Bogg, Tim
2012-09-01
The inferior frontal gyrus/anterior insula (IFG/AI) and anterior cingulate cortex (ACC) are key regions involved in risk appraisal during decision making, but accounts of how these regions contribute to decision making under risk remain contested. To help clarify the roles of these and other related regions, we used a modified version of the Balloon Analogue Risk Task (Lejuez et al., Journal of Experimental Psychology: Applied, 8, 75-84, 2002) to distinguish between decision-making and feedback-related processes when participants decided to pursue a gain as the probability of loss increased parametrically. Specifically, we set out to test whether the ACC and IFG/AI regions correspond to loss aversion at the time of decision making in a way that is not confounded with either reward-seeking or infrequency effects. When participants chose to discontinue inflating the balloon (win option), we observed greater ACC and mainly bilateral IFG/AI activity at the time of decision as the probability of explosion increased, consistent with increased loss aversion but inconsistent with an infrequency effect. In contrast, we found robust vmPFC activity when participants chose to continue inflating the balloon (risky option), consistent with reward seeking. However, in the cingulate and in mainly bilateral IFG regions, blood-oxygenation-level-dependent activation decreased when participants chose to inflate the balloon as the probability of explosion increased, findings that are consistent with a reduced loss aversion signal. Our results highlight the existence of distinct reward-seeking and loss-averse signals during decision making, as well as the importance of distinguishing between decision and feedback signals.
NASA Astrophysics Data System (ADS)
De-yue, Ma; Xiao-xia, Li; Yu-xiang, Guo; Yu-run, Zeng
2018-01-01
Reduced graphene oxide (RGO)/Cu-Ni ferrite/Al2O3 composite was prepared by a solvothermal method, and its properties were characterized by SEM, x-ray diffraction, energy-dispersive x-ray spectroscopy and FTIR. The electromagnetic parameters in 2-18 GHz and the mid-infrared (IR) spectral transmittance of the composite were measured. The results show that Cu0.7Ni0.3Fe2O4 nanoparticles with an average size of tens of nanometers adsorb on the surface of RGO, while Al2O3 nanoparticles adhere to the surface of the Cu0.7Ni0.3Fe2O4 nanoparticles and the RGO. The composite has both dielectric and magnetic loss mechanisms. Its reflection loss is lower than -19 dB in 2-18 GHz, and the maximum of -23.2 dB occurs at 15.6 GHz. With increasing Al2O3 content, its reflection loss becomes lower and the maximum moves slightly towards lower frequency. Compared with RGO/Cu-Ni ferrite composites, its magnetic loss and reflection loss decrease slightly with increasing Al2O3 content, and the maximum of the reflection loss shifts from a lower frequency to a higher one. However, its broadband IR absorption is significantly enhanced owing to the nano-Al2O3. Therefore, RGO/Cu-Ni ferrite/Al2O3 composites can be used as excellent broadband microwave and IR absorbing materials and may have broad application prospects in electromagnetic shielding, IR absorption and coating materials.
Theoretical Evaluation of the Maximum Work of Free-Piston Engine Generators
NASA Astrophysics Data System (ADS)
Kojima, Shinji
2017-01-01
Utilizing the adjoint equations that originate from the calculus of variations, we have calculated the maximum thermal efficiency that is theoretically attainable by free-piston engine generators considering the work loss due to friction and Joule heat. Based on the adjoint equations with seven dimensionless parameters, the trajectory of the piston, the histories of the electric current, the work done, and the two kinds of losses have been derived in analytic forms. Using these we have conducted parametric studies for the optimized Otto and Brayton cycles. The smallness of the pressure ratio of the Brayton cycle makes the net work done negative even when the duration of heat addition is optimized to give the maximum amount of heat addition. For the Otto cycle, the net work done is positive, and both types of losses relative to the gross work done become smaller with the larger compression ratio. Another remarkable feature of the optimized Brayton cycle is that the piston trajectory of the heat addition/disposal process is expressed by the same equation as that of an adiabatic process. The maximum thermal efficiency of any combination of isochoric and isobaric heat addition/disposal processes, such as the Sabathe cycle, may be deduced by applying the methods described here.
NASA Astrophysics Data System (ADS)
Weinert, Michael; Mathis, Moritz; Kröncke, Ingrid; Neumann, Hermann; Pohlmann, Thomas; Reiss, Henning
2016-06-01
In the marine realm, climate change can affect a variety of physico-chemical properties with wide-ranging biological effects, but the knowledge of how climate change affects benthic distributions is limited and mainly restricted to coastal environments. To project the response of benthic species of a shelf sea (North Sea) to the expected climate change, the distributions of 75 marine benthic species were modelled and the spatial changes in distribution were projected for 2099 based on modelled bottom temperature and salinity changes using the IPCC scenario A1B. Mean bottom temperature was projected to increase between 0.15 and 5.4 °C, while mean bottom salinity was projected to moderately increase by 1.7. The spatial changes in species distribution were modelled with Maxent and the direction and extent of these changes were assessed. The results showed a latitudinal northward shift for 64% of the species (maximum 109 km; brittle star Ophiothrix fragilis) and a southward shift for 36% (maximum 101 km; hermit crab Pagurus prideaux and the associated cloak anemone Adamsia carciniopados; 105 km). The relatively low rates of distributional shifts compared to fish or plankton species were probably influenced by the regional topography. The environmental gradients in the central North Sea along the 50 m depth contour might act as a 'barrier', possibly resulting in a compression of distribution range and hampering further shifts to the north. For 49 species this resulted in a habitat loss up to 100%, while only 11 species could benefit from the warming in terms of habitat gain. Particularly the benthic communities of the southern North Sea, where the strongest temperature increase was projected, would be strongly affected by the distributional changes, since key species showed northward shifts and high rates of habitat loss, with potential ramifications for the functioning of the ecosystem.
Rueda, Marta; Moreno Saiz, Juan Carlos; Morales-Castilla, Ignacio; Albuquerque, Fabio S; Ferrero, Mila; Rodríguez, Miguel Á
2015-01-01
Ecological theory predicts that fragmentation aggravates the effects of habitat loss, yet empirical results show mixed evidence that fails to support the theory and instead reinforces the primary importance of habitat loss. Fragmentation hypotheses have received much attention due to their potential implications for biodiversity conservation; however, animal studies have traditionally been their main focus. Here we assess variation in species sensitivity to forest amount and fragmentation and evaluate whether fragmentation is related to extinction thresholds in forest understory herbs and ferns. Our expectation was that forest herbs would be more sensitive to fragmentation than ferns due to their lower dispersal capabilities. Using the percentage of forest cover and the proportion of this percentage occurring in the largest patch within UTM cells of 10-km resolution covering Peninsular Spain, we partitioned the effects of forest amount versus fragmentation and applied logistic regression to model the occurrences of 16 species. For nine models showing robustness according to a set of quality criteria, we subsequently defined two empirical fragmentation scenarios, minimum and maximum, and quantified species' sensitivity to forest contraction with no fragmentation, and to fragmentation under constant forest cover. We finally assessed how the extinction threshold of each species (the habitat amount below which it cannot persist) varies under no and maximum fragmentation. Consistent with their preference for forest habitats, the occurrence probabilities of all species decreased as forest cover contracted. On average, herbs did not show significant sensitivity to fragmentation, whereas ferns were favored. In line with theory, fragmentation yielded higher extinction thresholds for two species. For the remaining species, fragmentation had either positive or non-significant effects. We interpret these differences as reflecting species-specific traits and conclude that although forest amount is of primary importance for the persistence of understory plants, neglecting the impact of fragmentation for some species can lead them to local extinction.
Doyle, T.W.; Smith, T. J.; Robblee, M.B.
1995-01-01
On August 24, 1992, Hurricane Andrew downed and defoliated an extensive swath of mangrove trees across the lower Florida peninsula. Permanent field sites were established to assess the extent of forest damage and to monitor the rate and process of forest recovery. Canopy trees suffered the highest mortality, particularly at sites within and immediately north of the storm's eyewall. The type and extent of site damage, windthrow, branch loss, and defoliation generally decreased exponentially with increasing distance from the storm track. Forest damage was greater for sites in the storm's right quadrant than in the left quadrant for the same distance from the storm center. Stand exposure, both horizontal and vertical, increased the susceptibility and probability of forest damage and accounted for much of the local variability. Slight species differences were found: Laguncularia racemosa exceeded Avicennia germinans and Rhizophora mangle in damage tendency under similar wind conditions. Azimuths of downed trees were strongly correlated with the maximum wind speed and wind vector based on a hurricane simulation of the storm. Lateral branch loss and leaf defoliation on sites without windthrow damage indicated a degree of crown thinning and light penetration equivalent to treefall gaps under normally intact forest conditions. Mangrove species and forests are susceptible to catastrophic disturbance by hurricanes, the impacts of which significantly alter forest structure and function.
NASA Astrophysics Data System (ADS)
Leow, Shin Woei; Corrado, Carley; Osborn, Melissa; Carter, Sue A.
2013-09-01
Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles, concentrating the captured light onto small photoactive areas. This enables greater incorporation of LSCs into building designs as windows, skylights and wall claddings in addition to the rooftop installations of current solar panels. By using relatively cheap luminescent dyes and acrylic waveguides to concentrate light onto a smaller area of photovoltaic (PV) cells, this technology has the potential to approach grid price parity. We employ a panel design in which the front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows flexibility in determining the placement and percentage coverage of PV cells during the design process, balancing reabsorption losses against the power output and level of light concentration desired. To aid in design optimization, a Monte-Carlo ray tracing program was developed to study the transport of photons and the loss mechanisms in LSC panels. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters, with the interactions of photons in the panel determined by comparing calculated probabilities with random numbers. LSC panels with multiple dyes or layers can also be simulated. Analysis of the results reveals the optimal panel dimensions and PV cell layouts for maximum power output for a given dye concentration, absorption/emission spectrum and quantum efficiency.
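A heavily simplified version of the photon-fate bookkeeping in such a Monte Carlo model is sketched below; the absorption, quantum-efficiency, trapping and re-absorption probabilities are single lumped numbers chosen for illustration, whereas the program described above draws them from measured spectra and panel geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lumped probabilities for one waveguide pass (not measured values).
N = 100_000          # photons incident on the waveguide face
p_absorb = 0.7       # photon absorbed by the dye
qe = 0.95            # dye luminescence quantum efficiency
p_trap = 0.75        # emitted photon trapped by total internal reflection
p_reabsorb = 0.15    # re-absorption per pass (self-absorption loss)

collected = 0
for _ in range(N):
    if rng.random() > p_absorb:        # transmitted straight through the panel
        continue
    while True:
        if rng.random() > qe:          # non-radiative loss in the dye
            break
        if rng.random() > p_trap:      # escapes through the escape cone
            break
        if rng.random() > p_reabsorb:  # reaches an edge- or surface-mounted PV cell
            collected += 1
            break
        # otherwise re-absorbed: loop and re-emit

print("optical collection efficiency ≈", collected / N)
```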
Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection
NASA Astrophysics Data System (ADS)
Denuit, Michel; Dhaene, Jan
2007-06-01
In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
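The ordering invoked here can be stated compactly; a standard formulation of the comonotonic upper bound in the increasing convex (stop-loss) sense, written in generic notation rather than the paper's Lee-Carter-specific quantities, is:

```latex
S=\sum_{i=1}^{n}X_i \;\preceq_{\mathrm{icx}}\; S^{c}=\sum_{i=1}^{n}F_{X_i}^{-1}(U),
\qquad U\sim\mathrm{Uniform}(0,1),
\qquad\text{so that}\quad
\mathbb{E}\bigl[(S-d)_{+}\bigr]\;\le\;\mathbb{E}\bigl[(S^{c}-d)_{+}\bigr]
\quad\text{for every retention } d .
```

A comonotonic lower bound is obtained analogously by first conditioning the X_i on a suitable summary variable and applying the same construction to the conditional expectations.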
Ice Flow in Debris Aprons and Central Peaks, and the Application of Crater Counts
NASA Astrophysics Data System (ADS)
Hartmann, W. K.; Quantin, C.; Werner, S. C.; Popova, O.
2009-03-01
We apply studies of decameter-scale craters to studies of probable ice-flow-related features on Mars, to interpret both chronometry and geological processes among the features. We find losses of decameter-scale craters relative to nearby plains, probably due to sublimation.
Chang, Edward F; Breshears, Jonathan D; Raygor, Kunal P; Lau, Darryl; Molinaro, Annette M; Berger, Mitchel S
2017-01-01
OBJECTIVE: Functional mapping using direct cortical stimulation is the gold standard for the prevention of postoperative morbidity during resective surgery in dominant-hemisphere perisylvian regions. Its role is necessitated by the significant interindividual variability that has been observed for essential language sites. The aim in this study was to determine the statistical probability distribution of eliciting aphasic errors for any given stereotactically based cortical position in a patient cohort and to quantify the variability at each cortical site. METHODS: Patients undergoing awake craniotomy for dominant-hemisphere primary brain tumor resection between 1999 and 2014 at the authors' institution were included in this study, which included counting and picture-naming tasks during dense speech mapping via cortical stimulation. Positive and negative stimulation sites were collected using an intraoperative frameless stereotactic neuronavigation system and were converted to Montreal Neurological Institute coordinates. Data were iteratively resampled to create mean and standard deviation probability maps for speech arrest and anomia. Patients were divided into groups with a "classic" or an "atypical" location of speech function, based on the resultant probability maps. Patient and clinical factors were then assessed for their association with an atypical location of speech sites by univariate and multivariate analysis. RESULTS: Across 102 patients undergoing speech mapping, the overall probabilities of speech arrest and anomia were 0.51 and 0.33, respectively. Speech arrest was most likely to occur with stimulation of the posterior inferior frontal gyrus (maximum probability from individual bin = 0.025), and variance was highest in the dorsal premotor cortex and the posterior superior temporal gyrus. In contrast, stimulation within the posterior perisylvian cortex resulted in the maximum mean probability of anomia (maximum probability = 0.012), with large variance in the regions surrounding the posterior superior temporal gyrus, including the posterior middle temporal, angular, and supramarginal gyri. Patients with atypical speech localization were far more likely to have tumors in canonical Broca's or Wernicke's areas (OR 7.21, 95% CI 1.67-31.09, p < 0.01) or to have multilobar tumors (OR 12.58, 95% CI 2.22-71.42, p < 0.01), than were patients with classic speech localization. CONCLUSIONS: This study provides statistical probability distribution maps for aphasic errors during cortical stimulation mapping in a patient cohort. Thus, the authors provide an expected probability of inducing speech arrest and anomia from specific 10-mm² cortical bins in an individual patient. In addition, they highlight key regions of interindividual mapping variability that should be considered preoperatively. They believe these results will aid surgeons in their preoperative planning of eloquent cortex resection.
Malhis, Nawar; Butterfield, Yaron S N; Ester, Martin; Jones, Steven J M
2009-01-01
A plethora of alignment tools have been created that are designed to best fit different types of alignment conditions. While some of these are made for aligning Illumina Sequence Analyzer reads, none of these are fully utilizing its probability (prb) output. In this article, we will introduce a new alignment approach (Slider) that reduces the alignment problem space by utilizing each read base's probabilities given in the prb files. Compared with other aligners, Slider has higher alignment accuracy and efficiency. In addition, given that Slider matches bases with probabilities other than the most probable, it significantly reduces the percentage of base mismatches. The result is that its SNP predictions are more accurate than other SNP prediction approaches used today that start from the most probable sequence, including those using base quality.
Jordan, P.R.; Hart, R.J.
1985-01-01
A streamflow routing model was used to calculate the transit losses and traveltimes. Channel and aquifer characteristics, and the model control parameters, were estimated from available data and then verified to the extent possible by comparing model-simulated streamflow to observed streamflow at streamflow gaging stations. Transit losses and traveltimes for varying reservoir release rates and durations then were simulated for two different antecedent streamflow (drought) conditions. For the severe-drought antecedent-streamflow condition, it was assumed that only the downstream water use requirement would be released from the reservoir. For a less severe drought (LSD) antecedent streamflow condition, it was assumed that any releases from Marion Lake for water-supply use downstream would be in addition to a nominal dry weather release of 5 cu ft/sec. Water supply release rates of 10 and 25 cu ft/sec for the severe drought condition and 5, 10, and 25 cu ft/sec for the less severe drought condition were simulated for periods of 28 and 183 days commencing on July 1. Transit losses for the severe drought condition for all reservoir release rates and durations ranged from 12% to 78% of the maximum downstream flow rate and from 27% to 91% of the total volume of reservoir storage released. For the LSD condition, transit losses ranged from 7% to 29% of the maximum downstream flow rate and from 10% to 48% of the total volume of release. The 183-day releases had larger total transit losses, but losses on a percentage basis were less than the losses for the 28-day release period for both antecedent streamflow conditions. Traveltimes to full response (80% of the maximum downstream flow rate), however, showed considerable variation. For the release of 5 cu ft/sec during LSD conditions, base flow exceeded 80% of the maximum flow rate near the confluence; the traveltime to full response was undefined for those simulations. For the releases of 10 and 25 cu ft/sec during the same drought condition, traveltimes to full response ranged from 4.4 to 6.5 days. For releases of 10 and 25 cu ft/sec during severe drought conditions, traveltimes to full response near the confluence with the Neosho River ranged from 8.3 to 93 days. (Lantz-PTT)
The probability of occurrence of high-loss windstorms
NASA Astrophysics Data System (ADS)
Massey, Neil
2016-04-01
Windstorms are one of the largest meteorological risks to life and property in Europe. High-loss windstorms, in terms of insured losses, are a result of not only the windspeed of the storm but also the position and track of the storm. The two highest-loss storms on record, Daria (1990) and Lothar (1999), caused so much damage because they tracked across highly populated areas of Europe. Although the frequency and intensity of high-loss windstorms in the observed record is known, there are not enough samples, due to the short observed record, to truly know the distribution of the frequency and intensity of windstorms over Europe and, by extension, the distribution of losses which could occur if the atmosphere had been in a different state due to the internal variability of the atmosphere. Risk and loss modelling exercises carried out by and for the reinsurance industry have typically stochastically perturbed the historical record of high-loss windstorms to produce distributions of potential windstorms with greater sample sizes than the observations. This poster presents a new method of generating many samples of potential windstorms and analyses the frequency of occurrence, intensity and potential losses of these windstorms. The large ensemble regional climate modelling project weather@home is used to generate many regional climate model representations (800 per year) of the weather over Europe between 1985 and 2010. The regional climate model is driven at the boundaries by a free-running global climate model, and so each ensemble member represents a potential state of the atmosphere, rather than an observed state. The winter storm season of October to March is analysed by applying an objective cyclone identification and tracking algorithm to each ensemble member. From the resulting tracks, the windspeed within a 1000 km radius of the cyclone centre is extracted and the maximum windspeed over a 72-hour period is derived as the storm windspeed footprint. This footprint is fed into a population-based loss model to estimate the losses for the storm. Additionally, the same analysis is performed on data from the same regional climate model, driven at the boundaries by ERA-Interim. This allows the tracks and losses of the storms in the observed record to be recovered using the same tracking method and loss model. A storm track matching function is applied to the storm tracks in the large ensemble, and so analogues of the observed storms can be recovered. The frequency of occurrence of the high-loss storms in the large ensemble can then be determined, and used as a proxy for the frequency of occurrence in the observations.
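To make the footprint-and-loss step concrete, here is a minimal sketch of deriving a 72-hour maximum-windspeed footprint around a cyclone track and feeding it into a population-based damage proxy; the grid, track, population field, 25 m/s damage threshold and cubic damage exponent are all illustrative assumptions, not the weather@home configuration or the loss model used in the poster.

```python
import numpy as np

# Placeholder 3-hourly wind fields over 72 h on a coarse lat/lon grid (not model output).
nt, ny, nx = 24, 180, 360
wind = np.random.rand(nt, ny, nx) * 40.0                  # wind speed (m/s)
lat = np.linspace(30, 75, ny)
lon = np.linspace(-30, 60, nx)
track = np.column_stack([np.linspace(50, 55, nt),         # hypothetical cyclone track (lat, lon)
                         np.linspace(-20, 20, nt)])

R_EARTH = 6371.0
def great_circle_km(lat1, lon1, lat2, lon2):
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlam = np.radians(lon2 - lon1)
    cosd = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlam)
    return R_EARTH * np.arccos(np.clip(cosd, -1.0, 1.0))

# Footprint: maximum windspeed within 1000 km of the cyclone centre over the 72-hour window.
lon2d, lat2d = np.meshgrid(lon, lat)
footprint = np.zeros((ny, nx))
for t in range(nt):
    dist = great_circle_km(lat2d, lon2d, track[t, 0], track[t, 1])
    mask = dist <= 1000.0
    footprint[mask] = np.maximum(footprint[mask], wind[t][mask])

# Crude population-based loss proxy: damage accrues above a threshold windspeed.
population = np.random.rand(ny, nx) * 1e4
loss = np.sum(population * np.clip(footprint - 25.0, 0, None) ** 3)
print(f"scenario loss index: {loss:.3e}")
```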
Correlates of rediscovery and the detectability of extinction in mammals
Fisher, Diana O.; Blomberg, Simon P.
2011-01-01
Extinction is difficult to detect, even in well-known taxa such as mammals. Species with long gaps in their sighting records, which might be considered possibly extinct, are often rediscovered. We used data on rediscovery rates of missing mammals to test whether extinction from different causes is equally detectable and to find which traits affect the probability of rediscovery. We find that species affected by habitat loss were much more likely to be misclassified as extinct or to remain missing than those affected by introduced predators and diseases, or overkill, unless they had very restricted distributions. We conclude that extinctions owing to habitat loss are most difficult to detect; hence, impacts of habitat loss on extinction have probably been overestimated, especially relative to introduced species. It is most likely that the highest rates of rediscovery will come from searching for species that have gone missing during the 20th century and have relatively large ranges threatened by habitat loss, rather than from additional effort focused on charismatic missing species. PMID:20880890
Djordjević, Tijana; Radović, Ivan; Despoja, Vito; Lyon, Keenan; Borka, Duško; Mišković, Zoran L
2018-01-01
We present an analytical modeling of the electron energy loss (EEL) spectroscopy data for free-standing graphene obtained by scanning transmission electron microscope. The probability density for energy loss of fast electrons traversing graphene under normal incidence is evaluated using an optical approximation based on the conductivity of graphene given in the local, i.e., frequency-dependent form derived by both a two-dimensional, two-fluid extended hydrodynamic (eHD) model and an ab initio method. We compare the results for the real and imaginary parts of the optical conductivity in graphene obtained by these two methods. The calculated probability density is directly compared with the EEL spectra from three independent experiments and we find very good agreement, especially in the case of the eHD model. Furthermore, we point out that the subtraction of the zero-loss peak from the experimental EEL spectra has a strong influence on the analytical model for the EEL spectroscopy data. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nilsson, E. Douglas; Bigg, E. Keith
1996-04-01
Radiosondes established that the air in the near-surface mixed layer was very frequently near saturation during the International Arctic Ocean Expedition 1991, which must have been a large factor in the frequent occurrence of fogs. Fogs were divided into groups of summer, transition and winter types depending on whether the advecting air, the ice surface or the sea surface, respectively, was warmest and the source of heat. The probability of summer and transition fogs increased at air temperatures near 0°C, while winter fogs had a maximum probability of occurrence at air temperatures between -5 and -10°C. Advection from the open sea was the primary cause of the summer group, the probability of occurrence being high during the first day's travel and appreciable until the end of 3 days. Transition fogs reached their maximum probability of formation on the 4th day of advection. Radiative heating and cooling of the ice both appeared to have influenced summer and transition fogs, while winter fogs were strongly favoured by long-wave radiation loss under clear-sky conditions. Another cause of winter fogs was the heat and moisture source of open leads. Wind speed was also a factor in the probability of fog formation, summer and transition fogs being favoured by winds between 2 and 6 m s⁻¹, while winter fogs were favoured by wind speeds of only 1 m s⁻¹. Concentrations of fog drops were generally lower than those of the cloud condensation nuclei active at 0.1% supersaturation, having a median of 3 cm⁻³. While a well-defined modal diameter of 20-25 μm was found in all fogs, a second transient mode at about 100 μm was also frequently observed. The observation of fog bows with supernumerary arcs pointed to the existence of fog droplets as large as 200-300 μm in diameter at fog top. It is suggested that the large drops originated from droplets grown near the fog top and were brought to near the surface by an overturning of the fog layer. Shear-induced wave motions and roll vortices were found to cause perturbations in the near-surface layer and appeared to influence fog formation and dissipation. The low observed droplet concentration in fogs limits their ability to modify aerosol number concentrations and size distributions, the persistent overlying stratus being a more likely site for effective interactions. It is suggested that variations in the fog formation described in this paper may be a useful indicator of circulation changes in the Arctic consequent upon global warming.
How Does Ambiguity Affect Insurance Decisions
1990-05-01
actuarially fair value is C=$100. As with the actuaries, the underwriters charge higher premiums when either p and/or L is ambiguous. Even for the case where ... probabilities they reacted by increasing the premium (i.e., reducing C), particularly for the perfectly correlated case. Thus, when p=.01, the actuarially fair ... value is C=100. When losses are perfectly correlated and the actuary faces an ambiguous probability, the median value is C=9. The probability would
Design and analysis of a double superimposed chamber valveless MEMS micropump.
Zordan, E; Amirouche, F
2007-02-01
The newly proposed micropump model consists of a valveless double-chamber pump fully simulated and optimized for drug delivery conditions. First, the inertia force and viscous loss in relation to actuation, pressure, and frequency are considered, and then a model of the nozzle/diffuser elements is introduced. The value of the flowrate obtained from the first model is then used to determine the loss coefficients starting from the geometrical properties and flow velocity. From the developed model, an analysis is performed to predict the micropump performance based on the actuation parameters, assuming no energy loss. A single-chamber pump with geometrical dimensions equal to each of the chambers of the double-chamber pump was also developed, and the results from both models are then compared for equally applied actuation pressure and frequency. Results show that the proposed design gives a maximum-flow working frequency that is about 30 per cent lower than that of the single-chamber design, with a maximum flowrate that is 140 per cent greater than that of the single chamber. Finally, the influences of the geometrical properties on the flowrate, maximum flow frequency, loss coefficients, and membrane strain are examined. The results show that the nozzle/diffuser initial width and the chamber side length are the most critical dimensions of the design.
Karanth, Krithi K; Gopalaswamy, Arjun M; DeFries, Ruth; Ballal, Natasha
2012-01-01
Mitigating crop and livestock loss to wildlife and improving compensation distribution are important for conservation efforts in landscapes where people and wildlife co-occur outside protected areas. The lack of rigorously collected spatial data poses a challenge to management efforts to minimize loss and mitigate conflicts. We surveyed 735 households from 347 villages in a 5154 km² area surrounding Kanha Tiger Reserve in India. We modeled self-reported household crop and livestock loss as a function of agricultural, demographic and environmental factors, and mitigation measures. We also modeled self-reported compensation received by households as a function of demographic factors, conflict type, reporting to authorities, and wildlife species involved. Seventy-three percent of households reported crop loss and 33% livestock loss in the previous year, but less than 8% reported human injury or death. Crop loss was associated with a greater number of cropping months per year and proximity to the park. Livestock loss was associated with grazing animals inside the park and proximity to the park. Among mitigation measures, only the use of protective physical structures was associated with reduced livestock loss. Compensation distribution was more likely for tiger-related incidents and for households reporting loss and located in the buffer. The average estimated probability of crop loss was 0.93 and of livestock loss was 0.60 for surveyed households. Estimated crop and livestock loss and compensation distribution were higher for households located inside the buffer. Our approach models conflict data to aid managers in identifying potential conflict hotspots and influential factors, and spatially maps the risk probability of crop and livestock loss. This approach could help focus the allocation of conservation efforts and funds directed at conflict prevention and mitigation where high densities of people and wildlife co-occur.
Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.
Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R
2018-05-26
Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show that the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
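A minimal sketch of the design-weighted (pseudo-likelihood) estimation idea for a single-season occupancy model is given below; the simulated detection histories, the two-level weights and the constant-occupancy, constant-detection parameterization are illustrative assumptions, not the bat-survey analysis or the authors' code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Simulated detection/non-detection data and hypothetical design weights.
rng = np.random.default_rng(1)
n_sites, n_visits = 200, 4
psi_true, p_true = 0.4, 0.3
z = rng.random(n_sites) < psi_true                              # latent occupancy
y = (rng.random((n_sites, n_visits)) < p_true) & z[:, None]     # detection histories
w = rng.choice([1.0, 4.0], size=n_sites)                        # inverse-inclusion-probability weights
w = w / w.mean()

def neg_pseudo_loglik(theta):
    psi, p = expit(theta)                        # keep parameters in (0, 1)
    det = y.sum(axis=1)
    l_occ = psi * p**det * (1 - p)**(n_visits - det)
    # never-detected sites are a mixture of "occupied but missed" and "unoccupied"
    l_site = np.where(det > 0, l_occ, l_occ + (1 - psi))
    return -np.sum(w * np.log(l_site))           # design-weighted pseudo-likelihood

fit = minimize(neg_pseudo_loglik, x0=np.zeros(2), method="BFGS")
print("psi_hat, p_hat =", expit(fit.x))
```

Variance estimation with such weights typically needs a sandwich or replication estimator rather than the inverse Hessian, which this sketch does not include.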
Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty
Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon
2006-01-01
Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background: Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose: We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results: For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations: Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions: The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
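For a simple binary-endpoint trial with a Beta prior, the quantity described above can be computed by enumerating future outcomes under the beta-binomial predictive distribution, as in the sketch below; the interim counts, maximum sample size, null response rate and posterior success threshold are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical single-arm trial with a binary endpoint.
n_now, x_now = 40, 14          # interim sample size and responses observed so far
n_max = 100                    # predefined maximum sample size
p0 = 0.25                      # null response rate
a0, b0 = 1.0, 1.0              # Beta(1, 1) prior
post_threshold = 0.95          # success: Pr(p > p0 | final data) > 0.95

a, b = a0 + x_now, b0 + n_now - x_now          # interim posterior Beta(a, b)
n_remaining = n_max - n_now

# Enumerate future response counts under the beta-binomial predictive distribution,
# and check which of them would yield a successful final posterior analysis.
x_future = np.arange(n_remaining + 1)
pred_pmf = stats.betabinom.pmf(x_future, n_remaining, a, b)
post_prob = 1.0 - stats.beta.cdf(p0, a + x_future, b + n_remaining - x_future)
predictive_prob = np.sum(pred_pmf * (post_prob > post_threshold))
print(f"predictive probability of trial success: {predictive_prob:.3f}")
```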
System Architecture of Small Unmanned Aerial System for Flight Beyond Visual Line-of-Sight
2015-09-17
Signal strength variables: PT = Transmitter power (dBm); GT = Transmitter antenna gain (dBi); LT = Transmitter loss (dB); Lp = Propagation loss (dB); GR = Receiver antenna gain (dBi); LR = Receiver losses (dB); Lm = Link margin (dB). The maximum range is determined by four components: 1) Transmission, 2) Propagation, 3) Reception and 4) Link Margin
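A standard link-budget relation consistent with the symbols listed above (though not necessarily the exact form used in the thesis) is:

```latex
P_R\,[\mathrm{dBm}] \;=\; P_T + G_T - L_T - L_p + G_R - L_R,
\qquad
L_m \;=\; P_R - S_{\min},
```

where S_min is the receiver sensitivity; the maximum range is then the distance at which the propagation loss L_p (for free space, 20 log10(4πd/λ) dB plus a constant depending on units) drives the link margin L_m to zero.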
Loss of efficiency in a coaxial arrangement of a pair of wind rotors
NASA Astrophysics Data System (ADS)
Okulov, V. L.; Naumov, I. V.; Tsoy, M. A.; Mikkelsen, R. F.
2017-07-01
The efficiency of a pair of wind turbines is experimentally investigated for the case when the model of the second rotor is located coaxially in the wake of the first one. This configuration implies the maximum level of losses in wind farms, because the deceleration of the freestream is greatest in the rotor wakes. From strain gauge measurements, the dependences of the dimensionless power characteristics of both rotors on the distance between them were determined for different operating modes at different tip speed ratios. The results are of interest for the further development of wind turbine aerodynamics, for optimizing the operation of existing wind farms, and for reducing their power losses due to interactions with the wakes of other wind turbines during design and calculation.
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the a posteriori PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as in the second step, with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth; the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
Puc, Małgorzata; Wolski, Tomasz
2013-01-01
The allergenic pollen content of the atmosphere varies according to climate, biogeography and vegetation. Minimisation of pollen allergy symptoms depends on the possibility of avoiding large doses of the allergen. Measurements performed in Szczecin over a period of 13 years (2000-2012 inclusive) permitted prediction of the theoretical maximum concentrations of pollen grains and their probability for the pollen seasons of Poaceae, Artemisia and Ambrosia. Moreover, the probabilities were determined of a given date being the beginning of the pollen season, of the date of the maximum pollen count, of the Seasonal Pollen Index value and of the number of days with pollen counts above threshold values. Aerobiological monitoring was conducted using a Hirst volumetric trap (Lanzoni VPPS). Linear trends with the coefficient of determination (R²) were calculated. A model for long-term forecasting was constructed using a method based on Gumbel's distribution. A statistically significant negative correlation was determined between the duration of the pollen season of Poaceae and Artemisia and the Seasonal Pollen Index value. Seasonal total pollen counts of Artemisia and Ambrosia showed a strong and statistically significant decreasing tendency. On the basis of Gumbel's distribution, a model was proposed for Szczecin, allowing prediction of the probabilities of the maximum pollen count values that can appear once in, e.g., 5, 10 or 100 years. Short pollen seasons are characterised by a higher intensity of pollination than long ones. Prediction of the maximum pollen count values, the dates of the pollen season beginning, and the number of days with pollen counts above the threshold, on the basis of Gumbel's distribution, is expected to improve the prophylaxis and therapy of persons allergic to pollen.
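A minimal sketch of the Gumbel-based return-level calculation described here is shown below; the annual maxima are synthetic placeholders rather than the Szczecin record, and the fit uses scipy's gumbel_r rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum daily pollen counts (grains/m^3) for 13 seasons.
annual_max = np.array([310, 420, 275, 510, 390, 460, 350, 600, 330, 480, 410, 370, 520])
loc, scale = stats.gumbel_r.fit(annual_max)

for T in (5, 10, 100):
    # Return level x_T solves 1 - F(x_T) = 1/T  =>  x_T = loc - scale * ln(-ln(1 - 1/T))
    x_T = loc - scale * np.log(-np.log(1.0 - 1.0 / T))
    print(f"{T:3d}-year maximum pollen count ≈ {x_T:.0f} grains/m^3")
```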
Multiscale Resilience of Complex Systems
NASA Astrophysics Data System (ADS)
Tchiguirinskaia, I.; Schertzer, D. J. M.; Giangola-Murzyn, A.; Hoang Cong, T.
2014-12-01
We first argue the need for well-defined resilience metrics to better evaluate the resilience of complex systems such as (peri-)urban flood management systems. We review both the successes and limitations of resilience metrics in the framework of dynamical systems and their generalization in the framework of the viability theory. We then point out that the most important step to achieve is to define resilience across scales instead of doing it at a given scale. Our preliminary, critical analysis of the series of attempts to define an operational resilience metric led us to consider a scale-invariant metric based on the scale-independent codimension of extreme singularities. Multifractal downscaling of climate scenarios can be considered as a first illustration. We focussed on a flood scenario evaluation method with the help of two singularities γ_s and γ_Max, corresponding respectively to an effective and a probable maximum singularity, which yield an innovative framework for addressing the issues of flood resilience systems in a scale-independent manner. Indeed, the stationarity of the universal multifractal parameters would result in a rather stable value of the probable maximum singularity γ_s. By fixing the limit of acceptability for a maximum flood water depth at a given scale, with a corresponding singularity, we effectively fix the threshold of the probable maximum singularity γ_s as a criterion of the flood resilience we accept. Then various scenarios of flood resilience measures could be simulated with the help of Multi-Hydro under upcoming climate scenarios. The scenarios that result in estimates of either γ_Max or γ_s below the pre-selected γ_s value will assure the effective flood resilience of the whole modeled system across scales. The research for this work was supported, in part, by the EU FP7 SMARTesT and INTERREG IVB RainGain projects.
NASA Astrophysics Data System (ADS)
Amine, Lagheryeb; Zouhair, Benkhaldoun; Jonathan, Makela; Mohamed, Kaab; Aziza, Bounhir; Brian, Hardin; Dan, Fisher; Tmuthy, Duly
2016-04-01
We analyse the seasonal variations of equatorial plasma bubble (EPB) occurrence using the 630.0 nm airglow images collected by the PICASSO imager deployed at the Oukkaimden observatory in Morocco. Data have been taken from November 2013 to December 2015. We show the monthly average rate of appearance of EPBs. A maximum probability for bubble development is seen in the data in January and between late February and early March. We also observe maximum periods of appearance during which the plasma bubbles are observed on 3-5 successive nights, and we discuss their connection with solar activity during storm time. Future analysis will compare the probability of bubble occurrence at our site with the data collected at other observation sites.
NASA Astrophysics Data System (ADS)
Li, Na; Zhang, Yu; Wen, Shuang; Li, Lei-lei; Li, Jian
2018-01-01
Noise is a problem that communication channels cannot avoid. It is thus beneficial to analyze the security of MDI-QKD in a noisy environment. An analysis model for collective-rotation noise is introduced, and information-theoretic methods are used to analyze the security of the protocol. The maximum amount of information that Eve can eavesdrop is 50%, and the eavesdropping can always be detected if the noise level ɛ ≤ 0.68. Therefore, the MDI-QKD protocol is secure as a quantum key distribution protocol. The maximum probability that the relay outputs successful results is 16% when eavesdropping is present. Moreover, the probability that the relay outputs successful results when eavesdropping is present is higher than in the situation without eavesdropping. The paper validates that the MDI-QKD protocol has better robustness.
Maximum number of habitable planets at the time of Earth's origin: new hints for panspermia?
von Bloh, Werner; Franck, Siegfried; Bounama, Christine; Schellnhuber, Hans-Joachim
2003-04-01
New discoveries have fuelled the ongoing discussion of panspermia, i.e. the transport of life from one planet to another within the solar system (interplanetary panspermia) or even between different planetary systems (interstellar panspermia). The main factor for the probability of interstellar panspermia is the average density of stellar systems containing habitable planets. The combination of recent results for the formation rate of Earth-like planets with our estimations of extrasolar habitable zones allows us to determine the number of habitable planets in the Milky Way over cosmological time scales. We find that there was a maximum number of habitable planets around the time of Earth's origin. If at all, interstellar panspermia was most probable at that time and may have kick-started life on our planet.
Sapra, Katherine J; Buck Louis, Germaine M; Sundaram, Rajeshwari; Joseph, K S; Bates, Lisa M; Galea, Sando; Ananth, Cande V
2018-01-01
Although pregnancy loss affects one-third of pregnancies, the associated signs/symptoms have not been fully described. Given the dynamic nature of maternal physiologic adaptation to early pregnancy, we posited that the relationships between signs/symptoms and subsequent loss would vary weekly. In a preconception cohort with daily follow-up, pregnancies were ascertained by self-administered sensitive home pregnancy tests on the day of expected menses. We evaluated the effects of weekly time-varying signs/symptoms (including vaginal bleeding, lower abdominal cramping, and nausea and/or vomiting) on pregnancy loss <20 weeks in Cox proportional hazards models and calculated the week-specific probability of loss by the presence/absence of each sign/symptom. Of 341 pregnancies ascertained by home pregnancy test, 95 (28%) ended in loss. Relationships between signs/symptoms and loss varied across time since first positive pregnancy test. In the first week following pregnancy confirmation, when many losses occurred, bleeding [hazard ratio (HR) 8.7, 95% confidence interval (CI) 4.7, 16.0] and cramping (HR 1.8, 95% CI 1.2, 2.7) were associated with loss even when accompanied by nausea and/or vomiting (HR 5.2, 95% CI 2.6, 10.5). After the second week, new relationships emerged with nausea and/or vomiting inversely associated (HR range 0.6-0.3, all 95% CI upper bounds <1.00) and bleeding no longer associated with loss. Probabilities of loss ranged from 78% (95% CI 59%, 96%) with bleeding present in week 1 to 8% (95% CI 5%, 12%) with nausea/vomiting present in week 5. Relationships between signs/symptoms and pregnancy loss vary in early pregnancy, possibly reflecting maternal physiologic response. © 2017 John Wiley & Sons Ltd.
Effects of tag loss on direct estimates of population growth rate
Rotella, J.J.; Hines, J.E.
2005-01-01
The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly, tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).
NASA Astrophysics Data System (ADS)
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
Evaluating the influential priority of the factors on insurance loss of public transit.
Zhang, Wenhui; Su, Yongmin; Ke, Ruimin; Chen, Xinqiang
2018-01-01
Understanding the correlation between influential factors and insurance losses is beneficial for insurers in accurately pricing and modifying the bonus-malus system. Although there have been a number of achievements in insurance loss and claims modeling, limited efforts focus on exploring the relative role of accident characteristics in insurance losses. The primary objective of this study is to evaluate the influential priority of transit accident attributes, such as the time, location and type of accidents. Based on the dataset from the Washington State Transit Insurance Pool (WSTIP) in the USA, we implement several key algorithms to achieve the objectives. First, a K-means algorithm is used to cluster the insurance loss data into 6 intervals; second, a Grey Relational Analysis (GRA) model is applied to calculate the grey relational grades of the influential factors in each interval; in addition, we implement a Naive Bayes model to compute the posterior probability of factor values falling in each interval. The results show that the time, location and type of accidents significantly influence the insurance loss in the first five intervals, but their grey relational grades show no significant difference. In the last interval, which represents the highest insurance loss, the grey relational grade of the time is significantly higher than that of the location and type of accidents. For each value of the time and location, the insurance loss most likely falls in the first and second intervals, which refer to lower losses. However, for accidents between buses and non-motorized road users, the probability of the insurance loss falling in interval 6 tends to be highest.
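A minimal sketch of the grey relational grade calculation mentioned above is shown below; the normalized reference (loss) sequence, the three factor sequences and the distinguishing coefficient of 0.5 are illustrative assumptions, not the WSTIP data or the paper's exact preprocessing.

```python
import numpy as np

# Reference sequence (normalized insurance loss) and candidate factor sequences (illustrative).
loss    = np.array([0.9, 0.4, 0.7, 0.2, 0.8])
factors = np.array([[0.8, 0.5, 0.6, 0.3, 0.9],    # time of accident
                    [0.7, 0.3, 0.8, 0.1, 0.6],    # location
                    [0.9, 0.6, 0.5, 0.4, 0.7]])   # accident type
zeta = 0.5                                         # distinguishing coefficient

delta = np.abs(factors - loss)                     # absolute differences to the reference
d_min, d_max = delta.min(), delta.max()
coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)   # grey relational coefficients
grades = coeff.mean(axis=1)                               # grey relational grade per factor
for name, g in zip(["time", "location", "type"], grades):
    print(f"{name:8s} grade = {g:.3f}")
```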
Size effects on miniature Stirling cycle cryocoolers
NASA Astrophysics Data System (ADS)
Yang, Xiaoqin; Chung, J. N.
2005-08-01
Size effects on the performance of Stirling cycle cryocoolers were investigated by examining each individual loss associated with the regenerator and combining these effects. For fixed cycle parameters and a given regenerator length scale, it was found that the system can produce net refrigeration only within a specific range of hydraulic diameters, and that there is an optimum hydraulic diameter at which the maximum net refrigeration is achieved. When the hydraulic diameter is less than the optimum value, the regenerator performance is controlled by the pressure drop loss; when the hydraulic diameter is greater than the optimum value, the system performance is controlled by the thermal losses. It was also found that there exists an optimum ratio between the hydraulic diameter and the length of the regenerator that offers the maximum net refrigeration. As the regenerator length is decreased, the optimum hydraulic diameter-to-length ratio increases, and the system performance, which is then controlled by the pressure drop loss and heat conduction loss, increases. Choosing appropriate regenerator characteristic sizes is more critical in small-scale systems than in large-scale ones.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
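In schematic form (the notation below is assumed for illustration and is not necessarily the paper's), the maximum entropy distribution constrained by a fixed expected error takes the canonical exponential form, with marginals obtained by integrating out the remaining parameters:

    p(\mathbf{m}\mid D) = \frac{\exp[-\beta\,E(\mathbf{m};D)]}{\int \exp[-\beta\,E(\mathbf{m}';D)]\,d\mathbf{m}'},
    \qquad
    p(m_i\mid D) = \int p(\mathbf{m}\mid D)\,\prod_{j\neq i} dm_j,

where m is the vector of seabed parameters, D the acoustic data, E the error function, and β the sensitivity factor fixed by the constraint on the expectation value of E.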
Putnam, Larry D.; Long, Andrew J.
2007-01-01
The Madison aquifer, which contains fractures and solution openings in the Madison Limestone, is used extensively for water supplies for the city of Rapid City and other suburban communities in the Rapid City, S. Dak., area. The 48 square-mile study area includes the west-central and southwest parts of Rapid City and the outcrops of the Madison Limestone extending from south of Spring Creek to north of Rapid Creek. Recharge to the Madison Limestone occurs when streams lose flow as they cross the outcrop. The maximum net loss rates for the Spring and Rapid Creek loss zones are 21 and 10 cubic feet per second (ft3/s), respectively. During 2003 and 2004, fluorescent dyes were injected in the Spring and Rapid Creek loss zones to estimate approximate locations of preferential flow paths in the Madison aquifer and to measure the response and transit times at wells and springs. Four injections of about 2 kilograms of fluorescein dye were made in the Spring Creek loss zone during 2003 (sites S1, S2, and S3) and 2004 (site S4). Injection at site S1 was made in streamflow just upstream from the loss zone over a 12-hour period when streamflow was about equal to the maximum loss rate. Injections at sites S2, S3, and S4 were made in specific swallow holes located in the Spring Creek loss zone. Injection at site R1 in 2004 of 3.5 kilograms of Rhodamine WT dye was made in streamflow just upstream from the Rapid Creek loss zone over about a 28-hour period. Selected combinations of 27 wells, 6 springs, and 3 stream sites were monitored with discrete samples following the injections. For injections at sites S1-S3, when Spring Creek streamflow was greater than or equal to 20 ft3/s, fluorescein was detected in samples from five wells that were located as much as about 2 miles from the loss zone. Time to first arrival (injection at site S1) ranged from less than 1 to less than 10 days. The maximum fluorescein concentration (injection at site S1) of 120 micrograms per liter (ug/L) at well CO, which is located adjacent to the loss zone, was similar to the concentration in the stream. Fluorescein arrived at well NON (injection at site S1), which is located about 2 miles northeast of the loss zone, within about 1.6 days, and the maximum concentration was 44 ug/L. For injection at site S4, when streamflow was about 12 ft3/s, fluorescein was detected in samples from six wells and time to first arrival ranged from 0.2 to 16 days. Following injection at site S4 in 2004, the length of time that dye remained in the capture zone of well NON, which is located approximately 2 miles from the loss zone, was almost an order of magnitude greater than in 2003. For injection at site R1, Rhodamine WT was detected at well DRU and spring TI-SP with time to first arrival of about 0.5 and 1.1 days and maximum concentrations of 6.2 and 0.91 ug/L, respectively. Well DRU and spring TI-SP are located near the center of the Rapid Creek loss zone where the creek has a large meander. Measurable concentrations were observed for spring TI-SP as many as 109 days after the dye injection. The direction of a conduit flow path in the Spring Creek area was to the northeast with ground-water velocities that ranged from 770 to 6,500 feet per day. In the Rapid Creek loss zone, a conduit flow path east of the loss zone was not evident from the dye injection.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability; therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
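In hedged, schematic notation (symbols assumed here, not quoted from the chapter), the bitwise MAP rule and its soft output can be written as:

    \hat{b}_i = \arg\max_{b\in\{0,1\}} P(b_i = b \mid \mathbf{r})
              = \arg\max_{b\in\{0,1\}} \sum_{\mathbf{c}\in\mathcal{C}:\,b_i(\mathbf{c})=b} p(\mathbf{r}\mid\mathbf{c})\,P(\mathbf{c}),
    \qquad
    L_i = \log\frac{P(b_i = 1\mid\mathbf{r})}{P(b_i = 0\mid\mathbf{r})},

where r is the received sequence, C the code, and L_i the soft (log-likelihood ratio) output for bit i.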
Chapter 6. Synthesis of management and research considerations
Michael K. Young
1995-01-01
The five subspecies of cutthroat trout considered in this assessment share one characteristic: the loss of populations throughout their historical ranges. Similar causes have led to these losses: the introduction of nonnative fishes, overharvest, habitat degradation, and probably habitat fragmentation. Synergism among these effects remains unstudied, and we do not...
Sulfur Dioxide and Material Damage
ERIC Educational Resources Information Center
Gillette, Donald G.
1975-01-01
This study relates sulfur dioxide levels with material damage in heavily populated or polluted areas. Estimates of loss were determined from increased maintenance and replacement costs. The data indicate a decrease in losses during the past five years probably due to decline in pollution levels established by air quality standards. (MR)
Transient Hearing Loss in Adults Associated With Zika Virus Infection.
Vinhaes, Eriko S; Santos, Luciane A; Dias, Lislane; Andrade, Nilvano A; Bezerra, Victor H; de Carvalho, Anderson T; de Moraes, Laise; Henriques, Daniele F; Azar, Sasha R; Vasilakis, Nikos; Ko, Albert I; Andrade, Bruno B; Siqueira, Isadora C; Khouri, Ricardo; Boaventura, Viviane S
2017-03-01
In 2015, during the outbreak of Zika virus (ZIKV) in Brazil, we identified 3 cases of acute hearing loss after exanthematous illness. Serology yielded findings compatible with ZIKV as the cause of a confirmed (n = 1) and a probable (n = 2) flavivirus infection, indicating an association between ZIKV infection and transient hearing loss. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America.
Catastrophe loss modelling of storm-surge flood risk in eastern England.
Muir Wood, Robert; Drayton, Michael; Berger, Agnete; Burgess, Paul; Wright, Tom
2005-06-15
Probabilistic catastrophe loss modelling techniques, comprising a large stochastic set of potential storm-surge flood events, each assigned an annual rate of occurrence, have been employed for quantifying risk in the coastal flood plain of eastern England. Based on the tracks of the causative extratropical cyclones, historical storm-surge events are categorized into three classes, with distinct windfields and surge geographies. Extreme combinations of "tide with surge" are then generated for an extreme value distribution developed for each class. Fragility curves are used to determine the probability and magnitude of breaching relative to water levels and wave action for each section of sea defence. Based on the time-history of water levels in the surge, and the simulated configuration of breaching, flow is time-stepped through the defences and propagated into the flood plain using a 50 m horizontal-resolution digital elevation model. Based on the values and locations of the building stock in the flood plain, losses are calculated using vulnerability functions linking flood depth and flood velocity to measures of property loss. The outputs from this model for a UK insurance industry portfolio include "loss exceedence probabilities" as well as "average annualized losses", which can be employed for calculating coastal flood risk premiums in each postcode.
Rosenhall, Ulf; Hederstierna, Christina; Idrizbegovic, Esma
2011-09-01
Audiological data on elderly persons from a population-based epidemiological investigation were studied. Specific diagnoses of otological and audiological disorders, which can result in hearing loss, were searched for. A retrospective register study. Three age cohorts were studied: 474 70- and 75-year-olds ("younger") and 252 85-year-olds ("older"). Clinical pure tone and speech audiometry was used. Data from medical files were included. Conductive hearing loss was diagnosed in 6.1% of the "younger" elderly persons, and in 10.3% of the "older" ones. Specific diagnoses (chronic otitis media and otosclerosis) were established in about half of the cases. Sensorineural hearing loss, other than age-related hearing loss and noise-induced hearing loss, was diagnosed in 3.4% and 5.2%, respectively. Severely impaired speech recognition, possibly reflecting age-related auditory neuropathy, was found in 0.4% in the "younger" group, and in 10% in the "older" group. Bilateral functional deafness was present in 3.2% of the 85-year-old persons, but was not present in the 70-75-year group. The incidence of probable age-related auditory neuropathy increases considerably from 70-75 to 85 years. There are marked differences between "younger" and "older" elderly persons regarding hearing loss that severely affects oral communication.
Performance analysis for minimally nonlinear irreversible refrigerators at finite cooling power
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Zhichun; Liu, Wei
2018-04-01
The coefficient of performance (COP) for general refrigerators at finite cooling power has been systematically investigated through the minimally nonlinear irreversible model, and its lower and upper bounds in different operating regions have been proposed. Under tight coupling conditions, we have calculated the universal COP bounds under the χ figure of merit in different operating regions. When the refrigerator operates in the region with lower external flux, we obtain the general bounds 0 < ε < (√(9 + 8εC) - 3)/2 under the χ figure of merit, where εC is the Carnot COP. We have also calculated the universal bounds for the maximum gain in COP under different operating regions to give further insight into the COP gain as the cooling power moves away from its maximum. When the refrigerator operates in the region located between maximum cooling power and maximum COP with lower external flux, the upper bound for the COP and the lower bound for the relative gain in COP take large values, compared to a relatively small loss from the maximum cooling power. If the cooling power is the main objective, it is desirable to operate the refrigerator at a slightly lower cooling power than the maximum one, where a small loss in cooling power induces a much larger COP enhancement.
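As a small numerical illustration of the quoted bound (only the closed-form expression from the abstract is used, with εC read as the Carnot COP), the upper limit can be evaluated for a few sample values:

    # Evaluate the upper COP bound quoted in the abstract,
    # epsilon_max = (sqrt(9 + 8*eps_C) - 3) / 2, for sample Carnot COPs.
    import math

    def cop_upper_bound(eps_carnot: float) -> float:
        return (math.sqrt(9.0 + 8.0 * eps_carnot) - 3.0) / 2.0

    for eps_c in (1.0, 5.0, 10.0):
        print(f"Carnot COP {eps_c:5.1f} -> bound {cop_upper_bound(eps_c):.3f}")

All three bounds fall below the corresponding Carnot COP, as expected.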
ESTIMATING RISK TO CALIFORNIA ENERGY INFRASTRUCTURE FROM PROJECTED CLIMATE CHANGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathaye, Jayant; Dale, Larry; Larsen, Peter
2011-06-22
This report outlines the results of a study of the impact of climate change on the energy infrastructure of California and the San Francisco Bay region, including impacts on power plant generation; transmission line and substation capacity during heat spells; wildfires near transmission lines; sea level encroachment upon power plants, substations, and natural gas facilities; and peak electrical demand. Some end-of-century impacts were projected: Expected warming will decrease gas-fired generator efficiency. The maximum statewide coincident loss is projected at 10.3 gigawatts (with current power plant infrastructure and population), an increase of 6.2 percent over current temperature-induced losses. By the end of the century, electricity demand for almost all summer days is expected to exceed the current ninetieth percentile per-capita peak load. As much as 21 percent growth is expected in ninetieth percentile peak demand (per-capita, exclusive of population growth). When generator losses are included in the demand, the ninetieth percentile peaks may increase up to 25 percent. As the climate warms, California's peak supply capacity will need to grow faster than the population. Substation capacity is projected to decrease an average of 2.7 percent. A 5°C (9°F) air temperature increase (the average increase predicted for hot days in August) will diminish the capacity of a fully-loaded transmission line by an average of 7.5 percent. The potential exposure of transmission lines to wildfire is expected to increase with time. We have identified some lines whose probability of exposure to fire is expected to increase by as much as 40 percent. Up to 25 coastal power plants and 86 substations are at risk of flooding (or partial flooding) due to sea level rise.
Ancient origin of endemic Iberian earth-boring dung beetles (Geotrupidae).
Cunha, Regina L; Verdú, José R; Lobo, Jorge M; Zardoya, Rafael
2011-06-01
The earth-boring dung beetles belong to the family Geotrupidae that includes more than 350 species classified into three subfamilies Geotrupinae, Lethrinae, and Taurocerastinae, mainly distributed across temperate regions. Phylogenetic relationships within the family are based exclusively on morphology and remain controversial. In the Iberian Peninsula there are 33 species, 20 of them endemic, which suggests that these lineages might have experienced a radiation event. The evolution of morphological adaptations to the Iberian semi-arid environments such as the loss of wings (apterism) or the ability to exploit alternative food resources is thought to have promoted diversification. Here, we present a phylogenetic analysis of 31 species of Geotrupidae, 17 endemic to the Iberian Peninsula, and the remaining from southeastern Europe, Morocco, and Austral South America based on partial mitochondrial and nuclear gene sequence data. The reconstructed maximum likelihood and Bayesian inference phylogenies recovered Geotrupinae and Lethrinae as sister groups to the exclusion of Taurocerastinae. Monophyly of the analyzed geotrupid genera was supported but phylogenetic relationships among genera were poorly resolved. Ancestral character-state reconstruction of wing loss evolution, dating, and diversification tests altogether showed neither evidence of a burst of cladogenesis of the Iberian Peninsula group nor an association between apterism and higher diversification rates. Loss of flight did not accelerate speciation rates but it was likely responsible for the high levels of endemism of Iberian geotrupids by preventing their expansion to central Europe. These Iberian flightless beetle lineages are probably paleoendemics that have survived since the Tertiary in this refuge area during Plio-Pleistocene climatic fluctuations by evolving adaptations to arid and semi-arid environments. Copyright © 2011 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adare, A.; Afanasiev, S.; Aidala, C.
2016-02-22
We present measurements of the fractional momentum loss (S_loss = Δp_T/p_T) of high-transverse-momentum identified hadrons in heavy-ion collisions. Using π0 mesons in Au + Au and Cu + Cu collisions at √s_NN = 62.4 and 200 GeV measured by the PHENIX experiment at the Relativistic Heavy Ion Collider and charged hadrons in Pb + Pb collisions measured by the ALICE experiment at the Large Hadron Collider, we studied the scaling properties of S_loss as a function of a number of variables: the number of participants, N_part; the number of quark participants, N_qp; the charged-particle density, dN_ch/dη; and the Bjorken energy density times the equilibration time, ε_Bj τ_0. We also find that the p_T at which S_loss has its maximum varies both with centrality and collision energy. Above the maximum, S_loss tends to follow a power-law function with all four scaling variables. Finally, the data at √s_NN = 200 GeV and 2.76 TeV, for sufficiently high particle densities, show a common scaling of S_loss with dN_ch/dη and ε_Bj τ_0, lending insight into the physics of parton energy loss.
ERIC Educational Resources Information Center
Cairney, John; Hay, John; Veldhuizen, Scott; Faught, Brent
2010-01-01
Oxygen consumption at peak physical exertion (VO₂ maximum) is the most widely used indicator of cardiorespiratory fitness. The purpose of this study was to compare two protocols for its estimation, cycle ergometer testing and the 20 m shuttle run, among children with and without probable developmental coordination disorder (pDCD). The…
NASA Astrophysics Data System (ADS)
Hora, H.; Miley, G. H.
2007-12-01
One of the most convincing facts about LENR due to deuterons of very high concentration in host metals such as palladium is the measurement of the large-scale minimum of the reaction probability as a function of the nucleon number A of the generated elements, within which a local maximum was measured at A = 153. This is similar to the fission of uranium at A = 119, where the local maximum follows from the Maruhn-Greiner theory if the splitting nuclei are excited to about MeV energy. The LENR-generated elements can be documented any time after the reaction by SIMS or K-shell X-ray excitation to show the very unique distribution with the local maximum. An explanation is based on the strong Debye screening of the Maxwellian deuterons within the degenerate rigid electron background, especially within the swimming electron layer at the metal surface or at interfaces. The deuterons behave like neutrals at distances of about 2 picometers. They may form clusters due to soft attraction in the range above thermal energy. Clusters of 10 pm diameter may react over long times (megaseconds) with Pd nuclei, leading to a double magic number compound nucleus which splits, as in fission, into the A = 153 element distribution.
Decision analysis with approximate probabilities
NASA Technical Reports Server (NTRS)
Whalen, Thomas
1992-01-01
This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied. This is due to the fact that some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decisionmaking using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second order maximum entropy principle, performed best overall.
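A minimal sketch of this kind of comparison (the payoffs and rounded probabilities below are hypothetical, and a simple renormalized point estimate stands in for the paper's Midpoint criterion):

    # Hypothetical three-state decision problem with probabilities rounded to tenths.
    import numpy as np

    payoffs = np.array([  # rows = acts, columns = states of nature
        [ 5.0,  5.0,  5.0],   # safe act
        [12.0,  2.0, -4.0],   # aggressive act
        [ 8.0,  4.0,  0.0],   # intermediate act
    ])
    rounded_p = np.array([0.5, 0.3, 0.1])          # rounded tenths; sums to 0.9
    midpoint_p = rounded_p / rounded_p.sum()       # renormalized point estimate

    maximin_act = int(np.argmax(payoffs.min(axis=1)))
    expected_act = int(np.argmax(payoffs @ midpoint_p))
    print("Maximin picks act", maximin_act, "| expected-value picks act", expected_act)

With these numbers the Maximin criterion favors the safe act while the expected-value choice favors the aggressive act, illustrating how the criteria can diverge under imprecise probabilities.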
NASA Astrophysics Data System (ADS)
Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao
2018-06-01
This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. The time series of the slamming pressure during the wave impact were first obtained through statistical analyses on experimental data. The exceeding probability distribution of the maximum slamming pressure peak and distribution parameters were analyzed, and the results show that the exceeding probability distribution of the maximum slamming pressure peak accords with the three-parameter Weibull distribution. Furthermore, the range and relationships of the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceeding probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions were comprehensively presented, and the parameter values of the Weibull distribution of wave-slamming pressure peaks were different due to different test models. The parameter values were found to decrease due to the increased stiffness of the elastic support. The damage criterion of the structure model caused by the wave impact was initially discussed, and the structure model was destroyed when the average slamming time was greater than a certain value during the duration of the wave impact. The conclusions of the experimental study were then described.
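As a hedged illustration of the distributional statement (the shape, location, and scale values below are placeholders, chosen only so that D + L is near 1.0), the exceedance probability of a three-parameter Weibull peak can be checked at the distribution mean; for shape parameters above 1 this exceedance is above 1/e ≈ 36.79%, consistent with the figure quoted in the abstract:

    # Exceedance probability P(X > x) for a three-parameter Weibull distribution,
    # evaluated at the distribution mean. Parameter names follow the abstract
    # (location D, scale L) plus a shape parameter k; the values are illustrative.
    import math

    def weibull3_exceedance(x, D, L, k):
        if x <= D:
            return 1.0
        return math.exp(-(((x - D) / L) ** k))

    def weibull3_mean(D, L, k):
        return D + L * math.gamma(1.0 + 1.0 / k)

    D, L, k = 0.3, 0.7, 2.0           # illustrative values with D + L = 1.0
    m = weibull3_mean(D, L, k)
    print(f"mean = {m:.3f}, P(X > mean) = {weibull3_exceedance(m, D, L, k):.4f}")

Note that the exceedance at the mean, exp(-Γ(1 + 1/k)^k), depends only on the shape parameter: D and L shift the mean but not this probability.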
How to model a negligible probability under the WTO sanitary and phytosanitary agreement?
Powell, Mark R
2013-06-01
Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. © 2012 Society for Risk Analysis.
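The arithmetic behind the comparison, in display form (these two expectations are the values quoted in the abstract):

    \text{Uniform}(0,\,10^{-6}): \quad E[X] = \frac{0 + 10^{-6}}{2} = 5\times 10^{-7},
    \qquad
    \text{Triangular}(0,\,0,\,10^{-6}): \quad E[X] = \frac{0 + 0 + 10^{-6}}{3} \approx 3.3\times 10^{-7}.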
NASA Astrophysics Data System (ADS)
Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei
2018-01-01
In this paper, we study the estimation of the reliability of a multicomponent system, namely an N-M cold-standby redundancy system, based on a progressively Type-II censored sample. In the system, there are N subsystems, each consisting of M independently distributed strength components; only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems fail. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic family with different shape parameters. The reliability of the system is estimated by using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under the squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed by using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
Estimating loss of Brucella abortus antibodies from age-specific serological data in elk
Benavides, J. A.; Caillaud, D.; Scurlock, B. M.; Maichak, E. J.; Edwards, W.H.; Cross, Paul C.
2017-01-01
Serological data are one of the primary sources of information for disease monitoring in wildlife. However, the duration of the seropositive status of exposed individuals is almost always unknown for many free-ranging host species. Directly estimating rates of antibody loss typically requires difficult longitudinal sampling of individuals following seroconversion. Instead, we propose a Bayesian statistical approach linking age and serological data to a mechanistic epidemiological model to infer brucellosis infection, the probability of antibody loss, and recovery rates of elk (Cervus canadensis) in the Greater Yellowstone Ecosystem. We found that seroprevalence declined above the age of ten, with no evidence of disease-induced mortality. The probability of antibody loss was estimated to be 0.70 per year after a five-year period of seropositivity, and the basic reproduction number for brucellosis to be 2.13. Our results suggest that individuals are unlikely to become re-infected because models with this mechanism were unable to reproduce a significant decline in seroprevalence in older individuals. This study highlights the possible implications of antibody loss, which could bias our estimation of critical epidemiological parameters for wildlife disease management based on serological data.
Hotspot Identification for Shanghai Expressways Using the Quantitative Risk Assessment Method
Chen, Can; Li, Tienan; Sun, Jian; Chen, Feng
2016-01-01
Hotspot identification (HSID) is the first and key step of the expressway safety management process. This study presents a new HSID method using the quantitative risk assessment (QRA) technique. Crashes that are likely to happen for a specific site are treated as the risk. The aggregation of the crash occurrence probability for all exposure vehicles is estimated based on the empirical Bayesian method. As for the consequences of crashes, crashes may not only cause direct losses (e.g., occupant injuries and property damages) but also result in indirect losses. The indirect losses are expressed by the extra delays calculated using the deterministic queuing diagram method. The direct losses and indirect losses are uniformly monetized to be considered as the consequences of this risk. The potential costs of crashes, as a criterion to rank high-risk sites, can be explicitly expressed as the sum of the crash probability for all passing vehicles and the corresponding consequences of crashes. A case study on the urban expressways of Shanghai is presented. The results show that the new QRA method for HSID enables the identification of a set of high-risk sites that truly reveal the potential crash costs to society. PMID:28036009
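A schematic of the ranking criterion described (the site names, traffic volumes, probabilities, and monetized losses below are illustrative placeholders, not values from the Shanghai case study):

    # Schematic potential-crash-cost ranking for hotspot identification:
    # cost_site = expected crashes * (direct loss + monetized delay loss).
    sites = {
        "A": {"veh_per_day": 85000, "crash_prob_per_veh": 2.0e-6,
              "direct_loss": 12000.0, "delay_loss": 30000.0},
        "B": {"veh_per_day": 40000, "crash_prob_per_veh": 6.0e-6,
              "direct_loss": 15000.0, "delay_loss": 8000.0},
    }

    def potential_crash_cost(s):
        expected_crashes = s["veh_per_day"] * s["crash_prob_per_veh"]
        return expected_crashes * (s["direct_loss"] + s["delay_loss"])

    ranked = sorted(sites, key=lambda name: potential_crash_cost(sites[name]), reverse=True)
    for name in ranked:
        print(name, round(potential_crash_cost(sites[name]), 2))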
Carbon storage capacity of semi-arid grassland soils and sequestration potentials in northern China.
Wiesmeier, Martin; Munro, Sam; Barthold, Frauke; Steffens, Markus; Schad, Peter; Kögel-Knabner, Ingrid
2015-10-01
Organic carbon (OC) sequestration in degraded semi-arid environments by improved soil management is assumed to contribute substantially to climate change mitigation. However, information about the soil organic carbon (SOC) sequestration potential in steppe soils and their current saturation status remains unknown. In this study, we estimated the OC storage capacity of semi-arid grassland soils on the basis of remote, natural steppe fragments in northern China. Based on the maximum OC saturation of silt and clay particles <20 μm, OC sequestration potentials of degraded steppe soils (grazing land, arable land, eroded areas) were estimated. The analysis of natural grassland soils revealed a strong linear regression between the proportion of the fine fraction and its OC content, confirming the importance of silt and clay particles for OC stabilization in steppe soils. This relationship was similar to derived regressions in temperate and tropical soils but on a lower level, probably due to a lower C input and different clay mineralogy. In relation to the estimated OC storage capacity, degraded steppe soils showed a high OC saturation of 78-85% despite massive SOC losses due to unsustainable land use. As a result, the potential of degraded grassland soils to sequester additional OC was generally low. This can be related to a relatively high contribution of labile SOC, which is preferentially lost in the course of soil degradation. Moreover, wind erosion leads to substantial loss of silt and clay particles and consequently results in a direct loss of the ability to stabilize additional OC. Our findings indicate that the SOC loss in semi-arid environments induced by intensive land use is largely irreversible. Observed SOC increases after improved land management mainly result in an accumulation of labile SOC prone to land use/climate changes and therefore cannot be regarded as contribution to long-term OC sequestration. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step Maximum A Posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian Inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulties when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. The details will be reported at the meeting.
Costa, Eleonora C V; Guimarães, Sara; Ferreira, Domingos; Pereira, M Graça
2016-09-01
This study examined whether abuse during childhood, rape in adulthood, and loss of resources predict a woman's probability of reporting symptoms of posttraumatic stress disorder (PTSD), and whether resource loss moderates the association between reporting childhood abuse and PTSD symptoms. The sample included 767 women and was collected in publicly funded primary-care settings. Women who reported having been abused during childhood also reported more resource loss, more acute PTSD symptoms, and having suffered more adult rape than those who reported no childhood abuse. Hierarchical logistic regression yielded a two-variable additive model in which child abuse and adult rape predict the probability of reporting any PTSD symptoms, explaining 59.7% of the variance. Women abused as children were 1 to 2 times more likely to report PTSD symptoms, with sexual abuse during childhood contributing most strongly to this result. Similarly, women reporting adult rape were almost twice as likely to report symptoms of PTSD as those not reporting it. Resource loss was unexpectedly not among the predictors, but a moderation analysis showed that such loss moderated the association between child abuse and current PTSD symptoms, with resource loss increasing the number and severity of PTSD symptoms in women who also reported childhood abuse. The findings highlight the importance of early assessment and intervention in providing mental health care to abused, neglected, and impoverished women to help them prevent and reverse resource loss and revictimization.
Complicated grief associated with hurricane Katrina.
Shear, M Katherine; McLaughlin, Katie A; Ghesquiere, Angela; Gruber, Michael J; Sampson, Nancy A; Kessler, Ronald C
2011-08-01
Although losses are important consequences of disasters, few epidemiological studies of disasters have assessed complicated grief (CG) and none assessed CG associated with losses other than death of loved one. Data come from the baseline survey of the Hurricane Katrina Community Advisory Group, a representative sample of 3,088 residents of the areas directly affected by Hurricane Katrina. A brief screen for CG was included containing four items consistent with the proposed DSM-V criteria for a diagnosis of bereavement-related adjustment disorder. Fifty-eight and a half percent of respondents reported a significant hurricane-related loss: Most-severe losses were 29.0% tangible, 9.5% interpersonal, 8.1% intangible, 4.2% work/financial, and 3.7% death of loved one. Twenty-six point one percent of respondents with significant loss had possible CG and 7.0% moderate-to-severe CG. Death of loved one was associated with the highest conditional probability of moderate-to-severe CG (18.5%, compared to 1.1-10.5% conditional probabilities for other losses), but accounted for only 16.5% of moderate-to-severe CG due to its comparatively low prevalence. Most moderate-to-severe CG was due to tangible (52.9%) or interpersonal (24.0%) losses. Significant predictors of CG were mostly unique to either bereavement (racial-ethnic minority status, social support) or other losses (prehurricane history of psychopathology, social competence). Nonbereavement losses accounted for the vast majority of hurricane-related possible CG despite risk of CG being much higher in response to bereavement than to other losses. This result argues for expansion of research on CG beyond bereavement and alerts clinicians to the need to address postdisaster grief associated with a wide range of losses. © 2011 Wiley-Liss, Inc.
Complicated grief associated with Hurricane Katrina
Shear, M. Katherine; McLaughlin, Katie A.; Ghesquiere, Angela; Gruber, Michael J.; Sampson, Nancy A.; Kessler, Ronald C.
2011-01-01
Background Although losses are important consequences of disasters, few epidemiological studies of disasters have assessed complicated grief (CG) and none assessed CG associated with losses other than death of loved one. Methods Data come from the baseline survey of the Hurricane Katrina Community Advisory Group (CAG), a representative sample of 3,088 residents of the areas directly affected by Hurricane Katrina. A brief screen for CG was included containing four items consistent with the proposed DSM 5 criteria for a diagnosis of bereavement-related adjustment disorder. Results 58.5% of respondents reported a significant hurricane-related loss: Most-severe losses were 29.0% tangible, 9.5% interpersonal, 8.1% intangible, 4.2% work-financial, and 3.7% death of loved one. 26.1% of respondents with significant loss had possible CG and 7.0% moderate-severe CG. Death of loved one was associated with the highest conditional probability of moderate-severe CG (18.5%, compared to 1.1–10.5% conditional probabilities for other losses) but accounted for only 16.5% of moderate-severe CG due to its comparatively low prevalence. Most moderate-severe CG was due to tangible (52.9%) or interpersonal (24.0%) losses. Significant predictors of CG were mostly unique to either bereavement (racial-ethnic minority status, social support) or other losses (pre-hurricane history of psychopathology, social competence.). Conclusions Non-bereavement losses accounted for the vast majority of hurricane-related possible CG despite risk of CG being much higher in response to bereavement than to other losses. This result argues for expansion of research on CG beyond bereavement and alerts clinicians to the need to address post-disaster grief associated with a wide range of losses. PMID:21796740
Effect of density feedback on the two-route traffic scenario with bottleneck
NASA Astrophysics Data System (ADS)
Sun, Xiao-Yan; Ding, Zhong-Jun; Huang, Guo-Hua
2016-12-01
In this paper, we investigate the effect of density feedback on the two-route scenario with a bottleneck. The simulation and theoretical analysis show that there exist two critical vehicle entry probabilities, αc1 and αc2. When the vehicle entry probability α ≤ αc1, four different states, i.e., the free-flow state, transition state, maximum-current state and congestion state, are identified in the system, corresponding to three critical reference densities. However, in the interval αc1 < α < αc2, the free-flow and transition states disappear, and when α ≥ αc2 only the congestion state remains. According to the results, the traffic control center can adjust the reference density so that the system stays in the maximum-current state. In this case, the capacity of the traffic system reaches its maximum so that drivers can make full use of the roads. We hope that the study results can provide good advice for alleviating traffic jams and be useful to traffic control centers for designing advanced traveller information systems.
Quantifying Extrinsic Noise in Gene Expression Using the Maximum Entropy Framework
Dixit, Purushottam D.
2013-01-01
We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed. PMID:23790383
Quantifying extrinsic noise in gene expression using the maximum entropy framework.
Dixit, Purushottam D
2013-06-18
We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
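A toy numerical illustration of the wider-than-Poisson effect attributed to extrinsic variability (this is not the paper's maximum entropy framework; the Gamma-distributed rate and its parameters are assumptions): mixing Poisson copy numbers over a fluctuating rate inflates the Fano factor above the Poisson value of 1.

    # Toy demonstration: mixing a Poisson copy-number distribution over a
    # Gamma-distributed extrinsic rate produces a wider-than-Poisson distribution
    # (Fano factor > 1). Parameters are arbitrary.
    import numpy as np

    rng = np.random.default_rng(1)
    n_cells = 100_000
    mean_rate, cv_extrinsic = 20.0, 0.3            # mean copy number and extrinsic CV

    shape = 1.0 / cv_extrinsic**2
    scale = mean_rate / shape
    rates = rng.gamma(shape, scale, size=n_cells)  # cell-to-cell extrinsic variation
    copies = rng.poisson(rates)                    # intrinsic (Poisson) noise on top

    fano = copies.var() / copies.mean()
    print(f"mean = {copies.mean():.2f}, Fano factor = {fano:.2f}  (Poisson would be 1)")

The analytic check here is Fano ≈ 1 + mean × CV², i.e. 1 + 20 × 0.09 = 2.8 for these placeholder values.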
Fast Response of the Tropics to an Abrupt Loss of Arctic Sea Ice via Ocean Dynamics
NASA Astrophysics Data System (ADS)
Wang, Kun; Deser, Clara; Sun, Lantao; Tomas, Robert A.
2018-05-01
The role of ocean dynamics in the transient adjustment of the coupled climate system to an abrupt loss of Arctic sea ice is investigated using experiments with Community Climate System Model version 4 in two configurations: a thermodynamic slab mixed layer ocean and a full-depth ocean that includes both dynamics and thermodynamics. Ocean dynamics produce a distinct sea surface temperature warming maximum in the eastern equatorial Pacific, accompanied by an equatorward intensification of the Intertropical Convergence Zone and Hadley Circulation. These tropical responses are established within 25 years of ice loss and contrast markedly with the quasi-steady antisymmetric coupled response in the slab-ocean configuration. A heat budget analysis reveals the importance of anomalous vertical advection tied to a monotonic temperature increase below 200 m for the equatorial sea surface temperature warming maximum in the fully coupled model. Ocean dynamics also rapidly modify the midlatitude atmospheric response to sea ice loss.
Stochastic Modeling of Empirical Storm Loss in Germany
NASA Astrophysics Data System (ADS)
Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.
2012-04-01
Based on German insurance loss data for residential property, we derive storm damage functions that relate daily loss to maximum gust wind speed. Over a wide range of losses, steep power-law relationships are found, with spatially varying exponents ranging between approximately 8 and 12. Global correlations between parameters and socio-demographic data are employed to reduce the number of local parameters to 3. We apply a Monte Carlo approach to calculate German loss estimates, including confidence bounds, at daily and annual resolution. Our model reproduces the annual progression of winter storm losses and enables estimation of daily losses over a wide range of magnitudes.
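A hedged sketch of a storm damage function of the type described (the threshold, scale, and exponent below are placeholders; the paper reports spatially varying exponents of roughly 8 to 12):

    # Illustrative storm damage function: daily loss rises as a steep power law
    # of the daily maximum gust speed above a regional threshold.
    def daily_loss(v_max_gust, v_thresh=20.0, scale=1.0e-3, exponent=10.0):
        """Loss (arbitrary monetary units) for one day and one region; placeholder values."""
        if v_max_gust <= v_thresh:
            return 0.0
        return scale * (v_max_gust / v_thresh) ** exponent

    for v in (22, 28, 35):   # gust speeds in m/s
        print(v, "m/s ->", round(daily_loss(v), 3))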
Fusion competent synaptic vesicles persist upon active zone disruption and loss of vesicle docking
Wang, Shan Shan H.; Held, Richard G.; Wong, Man Yan; Liu, Changliang; Karakhanyan, Aziz; Kaeser, Pascal S.
2016-01-01
In a nerve terminal, synaptic vesicle docking and release are restricted to an active zone. The active zone is a protein scaffold that is attached to the presynaptic plasma membrane and opposed to postsynaptic receptors. Here, we generated conditional knockout mice removing the active zone proteins RIM and ELKS, which additionally led to loss of Munc13, Bassoon, Piccolo, and RIM-BP, indicating disassembly of the active zone. We observed a near complete lack of synaptic vesicle docking and a strong reduction in vesicular release probability and the speed of exocytosis, but total vesicle numbers, SNARE protein levels, and postsynaptic densities remained unaffected. Despite loss of the priming proteins Munc13 and RIM and of docked vesicles, a pool of releasable vesicles remained. Thus, the active zone is necessary for synaptic vesicle docking and to enhance release probability, but releasable vesicles can be localized distant from the presynaptic plasma membrane. PMID:27537483
Bipolar nebulae and mass loss from red giant stars
NASA Technical Reports Server (NTRS)
Cohen, M.
1985-01-01
Observations of several bipolar nebulae are used to learn something of the nature of mass loss from the probable red-giant progenitors of these nebulae. Phenomena discussed are: (1) GL 2688's probable optical molecular emissions; (2) newly discovered very high velocity knots along the axis of OH 0739 - 14, which reveal evidence for mass ejections of + or - 300 km/s from the M9 III star embedded in this nebula; (3) the bipolar structure of three extreme carbon stars, and the evidence for periodic mass ejection in IRC + 30219, also at high speed (about 80 km/s); and (4) the curious cool TiO-rich region above Parsamian 13, which may represent the very recent shedding of photospheric material from a cool, oxygen-rich giant. Several general key questions about bipolar nebulae that relate to the process of mass loss from their progenitor stars are raised.
14 CFR 31.19 - Performance: Uncontrolled descent.
Code of Federal Regulations, 2010 CFR
2010-01-01
... single failure of the heater assembly, fuel cell system, gas valve system, or maneuvering vent system, or from any single tear in the balloon envelope between tear stoppers: (1) The maximum vertical velocity attained. (2) The altitude loss from the point of failure to the point at which maximum vertical velocity...
49 CFR 178.812 - Top lift test.
Code of Federal Regulations, 2013 CFR
2013-10-01
... renders the IBC, including the base pallets when applicable, unsafe for transportation, and no loss of... twice the maximum permissible gross mass with the load being evenly distributed. (2) Flexible IBC design types must be filled to six times the maximum net mass, the load being evenly distributed. (c) Test...
49 CFR 178.812 - Top lift test.
Code of Federal Regulations, 2014 CFR
2014-10-01
... renders the IBC, including the base pallets when applicable, unsafe for transportation, and no loss of... twice the maximum permissible gross mass with the load being evenly distributed. (2) Flexible IBC design types must be filled to six times the maximum net mass, the load being evenly distributed. (c) Test...
49 CFR 178.812 - Top lift test.
Code of Federal Regulations, 2012 CFR
2012-10-01
... renders the IBC, including the base pallets when applicable, unsafe for transportation, and no loss of... twice the maximum permissible gross mass with the load being evenly distributed. (2) Flexible IBC design types must be filled to six times the maximum net mass, the load being evenly distributed. (c) Test...
49 CFR 178.812 - Top lift test.
Code of Federal Regulations, 2011 CFR
2011-10-01
... renders the IBC, including the base pallets when applicable, unsafe for transportation, and no loss of... twice the maximum permissible gross mass with the load being evenly distributed. (2) Flexible IBC design types must be filled to six times the maximum net mass, the load being evenly distributed. (c) Test...
A phylogenetic approach to total evaporative water loss in mammals.
Van Sant, Matthew J; Oufiero, Christopher E; Muñoz-Garcia, Agustí; Hammond, Kimberly A; Williams, Joseph B
2012-01-01
Maintaining appropriate water balance is a constant challenge for terrestrial mammals, and this problem can be exacerbated in desiccating environments. It has been proposed that natural selection has provided desert-dwelling mammals physiological mechanisms to reduce rates of total evaporative water loss. In this study, we evaluated the relationship between total evaporative water loss and body mass in mammals by using a recent phylogenetic hypothesis. We compared total evaporative water loss in 80 species of arid-zone mammals to that in 56 species that inhabit mesic regions, ranging in size from 4 g to 3,500 kg, to test the hypothesis that mammals from arid environments have lower rates of total evaporative water loss than mammals from mesic environments once phylogeny is taken into account. We found that arid species had lower rates of total evaporative water loss than mesic species when using a dichotomous variable to describe habitat (arid or mesic). We also found that total evaporative water loss was negatively correlated with the average maximum and minimum environmental temperature as well as the maximum vapor pressure deficit of the environment. Annual precipitation and the variable Q (a measure of habitat aridity) were positively correlated with total evaporative water loss. These results support the hypothesis that desert-dwelling mammals have lower rates of total evaporative water loss than mesic species after controlling for body mass and evolutionary relatedness regardless of whether categorical or continuous variables are used to describe habitat.
A rational decision rule with extreme events.
Basili, Marcello
2006-12-01
Risks induced by extreme events are characterized by small or ambiguous probabilities, catastrophic losses, or windfall gains. Through a new functional that mimics the restricted Bayes-Hurwicz criterion within the Choquet expected utility approach, it is possible to represent the decision maker's behavior when facing both risky (large and reliable probability) and extreme (small or ambiguous probability) events. A new formalization of the precautionary principle (PP) is presented, and a new functional, which encompasses both extreme outcomes and the expectation of all possible results for every act, is proposed.
Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.
Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D
2013-01-01
We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144³ points and three different configurations on grids of 4096³ points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.
Smith, Eric Krabbe; O'Neill, Jacqueline J; Gerson, Alexander R; McKechnie, Andrew E; Wolf, Blair O
2017-09-15
We examined thermoregulatory performance in seven Sonoran Desert passerine bird species varying in body mass from 10 to 70 g - lesser goldfinch, house finch, pyrrhuloxia, cactus wren, northern cardinal, Abert's towhee and curve-billed thrasher. Using flow-through respirometry, we measured daytime resting metabolism, evaporative water loss and body temperature at air temperatures ( T air ) between 30 and 52°C. We found marked increases in resting metabolism above the upper critical temperature ( T uc ), which for six of the seven species fell within a relatively narrow range (36.2-39.7°C), but which was considerably higher in the largest species, the curve-billed thrasher (42.6°C). Resting metabolism and evaporative water loss were minimal below the T uc and increased with T air and body mass to maximum values among species of 0.38-1.62 W and 0.87-4.02 g H 2 O h -1 , respectively. Body temperature reached maximum values ranging from 43.5 to 45.3°C. Evaporative cooling capacity, the ratio of evaporative heat loss to metabolic heat production, reached maximum values ranging from 1.39 to 2.06, consistent with known values for passeriforms and much lower than values in taxa such as columbiforms and caprimulgiforms. These maximum values occurred at heat tolerance limits that did not scale with body mass among species, but were ∼50°C for all species except the pyrrhuloxia and Abert's towhee (48°C). High metabolic costs associated with respiratory evaporation appeared to drive the limited heat tolerance in these desert passeriforms, compared with larger desert columbiforms and galliforms that use metabolically more efficient mechanisms of evaporative heat loss. © 2017. Published by The Company of Biologists Ltd.
1978-08-01
dam is a concrete gravity dam with earth abutments. It is 730 ft. long and the maximum height of it is 54 ft. The dam is assessed to be in poor...concrete gravity dam with earth abutments constructed in 1920. Overall length is 730 feet and maximum height is 54 feet. The Spicket River flows 5...the Spillway Test flood is based on the estimated "Probable Maximum Flood" for the region ( greatest reasonably possible storm runoff), or fractions
Maximum-likelihood block detection of noncoherent continuous phase modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1993-01-01
This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.
Maximum of a Fractional Brownian Motion: Analytic Results from Perturbation Theory.
Delorme, Mathieu; Wiese, Kay Jörg
2015-11-20
Fractional Brownian motion is a non-Markovian Gaussian process X_{t}, indexed by the Hurst exponent H. It generalizes standard Brownian motion (corresponding to H=1/2). We study the probability distribution of the maximum m of the process and the time t_{max} at which the maximum is reached. They are encoded in a path integral, which we evaluate perturbatively around a Brownian, setting H=1/2+ϵ. This allows us to derive analytic results beyond the scaling exponents. Extensive numerical simulations for different values of H test these analytical predictions and show excellent agreement, even for large ϵ.
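A minimal numerical companion to this result, in the spirit of the paper's simulations rather than its perturbative calculation: the sketch below samples exact fBm paths from the covariance 0.5(s^{2H} + t^{2H} - |t - s|^{2H}) via a Cholesky factor and tabulates the maximum m and its time t_max. Grid size, path count and H = 0.6 are arbitrary choices.

```python
import numpy as np

def fbm_paths(H, n_steps=200, n_paths=20000, T=1.0, seed=1):
    """Exact fBm samples on a grid, using
    Cov(X_s, X_t) = 0.5 * (s^2H + t^2H - |t - s|^2H)."""
    t = np.linspace(T / n_steps, T, n_steps)            # t > 0; X_0 = 0 is added below
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    z = np.random.default_rng(seed).standard_normal((n_paths, n_steps))
    X = np.hstack([np.zeros((n_paths, 1)), z @ L.T])    # prepend X_0 = 0
    return np.concatenate(([0.0], t)), X

H = 0.6                        # Hurst exponent, i.e. epsilon = 0.1 in H = 1/2 + epsilon
t, X = fbm_paths(H)
m = X.max(axis=1)              # maximum of each path
t_max = t[X.argmax(axis=1)]    # time at which the maximum is reached
print("E[m] =", m.mean(), "  E[t_max] =", t_max.mean())
```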
Effectiveness of Africa's tropical protected areas for maintaining forest cover.
Bowker, J N; De Vos, A; Ament, J M; Cumming, G S
2017-06-01
The effectiveness of parks for forest conservation is widely debated in Africa, where increasing human pressure, insufficient funding, and lack of management capacity frequently place significant demands on forests. Tropical forests house a substantial portion of the world's remaining biodiversity and are heavily affected by anthropogenic activity. We analyzed park effectiveness at the individual (224 parks) and national (23 countries) level across Africa by comparing the extent of forest loss (as a proxy for deforestation) inside parks to matched unprotected control sites. Although significant geographical variation existed among parks, the majority of African parks had significantly less forest loss within their boundaries (e.g., Mahale Park had 34 times less forest loss within its boundary) than control sites. Accessibility was a significant driver of forest loss. Relatively inaccessible areas had a higher probability (odds ratio >1, p < 0.001) of forest loss but only in ineffective parks, and relatively accessible areas had a higher probability of forest loss but only in effective parks. Smaller parks less effectively prevented forest loss inside park boundaries than larger parks (T = -2.32, p < 0.05), and older parks less effectively prevented forest loss inside park boundaries than younger parks (F 2,154 = -4.11, p < 0.001). Our analyses, the first individual and national assessment of park effectiveness across Africa, demonstrated the complexity of factors (such as geographical variation, accessibility, and park size and age) influencing the ability of a park to curb forest loss within its boundaries. © 2016 Society for Conservation Biology.
Accident hazard evaluation and control decisions on forested recreation sites
Lee A. Paine
1971-01-01
Accident hazard associated with trees on recreation sites is inherently concerned with probabilities. The major factors include the probabilities of mechanical failure and of target impact if failure occurs, the damage potential of the failure, and the target value. Hazard may be evaluated as the product of these factors; i.e., expected loss during the current...
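A minimal sketch of the product-of-factors rating described in this abstract, with entirely hypothetical numbers for a single tree/target pair:

```python
# Hypothetical tree-hazard rating following the product-of-factors idea:
# expected loss = P(mechanical failure) * P(target impact | failure)
#                 * damage potential (fraction of value lost) * target value.
def expected_loss(p_failure, p_impact_given_failure, damage_fraction, target_value):
    return p_failure * p_impact_given_failure * damage_fraction * target_value

# Example: a defective tree standing over a campsite shelter (all numbers hypothetical).
print(expected_loss(p_failure=0.05,             # chance of failure this season
                    p_impact_given_failure=0.2, # chance the failure strikes the target
                    damage_fraction=0.5,        # half the target value destroyed
                    target_value=20000.0))      # target value in dollars
```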
Variation with Mach Number of Static and Total Pressures Through Various Screens
NASA Technical Reports Server (NTRS)
Adler, Alfred A
1946-01-01
Tests were conducted in the Langley 24-inch high-speed tunnel to ascertain the static-pressure and total-pressure losses through screens ranging in mesh from 3 to 12 wires per inch and in wire diameter from 0.023 to 0.041 inch. Data were obtained from a Mach number of approximately 0.20 up to the maximum (choking) Mach number obtainable for each screen. The results of this investigation indicate that the pressure losses increase with increasing Mach number until the choking Mach number, which can be computed, is reached. Since choking imposes a restriction on the mass rate of flow and maximum losses are incurred at this condition, great care must be taken in selecting the screen mesh and wire diameter for an installation so that the choking Mach number is not reached at the installation's operating conditions.
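The report states that the choking Mach number can be computed; the following is only a rough, loss-free sketch of one common way to estimate it, under the assumption that the screen chokes when the flow through its open area reaches Mach 1, using the isentropic area-Mach relation with the geometric open-area fraction (real screens choke somewhat differently because of losses):

```python
import numpy as np
from scipy.optimize import brentq

GAMMA = 1.4  # ratio of specific heats for air

def area_ratio(M, gamma=GAMMA):
    """Isentropic A/A* as a function of Mach number."""
    return (1.0 / M) * ((2.0 / (gamma + 1)) *
                        (1 + 0.5 * (gamma - 1) * M**2))**((gamma + 1) / (2 * (gamma - 1)))

def choking_mach(open_area_ratio):
    """Approach Mach number at which a screen with the given open-area fraction
    chokes, under the crude assumption that the flow in the open area reaches
    M = 1 (loss-free estimate; subsonic branch only)."""
    return brentq(lambda M: area_ratio(M) - 1.0 / open_area_ratio, 1e-4, 1.0 - 1e-9)

# Hypothetical example: 8-mesh screen of 0.028-in. wire,
# open-area fraction roughly (1 - mesh * wire diameter)^2.
sigma = (1 - 8 * 0.028)**2
print(f"open-area ratio {sigma:.2f}, estimated choking Mach {choking_mach(sigma):.2f}")
```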
Understanding clinical and non-clinical decisions under uncertainty: a scenario-based survey.
Simianu, Vlad V; Grounds, Margaret A; Joslyn, Susan L; LeClerc, Jared E; Ehlers, Anne P; Agrawal, Nidhi; Alfonso-Cristancho, Rafael; Flaxman, Abraham D; Flum, David R
2016-12-01
Prospect theory suggests that when faced with an uncertain outcome, people display loss aversion by preferring to risk a greater loss rather than incurring a certain, lesser cost. Providing probability information improves decision making towards the economically optimal choice in these situations. Clinicians frequently make decisions when the outcome is uncertain, and loss aversion may influence choices. This study explores the extent to which prospect theory, loss aversion, and probability information in a non-clinical domain explain clinical decision making under uncertainty. Four hundred sixty-two participants (n = 117 non-medical undergraduates, n = 113 medical students, n = 117 resident trainees, and n = 115 medical/surgical faculty) completed a three-part online task. First, participants completed an iced-road salting task using temperature forecasts with or without explicit probability information. Second, participants chose between less or more risk-averse ("defensive medicine") decisions in standardized scenarios. Last, participants chose between recommending therapy with certain outcomes or risking additional years gained or lost. In the road salting task, the mean expected value for decisions made by clinicians was better than for non-clinicians (-$1,022 vs -$1,061; p < 0.001). Probability information improved decision making for all participants, but non-clinicians improved more (mean improvement of $64 versus $33; p = 0.027). Mean defensive decisions decreased across training level (medical students 2.1 ± 0.9, residents 1.6 ± 0.8, faculty 1.6 ± 1.1; p-trend < 0.001) and prospect-theory-concordant decisions increased (25.4%, 33.9%, and 40.7%; p-trend = 0.016). There was no relationship identified between road salting choices and defensive medicine or prospect-theory-concordant decisions. All participants made more economically rational decisions when provided explicit probability information in a non-clinical domain. However, choices in the non-clinical domain were not related to prospect-theory-concordant decision making and risk aversion tendencies in the clinical domain. Recognizing this discordance may be important when applying prospect theory to interventions aimed at improving clinical care.
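For readers unfamiliar with the prospect-theory ingredients invoked here (loss aversion and probability weighting), the sketch below uses the standard Tversky-Kahneman functional forms with commonly cited parameter values; it is purely illustrative and is not this study's task or fitted model.

```python
import numpy as np

# Standard cumulative-prospect-theory forms (Tversky & Kahneman, 1992),
# with commonly cited parameter values; purely illustrative.
ALPHA, BETA, LAM, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    """Value function: concave for gains, convex and steeper for losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x)**ALPHA, -LAM * np.abs(x)**BETA)

def weight(p, gamma=GAMMA):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

# Certain loss of $1,000 vs. a 50% chance of losing $2,500 (a salting-type choice).
certain = value(-1000)
gamble  = weight(0.5) * value(-2500)
print("prospect value, certain loss:", certain)
print("prospect value, risky loss:  ", gamble)
# Diminishing sensitivity to losses and probability weighting can make the
# gamble look better despite its worse expected value (-1250 vs -1000).
```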
On the Importance of Cycle Minimum in Sunspot Cycle Prediction
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1996-01-01
The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.
Design of a Field Test for Probability of Hit by Antiaircraft Guns
1973-02-01
...not available. The cost of conducting the numerous field test trials that would be needed to establish the loss rates of aircraft to antiaircraft... mathematical models provide a readily available and relatively inexpensive way to obtain estimates of aircraft losses to antiaircraft guns. Because these... aircraft losses to antiaircraft guns, the use of the models can contribute greatly to better decisions. But if the models produce invalid estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bretschneider, C.L.
1980-06-01
This volume is an extension of and consists of several modifications to the earlier report by Bretschneider (April 1979) on the subject of hurricane design wind, wave and current criteria for the four potential OTEC sites. The 100-year hurricane criteria for the design of OTEC plants is included. The criteria, in addition to the maximum conditions of winds, waves and surface current, include: hurricane fields for wind speed U_s and significant wave height H_s; hurricane fields for modal wave period f_0^-1 and maximum energy density S_max of the wave spectrum; the corresponding Ekman wind-driven surface current V_s; tabulated cross-sections for U_s, H_s, f_0^-1 and S_max through max U_s and through max H_s along traverses at right angles to and along traverses parallel to the forward movement of the hurricane; most probable maximum wave height and the expected corresponding wave period, based on statistical analysis of maximum wave heights from five hurricanes; design wave spectra for maximum U_s and also maximum H_s, since maximum U_s and maximum H_s do not occur simultaneously; the envelope of wave spectra through maximum U_s and through maximum H_s along traverses parallel to the forward movement of the hurricane; the above same determinations for Hurricane Camille (1969) as for the four OTEC locations; and alternative methods (suggested) for obtaining design wave spectra from the joint probability distribution functions for wave height and period given by Longuet-Higgins (1975) and C.N.E.X.O. after Arhan, et al (1976).
NASA Astrophysics Data System (ADS)
Mohammed, Amal A.; Abraheem, Sudad K.; Fezaa Al-Obedy, Nadia J.
2018-05-01
This paper considers the Burr type XII distribution. The maximum likelihood and Bayes methods of estimation are used for estimating the unknown scale parameter (α). Al-Bayyati's loss function and a suggested loss function are used to find the reliability estimate with the least loss. The reliability function is expanded in terms of a set of power functions. Matlab (ver. 9) is used for the computations, and some examples are given.
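The paper's Bayes estimators under Al-Bayyati's and the suggested loss functions are not reproduced here; as a reference point, a minimal sketch of maximum-likelihood fitting of a Burr type XII model (SciPy's two-shape-parameter parameterization, which differs from the single scale parameter α treated in the paper) might look as follows:

```python
import numpy as np
from scipy import stats

# Simulate Burr type XII data (shape parameters c, d; unit scale) ...
c_true, d_true = 2.0, 3.0
data = stats.burr12.rvs(c_true, d_true, size=500, random_state=42)

# ... and recover the parameters by maximum likelihood (location fixed at 0).
c_hat, d_hat, loc_hat, scale_hat = stats.burr12.fit(data, floc=0)
print(f"ML estimates: c = {c_hat:.2f}, d = {d_hat:.2f}, scale = {scale_hat:.2f}")

# Reliability (survival) function at time t under the fitted model.
t = 1.5
print("R(t) =", stats.burr12.sf(t, c_hat, d_hat, loc=0, scale=scale_hat))
```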
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ . The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ =0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞ ). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α =(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤ 10).
Assessing wildland fire risk transmission to communities in northern Spain
Fermín J. Alcasena; Michele Salis; Alan A. Ager; Rafael Castell; Cristina Vega-García
2017-01-01
We assessed potential economic losses and transmission to residential houses from wildland fires in a rural area of central Navarra (Spain). Expected losses were quantified at the individual structure level (n = 306) in 14 rural communities by combining fire model predictions of burn probability and fire intensity with susceptibility functions derived from expert...
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as single server, deterministic service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
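A back-of-the-envelope companion to this abstract (not the paper's block-interference-channel or large-deviation analysis): assuming independent cell losses and an interleaved (n, k) Reed-Solomon erasure code that fails whenever more than n-k of the n cells in a block are lost, the residual loss probability and the overhead trade-off can be sketched as follows.

```python
from math import comb

def residual_cell_loss(p, n, k):
    """Post-FEC figures for an (n, k) erasure code under independent cell
    losses: a block is unrecoverable if more than n-k of its n cells are
    lost (simplified; ignores the loss burstiness the paper models)."""
    fail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n - k + 1, n + 1))
    lost = sum(j * comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n - k + 1, n + 1))
    return fail, lost / n          # block-failure prob, post-FEC cell-loss fraction

p_raw = 1e-3                       # hypothetical raw cell-loss probability
for n, k in [(32, 28), (64, 60), (128, 124)]:
    fail, post = residual_cell_loss(p_raw, n, k)
    print(f"RS({n},{k}): overhead {(n - k) / k:.1%}, "
          f"block-failure prob {fail:.2e}, post-FEC cell loss {post:.2e}")
```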
Quantitative Analysis of Land Loss in Coastal Louisiana Using Remote Sensing
NASA Astrophysics Data System (ADS)
Wales, P. M.; Kuszmaul, J.; Roberts, C.
2005-12-01
For the past thirty-five years the land loss along the Louisiana Coast has been recognized as a growing problem. One of the clearest indicators of this land loss is that in 2000 smooth cordgrass (Spartina alterniflora) was turning brown well before its normal hibernation period. Over 100,000 acres of marsh were affected by the 2000 browning. In 2001 data were collected using low-altitude, helicopter-based transects of the coast, with 7,400 data points being collected by researchers at the USGS, National Wetlands Research Center, and Louisiana Department of Natural Resources. The surveys contained data describing the characteristics of the marsh, including latitude, longitude, marsh condition, marsh color, percent vegetated, and marsh die-back. Creating a model that combines remote sensing images, field data, and statistical analysis to develop a methodology for estimating the margin of error in measurements of coastal land loss (erosion) is the ultimate goal of the study. A model was successfully created using a series of band combinations (used as predictive variables). The most successful band combinations or predictive variables were the braud value [(Sum Visible TM Bands - Sum Infrared TM Bands)/(Sum Visible TM Bands + Sum Infrared TM Bands)], TM band 7/TM band 2, brightness, NDVI, wetness, vegetation index, and a 7×7 autocovariate nearest neighbor floating window. The model values were used to generate the logistic regression model. A new image was created based on the logistic regression probability equation, where each pixel represents the probability of finding water or non-water at that location in each image. Pixels within each image that have a high probability of representing water have a value close to 1 and pixels with a low probability of representing water have a value close to 0. A logistic regression model is proposed that uses seven independent variables. This model yields an accurate classification at 86.5% of the 1997 and 2001 survey locations. When the logistic regression model was applied to the satellite imagery of the entire Louisiana Coast study area, the statewide loss from 1997 to 2001 was estimated at 358 mi² to 368 mi², using two different methods for estimating land loss.
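A minimal sketch of the kind of pixel-level logistic regression described above, using synthetic stand-ins for the band-derived predictors (the study's seven Landsat-derived variables and fitted coefficients are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for band-derived predictors (e.g. an NDVI-like index,
# a band ratio, brightness); real inputs would come from Landsat TM pixels.
X = rng.normal(size=(n, 3))
true_w = np.array([-2.5, 1.2, 0.8])
p_water = 1 / (1 + np.exp(-(X @ true_w - 0.5)))
y = (rng.random(n) < p_water).astype(int)      # 1 = water, 0 = non-water

model = LogisticRegression().fit(X, y)

# Per-pixel probability of water, analogous to the probability image.
prob_water = model.predict_proba(X)[:, 1]
accuracy = ((prob_water > 0.5).astype(int) == y).mean()
print(f"classification accuracy on training pixels: {accuracy:.1%}")
```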
The turbulent cascade of individual eddies
NASA Astrophysics Data System (ADS)
Huertas-Cerdeira, Cecilia; Lozano-Durán, Adrián; Jiménez, Javier
2014-11-01
The merging and splitting processes of Reynolds-stress carrying structures in the inertial range of scales are studied through their time-resolved evolution in channels at Re_λ = 100-200. Mergers and splits coexist during the whole life of the structures, and are responsible for a substantial part of their growth and decay. Each interaction involves two or more eddies and results in little overall volume loss or gain. Most of them involve a small eddy that merges with, or splits from, a significantly larger one. Accordingly, if merge and split indexes are respectively defined as the maximum number of times that a structure has merged from its birth or will split until its death, the mean eddy volume grows linearly with both indexes, suggesting an accretion process rather than a hierarchical fragmentation. However, a non-negligible number of interactions involve eddies of similar scale, with a second probability peak of the volume of the smaller parent or child at 0.3 times that of the resulting or preceding structure. Funded by the Multiflow project of the ERC.
Bouska, Kristen; Whitledge, Gregory W.; Lant, Christopher; Schoof, Justin
2018-01-01
Land cover is an important determinant of aquatic habitat and is projected to shift with climate changes, yet climate-driven land cover changes are rarely factored into climate assessments. To quantify impacts and uncertainty of coupled climate and land cover change on warm-water fish species’ distributions, we used an ensemble model approach to project distributions of 14 species. For each species, current range projections were compared to 27 scenario-based projections and aggregated to visualize uncertainty. Multiple regression and model selection techniques were used to identify drivers of range change. Novel, or no-analogue, climates were assessed to evaluate transferability of models. Changes in total probability of occurrence ranged widely across species, from a 63% increase to a 65% decrease. Distributional gains and losses were largely driven by temperature and flow variables and underscore the importance of habitat heterogeneity and connectivity to facilitate adaptation to changing conditions. Finally, novel climate conditions were driven by mean annual maximum temperature, which stresses the importance of understanding the role of temperature on fish physiology and the role of temperature-mitigating management practices.
Walters, Glenn D; Diamond, Pamela M; Magaletta, Philip R
2010-03-01
Three indicators derived from the Personality Assessment Inventory (PAI) Alcohol Problems scale (ALC) - tolerance/high consumption, loss of control, and negative social and psychological consequences - were subjected to taxometric analysis - mean above minus below a cut (MAMBAC), maximum covariance (MAXCOV), and latent mode factor analysis (L-Mode) - in 1,374 federal prison inmates (905 males, 469 females). Whereas the total sample yielded ambiguous results, the male subsample produced dimensional results, and the female subsample produced taxonic results. Interpreting these findings in light of previous taxometric research on alcohol abuse and dependence, it is speculated that while alcohol use disorders may be taxonic in female offenders, they are probably both taxonic and dimensional in male offenders. Two models of alcohol use disorder in males are considered, one in which the diagnostic features are categorical and the severity of symptomatology is dimensional, and one in which some diagnostic features (e.g., withdrawal) are taxonic and other features (e.g., social problems) are dimensional.
Electromagnetic wave energy conversion research
NASA Technical Reports Server (NTRS)
Bailey, R. L.; Callahan, P. S.
1975-01-01
Known electromagnetic wave absorbing structures found in nature were first studied for clues as to how one might later design large-area man-made radiant-electric converters. This led to the study of the electro-optics of insect dielectric antennae. Insights were achieved into how these antennae probably operate in the infrared 7-14 μm range. EWEC theoretical models and relevant cases were concisely formulated and justified for metal and dielectric absorber materials. Finding the electromagnetic field solutions to these models is a problem not yet solved. A rough estimate of losses in metal, solid dielectric, and hollow dielectric waveguides indicates future radiant-electric EWEC research should aim toward dielectric materials for maximum conversion efficiency. It was also found that the absorber bandwidth is a theoretical limitation on radiant-electric conversion efficiency. Ideally, the absorbers' wavelength would be centered on the irradiating spectrum and have the same bandwidth as the irradiating wave. The EWEC concept appears to have a valid scientific basis, but considerably more research is needed before it is thoroughly understood, especially for the complex randomly polarized, wide-band, phase-incoherent spectrum of the sun. Specific recommended research areas are identified.
Estimating design flood and HEC-RAS modelling approach for flood analysis in Bojonegoro city
NASA Astrophysics Data System (ADS)
Prastica, R. M. S.; Maitri, C.; Hermawan, A.; Nugroho, P. C.; Sutjiningsih, D.; Anggraheni, E.
2018-03-01
Bojonegoro experiences flooding every year, and flood prevention measures remain underdeveloped. The city's development is held back because floods cause material losses. Flooding affects every sector in Bojonegoro: education, politics, the economy, social life, and infrastructure development. This research aims to test whether limited river capacity is the most probable main cause of flooding in Bojonegoro. Flood discharges are estimated with the Nakayasu synthetic unit hydrograph for return periods of 5, 10, 25, 50 and 100 years. These are compared with the maximum discharge capacity of the downstream reach of the Bengawan Solo River in Bojonegoro. The analysis shows that the Bengawan Solo River in Bojonegoro cannot convey these flood discharges. HEC-RAS modelling leads to the same conclusion: the flood water level exceeds the full-bank elevation of the river. In conclusion, river capacity is the main factor the government should address to solve the flooding problem.
NASA Technical Reports Server (NTRS)
Sahai, Raghvendra; Bieging, John H.
1993-01-01
High- and medium-resolution images of SiO J = 2-1 (V = 0) from the circumstellar envelopes (CSEs) of three oxygen-rich stars, Chi Cyg, RX Boo, and IK Tau, were obtained. The SiO images were found to be roughly circular, implying that the CSEs are spherically symmetric on angular-size scales of about 3-9 arcsec. The observed angular half-maximum intensity source radius is nearly independent of the LSR velocity for all three CSEs. Chi Cyg and RX Boo are argued to be less than 450 pc distant, and have mass-loss rates larger than about 10⁻⁶ solar masses/yr. In Chi Cyg and RX Boo, the line profiles at the peak of the brightness distribution are rounded, typical of optically thick emission from a spherical envelope expanding with a constant velocity. In the IK Tau line profiles, an additional narrower central component is present, probably a result of emission from an inner circumstellar shell with a significantly smaller expansion velocity than the extended envelope.
NASA Technical Reports Server (NTRS)
Spiers, Gary D.
1995-01-01
A brief description of enhancements made to the NASA MSFC coherent lidar model is provided. Notable improvements are the addition of routines to automatically determine the 3 dB misalignment loss angle and the backscatter value at which the probability of a good estimate (for a maximum likelihood estimator) falls to 50%. The ability to automatically generate energy/aperture parametrization (EAP) plots which include the effects of angular misalignment has been added. These EAP plots make it very easy to see that for any practical system where there is some degree of misalignment then there is an optimum telescope diameter for which the laser pulse energy required to achieve a particular sensitivity is minimized. Increasing the telescope diameter above this will result in a reduction of sensitivity. These parameterizations also clearly show that the alignment tolerances at shorter wavelengths are much stricter than those at longer wavelengths. A brief outline of the NASA MSFC AEOLUS program is given and a summary of the lidar designs considered during the program is presented. A discussion of some of the design trades is performed both in the text and in a conference publication attached as an appendix.
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
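A hedged sketch of the core idea (maximum-likelihood change-point fitting with an information-based acceptance test); it uses a BIC-style penalty as a stand-in for the paper's frequentist information criterion and handles only a single change in the mean of Gaussian noise:

```python
import numpy as np

def best_single_change_point(x, penalty=None, min_seg=5):
    """Maximum-likelihood single change point in the mean of i.i.d. Gaussian
    noise; the split is accepted only if the penalized -2*log-likelihood
    improves (BIC-style penalty standing in for the paper's criterion)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    penalty = penalty if penalty is not None else 2 * np.log(n)  # extra mean + location

    def neg2loglik(seg):
        # profile -2*log-likelihood of a segment with MLE mean and variance
        return len(seg) * np.log(np.var(seg) + 1e-12)

    base = neg2loglik(x)
    costs = [neg2loglik(x[:k]) + neg2loglik(x[k:]) for k in range(min_seg, n - min_seg + 1)]
    k_best = int(np.argmin(costs)) + min_seg
    return k_best if costs[k_best - min_seg] + penalty < base else None

rng = np.random.default_rng(3)
signal = np.concatenate([rng.normal(0.0, 1.0, 120),   # state 1
                         rng.normal(1.5, 1.0, 80)])   # state 2 (step at index 120)
print("detected change point:", best_single_change_point(signal))
```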
NASA Astrophysics Data System (ADS)
Luciani, Valeria; D'Onofrio, Roberta; Dickens, Gerald Roy; Wade, Bridget
2017-04-01
The symbiotic relationship with algae is a key strategy adopted by many modern species and by early Paleogene shallow-dwelling planktic foraminifera. The endosymbionts play an important role in foraminiferal calcification, longevity and growth, allowing the host to succeed in oligotrophic environments. We have indirect evidence of the presence and loss of algal photosymbionts because symbionts modify the chemistry of the microenvironment where a foraminifer calcifies, resulting in a characteristic geochemical signature between test size and δ13C. We present here the results of a test on the loss of algal photosymbionts (bleaching) in planktic foraminifera from northwest Atlantic Ocean Drilling Program (ODP) Site 1051 across the Early Eocene Climatic Optimum (EECO), the interval (~49-53 Ma) when Earth surface temperatures and probably atmospheric pCO2 reached their Cenozoic maximum. We selected this interval because two symbiont-bearing planktic foraminiferal genera, Morozovella and Acarinina, which were important calcifiers of the early Paleogene tropical-subtropical oceans, experienced a marked and permanent switch in abundance at the beginning of the EECO, close to the carbon isotope excursion known as the J event. Specifically, the relative abundance of Morozovella permanently decreased by at least half, along with a progressive decrease in the number of species. Concomitantly, the genus Acarinina almost doubled its abundance and diversified within the EECO. Many stressors inducing loss of photosymbiosis may have occurred during the long-lasting environmental conditions relating to the EECO extreme warmth, such as high pCO2 and a possible decrease of surface-water pH. Bleaching may therefore represent a potential mechanism to explain the rapid morozovellid decline at the start of the EECO. Our geochemical data from Site 1051 demonstrate that there was indeed a reduction of algal symbiosis in morozovellids at the EECO beginning. This bleaching event occurred at the time of the permanent low-latitude collapse in morozovellid abundance, but it also affected the acarininids, which proliferated concomitantly. Foraminifera affected by bleaching are expected to show reduced test size as well as abundance, since endosymbiosis is advantageous for foraminiferal longevity and provides energy to drive calcification. Our record of the species of Morozovella at Site 1051 shows a significant reduction of the maximum test diameter at the initiation of the EECO, thus supporting bleaching. The postulated bleaching episode at the start of the EECO was transitory, as photosymbiotic activity recovered for Morozovella and Acarinina species within the main EECO phase. However, species of Morozovella never recovered their maximum test diameter, even after having restored the photosymbiotic relationship. A decrease in planktic foraminiferal test size can be related to different types of environmental stressors in addition to bleaching. We therefore cannot identify the loss of photosymbionts as the main cause of the morozovellid decline at the EECO onset. Changes in ocean chemistry or interactions with other microplankton groups may have contributed to a habitat favourable for the continued diversification and proliferation of Acarinina during the EECO, whereas environmental conditions surpassed a critical threshold for morozovellids. A possible prolonged competition between Morozovella and Acarinina for resources in the mixed layer may have resulted in a reduced population of the former.
Monoamines and assessment of risks.
Takahashi, Hidehiko
2012-12-01
Over the past decade, neuroeconomics studies utilizing neurophysiology methods (fMRI or EEG) have flourished, revealing the neural basis of 'boundedly rational' or 'irrational' decision-making that violates normative theory. The next question is how modulatory neurotransmission is involved in these central processes. Here I focused on recent efforts to understand how central monoamine transmission is related to nonlinear probability weighting and loss aversion, central features of prospect theory, which is a leading alternative to normative theory for decision-making under risk. Circumstantial evidence suggests that dopamine tone might be related to distortion of subjective reward probability and noradrenaline and serotonin tone might influence aversive emotional reaction to potential loss. Copyright © 2012 Elsevier Ltd. All rights reserved.
Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.
Chevallier, Maguelonne; Krauth, Werner
2007-11-01
We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L³ the sum of the cycle probabilities of length k > L² equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the cycle probabilities π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
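For concreteness, the Landsberg recursion mentioned above and the resulting cycle probabilities can be evaluated directly; the sketch below does this for ideal bosons in a 3D harmonic trap (chosen for simplicity; the paper's box geometry only changes the single-particle partition function z1), with π_k = z1(kβ) Z_{N-k} / (N Z_N).

```python
import numpy as np

def z1(beta):
    """Single-particle partition function of a 3D isotropic harmonic trap
    (energies measured from the ground state, hbar*omega = 1)."""
    return (1.0 / (1.0 - np.exp(-beta)))**3

def cycle_probabilities(N, beta):
    """Landsberg recursion for the canonical partition function,
    Z_n = (1/n) * sum_{k=1..n} z1(k*beta) * Z_{n-k},
    and cycle probabilities pi_k = z1(k*beta) * Z_{N-k} / (N * Z_N)."""
    Z = np.zeros(N + 1)
    Z[0] = 1.0
    for n in range(1, N + 1):
        Z[n] = sum(z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n
    return np.array([z1(k * beta) * Z[N - k] / (N * Z[N]) for k in range(1, N + 1)])

N, beta = 100, 0.5                 # temperature in units of hbar*omega / k_B
pi = cycle_probabilities(N, beta)
print("sum of cycle probabilities:", pi.sum())     # should be 1
print("weight in cycles of length >= 10:", pi[9:].sum())
```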
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation
Meyer, Karin
2016-01-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
Role of shell diffusion area in incubating eggs at simulated high altitude.
Weiss, H S
1978-10-01
Embryonic development is inhibited when eggs are incubated at 9,100 m (0.3 atm) despite a normoxic environment. The problem apparently relates to respiratory gas exchange occurring by diffusion through gas-filled pores in the shell. Gaseous flux is therefore inversely proportional to ambient pressure and is affected by the physical characteristics of the ambient gas (Chapman-Enskog equation). Excess loss of H2O and CO2 occurs in eggs incubating at altitude and could be detrimental. Such increased loss should be correctable by decreasing diffusion area. This was tested by progressively increasing coverage of the shell with paraffin and incubating at simulated 0.3 ATA (225 Torr) in 100% O2. Uncoated eggs failed to hatch, but numbers of chicks increased with increased coverage. Maximum hatch was an extrapolated 90% of controls at 69% shell coverage. With further coverage, hatch size decreased. Egg weight loss, an estimate of H2O diffusion, was around three times that of controls in uncoated eggs but decreased linearly with paraffin coverage, reaching near normal at maximum hatch. Reduction of diffusion area to 0.3 of normal at maximum hatch generally balanced the increased flux predicted for 0.3 ATA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duffey, R.B.; Rohatgi, U.S.
Maximum power limits for hypothetical designs of natural circulation plants can be described analytically. The thermal hydraulic design parameters are those which limit the flow, being the elevations, flow areas, and loss coefficients. We have found some simple "design" equations for the natural-circulation flow-to-power ratio, and for the stability limit. The analysis of historical and available data for maximum capacity factor estimation shows 80% to be reasonable and achievable. The least cost is obtained by optimizing both hypothetical plant performance for a given output, and the plant layout and design. There is also scope to increase output and reduce cost by considering design variations of primary and secondary pressure, and by optimizing component elevations and loss coefficients. The design limits for each are set by stability and maximum flow considerations, which deserve close and careful evaluation.
Long-Run Savings and Investment Strategy Optimization
Gerrard, Russell; Guillén, Montserrat; Pérez-Marín, Ana M.
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant rather than relative absolute risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration. PMID:24711728
Long-run savings and investment strategy optimization.
Gerrard, Russell; Guillén, Montserrat; Nielsen, Jens Perch; Pérez-Marín, Ana M
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant rather than relative absolute risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration.
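A toy Monte Carlo contrast of the two strategies discussed in these abstracts, with entirely hypothetical return parameters and an arbitrarily chosen fixed risky amount; it is not the papers' model or calibration, only an illustration of how downside comparisons of such strategies can be set up:

```python
import numpy as np

rng = np.random.default_rng(7)
years, n_paths = 30, 20000
mu, sigma, rf = 0.05, 0.18, 0.01   # hypothetical risky return and risk-free rate
contribution = 1.0                  # savings added each year

def simulate(strategy):
    wealth = np.zeros(n_paths)
    for _ in range(years):
        wealth += contribution
        risky_ret = rng.normal(mu, sigma, n_paths)
        if strategy == "constant_amount":
            amount = np.minimum(5.0, wealth)   # fixed amount held in the risky asset
        else:                                  # constant proportion of wealth
            amount = 0.5 * wealth
        wealth += amount * risky_ret + (wealth - amount) * rf
    return wealth

for s in ("constant_amount", "constant_proportion"):
    w = simulate(s)
    print(f"{s:20s} median {np.median(w):7.1f}   5th percentile {np.percentile(w, 5):7.1f}")
```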
The deuterium puzzle in the symmetric universe
NASA Technical Reports Server (NTRS)
Leroy, B.; Nicolle, J. P.; Schatzman, E.
1973-01-01
An attempt was made to use the deuterium abundance in the symmetric universe to prove that no nucleosynthesis takes place during annihilation and that neutrons were therefore lost before nucleosynthesis. Data cover nucleosynthesis during the radiative era, cross-section estimates, the maximum abundance of He-4 at the end of the nucleosynthesis era, and the loss rate.
USDA-ARS?s Scientific Manuscript database
To meet Chesapeake Bay Total Maximum Daily Load requirements for agricultural pollution, conservation districts and farmers are tasked with implementing best management practices (BMPs) that reduce farm losses of nutrients and sediment. The importance of the agricultural industry to the regional eco...
49 CFR 178.980 - Stacking test.
Code of Federal Regulations, 2012 CFR
2012-10-01
... for transportation and no loss of contents. (2) For flexible Large Packagings, there may be no deterioration which renders the Large Packaging unsafe for transportation and no loss of contents. (3) For the... of their capacity and to their maximum net mass, with the load being evenly distributed. (c) Test...
49 CFR 178.980 - Stacking test.
Code of Federal Regulations, 2011 CFR
2011-10-01
... for transportation and no loss of contents. (2) For flexible Large Packagings, there may be no deterioration which renders the Large Packaging unsafe for transportation and no loss of contents. (3) For the... of their capacity and to their maximum net mass, with the load being evenly distributed. (c) Test...
Modeling of screening currents in coated conductor magnets containing up to 40000 turns
NASA Astrophysics Data System (ADS)
Pardo, E.
2016-08-01
Screening currents caused by varying magnetic fields degrade the homogeneity and stability of the magnetic fields created by REBCO coated conductor coils. They are responsible for the AC loss, which is also important for other power applications containing windings, such as transformers, motors and generators. Since real magnets contain coils exceeding 10000 turns, accurate modeling tools for this number of turns or above are necessary for magnet design. This article presents a fast numerical method to model coils with no loss of accuracy. We model a 10400-turn coil for its real number of turns and coils of up to 40000 turns with a continuous approximation, which introduces negligible errors. The screening currents, the screening current induced field (SCIF) and the AC loss are analyzed in detail. The SCIF is at a maximum at the remnant state, with a considerably large value. The instantaneous AC loss for an anisotropic magnetic-field-dependent J_c is qualitatively different from that for a constant J_c, although the loss per cycle is similar. Saturation of the magnetization currents at the end pancakes causes the maximum AC loss at the first ramp to increase with J_c. The presented modeling tool can accurately calculate the SCIF and AC loss in practical computing times for coils with any number of turns used in real windings, enabling parameter optimization.
Framework for probabilistic flood risk assessment in an Alpine region
NASA Astrophysics Data System (ADS)
Schneeberger, Klaus; Huttenlau, Matthias; Steinberger, Thomas; Achleitner, Stefan; Stötter, Johann
2014-05-01
Flooding is among the natural hazards that regularly cause significant losses to property and human lives. The assessment of flood risk delivers crucial information for all participants involved in flood risk management, especially for local authorities and insurance companies, in order to estimate possible flood losses. Therefore a framework for assessing flood risk has been developed and is introduced in the present contribution. Flood risk is thereby defined as the combination of the probability of flood events and the potential flood damages. The probability of occurrence is described through the spatial and temporal characterisation of the flood. The potential flood damages are determined in the course of a vulnerability assessment, whereby the exposure and the vulnerability of the elements at risk are considered. Direct costs caused by flooding, with a focus on residential buildings, are analysed. The innovative part of this contribution lies in the development of a framework which takes the probability of flood events and their spatio-temporal characteristics into account. Usually the probability of flooding is determined by means of recurrence intervals for an entire catchment without any spatial variation. This may lead to a misinterpretation of the flood risk. Within the presented framework the probabilistic flood risk assessment is based on the analysis of a large number of spatially correlated flood events. Since the number of historic flood events is relatively small, additional events have to be generated synthetically. This temporal extrapolation is realised by means of the method proposed by Heffernan and Tawn (2004). It is used to generate a large number of possible spatially correlated flood events within a larger catchment. The approach is based on the modelling of multivariate extremes considering the spatial dependence structure of flood events. The inputs for this approach are time series derived from river gauging stations. In a next step the historic and synthetic flood events have to be spatially interpolated from the point scale (i.e. river gauges) to the river network. Therefore, topological kriging (Top-kriging), proposed by Skøien et al. (2006), is applied. Top-kriging considers the nested structure of river networks and is therefore suitable for regionalising flood characteristics. Thus, the characteristics of a large number of possible flood events can be transferred to arbitrary locations (e.g. community level) along the river network within a study region. This framework has been used to generate a set of spatially correlated river flood events in the Austrian Federal Province of Vorarlberg. In addition, loss-probability curves for each community have been calculated based on official inundation maps of public authorities, the elements at risk and their vulnerability. One location along the river network within each community serves as the interface between the set of flood events and the individual loss-probability relationship of that community. Consequently, every flood event from the historic and synthetically generated dataset can be evaluated in monetary terms. Thus, a time series comprising a large number of flood events and their corresponding monetary losses serves as the basis for a probabilistic flood risk assessment. This includes expected annual losses and estimates of extreme event losses which occur over the course of a certain time period.
The results provide essential decision support for primary insurers, reinsurance companies and public authorities in setting up scale-adequate risk management.
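As an illustration of the final step described above (turning a community's loss-probability relationship into an expected annual loss), a minimal sketch with hypothetical exceedance-probability/loss pairs; the integral of loss over exceedance probability is approximated by the trapezoidal rule and ignores events outside the tabulated range:

```python
import numpy as np

# Hypothetical loss-probability points for one community:
# annual exceedance probability p and the associated event loss (million EUR).
p    = np.array([0.1, 0.05, 0.02, 0.01, 0.005, 0.002, 0.001])
loss = np.array([0.5, 1.2,  3.0,  5.5,  9.0,  15.0,  22.0])

# Expected annual loss = integral of loss over exceedance probability,
# approximated with the trapezoidal rule over the tabulated points.
eal = np.sum(0.5 * (loss[1:] + loss[:-1]) * (p[:-1] - p[1:]))
print(f"expected annual loss ≈ {eal:.3f} million EUR")
```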
NASA Astrophysics Data System (ADS)
Kothyari, B. P.; Verma, P. K.; Joshi, B. K.; Kothyari, U. C.
2004-06-01
The Bhetagad watershed in the Kumaon Hills of the Central Himalaya is representative of the hydro-meteorological conditions of the middle mountains of the Hindu Kush Himalayas. This study was conducted to assess the runoff, soil loss and subsequent nutrient losses from different prominent land uses in the Bhetagad watershed of the Central Himalayas. Four experimental natural plots, each of 20 m length and 5 m width, were delineated on the four most common land covers, viz. pine forest, tea plantation, rainfed agriculture and degraded land. Monthly values of runoff, soil loss and nutrient loss, for four successive years (1998-2001), from these land uses were quantified following standard methodologies. The annual runoff in these plots ranged between 51 and 3593 m³/ha while the annual soil loss varied between 0.06 and 5.47 tonnes/ha during the entire study period. The loss of organic matter was found to be maximum in the plot with pine forest as the land cover, followed by the plot with tea plantation. Annual loss of total N (6.24 kg/ha), total P (3.88 kg/ha) and total K (5.98 kg/ha), per unit loss of soil (tonnes/ha), was maximum from the plot with a rainfed agricultural crop as the land cover. The loss of total N ranged between 0.30 and 21.27 kg/ha, total P ranged between 0.14 and 9.42 kg/ha, total K ranged from 0.12 to 11.31 kg/ha, whereas organic matter loss varied between 3.65 and 255.16 kg/ha, from the different experimental plots. The findings will lead towards devising better conservation/management options for mountain land use systems.
Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne
2012-04-01
Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg⁻¹ twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in children. A population pharmacokinetic model was developed to describe both once and twice daily pharmacokinetic profiles of abacavir in infants and toddlers. The standard dosage regimen is associated with large interindividual variability in abacavir concentrations. A maximum a posteriori probability Bayesian estimator of AUC(0-t) based on three time points (0, 1 or 2, and 3 h) is proposed to support area under the concentration-time curve (AUC) targeted individualized therapy in infants and toddlers. To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration-time curve (AUC) targeted dosage and individualize therapy. The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation-estimation method. The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 () h⁻¹ (RSE 6.3%), apparent central volume of distribution 4.94 () (RSE 28.7%), apparent peripheral volume of distribution 8.12 () (RSE 14.2%), apparent intercompartment clearance 1.25 () h⁻¹ (RSE 16.9%) and absorption rate constant 0.758 h⁻¹ (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake, allowed prediction of individual AUC(0-t). The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC(0-t) was developed from the final model and can be used routinely to optimize individual dosing. © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
Noguchi, Yoshihiro; Takahashi, Masatoki; Ito, Taku; Fujikawa, Taro; Kawashima, Yoshiyuki; Kitamura, Ken
2016-10-01
To assess possible delayed recovery of the maximum speech discrimination score (SDS) when the audiometric threshold ceases to change. We retrospectively examined 20 patients with idiopathic sudden sensorineural hearing loss (ISSNHL) (gender: 9 males and 11 females, age: 24-71 years). The findings of pure-tone average (PTA), maximum SDS, auditory brainstem responses (ABRs), and tinnitus handicap inventory (THI) were compared among the three periods of 1-3 months, 6-8 months, and 11-13 months after ISSNHL onset. No significant differences were noted in PTA, whereas an increase of greater than or equal to 10% in maximum SDS was recognized in 9 patients (45%) from the period of 1-3 months to the period of 11-13 months. Four of the 9 patients showed 20% or more recovery of maximum SDS. No significant differences were observed in the interpeak latency difference between waves I and V and the interaural latency difference of wave V in ABRs, whereas an improvement in the THI grade was recognized in 11 patients (55%) from the period of 1-3 months to the period of 11-13 months. The present study suggested the incidence of maximum SDS restoration over 1 year after ISSNHL onset. These findings may be because of the effects of auditory plasticity via the central auditory pathway. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Estimation of organic carbon loss potential in north of Iran
NASA Astrophysics Data System (ADS)
Shahriari, A.; Khormali, F.; Kehl, M.; Welp, G.; Scholz, Ch.
2009-04-01
The development of sustainable agricultural systems requires techniques that accurately monitor changes in the amount, nature and breakdown rate of soil organic matter and can compare the rate of breakdown of different plant or animal residues under different management systems. In this research, the study area includes the southern alluvial and piedmont plains of the Gorgan River, extending in an east-west direction in Golestan province, Iran. Samples from 10 soil series were collected from the cultivation depth (0-30 cm). Permanganate-oxidizable carbon (POC), an index of labile soil carbon, was used to indicate the potential loss of soil organic carbon; this index gives the maximum loss of OC in a given soil. The maximum loss of OC for each soil series was estimated from POC and bulk density (BD). The potential loss of OC was estimated at between 1253263 and 2410813 g/ha of carbon. Stable organic constituents in the soil include humic substances and other organic macromolecules that are intrinsically resistant against microbial attack, or that are physically protected by adsorption on mineral surfaces or entrapment within clay and mineral aggregates. However, the (Clay + Silt)/OC ratio had a significant negative (p < 0.001) correlation with POC content, confirming the preserving effect of fine particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mickalonis, J. I.
2015-08-31
Aluminum-clad spent nuclear fuel will be transported for processing in the 70-ton nuclear fuel element cask from L Basin to H-canyon. During transport these fuels would be expected to experience high temperature aqueous corrosion from the residual L Basin water that will be present in the cask. Cladding corrosion losses during transport were calculated for material test reactor (MTR) and high flux isotope reactor (HFIR) fuels using literature and site information on aqueous corrosion at a range of time/temperature conditions. Calculations of the cladding corrosion loss were based on Arrhenius relationships developed for aluminum alloys typical of cladding material, with the primary assumption that an adherent passive film does not form to retard the initial corrosion rate. For MTR fuels a cladding thickness loss of 33% was found after 1 year in the cask with a maximum temperature of 263 °C. HFIR fuels showed a thickness loss of only 6% after 1 year at a maximum temperature of 180 °C. These losses are not expected to impact the overall confinement function of the aluminum cladding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mickalonis, J. I.
2015-08-01
Aluminum-clad spent nuclear fuel will be transported for processing in the 70-ton nuclear fuel element cask from L Basin to H-canyon. During transport these fuels would be expected to experience high temperature aqueous corrosion from the residual L Basin water that will be present in the cask. Cladding corrosion losses during transport were calculated for material test reactor (MTR) and high flux isotope reactor (HFIR) fuels using literature and site information on aqueous corrosion at a range of time/temperature conditions. Calculations of the cladding corrosion loss were based on Arrhenius relationships developed for aluminum alloys typical of cladding material, with the primary assumption that an adherent passive film does not form to retard the initial corrosion rate. For MTR fuels a cladding thickness loss of 33% was found after 1 year in the cask with a maximum temperature of 263 °C. HFIR fuels showed a thickness loss of only 6% after 1 year at a maximum temperature of 180 °C. These losses are not expected to impact the overall confinement function of the aluminum cladding.
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
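A small companion sketch that computes the exact capital distribution for the standard capital-dependent Parrondo games by direct evolution of the probability vector over the integer capital lattice (a convolution route rather than the Fourier-transform route used in the paper); the bias ε and the alternating ABAB... sequence are arbitrary choices.

```python
import numpy as np

EPS = 0.005
P_A = 0.5 - EPS                             # game A: a single biased coin
P_B_BAD, P_B_GOOD = 0.1 - EPS, 0.75 - EPS   # game B: depends on capital mod 3

def play(dist, game, offset):
    """One round of a Parrondo game. dist[i] is the probability of holding
    capital (i - offset); the returned array is padded by one on each side."""
    new = np.zeros(len(dist) + 2)
    for i, prob in enumerate(dist):
        if prob == 0.0:
            continue
        capital = i - offset
        p_win = P_A if game == "A" else (P_B_BAD if capital % 3 == 0 else P_B_GOOD)
        new[i + 2] += prob * p_win           # capital increases by 1
        new[i]     += prob * (1 - p_win)     # capital decreases by 1
    return new, offset + 1

dist, offset = np.array([1.0]), 0            # start with capital 0, probability 1
for rnd in range(50):                        # alternate games A, B, A, B, ...
    dist, offset = play(dist, "A" if rnd % 2 == 0 else "B", offset)

capitals = np.arange(len(dist)) - offset
print("mean capital after 50 rounds:", (capitals * dist).sum())
print("most probable capital:", capitals[dist.argmax()])
```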
Associating an ionospheric parameter with major earthquake occurrence throughout the world
NASA Astrophysics Data System (ADS)
Ghosh, D.; Midya, S. K.
2014-02-01
With time, ionospheric variation analysis is gaining over lithospheric monitoring in serving precursors for earthquake forecast. The current paper highlights the association of major (Ms ≥ 6.0) and medium (4.0 ≤ Ms < 6.0) earthquake occurrences throughout the world in different ranges of the Ionospheric Earthquake Parameter (IEP) where `Ms' is earthquake magnitude on the Richter scale. From statistical and graphical analyses, it is concluded that the probability of earthquake occurrence is maximum when the defined parameter lies within the range of 0-75 (lower range). In the higher ranges, earthquake occurrence probability gradually decreases. A probable explanation is also suggested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
NASA Astrophysics Data System (ADS)
Guillard-Gonçalves, Clémence; Zêzere, José Luis; Pereira, Susana; Garcia, Ricardo
2016-04-01
The physical vulnerability of the buildings of Loures (a Portuguese municipality) to landslides was assessed, and the landslide risk was computed as the product of the landslide hazard, the vulnerability and the market economic value of the buildings. First, the hazard was assessed by combining the spatio-temporal probability and the frequency-magnitude relationship of the landslides, which was established by plotting the probability of landslide area. The susceptibility of deep-seated and shallow landslides was assessed by a bivariate statistical method and was mapped. The annual and multiannual spatio-temporal probabilities were estimated, providing a landslide hazard model. Then, an assessment of building vulnerability to landslides, based on an inquiry of a pool of European landslide experts, was developed and applied to the study area. The inquiry was based on nine magnitude scenarios and four structural building types. A sub-pool of the landslide experts who know the study area was extracted from the pool, and the variability of the answers coming from the pool and the sub-pool was assessed with the standard deviation. Moreover, the average vulnerability of the basic geographic entities was compared by changing the map unit and applying the vulnerability to all the buildings of a test site (included in the study area), the inventory of which was compiled in the field. Next, the market economic value of the buildings was calculated using an adaptation of the Portuguese Tax Services approach. Finally, the annual and multiannual landslide risk was computed for the nine landslide magnitude scenarios and different spatio-temporal probabilities by multiplying the potential loss (Vulnerability × Economic Value) by the hazard probability. As a rule, the vulnerability values given by the sub-pool of experts who know the study area are higher than those given by the European experts, namely for the high-magnitude landslides. The obtained vulnerabilities vary from 0.2 to 1 as a function of the structural building types and the landslide magnitude, and are maximal for landslide depths of 10 and 20 meters. However, the highest annual risk was found for the 3 m deep landslides, with a maximum value of 25.68 € per 5 m pixel, which is explained by the combination of a relatively high frequency in the Loures municipality with substantial potential damage.
Problem of quality assurance during metal constructions welding via robotic technological complexes
NASA Astrophysics Data System (ADS)
Fominykh, D. S.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.
2018-05-01
The problem of minimizing the probability of critical combinations of events that lead to a loss in welding quality via robotic process automation is examined. The problem is formulated, and models and algorithms for its solution are developed. The problem is solved by minimizing a criterion characterizing the losses caused by defective products. Solving the problem may enhance the quality and accuracy of the operations performed and reduce the losses caused by defective products.
Long-term archives reveal shifting extinction selectivity in China's postglacial mammal fauna
Crees, Jennifer J.; Li, Zhipeng; Bielby, Jon; Yuan, Jing
2017-01-01
Ecosystems have been modified by human activities for millennia, and insights about ecology and extinction risk based only on recent data are likely to be both incomplete and biased. We synthesize multiple long-term archives (over 250 archaeological and palaeontological sites dating from the early Holocene to the Ming Dynasty and over 4400 historical records) to reconstruct the spatio-temporal dynamics of Holocene–modern range change across China, a megadiverse country experiencing extensive current-day biodiversity loss, for 34 mammal species over three successive postglacial time intervals. Our combined zooarchaeological, palaeontological, historical and current-day datasets reveal that both phylogenetic and spatial patterns of extinction selectivity have varied through time in China, probably in response both to cumulative anthropogenic impacts (an ‘extinction filter’ associated with vulnerable species and accessible landscapes being affected earlier by human activities) and also to quantitative and qualitative changes in regional pressures. China has experienced few postglacial global species-level mammal extinctions, and most species retain over 50% of their maximum estimated Holocene range despite millennia of increasing regional human pressures, suggesting that the potential still exists for successful species conservation and ecosystem restoration. Data from long-term archives also demonstrate that herbivores have experienced more historical extinctions in China, and carnivores have until recently displayed greater resilience. Accurate assessment of patterns of biodiversity loss and the likely predictive power of current-day correlates of faunal vulnerability and resilience is dependent upon novel perspectives provided by long-term archives. PMID:29167363
Multi-model ensembles for assessment of flood losses and associated uncertainty
NASA Astrophysics Data System (ADS)
Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi
2018-05-01
Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.
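As a rough illustration of the ensemble idea, the sketch below combines a few depth-damage functions with subjective weights to form a weighted ensemble mean. The model forms, weights and building data are invented for illustration and are not the 20 models or the rating framework used in the paper.

```python
import numpy as np

# Minimal multi-model ensemble sketch: each member predicts a loss for the same
# flooded buildings, and weighted members give an ensemble mean estimate.
depth = np.array([0.4, 1.2, 2.0])          # flood water depth per building [m]
value = np.array([200e3, 150e3, 300e3])    # building values [EUR]

models = {   # hypothetical depth-damage functions (fraction of value lost)
    "linear": lambda d: np.clip(d / 3.0, 0, 1),
    "sqrt":   lambda d: np.clip(np.sqrt(d) / 2.0, 0, 1),
    "step":   lambda d: np.where(d > 1.0, 0.6, 0.2),
}
weights = {"linear": 0.5, "sqrt": 0.3, "step": 0.2}   # relative degrees of belief

member_losses = {m: (f(depth) * value).sum() for m, f in models.items()}
ensemble_mean = sum(weights[m] * L for m, L in member_losses.items())
print(member_losses, f"ensemble mean: {ensemble_mean:,.0f} EUR")
```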
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU’s SMCMDP. 1 Final Report, March 14... tree , so we can use it to find the probability of reachability for a property in PRISM’s Probabilistic LTL. By finding the maximum probability of...savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search The Monte Carlo sampling process in SMCMDP can take a long time to
E-O Sensor Signal Recognition Simulation: Computer Code SPOT I.
1978-10-01
scattering phase function PDCO , defined at the specified wavelength, given for each of the scattering angles defined. Currently, a maximum of sixty-four...PHASE MATRIX DATA IS DEFINED PDCO AVERAGE PROBABILITY FOR PHASE MATRIX DEFINITION NPROB PROBLEM NUMBER 54 Fig. 12. FLOWCHART for the SPOT Computer Code...El0.1 WLAM(N) Wavelength at which the aerosol single-scattering phase function set is defined (microns) 3 8El0.1 PDCO (N,I) Average probability for
Cargo Throughput and Survivability Trade-Offs in Force Sustainment Operations
2008-06-01
more correlation with direct human activity. Mines are able to simply ‘sit and wait,’ thus allowing for easier mathematical and statistical ...1.2) Since the ships will likely travel in groups along the same programmed GPS track, modeling several transitors to the identical path is assumed...setting of 1/2 was used for the actuation probability maximum. The ‘threat profile’ will give the probability that the nth transitor will hit a mine
Probability of Loss of Crew Achievability Studies for NASA's Exploration Systems Development
NASA Technical Reports Server (NTRS)
Boyer, Roger L.; Bigler, Mark; Rogers, James H.
2014-01-01
Over the last few years, NASA has been evaluating various vehicle designs for multiple proposed design reference missions (DRM) beyond low Earth orbit in support of its Exploration Systems Development (ESD) programs. This paper addresses several of the proposed missions and the analysis techniques used to assess the key risk metric, probability of loss of crew (LOC). Probability of LOC is a metric used to assess the safety risk as well as a design requirement. These risk assessments typically cover the concept phase of a DRM, i.e. when little more than a general idea of the mission is known and are used to help establish "best estimates" for proposed program and agency level risk requirements. These assessments or studies were categorized as LOC achievability studies to help inform NASA management as to what "ball park" estimates of probability of LOC could be achieved for each DRM and were eventually used to establish the corresponding LOC requirements. Given that details of the vehicles and mission are not well known at this time, the ground rules, assumptions, and consistency across the programs become the important basis of the assessments as well as for the decision makers to understand.
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in water distribution system (WDS). This model determines minimization of risk which is caused by simultaneous multi-point contamination injection in WDS using CVaR approach. The CVaR considers uncertainties of contamination injection in the form of probability distribution function and calculates low-probability extreme events. In this approach, extreme losses occur at tail of the losses distribution function. Four-objective optimization model based on NSGA-II algorithm is developed to minimize losses of contamination injection (through CVaR of affected population and detection time) and also minimize the two other main criteria of optimal placement of sensors including probability of undetected events and cost. Finally, to determine the best solution, Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among objective functions. Also, sensitivity analysis is done to investigate the importance of each criterion on PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined through applying it to Lamerd WDS in the southwestern part of Iran. The PROMETHEE suggests 6 sensors with suitable distribution that approximately cover all regions of WDS. Optimal values related to CVaR of affected population and detection time as well as probability of undetected events for the best optimal solution are equal to 17,055 persons, 31 mins and 0.045%, respectively. The obtained results of the proposed methodology in Lamerd WDS show applicability of CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value of losses in WDS.
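The CVaR ingredient can be illustrated independently of the water-distribution model. The sketch below computes VaR and CVaR at the 95% level from a sampled loss distribution; the lognormal samples are placeholders, not output of the contamination-injection simulations.

```python
import numpy as np

# CVaR_alpha is the mean loss in the worst (1 - alpha) tail, i.e. the expected
# loss given that the loss exceeds the Value at Risk at level alpha.
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=8.0, sigma=1.0, size=100_000)  # e.g. affected population

def cvar(samples, alpha=0.95):
    var = np.quantile(samples, alpha)      # Value at Risk at level alpha
    tail = samples[samples >= var]
    return var, tail.mean()

var95, cvar95 = cvar(losses, 0.95)
print(f"VaR95 = {var95:.0f}, CVaR95 = {cvar95:.0f}")
```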
Flood Risk Due to Hurricane Flooding
NASA Astrophysics Data System (ADS)
Olivera, Francisco; Hsu, Chih-Hung; Irish, Jennifer
2015-04-01
In this study, we evaluated the expected economic losses caused by hurricane inundation. We used surge response functions, which are physics-based dimensionless scaling laws that give surge elevation as a function of the hurricane's parameters (i.e., central pressure, radius, forward speed, approach angle and landfall location) at specified locations along the coast. These locations were close enough to avoid significant changes in surge elevations between consecutive points, and distant enough to minimize calculations. The probability of occurrence of a surge elevation value at a given location was estimated using a joint probability distribution of the hurricane parameters. The surge elevation, at the shoreline, was assumed to project horizontally inland within a polygon of influence. Individual parcel damage was calculated based on flood water depth and damage vs. depth curves available for different building types from the HAZUS computer application developed by the Federal Emergency Management Agency (FEMA). Parcel data, including property value and building type, were obtained from the county appraisal district offices. The expected economic losses were calculated as the sum of the products of the estimated parcel damages and their probability of occurrence for the different storms considered. Anticipated changes for future climate scenarios were considered by accounting for projected hurricane intensification, as indicated by sea surface temperature rise, and sea level rise, which modify the probability distribution of hurricane central pressure and change the baseline of the damage calculation, respectively. Maps of expected economic losses have been developed for Corpus Christi in Texas, Gulfport in Mississippi and Panama City in Florida. Specifically, for Port Aransas, in the Corpus Christi area, it was found that the expected economic losses were in the range of 1% to 4% of the property value for current climate conditions, of 1% to 8% for the 2030's and of 1% to 14% for the 2080's.
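The expected-loss bookkeeping described above reduces to a probability-weighted sum of depth-dependent damages. The sketch below uses a made-up parcel value, a hypothetical HAZUS-style depth-damage curve and invented scenario probabilities purely to show the arithmetic.

```python
import numpy as np

# Expected annual loss for one parcel: sum over storm scenarios of
# (occurrence probability) x (damage fraction at the scenario's flood depth) x (value).
parcel_value = 250_000.0                       # USD, hypothetical parcel

def damage_fraction(depth_m):
    """Hypothetical depth-damage curve (fraction of value lost)."""
    return float(np.clip(0.25 * depth_m, 0.0, 0.8))

scenarios = [   # (flood depth at parcel [m], annual probability) -- illustrative
    (0.0, 0.90), (0.5, 0.06), (1.5, 0.03), (3.0, 0.01),
]
expected_loss = sum(p * damage_fraction(d) * parcel_value for d, p in scenarios)
print(f"expected annual loss: {expected_loss:,.0f} USD "
      f"({100 * expected_loss / parcel_value:.1f}% of value)")
```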
Estimating the probability that the Taser directly causes human ventricular fibrillation.
Sun, H; Haemmerich, D; Rahko, P S; Webster, J G
2010-04-01
This paper describes the first methodology and results for estimating the order of probability for Tasers directly causing human ventricular fibrillation (VF). The probability of an X26 Taser causing human VF was estimated using: (1) current density near the human heart estimated by using 3D finite-element (FE) models; (2) prior data of the maximum dart-to-heart distances that caused VF in pigs; (3) minimum skin-to-heart distances measured in erect humans by echocardiography; and (4) dart landing distribution estimated from police reports. The estimated mean probability of human VF was 0.001 for data from a pig having a chest wall resected to the ribs and 0.000006 for data from a pig with no resection when inserting a blunt probe. The VF probability for a given dart location decreased with the dart-to-heart horizontal distance (radius) on the skin surface.
Storm-based Cloud-to-Ground Lightning Probabilities and Warnings
NASA Astrophysics Data System (ADS)
Calhoun, K. M.; Meyer, T.; Kingfield, D.
2017-12-01
A new cloud-to-ground (CG) lightning probability algorithm has been developed using machine-learning methods. With storm-based inputs of Earth Networks' in-cloud lightning, Vaisala's CG lightning, multi-radar/multi-sensor (MRMS) radar derived products including the Maximum Expected Size of Hail (MESH) and Vertically Integrated Liquid (VIL), and near storm environmental data including lapse rate and CAPE, a random forest algorithm was trained to produce probabilities of CG lightning up to one-hour in advance. As part of the Prototype Probabilistic Hazard Information experiment in the Hazardous Weather Testbed in 2016 and 2017, National Weather Service forecasters were asked to use this CG lightning probability guidance to create rapidly updating probability grids and warnings for the threat of CG lightning for 0-60 minutes. The output from forecasters was shared with end-users, including emergency managers and broadcast meteorologists, as part of an integrated warning team.
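A minimal sketch of the storm-based training setup is given below using scikit-learn's RandomForestClassifier. The predictors (MESH, VIL, in-cloud flash rate, lapse rate) follow the abstract, but the synthetic data, feature distributions and label rule are assumptions, not the operational training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for storm-based predictors and a CG-lightning-within-1-hour label.
rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),    # MESH [mm]
    rng.gamma(2.0, 10.0, n),   # VIL [kg m^-2]
    rng.poisson(3.0, n),       # in-cloud flash rate
    rng.normal(7.0, 1.0, n),   # low-level lapse rate [K km^-1]
])
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 5, n) > 15).astype(int)  # CG within 1 h?

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
p_cg = clf.predict_proba(X[:5])[:, 1]   # probability of CG lightning for 5 storms
print(np.round(p_cg, 2))
```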
NASA Astrophysics Data System (ADS)
Fluck, Elody
2015-04-01
Hail statistics in Western Europe based on a hybrid cell-tracking algorithm combining radar signals with hailstone observations (Elody Fluck, Michael Kunz, Peter Geissbühler, Stefan P. Ritz). With hail damage estimated at billions of Euros for a single event (e.g., hailstorm Andreas on 27/28 July 2013), hail constitutes one of the major atmospheric risks in various parts of Europe. The project HAMLET (Hail Model for Europe), in cooperation with the insurance company Tokio Millennium Re, aims at estimating hail probability, hail hazard and, combined with vulnerability, hail risk for several European countries (Germany, Switzerland, France, Netherlands, Austria, Belgium and Luxembourg). Hail signals are obtained from several hail proxies, especially radar reflectivity, since radar data are available with a high temporal and spatial resolution. The focus in the first step is on Germany and France for the periods 2005-2013 and 1999-2013, respectively. In the next step, the methods will be transferred and extended to other regions. The cell-tracking algorithm TRACE2D was adjusted and applied to two-dimensional radar reflectivity data from different radars operated by European weather services such as the German weather service (DWD) and the French weather service (Météo-France). Strong convective cells are detected as at least 3 connected pixels over 45 dBZ (reflectivity cores, RCs) in a radar scan. Afterwards, the algorithm tries to find the same RCs in the next 5-minute radar scan and thus tracks the RC centers over time and space. Additional information about hailstone diameters provided by the ESWD (European Severe Weather Database) is used to determine the hail intensity of the detected hail swaths. Maximum hailstone diameters are interpolated along and close to the individual hail tracks, giving an estimate of mean diameters for the detected hail swaths. Furthermore, a stochastic event set is created by randomizing the parameters obtained from the tracking of the historical event catalogue (length, width, orientation, diameter). This stochastic event set will be used to quantify hail risk and to estimate the probable maximum loss (e.g., PML200) for a given industry motor or property (building) portfolio.
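The reflectivity-core detection step (at least 3 connected pixels above 45 dBZ) can be sketched with connected-component labelling, as below. The synthetic reflectivity field is an assumption, and the tracking of cores between successive 5-minute scans is not shown.

```python
import numpy as np
from scipy import ndimage

# Detect reflectivity cores (RCs): threshold a 2D dBZ field at 45 dBZ and keep
# connected regions of at least 3 pixels.
rng = np.random.default_rng(2)
refl = rng.normal(20, 10, size=(200, 200))   # synthetic dBZ field
refl[80:90, 100:115] += 35                   # embed a strong convective cell

mask = refl > 45.0
labels, n_features = ndimage.label(mask)     # 4-connectivity by default
sizes = np.bincount(labels.ravel())[1:]      # pixel count per labelled region
rc_ids = [i + 1 for i, s in enumerate(sizes) if s >= 3]
print(f"{len(rc_ids)} reflectivity core(s) detected")
```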
Resonant tube for measurement of sound absorption in gases at low frequency/pressure ratios
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.; Griffin, W. A.
1980-01-01
The paper describes a resonant tube for measuring sound absorption in gases, with specific emphasis on the vibrational relaxation peak of N2, over a range of frequency/pressure ratios from 0.1 to 2500 Hz/atm. The experimental background losses measured in argon agree with the theoretical wall losses except at few isolated frequencies. Rigid cavity terminations, external excitation, and a differential technique of background evaluation were used to minimize spurious contributions to the background losses. Room temperature measurements of sound absorption in binary mixtures of N2-CO2 in which both components are excitable resulted in the maximum frequency/pressure ratio in Hz/atm of 0.063 + 123m for the N2 vibrational relaxation peak, where m is mole percent of added CO2; the maximum ratio for the CO2 peak was 34,500 268m where m is mole percent of added N2.
EFFECTS OF LASER RADIATION ON MATTER: Maximum depth of keyhole melting of metals by a laser beam
NASA Astrophysics Data System (ADS)
Pinsker, V. A.; Cherepanov, G. P.
1990-11-01
A calculation is reported of the maximum depth and diameter of a narrow crater formed in a stationary metal target exposed to high-power cw CO2 laser radiation. The energy needed for erosion of a unit volume is assumed to be constant and the energy losses experienced by the beam in the vapor-gas channel are ignored. The heat losses in the metal are allowed for by an analytic solution of the three-dimensional boundary-value heat-conduction problem of the temperature field in the vicinity of a thin but long crater with a constant temperature on its surface. An approximate solution of this problem by a method proposed earlier by one of the present authors was tested on a computer. The dimensions of the thin crater were found to be very different from those obtained earlier subject to a less rigorous allowance for the heat losses.
Framing From Experience: Cognitive Processes and Predictions of Risky Choice.
Gonzalez, Cleotilde; Mehlhorn, Katja
2016-07-01
A framing bias shows risk aversion in problems framed as "gains" and risk seeking in problems framed as "losses," even when these are objectively equivalent and probabilities and outcome values are explicitly provided. We test this framing bias in situations where decision makers rely on their own experience, sampling the problem's options (safe and risky) and seeing the outcomes before making a choice. In Experiment 1, we replicate the framing bias in description-based decisions and find risk indifference for gains and losses in experience-based decisions. Predictions of an Instance-Based Learning model suggest that objective probabilities as well as the number of samples taken are factors that contribute to the lack of a framing effect. We test these two factors in Experiment 2 and find no framing effect when few samples are taken; when large samples are taken, however, the framing effect appears regardless of the objective probability values. Implications of the behavioral results and cognitive modeling are discussed. Copyright © 2015 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Anuwar, Muhammad Hafidz; Jaffar, Maheran Mohd
2017-08-01
This paper provides an overview of the assessment of credit risk specific to banks. In finance, risk is a term that reflects the potential for financial loss. The risk of default on a loan increases when a company does not make a payment on that loan when it falls due. Hence, this framework analyses the KMV-Merton model to estimate the probabilities of default for Malaysian listed companies. In this way, banks can verify the ability of companies to meet their loan commitments in order to avoid bad investments and financial losses. The model has been applied to all Malaysian listed companies in Bursa Malaysia to estimate their credit default probabilities, and the results are compared with the ratings given by the rating agency RAM Holdings Berhad to check conformity with reality. The significance of this study is that a credit risk grade is proposed for Malaysian listed companies using the KMV-Merton model.
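For orientation, the distance-to-default calculation at the core of the KMV-Merton model is sketched below with hypothetical balance-sheet inputs; in practice the asset value and asset volatility are backed out iteratively from equity prices rather than given directly.

```python
import numpy as np
from scipy.stats import norm

# Merton distance to default and probability of default (illustrative inputs).
V = 120e6      # market value of assets
F = 80e6       # face value of debt (default point)
mu = 0.06      # expected asset return
sigma = 0.25   # asset volatility
T = 1.0        # horizon in years

dd = (np.log(V / F) + (mu - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
pd = norm.cdf(-dd)          # probability that assets fall below debt at T
print(f"distance to default = {dd:.2f}, PD = {pd:.2%}")
```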
Hoefnagels, W A; Padberg, G W; Overweg, J; Roos, R A; van Dijk, J G; Kamphuisen, H A
1991-01-01
In a prospective study of consecutive patients (aged 15 or over) with transient loss of consciousness, 45 patients had a history of seizure and 74 patients had a history of syncope. All patients had an EEG, ECG, laboratory tests and a hyperventilation test, and were followed for an average of 14.5 months. Epileptiform activity in the interictal EEG had a sensitivity of 0.40 and a specificity of 0.95 for the diagnosis of a seizure. Epileptiform activity nearly doubled the probability of a seizure in doubtful cases. If no epileptiform activity was found, this probability remained substantially the same. The hyperventilation test had a sensitivity of 0.57 and a specificity of 0.84 for the diagnosis of syncope. A positive test increased the probability of syncope half as much in doubtful cases. A negative test did not exclude syncope. Laboratory tests were not helpful, except for an ECG, which was helpful in elderly patients. PMID:1800665
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori approach (MAP). To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can then be estimated. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating a probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data, allows a good fit to the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
Zecca, Giovanni; Minuto, Luigi
2016-01-01
Quaternary glaciations, and mostly the last glacial maximum, have shaped the contemporary distribution of many species in the Alps. However, in the Maritime and Ligurian Alps a more complex picture is suggested by the presence of many Tertiary paleoendemisms and by the divergence time between lineages in one endemic species predating the Late Pleistocene glaciation. The low number of endemic species studied limits the understanding of the processes that took place within this region. We used species distribution models and phylogeographical methods to infer glacial refugia and to reconstruct the phylogeographical pattern of Silene cordifolia All. and Viola argenteria Moraldo & Forneris. The predicted suitable area for the last glacial maximum roughly fitted the current known distribution. Our results suggest that the separation of the major clades predates the last glacial maximum and that the following repeated glacial and interglacial periods probably drove differentiation. The complex phylogeographical pattern observed in the study species suggests that extinction of both populations and genotypes was minimal during the last glacial maximum, probably due to the low impact of glaciations and to the topographic complexity of this area. This study underlines the importance of the cumulative effect of previous glacial cycles in shaping the genetic structure of plant species in the Maritime and Ligurian Alps, as expected for a Mediterranean mountain region more than for an Alpine region. PMID:27870888
Ditching Investigation of a 1/12-Scale Model of the Douglas F4D-1 Airplane, TED No. NACA DE 384
NASA Technical Reports Server (NTRS)
Windham, John O.
1956-01-01
A ditching investigation was made of a 1/12-scale dynamically similar model of the Douglas F4D-1 airplane to study its behavior when ditched. The model was landed in calm water at the Langley tank no. 2 monorail. Various landing attitudes, speeds, and configurations were investigated. The behavior of the model was determined from visual observations, acceleration records, and motion-picture records of the ditchings. Data are presented in tables, sequence photographs, time-history acceleration curves, and attitude curves. From the results of the investigation, it was concluded that the airplane should be ditched at the lowest speed and highest attitude consistent with adequate control (near 22 deg) with landing gear retracted. In a calm-water ditching under these conditions the airplane will probably nose in slightly, then make a fairly smooth run. The fuselage bottom will sustain appreciable damage so that rapid flooding and short flotation time are likely. Maximum longitudinal deceleration will be about 4g and maximum normal acceleration will be about 6g in a landing run of about 420 feet. In a calm-water ditching under similar conditions with the landing gear extended, the airplane will probably dive. Maximum longitudinal decelerations will be about 5-1/2g and maximum normal accelerations will be about 3-1/2g in a landing run of about 170 feet.
The non-parametric Parzen's window in stereo vision matching.
Pajares, G; de la Cruz, J
2002-01-01
This paper presents an approach to the local stereovision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is declared true when this probability is maximal. We introduce a non-parametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF), which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable.
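A minimal sketch of the Parzen-window estimator itself (Gaussian kernel on synthetic attribute differences) is given below; the bandwidth and data are illustrative, and the stereo-matching attributes themselves are not modelled.

```python
import numpy as np

# Parzen-window (kernel) density estimate: the PDF at x is the average of
# Gaussian kernels of width h centred on the training samples.
rng = np.random.default_rng(3)
samples = rng.normal(0.0, 1.0, size=500)   # e.g. attribute differences of true matches

def parzen_pdf(x, data, h=0.3):
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

grid = np.linspace(-4, 4, 9)
print(np.round(parzen_pdf(grid, samples), 3))
```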
Faith, Daniel P
2008-12-01
New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species. Probabilistic PD provides a framework for single-species assessment that is well-integrated with a broader measurement of impacts on PD owing to climate change and other factors.
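The probabilistic PD framework referred to above can be illustrated with a toy tree: each branch contributes its length weighted by the probability that at least one descendant species survives. The tree, branch lengths and extinction probabilities below are hypothetical.

```python
# Expected phylogenetic diversity (PD) under independent extinctions.
branches = [   # (branch length, list of descendant species)
    (5.0, ["A", "B"]),   # deep branch shared by A and B
    (2.0, ["A"]),
    (2.0, ["B"]),
    (7.0, ["C"]),        # long branch to a phylogenetically distinctive species
]
p_ext = {"A": 0.2, "B": 0.9, "C": 0.5}   # current extinction probabilities (hypothetical)

def expected_pd(p_ext):
    total = 0.0
    for length, taxa in branches:
        p_all_lost = 1.0
        for t in taxa:
            p_all_lost *= p_ext[t]       # branch lost only if every descendant goes extinct
        total += length * (1.0 - p_all_lost)
    return total

base = expected_pd(p_ext)
# Gain in expected PD from securing species C (setting its extinction risk to 0).
gain = expected_pd({**p_ext, "C": 0.0}) - base
print(f"expected PD = {base:.2f}, gain from securing C = {gain:.2f}")
```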
NASA Astrophysics Data System (ADS)
Govatski, J. A.; da Luz, M. G. E.; Koehler, M.
2015-01-01
We study the geminate pair dissociation probability φ as a function of the applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of the energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.
2002 Commercial Space Transportation Lecture Series, volumes 1,2, and 3
DOT National Transportation Integrated Search
2003-04-01
This document includes three presentations which are part of the 2002 Commercial Space Transportation Lecture Series: The Early Years, AST - A Historical Perspective; Approval of Reentry Vehicles; and, Setting Insurance Requirements: Maximum Probable...
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
Probability distribution of extreme share returns in Malaysia
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin
2014-09-01
The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share prices data obtained from Bursa Malaysia over the period of 2000 to 2012. The study starts with summary statistics of the data which will provide a clue on the likely candidates for the best fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO) and Generalized Pareto (GPA), the Lognormal (GNO) and the Pearson (PE3) distributions are evaluated. The method of L-moments is used in parameter estimation. Based on several goodness of fit tests and L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best fitted distribution to represent the weekly and monthly maximum share returns in Malaysia stock market during the studied period, respectively.
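As a small illustration of fitting an extreme-value distribution to block maxima, the sketch below fits a GEV to weekly maxima of synthetic daily returns using scipy's maximum-likelihood fit; the study itself uses L-moments and real Bursa Malaysia data.

```python
import numpy as np
from scipy.stats import genextreme

# Fit a GEV distribution to weekly block maxima of daily returns and read off a
# high quantile (1-in-100-week return level). Data are synthetic.
rng = np.random.default_rng(4)
daily = rng.standard_t(df=4, size=5 * 252) * 0.01          # ~5 years of daily returns
weekly_max = daily[: (daily.size // 5) * 5].reshape(-1, 5).max(axis=1)

shape, loc, scale = genextreme.fit(weekly_max)
q99 = genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"shape={shape:.3f}, loc={loc:.4f}, scale={scale:.4f}, 99% quantile={q99:.4f}")
```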
Estimation of descriptive statistics for multiply censored water quality data
Helsel, Dennis R.; Cohn, Timothy A.
1988-01-01
This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than simple substitution procedures now commonly in use. Therefore simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, less than values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
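The maximum likelihood approach for multiply censored data can be sketched directly: detected values contribute the log-density and "less than" values contribute the log-CDF at their detection limits. The lognormal data, detection limits and starting values below are synthetic.

```python
import numpy as np
from scipy import stats, optimize

# MLE for left-censored lognormal data with multiple detection limits.
rng = np.random.default_rng(5)
true = rng.lognormal(mean=1.0, sigma=0.8, size=200)
limits = rng.choice([1.0, 2.0, 5.0], size=200)        # multiple detection limits
detected = true >= limits
obs = np.where(detected, true, limits)                # censored values recorded as "< limit"

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # detections: lognormal density; censored: probability of being below the limit
    ll_det = stats.norm.logpdf(np.log(obs[detected]), mu, sigma) - np.log(obs[detected])
    ll_cen = stats.norm.logcdf(np.log(obs[~detected]), mu, sigma)
    return -(ll_det.sum() + ll_cen.sum())

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated log-mean {mu_hat:.2f}, log-sd {sigma_hat:.2f} (true 1.0, 0.8)")
```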
Conceptual design study of Fusion Experimental Reactor (FY86 FER): Safety
NASA Astrophysics Data System (ADS)
Seki, Yasushi; Iida, Hiromasa; Honda, Tsutomu
1987-08-01
This report describes the safety study for FER (Fusion Experimental Reactor), which has been designed as a next-step machine to the JT-60. Though the final purpose of this study is to establish a picture of the design-basis accident and the maximum credible accident and to assess their risk and probability for the FER plant system as a whole, the emphasis of this year's study is placed on the fuel-gas circulation system, where the tritium inventory is largest. The report consists of two chapters. The first chapter summarizes the FER system and describes an FMEA (Failure Mode and Effect Analysis) and the related accident progression sequences for the FER plant system as a whole. The second chapter focuses on the fuel-gas circulation system, including purification, isotope separation and storage. The probability of risk is assessed by the probabilistic risk analysis (PRA) procedure based on FMEA, ETA and FTA.
NASA Technical Reports Server (NTRS)
Billingham, John; Tarter, Jill
1989-01-01
The maximum range is calculated at which radar signals from the earth could be detected by a search system similar to the NASA SETI Microwave Observing Project (SETI MOP) assumed to be operating out in the Galaxy. Figures are calculated for the Targeted Search and for the Sky Survey parts of the MOP, both planned to be operating in the 1990s. The probability of detection is calculated for the two most powerful transmitters, the planetary radar at Arecibo (Puerto Rico) and the ballistic missile early warning systems (BMEWSs), assuming that the terrestrial radars are only in the eavesdropping mode. It was found that, for the case of a single transmitter within the maximum range, the highest probability is for the sky survey detecting BMEWSs; this is directly proportional to BMEWS sky coverage and is therefore 0.25.
NASA Astrophysics Data System (ADS)
Nketsia-Tabiri, Josephine
1998-06-01
The effects of pre-irradiation storage time (7-21 days), radiation dose (0-75 Gy) and post-irradiation storage time (2-20 weeks) on the sprouting, wrinkling and weight loss of ginger were investigated using a central composite rotatable design. Predictive models developed for all three responses were highly significant. Weight loss and wrinkling decreased as pre-irradiation storage time increased. Dose and post-irradiation storage time had significant interactive effects on weight loss and sprouting. Processing conditions for achieving minimal sprouting resulted in maximum weight loss and wrinkling.
Plafker, George
1969-01-01
The March 27, 1964, earthquake was accompanied by crustal deformation-including warping, horizontal distortion, and faulting-over probably more than 110,000 square miles of land and sea bottom in south-central Alaska. Regional uplift and subsidence occurred mainly in two nearly parallel elongate zones, together about 600 miles long and as much as 250 miles wide, that lie along the continental margin. From the earthquake epicenter in northern Prince William Sound, the deformation extends eastward 190 miles almost to long 142° and southwestward slightly more than 400 miles to about long 155°. It extends across the two zones from the chain of active volcanoes in the Aleutian Range and Wrangell Mountains probably to the Aleutian Trench axis. Uplift that averages 6 feet over broad areas occurred mainly along the coast of the Gulf of Alaska, on the adjacent Continental Shelf, and probably on the continental slope. This uplift attained a measured maximum on land of 38 feet in a northwest-trending narrow belt less than 10 miles wide that is exposed on Montague Island in southwestern Prince William Sound. Two earthquake faults exposed on Montague Island are subsidiary northwest-dipping reverse faults along which the northwest blocks were relatively displaced a maximum of 26 feet, and both blocks were upthrown relative to sea level. From Montague Island, the faults and related belt of maximum uplift may extend southwestward on the Continental Shelf to the vicinity of the Kodiak group of islands. To the north and northwest of the zone of uplift, subsidence forms a broad asymmetrical downwarp centered over the Kodiak-Kenai-Chugach Mountains that averages 2½ feet and attains a measured maximum of 7½ feet along the southwest coast of the Kenai Peninsula. Maximum indicated uplift in the Alaska and Aleutian Ranges to the north of the zone of subsidence was 1½ feet. Retriangulation over roughly 25,000 square miles of the deformed region in and around Prince William Sound shows that vertical movements there were accompanied by horizontal distortion, involving systematic shifts of about 64 feet in a relative seaward direction. Comparable horizontal movements are presumed to have affected those parts of the major zones of uplift and subsidence for which retriangulation data are unavailable. Regional vertical deformation generated a train of destructive long-period seismic sea waves in the Gulf of Alaska as well as unique atmospheric and ionospheric disturbances that were recorded at points far distant from Alaska. Warping resulted in permanent tilt of larger lake basins and temporary reductions in discharge of some major rivers. Uplift and subsidence relative to sea level caused profound modifications in shoreline morphology with attendant catastrophic effects on the nearshore biota and costly damage to coastal installations. Systematic horizontal movements of the land relative to bodies of confined or semiconfined water may have caused unexplained short-period waves—some of which were highly destructive—observed during or immediately after the earthquake at certain coastal localities and in Kenai Lake. Porosity increases, probably related to horizontal displacements in the zone of subsidence, were reflected in lowered well-water levels and in losses of surface water. The primary fault, or zone of faults, along which the earthquake occurred is not exposed at the surface on land.
Focal-mechanism studies, when considered in conjunction with the pattern of deformation and seismicity, suggest that it was a complex thrust fault (megathrust) dipping at a gentle angle beneath the continental margin from the vicinity of the Aleutian Trench. Movement on the megathrust was accompanied by subsidiary reverse faulting, and perhaps wrench faulting, within the upper plate. Aftershock distribution suggests movement on a segment of the megathrust, some 550–600 miles long and 110–180 miles wide, that underlies most of the major zone of uplift and the seaward part of the major zone of subsidence. According to the postulated model, the observed and inferred tectonic displacements that accompanied the earthquake resulted primarily from (1) relative seaward displacement and uplift of the seaward part of the block by movement along the dipping megathrust and subsidiary faults that break through the upper plate to the surface, and (2) simultaneous elastic horizontal extension and vertical attenuation (subsidence) of the crustal slab behind the upper plate. Slight uplift inland from the major zones of deformation presumably was related to elastic strain changes resulting from the overthrusting; however, the data are insufficient to permit conclusions regarding its cause. The belt of seismic activity and major zones of tectonic deformation associated with the 1964 earthquake, to a large extent, lie between and parallel to the Aleutian Volcanic Arc and the Aleutian Trench, and are probably genetically related to the arc. Geologic data indicate that the earthquake-related tectonic movements were but the most recent pulse in an episode of deformation that probably began in late Pleistocene time and has continued intermittently to the present. Evidence for progressive coastal submergence in the deformed region for several centuries preceding the earthquake, in combination with transverse horizontal shortening indicated by the retriangulation data, suggests pre-earthquake strain directed at a gentle angle downward beneath the arc. The duration of strain accumulation in the epicentral region, as interpreted from the time interval during which the coastal submergence occurred, probably is 930–1,360 years.
Brownian motion surviving in the unstable cubic potential and the role of Maxwell's demon
NASA Astrophysics Data System (ADS)
Ornigotti, Luca; Ryabov, Artem; Holubec, Viktor; Filip, Radim
2018-03-01
The trajectories of an overdamped particle in a highly unstable potential diverge so rapidly, that the variance of position grows much faster than its mean. A description of the dynamics by moments is therefore not informative. Instead, we propose and analyze local directly measurable characteristics, which overcome this limitation. We discuss the most probable particle position (position of the maximum of the probability density) and the local uncertainty in an unstable cubic potential, V(x) ∼ x³, both in the transient regime and in the long-time limit. The maximum shifts against the acting force as a function of time and temperature. Simultaneously, the local uncertainty does not increase faster than the observable shift. In the long-time limit, the probability density naturally attains a quasistationary form. We interpret this process as a stabilization via the measurement-feedback mechanism, the Maxwell demon, which works as an entropy pump. The rules for measurement and feedback naturally arise from the basic properties of the unstable dynamics. All reported effects are inherent in any unstable system. Their detailed understanding will stimulate the development of stochastic engines and amplifiers and, later, their quantum counterparts.
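A simple way to see the shift of the most probable position is to integrate the overdamped Langevin equation for V(x) = a x³ and histogram the surviving trajectories, as in the sketch below; the parameters, time horizon and escape cutoff are illustrative choices, not those of the paper.

```python
import numpy as np

# Euler-Maruyama integration of dx = -3*a*x^2 dt + sqrt(2*D) dW in the unstable
# cubic potential; the mode of the surviving ensemble is tracked instead of the mean.
rng = np.random.default_rng(6)
a, D, dt, n = 1.0, 0.5, 1e-4, 50_000
x = np.zeros(n)
alive = np.ones(n, dtype=bool)

for _ in range(2000):                       # integrate to t = 0.2
    force = -3.0 * a * x[alive] ** 2
    x[alive] += force * dt + np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
    alive[np.abs(x) > 50] = False           # mark escaped (diverging) trajectories

hist, edges = np.histogram(x[alive], bins=200, density=True)
mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"surviving fraction {alive.mean():.3f}, most probable position {mode:.3f}")
```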
Gómez Toledo, Verónica; Gutiérrez Farfán, Ileana; Verduzco-Mendoza, Antonio; Arch-Tirado, Emilio
Tinnitus is defined as the conscious perception of a sensation of sound that occurs in the absence of an external stimulus. This audiological symptom affects 7% to 19% of the adult population. The aim of this study is to describe the associated comorbidities present in patients with tinnitus using joint and conditional probability analysis. Patients of both genders, diagnosed with unilateral or bilateral tinnitus, aged between 20 and 45 years, and with a full computerised medical record, were selected. Study groups were formed on the basis of the following clinical aspects: 1) audiological findings; 2) vestibular findings; 3) comorbidities such as temporomandibular dysfunction, tubal dysfunction and otosclerosis; and 4) triggering factors of tinnitus such as noise exposure, respiratory tract infection, and use of ototoxic and/or other drugs. Of the patients with tinnitus, 27 (65%) reported hearing loss, 11 (26.19%) temporomandibular dysfunction, and 11 (26.19%) vestibular disorders. When performing the joint probability analysis, it was found that the probability of a patient with tinnitus having hearing loss was 27/42 ≈ 0.65, and 20/42 ≈ 0.47 for the bilateral type. The result was P(A ∩ B) = 30%. Bayes' theorem, P(Ai|B) = P(Ai ∩ B)/P(B), was used, and various probabilities were calculated. For patients with temporomandibular dysfunction and vestibular disorders, a posterior probability of P(Ai|B) = 31.44% was calculated. Consideration should be given to the joint and conditional probability approach as a tool for the study of different pathologies. Copyright © 2016 Academia Mexicana de Cirugía A.C. Publicado por Masson Doyma México S.A. All rights reserved.
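The joint/conditional bookkeeping above amounts to simple ratios of counts. The sketch below reproduces the 27/42 and 20/42 ratios reported in the abstract and then applies Bayes' rule with a hypothetical joint count, since the full contingency table is not given.

```python
# Joint and conditional probabilities from counts (42 tinnitus patients).
n_total = 42
p_hearing_loss = 27 / n_total        # ~0.65, hearing loss among tinnitus patients (from abstract)
p_bilateral = 20 / n_total           # ~0.47, bilateral hearing loss (from abstract)

n_tmj = 11                           # temporomandibular dysfunction (from abstract)
n_vestibular = 11                    # vestibular disorders (from abstract)
n_tmj_and_vestibular = 4             # hypothetical joint count for illustration

p_joint = n_tmj_and_vestibular / n_total
p_B = n_vestibular / n_total
p_conditional = p_joint / p_B        # Bayes' rule: P(TMJ | vestibular) = P(TMJ ∩ vestibular) / P(vestibular)
print(f"P(hearing loss)={p_hearing_loss:.2f}, P(bilateral)={p_bilateral:.2f}, "
      f"P(TMJ|vestibular)={p_conditional:.2f}")
```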
Tempest, Elizabeth L; Carter, Ben; Beck, Charles R; Rubin, G James
2017-12-01
The impact of flooding on mental health is exacerbated due to secondary stressors, although the mechanism of action is not understood. We investigated the role of secondary stressors on psychological outcomes through analysis of data collected one-year after flooding, and effect modification by sex. We analysed data from the English National Study on Flooding and Health collected from households flooded, disrupted and unexposed to flooding during 2013-14. Psychological outcomes were probable depression, anxiety and post-traumatic stress disorder (PTSD). Parsimonious multivariable logistic regression models were fitted to determine the effect of secondary stressors on the psychological outcomes. Sex was tested as an effect modifier using subgroup analyses. A total of 2006 people participated (55.5% women, mean age 60 years old). Participants reporting concerns about their personal health and that of their family (concerns about health) had greater odds of probable depression (adjusted odds ratio [aOR] 1.77, 95% CI 1.17-2.65) and PTSD (aOR 2.58, 95% CI 1.82-3.66). Loss of items of sentimental value was associated with probable anxiety (aOR 1.82, 95% CI 1.26-2.62). For women, the strongest associations were between concerns about health and probable PTSD (aOR 2.86, 95% CI 1.79-4.57). For men, the strongest associations were between 'relationship problems' and probable depression (aOR 3.25, 95% CI 1.54-6.85). Concerns about health, problems with relationships and loss of sentimental items were consistently associated with poor psychological outcomes. Interventions to reduce the occurrence of these secondary stressors are needed to mitigate the impact of flooding on probable psychological morbidity. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-16
...; (12) low voltage winding basic insulation level; (13) load loss at maximum MVA rating; (14) no-load loss; (15) cooling class designation; (16) overload requirement; (17) decibel rating; and (18... Transformers from Korea: Investigation No. 731-TA-1189 (Preliminary).'' On September 16, 2011, we selected...
Band-aid for information loss from black holes
NASA Astrophysics Data System (ADS)
Israel, Werner; Yun, Zinkoo
2010-12-01
We summarize, simplify and extend recent work showing that small deviations from exact thermality in Hawking radiation, first uncovered by Kraus and Wilczek, have the capacity to carry off the maximum information content of a black hole. This goes a considerable way toward resolving a long-standing “information loss paradox.”
Magnetostrictive Vibration Damper and Energy Harvester for Rotating Machinery
NASA Technical Reports Server (NTRS)
Deng, Zhangxian; Asnani, Vivake M.; Dapino, Marcelo J.
2015-01-01
Vibrations generated by machine driveline components can cause excessive noise and structural damage. Magnetostrictive materials, including Galfenol (iron-gallium alloys) and Terfenol-D (terbium-iron-dysprosium alloys), are able to convert mechanical energy to magnetic energy. A magnetostrictive vibration ring is proposed, which generates electrical energy and dampens vibration, when installed in a machine driveline. A 2D axisymmetric finite element (FE) model incorporating magnetic, mechanical, and electrical dynamics is constructed in COMSOL Multiphysics. Based on the model, a parametric study considering magnetostrictive material geometry, pickup coil size, bias magnet strength, flux path design, and electrical load is conducted to maximize loss factor and average electrical output power. By connecting various resistive loads to the pickup coil, the maximum loss factors for Galfenol and Terfenol-D due to electrical energy loss are identified as 0.14 and 0.34, respectively. The maximum average electrical output power for Galfenol and Terfenol-D is 0.21 W and 0.58 W, respectively. The loss factors for Galfenol and Terfenol-D are increased to 0.59 and 1.83, respectively, by using an L-C resonant circuit.
[Reversible damages: loss of chance].
Béry, Alain
2013-03-01
Chance is the probability that a particular event may or may not occur and, in this sense, a loss of chance can be defined as the missed opportunities resulting from the loss of the possibility that a favorable event will occur (a contrario, the failure to take risks). This is a self-imposed liability that should be distinguished from the final damage. Moral damage is a notion that is very close to loss of chance although it is based on indemnification from the final damage of an affliction or malady. © EDP Sciences, SFODF, 2013.
Calculation of Cumulative Distributions and Detection Probabilities in Communications and Optics.
1984-10-01
the CMLD. As an example of a particular result, Figure 8.1 shows the additional SNR required (often called the CFAR loss) for the MLD, CMLD, and OSD in...the background noise level is known. Notice that although the CFAR loss increases with INR for the MLD, the CMLD and OSD have a bounded loss as the INR...Radar Detectors (J. A. Ritcey) Mean-level detectors (MLD) are commonly used in radar to maintain a constant false-alarm rate (CFAR) when the
Calculation of Cumulative Distributions and Detection Probabilities in Communications and Optics.
1986-03-31
result, Figure 3.1 shows the additional SNR required (often called the CFAR loss) for the MLD, CMLD, and OSD in a multiple target environment to...Notice that although the CFAR loss increases with INR for the MLD, the CMLD and OSD have a bounded loss as the INR → ∞. These results have been more...false-alarm rate (CFAR) when the background noise level is unknown. In Section 2 we described the application of saddlepoint integration techniques to