Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
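As a rough illustration of the decay-fitting step behind IRDM, the sketch below estimates a loss factor from a synthetic single-mode impulse response using Schroeder backward integration and a linear fit to the decay slope. The conversion eta = DR/(27.3 f) is the standard decay-rate relation; the function, signal, and frequencies are illustrative, not the paper's automated procedure.

```python
import numpy as np

def loss_factor_irdm(h, fs, f_band):
    """Estimate a damping loss factor from an impulse response via decay fitting.

    Schroeder backward integration of the squared response gives a smooth
    energy decay curve; a linear fit to its dB slope yields the decay rate
    DR (dB/s), which converts to a loss factor as eta = DR / (27.3 * f_band).
    """
    edc = np.cumsum(h[::-1] ** 2)[::-1]            # Schroeder energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(h)) / fs
    mask = edc_db > -20.0                          # fit the 0 to -20 dB portion
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    decay_rate = -slope                            # dB per second
    return decay_rate / (27.3 * f_band)

# synthetic single-mode response with known loss factor eta = 0.02
fs, f, eta_true = 8192.0, 500.0, 0.02
t = np.arange(0, 2.0, 1.0 / fs)
h = np.exp(-np.pi * f * eta_true * t) * np.sin(2 * np.pi * f * t)
eta_est = loss_factor_irdm(h, fs, f)
```

On this noiseless signal the fitted loss factor recovers the true value to within about one percent.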
NASA Astrophysics Data System (ADS)
Xu, Jinghai; An, Jiwen; Nie, Gaozong
2016-04-01
Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store the results in a 30'' × 30'' grid format, a process with several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, loss is estimated in two stages: first, generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; second, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
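The co-earthquake phase described above reduces to a fast table lookup: each grid cell's intensity (from the theoretical isoseismal map) indexes its pre-calculated loss, and the results are aggregated. The sketch below shows that idea with invented cells, intensity indices, and loss values; none of the numbers come from the paper.

```python
import numpy as np

# Pre-earthquake phase (illustrative numbers): for each grid cell, losses are
# pre-calculated for a range of seismic intensities (here VI..IX -> index 0..3).
precalc_loss = np.array([
    [0.1, 0.5, 2.0, 8.0],    # cell 0
    [0.2, 0.8, 3.0, 10.0],   # cell 1
    [0.0, 0.3, 1.5, 6.0],    # cell 2
])

def co_earthquake_loss(intensity_index_per_cell, table):
    """Co-earthquake phase: map each cell's intensity (from the theoretical
    isoseismal map) to its pre-calculated loss and aggregate."""
    cells = np.arange(table.shape[0])
    return table[cells, intensity_index_per_cell].sum()

# intensities VIII, IX, VII assigned to the three cells by the intensity field
total = co_earthquake_loss(np.array([2, 3, 1]), precalc_loss)
```

Because all the expensive modeling happens before the event, the lookup itself is effectively instantaneous, which is what makes the approach suitable for emergency response.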
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with the R program, and the results are displayed in tables to facilitate comparison.
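For the Rayleigh distribution both estimators have closed forms, which makes the bias/MSE comparison easy to simulate. The sketch below uses the MLE of the squared scale, sigma^2 = sum(x^2)/(2n), and, as a stand-in for the paper's precautionary/entropy/L1 variants, the posterior mean under Jeffreys' prior and squared error loss (an inverse-gamma posterior); the simulation settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2_true, n, reps = 4.0, 20, 2000

mle, bayes = [], []
for _ in range(reps):
    x = rng.rayleigh(scale=np.sqrt(sigma2_true), size=n)
    s = np.sum(x ** 2)
    mle.append(s / (2 * n))          # maximum likelihood estimate of sigma^2
    bayes.append(s / (2 * (n - 1)))  # posterior mean: Jeffreys prior + squared error loss

mle, bayes = np.array(mle), np.array(bayes)
bias_mle = mle.mean() - sigma2_true
mse_mle = np.mean((mle - sigma2_true) ** 2)
bias_bayes = bayes.mean() - sigma2_true
mse_bayes = np.mean((bayes - sigma2_true) ** 2)
```

Here the MLE is exactly unbiased while the squared-error Bayes estimate carries a small upward bias of sigma^2/(n-1); under the asymmetric loss functions the paper studies, the ranking can differ.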
Estimating extreme losses for the Florida Public Hurricane Model—part II
NASA Astrophysics Data System (ADS)
Gulati, Sneh; George, Florence; Hamid, Shahid
2018-02-01
Rising global temperatures are leading to an increase in the number of extreme events and losses (http://www.epa.gov/climatechange/science/indicators/). Accurate estimation of these extreme losses is critical to the insurance companies that must protect themselves against them. In a previous paper, Gulati et al. (2014) discussed probable maximum loss (PML) estimation for the Florida Public Hurricane Loss Model (FPHLM) using parametric and nonparametric methods. In this paper, we investigate the use of semi-parametric methods to do the same. Detailed analysis of the data shows that the annual losses from the FPHLM do not tend to be very heavy tailed, and therefore neither the popular Hill's method nor the moments estimator works well. However, Pickands' estimator with a threshold around the 84th percentile provides a good fit for the extreme quantiles of the losses.
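Pickands' estimator of the extreme-value index uses only three upper order statistics. A minimal sketch, checked on exact Pareto quantiles (where the estimator recovers the tail index exactly):

```python
import numpy as np

def pickands_estimator(x, k):
    """Pickands' estimator of the extreme-value (tail) index, built from the
    k-th, 2k-th and 4k-th largest order statistics (requires 4k <= len(x))."""
    xs = np.sort(x)[::-1]                      # descending order statistics
    num = xs[k - 1] - xs[2 * k - 1]
    den = xs[2 * k - 1] - xs[4 * k - 1]
    return np.log(num / den) / np.log(2.0)

# sanity check on exact Pareto quantiles with tail index xi = 1:
# the ideal descending order statistics are X_(i) = (i/n)^(-xi)
n, xi = 1024, 1.0
x = (np.arange(1, n + 1) / n) ** (-xi)
est = pickands_estimator(x, k=64)
```

On real loss data the choice of k (equivalently, the threshold, around the 84th percentile in the paper) drives the bias-variance trade-off.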
Work productivity loss from depression: evidence from an employer survey.
Rost, Kathryn M; Meng, Hongdao; Xu, Stanley
2014-12-18
National working groups identify the need for return on investment research conducted from the purchaser perspective; however, the field has not developed standardized methods for measuring the basic components of return on investment, including costing out the value of work productivity loss due to illness. Recent literature is divided on whether the most commonly used method underestimates or overestimates this loss. The goal of this manuscript is to characterize the between-method and within-method variation in the cost of work productivity loss from illness estimated by the most commonly used method and its two refinements. One senior health benefit specialist from each of 325 companies employing 100+ workers completed a cross-sectional survey describing their company size, industry and policies/practices regarding work loss, which allowed the research team to derive the variables needed to estimate work productivity loss from illness using three methods. Compensation estimates were derived by multiplying lost work hours from presenteeism and absenteeism by wage/fringe. Disruption correction adjusted this estimate to account for co-worker disruption, while friction correction accounted for labor substitution. The analysis compared bootstrapped means and medians between and within these three methods. The average company realized an annual $617 (SD = $75) per capita loss from depression by compensation methods and a $649 (SD = $78) loss by disruption correction, compared to a $316 (SD = $58) loss by friction correction (p < .0001). Agreement across estimates was 0.92 (95% CI 0.90, 0.93). Although the methods identify similar companies with high costs from lost productivity, friction correction reduces the size of compensation estimates of productivity loss by one half.
In analyzing the potential consequences of method selection for the dissemination of interventions to employers, intervention developers are encouraged to include friction methods in their estimate of the economic value of interventions designed to improve absenteeism and presenteeism. Business leaders in industries where labor substitution is common are encouraged to seek friction corrected estimates of return on investment. Health policy analysts are encouraged to target the dissemination of productivity enhancing interventions to employers with high losses rather than all employers. NCT01013220.
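The three costing methods differ only in how lost hours are scaled, so the relationship among them can be sketched in a few lines. The correction rates and hour/wage figures below are illustrative (chosen to echo the reported $617/$649/$316 pattern), not the survey's actual derivation.

```python
def compensation_cost(lost_hours, wage_with_fringe):
    """Base method: value lost work hours at wage plus fringe."""
    return lost_hours * wage_with_fringe

def disruption_corrected(base, coworker_disruption=0.05):
    """Adjust upward for disruption imposed on co-workers (illustrative rate)."""
    return base * (1.0 + coworker_disruption)

def friction_corrected(base, substitutable_share=0.5):
    """Adjust downward for labor substitution: only the non-substitutable
    share of lost hours translates into a real productivity loss."""
    return base * (1.0 - substitutable_share)

base = compensation_cost(lost_hours=20.0, wage_with_fringe=31.0)
high = disruption_corrected(base)
low = friction_corrected(base)
```

The halving under friction correction mirrors the paper's headline finding: where labor substitution is common, compensation-based estimates roughly double the apparent loss.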
A Temperature-Based Bioimpedance Correction for Water Loss Estimation During Sports.
Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Mester, Joachim; Eskofier, Bjoern M
2016-11-01
The amount of total body water (TBW) can be estimated based on bioimpedance measurements of the human body. In sports, TBW estimations are of importance because mild water losses can impair muscular strength and aerobic endurance. Severe water losses can even be life threatening. TBW estimations based on bioimpedance, however, fail during sports because the increased body temperature corrupts bioimpedance measurements. Therefore, this paper proposes a machine learning method that eliminates the effects of increased temperature on bioimpedance and, consequently, reveals the changes in bioimpedance that are due to TBW loss. This is facilitated by utilizing changes in skin and core temperature. The method was evaluated in a study in which bioimpedance, temperature, and TBW loss were recorded every 15 min during a 2-h running workout. The evaluation demonstrated that the proposed method is able to reduce the error of TBW loss estimation by up to 71%, compared with the state of the art. In the future, the proposed method in combination with portable bioimpedance devices might facilitate the development of wearable systems for continuous and noninvasive TBW loss monitoring during sports.
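The core idea, removing the temperature-driven component of the impedance signal so the residual tracks TBW loss, can be sketched with a plain linear regression. This is a deliberate simplification: the paper uses a learned (machine learning) correction, and all data below are synthetic.

```python
import numpy as np

def temperature_corrected_impedance(z, t_skin, t_core):
    """Remove the temperature-driven component of bioimpedance by linear
    regression on skin- and core-temperature changes; the residual is the
    impedance change attributable to TBW loss. A linear model is a strong
    simplification of the learned correction in the paper."""
    X = np.column_stack([np.ones_like(z), t_skin, t_core])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return z - X @ coef                # residual impedance signal

# synthetic workout: impedance falls purely with temperature here, so the
# corrected residual should be (near) zero -- no TBW signal was injected
t_skin = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # skin temperature rise (deg C)
t_core = np.array([0.0, 0.3, 0.4, 0.5, 0.9])    # core temperature rise (deg C)
z = 500.0 - 4.0 * t_skin - 10.0 * t_core        # impedance (ohm), temperature-driven
resid = temperature_corrected_impedance(z, t_skin, t_core)
```

In a real recording the residual would retain whatever impedance change the temperature covariates cannot explain, which is the part attributed to water loss.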
Heath, A L; Skeaff, C M; Gibson, R S
1999-04-01
The objective of this study was to validate two indirect methods for estimating the extent of menstrual blood loss against a reference method to determine which method would be most appropriate for use in a population of young adult women. Thirty-two women aged 18 to 29 years (mean +/- SD; 22.4 +/- 2.8) were recruited by poster in Dunedin (New Zealand). Data are presented for 29 women. A recall method and a record method for estimating extent of menstrual loss were validated against a weighed reference method. The Spearman rank correlation coefficient between blood loss assessed by the Weighed Menstrual Loss and the Menstrual Record was rs = 0.47 (p = 0.012), and between the Weighed Menstrual Loss and the Menstrual Recall it was rs = 0.61 (p = 0.001). The Record method correctly classified 66% of participants into the same tertile, grossly misclassifying 14%. The Recall method correctly classified 59% of participants, grossly misclassifying 7%. Reference method menstrual loss calculated for surrogate categories demonstrated a significant difference between the second and third tertiles for the Record method, and between the first and third tertiles for the Recall method. The Menstrual Recall method can differentiate between low and high levels of menstrual blood loss in young adult women, is quick to complete and analyse, and has a low participant burden.
Vadas, P A; Good, L W; Moore, P A; Widman, N
2009-01-01
Nonpoint-source pollution of fresh waters by P is a concern because it contributes to accelerated eutrophication. Given the state of the science concerning agricultural P transport, a simple tool to quantify annual, field-scale P loss is a realistic goal. We developed new methods to predict annual dissolved P loss in runoff from surface-applied manures and fertilizers and validated the methods with data from 21 published field studies. We incorporated these manure and fertilizer P runoff loss methods into an annual, field-scale P loss quantification tool that estimates dissolved and particulate P loss in runoff from soil, manure, fertilizer, and eroded sediment. We validated the P loss tool using independent data from 28 studies that monitored P loss in runoff from a variety of agricultural land uses for at least 1 yr. Results demonstrated (i) that our new methods to estimate P loss from surface manure and fertilizer are an improvement over methods used in existing Indexes, and (ii) that it was possible to reliably quantify annual dissolved, sediment, and total P loss in runoff using relatively simple methods and readily available inputs. Thus, a P loss quantification tool that does not require greater degrees of complexity or input data than existing P Indexes could accurately predict P loss across a variety of management and fertilization practices, soil types, climates, and geographic locations. However, estimates of runoff and erosion are still needed that are accurate to a level appropriate for the intended use of the quantification tool.
NASA Technical Reports Server (NTRS)
Edmonds, L. D.
2016-01-01
Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.
Estimated value of insurance premium due to Citarum River flood by using Bayesian method
NASA Astrophysics Data System (ADS)
Sukono; Aisah, I.; Tampubolon, Y. R. H.; Napitupulu, H.; Supian, S.; Subiyanto; Sidi, P.
2018-03-01
The Citarum river in South Bandung, West Java, Indonesia, floods almost every year, causing property damage and economic loss. The risk of loss can be mitigated by joining a flood insurance program. In this paper, we discuss the estimated value of insurance premiums due to Citarum river flooding, using the Bayesian method. It is assumed that the risk data for flood losses follow a Pareto distribution with a fat right tail. The distribution model parameters are estimated by the Bayesian method. First, parameter estimation is carried out under the assumption that the prior comes from the Gamma distribution family, while the observed data follow the Pareto distribution. Second, flood loss data are simulated based on the probability of damage in each flood-affected area. The analysis shows that the estimated premium values based on the pure premium principle are as follows: for a loss value of IDR 629.65 million, a premium of IDR 338.63 million; for a loss of IDR 584.30 million, a premium of IDR 314.24 million; and for a loss value of IDR 574.53 million, a premium of IDR 308.95 million. The premium estimates can serve as a reference for setting a reasonable premium, one that neither burdens the insured nor causes loss to the insurer.
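The Gamma prior is conjugate to the Pareto likelihood for the shape parameter, so the posterior update is a two-line computation. The sketch below shows that update and a pure-premium evaluation at the posterior mean; the prior hyperparameters, the Pareto threshold x_m, and the reuse of the paper's three loss figures as "data" are all illustrative assumptions, not the paper's actual fit.

```python
import math

def pareto_posterior(a, b, losses, x_m):
    """Conjugate update: Gamma(a, b) prior on the Pareto shape alpha,
    Pareto(alpha, x_m) likelihood -> Gamma(a + n, b + sum log(x_i / x_m))."""
    n = len(losses)
    t = sum(math.log(x / x_m) for x in losses)
    return a + n, b + t

def pure_premium(alpha, x_m):
    """Pure premium principle: premium = expected loss E[X] = alpha*x_m/(alpha-1),
    valid only for alpha > 1."""
    return alpha * x_m / (alpha - 1.0)

losses = [574.53, 584.30, 629.65]           # flood losses in IDR millions (illustrative data)
a_post, b_post = pareto_posterior(a=2.0, b=1.0, losses=losses, x_m=500.0)
alpha_hat = a_post / b_post                  # posterior mean of the shape parameter
premium = pure_premium(alpha_hat, x_m=500.0)
```

With a fat-tailed fit (alpha close to 1) the expected loss, and hence the pure premium, grows quickly, which is why the tail assumption matters so much for pricing.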
ELER software - a new tool for urban earthquake loss assessment
NASA Astrophysics Data System (ADS)
Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.
2010-12-01
Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is also equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. 
SELENA v4.0 (Molina et al., 2008) and ATC-55 (Yang, 2005). An urban loss assessment exercise for a scenario earthquake for the city of Istanbul is conducted and physical and social losses are presented. Damage to the urban environment is compared to the results obtained from similar software, i.e. KOERILoss (KOERI, 2002) and DBELA (Crowley et al., 2004). The European rapid loss estimation tool is expected to help enable effective emergency response, at both the local and global levels, as well as public information.
Gardner, Bethany T.; Dale, Ann Marie; Buckner-Petty, Skye; Van Dillen, Linda; Amick, Benjamin C.; Evanoff, Bradley
2016-01-01
Objective To assess construct and discriminant validity of four health-related work productivity loss questionnaires in relation to employer productivity metrics, and to describe variation in economic estimates of productivity loss provided by the questionnaires in healthy workers. Methods 58 billing office workers completed surveys including health information and four productivity loss questionnaires. Employer productivity metrics and work hours were also obtained. Results Productivity loss questionnaires were weakly to moderately correlated with employer productivity metrics. Workers with more health complaints reported greater health-related productivity loss than healthier workers, but showed no loss on employer productivity metrics. Economic estimates of productivity loss showed wide variation among questionnaires, yet no loss of actual productivity. Conclusions Additional studies are needed comparing questionnaires with objective measures in larger samples and other industries, to improve measurement methods for health-related productivity loss. PMID:26849261
Comparison of estimators of standard deviation for hydrologic time series
Tasker, Gary D.; Gilroy, Edward J.
1982-01-01
Unbiasing factors as a function of serial correlation, ρ, and sample size, n, for the sample standard deviation of a lag one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ: s = [(1/(n − 1)) Σ (x_i − x̄)²]^(1/2). The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
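The underestimation of σ by s under serial correlation, and the resulting unbiasing factor, is easy to reproduce by simulation. A minimal Monte Carlo sketch (parameters illustrative, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma, n, reps = 0.5, 1.0, 30, 2000

s_values = []
for _ in range(reps):
    # lag-one autoregressive series with marginal standard deviation sigma
    e = rng.normal(0.0, sigma * np.sqrt(1.0 - rho ** 2), size=n)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)        # stationary initial condition
    for i in range(1, n):
        x[i] = rho * x[i - 1] + e[i]
    s_values.append(np.std(x, ddof=1))   # the usual estimate s

mean_s = np.mean(s_values)               # < sigma: s underestimates sigma
unbiasing_factor = sigma / mean_s        # simulated correction, as in the paper's tables
```

For ρ = 0.5 and n = 30 the average of s falls a few percent below σ, so an unbiasing factor slightly above one is needed; the shortfall grows with ρ and shrinks with n.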
Estimating TCP Packet Loss Ratio from Sampled ACK Packets
NASA Astrophysics Data System (ADS)
Yamasaki, Yasuhiro; Shimonishi, Hideyuki; Murase, Tutomu
The advent of various quality-sensitive applications has greatly changed the requirements for IP network management and made the monitoring of individual traffic flows more important. Since the processing costs of per-flow quality monitoring are high, especially in high-speed backbone links, packet sampling techniques have been attracting considerable attention. Existing sampling techniques, such as those used in Sampled NetFlow and sFlow, however, focus on the monitoring of traffic volume, and there has been little discussion of the monitoring of such quality indexes as packet loss ratio. In this paper we propose a method for estimating, from sampled packets, packet loss ratios in individual TCP sessions. It detects packet loss events by monitoring duplicate ACK events raised by each TCP receiver. Because sampling reveals only a portion of the actual packet loss, the actual packet loss ratio is estimated statistically. Simulation results show that the proposed method can estimate the TCP packet loss ratio accurately from a 10% sampling of packets.
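The statistical back-scaling step can be sketched simply: under independent sampling at rate p, a loss event that triggers k duplicate ACKs is observed with probability 1 − (1 − p)^k, and traffic volume scales by 1/p. This is a simplified stand-in in the spirit of the paper's estimator, not its actual algorithm; the fixed k = 3 duplicate ACKs per loss is an assumption.

```python
import random

def estimate_loss_ratio(detected_loss_events, sampled_packets, p, dupacks_per_loss=3):
    """Scale dup-ACK loss events observed in a p-sampled stream back to the
    full stream: a loss event emitting k duplicate ACKs is detected with
    probability 1 - (1 - p)^k, and total traffic is scaled by 1/p."""
    detect_prob = 1.0 - (1.0 - p) ** dupacks_per_loss
    est_losses = detected_loss_events / detect_prob
    est_packets = sampled_packets / p
    return est_losses / est_packets

# simulate a TCP-like stream: 100000 packets, 1% loss, 10% packet sampling
random.seed(7)
p, true_loss = 0.10, 0.01
n = 100_000
losses = sum(1 for _ in range(n) if random.random() < true_loss)
# each loss event yields 3 duplicate ACKs; it is detected if any one is sampled
detected = sum(1 for _ in range(losses) if any(random.random() < p for _ in range(3)))
sampled = sum(1 for _ in range(n) if random.random() < p)
est = estimate_loss_ratio(detected, sampled, p)
```

At a 10% sampling rate only about 27% of loss events are seen directly, yet the scaled estimate lands near the true 1% loss ratio, which is the paper's central point.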
Predicting Loss-of-Control Boundaries Toward a Piloting Aid
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
Yao, Hong; You, Zhen; Liu, Bo
2016-01-01
The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole. PMID:26805869
Accuracy of Blood Loss Measurement during Cesarean Delivery.
Doctorvaladan, Sahar V; Jelks, Andrea T; Hsieh, Eric W; Thurer, Robert L; Zakowski, Mark I; Lagrew, David C
2017-04-01
Objective This study aims to compare the accuracy of visual, quantitative gravimetric, and colorimetric methods used to determine blood loss during cesarean delivery procedures employing a hemoglobin extraction assay as the reference standard. Study Design In 50 patients having cesarean deliveries blood loss determined by assays of hemoglobin content on surgical sponges and in suction canisters was compared with obstetricians' visual estimates, a quantitative gravimetric method, and the blood loss determined by a novel colorimetric system. Agreement between the reference assay and other measures was evaluated by the Bland-Altman method. Results Compared with the blood loss measured by the reference assay (470 ± 296 mL), the colorimetric system (572 ± 334 mL) was more accurate than either visual estimation (928 ± 261 mL) or gravimetric measurement (822 ± 489 mL). The correlation between the assay method and the colorimetric system was more predictive (standardized coefficient = 0.951, adjusted R2 = 0.902) than either visual estimation (standardized coefficient = 0.700, adjusted R2 = 0.479) or the gravimetric determination (standardized coefficient = 0.564, adjusted R2 = 0.304). Conclusion During cesarean delivery, measuring blood loss using colorimetric image analysis is superior to visual estimation and a gravimetric method. Implementation of colorimetric analysis may enhance the ability of management protocols to improve clinical outcomes.
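The Bland-Altman agreement analysis used in the study reduces to the mean difference between methods and its 95% limits. A minimal sketch with invented blood-loss values (the real data are not reproduced here):

```python
import numpy as np

def bland_altman(reference, method):
    """Bland-Altman agreement: mean bias and 95% limits of agreement
    between a reference measurement and a candidate method."""
    reference, method = np.asarray(reference, float), np.asarray(method, float)
    diff = method - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# illustrative blood-loss values (mL): reference assay vs. visual estimates,
# with visual estimation systematically overestimating as in the study
ref = [300.0, 450.0, 500.0, 620.0, 410.0]
vis = [700.0, 900.0, 950.0, 1100.0, 860.0]
bias, (lo, hi) = bland_altman(ref, vis)
```

A large positive bias with limits that exclude zero, as here, is the Bland-Altman signature of systematic overestimation.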
Attitude determination using vector observations: A fast optimal matrix algorithm
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1993-01-01
The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
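One well-known direct solution of Wahba's problem, shown here as an illustration rather than as the specific algorithm of the abstract, is the SVD method: the optimal attitude is A = U diag(1, 1, det(U)det(V)) Vᵀ, where B = Σ wᵢ bᵢ rᵢᵀ is the attitude profile matrix.

```python
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights):
    """Solve Wahba's problem: find the rotation A minimizing the weighted loss
    sum_i w_i * ||b_i - A @ r_i||^2, via SVD of the attitude profile matrix."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)        # enforce det(A) = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# recover a known rotation from two noiseless vector observations
angle = 0.3
A_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
bodies = [A_true @ r for r in refs]
A_est = wahba_svd(bodies, refs, weights=[1.0, 0.5])
```

With two non-parallel observations the attitude is fully determined (the TRIAD case mentioned in the abstract), and the SVD solution recovers it exactly in the noiseless limit.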
Dyverfeldt, Petter; Hope, Michael D.; Tseng, Elaine E.; Saloner, David
2013-01-01
OBJECTIVES The authors sought to measure the turbulent kinetic energy (TKE) in the ascending aorta of patients with aortic stenosis and to assess its relationship to irreversible pressure loss. BACKGROUND Irreversible pressure loss caused by energy dissipation in post-stenotic flow is an important determinant of the hemodynamic significance of aortic stenosis. The simplified Bernoulli equation used to estimate pressure gradients often misclassifies the ventricular overload caused by aortic stenosis. The current gold standard for estimation of irreversible pressure loss is catheterization, but this method is rarely used due to its invasiveness. Post-stenotic pressure loss is largely caused by dissipation of turbulent kinetic energy into heat. Recent developments in magnetic resonance flow imaging permit noninvasive estimation of TKE. METHODS The study was approved by the local ethics review board and all subjects gave written informed consent. Three-dimensional cine magnetic resonance flow imaging was used to measure TKE in 18 subjects (4 normal volunteers, 14 patients with aortic stenosis with and without dilation). For each subject, the peak total TKE in the ascending aorta was compared with a pressure loss index. The pressure loss index was based on a previously validated theory relating pressure loss to measures obtainable by echocardiography. RESULTS The total TKE did not appear to be related to global flow patterns visualized based on magnetic resonance–measured velocity fields. The TKE was significantly higher in patients with aortic stenosis than in normal volunteers (p < 0.001). The peak total TKE in the ascending aorta was strongly correlated to index pressure loss (R2 = 0.91). CONCLUSIONS Peak total TKE in the ascending aorta correlated strongly with irreversible pressure loss estimated by a well-established method. 
Direct measurement of TKE by magnetic resonance flow imaging may, with further validation, be used to estimate irreversible pressure loss in aortic stenosis. PMID:23328563
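The TKE quantity integrated in the study follows directly from the voxel-wise standard deviations of the three velocity components. A minimal sketch of the per-voxel formula, with an assumed blood density and invented fluctuation values:

```python
def turbulent_kinetic_energy(sigma_u, sigma_v, sigma_w, rho=1060.0):
    """Turbulent kinetic energy per unit volume (J/m^3) from the standard
    deviations (m/s) of the three velocity components; rho is blood
    density in kg/m^3 (1060 is a common assumed value)."""
    return 0.5 * rho * (sigma_u ** 2 + sigma_v ** 2 + sigma_w ** 2)

# illustrative voxel velocity fluctuations (m/s) in post-stenotic flow
tke = turbulent_kinetic_energy(0.1, 0.12, 0.08)
```

Summing this quantity over the ascending-aorta voxels gives the "total TKE" that the study correlates with the pressure loss index.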
Niswonger, R.G.; Prudic, David E.; Fogg, G.E.; Stonestrom, David A.; Buckland, E.M.
2008-01-01
A method is presented for estimating seepage loss and streambed hydraulic conductivity along intermittent and ephemeral streams using streamflow front velocities in initially dry channels. The method uses the kinematic wave equation for routing streamflow in channels coupled to Philip's equation for infiltration. The coupled model considers variations in seepage loss both across and along the channel. Water redistribution in the unsaturated zone is also represented in the model. Sensitivity of the streamflow front velocity to parameters used for calculating seepage loss and for routing streamflow shows that the streambed hydraulic conductivity has the greatest sensitivity for moderate to large seepage loss rates. Channel roughness, geometry, and slope are most important for low seepage loss rates; however, streambed hydraulic conductivity is still important for values greater than 0.008 m/d. Two example applications are presented to demonstrate the utility of the method.
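The infiltration half of the coupled model uses Philip's two-term equation, whose rate tends toward the streambed hydraulic conductivity at late time, which is why conductivity dominates the sensitivity for large seepage losses. A minimal sketch with illustrative streambed parameters:

```python
import numpy as np

def philip_infiltration_rate(t, sorptivity, k_sat):
    """Philip's two-term infiltration model: f(t) = S / (2 sqrt(t)) + K."""
    return sorptivity / (2.0 * np.sqrt(t)) + k_sat

def cumulative_infiltration(t, sorptivity, k_sat):
    """Cumulative depth infiltrated: I(t) = S sqrt(t) + K t."""
    return sorptivity * np.sqrt(t) + k_sat * t

# illustrative streambed parameters: sorptivity S (m/d^0.5), conductivity K (m/d)
S, K = 0.05, 0.5
t = np.linspace(0.01, 1.0, 100)                  # days since channel wetting
I = cumulative_infiltration(t, S, K)             # seepage depth per unit wetted area
late_rate = philip_infiltration_rate(1.0, S, K)  # approaches K as t grows
```

In the full method this loss term is evaluated across and along the channel and coupled to the kinematic wave routing, with the streamflow front velocity providing the observable used for calibration.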
NASA Astrophysics Data System (ADS)
Mohammed, Amal A.; Abraheem, Sudad K.; Fezaa Al-Obedy, Nadia J.
2018-05-01
This paper considers the Burr type XII distribution. The maximum likelihood and Bayes methods of estimation are used to estimate the unknown scale parameter (α). Al-Bayyati's loss function and a suggested loss function are used to find the reliability with the least loss, and the reliability function is expanded in terms of a set of power functions. Matlab (ver. 9) is used for the computations, and some examples are given.
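For readers unfamiliar with the distribution, a small sketch may help. The snippet below shows the standard Burr XII survival function and the closed-form maximum likelihood estimate of the shape parameter k when the other shape c is known; the paper itself estimates a scale parameter α under Bayes and suggested loss functions, a different and more involved calculation:

```python
import math

def burr12_reliability(t, c, k):
    """Burr XII survival (reliability) function R(t) = (1 + t**c) ** (-k)."""
    return (1.0 + t ** c) ** (-k)

def burr12_k_mle(data, c):
    """Closed-form MLE of the Burr XII shape k when c is known:
    k_hat = n / sum(ln(1 + x_i**c)).
    """
    s = sum(math.log1p(x ** c) for x in data)
    return len(data) / s
```

Expanding R(t) in powers of t, as the paper does for its reliability series, starts from this same survival function.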
Estimation of furrow irrigation sediment loss using an artificial neural network
USDA-ARS?s Scientific Manuscript database
The area irrigated by furrow irrigation in the U.S. has been steadily decreasing but still represents about 20% of the total irrigated area in the U.S. Furrow irrigation sediment loss is a major water quality issue and a method for estimating sediment loss is needed to quantify the environmental imp...
Al Kadri, Hanan M F; Al Anazi, Bedayah K; Tamim, Hani M
2011-06-01
One of the major problems in international literature is how to measure postpartum blood loss with accuracy. We aimed in this research to assess the accuracy of visual estimation of postpartum blood loss (by each of two main health-care providers) compared with the gravimetric calculation method. We carried out a prospective cohort study at King Abdulaziz Medical City, Riyadh, Saudi Arabia between 1 November 2009 and 31 December 2009. All women who were admitted to labor and delivery suite and delivered vaginally were included in the study. Postpartum blood loss was visually estimated by the attending physician and obstetrics nurse and then objectively calculated by a gravimetric machine. Comparison between the three methods of blood loss calculation was carried out. A total of 150 patients were included in this study. There was a significant difference between the gravimetric calculated blood loss and both health-care providers' estimation with a tendency to underestimate the loss by about 30%. The background and seniority of the assessing health-care provider did not affect the accuracy of the estimation. The corrected incidence of postpartum hemorrhage in Saudi Arabia was found to be 1.47%. Health-care providers tend to underestimate the volume of postpartum blood loss by about 30%. Training and continuous auditing of the diagnosis of postpartum hemorrhage is needed to avoid missing cases and thus preventing associated morbidity and mortality.
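The gravimetric calculation itself is simple: weigh the used swabs and drapes, subtract the dry weight, and divide by the density of whole blood. A sketch under the commonly quoted assumption of 1.06 g/mL blood density (the abstract does not state the constant used by the study's gravimetric machine):

```python
def gravimetric_blood_loss_ml(wet_weight_g, dry_weight_g, blood_density_g_per_ml=1.06):
    """Estimate blood volume (mL) from the weight gained by swabs and drapes.

    The 1.06 g/mL density is a commonly quoted figure for whole blood; the
    exact constant used by a given gravimetric device may differ.
    """
    if wet_weight_g < dry_weight_g:
        raise ValueError("wet weight must not be less than dry weight")
    return (wet_weight_g - dry_weight_g) / blood_density_g_per_ml

# Hypothetical numbers: a 30% visual under-estimate, as reported in the
# study, would look like this.
actual = gravimetric_blood_loss_ml(742.0, 212.0)   # 530 g of blood -> 500 mL
visual_estimate = 0.70 * actual
```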
Hidden Markov model for dependent mark loss and survival estimation
Laake, Jeffrey L.; Johnson, Devin S.; Diefenbach, Duane R.; Ternent, Mark A.
2014-01-01
Mark-recapture estimators assume no loss of marks to provide unbiased estimates of population parameters. We describe a hidden Markov model (HMM) framework that integrates a mark loss model with a Cormack–Jolly–Seber model for survival estimation. Mark loss can be estimated with single-marked animals as long as a sub-sample of animals has a permanent mark. Double-marking provides an estimate of mark loss assuming independence but dependence can be modeled with a permanently marked sub-sample. We use a log-linear approach to include covariates for mark loss and dependence which is more flexible than existing published methods for integrated models. The HMM approach is demonstrated with a dataset of black bears (Ursus americanus) with two ear tags and a subset of which were permanently marked with tattoos. The data were analyzed with and without the tattoo. Dropping the tattoos resulted in estimates of survival that were reduced by 0.005–0.035 due to tag loss dependence that could not be modeled. We also analyzed the data with and without the tattoo using a single tag. By not using.
Capel, P.D.; Larson, S.J.
1995-01-01
Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle; and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimate the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.
Quantifying Soiling Loss Directly From PV Yield
Deceglie, Michael G.; Micheli, Leonardo; Muller, Matthew
2018-01-23
Soiling of photovoltaic (PV) panels is typically quantified through the use of specialized sensors. Here, we describe and validate a method for estimating soiling loss experienced by PV systems directly from system yield without the need for precipitation data. The method, termed the stochastic rate and recovery (SRR) method, automatically detects soiling intervals in a dataset, then stochastically generates a sample of possible soiling profiles based on the observed characteristics of each interval. In this paper, we describe the method, validate it against soiling station measurements, and compare it with other PV-yield-based soiling estimation methods. The broader application of the SRR method will enable the fleet scale assessment of soiling loss to facilitate mitigation planning and risk assessment.
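The rate-fitting step inside a single soiling interval can be sketched as an ordinary least-squares slope of the daily performance index. The values below are hypothetical, and the full SRR method additionally detects the intervals and samples soiling profiles stochastically, which this sketch does not attempt:

```python
def soiling_rate(perf_index):
    """Least-squares slope of a daily performance-index series.

    A negative slope over a rain-free interval is the soiling rate
    (fractional performance loss per day).
    """
    n = len(perf_index)
    xs = range(n)
    x_mean = (n - 1) / 2.0
    y_mean = sum(perf_index) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, perf_index))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# Hypothetical interval: index falls 0.2%/day from 1.0 over ten days.
rate = soiling_rate([1.0 - 0.002 * d for d in range(10)])
```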
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, Xin-ran; Wang, Xin
2017-04-01
When the genetic algorithm is used to solve the too-short-arc (TSA) orbit determination problem, the original method for outlier deletion is no longer applicable, because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is realized by introducing different loss functions into the fitness function, which solves the outlier problem of TSA orbit determination. Compared with the classical method, the genetic algorithm is greatly simplified by the introduction of these loss functions. A comparison of calculations with multiple loss functions shows that the least median of squares (LMS) and least trimmed squares (LTS) estimators greatly improve the robustness of TSA orbit determination and have a high breakdown point.
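As a concrete illustration, the LMS and LTS losses can be written in a few lines; used as a fitness function, either one largely ignores gross outliers among the residuals. The residual values below are hypothetical:

```python
def lms_loss(residuals):
    """Least median of squares: the median of the squared residuals."""
    sq = sorted(r * r for r in residuals)
    n = len(sq)
    mid = n // 2
    return sq[mid] if n % 2 else 0.5 * (sq[mid - 1] + sq[mid])

def lts_loss(residuals, trim_fraction=0.5):
    """Least trimmed squares: the sum of the h smallest squared residuals."""
    sq = sorted(r * r for r in residuals)
    h = max(1, int(round(trim_fraction * len(sq))))
    return sum(sq[:h])

# Both losses stay small despite the gross outlier (residual 100), so a
# genetic algorithm using them as fitness is not dragged toward it.
res = [0.1, -0.2, 0.15, 100.0]
```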
Effect of education and clinical assessment on the accuracy of post partum blood loss estimation
2014-01-01
Background This research aimed to assess the effect of health care provider education on the accuracy of post partum blood loss estimation. Methods A non-randomized observational study conducted at King Abdulaziz Medical City, Riyadh, Saudi Arabia between January 1, 2011 and June 30, 2011. One hundred and twenty-three health care providers who are involved in the estimation of post partum blood loss were eligible to participate. The participants were subjected to three research phases and an educational intervention. They assessed a total of 30 different simulated blood loss stations, with 10 stations in each of the research phases. These phases took place before and after educational sessions on how to visually estimate blood loss and how to best utilize patient data in clinical scenarios. We assessed the differences between the estimated blood loss and the actual measure. P-values were calculated to assess the differences between the estimations in the three research phases. Results The participants significantly under-estimated post partum blood loss. The accuracy improved after training (p-value < 0.0001) and after analysing each patient's clinical information (p-value = 0.042). The overall results were not affected by the participants' clinical backgrounds or their years of experience. Under-estimation was more prominent in cases where greater-than-average to excessive blood losses were simulated, while over-estimations or accurate estimations were more prominent in less-than-average blood loss incidents. Conclusion Simple education programmes can improve traditional findings related to under-estimation of blood loss. More sophisticated clinical education programmes may provide additional improvements. PMID:24646156
Singh, Bismark; Meyers, Lauren Ancel
2017-05-08
We provide a methodology for estimating counts of single-year-of-age live-births, fetal-losses, abortions, and pregnant women from aggregated age-group counts. As a case study, we estimate counts for the 254 counties of Texas for the year 2010. We use interpolation to estimate counts of live-births, fetal-losses, and abortions by women of each single-year-of-age for all Texas counties. We then use these counts to estimate the numbers of pregnant women for each single-year-of-age, which were previously available only in aggregate. To support public health policy and planning, we provide single-year-of-age estimates of live-births, fetal-losses, abortions, and pregnant women for all Texas counties in the year 2010, as well as the estimation method source code.
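The simplest variant of disaggregating age-group counts, spreading each group's count evenly over its single years of age while preserving totals, can be sketched as follows. The paper uses interpolation, which would replace the even split; the group counts here are hypothetical:

```python
def disaggregate_age_groups(group_counts):
    """Spread aggregated age-group counts evenly over single years of age.

    group_counts -- dict mapping (start_age, end_age), inclusive, to a count.
    Returns a dict of single-year-of-age counts whose total matches the input.
    """
    single = {}
    for (start, end), count in group_counts.items():
        years = list(range(start, end + 1))
        share = count / len(years)
        for age in years:
            single[age] = single.get(age, 0.0) + share
    return single

# Hypothetical five-year groups for one county.
groups = {(15, 19): 250.0, (20, 24): 400.0}
by_age = disaggregate_age_groups(groups)
```

A smoother interpolant (e.g. monotone splines through cumulative counts) would keep the same total-preserving property while avoiding the step changes at group boundaries.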
Robust Characterization of Loss Rates
NASA Astrophysics Data System (ADS)
Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph
2015-08-01
Many physical implementations of qubits—including ion traps, optical lattices and linear optics—suffer from loss. A nonzero probability of irretrievably losing a qubit can be a substantial obstacle to fault-tolerant methods of processing quantum information, requiring new techniques to safeguard against loss that introduce an additional overhead that depends upon the loss rate. Here we present a scalable and platform-independent protocol for estimating the average loss rate (averaged over all input states) resulting from an arbitrary Markovian noise process, as well as an independent estimate of detector efficiency. Moreover, we show that our protocol gives an additional constraint on estimated parameters from randomized benchmarking that improves the reliability of the estimated error rate and provides a new indicator for non-Markovian signatures in the experimental data. We also derive a bound for the state-dependent loss rate in terms of the average loss rate.
Fodor, Nándor; Foskolos, Andreas; Topp, Cairistiona F E; Moorby, Jon M; Pásztor, László; Foyer, Christine H
2018-01-01
Dairy farming is one of the most important sectors of United Kingdom (UK) agriculture. It faces major challenges due to climate change, which will have direct impacts on dairy cows as a result of heat stress. In the absence of adaptations, this could potentially lead to considerable milk loss. Using an 11-member climate projection ensemble, as well as an ensemble of 18 milk loss estimation methods, temporal changes in milk production of UK dairy cows were estimated for the 21st century at a 25 km resolution in a spatially-explicit way. While increases in UK temperatures are projected to lead to relatively low average annual milk losses, even for southern UK regions (<180 kg/cow), the 'hottest' 25×25 km grid cell in the hottest year in the 2090s showed an annual milk loss exceeding 1300 kg/cow. This figure represents approximately 17% of the potential milk production of today's average cow. Despite the potential considerable inter-annual variability of annual milk loss, as well as the large differences between the climate projections, the variety of calculation methods is likely to introduce even greater uncertainty into milk loss estimations. To address this issue, a novel, more biologically-appropriate mechanism of estimating milk loss is proposed that provides more realistic future projections. We conclude that South West England is the region most vulnerable to climate change economically, because it is characterised by a high dairy herd density and therefore potentially high heat stress-related milk loss. In the absence of mitigation measures, estimated heat stress-related annual income loss for this region by the end of this century may reach £13.4M in average years and £33.8M in extreme years.
Probabilistic seismic loss estimation via endurance time method
NASA Astrophysics Data System (ADS)
Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.
2017-01-01
Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA driven response predictions of 34 code conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of damage and loss prediction functions provided by ATC 58.
[New non-volumetric method for estimating peroperative blood loss].
Tachoires, D; Mourot, F; Gillardeau, G
1979-01-01
The authors have developed a new method for the estimation of peroperative blood loss by measurement of the haematocrit of a fluid obtained by diluting the blood from swabs in a known volume of isotonic saline solution. This value, referred to a nomogram, may be used to assess, by direct reading, the volume of blood absorbed by the swabs in relation to the patient's preoperative or current haematocrit. The precision of the method is discussed. The results obtained justify its routine application in paediatric surgery, in patients with cardiac failure, and in all cases requiring precise compensation of peroperative blood loss.
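The underlying mass balance can be reconstructed from the description: the red cells in the swab blood are conserved on dilution, so V_b * H_p = (V_b + V_s) * H_f, giving V_b = V_s * H_f / (H_p - H_f). A sketch with hypothetical numbers (the paper reads the same result off a nomogram; this equation is our reconstruction, not quoted from the paper):

```python
def swab_blood_volume_ml(saline_volume_ml, fluid_hct, patient_hct):
    """Blood volume recovered from swabs via the dilution haematocrit.

    Red-cell mass balance (reconstructed, not quoted from the paper):
    V_b * H_p = (V_b + V_s) * H_f  =>  V_b = V_s * H_f / (H_p - H_f).
    """
    if not 0.0 < fluid_hct < patient_hct:
        raise ValueError("fluid haematocrit must lie strictly between 0 and the patient haematocrit")
    return saline_volume_ml * fluid_hct / (patient_hct - fluid_hct)

# Hypothetical numbers: 1000 mL saline, fluid Hct 3.6%, patient Hct 36%.
loss = swab_blood_volume_ml(1000.0, 0.036, 0.36)   # about 111 mL
```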
Consumptive use and resulting leach-field water budget of a mountain residence
Stannard, David; Paul, William T.; Laws, Roy; Poeter, Eileen P.
2010-01-01
Consumptive use of water in a dispersed rural community has important implications for maximum housing density and its effects on sustainability of groundwater withdrawals. Recent rapid growth in Colorado, USA has stressed groundwater supplies in some areas, thereby increasing scrutiny of approximate methods developed there more than 30 years ago to estimate consumptive use that are still used today. A foothills residence was studied during a 2-year period to estimate direct and indirect water losses. Direct losses are those from evaporation inside the home, plus any outdoor use. Indirect loss is evapotranspiration (ET) from the residential leach-field in excess of ET from the immediately surrounding terrain. Direct losses were 18.7% of water supply to the home, substantially larger than estimated historically in Colorado. A new approach was developed to estimate indirect loss, using chamber methods together with the Penman–Monteith model. Indirect loss was only 0.9% of water supply, but this value probably was anomalously low due to a recurring leach-field malfunction. Resulting drainage beneath the leach-field was 80.4% of water supply. Guidelines are given to apply the same methodology at other sites and combine results with a survey of leach-fields in an area to obtain more realistic average values of ET losses.
Dyverfeldt, Petter; Hope, Michael D; Tseng, Elaine E; Saloner, David
2013-01-01
The authors sought to measure the turbulent kinetic energy (TKE) in the ascending aorta of patients with aortic stenosis and to assess its relationship to irreversible pressure loss. Irreversible pressure loss caused by energy dissipation in post-stenotic flow is an important determinant of the hemodynamic significance of aortic stenosis. The simplified Bernoulli equation used to estimate pressure gradients often misclassifies the ventricular overload caused by aortic stenosis. The current gold standard for estimation of irreversible pressure loss is catheterization, but this method is rarely used due to its invasiveness. Post-stenotic pressure loss is largely caused by dissipation of turbulent kinetic energy into heat. Recent developments in magnetic resonance flow imaging permit noninvasive estimation of TKE. The study was approved by the local ethics review board and all subjects gave written informed consent. Three-dimensional cine magnetic resonance flow imaging was used to measure TKE in 18 subjects (4 normal volunteers, 14 patients with aortic stenosis with and without dilation). For each subject, the peak total TKE in the ascending aorta was compared with a pressure loss index. The pressure loss index was based on a previously validated theory relating pressure loss to measures obtainable by echocardiography. The total TKE did not appear to be related to global flow patterns visualized based on magnetic resonance-measured velocity fields. The TKE was significantly higher in patients with aortic stenosis than in normal volunteers (p < 0.001). The peak total TKE in the ascending aorta was strongly correlated to index pressure loss (R(2) = 0.91). Peak total TKE in the ascending aorta correlated strongly with irreversible pressure loss estimated by a well-established method. Direct measurement of TKE by magnetic resonance flow imaging may, with further validation, be used to estimate irreversible pressure loss in aortic stenosis. 
Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Gardner, Bethany T; Dale, Ann Marie; Buckner-Petty, Skye; Van Dillen, Linda; Amick, Benjamin C; Evanoff, Bradley
2016-02-01
The aim of the study was to assess construct and discriminant validity of four health-related work productivity loss questionnaires in relation to employer productivity metrics, and to describe variation in economic estimates of productivity loss provided by the questionnaires in healthy workers. Fifty-eight billing office workers completed surveys including health information and four productivity loss questionnaires. Employer productivity metrics and work hours were also obtained. Productivity loss questionnaires were weakly to moderately correlated with employer productivity metrics. Workers with more health complaints reported greater health-related productivity loss than healthier workers, but showed no loss on employer productivity metrics. Economic estimates of productivity loss showed wide variation among questionnaires, yet no loss of actual productivity. Additional studies are needed comparing questionnaires with objective measures in larger samples and other industries, to improve measurement methods for health-related productivity loss.
Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron
2015-02-01
This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output per labor hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.
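The core of the output-per-labor-hour method reduces to a one-line calculation, sketched below with hypothetical employer figures; the paper's extension from employer-level to regional losses is not shown:

```python
def absenteeism_cost(hours_absent, annual_output, annual_labor_hours):
    """Indirect cost of absenteeism: hours lost times output per labor hour.

    Contrast with the human capital approach, which values lost hours at
    the wage rate rather than at output per hour, and so yields lower
    estimates for high output-per-hour industries.
    """
    output_per_hour = annual_output / annual_labor_hours
    return hours_absent * output_per_hour

# Hypothetical employer: $50M annual output over 1M labor hours,
# with 20,000 hours lost to absence.
cost = absenteeism_cost(20_000, 50_000_000, 1_000_000)
```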
Grimes, Caris E; Quaife, Matthew; Kamara, Thaim B; Lavy, Christopher B D; Leather, Andy J M; Bolkan, Håkon A
2018-03-14
The Lancet Commission on Global Surgery estimated that low- and middle-income countries will lose a cumulative US$12.3 trillion in gross domestic product (GDP) due to the unmet burden of surgical disease. However, no country-specific data currently exist. We aimed to estimate the costs to the Sierra Leone economy from death and disability which may have been averted by surgical care. We used estimates of total, met and unmet need from two main sources: a cluster randomised, cross-sectional, countrywide survey and a retrospective, nationwide study on surgery in Sierra Leone. We calculated estimated disability-adjusted life years from morbidity and mortality for the estimated unmet burden and modelled the likely economic impact using three different methods: gross national income per capita, lifetime earnings foregone and value of a statistical life. In 2012, the estimated, discounted lifetime loss to the Sierra Leone economy from the unmet burden of surgical disease was between US$1.1 and US$3.8 billion, depending on the economic method used. These lifetime losses equate to between 23% and 100% of the annual GDP for Sierra Leone. 80% of economic losses were due to mortality. The incremental losses averted by scale up of surgical provision to the Lancet Commission target of 80% were calculated to be between US$360 million and US$2.9 billion. There is a large economic loss from the unmet need for surgical care in Sierra Leone. There is an immediate need for massive investment to counteract ongoing economic losses. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
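Of the three valuation methods, the gross-national-income approach is the simplest to sketch: each disability-adjusted life year is valued at one year of GNI per capita and the stream is discounted to present value. The figures and the 3% discount rate below are illustrative, not the paper's Sierra Leone inputs:

```python
def discounted_economic_loss(dalys_per_year, gni_per_capita, years, discount_rate=0.03):
    """Present value of an annual DALY stream valued at GNI per capita.

    Each DALY is valued at one year of GNI per capita; the annual stream
    is discounted back to year zero.
    """
    total = 0.0
    for t in range(years):
        total += dalys_per_year * gni_per_capita / (1.0 + discount_rate) ** t
    return total

# Illustrative inputs: 100,000 DALYs/year, $500 GNI per capita, 10 years.
loss = discounted_economic_loss(100_000, 500.0, 10)
```

Discounting is why the result is below the undiscounted US$500M total for these inputs; the lifetime-earnings and value-of-a-statistical-life methods would substitute different per-DALY values into the same structure.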
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.J. Miller; S.A. Mizell; R.H. French
2005-10-01
Transmission losses along ephemeral channels are an important, yet poorly understood, aspect of rainfall-runoff prediction. Losses occur as flow infiltrates channel bed, banks, and floodplains. Estimating transmission losses in arid environments is difficult because of the variability of surficial geomorphic characteristics and infiltration capacities of soils and near-surface low-permeability geologic layers (e.g., calcrete). Transmission losses in ephemeral channels are nonlinear functions of discharge and time (Lane, 1972), and vary spatially along the channel reach and with soil antecedent moisture conditions (Sharma and Murthy, 1994). Rainfall-runoff models used to estimate peak discharge and runoff volume for flood hazard assessment are not designed specifically for ephemeral channels, where transmission loss can be significant because of the available storage volume in channel soils. Accuracy of the flow routing and rainfall-runoff models is dependent on the transmission loss estimate. Transmission loss rate is the most uncertain parameter in flow routing through ephemeral channels. This research, sponsored by the U.S. Department of Energy, National Nuclear Security Administration (DOE/NNSA) and conducted at the Nevada Test Site (NTS), is designed to improve understanding of the impact of transmission loss on ephemeral flood modeling and compare various methodologies for predicting runoff from rainfall events. Various applications of this research to DOE projects include more site-specific accuracy in runoff prediction; possible reduction in size of flood mitigation structures at the NTS; and a better understanding of expected infiltration from runoff losses into landfill covers.
Two channel transmission loss field experiments were performed on the NTS between 2001 and 2003: the first was conducted in the ER-5-3 channel (Miller et al., 2003), between March and June 2001, and the second was conducted in the Cambric Ditch (Mizell et al., 2005), between April and July 2003. Both studies used water discharged from unrelated drilling activities during well development and aquifer pump tests. Discharge measurements at several flumes located along the channels were used to directly measure transmission losses. Flume locations were chosen in relation to geomorphic surface types and ages, vegetative cover and types, subsurface indurated layers (calcrete), channel slopes, etc. Transmission losses were quantified using three different analysis methods. Method 1 uses Lane's Method (Lane, 1983) for estimating flood magnitude in ephemeral channels. Method 2 uses heat as a subsurface tracer for infiltration. Numerical modeling, using HYDRUS-2D (Simunek et al., 1999), a finite-element-based flow and transport code, was applied to estimate infiltration from soil temperature data. Method 3 uses hydraulic gradient and water content in a Darcy's Law approach (Freeze and Cherry, 1979) to calculate one-dimensional flow rates. Heat dissipation and water content data were collected for this analysis.
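Method 3's Darcy's Law calculation reduces to a one-line flux estimate; the conductivity value and the unit vertical gradient below are hypothetical:

```python
def darcy_flux(k_sat, head_gradient):
    """One-dimensional Darcy flux q = -K * dh/dz.

    k_sat         -- saturated hydraulic conductivity (m/d)
    head_gradient -- hydraulic head gradient dh/dz (dimensionless)
    """
    return -k_sat * head_gradient

# Hypothetical channel soil: K = 0.5 m/d with a unit downward gradient
# (dh/dz = -1), a common simplification for deep percolation beneath a
# wetted channel, giving a flux equal to K.
q = darcy_flux(0.5, -1.0)   # 0.5 m/d downward
```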
An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1994-01-01
Nonideal behavior has traditionally been modeled by defining efficiency (a comparison between actual and isentropic processes), and subsequent specification by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods by applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). A preliminary verification of REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. Current methods compare quite well with more complex method results, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable area flow, and a newly developed, combined variable-area duct with friction flow solution. These solution comparisons may offer an alternative to transitional and CFD-intense methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.
Modal loss mechanism of micro-structured VCSELs studied using full vector FDTD method.
Jo, Du-Ho; Vu, Ngoc Hai; Kim, Jin-Tae; Hwang, In-Kag
2011-09-12
Modal properties of vertical cavity surface-emitting lasers (VCSELs) with holey structures are studied using a finite difference time domain (FDTD) method. We investigate loss behavior with respect to the variation of structural parameters, and explain the loss mechanism of VCSELs. We also propose an effective method to estimate the modal loss based on mode profiles obtained using FDTD simulation. Our results could provide an important guideline for optimization of the microstructures of high-power single-mode VCSELs.
Spatial correlation of probabilistic earthquake ground motion and loss
Wesson, R.L.; Perkins, D.M.
2001-01-01
Spatial correlation of annual earthquake ground motions and losses can be used to estimate the variance of annual losses to a portfolio of properties exposed to earthquakes. A direct method is described for the calculation of the spatial correlation of earthquake ground motions and losses. Calculations for the direct method can be carried out using either numerical quadrature or a discrete, matrix-based approach. Numerical results for this method are compared with those calculated from a simple Monte Carlo simulation. Spatial correlation of ground motion and loss is induced by the systematic attenuation of ground motion with distance from the source, by common site conditions, and by the finite length of fault ruptures. Spatial correlation is also strongly dependent on the partitioning of the variability, given an event, into interevent and intraevent components. Intraevent variability reduces the spatial correlation of losses; interevent variability increases it. The higher the spatial correlation, the larger the variance in losses to a portfolio, and the more likely extreme values become. This result underscores the importance of accurately determining the relative magnitudes of intraevent and interevent variability in ground-motion studies, because of the strong impact in estimating earthquake losses to a portfolio. The direct method offers an alternative to simulation for calculating the variance of losses to a portfolio, which may reduce the amount of calculation required.
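The effect of the interevent/intraevent split can be checked with a small Monte Carlo sketch: a shared (interevent) term correlates all site losses in a year, inflating the portfolio variance relative to the same total variability placed in independent (intraevent) terms. The lognormal form and all parameter values here are illustrative, not the paper's:

```python
import math
import random
import statistics

def simulate_annual_losses(n_years, n_sites, inter_sd, intra_sd, seed=0):
    """Annual portfolio losses with one event per year and lognormal site losses.

    The interevent term is shared by every site in a given year, which is
    what induces spatial correlation of losses; the intraevent terms are
    drawn independently for each site.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_years):
        inter = rng.gauss(0.0, inter_sd)        # common to all sites this year
        total = 0.0
        for _ in range(n_sites):
            intra = rng.gauss(0.0, intra_sd)    # site-specific
            total += math.exp(inter + intra)
        totals.append(total)
    return totals

# Same per-site log-variability, split differently: mostly interevent
# (shared) versus mostly intraevent (independent).
var_inter = statistics.pvariance(simulate_annual_losses(2000, 50, 0.6, 0.2))
var_intra = statistics.pvariance(simulate_annual_losses(2000, 50, 0.2, 0.6))
```

The shared-variability run shows a much larger portfolio variance, mirroring the conclusion above that interevent variability increases the spatial correlation, and hence the variance, of portfolio losses.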
Applying stochastic small-scale damage functions to German winter storms
NASA Astrophysics Data System (ADS)
Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.
2012-03-01
Analyzing insurance-loss data, we derive stochastic storm-damage functions for residential buildings. On the district level we fit power-law relations between daily loss and maximum wind speed, typically spanning more than 4 orders of magnitude. The estimated exponents for 439 German districts roughly range from 8 to 12. In addition, we find correlations among the parameters and socio-demographic data, which we employ in a simplified parametrization of the damage function with just 3 independent parameters for each district. A Monte Carlo method is used to generate loss estimates and confidence bounds of daily and annual storm damages in Germany. Our approach reproduces the annual progression of winter storm losses and enables estimation of daily losses over a wide range of magnitudes.
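A power-law damage function with an exponent in the fitted range makes the sensitivity concrete; the scale factor below is an arbitrary placeholder, not a fitted district parameter:

```python
def storm_damage(wind_speed, scale, exponent):
    """Power-law damage function: daily loss = scale * wind_speed ** exponent.

    With exponents of 8 to 12, as fitted for the German districts, small
    changes in peak wind speed produce large changes in loss.
    """
    return scale * wind_speed ** exponent

# With exponent 10, a 10% increase in wind speed (30 -> 33 m/s) multiplies
# the loss by 1.1 ** 10, i.e. roughly a factor of 2.6.
ratio = storm_damage(33.0, 1e-9, 10) / storm_damage(30.0, 1e-9, 10)
```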
NASA Astrophysics Data System (ADS)
Foulser-Piggott, R.; Saito, K.; Spence, R.
2012-04-01
Loss estimates produced by catastrophe models depend on the quality of the input data, including both the hazard and exposure data. Currently, some of the exposure data input into a catastrophe model are aggregated over an area, so an estimate of the risk in that area may have a low level of accuracy. To obtain a more detailed and accurate loss estimate, higher-resolution exposure data are necessary. However, high-resolution exposure data are not commonly available worldwide, so methods to infer building distribution and characteristics at higher resolution from existing information must be developed. This study is focused on the development of disaggregation methodologies for exposure data which, if implemented in current catastrophe models, would lead to improved loss estimates. The new methodologies developed for disaggregating exposure data make use of GIS, remote sensing and statistical techniques. The main focus of this study is on earthquake risk; however, the methods developed are modular so that they may be applied to different hazards. A number of different methods are proposed in order to be applicable to different regions of the world with different amounts of data available. The new methods give estimates of both the number of buildings in a study area and a distribution of building typologies, as well as a measure of the vulnerability of the building stock to the hazard. For each method, a way to assess and quantify the uncertainties in the methods and results is proposed, with particular focus on developing an index that enables input data quality to be compared. The applicability of the methods is demonstrated through testing for two study areas, one in Japan and the second in Turkey, selected because of the occurrence of recent and damaging earthquake events.
The testing procedure is to use the proposed methods to estimate the number of buildings damaged at different levels following a scenario earthquake event. This enables the results of the models to be compared with real data and the relative performance of the different methodologies to be evaluated. A sensitivity analysis is also conducted for two main reasons. Firstly, to determine the key input variables in the methodology that have the most significant impact on the resulting loss estimate. Secondly, to enable the uncertainty in the different approaches to be quantified and therefore provide a range of uncertainty in the loss estimates.
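The disaggregation idea above, splitting an aggregate exposure total across finer cells using ancillary information, can be sketched minimally. The proportional-weighting scheme and the weights below are illustrative assumptions, not the study's actual methodology:

```python
def disaggregate(total_buildings, cell_weights):
    """Dasymetric-style disaggregation: split an aggregate building count
    across grid cells in proportion to an ancillary weight (e.g. built-up
    area mapped from remote sensing). Weights here are invented."""
    total_weight = sum(cell_weights)
    return [total_buildings * w / total_weight for w in cell_weights]

# 1000 buildings reported for a district, spread over four cells;
# a zero weight (e.g. water or forest) receives no exposure:
print(disaggregate(1000, [0.0, 2.0, 3.0, 5.0]))  # [0.0, 200.0, 300.0, 500.0]
```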
Williams, Marshall L.; Etheridge, Alexandra B.
2013-01-01
The U.S. Geological Survey, in cooperation with the Idaho Department of Water Resources, conducted an investigation of Indian Creek Reservoir, a small impoundment in east Ada County, Idaho, to quantify groundwater seepage into and out of the reservoir. Data from the study will assist the Idaho Department of Water Resources' Comprehensive Aquifer Management Planning effort to estimate available water resources in Ada County. Three independent methods were used to estimate groundwater seepage: (1) the water-budget method; (2) the seepage-meter method; and (3) the segmented-Darcy method. Reservoir seepage was quantified during the periods of April through August 2010 and February through November 2011. With the water-budget method, all measurable sources of inflow to and outflow from the reservoir were quantified, with the exception of groundwater; the water-budget equation was then solved for groundwater inflow to or outflow from the reservoir. The seepage-meter method relies on the placement of seepage meters in the bottom sediments of the reservoir for direct measurement of water flux across the sediment-water interface. The segmented-Darcy method uses a combination of water-level measurements in the reservoir and in adjacent near-shore wells to calculate water-table gradients between the wells and the reservoir within defined segments of the reservoir shoreline; the Darcy equation was then used to calculate groundwater inflow to and outflow from the reservoir. The water-budget method provided continuous, daily estimates of seepage over the full period of data collection, while the seepage-meter and segmented-Darcy methods provided instantaneous estimates of seepage. As a result of these and other differences in methodology, comparisons of seepage estimates provided by the three methods are considered semi-quantitative.
The water-budget-derived estimates indicate that seepage is seasonally variable in both the direction and the magnitude of flow. The reservoir tended to gain water from groundwater seepage in the early spring months (March–May), while seepage losses from the reservoir to groundwater occurred in the drier months (June–October). Net monthly seepage rates, as computed by the water-budget method, varied greatly: reservoir gains from seepage ranged from 0.2 to 59.4 acre-feet per month, while reservoir losses to seepage ranged from 1.6 to 26.8 acre-feet per month. The seepage-meter and segmented-Darcy estimates qualitatively support the seasonal patterns in seepage indicated by the water-budget calculations, except that they tended to be much smaller in magnitude, suggesting that actual seepage may be smaller than the water-budget estimates. Although the results of all three methods indicate that there is some water loss from the reservoir to groundwater, the seepage losses may be due to rewetting of unsaturated near-shore soils, possible replenishment of a perched aquifer, or both, rather than to percolation to the local aquifer that lies 130 feet below the reservoir. A lithologic log from an adjacent well indicates a clay unit that correlates well with the original reservoir's base elevation. If this clay unit extends beneath the reservoir basin, underlying the fine-grained reservoir bed sediments, it should act as an effective barrier to reservoir seepage to the local aquifer, which would explain the low seepage-loss estimates calculated in this study.
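The water-budget method described above amounts to solving the budget equation for its one unmeasured term. A minimal sketch, with invented monthly values in acre-feet and the sign convention that a positive residual means net groundwater inflow:

```python
def groundwater_seepage(precip, inflow, evap, outflow, storage_change):
    """Water-budget residual: storage change minus the measured surface
    terms is attributed to groundwater (positive = seepage into the
    reservoir). All values are per-month volumes in consistent units."""
    return storage_change - (precip + inflow - evap - outflow)

# Illustrative spring month: storage rose by more than the surface terms
# explain, so the residual indicates net groundwater inflow.
print(groundwater_seepage(precip=12.0, inflow=40.0, evap=8.0,
                          outflow=20.0, storage_change=30.0))  # 6.0
```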
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. Exact agreement between analytical and experimental performance is contingent upon the proper selection of a blading-loss parameter.
Topp, Cairistiona F. E.; Moorby, Jon M.; Pásztor, László; Foyer, Christine H.
2018-01-01
Dairy farming is one of the most important sectors of United Kingdom (UK) agriculture. It faces major challenges due to climate change, which will have direct impacts on dairy cows as a result of heat stress. In the absence of adaptations, this could potentially lead to considerable milk loss. Using an 11-member climate projection ensemble, as well as an ensemble of 18 milk loss estimation methods, temporal changes in milk production of UK dairy cows were estimated for the 21st century at a 25 km resolution in a spatially explicit way. While increases in UK temperatures are projected to lead to relatively low average annual milk losses, even for southern UK regions (<180 kg/cow), the ‘hottest’ 25×25 km grid cell in the hottest year in the 2090s showed an annual milk loss exceeding 1300 kg/cow. This figure represents approximately 17% of the potential milk production of today’s average cow. Despite the potentially considerable inter-annual variability of annual milk loss, as well as the large differences between the climate projections, the variety of calculation methods is likely to introduce even greater uncertainty into milk loss estimations. To address this issue, a novel, more biologically appropriate mechanism of estimating milk loss is proposed that provides more realistic future projections. We conclude that South West England is the region most economically vulnerable to climate change, because it is characterised by a high dairy herd density and therefore potentially high heat stress-related milk loss. In the absence of mitigation measures, estimated heat stress-related annual income loss for this region by the end of this century may reach £13.4M in average years and £33.8M in extreme years. PMID:29738581
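Milk loss estimation methods of the kind compared above are commonly built on a temperature-humidity index (THI). The sketch below uses one common THI formulation and a linear yield-decline rule; the threshold and slope are assumed values for illustration, not any of the paper's 18 methods:

```python
def thi(temp_c, rel_humidity):
    """Temperature-humidity index, one common formulation
    (relative humidity as a fraction between 0 and 1)."""
    return 0.8 * temp_c + rel_humidity * (temp_c - 14.4) + 46.4

def daily_milk_loss(temp_c, rel_humidity, threshold=70.0, slope=0.2):
    """Illustrative linear-decline model: kg/day lost per THI unit above
    a comfort threshold. Threshold and slope are assumptions."""
    return max(0.0, slope * (thi(temp_c, rel_humidity) - threshold))

print(daily_milk_loss(30.0, 0.6))   # hot, humid day -> nonzero loss
print(daily_milk_loss(15.0, 0.6))   # mild day -> 0.0
```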
NASA Astrophysics Data System (ADS)
Dewi Ratih, Iis; Sutijo Supri Ulama, Brodjol; Prastuti, Mike
2018-03-01
Value at Risk (VaR) is a statistical method used to measure market risk by estimating the worst loss over a given time period at a given confidence level. The accuracy of this measure is very important in determining the amount of capital a company must hold to cope with possible losses, because greater risk implies greater potential losses at a given probability level. For this reason, VaR calculation is of particular concern to researchers and practitioners of the stock market, with the aim of obtaining more accurate estimates. In this research, risk analysis is performed for four banking sub-sector stocks: Bank Rakyat Indonesia, Bank Mandiri, Bank Central Asia and Bank Negara Indonesia. Stock returns are expected to be influenced by exogenous variables, namely the ICI and the exchange rate. Therefore, stock risk is estimated using the VaR ARMAX-GARCHX method. Calculating the VaR value with the ARMAX-GARCHX approach using a window of 500 observations gives more accurate results. Overall, Bank Central Asia was the only bank whose estimated maximum loss was at the 5% quantile.
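The rolling-window and 5%-quantile conventions above can be illustrated with plain historical simulation. This is not the paper's ARMAX-GARCHX model (and the returns are synthetic); it only shows the 500-observation windowing and the sign convention that VaR is reported as a positive loss:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=2000) * 0.01   # synthetic daily returns

def rolling_var(returns, window=500, alpha=0.05):
    """Historical-simulation VaR over a moving window: the loss not
    exceeded with (1 - alpha) confidence, from the window's empirical
    quantile. A stand-in for the conditional ARMAX-GARCHX estimate."""
    return np.array([-np.quantile(returns[t - window:t], alpha)
                     for t in range(window, len(returns))])

v = rolling_var(returns)
print(v.shape[0])   # 1500 rolling VaR estimates
```

A heavier-tailed return model (here Student-t) widens the 5% quantile, which is exactly what GARCH-family models capture dynamically through time-varying volatility.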
Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie
2012-06-01
Soil loss prediction models such as the universal soil loss equation (USLE) and its revision (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at regional scale. A rational estimate of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly provide the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for estimating the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data, and analyzes the advantages and disadvantages of various methods, with the aim of providing a reference for further research and for quantitative estimation of the vegetation cover and management factor at large scales.
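The USLE itself is a simple product of factors, which makes the role of the cover and management factor (C) discussed above easy to see. The numeric inputs below are invented for illustration:

```python
def usle_soil_loss(R, K, LS, C, P):
    """USLE: A = R * K * LS * C * P, the predicted average annual soil
    loss per unit area. C is the vegetation cover and management factor
    that the review focuses on estimating from remote sensing."""
    return R * K * LS * C * P

# Halving C (e.g. denser cover estimated from imagery) halves A, with all
# other factors (rainfall erosivity, soil erodibility, slope, practice) fixed:
dense  = usle_soil_loss(R=100.0, K=0.3, LS=1.2, C=0.05, P=1.0)
sparse = usle_soil_loss(R=100.0, K=0.3, LS=1.2, C=0.10, P=1.0)
print(sparse / dense)  # 2.0
```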
NASA Astrophysics Data System (ADS)
Permata, Anggi; Juniansah, Anwar; Nurcahyati, Eka; Dimas Afrizal, Mousafi; Adnan Shafry Untoro, Muhammad; Arifatha, Na'ima; Ramadhani Yudha Adiwijaya, Raden; Farda, Nur Mohammad
2016-11-01
Landslides are unpredictable natural disasters that commonly occur in high-slope areas. Small-format aerial photography is an acquisition method that can reach and obtain high-resolution spatial data faster than other methods, and provides products such as orthomosaics and Digital Surface Models (DSM). The study area contained the landslide area in Clapar, Madukara District of Banjarnegara. Aerial photographs of the landslide area provided good object visibility: object characteristics such as shape, size, and texture were clearly seen, so GEOBIA (Geographic Object-Based Image Analysis) was a suitable method for classifying land cover in the study area. Unlike per-pixel analyst (PPA) methods, which use spectral information as the basis for object detection, GEOBIA can use spatial elements as the basis for classification to produce a land cover map with better accuracy. The GEOBIA method used a classification hierarchy to divide post-disaster land cover into three main objects: vegetation, landslide/soil, and buildings. These three classes were required to obtain more detailed information for estimating the loss caused by the landslide and for establishing a land cover map of the landslide area. The loss estimate concerned damage to salak (Salacca zalacca) plantations. The number of salak trees swept away by the landslide was estimated under the assumption that every tree damaged by the landslide had the same age and production class as the undamaged trees around it. The loss was then calculated by approximating the number of damaged trees in the landslide area from data on the surrounding trees obtained with the GEOBIA classification.
Quantifying Standing Dead Tree Volume and Structural Loss with Voxelized Terrestrial Lidar Data
NASA Astrophysics Data System (ADS)
Popescu, S. C.; Putman, E.
2017-12-01
Standing dead trees (SDTs) are an important forest component and impact a variety of ecosystem processes, yet the carbon pool dynamics of SDTs are poorly constrained in terrestrial carbon cycling models. The ability to model wood decay and carbon cycling in relation to detectable changes in tree structure and volume over time would greatly improve such models. The overall objective of this study was to provide automated aboveground volume estimates of SDTs and automated procedures to detect, quantify, and characterize structural losses over time with terrestrial lidar data. The specific objectives of this study were: 1) develop an automated SDT volume estimation algorithm providing accurate volume estimates for trees scanned in dense forests; 2) develop an automated change detection methodology to accurately detect and quantify SDT structural loss between subsequent terrestrial lidar observations; and 3) characterize the structural loss rates of pine and oak SDTs in southeastern Texas. A voxel-based volume estimation algorithm, "TreeVolX", was developed and incorporates several methods designed to robustly process point clouds of varying quality levels. The algorithm operates on horizontal voxel slices by segmenting each slice into distinct branch or stem sections, then applying an adaptive contour interpolation and interior filling process to create solid reconstructed tree models (RTMs). TreeVolX estimated large and small branch volume with an RMSE of 7.3% and 13.8%, respectively. A voxel-based change detection methodology was developed to accurately detect and quantify structural losses, incorporating several methods to mitigate the challenges presented by shifting tree and branch positions as SDT decay progresses. The volume and structural loss of 29 SDTs, composed of Pinus taeda and Quercus stellata, were successfully estimated using multitemporal terrestrial lidar observations over elapsed times ranging from 71 to 753 days.
Pine and oak structural loss rates were characterized by estimating the amount of volumetric loss occurring in 20 equal-interval height bins of each SDT. Results showed that large pine snags exhibited more rapid structural loss in comparison to medium-sized oak snags in this study.
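The voxel-based volume and change-detection ideas above can be reduced to a minimal sketch: bin points into voxels, count occupied voxels, and difference two epochs. The synthetic point cloud and voxel size are assumptions; the real TreeVolX pipeline additionally segments horizontal slices and fills branch interiors to build solid models:

```python
import numpy as np

def voxel_volume(points, voxel_size=0.05):
    """Occupied-voxel count times single-voxel volume: a crude stand-in
    for TreeVolX's solid reconstructed tree models."""
    idx = np.floor(points / voxel_size).astype(int)
    occupied = len(set(map(tuple, idx)))   # unique voxel indices
    return occupied * voxel_size ** 3

rng = np.random.default_rng(3)
scan_t0 = rng.uniform(0.0, 1.0, size=(5000, 3))   # synthetic point cloud (m)
scan_t1 = scan_t0[:4000]                          # simulate lost branches
structural_loss = voxel_volume(scan_t0) - voxel_volume(scan_t1)
print(structural_loss >= 0.0)   # removing points can only shrink the model
```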
NASA Astrophysics Data System (ADS)
Sano, K.; Gomi, T.; Hiraoka, M.; Sato, T.; Onda, Y.
2015-12-01
We examined changes in the seasonal patterns of catchment-scale evapotranspiration (i.e., water loss) using a Short-Term Water Balance Model (STWBM) that we developed. The STWBM estimates water loss as precipitation minus discharge volume over short periods (8 to 80 days). This method is applicable for examining the seasonal characteristics of water loss related to evapotranspiration (ET). We applied the STWBM to investigate the effects of 50% thinning in nested headwater catchments draining Japanese cypress (Chamaecyparis obtusa) and cedar (Cryptomeria japonica) forests. The study area is located 70 km north of Tokyo, with 1250 mm annual precipitation and a 14 °C mean annual temperature. 50% of the stems (46% of timber volume) were removed by strip thinning in a 17 ha treatment catchment, while a 9 ha catchment remained untreated as a control. We installed four nested gauging stations in the treated and control catchments, with drainage areas of 3 to 10 ha. Runoff at each nested gauging station was measured in the pre-thinning (April 2010 to June 2011) and post-thinning (January 2012 to December 2012) periods. The total runoff coefficient in the treated and control catchments was 54% and 26%, respectively. Estimated annual water loss by the STWBM was 585 mm in the treated and 969 mm in the control catchment. Because the annual evapotranspiration of Japanese cypress and cedar ranges from about 400 to 800 mm in this catchment, our estimated water loss is mostly associated with ET and partially with deep bedrock percolation. Estimated water loss in the growing season (May to October) after thinning decreased by 45 to 60% (in 2012) and 51 to 60% (in 2013) across all nested gauging stations, while estimated water loss in the control catchment was consistent. These results suggest that 50% thinning decreased water loss via ET, but that the changes varied among the nested gauging stations.
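The STWBM's core quantity, precipitation minus discharge over a short window, can be sketched in a few lines. All the numbers below are invented for illustration; the model additionally requires choosing windows bracketed by comparable flow states:

```python
def short_term_water_loss(precip_mm, discharge_mm):
    """STWBM idea: over a short window (8-80 days), catchment water loss
    is approximated as total precipitation minus total discharge."""
    return sum(precip_mm) - sum(discharge_mm)

# A thinned catchment discharges more of its rainfall, so its estimated
# loss (mostly ET) drops relative to the untreated control:
rain = [10, 0, 25, 5, 0, 15, 0, 30]
control = short_term_water_loss(rain, [3, 2, 8, 4, 2, 5, 2, 9])
treated = short_term_water_loss(rain, [6, 4, 14, 7, 4, 9, 4, 16])
print(control > treated)  # True
```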
NASA Astrophysics Data System (ADS)
Eichner, J. F.; Steuer, M.; Loew, P.
2016-12-01
Past natural catastrophes offer valuable information for present-day risk assessment. To make use of historic loss data, one has to find a setting that enables comparison (over place and time) of historic events as if they happened under today's conditions. By means of loss data normalization, the influence of socio-economic development, the fundamental driver in this context, can be eliminated; the data then permit the deduction of risk-relevant information and allow the study of other driving factors, such as influences from climate variability and climate change or changes in vulnerability. Munich Re's NatCatSERVICE database includes, for each historic loss event, the geographic coordinates of all locations and regions that were affected in a relevant way. These locations form the basis for what is known as the loss footprint of an event. Here we introduce a robust, state-of-the-art method for global loss data normalization. The presented peril-specific loss footprint normalization method adjusts direct economic loss data for the influence of economic growth within each loss footprint (using gross cell product data as a proxy for local economic growth) and makes loss data comparable over time. To achieve a comparative setting for supra-regional economic differences, we categorize the normalized loss values (together with information on fatalities), based on the World Bank income groups, into five catastrophe classes, from minor to catastrophic. Data treated in this way allow (a) study of the influence of improved reporting of small-scale loss events over time and (b) application of standard (stationary) extreme value statistics (here: the peaks-over-threshold method) to compile estimates of extreme and extrapolated loss magnitudes, such as a "100 year event", on a global scale. Examples of such results will be shown.
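The footprint normalization step above is, at its core, a rescaling of the event-year loss by economic growth inside the footprint. A minimal sketch with invented numbers (the real method works peril-specifically with gridded gross cell product):

```python
def normalize_loss(nominal_loss, gcp_event_year, gcp_reference_year):
    """Footprint normalization: scale the event-year loss by economic
    growth within the footprint (gross cell product as proxy), making
    losses comparable at the reference year's exposure level."""
    return nominal_loss * (gcp_reference_year / gcp_event_year)

# A 100 M loss in 1990, in a footprint whose GCP tripled by the reference year:
print(normalize_loss(100.0, gcp_event_year=50.0, gcp_reference_year=150.0))  # 300.0
```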
Emergency Physician Estimation of Blood Loss
Ashburn, Jeffery C.; Harrison, Tamara; Ham, James J.; Strote, Jared
2012-01-01
Introduction Emergency physicians (EP) frequently estimate blood loss, which can have implications for clinical care. The objectives of this study were to examine EP accuracy in estimating blood loss on different surfaces and compare attending physician and resident performance. Methods A sample of 56 emergency department (ED) physicians (30 attending physicians and 26 residents) were asked to estimate the amount of moulage blood present in 4 scenarios: 500 mL spilled onto an ED cot; 25 mL spilled onto a 10-pack of 4 × 4-inch gauze; 100 mL on a T-shirt; and 150 mL in a commode filled with water. Standard estimate error (the absolute value of (estimated volume − actual volume)/actual volume × 100) was calculated for each estimate. Results The mean standard error for all estimates was 116% with a range of 0% to 1233%. Only 8% of estimates were within 20% of the true value. Estimates were most accurate for the sheet scenario and worst for the commode scenario. Residents and attending physicians did not perform significantly differently (P > 0.05). Conclusion Emergency department physicians do not estimate blood loss well in a variety of scenarios. Such estimates could potentially be misleading if used in clinical decision making. Clinical experience does not appear to improve estimation ability in this limited study. PMID:22942938
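The study's error metric, defined inline above, can be computed directly. The example values are invented:

```python
def standard_estimate_error(estimated_ml, actual_ml):
    """The study's metric: |estimate - actual| / actual * 100 (percent)."""
    return abs(estimated_ml - actual_ml) / actual_ml * 100.0

# Estimating 300 mL for a true 500 mL spill misses by 40%,
# well outside the 20% band only 8% of estimates achieved:
print(standard_estimate_error(300.0, 500.0))  # 40.0
```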
On-Line Loss of Control Detection Using Wavelets
NASA Technical Reports Server (NTRS)
Brenner, Martin J. (Technical Monitor); Thompson, Peter M.; Klyde, David H.; Bachelder, Edward N.; Rosenthal, Theodore J.
2005-01-01
Wavelet transforms are used for on-line detection of aircraft loss of control. Wavelet transforms are compared with Fourier transform methods and shown to detect changes in the vehicle dynamics more rapidly. This faster response is due to a time window that decreases in length as the frequency increases. New wavelets are defined that further decrease the detection time by skewing the shape of the envelope. The wavelets are used for power spectrum and transfer function estimation. Smoothing is used to trade off the variance of the estimate against detection time. Wavelets are also used as a front end to the eigensystem realization algorithm. Stability metrics are estimated from the frequency responses and models, and it is these metrics that are used for loss of control detection. A Matlab toolbox was developed for post-processing simulation and flight data using the wavelet analysis methods. A subset of these methods was implemented in real time and named the Loss of Control Analysis Tool Set, or LOCATS. A manual control experiment was conducted using a hardware-in-the-loop simulator for a large transport aircraft, in which the real-time performance of LOCATS was demonstrated. The next step is to use these wavelet analysis tools for flight test support.
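The frequency-dependent time window described above can be illustrated with a standard complex Morlet wavelet (the paper's skewed-envelope wavelets are not reproduced here). The signal, sample rate, and onset time are invented; the sketch shows a simulated change in the vehicle dynamics appearing in the wavelet power only after its onset:

```python
import numpy as np

def morlet_power(signal, fs, freq, t_center, n_cycles=5.0):
    """Wavelet power at one (time, frequency) point via a complex Morlet
    inner product. The Gaussian envelope width shrinks as 1/freq, which
    is what gives wavelets faster detection than fixed-window Fourier
    methods."""
    sigma = n_cycles / (2 * np.pi * freq)
    t = np.arange(len(signal)) / fs - t_center
    w = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    return abs(np.sum(signal * np.conj(w))) ** 2

fs = 200.0
t = np.arange(0, 10, 1 / fs)
# Simulated dynamics change: a 6 Hz oscillation appears at t = 5 s.
sig = np.where(t < 5, 0.0, np.sin(2 * np.pi * 6.0 * t))
early = morlet_power(sig, fs, freq=6.0, t_center=2.0)
late = morlet_power(sig, fs, freq=6.0, t_center=7.0)
print(late > early)  # the mode registers only after onset
```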
Spatial modeling for estimation of earthquakes economic loss in West Java
NASA Astrophysics Data System (ADS)
Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma
2017-07-01
Indonesia is highly vulnerable to earthquakes, and its low adaptive capacity can turn an earthquake into a disaster of serious concern. Risk management should therefore be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active faults: the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and the Megathrust subduction zone. This research estimates the economic loss from several earthquake sources in West Java. The economic loss is calculated using the HAZUS method, whose components are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier to access by presenting distribution maps rather than only tabular data. As a result, West Java could suffer an economic loss of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703.00 IDR, estimated from six earthquake sources at their maximum possible magnitudes. However, this estimate corresponds to a worst-case earthquake occurrence and is probably an over-estimate.
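HAZUS-style loss estimation combines exposure with damage-state probabilities and repair-cost ratios. The sketch below shows that structure for a single building class; all probabilities, ratios, and the replacement value are invented, not the study's values:

```python
def expected_loss(replacement_value, state_probs, damage_ratios):
    """HAZUS-style expected loss for one building class: replacement
    value times the probability-weighted mean damage ratio."""
    return replacement_value * sum(p * r for p, r in zip(state_probs, damage_ratios))

# Damage states: slight / moderate / extensive / complete, with
# illustrative probabilities at some intensity and repair-cost ratios:
loss = expected_loss(1e9, [0.3, 0.2, 0.1, 0.05], [0.02, 0.10, 0.50, 1.00])
print(loss)   # about 126 million (12.6% mean damage ratio)
```

Summing such terms over grid cells, building classes, and earthquake sources yields the regional totals reported in the abstract.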
Testing the Wisconsin Phosphorus Index with year-round, field-scale runoff monitoring.
Good, Laura W; Vadas, Peter; Panuska, John C; Bonilla, Carlos A; Jokela, William E
2012-01-01
The Wisconsin Phosphorus Index (WPI) is one of several P indices in the United States that use equations to describe actual P loss processes. Although for nutrient management planning the WPI is reported as a dimensionless whole number, it is calculated as average annual dissolved P (DP) and particulate P (PP) mass delivered per unit area. The WPI calculations use soil P concentration, applied manure and fertilizer P, and estimates of average annual erosion and average annual runoff. We compared WPI-estimated P losses to annual P loads measured in surface runoff from 86 field-years on crop fields and pastures. As the erosion and runoff generated by the weather in the monitoring years varied substantially from the average annual estimates used in the WPI, the WPI and measured loads were not well correlated. However, when measured runoff and erosion were used in the WPI field loss calculations, the WPI accurately estimated annual total P loads with a Nash-Sutcliffe model efficiency (NSE) of 0.87. The DP loss estimates were not as close to measured values (NSE = 0.40) as the PP loss estimates (NSE = 0.89). Some errors in estimating DP losses may be unavoidable due to uncertainties in estimating on-farm manure P application rates. The WPI is sensitive to field management that affects its erosion and runoff estimates. Provided that the WPI methods for estimating average annual erosion and runoff accurately reflect the effects of management, the WPI is an accurate field-level assessment tool for managing runoff P losses. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
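The Nash-Sutcliffe model efficiency used above to score the WPI is a standard goodness-of-fit measure and is easy to compute. The observation and model values below are invented:

```python
import numpy as np

def nse(observed, modeled):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of the
    observations. 1.0 is a perfect fit; values <= 0 mean the model is
    no better than predicting the observed mean."""
    observed, modeled = np.asarray(observed), np.asarray(modeled)
    return 1.0 - np.sum((observed - modeled) ** 2) / np.sum(
        (observed - np.mean(observed)) ** 2)

obs = [1.0, 2.0, 4.0, 8.0]
print(nse(obs, obs))                    # 1.0, a perfect fit
print(nse(obs, [2.0, 3.0, 5.0, 7.0]))   # lower, like the DP vs. PP contrast
```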
Roso, V M; Schenkel, F S; Miller, S P; Schaeffer, L R
2005-08-01
Breed additive, dominance, and epistatic loss effects are of concern in the genetic evaluation of a multibreed population. Multiple regression equations used for fitting these effects may show a high degree of multicollinearity among predictor variables. Typically, when strong linear relationships exist, the regression coefficients have large SE and are sensitive to changes in the data file and to the addition or deletion of variables in the model. Generalized ridge regression methods were applied to obtain stable estimates of direct and maternal breed additive, dominance, and epistatic loss effects in the presence of multicollinearity among predictor variables. Preweaning weight gains of beef calves in Ontario, Canada, from 1986 to 1999 were analyzed. The genetic model included fixed direct and maternal breed additive, dominance, and epistatic loss effects, fixed environmental effects of age of the calf, contemporary group, and age of the dam x sex of the calf, random additive direct and maternal genetic effects, and random maternal permanent environment effect. The degree and the nature of the multicollinearity were identified and ridge regression methods were used as an alternative to ordinary least squares (LS). Ridge parameters were obtained using two different objective methods: 1) generalized ridge estimator of Hoerl and Kennard (R1); and 2) bootstrap in combination with cross-validation (R2). Both ridge regression methods outperformed the LS estimator with respect to mean squared error of predictions (MSEP) and variance inflation factors (VIF) computed over 100 bootstrap samples. The MSEP of R1 and R2 were similar, and they were 3% less than the MSEP of LS. The average VIF of LS, R1, and R2 were equal to 26.81, 6.10, and 4.18, respectively. Ridge regression methods were particularly effective in decreasing the multicollinearity involving predictor variables of breed additive effects. 
Because of a high degree of confounding between estimates of maternal dominance and direct epistatic loss effects, it was not possible to compare the relative importance of these effects with a high level of confidence. The inclusion of epistatic loss effects in the additive-dominance model did not cause noticeable reranking of sires, dams, and calves based on across-breed EBV. More precise estimates of breed effects as a result of this study may result in more stable across-breed estimated breeding values over the years.
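The stabilizing effect of ridge regression under the multicollinearity described above can be shown with synthetic data. The single fixed ridge parameter `k` below is an assumption; the study instead chooses it via the Hoerl-Kennard estimator or bootstrap with cross-validation:

```python
import numpy as np

def ridge_fit(X, y, k=0.0):
    """Ridge solution (X'X + kI)^-1 X'y; k = 0 reduces to ordinary
    least squares."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(7)
x1 = rng.normal(size=200)
# Nearly collinear predictors, like overlapping breed-composition covariates:
X = np.column_stack([x1, x1 + rng.normal(scale=0.001, size=200)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=200)

b_ls = ridge_fit(X, y)             # coefficients blow up under collinearity
b_ridge = ridge_fit(X, y, k=10.0)  # shrinkage stabilizes them
print(np.abs(b_ridge).max() < np.abs(b_ls).max())
```

The least-squares coefficients take huge offsetting values along the near-singular direction; the ridge penalty shrinks them toward the stable, interpretable solution, mirroring the reduction in variance inflation factors reported in the abstract.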
NASA Astrophysics Data System (ADS)
Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.
2003-04-01
Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of critical points, as historical hail loss data are usually only available for some events, while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damages differ from footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. It is difficult to estimate the return period of losses calculated by moving historically derived damage footprints over a portfolio. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceedance frequency curve, which can be used to derive the PML.
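The final step above, combining a scenario loss set with event return periods into an exceedance frequency curve, can be sketched with invented numbers (three hailstorm scenarios with annual rates equal to one over their return periods):

```python
def exceedance_curve(losses, event_rates):
    """Combine scenario losses with annual event rates (1 / return
    period) into an exceedance-frequency curve: for each loss level,
    the annual rate of events causing at least that loss."""
    pairs = sorted(zip(losses, event_rates), reverse=True)
    curve, cumulative_rate = [], 0.0
    for loss, rate in pairs:
        cumulative_rate += rate
        curve.append((loss, cumulative_rate))
    return curve

# Illustrative scenarios (loss in M CHF, annual rate):
curve = exceedance_curve([50.0, 120.0, 300.0], [0.10, 0.02, 0.004])
print(curve[0])  # the rarest, largest loss; rate 0.004 = a 250-year event
```

Reading the curve at an annual rate of 1/250 gives the PML as defined in the abstract.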
Labour productivity losses caused by premature death associated with hepatitis C in Spain
Oliva-Moreno, Juan; Peña-Longobardo, Luz M.; Alonso, Sonia; Fernández-Bolaños, Antonio; Gutiérrez, María Luisa; Hidalgo-Vega, Álvaro; de la Fuente, Elsa
2015-01-01
Background and aims Hepatitis C virus (HCV) infection places a huge burden on healthcare systems. There is no study assessing the impact of HCV infection on premature deaths in Spain. The aim of this study was to estimate productivity losses because of premature deaths attributable to hepatitis C occurring in Spain during 2007–2011. Materials and methods We use data from several sources (Registry of Deaths, Labour Force Survey and Wage Structure Survey) to develop a simulation model based on the human capital approach and to estimate the flows in labour productivity losses in the period considered. The attributable fraction method was used to estimate the numbers of deaths associated with HCV infection. Two sensitivity analyses were developed to test the robustness of the results. Results Our model shows total productivity losses attributable to HCV infection of 1054.7 million euros over the period analysed. The trend in productivity losses is decreasing over the period. This result is because of improvements in health outcomes, reflected in the reduction of the number of years of potential productive life lost. Of the total estimated losses, 18.6% were because of hepatitis C, 24.6% because of hepatocellular carcinoma, 30.1% because of cirrhosis, 15.9% because of other liver diseases and 10.7% because of HIV–HCV coinfection. Conclusion The results show that premature mortality attributable to hepatitis C involves significant productivity losses. This highlights the need to extend the analysis to consider other social costs and obtain a more complete picture of the actual economic impact of hepatitis C infection. PMID:25853930
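The human capital approach used above values a premature death as the present value of the earnings lost between death and retirement. A minimal sketch; the wage, employment rate, and discount rate below are invented (the study draws these from Spanish registry and survey data):

```python
def premature_death_productivity_loss(age_at_death, retirement_age,
                                      annual_earnings, employment_rate,
                                      discount_rate=0.03):
    """Human-capital approach: discounted sum of expected labour earnings
    over the working years lost to a premature death."""
    loss = 0.0
    for year in range(retirement_age - age_at_death):
        loss += annual_earnings * employment_rate / (1 + discount_rate) ** year
    return loss

# Illustrative case: death at 55, retirement at 65, 25,000 EUR/year
# earnings, 70% employment probability, 3% discount rate:
print(round(premature_death_productivity_loss(55, 65, 25000.0, 0.7)))
```

Summing such present values over all deaths attributable to HCV (via the attributable fraction method) yields the flow of productivity losses reported in the abstract.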
New Method for Estimating Landslide Losses for Major Winter Storms in California.
NASA Astrophysics Data System (ADS)
Wills, C. J.; Perez, F. G.; Branum, D.
2014-12-01
We have developed a prototype system for estimating the economic costs of landslides due to winter storms in California. This system uses some of the basic concepts and estimates of the value of structures from the HAZUS program developed for FEMA. Using the only relatively complete landslide loss data set that we could obtain, data gathered by the City of Los Angeles in 1978, we have developed relations between landslide susceptibility and loss ratio for private property (represented as the value of wood frame structures from HAZUS). The landslide loss ratios estimated from the Los Angeles data are calibrated using more generalized data from the 1982 storms in the San Francisco Bay area to develop relationships that can be used to estimate loss for any value of 2-day or 30-day rainfall averaged over a county. The current estimates for major storms are long projections from very small data sets, subject to very large uncertainties, and so provide only a rough estimate of the landslide damage to structures and infrastructure on hill slopes. More importantly, the system can be extended and improved with additional data and used to project landslide losses in future major winter storms. The key features of this system—the landslide susceptibility map, the relationship between susceptibility and loss ratio, and the calibration of estimates against losses in past storms—can all be improved with additional data. Most importantly, this study highlights the importance of comprehensive studies of landslide damage. Detailed surveys of landslide damage following future storms that include locations and amounts of damage for all landslides within an area are critical for building a well-calibrated system to project future landslide losses. Without an investment in post-storm landslide damage surveys, it will not be possible to improve estimates of the magnitude or distribution of landslide damage, which can range up to billions of dollars.
The economic consequences of neurosurgical disease in low- and middle-income countries.
Rudolfson, Niclas; Dewan, Michael C; Park, Kee B; Shrime, Mark G; Meara, John G; Alkire, Blake C
2018-05-18
OBJECTIVE The objective of this study was to estimate the economic consequences of neurosurgical disease in low- and middle-income countries (LMICs). METHODS The authors estimated gross domestic product (GDP) losses and the broader welfare losses attributable to 5 neurosurgical disease categories in LMICs using two distinct economic models. The value of lost output (VLO) model projects annual GDP losses due to neurosurgical disease during 2015-2030, and is based on the WHO's "Projecting the Economic Cost of Ill-health" tool. The value of lost economic welfare (VLW) model, based on the value of a statistical life, estimates total welfare losses resulting from neurosurgical disease in 2015 alone, including nonmarket losses such as the inherent value placed on good health. RESULTS The VLO model estimates the selected neurosurgical diseases will result in $4.4 trillion (2013 US dollars, purchasing power parity) in GDP losses during 2015-2030 in the 90 included LMICs. Economic losses are projected to disproportionately affect low- and lower-middle-income countries, risking up to a 0.6% and 0.54% loss of GDP, respectively, in 2030. The VLW model evaluated 127 LMICs, and estimates that these countries experienced $3 trillion (2013 US dollars, purchasing power parity) in economic welfare losses in 2015. Regardless of the model used, the majority of the losses can be attributed to stroke and traumatic brain injury. CONCLUSIONS The economic impact of neurosurgical diseases in LMICs is significant. The magnitude of economic losses due to neurosurgical diseases in LMICs provides further motivation beyond already compelling humanitarian reasons for action.
Ortega-Ortega, Marta; Oliva-Moreno, Juan; Jiménez-Aguilera, Juan de Dios; Romero-Aguilar, Antonio; Espigado-Tocino, Ildefonso
2015-01-01
Stem cell transplantation has been used for many years to treat haematological malignancies that could not be cured by other treatments. Despite this medical breakthrough, mortality rates remain high. Our purpose was to evaluate labour productivity losses associated with premature mortality due to blood cancer in recipients of stem cell transplantations. We collected primary data from the clinical histories of blood cancer patients who had undergone stem cell transplantation between 2006 and 2011 in two Spanish hospitals. We carried out a descriptive analysis and calculated the years of potential life lost and years of potential productive life lost. Labour productivity losses due to premature mortality were estimated using the Human Capital method. An alternative approach, the Friction Cost method, was used as part of the sensitivity analysis. Our findings suggest that, in a population of 179 transplanted and deceased patients, males and people who die between the ages of 30 and 49 years generate higher labour productivity losses. The estimated loss amounts to over €31.4 million using the Human Capital method (€480,152 using the Friction Cost method), which means an average of €185,855 per death. The highest labour productivity losses are produced by leukaemia. However, lymphoma generates the highest loss per death. Further efforts are needed to reduce premature mortality in blood cancer patients undergoing transplantations and reduce economic losses. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, D.C.; Forsyth, E.M.; Cohn, S.H.
An established method for determining radioiron absorption by whole body counting was used to study six parous women with hypochromic anemia and menorrhagia, and a seventh nulliparous woman with normal blood values and normal menses. In addition to demonstrating iron deficiency by increased radioiron absorption, the method was found useful in estimating the quantity of blood lost with each menstrual period. As much as 550 ml of menstrual loss was noted in two of the patients studied. Estimates in the patient with normal menses were 59 and 33 ml. Two additional patients demonstrated patterns of blood loss found in continuous gastrointestinal hemorrhage due to hereditary hemorrhagic telangiectasia, and in severe epistaxis, as further applications of the technique. Where available, the method is to be recommended for routine investigation of hypochromic anemia when episodic or continuous blood loss such as that of menorrhagia is suspected.
Blandford, John M; Gift, Thomas L
2006-10-01
The productivity losses attributable to disease-related morbidity and mortality impose a burden on society in general and on employers in particular. A reliable assessment of the productivity losses associated with untreated infection with Chlamydia trachomatis (Ct) would complement earlier work on direct medical costs and contribute to an estimate of the full cost of chlamydial disease. The goal of this study was to estimate the discounted lifetime productivity losses attributable to untreated chlamydial infection in reproductive-aged women. We developed a cost model using Monte Carlo methods to estimate the lifetime discounted productivity losses attributable to untreated lower genital tract Ct infection among reproductive-aged women. The model considered the impact of disability resulting from acute pelvic inflammatory disease (PID) associated with untreated Ct infection and from the sequelae of acute PID, including chronic pelvic pain, ectopic pregnancy, and infertility. To accommodate disparate Ct infection rates and labor market characteristics across age groups, we matched age-based risk factors for Ct infection with labor market patterns. Data sources included the 2001 National Chlamydia Surveillance Data, the 2001 Current Population Survey, and published literature. Estimates indicate that the mean weighted productivity losses per untreated Ct infection were approximately US dollars 130 (in year 2001 dollars). Mean weighted productivity losses per case of acute PID were estimated at US dollars 649. Estimated productivity losses were highly correlated with age, reflecting age-dependent differences in labor market characteristics. The productivity losses attributable to untreated infection with Ct and to sequelae of this infection form a substantial portion of the total economic burden of disease. Effective programs to prevent chlamydial infection and effective screening, diagnosis, and treatment of Ct-infected women may reduce productivity losses and substantially lessen the economic burden of disease to employers.
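The Monte Carlo structure of such a cost model can be sketched as below. All probabilities and per-event losses here are invented placeholders, not the study's calibrated parameters:

```python
import random

def mean_ct_productivity_loss(n_trials=100_000, p_pid=0.2, p_sequelae=0.25,
                              pid_loss=649.0, sequelae_loss=1200.0, seed=1):
    """Monte Carlo sketch: mean productivity loss per untreated Ct
    infection, from acute PID and, conditionally, its chronic sequelae.
    All probabilities and per-event losses are invented placeholders."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < p_pid:            # acute PID develops
            total += pid_loss
            if rng.random() < p_sequelae:   # chronic sequelae follow PID
                total += sequelae_loss
    return total / n_trials
```

The simulated mean converges to p_pid * pid_loss + p_pid * p_sequelae * sequelae_loss; a full model would additionally discount each loss to present value and weight by age-specific infection and labor-market rates.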
DRAINWAT--Based Methods For Estimating Nitrogen Transport in Poorly Drained Watersheds
Devendra M. Amatya; George M. Chescheir; Glenn P. Fernandez; R. Wayne Skaggs; J.W. Gilliam
2004-01-01
Methods are needed to quantify effects of land use and management practices on nutrient and sediment loads at the watershed scale. Two methods were used to apply a DRAINMOD-based watershed-scale model (DRAINWAT) to estimate total nitrogen (N) transport from a poorly drained, forested watershed. In both methods, in-stream retention or losses of N were calculated with a...
Trends in Worker Hearing Loss by Industry Sector, 1981–2010
Masterson, Elizabeth A.; Deddens, James A.; Themann, Christa L.; Bertke, Stephen; Calvert, Geoffrey M.
2015-01-01
Background The purpose of this study was to estimate the incidence and prevalence of hearing loss for noise-exposed U.S. workers by industry sector and 5-year time period, covering 30 years. Methods Audiograms for 1.8 million workers from 1981–2010 were examined. Incidence and prevalence were estimated by industry sector and time period. The adjusted risk of incident hearing loss within each time period and industry sector as compared with a reference time period was also estimated. Results The adjusted risk for incident hearing loss decreased over time when all industry sectors were combined. However, the risk remained high for workers in Healthcare and Social Assistance, and the prevalence was consistently high for Mining and Construction workers. Conclusions While progress has been made in reducing the risk of incident hearing loss within most industry sectors, additional efforts are needed within Mining, Construction and Healthcare and Social Assistance. PMID:25690583
Fratila, Radu; Benabou, Abdelkader; Tounzi, Abdelmounaïm; Mipo, Jean-Claude
2014-05-14
NdFeB permanent magnets (PMs) are widely used in high performance electrical machines, but their relatively high conductivity subjects them to eddy current losses that can lead to magnetization loss. The Finite Element (FE) method is generally used to quantify the eddy current loss of PMs, but it remains quite difficult to validate the accuracy of the results with complex devices. In this paper, an experimental test device is used in order to extract the eddy current losses that are then compared with those of a 3D FE model.
NASA Astrophysics Data System (ADS)
Fulani, Olatunji T.
Development of electric drive systems for transportation and industrial applications is rapidly seeing the use of wide-bandgap (WBG) based power semiconductor devices. These devices, such as SiC MOSFETs, enable high switching frequencies and are becoming the preferred choice in inverters because of their lower switching losses and higher allowable operating temperatures. Due to the much shorter turn-on and turn-off times and correspondingly larger output voltage edge rates, traditional models and methods previously used to estimate inverter and motor power losses, based upon a triangular power loss waveform, are no longer justifiable from a physical perspective. In this thesis, more appropriate models and a power loss calculation approach are described with the goal of more accurately estimating the power losses in WBG-based electric drive systems. Sine-triangle modulation with third harmonic injection is used to control the switching of the inverter. The motor and inverter models are implemented using Simulink and computer studies are shown illustrating the application of the new approach.
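The "triangular power loss waveform" estimate that the thesis argues is no longer adequate for WBG devices can be written in one line; the device parameters in the example are hypothetical:

```python
def switching_loss_traditional(v_dc, i_load, t_on, t_off, f_sw):
    """Traditional triangular-waveform estimate of average switching loss:
    energy per switching event approximated as 0.5 * V * I * t for the
    turn-on and turn-off transitions, multiplied by switching frequency.
    This is the baseline model, not the thesis's improved approach."""
    return 0.5 * v_dc * i_load * (t_on + t_off) * f_sw
```

For a hypothetical 400 V, 10 A leg switching in 50 ns each way at 20 kHz, this gives 4 W per device, illustrating why shorter WBG transition times shrink the predicted loss and strain the triangular approximation.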
Earthquake Loss Estimates in Near Real-Time
NASA Astrophysics Data System (ADS)
Wyss, Max; Wang, Rongjiang; Zschau, Jochen; Xia, Ye
2006-10-01
Near-real-time loss estimates after major earthquakes are rapidly becoming more useful to rescue teams. The difference in the quality of data available in highly developed compared with developing countries dictates that different approaches be used to maximize mitigation efforts. In developed countries, extensive information from tax and insurance records, together with accurate census figures, furnish detailed data on the fragility of buildings and on the number of people at risk. For example, these data are exploited by the method to estimate losses used in the Hazards U.S. Multi-Hazard (HAZUS-MH) software program (http://www.fema.gov/plan/prevent/hazus/). However, in developing countries, the population at risk is estimated from inferior data sources and the fragility of the building stock often is derived empirically, using past disastrous earthquakes for calibration [Wyss, 2004].
Kamara, Thaim B; Lavy, Christopher B D; Leather, Andy J M; Bolkan, Håkon A
2018-01-01
Objectives The Lancet Commission on Global Surgery estimated that low/middle-income countries will lose an estimated cumulative US$12.3 trillion in gross domestic product (GDP) due to the unmet burden of surgical disease. However, no country-specific data currently exist. We aimed to estimate the costs to the Sierra Leone economy from death and disability that could have been averted by surgical care. Design We used estimates of total, met and unmet need from two main sources—a cluster randomised, cross-sectional, countrywide survey and a retrospective, nationwide study on surgery in Sierra Leone. We calculated estimated disability-adjusted life years from morbidity and mortality for the estimated unmet burden and modelled the likely economic impact using three different methods—gross national income per capita, lifetime earnings foregone and value of a statistical life. Results In 2012, estimated, discounted lifetime losses to the Sierra Leone economy from the unmet burden of surgical disease were between US$1.1 and US$3.8 billion, depending on the economic method used. These lifetime losses equate to between 23% and 100% of the annual GDP for Sierra Leone. 80% of economic losses were due to mortality. The incremental losses averted by scale up of surgical provision to the Lancet Commission target of 80% were calculated to be between US$360 million and US$2.9 billion. Conclusion There is a large economic loss from the unmet need for surgical care in Sierra Leone. There is an immediate need for massive investment to counteract ongoing economic losses. PMID:29540407
Dalle Carbonare, S; Folli, F; Patrini, E; Giudici, P; Bellazzi, R
2013-01-01
The increasing demand of health care services and the complexity of health care delivery require Health Care Organizations (HCOs) to approach clinical risk management through proper methods and tools. An important aspect of risk management is to exploit the analysis of medical injuries compensation claims in order to reduce adverse events and, at the same time, to optimize the costs of health insurance policies. This work provides a probabilistic method to estimate the risk level of an HCO by computing quantitative risk indexes from medical injury compensation claims. Our method is based on the estimate of a loss probability distribution from compensation claims data through parametric and non-parametric modeling and Monte Carlo simulations. The loss distribution can be estimated both on the whole dataset and, thanks to the application of a Bayesian hierarchical model, on stratified data. The approach allows the risk structure of the HCO to be assessed quantitatively by analyzing the loss distribution and deriving its expected value and percentiles. We applied the proposed method to 206 cases of injuries with compensation requests collected from 1999 to the first semester of 2007 by the HCO of Lodi, in northern Italy. We computed the risk indexes taking into account the different clinical departments and the different hospitals involved. The approach proved to be useful to understand the HCO risk structure in terms of frequency, severity, expected and unexpected loss related to adverse events.
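The core idea, building an annual-loss distribution from claims data by simulation and reading off its expected value and a high percentile, can be sketched non-parametrically. Frequencies are drawn as Poisson and severities resampled from the observed claims; this is an illustration, not the authors' Bayesian hierarchical model:

```python
import math
import random
import statistics

def annual_loss_profile(claims, years, n_sims=20000, seed=7):
    """Monte Carlo sketch: draw a Poisson number of claims per simulated
    year, resample severities with replacement, and report the expected
    annual loss and its 95th percentile (an 'unexpected loss' proxy)."""
    rng = random.Random(seed)
    rate = len(claims) / years              # mean claims per year
    L = math.exp(-rate)
    sims = []
    for _ in range(n_sims):
        k, p = 0, 1.0                       # Knuth's Poisson sampler
        while p > L:
            k += 1
            p *= rng.random()
        n_claims = k - 1
        sims.append(sum(rng.choice(claims) for _ in range(n_claims)))
    sims.sort()
    return statistics.mean(sims), sims[int(0.95 * n_sims)]
```

Stratifying the claims by department or hospital before simulation gives per-unit risk indexes analogous to those the paper computes.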
Household income and earnings losses among 6,396 persons with rheumatoid arthritis.
Wolfe, Frederick; Michaud, Kaleb; Choi, Hyon K; Williams, Rhys
2005-10-01
Rheumatoid arthritis (RA) causes disability and reduced productivity. There are no large quantitative studies of earnings and productivity losses in patients with clinical RA, and no studies of household income losses. We describe methods for obtaining earnings and household income losses that are applicable to working as well as nonworking RA patients, and we perform such studies using these methods. We estimated cross-sectional expected annual earnings and household income losses in 6,649 persons with RA from Current Populations Survey (CPS) and O*NET (Occupational Information Network) data, and we estimated expected household income and earnings losses based on demographic characteristics after adjustment to Medical Outcomes Study Short-Form 36 (SF-36) population norms (internal method). Workplace productivity was measured by the Work Limitations Questionnaire (WLQ). 27.9% of patients aged < or = 65 years considered themselves disabled after 14.6 years of RA, and 8.8% received disability benefits. Annual earnings losses ranged between USD 2,319 and USD 3,407 by the CPS and internal method (preferred), with losses of 9.3% and 10.9%. A 0.25 difference in Health Assessment Questionnaire (HAQ) score was associated with a $1,095 difference in annual earnings. Productivity losses were 6% based on work limitations identified by the WLQ. Household income loss (percentage loss) including transfer payments was USD 6,287 (11.8%) for all patients, USD 4,247 (6.9%) for employed patients, and USD 7,374 (14.8%) for nonworking patients. Among nonworking nondisabled patients aged < or = 65 years, income loss was 14.1%. As measured by annual household income loss, the overall impact of RA is USD 6,287 (11.8%). Earnings and household income are dependent on functional status, education, age, ethnicity, and marital status. Income loss is predicted by the HAQ, HAQ-II, Modified HAQ, and SF-36.
NASA Astrophysics Data System (ADS)
alhilman, Judi
2017-12-01
In the production line of a printing office, the reliability of the printing machine plays a very important role: if the machine fails, it can disrupt the production target, causing the company a large financial loss. One method to calculate the financial loss caused by machine failure is the Cost of Unreliability (COUR) method, which works from machine downtime data and the costs associated with unreliability. Based on the COUR calculation, the total cost due to unreliability of the printing machine during active repair time and downtime is 1,003,747.00.
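A minimal sketch of a COUR-style calculation, assuming per-hour cost rates for active repair and for downtime (lost production); the event durations and rates below are hypothetical:

```python
def cost_of_unreliability(failures, repair_cost_per_hour, downtime_cost_per_hour):
    """COUR sketch: total cost over failure events, each contributing an
    active-repair cost and a downtime (lost-production) cost.
    failures: list of (active_repair_hours, downtime_hours) tuples."""
    return sum(r * repair_cost_per_hour + d * downtime_cost_per_hour
               for r, d in failures)
```

Two failures of (2 h repair, 5 h down) and (1 h repair, 3 h down) at rates of 100 and 50 per hour cost 700 in total.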
Assessment of risk due to the use of carbon fiber composites in commercial and general aviation
NASA Technical Reports Server (NTRS)
Fiksel, J.; Rosenfield, D.; Kalelkar, A.
1980-01-01
The development of a national risk profile for the total annual aircraft losses due to carbon fiber composite (CFC) usage through 1993 is discussed. The profile was developed using separate simulation methods for commercial and general aviation aircraft. A Monte Carlo method which was used to assess the risk in commercial aircraft is described. The method projects the potential usage of CFC through 1993, investigates the incidence of commercial aircraft fires, models the potential release and dispersion of carbon fibers from a fire, and estimates potential economic losses due to CFC damaging electronic equipment. The simulation model for the general aviation aircraft is described. The model emphasizes variations in facility locations and release conditions, estimates distribution of CFC released in general aviation aircraft accidents, and tabulates the failure probabilities and aggregate economic losses in the accidents.
Jugnia, Louis-B; Sime-Ngando, Télesphore; Gilbert, Daniel
2006-10-01
The growth rate and losses of bacterioplankton in the epilimnion of an oligo-mesotrophic reservoir were simultaneously estimated using three different methods for each process. Bacterial production was determined by means of the tritiated thymidine incorporation method, the dialysis bag method and the dilution method, while bacterial mortality was assessed with the dilution method, the disappearance of thymidine-labeled natural cells and ingestion of fluorescent bacterial tracers by heterotrophic flagellates. The different methods used to estimate bacterial growth rates yielded similar results. On the other hand, the mortality rates obtained with the dilution method were significantly lower than those obtained with the use of thymidine-labeled natural cells. The bacterial ingestion rate by flagellates accounted on average for 39% of total bacterial mortality estimated by the dilution method, but this value fell to 5% when the total mortality was measured by the thymidine-labeling method. All this points to the critical importance of methodological aspects in the elaboration of quantitative models of matter and energy flows over time through microbial trophic networks in aquatic systems, and highlights the role of bacterioplankton as a source of carbon for higher trophic levels in the studied system.
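The dilution method is conventionally evaluated by regressing apparent growth rate against the fraction of unfiltered water (the Landry-Hassett design): the intercept estimates intrinsic growth and the negative slope estimates grazing mortality. A minimal sketch, with invented data in the example:

```python
def dilution_method(fractions, apparent_growth):
    """Landry-Hassett dilution method sketch: ordinary least squares of
    apparent growth rate (per day) on the fraction of unfiltered water.
    Intercept = intrinsic growth rate mu; negative slope = grazing g."""
    n = len(fractions)
    mx = sum(fractions) / n
    my = sum(apparent_growth) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(fractions, apparent_growth))
    sxx = sum((x - mx) ** 2 for x in fractions)
    slope = sxy / sxx
    mu = my - slope * mx          # intercept: growth rate (per day)
    g = -slope                    # grazing mortality (per day)
    return mu, g
```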
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
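For the Gaussian case, the l1-regularized score matching objective is quadratic in the precision matrix K and can be minimized by proximal gradient descent. The toy solver below illustrates that structure and is not the authors' implementation; the step size, penalty level and iteration count are arbitrary choices:

```python
import numpy as np

def score_matching_precision(X, lam=0.05, step=0.05, iters=2000):
    """Toy l1-regularized score matching for a Gaussian graphical model.
    Minimizes J(K) = -tr(K) + 0.5*tr(K S K) + lam*||offdiag(K)||_1 over
    symmetric K by proximal gradient (ISTA), with S the sample covariance.
    Without the penalty the minimizer is the precision matrix S^{-1}."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    K = np.eye(p)
    for _ in range(iters):
        grad = -np.eye(p) + 0.5 * (S @ K + K @ S)   # gradient of smooth part
        K = K - step * grad
        # prox step: soft-threshold the off-diagonal entries only
        T = np.sign(K) * np.maximum(np.abs(K) - step * lam, 0.0)
        np.fill_diagonal(T, np.diag(K))
        K = (T + T.T) / 2.0
    return K
```

Zeros in the estimated K correspond to absent edges in the conditional independence graph, which is how the sparse solution path yields graph estimates.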
2006-06-01
Soil Loss Equation (USLE) and the Revised Universal Soil Loss Equation (RUSLE) continue to be widely accepted methods for estimating sediment loss... range areas. Therefore, a generalized design methodology using the Universal Soil Loss Equation (USLE) is presented to accommodate the variations... constructed use the slope most suitable to the area topography (3:1 or 4:1). Step 4: Using the Universal Soil Loss Equation, USLE, find the values of A
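The USLE itself is a product of five factors, A = R * K * LS * C * P. A one-line implementation for the "find the values of A" step; the factor values in the example are hypothetical:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.
    A: average annual soil loss; R: rainfall erosivity; K: soil
    erodibility; LS: slope length-steepness factor; C: cover-management
    factor; P: support-practice factor."""
    return R * K * LS * C * P
```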
Social impacts of the work loss in cancer survivors.
Yamauchi, Hideko; Nakagawa, Chizuko; Fukuda, Takashi
2017-09-01
As cancer frequently occurs during the most productive years of life, our purpose was to estimate the cost of work loss of cancer survivors and develop interventions to minimize the loss. We estimated the cost of the work loss from all cancers resulting from patients' inpatient, outpatient, and non-treatment days. This was calculated with a new method, the product of the "employment rate coefficient × productivity coefficient," making use of data published by the Japanese Ministries. The estimate of work loss on treatment days for all cancers was $1820.21 million in men and $939.38 million in women. In terms of disease classification, lung cancer was the largest cause in men, whereas breast cancer was the largest in women. On non-treatment days, the work losses because of gastric, colon, and lung cancers were large in men, while breast cancer was the largest in women and in total. The estimated loss for all cancers was $3685.506 million in men and $2502.565 million in women, when the product was assumed 0.5. In Japan, breast cancer was considered the leading cause for cost of work loss, and the most influential cause when the product of the "employment rate coefficient × productivity coefficient" for breast cancer was assumed the same as the product for all other types of cancers. It is necessary to establish support systems for working cancer survivors.
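The costing rule described, lost days valued at daily earnings and scaled by the product "employment rate coefficient × productivity coefficient", can be sketched as follows; all inputs in the example are hypothetical, not figures from the Japanese Ministries' data:

```python
def work_loss_cost(patients, lost_days_per_patient, daily_earnings,
                   employment_rate_coeff, productivity_coeff):
    """Sketch of the costing rule: affected patients x lost days x daily
    earnings, scaled by the product of the employment rate coefficient
    and the productivity coefficient. All inputs are hypothetical."""
    return (patients * lost_days_per_patient * daily_earnings
            * employment_rate_coeff * productivity_coeff)
```

Summing this over treatment and non-treatment days, and over cancer sites, reproduces the structure of the totals reported above; the paper's sensitivity case sets the combined product to 0.5.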
Gregory M. Filip
1989-01-01
In 1979, an equation was developed to estimate the percentage of current and future timber volume loss due to stem decay caused by Heterobasidion annosum and other fungi in advance regeneration stands of grand and white fir in eastern Oregon and Washington. Methods for using and testing the equation are presented. Extensive testing in 1988 showed the...
Real Time Intraoperative Monitoring of Blood Loss with a Novel Tablet Application
Sharareh, Behnam; Woolwine, Spencer; Satish, Siddarth; Abraham, Peter; Schwarzkopf, Ran
2015-01-01
Introduction : Real-time monitoring of blood loss is critical in fluid management. Visual estimation remains the standard of care in estimating blood loss, yet is demonstrably inaccurate. Photometric analysis, which is the referenced “gold-standard” for measuring blood loss, is both time-consuming and costly. The purpose of this study was to evaluate the efficacy of a novel tablet-monitoring device for measurement of Hb loss during orthopaedic procedures. Methods : This is a prospective study of 50 patients in a consecutive series of joint arthroplasty cases. The novel System with Feature Extraction Technology was used to measure the amount of Hb contained within surgical sponges intra-operatively. The system’s measures were then compared with those obtained via the gravimetric method and photometric analysis. Accuracy was evaluated using linear regression and Bland-Altman analysis. Results : Our results showed a significant positive correlation between the Triton tablet system and photometric analysis with respect to intra-operative hemoglobin and blood loss at 0.92 and 0.91, respectively. Discussion : This novel system can accurately determine Hb loss contained within surgical sponges. We believe that this user-friendly software can be used for measurement of total intraoperative blood loss and thus aid in more accurate fluid management protocols during orthopaedic surgical procedures. PMID:26401167
Service Lifetime Estimation of EPDM Rubber Based on Accelerated Aging Tests
NASA Astrophysics Data System (ADS)
Liu, Jie; Li, Xiangbo; Xu, Likun; He, Tao
2017-04-01
Service lifetime of ethylene propylene diene monomer (EPDM) rubber at room temperature (25 °C) was estimated based on accelerated aging tests. The study followed sealing stress loss on compressed cylinder samples by compression stress relaxation methods. The results showed that the cylinder samples of EPDM can quickly reach the physical relaxation equilibrium by using the over-compression method. Non-Arrhenius behavior occurred at the lowest aging temperature. A significant linear relationship was observed between compression set values and normalized stress decay results, and the relationship was not related to the ambient temperature of aging. It was estimated that, under practical application conditions, sealing stress loss would occur after around 86.8 years at 25 °C. The estimates at 25 °C based on the non-Arrhenius behavior were in agreement with compression set data from storage aging tests in the natural environment.
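The classical Arrhenius time-temperature extrapolation underlying such lifetime estimates can be sketched as below. The activation energy is an assumed value, and since the paper itself reports non-Arrhenius behaviour at the lowest aging temperature, this is only the baseline model:

```python
import math

def arrhenius_equivalent_time(t_accel_hours, T_accel_C, T_service_C, Ea_eV=0.9):
    """Classical Arrhenius time-temperature shift: the time at the service
    temperature needed to reach the same degradation level observed after
    t_accel_hours at the accelerated-aging temperature. Ea_eV is an
    assumed activation energy, not a value from the paper."""
    k_B = 8.617e-5                       # Boltzmann constant, eV/K
    T1 = T_accel_C + 273.15
    T2 = T_service_C + 273.15
    shift = math.exp((Ea_eV / k_B) * (1.0 / T2 - 1.0 / T1))
    return t_accel_hours * shift
```

Extrapolating downward in temperature multiplies the time scale, which is how week-scale oven tests translate into decade-scale service lifetimes.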
Potential lost productivity resulting from the global burden of uncorrected refractive error
Frick, KD; Holden, BA; Fricke, TR; Naidoo, KS
2009-01-01
Abstract Objective To estimate the potential global economic productivity loss associated with the existing burden of visual impairment from uncorrected refractive error (URE). Methods Conservative assumptions and national population, epidemiological and economic data were used to estimate the purchasing power parity-adjusted gross domestic product (PPP-adjusted GDP) loss for all individuals with impaired vision and blindness, and for individuals with normal sight who provide them with informal care. Findings An estimated 158.1 million cases of visual impairment resulted from uncorrected or undercorrected refractive error in 2007; of these, 8.7 million were blind. We estimated the global economic productivity loss in international dollars (I$) associated with this burden at I$ 427.7 billion before, and I$ 268.8 billion after, adjustment for country-specific labour force participation and employment rates. With the same adjustment, but assuming no economic productivity for individuals aged ≥ 50 years, we estimated the potential productivity loss at I$ 121.4 billion. Conclusion Even under the most conservative assumptions, the total estimated productivity loss, in I$, associated with visual impairment from URE is approximately a thousand times greater than the global number of cases. The cost of scaling up existing refractive services to meet this burden is unknown, but if each affected individual were to be provided with appropriate eyeglasses for less than I$ 1000, a net economic gain may be attainable. PMID:19565121
Eckert, Kristen A; Carter, Marissa J; Lansingh, Van C; Wilson, David A; Furtado, João M; Frick, Kevin D; Resnikoff, Serge
2015-01-01
To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) using simple models (analogous to how a rapid assessment model relates to a comprehensive model) based on minimum wage (MW) and gross national income (GNI) per capita (US$, 2011). Cost of blindness (COB) was calculated for the age group ≥50 years in nine sample countries by assuming the loss of current MW and loss of GNI per capita. It was assumed that all individuals work until 65 years old and that half of visual impairment prevalent in the ≥50 years age group is prevalent in the 50-64 years age group. For cost of MSVI (COMSVI), individual wage and GNI loss of 30% was assumed. Results were compared with the values of the uncorrected refractive error (URE) model of productivity loss. COB (MW method) ranged from $0.1 billion in Honduras to $2.5 billion in the United States, and COMSVI ranged from $0.1 billion in Honduras to $5.3 billion in the US. COB (GNI method) ranged from $0.1 million in Honduras to $7.8 billion in the US, and COMSVI ranged from $0.1 billion in Honduras to $16.5 billion in the US. Most GNI method values were near equivalent to those of the URE model. Although most people with blindness and MSVI live in developing countries, the highest productivity losses are in high income countries. The global economy could improve if eye care were made more accessible and more affordable to all.
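The two simple models can be sketched with the stated assumptions (half of the ≥50 prevalence falls in the 50-64 working-age band, work continues to 65, and MSVI counts as a 30% loss). The prevalence, wage and GNI figures in the example are hypothetical:

```python
def cost_of_vision_loss(prevalent_cases_50plus, annual_wage, gni_per_capita,
                        working_share=0.5, impairment_weight=1.0):
    """Sketch of the two rapid-assessment models: assume working_share of
    prevalent cases in the >=50 group are of working age (50-64) and would
    otherwise work to 65. Use impairment_weight=1.0 for blindness and 0.3
    for MSVI (30% loss). Returns (minimum-wage method, GNI method)."""
    working = prevalent_cases_50plus * working_share * impairment_weight
    return working * annual_wage, working * gni_per_capita
```

Running both methods on the same prevalence shows how the GNI method scales with national income, which is why the highest productivity losses fall in high-income countries despite lower prevalence.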
Using Reanalysis Data for the Prediction of Seasonal Wind Turbine Power Losses Due to Icing
NASA Astrophysics Data System (ADS)
Burtch, D.; Mullendore, G. L.; Delene, D. J.; Storm, B.
2013-12-01
The Northern Plains region of the United States is home to a significant amount of potential wind energy. However, in winter months the capture of this potential power is severely impacted by meteorological conditions in the form of icing. The expected loss in power production due to icing is a valuable parameter for wind turbine operations, the selection of wind turbine sites, and the long-term energy estimates used for financing purposes. Currently, losses due to icing must be estimated when developing predictions for turbine feasibility and financing studies, while icing maps, a tool commonly used in Europe, are lacking in the United States. This study uses the Modern-Era Retrospective Analysis for Research and Applications (MERRA) dataset in conjunction with turbine production data to investigate various methods of predicting seasonal losses (October-March) due to icing at two wind turbine sites located 121 km apart in North Dakota. The prediction of icing losses is based on temperature and relative humidity thresholds and is accomplished using three methods. For each of the three methods, the required atmospheric variables are determined in one of two ways: using industry-specific software to correlate anemometer data with the MERRA dataset, or using only the MERRA dataset for all variables. For each season, the percentage of the total expected generated power lost due to icing is determined and compared to observed losses from the production data. An optimization is performed to determine the relative humidity threshold that minimizes the difference between the predicted and observed values. Eight seasons of data are used to determine an optimal relative humidity threshold, and a further three seasons of data are used to test this threshold. Preliminary results have shown that the optimized relative humidity threshold for the northern turbine is higher than for the southern turbine for all methods. 
For the three test seasons, the optimized thresholds tend to under-predict the icing losses. However, the threshold determined using boundary layer similarity theory most closely predicts the power losses due to icing versus the other methods. For the northern turbine, the average predicted power loss over the three seasons is 4.65 % while the observed power loss is 6.22 % (average difference of 1.57 %). For the southern turbine, the average predicted power loss and observed power loss over the same time period are 4.43 % and 6.16 %, respectively (average difference of 1.73 %). The three-year average, however, does not clearly capture the variability that exists season-to-season. On examination of each of the test seasons individually, the optimized relative humidity threshold methodology performs better than fixed power loss estimates commonly used in the wind energy industry.
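The threshold optimization step can be illustrated with a minimal sketch: flag icing hours from temperature and relative humidity, convert them to a seasonal loss fraction, and scan candidate RH thresholds for the smallest error against the observed loss. This is a simplified stand-in for the paper's three methods, run here on synthetic data:

```python
import numpy as np

def predicted_icing_loss(temp_c, rel_hum, rh_threshold, t_threshold=0.0):
    """Fraction of hours flagged as icing: temperature below freezing and
    relative humidity at or above the candidate threshold (a simplified
    stand-in for the paper's loss-prediction methods)."""
    icing = (temp_c < t_threshold) & (rel_hum >= rh_threshold)
    return icing.mean()

def optimize_rh_threshold(temp_c, rel_hum, observed_loss,
                          candidates=np.arange(70, 101, 1)):
    """Pick the RH threshold minimizing |predicted - observed| loss."""
    errors = [abs(predicted_icing_loss(temp_c, rel_hum, th) - observed_loss)
              for th in candidates]
    return candidates[int(np.argmin(errors))]

# Synthetic season: 1,000 hours of temperature and humidity
rng = np.random.default_rng(0)
temp = rng.normal(-2.0, 5.0, 1000)    # deg C
rh = rng.uniform(50.0, 100.0, 1000)   # percent
best = optimize_rh_threshold(temp, rh, observed_loss=0.06)
```

Raising the RH threshold can only shrink the set of flagged hours, so the predicted loss decreases monotonically with the threshold; the optimizer exploits this trade-off against the observed seasonal loss.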
Analysis of No-load Iron Losses of Turbine Generators by 3D Magnetic Field Analysis
NASA Astrophysics Data System (ADS)
Nakahara, Akihito; Mogi, Hisashi; Takahashi, Kazuhiko; Ide, Kazumasa; Kaneda, Junya; Hattori, Ken'Ichi; Watanabe, Takashi; Kaido, Chikara; Minematsu, Eisuke; Hanzawa, Kazufumi
This paper focuses on the no-load iron losses of turbine generators. A program was developed to calculate these losses; in it, the core loss curves of the materials used for the stator core are reproduced precisely using tables of loss coefficients. The accuracy of the method was confirmed by comparing calculated values with measurements on a model stator core. The iron loss of a turbine generator was then estimated taking the three-dimensional distribution of magnetic flux into account, and the additional losses included in the measured iron loss were evaluated by three-dimensional magnetic field analysis.
Atukunda, Esther Cathyln; Mugyenyi, Godfrey Rwambuka; Obua, Celestino; Atuhumuza, Elly Bronney; Musinguzi, Nicholas; Tornes, Yarine Fajardo; Agaba, Amon Ganaafa; Siedner, Mark Jacob
2016-01-01
Background Accurate estimation of blood loss is central to prompt diagnosis and management of post-partum hemorrhage (PPH), which remains a leading cause of maternal mortality in low-resource countries. In such settings, blood loss is often estimated visually and subjectively by attending health workers, due to inconsistent availability of laboratory infrastructure. We evaluated the diagnostic accuracy of weighed blood loss (WBL) versus changes in peri-partum hemoglobin to detect PPH. Methods Data for this analysis were collected as part of a randomized controlled trial comparing oxytocin with misoprostol for PPH (NCT01866241). Blood samples for complete blood count were drawn on admission and again prior to hospital discharge or before blood transfusion. During delivery, women were placed on drapes and had pre-weighed sanitary towels placed around their perineum. Blood was then drained into a calibrated container and the sanitary towels were added to estimate WBL, where each gram of blood was estimated as a milliliter. Sensitivity, specificity, and negative and positive predictive values (NPV, PPV) were calculated at various blood volume loss and time combinations, and we fit receiver operating characteristic curves using blood loss at 1, 2, and 24 hours compared to a reference standard of hemoglobin decrease of >10%. Results A total of 1,140 women were enrolled in the study, of whom 258 (22.6%) developed PPH, defined as a hemoglobin drop >10%, and 262 (23.0%) had WBL ≥500 mL. WBL generally had poor sensitivity for detection of PPH (<75% for most volume-time combinations). In contrast, the specificity of WBL was high, with blood loss ≥500 mL at 1 h and ≥750 mL at any time point excluding PPH in over 97% of women. As such, WBL has a high PPV (>85%) in high-prevalence settings when WBL exceeds 750 mL. Conclusion WBL has poor sensitivity but high specificity compared to laboratory-based methods of PPH diagnosis. 
These characteristics correspond to a high PPV in areas with high PPH prevalence. Although WBL is not useful for excluding PPH, this low-cost, simple and reproducible method is promising as a reasonable method to identify significant PPH in such settings where quantifiable red cell indices are unavailable. PMID:27050823
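The volume-time accuracy figures above come from standard 2×2-table calculations, which can be sketched as follows (the counts are hypothetical, not the trial's):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table
    comparing weighed blood loss against the hemoglobin-drop reference."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Hypothetical counts, not the trial's data: 180 true positives, 10 false
# positives, 78 false negatives, 872 true negatives.
sens, spec, ppv, npv = diagnostic_accuracy(180, 10, 78, 872)
```

Note how a low sensitivity can coexist with a high PPV when false positives are rare and prevalence is high, which is exactly the pattern the study reports.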
Economic Impact of Hearing Loss and Reduction of Noise-Induced Hearing Loss in the United States
ERIC Educational Resources Information Center
Neitzel, Richard L.; Swinburn, Tracy K.; Hammer, Monica S.; Eisenberg, Daniel
2017-01-01
Purpose: Hearing loss (HL) is pervasive and debilitating, and noise-induced HL is preventable by reducing environmental noise. Lack of economic analyses of HL impacts means that prevention and treatment remain a low priority for public health and environmental investment. Method: This article estimates the costs of HL on productivity by building…
Determining forest carbon stock losses due to wildfire disturbance in the Western United States
John M. Zobel; John W. Coulston
2015-01-01
Quantifying carbon stock losses after wildfire events is challenging due to the lack of detailed information before and after the disturbance. We propose to use the extensive Western FIA database (including periodic and annual inventories) to recreate pre- and post-fire conditions to better estimate actual carbon losses. Methods include using remeasurement date where...
Channel Modeling of Miniaturized Battery-Powered Capacitive Human Body Communication Systems.
Park, Jiwoong; Garudadri, Harinath; Mercier, Patrick P
2017-02-01
The purpose of this contribution is to estimate the path loss of capacitive human body communication (HBC) systems under practical conditions. Most prior work utilizes large grounded instruments to perform path loss measurements, resulting in overly optimistic path loss estimates for wearable HBC devices. In this paper, small battery-powered transmitter and receiver devices are implemented to measure path loss under realistic assumptions. A hybrid electrostatic finite element method simulation model is presented that validates measurements and enables rapid and accurate characterization of future capacitive HBC systems. Measurements from form-factor-accurate prototypes reveal path loss results between 31.7 and 42.2 dB from 20 to 150 MHz. Simulation results matched measurements within 2.5 dB. Measurements using a large grounded benchtop vector network analyzer (VNA) and a large battery-powered spectrum analyzer (SA) underestimate path loss by up to 33.6 and 8.2 dB, respectively. Measurements utilizing a VNA with baluns, or large battery-powered SAs with baluns, still underestimate path loss by up to 24.3 and 6.7 dB, respectively. Measurements of path loss in capacitive HBC systems thus strongly depend on the instrumentation configuration, and it is imperative to simulate or measure path loss utilizing realistic geometries and grounding configurations. HBC has great potential for many emerging wearable devices and applications; accurate path loss estimation will improve system-level design, leading to viable products.
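The path loss values quoted in dB are simply the ratio of transmitted to received power on a logarithmic scale; a minimal sketch with hypothetical power levels:

```python
import math

def path_loss_db(p_tx_watts, p_rx_watts):
    """Path loss in dB from transmitted and received power."""
    return 10.0 * math.log10(p_tx_watts / p_rx_watts)

# Hypothetical example: 1 mW transmitted, 0.1 uW received -> 40 dB,
# in the same range as the 31.7-42.2 dB reported for the prototypes.
pl = path_loss_db(1e-3, 1e-7)
```

An instrument that adds an unintended ground return effectively raises the received power, which is why the grounded benchtop setups report a smaller (more optimistic) path loss than the wearable-scale devices.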
Sha, Zhichao; Liu, Zhengmeng; Huang, Zhitao; Zhou, Yiyu
2013-08-29
This paper addresses the problem of direction-of-arrival (DOA) estimation of multiple wideband coherent chirp signals, and a new method is proposed. The new method is based on signal component analysis of the array output covariance instead of the complicated time-frequency analysis used in the previous literature; it is thus more compact and effectively avoids possible signal energy loss during the intermediate processing. Moreover, a priori information on the number of signals is no longer a necessity for DOA estimation in the new method. Simulation results demonstrate the performance superiority of the new method over previous ones.
Estimating forestland area change from inventory data
Paul Van Deusen; Francis Roesch; Thomas Wigley
2013-01-01
Simple methods for estimating the proportion of land changing from forest to nonforest are developed. Variance estimators are derived to facilitate significance tests. A power analysis indicates that 400 inventory plots are required to reliably detect small changes in net or gross forest loss. This is an important result because forest certification programs may...
Wegner, C; Gutsch, A; Hessel, F; Wasem, J
2004-07-01
The costs of productivity loss attributable to smoking in the Federal Republic of Germany in 1999 were to be determined. Mortality and morbidity attributable to smoking were determined from a 0.5% sample of the smoking behaviour of the German population (microcensus 1999) and the relative mortality risks of smokers (US-American Cancer Prevention Study II). Tobacco-smoke-associated cancers, cardiovascular diseases, respiratory tract diseases, and illnesses of children under one year were considered. The productivity-relevant consequences of smoking due to morbidity and mortality were calculated according to the so-called human capital method. In Germany, a total of 607,393 working years were lost because of smoking in the year 1999. The costs of productivity loss are estimated at 14.480 billion euro: 4.525 billion euro are allotted to premature mortality, 5.759 billion euro to permanent disablement, and 4.196 billion euro to temporary incapacity for work. Referred to the gross national product (GNP) in the year 1999, the costs of productivity loss caused by smoking amount to an economic damage of 0.74% of GNP. This corresponds to a productivity loss of 379 euro per current or former smoker. The sensitivity analysis shows that the inclusion of "non-marketable production" results in an immense rise in the productivity losses attributable to smoking. However, it should be noted that in times of mass unemployment the human capital method, which assumes full employment, measures not the actual but only the potential productivity loss. This partial cost-of-illness study shows that immense economic productivity losses are associated with smoking. This loss of resources can justify the targeted promotion of studies on the cost effectiveness of smoking-cessation therapies and preventive measures against smoking. 
It should be kept in mind, however, that the human capital method overestimates the actual productivity losses from smoking. In the future, the costs of productivity losses attributable to smoking should be determined with the friction cost method, which allows a more realistic estimation of the productivity-relevant costs of smoking.
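The human capital arithmetic is a straightforward product of lost working years and average annual productivity per worker. In this sketch the productivity figure is a hypothetical value chosen so that the total reproduces the reported order of magnitude (about 14.48 billion euro):

```python
def human_capital_loss(lost_working_years, annual_productivity_per_worker):
    """Human capital method: productivity loss = lost working years times
    the average annual productivity per worker."""
    return lost_working_years * annual_productivity_per_worker

# Hypothetical average productivity of 23,840 euro/year applied to the
# 607,393 working years the study reports as lost to smoking:
loss = human_capital_loss(607_393, 23_840.0)
```

The friction cost method mentioned in the conclusion would instead value only the (much shorter) period until a vacated position is refilled, which is why it yields lower estimates.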
Hak, Eelko; Knol, Lisanne M; Wilschut, Jan C; Postma, Maarten J
2010-01-01
To assess the annual productivity loss among hospital healthcare workers attributable to influenza and to estimate the costs and economic benefits of a vaccination programme from the perspective of the employer. Cost-benefit analysis. The percentage of work loss due to influenza was determined using monthly age- and gender-specific figures for productivity loss among healthcare workers of the University Medical Center Groningen (UMCG), the Netherlands, over the period January 2006-June 2008. Influenza periods were determined on the basis of national surveillance data. The average increase in productivity loss in these periods was estimated by comparison with periods outside influenza seasons. The direct costs of productivity loss from the perspective of the employer were estimated using the friction cost method. In the sensitivity analyses various modelling parameters were varied, such as the vaccination coverage. In the UMCG, with approximately 9,400 employees, the estimated annual costs associated with productivity loss due to influenza before the introduction of the yearly influenza vaccination programme were € 675,242, or on average € 72 per employee. The economic benefits of the current vaccination programme, with a vaccination coverage of 24% and a vaccine effectiveness of 71%, were estimated at € 89,858, or € 10 per employee. The net economic benefits of a vaccination programme with a target vaccination coverage of 70% and a vaccine effectiveness of 71% were estimated at € 244,325, or € 26 per employee. This modelling study, performed from the perspective of the employer, showed that an annual influenza vaccination programme for hospital personnel can save costs.
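A simplified version of the employer-perspective calculation: averted productivity loss (baseline loss × coverage × vaccine effectiveness) minus programme cost. The per-vaccinee cost below is a hypothetical placeholder, not the paper's figure:

```python
def net_benefit(baseline_loss, coverage, effectiveness, cost_per_vaccinee,
                n_employees):
    """Net benefit of a staff influenza programme: averted productivity
    loss minus programme cost (a simplified model, not the paper's)."""
    averted = baseline_loss * coverage * effectiveness
    programme_cost = cost_per_vaccinee * coverage * n_employees
    return averted - programme_cost

# Baseline loss and coverage from the abstract; 12 euro per vaccinee is a
# hypothetical programme cost chosen purely for illustration.
net = net_benefit(675_242, 0.24, 0.71, 12.0, 9_400)
```

Under this toy model the programme remains cost-saving as long as the per-vaccinee cost stays below the averted loss per vaccinated employee.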
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Costs Attributable to Overweight and Obesity in Working Asthma Patients in the United States
Chang, Chongwon; Lee, Seung-Mi; Choi, Byoung-Whui; Song, Jong-hwa; Song, Hee; Jung, Sujin; Bai, Yoon Kyeong; Park, Haedong; Jeung, Seungwon
2017-01-01
Purpose To estimate annual health care and productivity loss costs attributable to overweight or obesity in working asthmatic patients. Materials and Methods This study was conducted using the 2003–2013 Medical Expenditure Panel Survey (MEPS) in the United States. Patients aged 18 to 64 years with asthma were identified via self-reported diagnosis, a Clinical Classification Code of 128, or an ICD-9-CM code of 493.xx. All-cause health care costs were estimated using a generalized linear model with a log function and a gamma distribution. Productivity loss costs were estimated in relation to hourly wages and missed work days, and a two-part model was used to adjust for patients with zero costs. The costs attributable to overweight or obesity in asthma patients were estimated by the recycled prediction method. Results Among 11,670 working patients with a diagnosis of asthma, 4,428 (35.2%) were obese and 3,761 (33.0%) were overweight. The health care costs attributable to obesity and overweight in working asthma patients were estimated to be $878 [95% confidence interval (CI): $861–$895] and $257 (95% CI: $251–$262) per person per year, respectively, from 2003 to 2013. The productivity loss costs attributable to obesity and overweight among working asthma patients were $256 (95% CI: $253–$260) and $26 (95% CI: $26–$27) per person per year, respectively. Conclusion Health care and productivity loss costs attributable to overweight and obesity in asthma patients are substantial. This study's results highlight the importance of effective public health and educational initiatives targeted at reducing overweight and obesity among patients with asthma, which may help lower the economic burden of asthma. PMID:27873513
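The recycled-prediction step can be sketched as follows: predict each patient's cost from a fitted log-link model twice, once with the obesity indicator switched on and once off, and average the difference. The coefficients here are made up for illustration and are not the study's estimates:

```python
import numpy as np

def predict_cost(obese, age, b0=6.0, b_obese=0.25, b_age=0.01):
    """Hypothetical fitted GLM with log link: E[cost] = exp(Xb).
    Coefficients are illustrative stand-ins, not MEPS estimates."""
    return np.exp(b0 + b_obese * obese + b_age * age)

rng = np.random.default_rng(1)
age = rng.uniform(18, 64, 5000)          # synthetic working-age sample

# Recycled prediction: everyone obese vs. no one obese, same covariates
cost_if_obese = predict_cost(1.0, age)
cost_if_not = predict_cost(0.0, age)
attributable = (cost_if_obese - cost_if_not).mean()
```

In a full two-part model, the same recycling is applied to the product of the probability-of-any-cost part and the conditional-cost part.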
Wittenborn, John S.; Zhang, Xinzhi; Feagan, Charles W.; Crouse, Wesley L.; Shrestha, Sundar; Kemper, Alex R.; Hoerger, Thomas J.; Saaddine, Jinan B.
2017-01-01
Objective To estimate the economic burden of vision loss and eye disorders in the United States population younger than 40 years in 2012. Design Econometric and statistical analysis of survey, commercial claims, and census data. Participants The United States population younger than 40 years in 2012. Methods We categorized costs based on consensus guidelines. We estimated medical costs attributable to diagnosed eye-related disorders, undiagnosed vision loss, and medical vision aids using Medical Expenditure Panel Survey and MarketScan data. The prevalence of vision impairment and blindness were estimated using National Health and Nutrition Examination Survey data. We estimated costs from lost productivity using the Survey of Income and Program Participation. We estimated costs of informal care, low vision aids, special education, school screening, government spending, and transfer payments based on published estimates and federal budgets. We estimated quality-adjusted life years (QALYs) lost based on published utility values. Main Outcome Measures Costs and QALYs lost in 2012. Results The economic burden of vision loss and eye disorders among the United States population younger than 40 years was $27.5 billion in 2012 (95% confidence interval, $21.5–$37.2 billion), including $5.9 billion for children and $21.6 billion for adults 18 to 39 years of age. Direct costs were $14.5 billion, including $7.3 billion in medical costs for diagnosed disorders, $4.9 billion in refraction correction, $0.5 billion in medical costs for undiagnosed vision loss, and $1.8 billion in other direct costs. Indirect costs were $13 billion, primarily because of $12.2 billion in productivity losses. In addition, vision loss cost society 215,000 QALYs. Conclusions We found a substantial burden resulting from vision loss and eye disorders in the United States population younger than 40 years, a population excluded from previous studies. 
Monetizing quality-of-life losses at $50,000 per QALY would add $10.8 billion in additional costs, indicating a total economic burden of $38.2 billion. Relative to previously reported estimates for the population 40 years of age and older, more than one third of the total cost of vision loss and eye disorders may be incurred by persons younger than 40 years. PMID:23631946
A methodology for overall consequence modeling in chemical industry.
Arunraj, N S; Maiti, J
2009-09-30
Risk assessment in the chemical process industry is a very important issue for safeguarding humans and the ecosystem from damage. Consequence assessment is an integral part of risk assessment. However, the commonly used consequence estimation methods either involve time-consuming, complex mathematical models or simply aggregate losses without considering all the consequence factors. This degrades the quality of the estimated risk value. Consequence modeling therefore has to be performed in detail, considering all major losses in reasonable time, to improve the decision value of the risk estimate. The losses can be broadly categorized into production loss, asset loss, human health and safety loss, and environmental loss. In this paper, a conceptual framework is first developed to assess the overall consequence considering all the important components of these major losses. Second, a methodology is developed for the calculation of all the major losses, which are normalized to yield the overall consequence. Finally, as an illustration, the proposed methodology is applied to a case-study plant involving benzene extraction. The case study result using the proposed consequence assessment scheme is compared with those from the existing methodologies.
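The normalize-and-combine idea can be sketched as a weighted sum of loss categories, each scaled by a plausible maximum loss; this is a simplified illustration of the aggregation step, not the paper's exact scheme:

```python
def overall_consequence(losses, max_losses, weights):
    """Normalize each loss category to [0, 1] by its plausible maximum and
    combine with weights summing to 1 (a simplified illustration of the
    normalization idea, not the paper's exact methodology)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * losses[k] / max_losses[k] for k in losses)

# Hypothetical loss figures (in dollars) for one accident scenario
losses = {"production": 2e6, "assets": 5e6, "health": 1e6, "environment": 0.5e6}
max_losses = {"production": 1e7, "assets": 1e7, "health": 1e7, "environment": 1e7}
weights = {"production": 0.2, "assets": 0.2, "health": 0.4, "environment": 0.2}
score = overall_consequence(losses, max_losses, weights)
```

Normalizing before weighting keeps categories measured in incommensurable units (dollars, injuries, contaminated area) on a common dimensionless scale.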
Economic losses and burden of disease by medical conditions in Norway.
Kinge, Jonas Minet; Sælensminde, Kjartan; Dieleman, Joseph; Vollset, Stein Emil; Norheim, Ole Frithjof
2017-06-01
We explore the correlation between disease specific estimates of economic losses and the burden of disease. This is based on data for Norway in 2013 from the Global Burden of Disease (GBD) project and the Norwegian Directorate of Health. The diagnostic categories were equivalent to the ICD-10 chapters. Mental disorders topped the list of the costliest conditions in Norway in 2013, and musculoskeletal disorders caused the highest production loss, while neoplasms caused the greatest burden in terms of DALYs. There was a positive and significant association between economic losses and burden of disease. Neoplasms, circulatory diseases, mental and musculoskeletal disorders all contributed to large health care expenditures. Non-fatal conditions with a high prevalence in working populations, like musculoskeletal and mental disorders, caused the largest production loss, while fatal conditions such as neoplasms and circulatory disease did not, since they occur mostly at old age. The magnitude of the production loss varied with the estimation method. The estimations presented in this study did not include reductions in future consumption, by net-recipients, due to premature deaths. Non-fatal diseases are thus even more burdensome, relative to fatal diseases, than the production loss in this study suggests. Hence, ignoring production losses may underestimate the economic losses from chronic diseases in countries with an epidemiological profile similar to Norway. Copyright © 2017 Elsevier B.V. All rights reserved.
Estimating milk yield and value losses from increased somatic cell count on US dairy farms.
Hadrich, J C; Wolf, C A; Lombard, J; Dolak, T M
2018-04-01
Milk loss due to increased somatic cell counts (SCC) results in economic losses for dairy producers. This research uses 10 mo of consecutive dairy herd improvement data from 2013 and 2014 to estimate milk yield loss using SCC as a proxy for clinical and subclinical mastitis. A fixed effects regression was used to examine factors that affected milk yield while controlling for herd-level management. Breed, milking frequency, days in milk, seasonality, SCC, cumulative months with SCC greater than 100,000 cells/mL, lactation, and herd size were variables included in the regression analysis. The cumulative months with SCC above a threshold was included as a proxy for chronic mastitis. Milk yield loss increased as the number of test days with SCC ≥100,000 cells/mL increased. Results from the regression were used to estimate a monetary value of milk loss related to SCC as a function of cow and operation related explanatory variables for a representative dairy cow. The largest losses occurred from increased cumulative test days with a SCC ≥100,000 cells/mL, with daily losses of $1.20/cow per day in the first month to $2.06/cow per day in mo 10. Results demonstrate the importance of including the duration of months above a threshold SCC when estimating milk yield losses. Cows with chronic mastitis, measured by increased consecutive test days with SCC ≥100,000 cells/mL, resulted in higher milk losses than cows with a new infection. This provides farm managers with a method to evaluate the trade-off between treatment and culling decisions as it relates to mastitis control and early detection. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
A Method to Estimate the Masses of Asymptotic Giant Branch Variable Stars
NASA Astrophysics Data System (ADS)
Takeuti, Mine; Nakagawa, Akiharu; Kurayama, Tomoharu; Honma, Mareki
2013-06-01
AGB variable stars are in the transient phase between low and high mass-loss rates; estimating the masses of these stars is necessary to study the evolutionary and mass-loss processes during the AGB stage. We applied the pulsation constant theoretically derived by Xiong and Deng (2007, MNRAS, 378, 1270) to 15 galactic AGB stars in order to estimate their masses. We found that the pulsation constant is effective for estimating the mass of a star pulsating in two different modes, such as S Crt and RX Boo, providing mass estimates comparable to theoretical results of AGB star evolution. We also extended the use of the pulsation constant to single-mode variables and analyzed the properties of AGB stars related to their masses.
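A mass estimate of this kind follows from inverting the standard pulsation-constant relation Q = P (M/M⊙)^(1/2) (R/R⊙)^(−3/2); the numbers below are illustrative, not values from the paper:

```python
def mass_from_pulsation_constant(q_days, period_days, radius_rsun):
    """Invert Q = P * (M/Msun)**0.5 * (R/Rsun)**-1.5 for the stellar mass
    in solar masses. Q and P in days, R in solar radii. The input values
    used below are illustrative, not taken from the paper."""
    return (q_days / period_days) ** 2 * radius_rsun ** 3

# Hypothetical AGB star: Q = 0.09 d, P = 300 d, R = 250 Rsun -> ~1.4 Msun
m = mass_from_pulsation_constant(q_days=0.09, period_days=300.0,
                                 radius_rsun=250.0)
```

For a double-mode pulsator, applying the relation to both periods (with the appropriate Q for each mode) gives two estimates whose consistency checks the adopted radius.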
NASA Astrophysics Data System (ADS)
Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu
2017-05-01
In this paper, a recent novel approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE) data. Among all the composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be a competitive candidate, as it best fits the loss data. The final part applies backtesting to verify the validity of the VaR and CTE risk measures.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of the estimation errors of model-matching extended-back-EMF methods for the sensorless drive of permanent-magnet synchronous motors. Analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are broadly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed that can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates along the associated quasi-optimal trajectory. The analytically derived coordinate orientation rule is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use model-matching extended-back-EMF estimation methods.
Artificial neural networks for AC losses prediction in superconducting round filaments
NASA Astrophysics Data System (ADS)
Leclerc, J.; Makong Hell, L.; Lorin, C.; Masson, P. J.
2016-06-01
An extensive and fast method to estimate superconducting AC losses within a superconducting round filament carrying an AC current and subjected to an elliptical magnetic field (both rotating and oscillating) is presented. Elliptical fields are present in rotating machine stators, and being able to accurately predict AC losses in fully superconducting machines is paramount to generating realistic machine designs. The proposed method relies on an analytical scaling law (ASL) combined with two artificial neural network (ANN) estimators taking 9 input parameters representing the superconductor, external field and transport current characteristics. The ANNs are trained with data generated by finite element (FE) computations with a commercial software package (FlexPDE) based on the widely accepted H-formulation. After training, the model is validated through comparison with additional randomly chosen data points and, for simple field configurations, compared to other predictive models. The loss estimation discrepancy is about 3% on average compared to the FE analysis. The main advantage of the model over FE simulations is its fast computation time (a few milliseconds), which allows it to be used in iterative design processes for fully superconducting machines. In addition, the proposed model provides a higher level of fidelity than the scaling laws existing in the literature, which usually consider only pure AC fields.
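The estimator's structure, nine inputs mapped through a small hidden layer to a loss value, can be sketched as a forward pass; the layer sizes and weights here are arbitrary stand-ins, not the paper's trained network:

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer network mapping the 9 superconductor/field/current
    inputs to a scalar loss estimate; tanh hidden layer, linear output.
    Weights are random stand-ins, not trained values."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(42)
w1 = rng.normal(size=(9, 16)) * 0.3   # 9 inputs -> 16 hidden units
b1 = np.zeros(16)
w2 = rng.normal(size=(16, 1)) * 0.3   # 16 hidden units -> 1 output
b2 = np.zeros(1)

x = rng.uniform(0.0, 1.0, size=(5, 9))   # 5 hypothetical design points
y = mlp_forward(x, w1, b1, w2, b2)
```

The millisecond-scale evaluation time claimed in the abstract comes from exactly this kind of forward pass replacing a full FE solve inside the design loop.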
Urban Earthquake Shaking and Loss Assessment
NASA Astrophysics Data System (ADS)
Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.
2009-04-01
This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as subcontractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic database and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macroeconomic loss quantifiers) in urban areas. 
The basic Shake Mapping is similar to the Level 0 and Level 1 analysis; however, options are available for more sophisticated treatment of site response through externally entered data and for improvement of the shake map through incorporation of accelerometric and other macroseismic data (similar to the USGS ShakeMap system). The building inventory data for the Level 2 analysis consist of grid (geo-cell) based urban building and demographic inventories. For building grouping, the European building typology developed within the EU-FP5 RISK-EU project is used. The building vulnerability/fragility relationships can be user-selected from a list of applicable relationships developed on the basis of a comprehensive study. Both empirical and analytical relationships (based on the Coefficient Method, the Equivalent Linearization Method and the Reduction Factor Method of analysis) can be employed. Casualties in the Level 2 analysis are estimated from the number of buildings in different damage states and the casualty rates for each building type and damage level. Modifications to the casualty rates can be made if necessary. The ELER Level 2 analysis also includes the calculation of direct monetary losses resulting from building damage, which allows repair-cost estimation and specific investigations associated with earthquake insurance applications (PML and AAL estimations). ELER Level 2 loss results obtained for Istanbul for a scenario earthquake using different techniques will be presented, with comparisons using different earthquake damage assessment software. The urban earthquake shaking and loss information is intended for dissemination in a timely manner to the relevant agencies for the planning and coordination of post-earthquake emergency response. The same software can also be used for scenario earthquake loss estimation, related Monte Carlo-type simulations, and earthquake insurance applications.
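The casualty step described above reduces to multiplying the number of buildings in each damage state by a per-building casualty rate for that state and building type; the counts and rates below are illustrative placeholders, not ELER's tables:

```python
def expected_casualties(buildings_by_state, casualty_rates):
    """Level-2-style casualty estimate: buildings in each damage state
    times a per-building casualty rate for that state. The counts and
    rates used below are illustrative placeholders."""
    return sum(buildings_by_state[s] * casualty_rates[s]
               for s in buildings_by_state)

# Hypothetical geo-cell for one building typology
buildings = {"slight": 4000, "moderate": 1500, "extensive": 400, "complete": 100}
rates = {"slight": 0.0, "moderate": 0.01, "extensive": 0.1, "complete": 1.5}
casualties = expected_casualties(buildings, rates)
```

A full Level 2 run repeats this sum over every geo-cell and building typology, then feeds the same damage-state counts into the repair-cost calculation.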
Evaluation of methods for the quantification of ether extract contents in forage and cattle feces.
Barbosa, Marcília M; Detmann, Edenio; Valadares, Sebastião C; Detmann, Kelly S C; Franco, Marcia O; Batista, Erick D; Rocha, Gabriel C
2017-01-01
The objective of this study was to compare the estimates of ether extract (EE) contents obtained by the Randall method and by the high-temperature method of the American Oil Chemists' Society (AOCS; Am 5-04) in forages (n = 20) and cattle feces (n = 15). The EE contents were quantified using the Randall or AOCS extraction method, with XT4 filter bags or cartridges made of qualitative filter paper (80 g/m²) as sample containers. The loss of particles, the concentration of residual chlorophyll after extraction, and the recovery of protein and minerals in the material subjected to extraction were also evaluated. A significant interaction was observed between extraction method and material for EE contents. The EE estimates using the AOCS method were higher, mainly in forages. No loss of particles was observed with either container. The chlorophyll contents in the residues of cattle feces were not affected by the extraction method; however, residual chlorophyll was lower with the AOCS method in forages. There was complete recovery of protein and ash after extraction. The results suggest that the AOCS method produces higher estimates of EE contents in forages and cattle feces, possibly by providing greater extraction of non-fatty EE.
Model comparisons for estimating carbon emissions from North American wildland fire
Nancy H.F. French; William J. de Groot; Liza K. Jenkins; Brendan M. Rogers; Ernesto Alvarado; Brian Amiro; Bernardus De Jong; Scott Goetz; Elizabeth Hoy; Edward Hyer; Robert Keane; B.E. Law; Donald McKenzie; Steven G. McNulty; Roger Ottmar; Diego R. Perez-Salicrup; James Randerson; Kevin M. Robertson; Merritt Turetsky
2011-01-01
Research activities focused on estimating the direct emissions of carbon from wildland fires across North America are reviewed as part of the North American Carbon Program disturbance synthesis. A comparison of methods to estimate the loss of carbon from the terrestrial biosphere to the atmosphere from wildland fires is presented. Published studies on emissions from...
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
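The q-method step mentioned above solves Wahba's problem as a 4x4 eigenvalue problem. A textbook sketch of that attitude step only (the paper's iterative estimation of additional parameters such as sensor biases is not shown); the quaternion ordering and sign conventions below are one of several in use:

```python
import numpy as np

def q_method(b_vecs, r_vecs, weights):
    """Davenport q-method: the attitude minimizing Wahba's loss is the
    eigenvector of the largest eigenvalue of the 4x4 Davenport matrix K.
    b_vecs are unit observations in the body frame, r_vecs the same unit
    vectors in the reference frame. Returns a quaternion with the vector
    part first and the scalar last, determined up to sign."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = sum(w * np.cross(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)   # eigenvalues in ascending order
    return eigvecs[:, -1]                  # eigenvector of the largest one

# Body frame rotated 90 degrees about z relative to the reference frame:
r = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
b = [np.array([0, 1.0, 0]), np.array([-1.0, 0, 0]), np.array([0, 0, 1.0])]
q = q_method(b, r, [1.0, 1.0, 1.0])
print(q)  # vector part along z, with magnitude sqrt(2)/2
```

Note that, as the abstract states, no a priori attitude is needed: the eigendecomposition yields the exact optimum directly.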
Green, Paul E; Woodrooffe, John
2006-01-01
Using data from the NASS General Estimates System (GES), the method of induced exposure was used to assess the effects of electronic stability control (ESC) on loss-of-control type crashes for sport utility vehicles. Sport utility vehicles were classified into crash types generally associated with loss of control and crash types most likely not associated with loss of control. Vehicles were then compared as to whether ESC technology was present or absent in the vehicles. A generalized additive model was fit to assess the effects of ESC, driver age, and driver gender on the odds of loss of control. In addition, the effects of ESC on roads that were not dry were compared to effects on roads that were dry. Overall, the estimated percentage reduction in the odds of a loss-of-control crash for sport utility vehicles equipped with ESC was 70.3%. Both genders and all age groups showed reduced odds of loss-of-control crashes, but there was no significant difference between males and females. With respect to driver age, the maximum percentage reduction of 73.6% occurred at age 27. The positive effects of ESC on roads that were not dry were significantly greater than on roads that were dry.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Li, Hailong; Xiao, Kai; Wang, Xuejing; Lu, Xiaoting; Zhang, Meng; An, An; Qu, Wenjing; Wan, Li; Zheng, Chunmiao; Wang, Xusheng; Jiang, Xiaowei
2017-10-01
Radium and radon mass balance models have been widely used to quantify submarine groundwater discharge (SGD) in coastal areas. However, the losses of radium or radon in seawater caused by recirculated saline groundwater discharge (RSGD) are ignored in most previous tracer-based studies, which can lead to an underestimation of SGD. Here we present an improved method that accounts for the tracer losses caused by RSGD to enhance accuracy in estimating SGD and SGD-associated material loadings. Theoretical analysis indicates that neglecting the tracer losses induced by RSGD underestimates the SGD by a percentage approximately equal to the tracer activity ratio of nearshore seawater to groundwater. Data analysis of previous typical case studies shows that the existing models underestimated the SGD by 1.9-93%, with an average of 32.2%. The method is applied in Jiaozhou Bay (JZB), North China, which is experiencing significant environmental pollution. The SGD flux into JZB estimated by the improved method is ~1.44 and 1.34 times that estimated by the old method for the 226Ra and 228Ra mass balance models, respectively. Both SGD and RSGD fluxes are significantly higher than the discharge rate of the Dagu River (the largest river running into JZB). The fluxes of nutrients and metals through SGD are comparable to or even higher than those from local rivers, which indicates that SGD is an important source of chemicals into JZB and has an important impact on the marine ecological system.
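The stated relationship, that the old models underestimate SGD by roughly the seawater-to-groundwater activity ratio, implies a simple first-order correction. A hedged sketch, assuming the underestimation fraction equals that ratio exactly (the actual mass balance terms in the paper are more detailed):

```python
def corrected_sgd(sgd_old, activity_seawater, activity_groundwater):
    """First-order correction: if the uncorrected estimate misses a
    fraction r = activity_seawater / activity_groundwater of the true
    SGD, then SGD_true ~ SGD_old / (1 - r). Illustrative only."""
    r = activity_seawater / activity_groundwater
    return sgd_old / (1.0 - r)

# A 20% activity ratio implies the old model missed roughly 20% of SGD:
print(corrected_sgd(100.0, 0.2, 1.0))
```

With a ratio of 0.2, an uncorrected flux of 100 units would be revised upward to about 125.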
Reverberation Modelling Using a Parabolic Equation Method
2012-10-01
the limits of their applicability. Results: Transmission loss estimates produced by the PECan parabolic equation acoustic model were used in... environments is possible when used in concert with a parabolic equation passive acoustic model. Future plans: The authors of this report recommend further... technique using other types of acoustic models should be undertaken. Furthermore, as the current method when applied as-is results in estimates that reflect
WTA estimates using the method of paired comparison: tests of robustness
Patricia A. Champ; John B. Loomis
1998-01-01
The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to the sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...
NASA Astrophysics Data System (ADS)
Wyss, M.
2012-12-01
Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues concerning the influence of error sources. If final observations and real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake in the Emilia Romagna region on May 20th, the real-time epicenter estimates of the GFZ and the USGS differed from the ultimate location given by the INGV by 6 and 9 km, respectively. Fatalities estimated within an hour of the earthquake by the loss estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the ultimate epicenter released by INGV had been used. These four numbers, being small, do not differ statistically. Thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of INGV reported intensities I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that the calculated intensities in the 17 instances with significant differences were too high, on average by one unit. By assuming higher-than-average attenuation within standard bounds for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations, the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher-than-average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to 7 reported. Thus, adjusting the attenuation in this case decreased the discrepancy between estimated and reported deaths by approximately a factor of two.
The source of the fatalities is perplexing: most fatalities occurred in industrial facilities where few workers are present at 4 AM, while the vast majority of the population at home survived. QLARM contains a function modeling the occupancy rate of residential buildings as a function of the hour of the day. The possibility that two-year-old industrial plants may collapse and kill workers within a stone's throw of abandoned old brick farmhouses that do not collapse, as happened near Sant'Agostino on 20th May 2012, is not considered in QLARM or any other loss estimating method. The dismal performance of the many new industrial plants in Emilia Romagna, which collapsed or lost their roofs or walls, shows that regional building practices can remain hidden from the world community trying to estimate earthquake risk and can lead to surprises and unnecessary fatalities.
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, X. R.; Wang, X.
2016-03-01
When the genetic algorithm is used to solve the problem of too-short-arc (TSA) orbit determination, the usual methods for outlier editing are no longer applicable, because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is achieved by using different loss functions in the fitness function, which solves the outlier problem of TSAs. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. Comparison of the results of different loss functions shows that the least median of squares and least trimmed squares methods can greatly improve the robustness of TSA determination and have a high breakdown point.
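The two robust losses named above are easy to state. A generic sketch (not the authors' code) of least-median-of-squares and least-trimmed-squares objectives, as they might enter a GA fitness function evaluated on orbit-fit residuals:

```python
import numpy as np

def lms_loss(residuals):
    """Least median of squares: the median of the squared residuals.
    Up to half the observations can be outliers without moving it much."""
    return float(np.median(np.square(residuals)))

def lts_loss(residuals, trim_fraction=0.5):
    """Least trimmed squares: the sum of the h smallest squared
    residuals, discarding the largest (presumed-outlier) ones."""
    sq = np.sort(np.square(residuals))
    h = max(1, int(len(sq) * trim_fraction))
    return float(np.sum(sq[:h]))

# A gross outlier barely moves either robust loss, unlike a plain
# sum of squares, which it would dominate completely:
r = np.array([0.1, -0.2, 0.15, 0.05, 100.0])
print(lms_loss(r), lts_loss(r))
```

In a GA, either function would replace the sum-of-squares term in the fitness evaluation of each candidate orbit, which is the simplification the abstract refers to.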
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagentoft, C.E.
1986-01-01
Many old district-heating culverts are in bad condition due to the entry of water into the thermal insulation. The thermal conductivity, and thereby the heat loss from the culvert, is much larger for wet than for dry thermal insulation. High energy prices make it interesting and necessary to find the water-damaged parts of a district-heating culvert and improve the thermal insulation so that a reduction in heat losses is obtained. The aim of the project is to develop a simple field method to determine the heat loss and the condition of the culvert. The method is based on the measurement of the temperature on top of the culvert and a classification of the soil. The classification of the soil gives an estimate of its thermal conductivity. The heat loss and the reduction in heat loss due to extra insulation are estimated from these data. Five different types of culverts were tested: two types of asbestos cement culverts, one concrete culvert, and two aerated concrete culverts. The comparison of the measured temperatures and the temperatures obtained from the simulations is reported in the study.
Quantification of the Precipitation Loss of Radiation Belt Electrons Observed by SAMPEX (Invited)
NASA Astrophysics Data System (ADS)
Tu, W.; Li, X.; Selesnick, R. S.; Looper, M. D.
2010-12-01
Based on SAMPEX/PET observations, the fluxes and the spatial and temporal variations of electron loss to the atmosphere in the Earth's radiation belt were quantified using a drift-diffusion model that includes the effects of azimuthal drift and pitch angle diffusion. The electrons measured by SAMPEX can be distinguished as trapped, quasi-trapped (in the drift loss cone), or precipitating (in the bounce loss cone), and the model simulates the low-altitude electron distribution seen by SAMPEX. After fitting the model results to the data, the magnitudes and variations of the electron loss rate can be estimated from the optimum model parameter values. In this presentation we give an overview of our method and published results, followed by some recent improvements to the model, including updating the quantified electron lifetimes more frequently (e.g., every two hours instead of half a day) to achieve smoother variations, estimating the adiabatic effects at SAMPEX's orbit and their influence on our model results, and calculating the error bar associated with each quantified electron lifetime. This method, combining a model with low-altitude observations, provides direct quantification of the electron loss rate, as required for any accurate modeling of radiation belt electron dynamics.
Mode and climatic factors effect on energy losses in transient heat modes of transmission lines
NASA Astrophysics Data System (ADS)
Bigun, A. Ya; Sidorov, O. A.; Osipov, D. S.; Girshin, S. S.; Goryunov, V. N.; Petrova, E. V.
2018-01-01
Electrical energy losses are increasing in modern grids as consumption grows. Existing models for estimating electric power losses that take climatic factors into account do not allow the cable temperature to be estimated in real time. Accounting for weather and mode factors in real time makes it possible to meet consumers' needs effectively and safely, to minimize energy losses during transmission, and to use electric power equipment efficiently. These factors increase interest in evaluating the dynamic thermal mode of overhead transmission line conductors. The article discusses an approximate analytic solution of the heat balance equation in the transient operation mode of overhead lines based on the least squares method. The accuracy of the results obtained is comparable with that of solving the heat balance equation of the transient thermal mode with the Runge-Kutta method. An analysis of the effect of mode and climatic factors on the cable temperature in a dynamic thermal mode is presented. The maximum permissible current is calculated for varying weather conditions. The average electric energy losses during the transient process are calculated for changes in wind, air temperature and solar radiation. The parameters having the greatest effect on the transmission capacity are identified.
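The Runge-Kutta reference solution mentioned above can be sketched for a simplified lumped heat balance of a conductor. The equation form and all parameter values below are illustrative placeholders, not the model or data from the article:

```python
def conductor_temperature(t_end, dt=1.0, T0=20.0, Ta=20.0,
                          I=600.0, R=1e-4, h=25.0, C=1.3e3):
    """Integrate a simplified lumped heat balance
        C dT/dt = I^2 R - h (T - Ta)
    (Joule heating minus convective cooling) with classical
    fourth-order Runge-Kutta. Parameter values are hypothetical."""
    def dTdt(T):
        return (I * I * R - h * (T - Ta)) / C

    T, t = T0, 0.0
    while t < t_end:
        k1 = dTdt(T)
        k2 = dTdt(T + 0.5 * dt * k1)
        k3 = dTdt(T + 0.5 * dt * k2)
        k4 = dTdt(T + dt * k3)
        T += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return T

# The transient approaches the steady state Ta + I^2 R / h:
print(conductor_temperature(600.0))
```

An analytic least-squares approximation, as in the article, would be compared against exactly this kind of numerical reference.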
Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk
2015-01-01
Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Highly accurate location estimation of a capsule endoscope in the gastrointestinal (GI) tract, to within several millimeters, is a challenging task, mainly because radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. Therefore, an accurate path-loss model is required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the measured loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modeling in-body RF propagation, since real measurements are largely infeasible in capsule endoscopy subjects.
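Path-loss expressions of this kind are commonly summarized as a log-distance model. A generic sketch; the exponent and reference values here are hypothetical, not the paper's fitted parameters:

```python
import math

def path_loss_db(d_mm, pl0_db=30.0, n=6.0, d0_mm=10.0):
    """Generic log-distance path-loss model,
        PL(d) = PL0 + 10 n log10(d / d0),
    where PL0 is the loss at reference distance d0 and n is the
    path-loss exponent (large for lossy in-body propagation).
    All parameter values are illustrative assumptions."""
    return pl0_db + 10.0 * n * math.log10(d_mm / d0_mm)

print(path_loss_db(100.0))  # one decade beyond d0 adds 10*n dB
```

A localization algorithm would invert such an expression, estimating distance from received signal strength.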
The protective service of mangrove ecosystems: A review of valuation methods.
Barbier, Edward B
2016-08-30
Concern over the loss of mangrove ecosystems often focuses on their role in protecting coastal communities from storms that damage property and cause deaths and injury. With climate change, mangrove loss may also result in less protection against coastal storms as well as sea-level rise, saline intrusion and erosion. Past valuations of the storm protection benefit of mangroves have relied on the second-best replacement cost method, such as estimating this protective value with the cost of building human-made storm barriers. More reliable methods instead model the production of the protection service of mangroves and estimate its value in terms of reducing the expected damages or deaths avoided by coastal communities. This paper reviews recent methods of valuing the storm protection service of mangroves and their role in protecting coastal areas and communities of tropical developing countries. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kang, S.; Kim, K.; Suk, B.; Yoo, H.
2007-12-01
A strong ground motion attenuation relationship represents the comprehensive trend of ground shaking at sites as a function of distance from the source, geology, local soil conditions, and other factors. For reliable seismic hazard/risk assessments, it is necessary to develop an attenuation relationship that carefully considers the characteristics of the target area. In this study, observed ground motions from the January 2007 magnitude 4.9 Odaesan earthquake and events occurring in the Gyeongsang provinces are compared with previously proposed ground attenuation relationships for the Korean Peninsula to select the most appropriate one. Meanwhile, several strong ground motion attenuation relationships designed for the Western United States and the Central and Eastern United States are provided in HAZUS. The relationship selected for the Korean Peninsula has been compared with the attenuation relationships available in HAZUS, and the attenuation relation for the Western United States proposed by Sadigh et al. (1997) for Site Class B was selected for this study. The reliability of the assessment is improved by using an appropriate attenuation relation. It has been used for earthquake loss estimation of the Gyeongju area in southeast Korea using the deterministic method in HAZUS with a scenario earthquake (M=6.7). Our preliminary estimates show damage to 15.6% of houses, shelter needs for about three thousand residents, and 75 lives lost in the study area for a scenario event occurring at 2 A.M. Approximately 96% of hospitals will be in normal operation within 24 hours of the proposed event. Losses related to houses will exceed 114 million US dollars. Application of the improved loss estimation methodology in Korea will help decision makers plan disaster responses and hazard mitigation.
In Vivo potassium-39 NMR spectra by the burg maximum-entropy method
NASA Astrophysics Data System (ADS)
Uchiyama, Takanori; Minamitani, Haruyuki
The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra have as good a linearity between peak areas and potassium concentrations as those obtained by fast Fourier transform and give a better estimation of intracellular potassium concentrations. Therefore potassium uptake and loss processes of mung bean root tips are shown to be more clearly traced by the maximum-entropy method.
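The Burg recursion underlying the maximum-entropy spectra above can be implemented in a few dozen lines. A generic sketch, not tied to the NMR processing pipeline used in the study:

```python
import cmath
import math

def burg_ar(x, order):
    """Estimate AR coefficients with Burg's (maximum-entropy) recursion.
    Returns (a, E): prediction polynomial a[0..order] with a[0] = 1,
    and the final prediction-error power E."""
    n = len(x)
    f = list(x)   # forward prediction errors
    b = list(x)   # backward prediction errors
    a = [1.0]
    E = sum(v * v for v in x) / n
    for m in range(order):
        num = -2.0 * sum(f[i] * b[i - 1] for i in range(m + 1, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m + 1, n))
        k = num / den                     # reflection coefficient, |k| <= 1
        a_ext = a + [0.0]                 # Levinson-style coefficient update
        a = [a_ext[i] + k * a_ext[m + 1 - i] for i in range(m + 2)]
        for i in range(n - 1, m, -1):     # update error sequences in place
            f_old = f[i]
            f[i] = f_old + k * b[i - 1]
            b[i] = b[i - 1] + k * f_old
        E *= (1.0 - k * k)
    return a, E

def burg_psd(a, E, freq):
    """Maximum-entropy power spectral density at a normalized
    frequency in cycles per sample."""
    H = sum(c * cmath.exp(-2j * math.pi * freq * j) for j, c in enumerate(a))
    return E / abs(H) ** 2

# A short sinusoid at 0.1 cycles/sample: the spectrum peaks sharply there,
# which is the resolution advantage over the FFT on short records.
x = [math.sin(2 * math.pi * 0.1 * i) for i in range(64)]
a, E = burg_ar(x, 2)
print(burg_psd(a, E, 0.1), burg_psd(a, E, 0.3))
```

For quantification, as in the abstract, peak areas of such spectra are then related to potassium concentrations.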
Method for determining damping properties of materials using a suspended mechanical oscillator
NASA Astrophysics Data System (ADS)
Biscans, S.; Gras, S.; Evans, M.; Fritschel, P.; Pezerat, C.; Picart, P.
2018-06-01
We present a new approach for characterizing the loss factor of materials, using a suspended mechanical oscillator. Compared to more standard techniques, this method offers freedom in terms of the size and shape of the tested samples. Using a finite element model and the vibration measurements, the loss factor is deduced from the oscillator's ring-down. In this way the loss factor can be estimated independently for shear and compression deformation of the sample over a range of frequencies. As a proof of concept, we present measurements for EPO-TEK 353ND epoxy samples.
Projecting productivity losses for cancer-related mortality 2011 - 2030.
Pearce, Alison; Bradley, Cathy; Hanly, Paul; O'Neill, Ciaran; Thomas, Audrey Alforque; Molcho, Michal; Sharp, Linda
2016-10-18
When individuals stop working due to cancer this represents a loss to society - the loss of productivity. The aim of this analysis was to estimate productivity losses associated with premature mortality from all adult cancers and from the 20 highest mortality adult cancers in Ireland in 2011, and project these losses until 2030. An incidence-based method was used to estimate the cost of cancer deaths between 2011 and 2030 using the Human Capital Approach. National data were used for cancer, population and economic inputs. Both paid work and unpaid household activities were included. Sensitivity analyses estimated the impact of assumptions around future cancer mortality rates, retirement ages, value of unpaid work, wage growth and discounting. The 233,000 projected deaths from all invasive cancers in Ireland between 2011 and 2030 will result in lost productivity valued at €73 billion; €13 billion in paid work and €60 billion in household activities. These losses represent approximately 1.4 % of Ireland's GDP annually. The most costly cancers are lung (€14.4 billion), colorectal and breast cancer (€8.3 billion each). However, when viewed as productivity losses per cancer death, testis (€364,000 per death), cervix (€155,000 per death) and brain cancer (€136,000 per death) are most costly because they affect working age individuals. An annual 1 % reduction in mortality reduces productivity losses due to all invasive cancers by €8.5 billion over 20 years. Society incurs substantial losses in productivity as a result of cancer-related mortality, particularly when household production is included. These estimates provide valuable evidence to inform resource allocation decisions in cancer prevention and control.
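The Human Capital Approach used above values each premature death as the discounted stream of future work lost until retirement. A minimal sketch for the paid-work component, with hypothetical monetary inputs rather than the Irish national data:

```python
def productivity_loss(age_at_death, retirement_age=65, annual_value=45000.0,
                      wage_growth=0.02, discount_rate=0.04):
    """Human Capital Approach sketch: present value of paid work lost
    between death and retirement, with wages growing and future years
    discounted. All numeric inputs are illustrative assumptions."""
    loss = 0.0
    for year in range(retirement_age - age_at_death):
        value = annual_value * (1.0 + wage_growth) ** year
        loss += value / (1.0 + discount_rate) ** year
    return loss

# Death at 55 forfeits ten discounted working years:
print(round(productivity_loss(55)))
```

The study's sensitivity analyses correspond to varying exactly these inputs: retirement age, wage growth, discounting, and the value placed on unpaid household work.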
The economic impact of pig-associated parasitic zoonosis in Northern Lao PDR.
Choudhury, Adnan Ali Khan; Conlan, James V; Racloz, Vanessa Nadine; Reid, Simon Andrew; Blacksell, Stuart D; Fenwick, Stanley G; Thompson, Andrew R C; Khamlome, Boualam; Vongxay, Khamphouth; Whittaker, Maxine
2013-03-01
The parasitic zoonoses human cysticercosis (Taenia solium), taeniasis (other Taenia species) and trichinellosis (Trichinella species) are endemic in the Lao People's Democratic Republic (Lao PDR). This study was designed to quantify the economic burden that pig-associated zoonotic diseases pose in Lao PDR. In particular, the analysis included estimation of losses in the pork industry as well as losses due to human illness and lost productivity. A Markov-probability-based decision-tree model was chosen to form the basis of the calculations to estimate the economic and public health impacts of taeniasis, trichinellosis and cysticercosis. Two different decision trees were run simultaneously on the model's human cohort, and a third decision tree simulated the potential impacts on pig production. The human capital method was used to estimate productivity loss. The results varied significantly depending on the rate of hospitalisation due to neurocysticercosis. This study is the first systematic estimate of the economic impact of pig-associated zoonotic diseases in Lao PDR and demonstrates the significance of these diseases in that country.
Temporary Losses of Highway Capacity and Impacts on Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, S.M.
2002-07-31
Traffic congestion and its impacts significantly affect the nation's economic performance and the public's quality of life. In most urban areas, travel demand routinely exceeds highway capacity during peak periods. In addition, events such as crashes, vehicle breakdowns, work zones, adverse weather, and suboptimal signal timing cause temporary capacity losses, often worsening the conditions on already congested highway networks. The impacts of these temporary capacity losses include delay, reduced mobility, and reduced reliability of the highway system. They can also cause drivers to re-route or reschedule trips. Prior to this study, no nationwide estimates of temporary losses of highway capacity had been made by type of capacity-reducing event. Such information is vital to formulating sound public policies for the highway infrastructure and its operation. This study is an initial attempt to provide nationwide estimates of the capacity losses and delay caused by temporary capacity-reducing events. The objective of this study was to develop and implement methods for producing national-level estimates of the loss of capacity on the nation's highway facilities due to temporary phenomena, as well as estimates of the impacts of such losses. The estimates produced by this study roughly indicate the magnitude of problems that are likely to be addressed by the Congress during the next re-authorization of the Surface Transportation Programs. The scope of the study includes all urban and rural freeways and principal arterials in the nation's highway system for 1999. Specifically, this study attempts to quantify the extent of temporary capacity losses due to crashes, breakdowns, work zones, weather, and sub-optimal signal timing. These events can cause impacts such as capacity reduction, delays, trip rescheduling, rerouting, reduced mobility, and reduced reliability.
This study focuses on the reduction of capacity and resulting delays caused by the temporary events mentioned above. Impacts other than capacity losses and delay, such as re-routing, rescheduling, reduced mobility, and reduced reliability, are not covered in this phase of research.
Comparison of methods for estimating evapotranspiration in a small rangeland catchment
USDA-ARS?s Scientific Manuscript database
Evapotranspiration (ET) was quantified for two rangeland vegetation types, aspen and sagebrush/grassland, over an eight year study period by comparing several approaches for estimating ET: eddy covariance systems (EC, available for only six years); soil water storage loss measured by time domain ref...
Estimating soil solution nitrate concentration from dielectric spectra using PLS analysis
USDA-ARS?s Scientific Manuscript database
Fast and reliable methods for in situ monitoring of soil nitrate-nitrogen concentration are vital for reducing nitrate-nitrogen losses to ground and surface waters from agricultural systems. While several studies have been done to indirectly estimate nitrate-nitrogen concentration from time domain s...
Economic Impact of Cystic Echinococcosis in Peru
Moro, Pedro L.; Budke, Christine M.; Schantz, Peter M.; Vasquez, Julio; Santivañez, Saul J.; Villavicencio, Jaime
2011-01-01
Background: Cystic echinococcosis (CE) constitutes an important public health problem in Peru. However, no studies have attempted to estimate the monetary and non-monetary impact of CE on Peruvian society. Methods: We used official and published sources of epidemiological and economic information to estimate direct and indirect costs associated with livestock production losses and human disease, in addition to surgical CE-associated disability-adjusted life years (DALYs) lost. Findings: The total estimated cost of human CE in Peru was U.S.$2,420,348 (95% CI: 1,118,384–4,812,722) per year. Total estimated livestock-associated costs due to CE ranged from U.S.$196,681 (95% CI: 141,641–251,629) if only direct losses (i.e., cattle and sheep liver destruction) were taken into consideration to U.S.$3,846,754 (95% CI: 2,676,181–4,911,383) if additional production losses (liver condemnation, decreased carcass weight, wool losses, decreased milk production) were accounted for. An estimated 1,139 (95% CI: 861–1,489) DALYs were also lost due to surgical cases of CE. Conclusions: This preliminary and conservative assessment of the socio-economic impact of CE on Peru, which is based largely on official sources of information, very likely underestimates the true extent of the problem. Nevertheless, these estimates illustrate the negative economic impact of CE in Peru. PMID:21629731
Eisenberg, Jonathan D.; Lee, Richard J.; Gilmore, Michael E.; Turan, Ekin A.; Singh, Sarabjeet; Kalra, Mannudeep K.; Liu, Bob; Kong, Chung Yin; Gazelle, G. Scott
2013-01-01
Purpose: To demonstrate a limitation of lifetime radiation-induced cancer risk metrics in the setting of testicular cancer surveillance—in particular, their failure to capture the delayed timing of radiation-induced cancers over the course of a patient’s lifetime. Materials and Methods: Institutional review board approval was obtained for the use of computed tomographic (CT) dosimetry data in this study. Informed consent was waived. This study was HIPAA compliant. A Markov model was developed to project outcomes in patients with testicular cancer who were undergoing CT surveillance in the decade after orchiectomy. To quantify effects of early versus delayed risks, life expectancy losses and lifetime mortality risks due to testicular cancer were compared with life expectancy losses and lifetime mortality risks due to radiation-induced cancers from CT. Projections of life expectancy loss, unlike lifetime risk estimates, account for the timing of risks over the course of a lifetime, which enabled evaluation of the described limitation of lifetime risk estimates. Markov chain Monte Carlo methods were used to estimate the uncertainty of the results. Results: As an example of evidence yielded, 33-year-old men with stage I seminoma who were undergoing CT surveillance were projected to incur a slightly higher lifetime mortality risk from testicular cancer (598 per 100 000; 95% uncertainty interval [UI]: 302, 894) than from radiation-induced cancers (505 per 100 000; 95% UI: 280, 730). However, life expectancy loss attributable to testicular cancer (83 days; 95% UI: 42, 124) was more than three times greater than life expectancy loss attributable to radiation-induced cancers (24 days; 95% UI: 13, 35). Trends were consistent across modeled scenarios. Conclusion: Lifetime radiation risk estimates, when used for decision making, may overemphasize radiation-induced cancer risks relative to short-term health risks. 
© RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12121015/-/DC1 PMID:23249573
Social Cost of Substance Abuse in Russia.
Potapchik, Elena; Popovich, Larisa
2014-09-01
To summarize the results of studies estimating the social costs of alcohol, tobacco, and illicit drug abuse in Russia. The purpose of these studies was to inform policymakers about the real economic burden of risky behaviors and to provide conditions for evidence-based, well-informed decision making in this area. The cost-of-illness method was applied to estimate the social cost of substance abuse; intangible costs were not included in the estimation. A prevalence-based approach was applied to estimate tangible costs. Direct costs were estimated with a top-down method, and indirect costs were estimated using two methods: the human capital method and the friction cost method. In 2008, the social cost of substance abuse in Russia was 677.2 billion rubles under the friction cost method and 1,965.9 billion rubles under the human capital method. The social cost of substance abuse is determined largely by alcohol consumption, which accounts for about 45% of the economic burden; illicit drug use accounts for about 30% and tobacco consumption for 25%. The results of these economic studies demonstrate that psychoactive substances impose a considerable economic burden on society. Analysis of the pattern of social costs shows that the main losses society bears because of these behavioral risk factors fall outside the health care system and lie in other sectors of the economy, such as social care, law enforcement, and lost productivity. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States
ERIC Educational Resources Information Center
Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.
2007-01-01
Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…
Detection of soil erosion within pinyon-juniper woodlands using Thematic Mapper (TM) data
NASA Technical Reports Server (NTRS)
Price, Kevin P.
1993-01-01
Multispectral measurements collected by the Landsat Thematic Mapper (TM) were correlated with field measurements, direct soil loss estimates, and Universal Soil Loss Equation (USLE) estimates to determine the sensitivity of TM data to varying degrees of soil erosion in pinyon-juniper woodland in central Utah. TM data were also evaluated as a predictor of the USLE crop management (C) factor for pinyon-juniper woodlands. TM spectral data were consistently better predictors of soil erosion factors than any combination of field factors, and were more sensitive to vegetation variations than the USLE C factor. USLE estimates showed low annual rates of erosion that varied little among the study sites. Direct measurements of the rate of soil loss using the SEDIMENT (Soil Erosion DIrect measureMENT) technique indicated high and varying rates of soil loss among the sites since tree establishment. Erosion estimates from the USLE and SEDIMENT methods suggest that erosion rates were severe in the past but, because significant amounts of soil have already been eroded and the surface is now armored by rock debris, present erosion rates are lower. Indicators of accelerated erosion were still present on all sites, however, suggesting that the USLE underestimated erosion within the study area.
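For reference, the USLE referred to above multiplies six factors; the sketch below shows the role of the cover-management C factor that the TM data were evaluated against. All parameter values are placeholders, not the study's measurements:

```python
# Universal Soil Loss Equation: A = R * K * LS * C * P.
# All parameter values below are placeholders, not the study's measurements.

def usle(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr): rainfall erosivity R, soil erodibility K,
    slope length-steepness LS, cover-management C, support practice P."""
    return R * K * LS * C * P

# Sparse canopy cover raises the C factor relative to dense cover, so the
# predicted soil loss rises proportionally.
loss_sparse = usle(R=100, K=0.3, LS=1.2, C=0.20, P=1.0)
loss_dense = usle(R=100, K=0.3, LS=1.2, C=0.05, P=1.0)
```

Because A is linear in C, any remote-sensing proxy for vegetation cover translates directly into the soil-loss estimate, which is why the sensitivity of TM data to vegetation variation matters.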
NASA Astrophysics Data System (ADS)
Lauro, S. E.; Mattei, E.; Cosciotti, B.; Di Paolo, F.; Arcone, S. A.; Viccaro, M.; Pettinelli, E.
2017-07-01
Ground-penetrating radar (GPR) is a well-established geophysical method for terrestrial exploration and has recently become one of the most promising for planetary subsurface exploration. Several future landing vehicles, including EXOMARS, the 2020 NASA ROVER, and Chang'e-4, will host GPR. A GPR survey was conducted on volcanic deposits on Mount Etna (Italy), considered a good analogue for Martian and Lunar volcanic terrains, to test a novel methodology for estimating subsoil dielectric properties. The stratigraphy of the volcanic deposits was investigated using 500 MHz and 1 GHz antennas in two configurations: transverse electric and transverse magnetic. Sloping discontinuities were used to estimate the loss tangent of the upper layer of the deposits by applying the amplitude-decay and frequency-shift methods, approximating the transmitted GPR signal by Gaussian and Ricker wavelets. The loss tangent values estimated with these two methodologies were compared and validated against values retrieved from time domain reflectometry measurements acquired along the radar profiles. The results show that the proposed analysis, together with typical GPR methods for estimating the real part of the permittivity, can be used to characterize the electrical properties of a planetary subsurface and to place constraints on its lithology.
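A sketch of the inversion step behind the amplitude-decay method: under the standard low-loss approximation, the attenuation coefficient satisfies alpha ≈ pi * f * sqrt(eps_r) * tan(delta) / c, so an attenuation estimated from echo-amplitude decay can be inverted for the loss tangent. The numbers below are illustrative, not the Etna values:

```python
# Inverting the low-loss attenuation relation
#   alpha = pi * f * sqrt(eps_r) * tan_delta / c
# for the loss tangent. Values are illustrative, not the Etna data.
import math

C_VACUUM = 3.0e8  # speed of light, m/s

def loss_tangent(alpha, freq_hz, eps_r):
    """Loss tangent from attenuation alpha (Np/m), frequency, and relative
    permittivity, under the low-loss approximation."""
    return alpha * C_VACUUM / (math.pi * freq_hz * math.sqrt(eps_r))

# Attenuation estimated from echo-amplitude decay along a sloping reflector:
tan_delta = loss_tangent(alpha=0.5, freq_hz=500e6, eps_r=5.0)
```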
Xu, Yongxiang; Wei, Yanyu; Zou, Jibin; Li, Jianjun; Qi, Wenjuan; Li, Yong
2014-01-01
A deep-sea permanent magnet motor equipped with a fluid-compensated pressure-tolerant system is compressed by high-pressure fluid both outside and inside. The induced stress distribution in the stator core is significantly different from that in a land-based motor. Its effect on the magnetic properties of the stator core is important for deep-sea motor designers but seldom reported. In this paper, the stress distribution in the stator core under seawater compressive stress is calculated by the 2D finite element method (FEM). The effect of compressive stress on the magnetic properties of the electrical steel sheet, that is, on its permeability, BH curves, and BW curves, is also measured. Then, based on the measured magnetic properties and calculated stress distribution, the stator iron loss is estimated by a stress-electromagnetics-coupled FEM. Finally, the estimate is verified by experiment. Both the calculated and measured results show that stator iron loss increases appreciably with seawater compressive stress.
NASA Technical Reports Server (NTRS)
Loeppky, J. A.; Kobayashi, Y.; Venters, M. D.; Luft, U. C.
1979-01-01
Blood samples were obtained from a forearm vein or artery via indwelling cannula (1) before, (2) during the last minute of, and (3) about 2 min after lower body negative pressure (LBNP) in 16 experiments to determine whether plasma volume (PV) estimates were affected by regional hemoconcentration in the lower body. Total hemoglobin (THb) was estimated with the CO method prior to LBNP. Hemoglobin (Hb) and hematocrit (Hct) values from (2) gave only a 3% (87 ml) loss in PV due to LBNP, assuming no change in THb. However, Hb and Hct values from (3) showed an 11% loss in PV (313 ml). This 72% underestimation of PV loss with (2) must have resulted from the sequestration of blood and subsequent hemoconcentration in the lower body during LBNP. The effects of LBNP on PV should therefore be estimated 1-3 min after exposure, after mixing but before extravascular fluid returns to the circulation.
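The PV calculation described (relative plasma-volume change from Hb and Hct, assuming constant total hemoglobin) is commonly written as the Dill and Costill relation; a sketch with illustrative sample values:

```python
# Percent plasma-volume change from Hb and Hct assuming constant total
# hemoglobin (the Dill & Costill relation; the abstract makes the same
# constant-THb assumption). Sample values are illustrative.

def pv_change_pct(hb_pre, hct_pre, hb_post, hct_post):
    """Relative plasma-volume change (%) between two blood samples."""
    bv_ratio = hb_pre / hb_post            # relative whole-blood volume change
    pv_post = bv_ratio * (1.0 - hct_post)  # relative plasma volume, after
    pv_pre = 1.0 - hct_pre                 # relative plasma volume, before
    return 100.0 * (pv_post - pv_pre) / pv_pre

# Hemoconcentration (Hb and Hct both rise) implies a plasma-volume loss:
delta = pv_change_pct(hb_pre=15.0, hct_pre=0.45, hb_post=15.8, hct_post=0.47)
```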
Barriers to Mental Health Service Use Among Workers With Depression and Work Productivity
Hoch, Jeffrey S.
2015-01-01
Objective: This article estimates the decrease in workplace productivity losses associated with removal of three types of barriers to mental health service use among workers with depression. Methods: A model of productivity losses based on the results of a population-based survey of Canadian workers was used to estimate the impact of three types of barriers to mental health service use among workers with depression. Results: Removing the service need recognition barrier is associated with a 33% decrease in work productivity losses. There is a 49% decrease when all three barriers are removed. Conclusions: Our results suggest recognizing the need for treatment is only one barrier to service use; attitudinal and structural barriers should also be considered. The greatest decrease in productivity losses is observed with the removal of all three barriers. PMID:26147540
Hill, Heather D.; Morris, Pamela A.; Castells, Nina; Walker, Jessica Thornton
2011-01-01
This study uses data from an experimental employment program and instrumental variables (IV) estimation to examine the effects of maternal job loss on child classroom behavior. Random assignment to the treatment at one of three program sites is an exogenous predictor of employment patterns. Cross-site variation in treatment-control differences is used to identify the effects of employment levels and transitions. Under certain assumptions, this method controls for unobserved correlates of job loss and child well-being, as well as measurement error and simultaneity. IV estimates suggest that maternal job loss sharply increases problem behavior but has neutral effects on positive social behavior. Current employment programs concentrate primarily on job entry, but these findings point to the importance of promoting job stability for workers and their children. PMID:22162901
Kearney, Lauren; Kynn, Mary; Reed, Rachel; Davenport, Lisa; Young, Jeanine; Schafer, Keppel
2018-06-07
In industrialised countries the incidence of postpartum haemorrhage (PPH) is increasing, and its exact etiology is not well understood. Previous studies have relied upon retrospective data with estimated blood loss as the primary outcome, which is known to be underestimated by clinicians. This study aimed to explore variables associated with PPH in a cohort of women birthing vaginally in coastal Queensland, Australia, using the gravimetric method to measure blood loss. Women were prospectively recruited using an opt-out consent process. Maternal demographics; pregnancy history; model of care; mode of birth; third-stage management practices; antenatal, intrapartum and immediate postpartum complications; gravimetric and estimated blood loss; and haematological laboratory data were collected via a pre-designed data collection instrument. Descriptive statistics were used for demographic, intrapartum and birthing-practice variables. A general linear model was used for multivariate analysis to examine the relationship between gravimetric blood loss and demographic, birthing-practice and intrapartum variables. The primary outcome was postpartum haemorrhage (blood loss > 500 ml). 522 singleton births were included in the analysis. Maternal mean age was 29 years; 58% of women were multiparous. Most participants received active (291, 55.7%) or modified active (191, 36.6%) management of the third stage. Of 451 births with valid gravimetric blood loss recorded, 35% (n = 159) recorded a loss of 500 ml or more, and 111 (70%) of these were recorded as PPH. Gravimetric blood loss was strongly correlated with estimated blood loss (r = 0.88; p < 0.001). On average, estimated blood loss was lower than gravimetric blood loss, at about 78% of the measured value. High neonatal weight, perineal injury, complications during labour, separation of mother and baby, and observation of a gush of blood were associated with PPH.
In contrast to previous study findings, nulliparity, labour induction and augmentation, and syntocinon use were not associated with PPH. Estimation of blood loss was relatively accurate in comparison with gravimetric assessment, raising questions about routine gravimetric assessment of blood loss following uncomplicated births. Further research is required to investigate the type and speed of blood loss associated with PPH.
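The reported relationship, estimates at roughly 78% of gravimetric values with r = 0.88, amounts to a proportional under-estimation. A through-origin least-squares sketch on synthetic values illustrates the idea:

```python
# Through-origin least-squares slope: the average factor by which clinician
# estimates fall short of gravimetric measurements. Values are synthetic,
# not the study's data.

measured = [300, 450, 520, 700, 900]    # gravimetric blood loss (ml)
estimated = [240, 350, 400, 560, 690]   # clinician estimates (ml)

slope = (sum(m * e for m, e in zip(measured, estimated))
         / sum(m * m for m in measured))

# PPH classification uses the measured loss against the 500 ml threshold:
pph = [m >= 500 for m in measured]
```

A slope below 1 on this kind of fit is the signature of systematic under-estimation that motivates gravimetric measurement in the first place.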
[DPOAE in tinnitus patients with cochlear hearing loss considering hyperacusis and misophonia].
Sztuka, Aleksandra; Pośpiech, Lucyna; Gawron, Wojciech; Dudek, Krzysztof
2006-01-01
The most probable site of tinnitus generation in the auditory pathway is the outer hair cells (OHCs) of the cochlea, whose activity can be assessed with otoacoustic emissions. The goals of this investigation were to characterize DPOAE responses in tinnitus patients with cochlear hearing loss; to assess the diagnostic value of DPOAE parameters for analyzing cochlear function in these patients, emphasizing the parameters most useful in localizing tinnitus generators; and to estimate the hypothesized influence of hyperacusis and misophonia on DPOAE parameters in tinnitus patients with cochlear hearing loss. The study material comprised 42 tinnitus patients with cochlear hearing loss; the control group comprised 21 patients with the same type of hearing loss but without tinnitus. The tinnitus patients were further divided into three subgroups based on audiologic findings: with hyperacusis, with misophonia, and with neither. After a tinnitus history and physical examination, all patients underwent pure-tone and impedance audiometry, suprathreshold tests, ABR, and evaluation of the audiometric average and discomfort level. DPOAE was then measured in three procedures. In the first, amplitudes at two points per octave were assessed; in the second, the "fine structure" method, 16-20 points per octave were assessed (f2/f1 = 1.2, L1 = L2 = 70 dB). The third procedure recorded the growth-rate function in three series for input tones of f2 = 2002, 4004, and 6006 Hz (f2/f1 = 1.22) and levels L1 = L2 increasing in 5 dB steps in each series. DPOAE amplitudes recorded at two points per octave and with the fine-structure method are valuable parameters for estimating cochlear function in tinnitus patients with cochlear hearing loss. The decrease of DPOAE amplitudes in these patients suggests a significant role of OHC pathology, unbalanced by IHC injury, in the generation of tinnitus in patients with hearing loss of cochlear origin.
DPOAE fine structure provides additional information beyond the two-points-per-octave amplitude recording by expanding the set of f2 frequencies at which differences between the tinnitus and control groups can be observed. The growth-rate function cannot be the only parameter used to evaluate DPOAE in tinnitus patients with cochlear hearing loss, including those with hyperacusis and misophonia. Hyperacusis has an important influence on DPOAE, substantially increasing DPOAE amplitude in the examined group of tinnitus patients.
Pearce, Alison M; Hanly, Paul; Timmons, Aileen; Walsh, Paul M; O'Neill, Ciaran; O'Sullivan, Eleanor; Gooberman-Hill, Rachael; Thomas, Audrey Alforque; Gallagher, Pamela; Sharp, Linda
2015-08-01
Previous studies suggest that productivity losses associated with head and neck cancer (HNC) are higher than in other cancers. These studies have assessed only a single aspect of productivity loss, such as temporary absenteeism or premature mortality, and have used only the Human Capital Approach (HCA). The Friction Cost Approach (FCA) is increasingly recommended, although it has not previously been used to assess lost production from HNC. The aim of this study was to estimate the lost productivity associated with HNC due to different types of absenteeism and premature mortality, using both the HCA and FCA. Survey data on employment status were collected from 251 HNC survivors in Ireland and combined with population-level survival estimates and national wage data. The costs of temporary and permanent time off work, reduced working hours and premature mortality were calculated using both the HCA and FCA. Estimated total productivity losses per employed person of working age were EUR253,800 using the HCA and EUR6800 using the FCA. The main driver of HCA costs was premature mortality (38% of the total), while for the FCA it was temporary time off work (73% of the total). The productivity losses associated with head and neck cancer are substantial, and return-to-work assistance could form an important part of rehabilitation. Using both the HCA and FCA allowed different drivers of productivity losses to be identified, owing to the different assumptions of the two methods. For future estimates of productivity losses, the use of both approaches may be pragmatic.
DeMars, Craig A; Auger-Méthé, Marie; Schlägel, Ulrike E; Boutin, Stan
2013-01-01
Analyses of animal movement data have primarily focused on understanding patterns of space use and the behavioural processes driving them. Here, we analyzed animal movement data to infer components of individual fitness, specifically parturition and neonate survival. We predicted that parturition and neonate loss events could be identified by sudden and marked changes in female movement patterns. Using GPS radio-telemetry data from female woodland caribou (Rangifer tarandus caribou), we developed and tested two novel movement-based methods for inferring parturition and neonate survival. The first method estimated movement thresholds indicative of parturition and neonate loss from population-level data then applied these thresholds in a moving-window analysis on individual time-series data. The second method used an individual-based approach that discriminated among three a priori models representing the movement patterns of non-parturient females, females with surviving offspring, and females losing offspring. The models assumed that step lengths (the distance between successive GPS locations) were exponentially distributed and that abrupt changes in the scale parameter of the exponential distribution were indicative of parturition and offspring loss. Both methods predicted parturition with near certainty (>97% accuracy) and produced appropriate predictions of parturition dates. Prediction of neonate survival was affected by data quality for both methods; however, when using high quality data (i.e., with few missing GPS locations), the individual-based method performed better, predicting neonate survival status with an accuracy rate of 87%. Understanding ungulate population dynamics often requires estimates of parturition and neonate survival rates. With GPS radio-collars increasingly being used in research and management of ungulates, our movement-based methods represent a viable approach for estimating rates of both parameters. PMID:24324866
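The first (population-level) method can be sketched as a moving-window scan for an abrupt drop in mean step length. The track, window size, and threshold below are synthetic stand-ins, not the calibrated caribou values:

```python
# Moving-window scan for an abrupt drop in mean step length, the signature
# of parturition in the population-level method. Track, window size, and
# threshold are synthetic stand-ins for the calibrated values.

# Toy track: large steps before calving (index < 100), small steps after.
steps = ([400 + (i % 7) * 30 for i in range(100)]
         + [30 + (i % 5) * 8 for i in range(100)])

def detect_drop(steps, window=12, threshold=150.0):
    """Return the first index whose windowed mean step length falls below
    the threshold, or None if movement never drops."""
    for i in range(len(steps) - window + 1):
        if sum(steps[i:i + window]) / window < threshold:
            return i
    return None

calving_index = detect_drop(steps)   # flagged near the true change at 100
```

The individual-based method in the abstract replaces this fixed threshold with model comparison over exponential step-length distributions, which is what makes it more robust when GPS fixes are missing.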
NASA Astrophysics Data System (ADS)
Bolte, Nathan; Heidbrink, W. W.; Pace, D. C.; van Zeeland, M. A.; Chen, X.
2015-11-01
A new fast-ion diagnostic method uses passive emission of D-alpha radiation to determine fast-ion losses quantitatively. The passive fast-ion D-alpha simulation (P-FIDAsim) forward models the Doppler-shifted spectra of first-orbit fast ions that charge exchange with edge neutrals. Simulated spectra are up to 80% correlated with experimental spectra. Calibrated spectra are used to estimate the 2D neutral density profile by inverting simulated spectra. The inferred neutral density shows the expected increase toward each x-point, with an average value of 8 × 10^9 cm^-3 at the plasma boundary and 1 × 10^11 cm^-3 near the wall. Measuring and simulating first-orbit spectra effectively "calibrates" the system, allowing for the quantification of more general fast-ion losses. Sawtooth crashes are estimated to eject 1.2% of the fast-ion inventory, in good agreement with a 1.7% loss estimate made by TRANSP. Sightlines sensitive to passing ions observe larger sawtooth losses than sightlines sensitive to trapped ions. Supported by US DOE under SC-G903402, DE-FC02-04ER54698.
USDA-ARS?s Scientific Manuscript database
Volatilization represents a significant loss pathway for many pesticides, herbicides and other agrochemicals. One common method for measuring the volatilization of agrochemicals is the flux-gradient method. Using this method, the chemical flux is estimated as the product of the vertical concentratio...
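A sketch of the flux-gradient idea named in the summary: the flux is the product of an eddy diffusivity and the vertical concentration gradient measured between two heights. The neutral-stability diffusivity and all values below are illustrative assumptions:

```python
# Flux-gradient sketch: flux = -(eddy diffusivity) * (vertical concentration
# gradient), measured between two heights. The neutral-stability diffusivity
# K = k * u_star * z and all numeric values are illustrative.

VON_KARMAN = 0.4

def flux_gradient(c_low, c_high, z_low, z_high, u_star):
    """Upward chemical flux from concentrations at two measurement heights."""
    z_mid = 0.5 * (z_low + z_high)
    k_eddy = VON_KARMAN * u_star * z_mid           # m^2/s, neutral stability
    gradient = (c_high - c_low) / (z_high - z_low)
    return -k_eddy * gradient

# Concentration decreasing with height indicates emission (positive flux up):
f = flux_gradient(c_low=1.2e-6, c_high=0.8e-6, z_low=0.5, z_high=2.0,
                  u_star=0.3)
```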
Factors influencing to earthquake caused economical losses on urban territories
NASA Astrophysics Data System (ADS)
Nurtaev, B.; Khakimov, S.
2005-12-01
This paper discusses the assessment of earthquake-related economic losses in urban territories of Uzbekistan, taking into account the damage-forming factors that increase or reduce economic losses. Vulnerability factors for buildings and facilities were classified, and the most important were selected from a total of 50. The factors were ranked by level of impact and assigned weight functions for loss assessment. One group of damage-forming factors includes seismic hazard assessment and the design, construction, and maintenance of buildings and facilities. Another group is formed by city-planning characteristics and includes the density of construction and population, the area of soft soils, the presence of liquefaction-susceptible soils, and so on. Weight functions and interval values by group were assigned to all these factors. Methodical recommendations for loss assessment taking these factors into account were developed. This makes it possible to carry out preventive measures to protect vulnerable territories and to differentiate the cost assessment of each region according to territorial peculiarities and damage value. Using the developed method, we ranked cities by risk level, which allowed us to establish ratings of the general vulnerability of urban territories and, on that basis, to make optimal decisions oriented toward loss mitigation and increased population safety. The technique can also be used by insurance companies for estimation-based zoning of territory, development of effective land-use schemes, rational town planning, and economic assessment of territory in the various works connected with seismic hazard estimation.
Further improvement of the technique for rating cities by level of earthquake damage will increase the quality of construction and the rationality of building placement, and will serve as an economic stimulus for increasing the seismic resistance of buildings.
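The weighting scheme described, assigning weight functions to damage-forming factors and ranking cities by the resulting score, can be sketched as a weighted sum. The factor names, weights, and city values here are hypothetical, not the paper's classification:

```python
# Weighted-factor vulnerability score for ranking cities by risk level.
# Factor names, weights, and city values are hypothetical.

WEIGHTS = {"soft_soil_area": 0.25, "construction_density": 0.20,
           "liquefaction_susceptibility": 0.30, "building_age": 0.25}

def vulnerability_score(factors):
    """Weighted sum of damage-forming factors, each normalised to 0-1."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

city_a = vulnerability_score({"soft_soil_area": 0.8,
                              "construction_density": 0.6,
                              "liquefaction_susceptibility": 0.9,
                              "building_age": 0.7})
city_b = vulnerability_score({"soft_soil_area": 0.2,
                              "construction_density": 0.3,
                              "liquefaction_susceptibility": 0.1,
                              "building_age": 0.4})
ranking = sorted([("A", city_a), ("B", city_b)], key=lambda c: -c[1])
```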
Calvert, Clara; Thomas, Sara L.; Ronsmans, Carine; Wagner, Karen S.; Adler, Alma J.; Filippi, Veronique
2012-01-01
Objective To provide regional estimates of the prevalence of maternal haemorrhage and explore the effect of methodological differences between studies on any observed regional variation. Methods We conducted a systematic review of the prevalence of maternal haemorrhage, defined as blood loss greater than or equal to 1) 500 ml or 2) 1000 ml in the antepartum, intrapartum or postpartum period. We obtained regional estimates of the prevalence of maternal and severe maternal haemorrhage by conducting meta-analyses and used meta-regression to explore potential sources of between-study heterogeneity. Findings No studies reported the prevalence of antepartum haemorrhage (APH) according to our definitions. The prevalence of postpartum haemorrhage (PPH) (blood loss ≥500 ml) ranged from 7.2% in Oceania to 25.7% in Africa. The prevalence of severe PPH (blood loss ≥1000 ml) was highest in Africa at 5.1% and lowest in Asia at 1.9%. There was strong evidence of between-study heterogeneity in the prevalence of PPH and severe PPH in most regions. Meta-regression analyses suggested that region and method of measurement of blood loss influenced prevalence estimates for both PPH and severe PPH. The regional patterns changed after adjusting for the other predictors of PPH indicating that, compared with European women, Asian women have a lower prevalence of PPH. Conclusions We found evidence that Asian women have a very low prevalence of PPH compared with women in Europe. However, more reliable estimates will only be obtained with the standardisation of the measurement of PPH so that the data from different regions are comparable. PMID:22844432
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low-and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
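For context, the IPTW estimand discussed above can be sketched in a few lines; propensity scores are taken as known here, whereas the paper's concern is precisely what happens when they are modeled flexibly. The data are illustrative:

```python
# Minimal IPTW sketch of the mean outcome under treatment, E[Y(1)].
# Propensity scores are taken as known; data are illustrative.

def iptw_mean_treated(outcomes, treated, propensity):
    """Horvitz-Thompson style estimate: mean of 1{A=1} * Y / ps."""
    n = len(outcomes)
    return sum(a * y / p for y, a, p in zip(outcomes, treated, propensity)) / n

y = [3.0, 1.0, 4.0, 2.0]          # outcomes
a = [1, 0, 1, 0]                  # treatment indicators
ps = [0.5, 0.5, 0.8, 0.2]         # P(A=1 | covariates)
est = iptw_mean_treated(y, a, ps)
# A treated unit with ps near 0 would receive an enormous weight 1/ps; this
# instability under adjustment for pure causes of exposure is what motivates
# collaborative targeting (C-TMLE).
```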
Weir, Hannah K.; Li, Chunyu; Henley, S. Jane; Joseph, Djenaba
2018-01-01
Background Educational attainment (EA) is inversely associated with colorectal cancer risk. Colorectal cancer screening can save lives if precancerous polyps or early cancers are found and successfully treated. This study aims to estimate the potential productivity loss (PPL) and associated avoidable colorectal cancer-related deaths among screen-eligible adults residing in lower-EA counties in the United States. Methods Mortality and population data were used to examine colorectal cancer deaths (2008-2012) among adults aged 50 to 74 years in lower-EA counties, and to estimate the expected number of deaths using the mortality experience of high-EA counties. Excess deaths (observed minus expected) were used to estimate potential years of life lost, and the human capital method was used to estimate PPL in 2012 U.S. dollars. Results County-level colorectal cancer death rates were inversely associated with county-level EA. Of the 100,857 colorectal cancer deaths in lower-EA counties, we estimated that more than 21,000 (1 in 5) were potentially avoidable and resulted in nearly $2 billion in annual productivity loss. Conclusions County-level EA disparities contribute to a large number of potentially avoidable colorectal cancer-related deaths. Increased prevention and improved screening could decrease deaths and help reduce the associated economic burden in lower-EA communities. Increased screening could further reduce deaths in all EA groups. Impact These results estimate the large economic impact of potentially avoidable colorectal cancer-related deaths in economically disadvantaged communities, as measured by lower EA. PMID:28003180
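The observed-minus-expected and human-capital steps described in the Methods can be sketched as follows, with hypothetical inputs rather than the study's figures:

```python
# Observed-minus-expected deaths, then human-capital productivity loss.
# All inputs are hypothetical, not the study's figures.

def excess_deaths(observed, population, reference_rate_per_100k):
    """Deaths above what the reference (high-EA) death rate would predict."""
    expected = reference_rate_per_100k * population / 100_000
    return observed - expected

def productivity_loss(excess, years_to_retirement, annual_earnings):
    """Human-capital loss: lost earnings over remaining working years."""
    return excess * years_to_retirement * annual_earnings

ex = excess_deaths(observed=1200, population=1_000_000,
                   reference_rate_per_100k=90)
ppl = productivity_loss(ex, years_to_retirement=8, annual_earnings=40_000)
```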
Are False-Positive Rates Leading to an Overestimation of Noise-Induced Hearing Loss?
ERIC Educational Resources Information Center
Schlauch, Robert S.; Carney, Edward
2011-01-01
Purpose: To estimate false-positive rates for rules proposed to identify early noise-induced hearing loss (NIHL) using the presence of notches in audiograms. Method: Audiograms collected from school-age children in a national survey of health and nutrition (the Third National Health and Nutrition Examination Survey [NHANES III]; National Center…
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or determining initial rates of degradant formation under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often less than 0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial-rate method, in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations, often introduces large errors into shelf-life prediction. This study proposes predicting the shelf life of a model pharmaceutical preparation by using sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition. This method was compared with traditional shelf-life prediction approaches in terms of the time required to predict shelf life and the associated error in shelf-life estimation. Results demonstrated that the proposed LC/MS method using initial-rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimate than the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
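At low conversion, a degradant's initial formation rate is approximately constant (zero-order), so the shelf life at the storage condition is simply the specification limit divided by the measured rate, with no Arrhenius extrapolation. A sketch with illustrative numbers:

```python
# Zero-order approximation at low conversion: shelf life = specification
# limit / initial degradant formation rate. Numbers are illustrative.

def shelf_life_days(rate_pct_per_day, spec_limit_pct=0.5):
    """Days for the degradant to reach its specification limit."""
    return spec_limit_pct / rate_pct_per_day

# A formation rate small enough to be invisible as parent-drug loss can
# still be quantitated by LC/MS at the storage condition:
t_shelf = shelf_life_days(rate_pct_per_day=0.0005)   # about 1000 days
```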
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of the estimation errors of model-matching phase-estimation methods, such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers, for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are highly general and widely applicable. As an example of this applicability, a new trajectory-oriented vector control method is proposed that directly realizes a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates along the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use any of the model-matching phase-estimation methods.
Vaughan, Ian P.; Ramirez Saldivar, Diana A.; Nathan, Senthilvel K. S. S.; Goossens, Benoit
2017-01-01
The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can cause challenges in choosing the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, there is a need for the analysis to advance alongside the technology. Here, using a high-quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and the estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. It estimated the home range of proboscis monkeys to be 24–165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages, including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences that disturbances could have on an animal, and accordingly be used to assist in the management and restoration of degraded landscapes. PMID:28362872
The South African Tuberculosis Care Cascade: Estimated Losses and Methodological Challenges
Naidoo, Pren; Theron, Grant; Rangaka, Molebogeng X; Chihota, Violet N; Vaughan, Louise; Brey, Zameer O; Pillay, Yogan
2017-01-01
Abstract Background While tuberculosis incidence and mortality are declining in South Africa, meeting the goals of the End TB Strategy requires an invigorated programmatic response informed by accurate data. Enumerating the losses at each step in the care cascade enables appropriate targeting of interventions and resources. Methods We estimated the tuberculosis burden; the number and proportion of individuals with tuberculosis who accessed tests, had tuberculosis diagnosed, initiated treatment, and successfully completed treatment for all tuberculosis cases, for those with drug-susceptible tuberculosis (including human immunodeficiency virus (HIV)-coinfected cases) and rifampicin-resistant tuberculosis. Estimates were derived from national electronic tuberculosis register data, laboratory data, and published studies. Results The overall tuberculosis burden was estimated to be 532,005 cases (range, 333,760–764,480 cases), with successful completion of treatment in 53% of cases. Losses occurred at multiple steps: 5% at test access, 13% at diagnosis, 12% at treatment initiation, and 17% at successful treatment completion. Overall losses were similar among all drug-susceptible cases and those with HIV coinfection (54% and 52%, respectively, successfully completed treatment). Losses were substantially higher among rifampicin-resistant cases, with only 22% successfully completing treatment. Conclusion Although the vast majority of individuals with tuberculosis engaged the public health system, just over half were successfully treated. Urgent efforts are required to improve implementation of existing policies and protocols to close gaps in tuberculosis diagnosis, treatment initiation, and successful treatment completion. PMID:29117342
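The step losses quoted here are percentages of the total estimated burden, so the successful-completion figure is simply the remainder after the four losses. A quick check, using the figures from the abstract:

```python
# Care-cascade losses as percentages of the total estimated TB burden
step_losses = {"test access": 5, "diagnosis": 13,
               "treatment initiation": 12, "treatment completion": 17}

# Successful completion is what remains after all step losses
completed = 100 - sum(step_losses.values())
print(completed)  # 53
```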
Novel Method for Quantitative Estimation of Biofilms.
Syal, Kirtimaan
2017-10-01
Biofilm protects bacteria from stress and hostile environments. The crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, the biofilm layer formed at the liquid-air interface, known as a pellicle, is extremely sensitive to the assay's washing and staining steps. Early-phase biofilms are also prone to damage by these steps. In bacteria like mycobacteria, biofilm formation occurs largely at the liquid-air interface and is susceptible to loss. In the proposed protocol, loss of this biofilm layer was prevented. Instead of inverting the vessel and discarding the media, which can lead to loss of the aerobic biofilm layer in the CV assay, media was removed from under the formed biofilm with a syringe and the biofilm layer was allowed to dry. The staining and washing steps were avoided; an organic solvent, tetrahydrofuran (THF), was used to dissolve the biofilm, and the absorbance was recorded at 595 nm. The protocol was tested for biofilm estimation of E. coli, B. subtilis and M. smegmatis, and compared with the traditional CV assay. Isoniazid, a drug molecule known to inhibit M. smegmatis biofilm, was tested and its inhibitory effects were quantified by the proposed protocol. For ease of reference, this method has been described as the Syal method for biofilm quantification. The new method was found to be useful for the estimation of early-phase biofilm and of the aerobic biofilm layer formed at the liquid-air interface. The biofilms formed by all three tested bacteria (B. subtilis, E. coli and M. smegmatis) were precisely quantified.
The economic burden of child sexual abuse in the United States.
Letourneau, Elizabeth J; Brown, Derek S; Fang, Xiangming; Hassan, Ahmed; Mercy, James A
2018-05-01
The present study provides an estimate of the U.S. economic impact of child sexual abuse (CSA). Costs of CSA were measured from the societal perspective and include health care costs, productivity losses, child welfare costs, violence/crime costs, special education costs, and suicide death costs. We separately estimated quality-adjusted life year (QALY) losses. For each category, we used the best available secondary data to develop cost-per-case estimates. All costs were estimated in U.S. dollars and adjusted to the reference year 2015. Based on an estimated 20 new cases of fatal and 40,387 new substantiated cases of nonfatal CSA that occurred in 2015, the lifetime economic burden of CSA is approximately $9.3 billion; the lifetime cost for victims of fatal CSA is on average $1,128,334 per female victim and $1,482,933 per male victim, and the average lifetime cost for victims of nonfatal CSA is $282,734 per female victim. For male victims of nonfatal CSA, there was insufficient information on productivity losses, contributing to a lower average estimated lifetime cost of $74,691 per male victim. If we included QALYs, these costs would increase by approximately $40,000 per victim. With the exception of male productivity losses, all estimates were based on robust, replicable incidence-based costing methods. The availability of accurate, up-to-date estimates should contribute to policy analysis, facilitate comparisons with other public health problems, and support future economic evaluations of CSA-specific policy and practice. In particular, we hope the availability of credible and contemporary estimates will support increased attention to primary prevention of CSA. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors, and analysis based on them is unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this is Small Area Estimation (SAE). Among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). The maximum likelihood (ML) version of EBLUP does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling average household expenditure per capita, and implements a bootstrap procedure to calculate the mean squared error (MSE) in order to compare the accuracy of the EBLUP method with the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.
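A minimal sketch of an area-level (Fay-Herriot-type) EBLUP with the between-area variance estimated by REML, on simulated data. This is a generic illustration of the shrinkage idea, not the authors' pipeline, and all data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def eblup_fh(y, X, D):
    """Fay-Herriot EBLUP: y_i = x_i'beta + v_i + e_i, with v_i ~ N(0, A)
    and known sampling variances D_i.  A is estimated by REML."""
    def neg_restricted_loglik(A):
        V = A + D                       # diagonal of Var(y)
        XtVi = X.T / V                  # X' V^-1
        XtViX = XtVi @ X
        beta = np.linalg.solve(XtViX, XtVi @ y)   # GLS estimate
        r = y - X @ beta
        return 0.5 * (np.sum(np.log(V))
                      + np.linalg.slogdet(XtViX)[1]
                      + np.sum(r**2 / V))
    A = minimize_scalar(neg_restricted_loglik,
                        bounds=(1e-8, 10 * float(np.var(y))),
                        method="bounded").x
    V = A + D
    XtVi = X.T / V
    beta = np.linalg.solve(XtVi @ X, XtVi @ y)
    gamma = A / (A + D)                 # shrinkage weights
    return gamma * y + (1 - gamma) * (X @ beta), A

rng = np.random.default_rng(0)
m = 30
X = np.column_stack([np.ones(m), rng.normal(size=m)])
D = rng.uniform(0.5, 2.0, m)            # known sampling variances
theta = X @ np.array([1.0, 0.5]) + rng.normal(0, 1.0, m)  # true area means
y = theta + rng.normal(0, np.sqrt(D))   # direct estimates
est, A_hat = eblup_fh(y, X, D)
print(np.mean((est - theta)**2), np.mean((y - theta)**2))  # EBLUP vs. direct MSE
```

In practice the MSE of the EBLUP would itself be estimated by a bootstrap, as the paper describes.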
Extreme risk assessment based on normalized historic loss data
NASA Astrophysics Data System (ADS)
Eichner, Jan
2017-04-01
Natural hazard risk assessment and risk management focus on the expected loss magnitudes of rare and extreme events. Such large-scale loss events typically comprise all aspects of compound events and accumulate losses from multiple sectors (including knock-on effects). Utilizing Munich Re's NatCatSERVICE direct economic loss data, we briefly recap a novel methodology of peril-specific loss data normalization, which improves the stationarity properties of highly non-stationary historic loss data (distorted by socio-economic growth of assets prone to destructive forces), and perform extreme value analysis (peaks-over-threshold method) to derive return-level estimates of, for example, 100-year loss event scenarios for various types of perils, globally or per continent, and discuss uncertainty in the results.
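A generic peaks-over-threshold sketch of the kind of return-level estimate described: fit a generalized Pareto distribution to threshold excesses and invert it at the desired return period. The data, threshold, and event rate below are synthetic, not NatCatSERVICE values:

```python
import numpy as np
from scipy.stats import genpareto

def return_level(losses, threshold, events_per_year, return_period_yr):
    """Peaks-over-threshold return-level estimate.

    Fits a generalized Pareto distribution to threshold excesses and
    returns the loss level expected once per `return_period_yr` years.
    """
    excesses = losses[losses > threshold] - threshold
    xi, _, sigma = genpareto.fit(excesses, floc=0)
    # expected number of exceedances over the return period
    n_exc = return_period_yr * events_per_year * len(excesses) / len(losses)
    return threshold + sigma / xi * (n_exc**xi - 1)

rng = np.random.default_rng(42)
losses = rng.pareto(2.0, 2000) * 10.0     # synthetic heavy-tailed event losses
rl100 = return_level(losses, threshold=30.0, events_per_year=20,
                     return_period_yr=100)
print(round(rl100, 1))  # estimated 100-yr loss level
```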
Efficient robust doubly adaptive regularized regression with applications.
Karunamuni, Rohana J; Kong, Linglong; Tu, Wei
2018-01-01
We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
Real Time Intraoperative Monitoring of Blood Loss with a Novel Tablet Application.
Sharareh, Behnam; Woolwine, Spencer; Satish, Siddarth; Abraham, Peter; Schwarzkopf, Ran
2015-01-01
Real-time monitoring of blood loss is critical in fluid management. Visual estimation remains the standard of care in estimating blood loss, yet is demonstrably inaccurate. Photometric analysis, which is the reference "gold standard" for measuring blood loss, is both time-consuming and costly. The purpose of this study was to evaluate the efficacy of a novel tablet monitoring device for measurement of Hb loss during orthopaedic procedures. This is a prospective study of 50 patients in a consecutive series of joint arthroplasty cases. The novel system with Feature Extraction Technology was used to measure the amount of Hb contained within surgical sponges intra-operatively. The system's measures were then compared with those obtained via the gravimetric method and photometric analysis. Accuracy was evaluated using linear regression and Bland-Altman analysis. Our results showed a significant positive correlation between the Triton tablet system and photometric analysis with respect to intra-operative hemoglobin and blood loss, at 0.92 and 0.91, respectively. This novel system can accurately determine Hb loss contained within surgical sponges. We believe that this user-friendly software can be used for measurement of total intraoperative blood loss and thus aid in more accurate fluid management protocols during orthopaedic surgical procedures.
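The Bland-Altman analysis mentioned reduces to the mean paired difference (bias) and its 95% limits of agreement. A small sketch with hypothetical paired readings (not data from the study):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two measurement methods.

    Returns the mean difference (bias) and the 95% limits of agreement.
    """
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical Hb-loss readings (g): tablet system vs. photometric assay
tablet = np.array([12.1, 8.4, 15.0, 9.8, 11.3])
photometric = np.array([11.8, 8.9, 14.6, 10.1, 11.0])
bias, (lo, hi) = bland_altman(tablet, photometric)
print(round(bias, 3), lo < 0 < hi)
```

Good agreement corresponds to a bias near zero with limits of agreement narrow enough to be clinically acceptable.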
Relationships between salt marsh loss and dredged canals in three Louisiana Estuaries
Bass, A.S.; Turner, R.E.
1997-01-01
Coastal land loss rates were quantified for 27 salt marshes in three estuaries of the Louisiana Mississippi Deltaic plain: Barataria, Terrebonne and St. Bernard. The sites ranged from 23 ha to 908 ha and the total area of all sites was 6,367 ha. Two methods were used to calculate open water and canal density in each of five years: (1) a Geographic Information System for 1956 and 1978, and (2) a point grid method for 1974, 1988, and 1990. A General Linear Model explained 79% of the variance (R2 = 0.79; P ≥ 0.95) among basins for all years and provided an estimate of the impacts of canals in each basin. The land loss rates, virtually all occurring as wetland to open water conversions, were different in each basin. The 'background' land loss rates from 1956 to 1990 (exclusive of the direct or indirect effects of canals; %/yr; mean ± 1 Std. Dev.) for each basin were estimated to be: Barataria 0.71 ± 0.12, Terrebonne 0.47 ± 0.09, and St. Bernard 0.08 ± 0.14. Canals were equally and directly correlated with land loss in each basin. There were 2.85 ha of open water formed for each ha of canal dredged (inclusive of the canal area) and an additional 1 ha of wetland converted to spoil bank vegetation. Additional losses may occur if loss rates continue for periods longer than the mapping intervals. There are documented causal mechanisms involving wetland hydrologic changes that can explain these wetland losses.
Bayesian inference for disease prevalence using negative binomial group testing
Pritchard, Nicholas A.; Tebbs, Joshua M.
2011-01-01
Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
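A grid-based sketch of the kind of posterior described: pools of size k are tested, a pool is positive unless all members are negative, and a Beta prior on prevalence is updated by the pooled-testing likelihood. The counts, pool size, and prior below are illustrative, and the grid approximation stands in for the paper's closed-form results:

```python
import numpy as np

def posterior_prevalence(n_pools, r_positive, pool_size, a=1.0, b=1.0):
    """Grid approximation of the prevalence posterior under group testing
    with a Beta(a, b) prior.

    A pool is negative iff all `pool_size` members are negative, so
    P(pool positive) = 1 - (1 - p)**pool_size.
    """
    p = np.linspace(1e-4, 0.5, 2000)
    pi_pos = 1 - (1 - p)**pool_size
    like = pi_pos**r_positive * (1 - pi_pos)**(n_pools - r_positive)
    post = like * p**(a - 1) * (1 - p)**(b - 1)
    post /= post.sum()                  # normalize over the grid
    return (p * post).sum(), post       # posterior mean, density weights

mean, _ = posterior_prevalence(n_pools=40, r_positive=3, pool_size=10)
print(round(mean, 4))  # posterior mean prevalence
```

Under inverse (negative binomial) sampling, testing stops once `r_positive` pools are observed; the likelihood kernel in p is the same up to a constant, so the posterior is unchanged.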
Gering, Kevin L.
2013-06-18
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples charge characteristics of the electrochemical cell. The computing system periodically determines cell information from the charge characteristics of the electrochemical cell. The computing system also periodically adds a first degradation characteristic from the cell information to a first sigmoid expression, periodically adds a second degradation characteristic from the cell information to a second sigmoid expression and combines the first sigmoid expression and the second sigmoid expression to develop or augment a multiple sigmoid model (MSM) of the electrochemical cell. The MSM may be used to estimate a capacity loss of the electrochemical cell at a desired point in time and analyze other characteristics of the electrochemical cell. The first and second degradation characteristics may be loss of active host sites and loss of free lithium for Li-ion cells.
Huizinga, Richard J.
2014-01-01
The rainfall-runoff pairs from the storm-specific GUH analysis were further analyzed against various basin and rainfall characteristics to develop equations to estimate the peak streamflow and flood volume based on a quantity of rainfall on the basin.
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
[Methodologies for estimating the indirect costs of traffic accidents].
Carozzi, Soledad; Elorza, María Eugenia; Moscoso, Nebel Silvana; Ripari, Nadia Vanina
2017-01-01
Traffic accidents generate multiple costs to society, including those associated with the loss of productivity. However, there is no consensus about the most appropriate methodology for estimating those costs. The aim of this study was to review methods for estimating indirect costs applied in crash cost studies. A thematic review of the literature from 1995 to 2012 was carried out in PubMed with the terms cost of illness, indirect cost, road traffic injuries, and productivity loss. Costs were assessed with the human capital method, on the basis of the wage income lost during the treatment and recovery of patients and caregivers. In the case of premature death or total disability, a discount rate was applied to obtain the present value of lost future earnings. The number of years lost was computed by subtracting the average age of those affected from life expectancy at birth, for those not yet incorporated into economically active life. The interest in minimizing the problem is reflected in the evolution of the implemented methodologies. We expect that this review will be useful for efficiently estimating the real indirect costs of traffic accidents.
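The human-capital step for premature death or total disability is a standard discounted sum of future earnings. A minimal sketch with a hypothetical wage, horizon, and discount rate:

```python
def present_value_lost_earnings(annual_wage, years_lost, discount_rate):
    """Human-capital estimate: discounted value of future earnings lost
    to premature death or total disability."""
    return sum(annual_wage / (1 + discount_rate)**t
               for t in range(1, years_lost + 1))

# e.g. 30 years of a 20,000 annual wage discounted at 3%
pv = present_value_lost_earnings(20_000, 30, 0.03)
print(round(pv))  # well below the undiscounted 600,000
```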
Effects of tag loss on direct estimates of population growth rate
Rotella, J.J.; Hines, J.E.
2005-01-01
The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical complexity of finding a radiated power probability density function.
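The two-state Markov chain model for incident data signals yields long-run bit-state probabilities from its transition matrix. A small sketch with an illustrative transition matrix (the probabilities are not from the source):

```python
import numpy as np

# Two-state Markov chain for the data signal: state 0 = bit low, 1 = bit high.
# P[i, j] = probability of moving from state i to state j each bit period.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary bit-state probabilities solve pi @ P = pi with pi summing to 1,
# i.e. the eigenvector of P.T for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
print(pi)  # long-run probability of each bit state
```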
NASA Astrophysics Data System (ADS)
Welle, Paul D.; Mauter, Meagan S.
2017-09-01
This work introduces a generalizable approach for estimating field-scale agricultural yield losses due to soil salinization. When integrated with regional data on crop yields and prices, this model provides high-resolution estimates of revenue losses over large agricultural regions. These methods account for the uncertainty inherent in model inputs derived from satellites, experimental field data, and interpreted model results. We apply this method to estimate the effect of soil salinity on agricultural outputs in California, performing the analysis with both high-resolution (i.e. field-scale) and low-resolution (i.e. county-scale) data sources to highlight the importance of spatial resolution in agricultural analysis. We estimate that soil salinity reduced agricultural revenues by $3.7 billion ($1.7-7.0 billion) in 2014, amounting to 8.0 million tons of lost production relative to soil salinities below the crop-specific thresholds. When using low-resolution data sources, we find that the costs of salinization are underestimated by a factor of three. These results highlight the need for high-resolution data in agro-environmental assessment as well as the challenges associated with their integration.
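Yield loss from salinity is commonly modeled with a piecewise-linear threshold response (Maas-Hoffman type), which matches the crop-specific thresholds the abstract refers to. A sketch with illustrative parameters (not values from the paper):

```python
import numpy as np

def relative_yield(ec, threshold, slope_pct):
    """Piecewise-linear (Maas-Hoffman-type) salinity response:
    full yield below the crop-specific threshold, then a linear
    percentage decline per unit of soil salinity (EC, dS/m)."""
    loss = slope_pct / 100.0 * np.maximum(ec - threshold, 0.0)
    return np.clip(1.0 - loss, 0.0, 1.0)

# Illustrative crop parameters: 6 dS/m threshold, 5% yield loss per dS/m
ec_field = np.array([2.0, 6.0, 10.0, 30.0])
print(relative_yield(ec_field, threshold=6.0, slope_pct=5.0))
# relative yields: 1, 1, 0.8, 0
```

Multiplying the yield deficit by field area, expected yield, and crop price gives the per-field revenue loss that is then aggregated regionally.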
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
2018-01-01
Aims: We aim to perform a theoretical evaluation of the impact of the mass loss indetermination on asteroseismic grid based estimates of masses, radii, and ages of stars in the red giant branch (RGB) phase. Methods: We adopted the SCEPtER pipeline on a grid spanning the mass range [0.8; 1.8] M⊙. As observational constraints, we adopted the star effective temperatures, the metallicity [Fe/H], the average large frequency spacing Δν, and the frequency of maximum oscillation power νmax. The mass loss was modelled following a Reimers parametrization with the two different efficiencies η = 0.4 and η = 0.8. Results: In the RGB phase, the average random relative error (owing only to observational uncertainty) on mass and age estimates is about 8% and 30% respectively. The bias in mass and age estimates caused by the adoption of a wrong mass loss parameter in the recovery is minor for the vast majority of the RGB evolution. The biases get larger only after the RGB bump. In the last 2.5% of the RGB lifetime the error on the mass determination reaches 6.5% becoming larger than the random error component in this evolutionary phase. The error on the age estimate amounts to 9%, that is, equal to the random error uncertainty. These results are independent of the stellar metallicity [Fe/H] in the explored range. Conclusions: Asteroseismic-based estimates of stellar mass, radius, and age in the RGB phase can be considered mass loss independent within the range (η ∈ [0.0,0.8]) as long as the target is in an evolutionary phase preceding the RGB bump.
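For context, asteroseismic mass estimates in these observables are often anchored on the classical scaling relation in Δν, νmax, and Teff. This is a simpler relation than the grid-based SCEPtER pipeline the paper uses; solar reference values below are commonly cited ones:

```python
def scaling_mass(nu_max, delta_nu, teff,
                 nu_max_sun=3090.0, delta_nu_sun=135.1, teff_sun=5777.0):
    """Asteroseismic scaling-relation mass estimate in solar units:
    M/Msun = (nu_max/nu_max_sun)^3 * (delta_nu/delta_nu_sun)^-4
             * (Teff/Teff_sun)^1.5."""
    return ((nu_max / nu_max_sun)**3
            * (delta_nu / delta_nu_sun)**-4
            * (teff / teff_sun)**1.5)

print(scaling_mass(3090.0, 135.1, 5777.0))  # → 1.0 (the Sun, by construction)
```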
Kingston, J K; Geor, R J; McCutcheon, L J
1997-02-01
To compare dew-point hygrometry, direct sweat collection, and measurement of body water loss as methods for determination of sweating rate (SR) in exercising horses. 6 exercise-trained Thoroughbreds. SR was measured in 6 horses exercising at 40% of the speed that elicited maximum oxygen consumption for 45 km, with a 15-minute rest at the end of each 15-km phase. Each horse completed 2 exercise trials. Dew-point hygrometry, as a method of local SR determination, was validated in vitro by measurement of rate of evaporative water loss. During exercise, local SR was determined every 10 minutes by the following 2 methods: (1) dew-point hygrometry on the neck and lateral area of the thorax, and (2) on the basis of the volume of sweat collected from a sealed plastic pouch attached to the lateral area of the thorax. Mean whole body SR was calculated from total body water loss incurred during exercise. Evaporation rate measured by use of dew-point hygrometry was significantly correlated (r2 = 0.92) with the actual rate of evaporative water loss. There was a similar pattern of change in SR measured by dew-point hygrometry on the neck and lateral area of the thorax during exercise, with a significantly higher SR on the neck. The SR measured on the thorax by direct sweat collection and by dew-point hygrometry were of similar magnitude. Mean whole body SR calculated from total body water loss was not significantly different from mean whole body SR estimated from direct sweat collection or dew-point hygrometry measurements on the thorax. Dew-point hygrometry and direct sweat collection are useful methods for determination of local SR in horses during prolonged, steady-state exercise in moderate ambient conditions. Both methods of local SR determination provide an accurate estimated of whole body SR.
NASA Astrophysics Data System (ADS)
Yuan, Weijia; Coombs, T. A.; Kim, Jae-Ho; Han Kim, Chul; Kvitkovic, Jozef; Pamidi, Sastry
2011-12-01
Theoretical and experimental AC loss data on a superconducting pancake coil wound using second-generation (2G) conductors are presented. An anisotropic critical state model is used to calculate the critical current and the AC losses of a superconducting pancake coil. In the coil there are two regions, the critical state region and the subcritical region. The model assumes that in the subcritical region the flux lines are parallel to the tape's wide face. AC losses of the superconducting pancake coil are calculated using this model. Both calorimetric and electrical techniques were used to measure AC losses in the coil. The calorimetric method is based on measuring the boil-off rate of liquid nitrogen. The electrical method used a compensation circuit to eliminate the inductive component in order to measure the loss voltage of the coil. The experimental results are consistent with the theoretical calculations, thus validating the anisotropic critical state model for loss estimation in the superconducting pancake coil.
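The calorimetric technique converts the measured nitrogen boil-off rate into dissipated power via the latent heat of vaporization. A sketch assuming the boil-off is quoted as liquid volume, with textbook LN2 properties (roughly 199 kJ/kg latent heat, 0.807 kg/L density); the numbers are illustrative, not from the paper:

```python
def ac_loss_from_boiloff(boiloff_lpm, latent_heat_kj_per_kg=199.0,
                         density_kg_per_l=0.807):
    """Convert a liquid-nitrogen boil-off rate (litres/minute of liquid)
    into dissipated power in watts: P = mdot * L."""
    mdot = boiloff_lpm * density_kg_per_l / 60.0   # kg/s
    return mdot * latent_heat_kj_per_kg * 1e3      # W

print(round(ac_loss_from_boiloff(0.01), 1))  # watts dissipated at 0.01 L/min
```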
Using satellite laser ranging to measure ice mass change in Greenland and Antarctica
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Chambers, Don P.; Cheng, Minkang
2018-01-01
A least squares inversion of satellite laser ranging (SLR) data over Greenland and Antarctica could extend gravimetry-based estimates of mass loss back to the early 1990s and fill any future gap between the current Gravity Recovery and Climate Experiment (GRACE) and the future GRACE Follow-On mission. The results of a simulation suggest that, while separating the mass change between Greenland and Antarctica is not possible at the limited spatial resolution of the SLR data, estimating the total combined mass change of the two areas is feasible. When the method is applied to real SLR and GRACE gravity series, we find significantly different estimates of inverted mass loss. There are large, unpredictable, interannual differences between the two inverted data types, making us conclude that the current 5×5 spherical harmonic SLR series cannot be used to stand in for GRACE. However, a comparison with the longer IMBIE time series suggests that on a 20-year time frame, the inverted SLR series' interannual excursions may average out, and the long-term mass loss estimate may be reasonable.
Toward allocative efficiency in the prescription drug industry.
Guell, R C; Fischbaum, M
1995-01-01
Traditionally, monopoly power in the pharmaceutical industry has been measured by profits. An alternative method estimates the deadweight loss of consumer surplus associated with the exercise of monopoly power. Although upper- and lower-bound estimates for this inefficiency are far apart, they at least suggest a dramatically greater welfare loss than measures of industry profitability would imply. A proposed system would have the U.S. government employ its power of eminent domain to "take" and distribute pharmaceutical patents, providing as "just compensation" the present value of each patent's expected future monopoly profits. Given the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good as our lower-bound estimate of monopoly costs, while substantially improving efficiency at or near our upper-bound estimate.
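In its simplest form, the deadweight-loss measure discussed is the Harberger triangle: half the monopoly markup times the quantity restriction. With hypothetical numbers:

```python
def deadweight_loss(p_monopoly, p_competitive, q_monopoly, q_competitive):
    """Harberger-triangle estimate of the welfare loss from monopoly
    pricing: half the price markup times the quantity restriction."""
    return 0.5 * (p_monopoly - p_competitive) * (q_competitive - q_monopoly)

# Illustrative: a 10-unit markup on a drug, 1 million fewer units sold
print(deadweight_loss(30.0, 20.0, 4e6, 5e6))  # → 5000000.0
```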
Implementation and adaptation of a macro-scale methodology to calculate direct economic losses
NASA Astrophysics Data System (ADS)
Natho, Stephanie; Thieken, Annegret
2017-04-01
As one of the 195 member countries of the United Nations, Germany signed the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR). With this, though it is voluntary and non-binding, Germany agreed to report on achievements in reducing disaster impacts. Among other targets, the SFDRR aims at reducing direct economic losses in relation to the global gross domestic product by 2030, but how can this be measured without a standardized approach? The United Nations Office for Disaster Risk Reduction (UNISDR) has hence proposed a methodology to estimate direct economic losses per event and country on the basis of the number of damaged or destroyed items in different sectors. The method is based on experience from developing countries. However, its applicability in industrial countries has not been investigated so far. Therefore, this study presents the first implementation of this approach in Germany to test its applicability to the costliest natural hazards and suggests adaptations. The approach proposed by UNISDR considers assets in the sectors agriculture, industry, commerce, housing, and infrastructure, the latter covering roads and medical and educational facilities. The asset values are estimated on the basis of sector- and event-specific numbers of affected items, sector-specific mean sizes per item, their standardized construction costs per square meter, and a loss ratio of 25%. The methodology was tested for the three costliest natural hazard types in Germany, i.e. floods, storms and hail storms, in 13 case studies on the federal or state scale between 1984 and 2016. No complete calculation of all sectors necessary to describe the total direct economic loss was possible, due to incomplete documentation; therefore, the method was tested sector-wise. Three new modules were developed to better adapt the methodology to German conditions, covering private transport (cars), forestry and paved roads.
Unpaved roads, in contrast, were integrated into the agricultural and forestry sectors. Furthermore, overheads are proposed to include the costs of housing contents as well as the overall costs of public infrastructure, one of the most important damage sectors. All constants for sector-specific mean sizes and construction costs were adapted, and loss ratios were adapted for each event. Whereas the original UNISDR method over- and underestimates the losses of the tested events, the adapted method calculates losses in good accordance with documentation for river floods, hail storms and storms. For example, for the 2013 flood, economic losses of EUR 6.3 billion were calculated (UNISDR method: EUR 0.85 billion; documented: EUR 11 billion). For the 2013 hail storms, the calculated EUR 3.6 billion overestimates the documented losses of EUR 2.7 billion by less than the original UNISDR approach does with EUR 5.2 billion. Only for flash floods, where public infrastructure can account for more than 90% of total losses, is the method not applicable. The adapted methodology serves as a good starting point for macro-scale loss estimations by accounting for the most important damage sectors. By implementing this approach in damage and event documentation and reporting standards, consistent monitoring according to the SFDRR could be achieved.
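The core UNISDR calculation described above is a product of four factors per sector. A sketch with invented housing-sector figures:

```python
def sector_loss(n_items, mean_size_m2, construction_cost_per_m2,
                loss_ratio=0.25):
    """UNISDR-style direct loss for one sector: number of damaged items
    times a sector-specific mean item size, a standardized construction
    cost per square metre, and a loss ratio (25% in the original
    methodology)."""
    return n_items * mean_size_m2 * construction_cost_per_m2 * loss_ratio

# Illustrative housing-sector figures (not from the paper)
print(sector_loss(n_items=1000, mean_size_m2=100.0,
                  construction_cost_per_m2=1500.0))  # → 37500000.0
```

Summing such terms over all sectors, with event-specific loss ratios, gives the total direct economic loss per event.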
NASA Astrophysics Data System (ADS)
Cuca, B.; Agapiou, A.
2017-05-01
A 2006 UNESCO report identified soil loss as one of the main threats posed by climate change, with possible impacts on natural and cultural heritage. The study illustrated in this paper shows results from a geomatic perspective, applying an interdisciplinary approach undertaken to identify the major natural hazards affecting cultural landscapes and archaeological heritage in rural areas of Cyprus. In particular, Earth Observation (EO) and ground-based methods were identified and applied for mapping, monitoring and estimating the possible soil loss caused by soil erosion. Special attention was given to the land use/land cover factor (C) and its impact on the overall soil-loss estimate. The cover factor represents the effect of soil-disturbing activities, plants, crop sequence and productivity level, soil cover and subsurface biomass on soil erosion. Urban areas have a definite role in retarding the recharge process, leading to increased runoff and soil loss in the broader area. On the other hand, natural vegetation plays a predominant role in reducing water erosion. Land use change was estimated from the difference in NDVI values between Landsat 5 TM and Sentinel-2 data for the period from the 1980s until today. The cover factor was then estimated for both periods, and significant land use changes were further examined in areas of significant cultural and natural landscape value. The results were then compared in order to study the impact of land use change on soil erosion and hence on the soil loss rate in the selected areas.
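Deriving a C factor from NDVI is commonly done with the exponential relationship of Van der Knijff et al.; the abstract does not state which relation was used, so the formula and its default parameters below are an assumption for illustration only:

```python
import math

def c_factor(ndvi, alpha=2.0, beta=1.0):
    """Cover-management factor from NDVI via the Van der Knijff-style
    relation C = exp(-alpha * NDVI / (beta - NDVI)). alpha=2, beta=1
    are the commonly cited defaults, assumed here for illustration."""
    return math.exp(-alpha * ndvi / (beta - ndvi))

# Denser vegetation (higher NDVI) gives a lower C factor, i.e. less
# erosion potential, matching the role of natural vegetation above.
```

Computing C for both the Landsat-era and Sentinel-2-era NDVI maps and differencing them highlights where land use change has increased erosion potential.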
Testing of The Harp Guidelines On A Small Watershed In Finland
NASA Astrophysics Data System (ADS)
Granlund, K.; Rekolainen, S.
Finnish Environment Institute, Research Department; kirsti.granlund@vyh.fi. Watersheds have emerged as environmental units for assessing, controlling and reducing non-point-source pollution. Within the framework of international conventions such as OSPARCOM and HELCOM, and in the implementation of the EU Water Framework Directive, the criteria for model selection are of key importance. Harmonized Quantification and Reporting Procedures for Nutrients (HARP) aims at helping the implementation of the strategy of OSPAR (Convention for the Protection of the Marine Environment of the North-East Atlantic) in controlling eutrophication and reducing nutrient input to marine ecosystems by 50%. The guidelines quantify nitrogen and phosphorus losses from both point and nonpoint sources and help assess the effectiveness of the pollution reduction strategy. The HARP guidelines related respectively to the "Quantification of Nitrogen and Phosphorus Losses from Diffuse Anthropogenic Sources and Natural Background Losses" and to the "Quantification and Reporting of the Retention of Nitrogen and Phosphorus in River Catchments" were tested on a small, well-instrumented agricultural watershed in Finland. The project was coordinated by the Environment Institute of the Joint Research Centre. Three types of methodologies for estimating nutrient losses to watercourses were evaluated during the project. Simple methods based on regression equations or loading functions provide a quick way of estimating nutrient losses. Through these methods the pollutant load can be related to parameters such as slope, soil type, land use, management practices, etc. Relevant nutrient loading functions for the study catchment were collected during the project. One mid-range model was applied to simulate the nitrogen cycle in a simplified manner in relation to climate, soil properties, land use and management practices.
Physically based models describe the water and nutrient cycles within the watershed in detail. The ICECREAM and SWAT models were applied to the study watershed. ICECREAM is a management model based on the CREAMS model for predicting field-scale runoff and erosion; its nitrogen and phosphorus sub-models are based on the GLEAMS model. SWAT is a continuous-time, spatially distributed model that includes hydrological, sediment and chemical processes in river basins. The simple methods and the mid-range model for nitrogen proved fast and easy to apply, but due to limited information on crop-specific loading functions and nitrogen process rates (e.g. mineralisation in soil), only order-of-magnitude estimates for nutrient loads could be calculated. The ICECREAM model was used to estimate crop-specific nutrient losses from the agricultural area. The potential annual nutrient loads for the whole catchment were then calculated by including estimates for nutrient loads from other land-use classes (forested area and scattered settlement). Finally, calibration of the SWAT model was started to study in detail the effects of catchment characteristics on nutrient losses. The preliminary results of model testing are presented and the suitability of the different methodologies for estimating nutrient losses in Finnish catchments is discussed.
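The "simple methods" category above relates loads to land-use parameters via loading functions; an export-coefficient-style estimate can be sketched as follows (the coefficients are hypothetical, not values from the Finnish catchment):

```python
def diffuse_nutrient_load(areas_ha, export_coeffs_kg_per_ha):
    """Export-coefficient loading estimate: total annual nutrient load
    (kg) as the sum over land-use classes of class area (ha) times a
    class-specific export coefficient (kg/ha/yr)."""
    return sum(a * c for a, c in zip(areas_ha, export_coeffs_kg_per_ha))

# Hypothetical two-class catchment: 100 ha arable (10 kg N/ha/yr)
# plus 50 ha forest (2 kg N/ha/yr).
load_kg = diffuse_nutrient_load([100.0, 50.0], [10.0, 2.0])
```

Such lumped estimates are quick but, as the project found, only order-of-magnitude accurate without crop-specific coefficients.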
NASA Astrophysics Data System (ADS)
Wang, Jiandong; Wang, Shuxiao; Voorhees, A. Scott; Zhao, Bin; Jang, Carey; Jiang, Jingkun; Fu, Joshua S.; Ding, Dian; Zhu, Yun; Hao, Jiming
2015-12-01
Air pollution is a major environmental risk to health. In this study, short-term premature mortality due to particulate matter equal to or less than 2.5 μm in aerodynamic diameter (PM2.5) in the Yangtze River Delta (YRD) is estimated using PC-based human health benefits software. The economic loss is assessed using the willingness to pay (WTP) method. The contributions of each region, sector and gaseous precursor are also determined by employing the brute-force method. The results show that, in the YRD in 2010, short-term premature deaths caused by PM2.5 are estimated to be 13,162 (95% confidence interval (CI): 10,761-15,554), while the economic loss is 22.1 (95% CI: 18.1-26.1) billion Chinese Yuan. The industrial and residential sectors contributed the most, accounting for more than 50% of the total economic loss. Emissions of primary PM2.5 and NH3 are the major contributors to the health-related loss in winter, while the contribution of gaseous precursors such as SO2 and NOx is higher than that of primary PM2.5 in summer.
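Health-impact tools of this kind typically combine a log-linear concentration-response function with a monetary value per death; a sketch under that assumption (the coefficients below are placeholders, not the study's values):

```python
import math

def premature_deaths(baseline_rate, population, beta, delta_pm25):
    """PM2.5-attributable deaths via the standard log-linear
    concentration-response form: dM = y0 * pop * (1 - exp(-beta * dC)).
    baseline_rate (y0), beta and delta_pm25 are illustrative inputs."""
    return baseline_rate * population * (1.0 - math.exp(-beta * delta_pm25))

def economic_loss(deaths, wtp_per_death):
    """Monetize mortality with a willingness-to-pay value per death."""
    return deaths * wtp_per_death

d = premature_deaths(0.006, 1_000_000, 0.0004, 10.0)
```

Running the chain once per region/sector with one emission source zeroed out, then differencing against the base case, is the brute-force attribution the study describes.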
A dynamic programming approach to estimate the capacity value of energy storage
Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul
2013-09-17
Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
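The final mapping from a state-of-charge distribution to a reliability input can be sketched simply: the probability that storage holds less energy than the period requires plays the role of a forced outage rate. A drastically simplified illustration (the numbers are hypothetical, and the paper's dynamic program for producing the distribution is not reproduced here):

```python
def storage_forced_outage_rate(soc_probs, required_soc):
    """Probability that storage cannot serve the period's requirement,
    used as a forced-outage-rate-like input to reliability models.
    soc_probs: dict mapping state of charge -> probability (sums to 1)."""
    return sum(p for soc, p in soc_probs.items() if soc < required_soc)

# Hypothetical distribution: 5% fully discharged (prior shortage event),
# 15% half-charged, 80% full; the period needs 60 MWh.
efor = storage_forced_outage_rate({0.0: 0.05, 50.0: 0.15, 100.0: 0.80}, 60.0)
# P(SOC < 60) = 0.05 + 0.15 = 0.20
```

The 0.05 mass at zero is where the method's key feature shows up: earlier shortage events shift probability toward low charge states in later periods.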
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.
Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu
2017-02-26
In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices while the cross-correlation matrices are kept intact. The pair-matching of the two parameters is then achieved by extracting the diagonal elements of the URA. Thus, the proposed method can decrease the computational complexity, suppress the effect of additive noise and incur little information loss. Simulation results show that, in LGA, compared to other methods, the proposed method achieves performance improvements under white or colored noise conditions.
Entropic benefit of a cross-link in protein association.
Zaman, Muhammad H; Berry, R Stephen; Sosnick, Tobin R
2002-08-01
We introduce a method to estimate the loss of configurational entropy upon insertion of a cross-link into a dimeric system. First, a clear distinction is established between the loss of entropy upon tethering and upon binding, two quantities that are often considered equivalent. By comparing the probability distributions of the center-to-center distances for untethered and cross-linked versions, we are able to calculate the loss of translational entropy upon cross-linking. The distribution function for the untethered helices is calculated from the probability that a given helix is closer to its partner than to all other helices, the "Nearest Neighbor" method. This method requires no assumptions about the nature of the solvent, and hence resolves difficulties normally associated with calculations for systems in liquids. Analysis of the restriction of angular freedom upon tethering indicates that the loss of rotational entropy is negligible. The method is applied in the context of the folding of a ten-turn helical coiled coil, with the tether modeled as a Gaussian chain or a flexible amino acid chain. After correcting for loop closure entropy in the docked state, we estimate that introducing a six-residue tether into the coiled coil results in an effective chain concentration of about 4 or 100 mM, depending upon whether the helices are denatured or pre-folded prior to their association. Thus, tethering results in significant stabilization for systems with millimolar or stronger dissociation constants. Copyright 2002 Wiley-Liss, Inc.
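The core comparison above, between the untethered and cross-linked distance distributions, can be sketched generically with a discretized Shannon-type entropy; this is only an illustration of the idea, not the paper's Nearest Neighbor estimator:

```python
import math

def distribution_entropy(probs):
    """Configurational entropy (in units of k_B) of a discretized
    center-to-center distance distribution: S = -sum p ln p."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_loss_on_tethering(untethered_probs, tethered_probs):
    """Translational entropy lost when tethering narrows the
    center-to-center distance distribution."""
    return distribution_entropy(untethered_probs) - distribution_entropy(tethered_probs)

# A tether that halves the accessible distance bins (4 -> 2 equally
# likely bins) costs ln 2 in units of k_B.
ds = entropy_loss_on_tethering([0.25] * 4, [0.5, 0.5])
```

A narrower tethered distribution always gives a positive entropy loss here, consistent with tethering restricting translational freedom.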
Calorie Estimation in Adults Differing in Body Weight Class and Weight Loss Status
Brown, Ruth E; Canning, Karissa L; Fung, Michael; Jiandani, Dishay; Riddell, Michael C; Macpherson, Alison K; Kuk, Jennifer L
2016-01-01
Purpose: Ability to accurately estimate calories is important for weight management, yet few studies have investigated whether individuals can accurately estimate calories during exercise or in a meal. The objective of this study was to determine if accuracy of estimation of moderate or vigorous exercise energy expenditure and calories in food is associated with body weight class or weight loss status. Methods: Fifty-eight adults who were either normal weight (NW) or overweight (OW), and either attempting (WL) or not attempting weight loss (noWL), exercised on a treadmill at a moderate (60% HRmax) and a vigorous intensity (75% HRmax) for 25 minutes. Subsequently, participants estimated the number of calories they expended through exercise, and created a meal that they believed to be calorically equivalent to the exercise energy expenditure. Results: The mean difference between estimated and measured calories in exercise and food did not differ within or between groups following moderate exercise. Following vigorous exercise, OW-noWL overestimated energy expenditure by 72% and overestimated the calories in their food by 37% (P<0.05). OW-noWL also significantly overestimated exercise energy expenditure compared to all other groups (P<0.05), and significantly overestimated calories in food compared to both WL groups (P<0.05). However, among all groups there was a considerable range of over- and underestimation (−280 kcal to +702 kcal), as reflected by the large and statistically significant absolute error in calorie estimation of exercise and food. Conclusion: There was a wide range of under- and overestimation of calories during exercise and in a meal. Error in calorie estimation may be greater in overweight adults who are not attempting weight loss. PMID:26469988
How much groundwater did California's Central Valley lose during the 2012-2016 drought?
NASA Astrophysics Data System (ADS)
Xiao, Mu; Koppa, Akash; Mekonnen, Zelalem; Pagán, Brianna R.; Zhan, Shengan; Cao, Qian; Aierken, Abureli; Lee, Hyongki; Lettenmaier, Dennis P.
2017-05-01
We estimate net groundwater storage change in the Central Valley from April 2002 to September 2016 as the difference between inflows and outflows, precipitation, evapotranspiration, and changes in soil moisture and surface water storage. We also estimate total water storage change attributable to groundwater change using Gravity Recovery and Climate Experiment (GRACE) satellite data, which should be equivalent to our water balance estimates. Over two drought periods within our 14-1/2-year study period (January 2007 to December 2009 and October 2012 to September 2016), we estimate from our water balance that a total of 16.5 km3 and 40.0 km3 of groundwater was lost, respectively. Our water balance-based estimate of the overall groundwater loss over the 14-1/2 years is -20.7 km3, which includes substantial recovery during nondrought periods. The estimated rate of groundwater loss is greater during the recent drought (10.0 ± 0.2 versus 5.5 ± 0.3 km3/yr) than in the 2007-2009 drought, due to lower net inflows, a transition from row crops to trees, and higher crop water use, notwithstanding a reduction in irrigated area. The groundwater loss estimates from the two methods are quite consistent (-5.0 km3/yr for both water balance and GRACE during 2007-2009, and -10 km3/yr for water balance versus -11.2 km3/yr for GRACE during 2012-2016). However, over the entire study period, the GRACE-based groundwater loss estimate is almost triple that from the water balance, mostly because GRACE does not indicate the between-drought groundwater recovery that is inferred from our water balance.
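The residual water balance described above amounts to subtracting the other flux and storage terms from the net inflow; a minimal sketch (units km3 per period, illustrative numbers only):

```python
def groundwater_storage_change(inflow, outflow, precip, et,
                               d_soil_moisture, d_surface_water):
    """Net groundwater storage change as the basin water-balance
    residual: (inflow - outflow) + P - ET - dSM - dSW.
    A negative result indicates groundwater loss."""
    return (inflow - outflow) + precip - et - d_soil_moisture - d_surface_water

# Hypothetical dry year: 10 km3 in, 8 km3 out, 20 km3 precipitation,
# 25 km3 ET, +1 km3 soil moisture, +0.5 km3 surface water.
dgw = groundwater_storage_change(10.0, 8.0, 20.0, 25.0, 1.0, 0.5)
# (10-8) + 20 - 25 - 1 - 0.5 = -4.5 km3 (a loss)
```

Each term here carries its own measurement uncertainty, which is why an independent GRACE-based estimate is a useful cross-check.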
Hotspot Identification for Shanghai Expressways Using the Quantitative Risk Assessment Method
Chen, Can; Li, Tienan; Sun, Jian; Chen, Feng
2016-01-01
Hotspot identification (HSID) is the first and key step of the expressway safety management process. This study presents a new HSID method using the quantitative risk assessment (QRA) technique. Crashes that are likely to happen for a specific site are treated as the risk. The aggregation of the crash occurrence probability for all exposure vehicles is estimated based on the empirical Bayesian method. As for the consequences of crashes, crashes may not only cause direct losses (e.g., occupant injuries and property damages) but also result in indirect losses. The indirect losses are expressed by the extra delays calculated using the deterministic queuing diagram method. The direct losses and indirect losses are uniformly monetized to be considered as the consequences of this risk. The potential costs of crashes, as a criterion to rank high-risk sites, can be explicitly expressed as the sum of the crash probability for all passing vehicles and the corresponding consequences of crashes. A case study on the urban expressways of Shanghai is presented. The results show that the new QRA method for HSID enables the identification of a set of high-risk sites that truly reveal the potential crash costs to society. PMID:28036009
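The site ranking criterion above is the expected crash cost: aggregate crash probability over exposed vehicles times the monetized direct and indirect consequences. A sketch with hypothetical inputs (the study's empirical Bayesian probability model and queuing-diagram delay calculation are not reproduced):

```python
def site_risk_cost(crash_prob_per_vehicle, exposed_vehicles,
                   direct_cost, delay_vehicle_hours, value_of_time):
    """Potential crash cost for one site: expected crashes (probability
    per vehicle x exposed vehicles) times the sum of direct losses and
    monetized extra delay. All inputs are illustrative placeholders."""
    expected_crashes = crash_prob_per_vehicle * exposed_vehicles
    indirect_cost = delay_vehicle_hours * value_of_time
    return expected_crashes * (direct_cost + indirect_cost)

cost = site_risk_cost(1e-6, 100_000, 50_000.0, 100.0, 20.0)
```

Ranking sites by this cost, rather than by crash counts alone, is what lets the QRA approach surface sites with large societal consequences.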
NASA Astrophysics Data System (ADS)
Marksteiner, Quinn R.; Treiman, Michael B.; Chen, Ching-Fong; Haynes, William B.; Reiten, M. T.; Dalmas, Dale; Pulliam, Elias
2017-06-01
A resonant cavity method is presented which can measure loss tangents and dielectric constants for materials with dielectric constant from 150 to 10 000 and above. This practical and accurate technique is demonstrated by measuring barium strontium zirconium titanate bulk ferroelectric ceramic blocks. Above the Curie temperature, in the paraelectric state, barium strontium zirconium titanate has a sufficiently low loss that a series of resonant modes are supported in the cavity. At each mode frequency, the dielectric constant and loss tangent are obtained. The results are consistent with low frequency measurements and computer simulations. A quick method of analyzing the raw data using the 2D static electromagnetic modeling code SuperFish and an estimate of uncertainties are presented.
Simulating eroded soil organic carbon with the SWAT-C model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuesong
The soil erosion and associated lateral movement of eroded carbon (C) have been identified as a possible mechanism explaining the elusive terrestrial C sink of ca. 1.7-2.6 PgC yr(-1). Here we evaluated the SWAT-C model for simulating long-term soil erosion and associated eroded C yields. Our method couples the CENTURY carbon cycling processes with a Modified Universal Soil Loss Equation (MUSLE) to estimate C losses associated with soil erosion. The results show that SWAT-C simulates long-term average eroded C yields well and correctly estimates the relative magnitude of eroded C yields across crop rotations. We also evaluated three methods of calculating the C enrichment ratio in mobilized sediments, and found that errors associated with enrichment ratio estimation represent a significant uncertainty in SWAT-C simulations. Furthermore, we discuss limitations and future development directions for SWAT-C to advance C cycling modeling and assessment.
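MUSLE, as used in SWAT, replaces the rainfall-energy term of the USLE with a runoff term; a sketch of that standard form (the parameter values in the assertion are arbitrary illustrations):

```python
def musle_sediment_yield(q_surf_mm, q_peak_m3s, area_ha,
                         k, c, p, ls, cfrg=1.0):
    """Event sediment yield (metric tons) per the MUSLE form used in
    SWAT: sed = 11.8 * (Q_surf * q_peak * area)^0.56 * K * C * P * LS
    * CFRG, with surface runoff Q_surf in mm, peak runoff rate q_peak
    in m3/s and area in ha."""
    return (11.8 * (q_surf_mm * q_peak_m3s * area_ha) ** 0.56
            * k * c * p * ls * cfrg)
```

Multiplying this sediment yield by a carbon enrichment ratio and the soil C concentration gives the eroded C flux, which is where the enrichment-ratio uncertainty noted above enters.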
NASA Astrophysics Data System (ADS)
So, E.
2010-12-01
Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Improving our understanding of what contributes to casualties in earthquakes requires coordinated data-gathering efforts amongst disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering these complex physical pathways, loss models based purely on historic casualty data, or worse, on rates derived from other countries, will be of very limited value. What's more, as the world's population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise, including engineering, public health and medicine. Research is needed to find consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities and cities which most need them.
Coupling the theories and findings from the field surveys with experiments would also be advantageous, as it is not always possible to validate theories and models with actual earthquake data. In addition, colleagues in other disciplines will benefit from being introduced to the loss algorithms, methodologies and advances familiar to the engineering community, helping dissemination in earthquake mitigation and preparedness programs. It follows that new approaches to loss estimation must include a progressive assessment of what contributes to the final casualty value. In analyzing recent earthquakes, testing common hypotheses, talking to local and international researchers in the field, interviewing search and rescue and medical personnel, and comparing notes with colleagues who have visited other events, the author has developed a list of contributory factors for formulating fatality rates for use in earthquake loss estimation models. In this presentation, we will first look at the current state of data collection and assessment in casualty loss estimation. Then, analyses of recent earthquake field data, which provide important insights into the contributory factors of fatalities in earthquakes, will be explored. The benefits of a multi-disciplinary approach in deriving fatality rates for masonry buildings will then be examined in detail.
Multi-model ensembles for assessment of flood losses and associated uncertainty
NASA Astrophysics Data System (ADS)
Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi
2018-05-01
Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.
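Combining member outputs into a weighted ensemble mean is the simplest form of the approach described above; a minimal sketch (the estimates and weights below are hypothetical, and deriving weights from the rating framework is not reproduced):

```python
def ensemble_loss(predictions, weights=None):
    """Weighted multi-model ensemble mean of flood loss predictions.
    Equal weights by default; unequal weights could encode relative
    degrees of belief between models, as in a rating framework."""
    if weights is None:
        weights = [1.0 / len(predictions)] * len(predictions)
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * p for w, p in zip(weights, predictions))

# Three hypothetical model estimates (EUR million) with rating-based weights.
est = ensemble_loss([120.0, 80.0, 100.0], [0.5, 0.2, 0.3])
# 0.5*120 + 0.2*80 + 0.3*100 = 106
```

Beyond the mean, the spread of the weighted members provides the probability distribution of model uncertainty that the paper evaluates.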
A comparison of methods for assessing power output in non-uniform onshore wind farms
Staid, Andrea; VerHulst, Claire; Guikema, Seth D.
2017-10-02
Wind resource assessments are used to estimate a wind farm's power production during the planning process. It is important that these estimates are accurate, as they can impact financing agreements, transmission planning, and environmental targets. Here, we analyze the challenges in wind power estimation for onshore farms. Turbine wake effects are a strong determinant of farm power production. With given input wind conditions, wake losses typically cause downstream turbines to produce significantly less power than upstream turbines. These losses have been modeled extensively and are well understood under certain conditions. Most notably, validation of different model types has favored offshore farms. Models that capture the dynamics of offshore wind conditions do not necessarily perform as well for onshore wind farms. We analyze the capabilities of several different methods for estimating wind farm power production in 2 onshore farms with non-uniform layouts. We compare the Jensen model to a number of statistical models, to meteorological downscaling techniques, and to using no model at all. In conclusion, we show that the complexities of some onshore farms result in wind conditions that are not accurately modeled by the Jensen wake decay techniques and that statistical methods have some strong advantages in practice.
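The Jensen model referenced above treats the wake as a linearly expanding top-hat deficit; a sketch of its standard single-wake form (the induction factor and decay constant below are typical textbook values, not tuned to either farm in the study):

```python
def jensen_wake_velocity(u_free, x, rotor_radius, a=1.0 / 3.0, k=0.075):
    """Wind speed at distance x downwind of a turbine under the Jensen
    wake model: v = u * (1 - 2a / (1 + k*x/r)^2), with induction
    factor a and wake decay constant k (0.075 is a common onshore
    default)."""
    deficit = 2.0 * a / (1.0 + k * x / rotor_radius) ** 2
    return u_free * (1.0 - deficit)

# The deficit decays with distance: a turbine 500 m downwind sees
# faster wind than one 200 m downwind of the same machine.
```

Its sensitivity to the single decay constant k is one reason the model struggles when onshore terrain and non-uniform layouts complicate the flow.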
Robust Variable Selection with Exponential Squared Loss.
Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping
2013-04-01
Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness in a way that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.
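The exponential squared loss is bounded, which is what limits the influence of outliers; a sketch of the loss function itself (the tuning parameter value is illustrative, and the penalized fitting procedure is not reproduced):

```python
import math

def exp_squared_loss(residual, gamma=1.0):
    """Exponential squared loss phi(t) = 1 - exp(-t^2 / gamma).
    Bounded above by 1, so an extreme outlier's contribution cannot
    grow without limit; gamma tunes the robustness/efficiency
    trade-off (small gamma = more robust)."""
    return 1.0 - math.exp(-residual ** 2 / gamma)

# Bounded: an extreme outlier contributes at most 1.
assert exp_squared_loss(100.0) <= 1.0
# Near zero it behaves like ordinary squared loss, t^2 / gamma.
assert abs(exp_squared_loss(0.01) - 0.0001) < 1e-6
```

Contrast with ordinary least squares, where the same residual of 100 would contribute 10,000 to the objective and dominate the fit.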
NASA Astrophysics Data System (ADS)
Lin, Ching-Ho; Lai, Chin-Hsing; Wu, Yee-Lin; Chen, Ming-Jen
2010-11-01
Determining the destruction of both ozone and odd oxygen, Ox, in the nocturnal boundary layer (NBL) is important for evaluating the regional ozone budget and overnight ozone accumulation. This work develops a simple method to determine the dry deposition velocity of ozone and its destruction in a polluted nocturnal boundary layer. The destruction of Ox can also be determined simultaneously. The method is based on O3 and NO2 profiles and their surface measurements. Linkages between the dry deposition velocities of O3 and NO2, and between the dry deposition loss of Ox and its chemical loss, are constructed and used. Field measurements were made at an agricultural site to demonstrate the application of the model. The model estimated nocturnal O3 dry deposition velocities of 0.13 to 0.19 cm s-1, very close to those previously obtained for similar land types. Additionally, dry deposition and chemical reactions account for 60% and 40% of the overall nocturnal ozone loss, respectively; ozone dry deposition accounts for 50% of the overall nocturnal loss of Ox, dry deposition of NO2 accounts for another 20%, and chemical reactions account for the remaining 30%. The proposed method enables the use of measurements made in typical ozone field studies to evaluate the various nocturnal destruction pathways of O3 and Ox in a polluted environment.
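The contribution of dry deposition to the overnight budget can be sketched with a standard box-model approximation, dC/dt = -v_d C / h; this is an assumed simplification for illustration, not the paper's profile-based derivation:

```python
def deposition_loss_rate(v_d_cm_s, conc_ppb, mixing_height_m):
    """Ozone loss rate (ppb/h, negative = loss) from dry deposition in
    a well-mixed nocturnal layer of height h: dC/dt = -v_d * C / h."""
    v_d_m_s = v_d_cm_s / 100.0  # cm/s -> m/s
    return -v_d_m_s * conc_ppb / mixing_height_m * 3600.0  # per hour

# A deposition velocity of 0.15 cm/s (mid-range of the estimates
# above) acting on 40 ppb ozone in a 100 m nocturnal layer.
rate = deposition_loss_rate(0.15, 40.0, 100.0)
```

With the deposition term fixed this way, the residual of the observed overnight O3 decline is attributable to chemical loss, which is the kind of partitioning the paper quantifies.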
Weir, Hannah K; Li, Chunyu; Henley, S Jane; Joseph, Djenaba
2017-05-01
Background: Educational attainment (EA) is inversely associated with colorectal cancer risk. Colorectal cancer screening can save lives if precancerous polyps or early cancers are found and successfully treated. This study aims to estimate the potential productivity loss (PPL) and associated avoidable colorectal cancer-related deaths among screen-eligible adults residing in lower EA counties in the United States. Methods: Mortality and population data were used to examine colorectal cancer deaths (2008-2012) among adults aged 50 to 74 years in lower EA counties, and to estimate the expected number of deaths using the mortality experience from high EA counties. Excess deaths (observed minus expected) were used to estimate potential years of life lost, and the human capital method was used to estimate PPL in 2012 U.S. dollars. Results: County-level colorectal cancer death rates were inversely associated with county-level EA. Of the 100,857 colorectal cancer deaths in lower EA counties, we estimated that more than 21,000 (1 in 5) were potentially avoidable and resulted in nearly $2 billion in annual productivity loss. Conclusions: County-level EA disparities contribute to a large number of potentially avoidable colorectal cancer-related deaths. Increased prevention and improved screening could potentially decrease deaths and help reduce the associated economic burden in lower EA communities. Increased screening could further reduce deaths in all EA groups. Impact: These results estimate the large economic impact of potentially avoidable colorectal cancer-related deaths in economically disadvantaged communities, as measured by lower EA. Cancer Epidemiol Biomarkers Prev; 26(5); 736-42. ©2016 American Association for Cancer Research (AACR).
Overweight and obesity on the island of Ireland: an estimation of costs
Dee, Anne; Callinan, Aoife; Doherty, Edel; O'Neill, Ciaran; McVeigh, Treasa; Sweeney, Mary Rose; Staines, Anthony; Kearns, Karen; Fitzgerald, Sarah; Sharp, Linda; Kee, Frank; Hughes, John; Balanda, Kevin; Perry, Ivan J
2015-01-01
Objectives The increasing prevalence of overweight and obesity worldwide continues to compromise population health and creates a wider societal cost in terms of productivity loss and premature mortality. Despite extensive international literature on the cost of overweight and obesity, findings are inconsistent between Europe and the USA, and particularly within Europe. Studies vary on issues of focus, specific costs and methods. This study aims to estimate the healthcare and productivity costs of overweight and obesity for the island of Ireland in 2009, using both top-down and bottom-up approaches. Methods Costs were estimated across four categories: healthcare utilisation, drug costs, work absenteeism and premature mortality. Healthcare costs were estimated using Population Attributable Fractions (PAFs). PAFs were applied to national cost data for hospital care and drug prescribing. PAFs were also applied to social welfare and national mortality data to estimate productivity costs due to absenteeism and premature mortality. Results The healthcare costs of overweight and obesity in 2009 were estimated at €437 million for the Republic of Ireland (ROI) and €127.41 million for Northern Ireland (NI). Productivity loss due to overweight and obesity was up to €865 million for ROI and €362 million for NI. The main drivers of healthcare costs are cardiovascular disease, type II diabetes, colon cancer, stroke and gallbladder disease. In terms of absenteeism, low back pain is the main driver in both jurisdictions, and for productivity loss due to premature mortality the primary driver of cost is coronary heart disease. Conclusions The costs are substantial, and urgent public health action is required in Ireland to address the problem of increasing prevalence of overweight and obesity, which if left unchecked will lead to unsustainable cost escalation within the health service and unacceptable societal costs. PMID:25776042
Wei, Yanyu; Zou, Jibin; Li, Jianjun; Qi, Wenjuan; Li, Yong
2014-01-01
A deep-sea permanent magnet motor equipped with a fluid-compensated pressure-tolerant system is compressed by high-pressure fluid both outside and inside. The induced stress distribution in the stator core is significantly different from that in a land-based motor. Its effect on the magnetic properties of the stator core is important for deep-sea motor designers but seldom reported. In this paper, the stress distribution in the stator core, accounting for the seawater compressive stress, is calculated by the 2D finite element method (FEM). The effect of compressive stress on the magnetic properties of electrical steel sheet, that is, permeability, B-H curves, and B-W curves, is also measured. Then, based on the measured magnetic properties and calculated stress distribution, the stator iron loss is estimated by stress-electromagnetics-coupled FEM. Finally, the estimation is verified by experiment. Both the calculated and measured results show that stator iron loss increases appreciably with the seawater compressive stress. PMID:25177717
Uncertainty in eddy covariance flux estimates resulting from spectral attenuation [Chapter 4
W. J. Massman; R. Clement
2004-01-01
Surface exchange fluxes measured by eddy covariance tend to be underestimated as a result of limitations in sensor design, signal processing methods, and finite flux-averaging periods. But, careful system design, modern instrumentation, and appropriate data processing algorithms can minimize these losses, which, if not too large, can be estimated and corrected using...
The total rate of mass return to the interstellar medium from red giants and planetary nebulae
NASA Technical Reports Server (NTRS)
Knapp, G. R.; Rauch, K. P.; Wilcots, E. M.
1990-01-01
High-luminosity post-main-sequence stars are observed to be losing mass in large amounts into the interstellar medium. The various methods used to estimate individual and total mass loss rates are summarized. Current estimates give a total mass-return rate of roughly 0.3-0.6 solar masses per year for the whole Galaxy.
Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2009-01-01
A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…
Assessment of winter wheat loss risk impacted by climate change from 1982 to 2011
NASA Astrophysics Data System (ADS)
Du, Xin
2017-04-01
The world's farmers will face increasing pressure to grow more food on less land in the coming decades, as continued population growth and the diversion of agricultural products to biofuels are expected to persist well into the future. The increased demand for food supply worldwide therefore calls for improved accuracy in crop productivity estimation and in the assessment of grain production loss risk. Extensive studies have evaluated the impacts of climate change on crop production using various crop models driven by global or regional climate model (GCM/RCM) output. However, such assessments are plagued by uncertainties in future climate change scenarios and by the complexity of the crop models. Given uncertain climate conditions and a lack of model parameters, these methods are strictly limited in application. In this study, an empirical approach for assessing crop loss risk due to water stress was established and used to evaluate the risk of winter wheat loss in China, the United States, Germany, France, and the United Kingdom. The average winter wheat loss risk due to water stress for the three European countries is about -931 kg/ha, clearly higher than that in China (-570 kg/ha) and the United States (-367 kg/ha). Our study has important implications for the further application of operational assessment of crop loss risk at the country or regional scale. Future studies should focus on using higher-spatial-resolution remote sensing data, combining actual evapotranspiration to estimate water stress, improving the downscaling of statistical crop yield data, and establishing a more rational and elaborate zoning method.
Oceanic and atmospheric forcing of Larsen C Ice-Shelf thinning
Holland, P. R.; Brisbourne, A.; Corr, H. F. J.; Mcgrath, Daniel; Purdon, K.; Paden, J.; Fricker, H. A.; Paolo, F. S.; Fleming, A.H.
2015-01-01
The catastrophic collapses of Larsen A and B ice shelves on the eastern Antarctic Peninsula have caused their tributary glaciers to accelerate, contributing to sea-level rise and freshening the Antarctic Bottom Water formed nearby. The surface of Larsen C Ice Shelf (LCIS), the largest ice shelf on the peninsula, is lowering. This could be caused by unbalanced ocean melting (ice loss) or enhanced firn melting and compaction (englacial air loss). Using a novel method to analyse eight radar surveys, this study derives separate estimates of ice and air thickness changes during a 15-year period. The uncertainties are considerable, but the primary estimate is that the surveyed lowering (0.066 ± 0.017 m yr−1) is caused by both ice loss (0.28 ± 0.18 m yr−1) and firn-air loss (0.037 ± 0.026 m yr−1). The ice loss is much larger than the air loss, but both contribute approximately equally to the lowering because the ice is floating. The ice loss could be explained by high basal melting and/or ice divergence, and the air loss by low surface accumulation or high surface melting and/or compaction. The primary estimate therefore requires that at least two forcings caused the surveyed lowering. Mechanisms are discussed by which LCIS stability could be compromised in the future. The most rapid pathways to collapse are offered by the ungrounding of LCIS from Bawden Ice Rise or ice-front retreat past a "compressive arch" in strain rates. Recent evidence suggests that either mechanism could pose an imminent risk.
A new approach to Ozone Depletion Potential (ODP) estimation
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Daniel, J. S.; Yu, P.
2017-12-01
The Ozone Depletion Potential (ODP) is given by the time-integrated global ozone loss of an ozone depleting substance (ODS) relative to a reference ODS (usually CFC-11). The ODP is used by the Montreal Protocol (and subsequent amendments) to inform policy decisions on the production of ODSs. Since the early 1990s, ODPs have usually been estimated using an approximate formalism that utilizes the lifetime and the fractional release factor of the ODS. This has the advantage that measured concentrations of the ODSs can be used to estimate their fractional release factors. However, there is a strong correlation between the stratospheric lifetimes and fractional release factors of ODSs, and this can introduce uncertainties into ODP calculations when the two terms are estimated independently. Instead, we show that the ODP is proportional to the average global ozone loss per equivalent chlorine molecule released in the stratosphere by the ODS loss process (which we call the Γ factor) and, importantly, that this ratio varies only over a relatively small range (approximately 0.3-1.5) for ODSs with stratospheric lifetimes of 20 to more than 1,000 years. The Γ factor varies smoothly with stratospheric lifetime for ODSs with loss processes dominated by photolysis and is larger for long-lived species, while stratospheric OH loss processes produce relatively small Γs that are nearly independent of stratospheric lifetime. The fractional release approach does not accurately capture these relationships. We propose a new formulation that takes advantage of this smooth variation by parameterizing the Γ factor using ozone changes computed with the chemistry-climate model CESM-WACCM and the NOCAR two-dimensional model. We show that while the absolute Γs vary between the WACCM and NOCAR models, much of the difference is removed for the Γ/ΓCFC-11 ratio that is used in the ODP formula. This parameterized method simplifies the computation of ODPs while providing enhanced accuracy compared to the fractional release method, and it can be used to estimate many ODPs given information on chemical reaction rates and photolysis processes.
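For context, the fractional-release formalism that the Γ-based parameterization is intended to improve upon can be sketched as the standard semi-empirical ODP formula. This is a generic illustration, not the paper's method: the CFC-11 reference values below (molar mass, lifetime, fractional release) are approximate textbook numbers, and the bromine efficiency factor `alpha` is an assumed nominal value.

```python
# Approximate reference values for CFC-11 (molar mass g/mol, lifetime yr,
# fractional release factor, chlorine atom count) -- illustrative only.
M_CFC11, TAU_CFC11, F_CFC11, N_CL_CFC11 = 137.37, 52.0, 0.47, 3

def semi_empirical_odp(molar_mass, lifetime, frac_release, n_cl, n_br=0, alpha=60.0):
    """Semi-empirical ODP relative to CFC-11.

    alpha is the assumed ozone-destruction efficiency of bromine
    relative to chlorine, per atom.
    """
    halogen = n_cl + alpha * n_br  # bromine is far more efficient per atom
    return ((frac_release / F_CFC11) * (lifetime / TAU_CFC11)
            * (M_CFC11 / molar_mass) * (halogen / N_CL_CFC11))

# A species with CFC-11's own parameters comes out at ODP = 1 by construction:
odp_ref = semi_empirical_odp(137.37, 52.0, 0.47, 3)
```

The paper's observation is that `lifetime` and `frac_release` in this formula are strongly correlated inputs, so estimating them independently propagates avoidable uncertainty; parameterizing the Γ ratio directly sidesteps that coupling.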
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
Variable parameter McCarthy-Muskingum routing method considering lateral flow
NASA Astrophysics Data System (ADS)
Yadav, Basant; Perumal, Muthiah; Bardossy, Andras
2015-04-01
The fully mass-conservative variable parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price (2013) for routing floods in channels and rivers without considering lateral flow is extended herein to account for a uniformly distributed lateral flow contribution along the reach. The proposed procedure is applied to study flood wave movement in a 24.2 km stretch of the Neckar River in Germany between the Rottweil and Oberndorf gauging stations, where significant lateral flow contribution from intermediate catchment rainfall prevails during flood wave movement. The geometric elements of the cross-section of the routing reach are estimated using the Robust Parameter Estimation (ROPE) algorithm, which arrives at the best-performing set of bed width and side slope for a trapezoidal section. The performance of the VPMM method is evaluated using the Nash-Sutcliffe model efficiency criterion as the objective function to be maximized by the ROPE algorithm. The twenty-seven flood events in the calibration set are used to identify the relationship between total rainfall and total losses and to optimize the geometric characteristics of the prismatic channel (width and side slope of the trapezoidal section). Based on this analysis, a relationship between total rainfall and total loss of the intermediate catchment is obtained and then used to estimate the lateral flow in the reach. Assuming the lateral flow hydrograph has the same shape as the inflow hydrograph, and using the total intervening catchment runoff estimated from the relationship, the uniformly distributed lateral flow rate qL at any instant of time is estimated for use in the VPMM routing method. All 27 flood events are simulated using this routing approach considering lateral flow along the reach, and many of the simulations reproduce the observed hydrographs very closely. 
The proposed approach of accounting for lateral flow in the VPMM method is independently verified by routing the flood hydrographs of 6 flood events that were not used in establishing the total rainfall versus total loss relationship for the intervening catchment of the studied river reach. Close reproduction of the outflow hydrographs of these independent events demonstrates the practical utility of the method.
Estimating fine-scale land use change dynamics using an expedient photointerpretation-based method
Tonya Lister; Andrew Lister; Eunice Alexander
2009-01-01
Population growth and urban expansion have resulted in the loss of forest land. With growing concerns about this loss and its implications for global processes and carbon budgets, there is a great need for detailed and reliable land use change data. Currently, the Northern Research Station uses an Annual Inventory design whereby all plots are revisited every 5 years...
ERIC Educational Resources Information Center
Schlauch, Robert S.; Han, Heekyung J.; Yu, Tzu-Ling J.; Carney, Edward
2017-01-01
Purpose: The purpose of this article is to examine explanations for pure-tone average-spondee threshold differences in functional hearing loss. Method: Loudness magnitude estimation functions were obtained from 24 participants for pure tones (0.5 and 1.0 kHz), vowels, spondees, and speech-shaped noise as a function of level (20-90 dB SPL).…
Rapid estimation of the economic consequences of global earthquakes
Jaiswal, Kishor; Wald, David J.
2011-01-01
The U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, operational since mid 2007, rapidly estimates the most affected locations and the population exposure at different levels of shaking intensities. The PAGER system has significantly improved the way aid agencies determine the scale of response needed in the aftermath of an earthquake. For example, the PAGER exposure estimates provided reasonably accurate assessments of the scale and spatial extent of the damage and losses following the 2008 Wenchuan earthquake (Mw 7.9) in China, the 2009 L'Aquila earthquake (Mw 6.3) in Italy, the 2010 Haiti earthquake (Mw 7.0), and the 2010 Chile earthquake (Mw 8.8). Nevertheless, some engineering and seismological expertise is often required to digest PAGER's exposure estimate and turn it into estimated fatalities and economic losses. This has been the focus of PAGER's most recent development. With the new loss-estimation component of the PAGER system it is now possible to produce rapid estimation of expected fatalities for global earthquakes (Jaiswal and others, 2009). While an estimate of earthquake fatalities is a fundamental indicator of potential human consequences in developing countries (for example, Iran, Pakistan, Haiti, Peru, and many others), economic consequences often drive the responses in much of the developed world (for example, New Zealand, the United States, and Chile), where the improved structural behavior of seismically resistant buildings significantly reduces earthquake casualties. Rapid availability of estimates of both fatalities and economic losses can be a valuable resource. The total time needed to determine the actual scope of an earthquake disaster and to respond effectively varies from country to country. It can take days or sometimes weeks before the damage and consequences of a disaster can be understood both socially and economically. The objective of the U.S. 
Geological Survey's PAGER system is to reduce this time gap to more rapidly and effectively mobilize response. We present here a procedure to rapidly and approximately ascertain the economic impact immediately following a large earthquake anywhere in the world. In principle, the approach presented is similar to the empirical fatality estimation methodology proposed and implemented by Jaiswal and others (2009). In order to estimate economic losses, we need an assessment of the economic exposure at various levels of shaking intensity. The economic value of all the physical assets exposed at different locations in a given area is generally not known and extremely difficult to compile at a global scale. In the absence of such a dataset, we first estimate the total Gross Domestic Product (GDP) exposed at each shaking intensity by multiplying the per-capita GDP of the country by the total population exposed at that shaking intensity level. We then scale the total GDP estimated at each intensity by an exposure correction factor, which is a multiplying factor to account for the disparity between wealth and/or economic assets to the annual GDP. The economic exposure obtained using this procedure is thus a proxy estimate for the economic value of the actual inventory that is exposed to the earthquake. The economic loss ratio, defined in terms of a country-specific lognormal cumulative distribution function of shaking intensity, is derived and calibrated against the losses from past earthquakes. This report describes the development of a country or region-specific economic loss ratio model using economic loss data available for global earthquakes from 1980 to 2007. The proposed model is a potential candidate for directly estimating economic losses within the currently-operating PAGER system. 
PAGER's other loss models use indirect methods that require substantially more data (such as building/asset inventories, vulnerabilities, and the asset values exposed at the time of earthquake) to implement on a global basis and will thus take more time to develop and implement within the PAGER system.
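The exposure-and-loss-ratio procedure described above can be sketched in a few lines. This is a hypothetical illustration, not PAGER's calibrated model: the intensity bins, exposed populations, per-capita GDP, exposure correction factor `alpha`, and the lognormal parameters `mu` and `sigma` are all invented placeholders.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def economic_loss(pop_by_intensity, gdp_per_capita, alpha, mu, sigma):
    """Sum over intensity bins: proxy exposed GDP x lognormal loss ratio.

    alpha is the exposure correction factor accounting for the disparity
    between accumulated wealth/assets and annual GDP.
    """
    total = 0.0
    for intensity, population in pop_by_intensity.items():
        exposed_gdp = population * gdp_per_capita * alpha  # proxy asset exposure
        # Loss ratio: lognormal CDF of shaking intensity (calibrated per country)
        loss_ratio = norm_cdf((math.log(intensity) - mu) / sigma)
        total += exposed_gdp * loss_ratio
    return total

loss = economic_loss({6: 2_000_000, 7: 500_000, 8: 100_000},
                     gdp_per_capita=5_000, alpha=3.0,
                     mu=math.log(9.0), sigma=0.3)
```

The country-specific calibration described in the report amounts to fitting `alpha`, `mu`, and `sigma` against the 1980-2007 historical loss data.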
Sia, Sheau Fung; Zhao, Xihai; Li, Rui; Zhang, Yu; Chong, Winston; He, Le; Chen, Yu
2016-11-01
Internal carotid artery stenosis requires an accurate risk assessment for the prevention of stroke. Although the internal carotid artery area stenosis ratio at the common carotid artery bifurcation can be used as one of the diagnostic measures of internal carotid artery stenosis, the accuracy of the results still depends on the measurement technique. The purpose of this study is to propose a novel method to estimate the effect of internal carotid artery stenosis on blood flow based on the concept of minimization of energy loss. Eight internal carotid arteries from different medical centers were diagnosed as stenosed, with plaques found at different locations on the vessel. A computational fluid dynamics solver was developed based on an open-source code (OpenFOAM) to test the flow ratio and energy loss of those stenosed internal carotid arteries. For comparison, a healthy internal carotid artery and an idealized internal carotid artery model were also tested and compared with the stenosed arteries in terms of flow ratio and energy loss. We found that at a given common carotid artery bifurcation, there must be a certain flow distribution between the internal and external carotid arteries for which the total energy loss at the bifurcation is at a minimum; for a given common carotid artery flow rate, an irregularly shaped plaque at the bifurcation consistently resulted in a large minimum energy loss. Thus, minimization of energy loss can be used as an indicator for the assessment of internal carotid artery stenosis.
Measuring survival time: a probability-based approach useful in healthcare decision-making.
2011-01-01
In some clinical situations, the choice between treatment options takes into account their impact on patient survival time. Due to practical constraints (such as loss to follow-up), survival time is usually estimated using a probability calculation based on data obtained in clinical studies or trials. The two techniques most commonly used to estimate survival times are the Kaplan-Meier method and the actuarial method. Despite their limitations, they provide useful information when choosing between treatment options.
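As a minimal illustration of the first of the two techniques mentioned, a bare-bones Kaplan-Meier product-limit estimator might look as follows; the survival times and censoring flags are invented example data, not from any study.

```python
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time.

    events[i] = 1 if the event (e.g. death) was observed at times[i],
    0 if the observation was censored (e.g. lost to follow-up).
    """
    data = sorted(zip(times, events))  # order observations by time
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_tied = sum(1 for tt, _ in data if tt == t)
        if deaths > 0:
            survival *= (1 - deaths / n_at_risk)  # product-limit step
            curve.append((t, survival))
        n_at_risk -= n_tied  # both events and censorings leave the risk set
        i += n_tied
    return curve

curve = kaplan_meier([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1])
```

Censored observations reduce the number at risk without contributing a drop in the curve, which is exactly how the method handles loss to follow-up.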
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
NASA Astrophysics Data System (ADS)
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper elaborates a study of cancer patients after receiving treatment, with censored data, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. Taking a Gamma distribution as the prior, the likelihood function produces a Gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL using the Linex approximation. From λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare Maximum Likelihood Estimation (MLE) with the Linex approximation to find the better method for this observation by identifying the smaller MSE. The results show that the MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under Bayesian Linex they are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian Linex estimator is better than MLE.
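A minimal sketch of the estimator described above, under the usual conjugate setup (exponential survival times, Gamma prior on the rate λ): the Bayes rule under Linex loss is λ̂_BL = −(1/a)·ln E[e^(−aλ) | data], which for a Gamma(α, β) posterior in shape-rate form reduces to (α/a)·ln(1 + a/β). The prior hyperparameters, Linex shape `a`, and the data below are invented for illustration.

```python
import math

def linex_bayes_rate(alpha0, beta0, times, events, a=0.5):
    """Bayes estimate of the exponential rate λ under Linex loss.

    Conjugate Gamma(alpha0, beta0) prior (shape, rate); events[i] = 1
    for an observed death, 0 for a censored observation.
    """
    d = sum(events)          # number of observed (uncensored) events
    total_time = sum(times)  # total time at risk, censored or not
    alpha, beta = alpha0 + d, beta0 + total_time  # Gamma posterior update
    # Linex Bayes rule: -(1/a) ln E[e^{-a lambda}] = (alpha/a) ln(1 + a/beta)
    return (alpha / a) * math.log(1.0 + a / beta)

lam = linex_bayes_rate(1.0, 1.0, times=[3.0, 5.0, 2.5, 8.0],
                       events=[1, 1, 0, 1], a=0.5)
surv_12 = math.exp(-lam * 12.0)  # estimated survival S(12) = e^{-lambda t}
```

As a → 0 the Linex estimate approaches the posterior mean α/β, while the MLE for the same data would simply be d divided by the total time at risk.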
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision-making. It has many real applications, including statistical tests and information theory. Due to the statistical and computational challenges of high dimensionality, little work has appeared in the literature on estimating the determinant of a high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating the high-dimensional covariance matrix itself. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines, based on the sample size, the dimension, and the correlation of the data set, for estimating the determinant of a high-dimensional covariance matrix. Finally, from the perspective of the loss function, the comparison study in this paper may also serve as a proxy for assessing the performance of covariance matrix estimation.
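On the computational side, the log-determinant of a covariance matrix is usually obtained from a matrix factorization rather than from `det()` directly, which underflows or overflows as the dimension grows. The sketch below is generic and is not one of the eight estimators compared in the paper; the ridge term is an assumed stabilizer for when the sample covariance is singular (dimension close to or above the sample size).

```python
import numpy as np

def log_det_sample_cov(X, ridge=1e-6):
    """log|S| for the (ridge-stabilized) sample covariance of rows of X.

    Uses a Cholesky factorization S = L L^T, so that
    log|S| = 2 * sum(log diag(L)), which is numerically stable.
    """
    S = np.cov(X, rowvar=False) + ridge * np.eye(X.shape[1])
    L = np.linalg.cholesky(S)  # requires S positive definite
    return 2.0 * np.sum(np.log(np.diag(L)))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))  # 200 samples, 5 dimensions
val = log_det_sample_cov(X)
```

Working on the log scale is also what makes the loss-function comparisons in settings like this tractable, since raw determinants of large matrices are rarely representable in floating point.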
Economic costs of obesity in Thailand: a retrospective cost-of-illness study
2014-01-01
Background Over the last decade, the prevalence of obesity (BMI ≥ 25 kg/m2) in Thailand has been rising rapidly and consistently. Estimating the cost of obesity to society is an essential step in setting priorities for research and resource use and helping improve public awareness of the negative economic impacts of obesity. This prevalence-based, cost-of-illness study aims to estimate the economic costs of obesity in Thailand. Methods The estimated costs in this study included health care cost, cost of productivity loss due to premature mortality, and cost of productivity loss due to hospital-related absenteeism. The Obesity-Attributable Fraction (OAF) was used to estimate the extent to which the co-morbidities were attributable to obesity. The health care cost of obesity was further estimated by multiplying the number of patients in each disease category attributable to obesity by the unit cost of treatment. The cost of productivity loss was calculated using the human capital approach. Results The health care cost attributable to obesity was estimated at 5,584 million baht, or 1.5% of national health expenditure. The cost of productivity loss attributable to obesity was estimated at 6,558 million baht, accounting for 54% of the total cost of obesity. The cost of hospital-related absenteeism was estimated at 694 million baht, while the cost of premature mortality was estimated at 5,864 million baht. The total cost of obesity was then estimated at 12,142 million baht (725.3 million US$ PPP; 16.74 baht = 1 US$ PPP), accounting for 0.13% of Thailand's Gross Domestic Product (GDP). Conclusions Obesity imposes a substantial economic burden on Thai society, especially in terms of health care costs. Large-scale comprehensive interventions focused on improving public awareness of the cost of and problems associated with obesity and promoting a healthy lifestyle should be regarded as a public health priority. PMID:24690106
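The attributable-fraction costing step described in the Methods can be sketched as follows; the prevalence, relative risk, case count, and unit cost are invented illustration values, not the Thai data.

```python
def attributable_fraction(prevalence, relative_risk):
    """Attributable fraction: p(RR - 1) / (1 + p(RR - 1)).

    The share of cases of a co-morbidity that would not occur
    if the exposure (here, obesity) were absent.
    """
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def attributable_cost(n_patients, unit_cost, prevalence, relative_risk):
    """Treated cases x unit treatment cost, scaled by the attributable fraction."""
    return n_patients * unit_cost * attributable_fraction(prevalence, relative_risk)

# Hypothetical disease category: 10,000 treated cases at 2,500 baht each,
# 30% obesity prevalence, relative risk of 2.0 among the obese.
cost = attributable_cost(n_patients=10_000, unit_cost=2_500,
                         prevalence=0.30, relative_risk=2.0)
```

Summing this quantity over each obesity-related disease category yields the total attributable health care cost reported in the Results.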
An Assessment of the Ozone Loss During the 1999-2000 SOLVE Campaign
NASA Technical Reports Server (NTRS)
Schoeberl, M. R.; Newman, P. A.; Lait, L. R.; McGee, T.; Burris, J.; Browell, E. V.; Richard, E.; VonderGathen, P.; Beveliaqua, R.; Mikkelsen, I. S.;
2000-01-01
Ozone observations from ozonesondes, the DIAL and AROTEL lidars aboard the DC-8, in situ ozone measurements from the ER-2 and satellite ozone measurements from POAM were used to assess ozone loss during the SOLVE 1999-2000 campaign. We compare three different methods of computing the ozone loss. The first method simply compares the time sequence of ozonesondes taken at the same station inside the vortex from December through the end of March. In the second method, ozonesondes from a variety of stations are compared using a variant on the Match technique. This method uses short (approx. 5-10 day) forward diabatic trajectories to connect various sonde launches. In the third method, the measurements are simply injected into a diabatic trajectory model and carried forward in time from December 1 to March 16. Over 60,000 individual measurements were used in the last calculation. Again, ozone loss is estimated by comparing vortex interior measurements made early in the campaign with those made later in the campaign. The diabatic nature of the second and third methods calculation presumably corrects for the normal increase in ozone within the vortex due to downward advection. The three methods agree that the largest ozone loss occurs between 400 and 460 K potential temperatures (approx. 16-20 km) with slightly over 1.5 ppmv lost over the winter period. Between 460 K and 500 K (approx. 22 km) net ozone loss is less than 0.8 ppmv. From 500K to 600K (26 km) net loss is less than 0.5 ppmv.
Vandergast, Amy; Wood, Dustin A.; Thompson, Andrew R.; Fisher, Mark; Barrows, Cameron W.; Grant, Tyler J.
2016-01-01
Aim The frequency and severity of habitat alterations and disturbance are predicted to increase in upcoming decades, and understanding how disturbance affects population integrity is paramount for adaptive management. Although population genetic sampling is rarely conducted at multiple time points, pre- and post-disturbance comparisons may provide one of the clearest methods to measure these impacts. We examined how the genetic properties of the federally threatened Coachella Valley fringe-toed lizard (Uma inornata) responded to severe drought and habitat fragmentation across its range. Location Coachella Valley, California, USA. Methods We used 11 microsatellites to examine population genetic structure and diversity in 1996 and 2008, before and after a historic drought. We used Bayesian assignment methods and F-statistics to estimate genetic structure. We compared allelic richness across years to measure loss of genetic diversity, and employed approximate Bayesian computation and heterozygote excess tests to explore the recent demographic history of populations. Finally, we compared effective population size across years and against abundance estimates to determine whether diversity remained low despite post-drought recovery. Results Genetic structure increased between sampling periods, likely as a result of population declines during the historic drought of the late 1990s to early 2000s, together with habitat loss and fragmentation that precluded post-drought genetic rescue. Simulations supported recent demographic declines in 3 of 4 main preserves, and in one preserve we detected a significant loss of allelic richness. Effective population sizes were generally low across the range, with estimates ≤100 in most sites. Main conclusions Fragmentation and drought appear to have acted synergistically to induce genetic change over a short time frame. 
Progressive deterioration of connectivity, low Ne and measurable loss of genetic diversity suggest that conservation efforts have not maintained the genetic integrity of this species. Genetic sampling over time can help evaluate population trends to guide management.
Lee, Lukas Jyuhn-Hsiarn; Lin, Cheng-Kuan; Hung, Mei-Chuan; Wang, Jung-Der
2016-12-01
This study estimates the annual numbers of eight work-related cancers, total losses of quality-adjusted life years (QALYs), and lifetime healthcare expenditures that possibly could be saved by improving occupational health in Taiwan. Three databases were interlinked: the Taiwan Cancer Registry, the National Mortality Registry, and the National Health Insurance Research Database. Annual numbers of work-related cancers were estimated based on attributable fractions (AFs) abstracted from a literature review. The survival functions for eight cancers were estimated and extrapolated to lifetime using a semi-parametric method. A convenience sample of 8846 measurements of patients' quality of life with EQ-5D was collected for utility values and multiplied by survival functions to estimate quality-adjusted life expectancies (QALEs). The loss-of-QALE was obtained by subtracting the QALE of cancer from age- and sex-matched referents simulated from national vital statistics. The lifetime healthcare expenditures were estimated by multiplying the survival probability with mean monthly costs paid by the National Health Insurance for cancer diagnosis and treatment and summing this for the expected lifetime. A total of 3010 males and 726 females with eight work-related cancers were estimated in 2010. Among them, lung cancer ranked first in terms of QALY loss, with an annual total loss-of-QALE of 28,463 QALYs and total lifetime healthcare expenditures of US$36.6 million. Successful prevention of eight work-related cancers would not only avoid the occurrence of 3736 cases of cancer, but would also save more than US$70 million in healthcare costs and 46,750 QALYs for the Taiwan society in 2010.
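The accounting described above (QALE as the utility-weighted area under a survival curve; lifetime cost as survival-weighted monthly expenditures) can be sketched as follows. This is an illustrative sketch, not the study's code, and all survival, utility, and cost numbers are hypothetical.

```python
# Illustrative sketch of loss-of-QALE and lifetime-cost bookkeeping.
# All numbers are hypothetical, not the study's Taiwan data.

def qale(survival, utility, dt=1.0 / 12.0):
    """Quality-adjusted life expectancy: sum of survival x utility per step (years)."""
    return sum(s * u * dt for s, u in zip(survival, utility))

def lifetime_cost(survival, monthly_cost):
    """Expected lifetime healthcare cost: survival-weighted sum of monthly costs."""
    return sum(s * c for s, c in zip(survival, monthly_cost))

months = 240
surv_ref = [0.999 ** t for t in range(months)]   # age/sex-matched referents
surv_ca = [0.99 ** t for t in range(months)]     # cancer cohort (worse survival)
util_ref = [0.9] * months                        # EQ-5D utilities (hypothetical)
util_ca = [0.7] * months

loss_of_qale = qale(surv_ref, util_ref) - qale(surv_ca, util_ca)
cost = lifetime_cost(surv_ca, [1500.0] * months)  # mean monthly cost, USD
print(round(loss_of_qale, 2), round(cost))
```

The loss-of-QALE is the referent QALE minus the patient QALE, mirroring the subtraction step in the abstract.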
Determination of water use in Rockford and Kankakee areas, Illinois
LaTour, John K.
1991-01-01
Amounts of water withdrawn, delivered, consumed, released, returned, and lost or gained during conveyance were determined for six communities--Rockford, Loves Park, North Park, Kankakee, Bourbonnais, and Bradley--served by the public-water systems in the Rockford and the Kankakee areas of Illinois. Water-use categories studied were commercial, industrial, domestic, and municipal uses; public supply; and sewage treatment. The availability and accuracy of water-use data are described, and water-use coefficients and methods of estimating water use are provided to improve the collection and the analysis of water-use information. Water-use data were obtained from all the water utilities and from 30 major water users in the Rockford and the Kankakee areas. Data were available for water withdrawals by water suppliers; deliveries by water suppliers to water users; returns by sewage-treatment plants and water users; releases by water users to sewers; and sewer-conveyance losses. Accuracy of the water-use data was determined from discharge measurements or reliability tests of water meters, or was estimated according to the completeness of the data. Accuracy of withdrawal and sewage-treatment-return data for the Rockford area and of withdrawal, delivery, industrial release, and sewage-treatment-return data for the Kankakee area was considered to be at least 90 percent. Where water-use data were inadequate or unavailable, various methods were used to estimate consumptive uses; releases; returns by commercial, domestic, and municipal users; and conveyance losses and gains. The methods focused on water budgeting to assure that water uses balanced. Consumptive uses were estimated by use of the consumption-budget method, the types-of-use method, consumptive-use ratios, the winter base-rate method, and the maximum lawn-watering method. 
The winter base-rate method provided the best domestic consumptive-use estimates, whose ratios (consumptive use from the winter base-rate method divided by deliveries and self-supply withdrawals), by community, ranged from 0.03 to 0.136 and averaged 0.068. The consumption-budget and types-of-use methods, as well as consumptive-use ratios, were used to estimate consumptive use for commercial, industrial, and municipal categories. Water budgeting was generally used to estimate releases, and conveyance losses and gains. Estimates of nonconsumptive uses by cooling systems, boilers, and lawn watering; data of deliveries to septic-system owners; and (or) water budgeting were used to estimate commercial, domestic, industrial, and municipal returns. Proportions of water use were similar in the Rockford and the Kankakee areas. Of the public-supply withdrawals in each area, about one-half was delivered for commercial and industrial uses; about one-third for domestic use; and about one-sixth for municipal use and public-supply conveyance losses. Consumptive use by all water users in the Rockford and the Kankakee areas was 13 +/- 1 percent, releases were 78 +/- 2 percent, and returns were 9 +/- 2 percent of deliveries and self-supply withdrawals. Total returns were greater than total withdrawals in the two areas because of sewer-conveyance gains, which amounted to about 34 percent of the sewage-treatment returns for each area. Delivery rates (deliveries divided by the number of users [establishments or households]) and domestic per capita use were similar for all six communities. At a 95-percent confidence level, domestic delivery rates for each community range from 0.067 to 0.075 million gallons per household per year. Commercial delivery rates range from 0.277 to 0.535 million gallons per establishment per year. Delivery rates for all categories combined range from 0.100 to 0.192 million gallons per user per year.
Domestic per capita use, which ranged from 67.2 to 71.0 gallons per day, averaged 69.2 +/- 1.1 gallons per day.
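The winter base-rate idea can be illustrated with a small sketch: winter deliveries approximate indoor, largely non-consumptive use, so annual use in excess of the extrapolated winter base rate is attributed to consumption. The values below are hypothetical, not the report's data.

```python
# Hypothetical sketch of the winter base-rate consumptive-use estimate.
# Units are arbitrary (e.g., million gallons per year).

def consumptive_use_ratio(winter_monthly_avg, annual_deliveries):
    """Consumptive use as a fraction of deliveries, from the winter base rate."""
    base = winter_monthly_avg * 12.0        # winter (indoor) use extrapolated to a year
    consumptive = annual_deliveries - base  # excess attributed to consumptive use
    return consumptive / annual_deliveries

ratio = consumptive_use_ratio(winter_monthly_avg=7.5, annual_deliveries=96.0)
print(round(ratio, 4))  # 0.0625, within the 0.03-0.136 range reported above
```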
An analysis of estimation of pulmonary blood flow by the single-breath method
NASA Technical Reports Server (NTRS)
Srinivasan, R.
1986-01-01
The single-breath method represents a simple noninvasive technique for the assessment of capillary blood flow across the lung. However, this method has not gained widespread acceptance, because its accuracy is still being questioned. A rigorous procedure is described for estimating pulmonary blood flow (PBF) using data obtained with the aid of the single-breath method. Attention is given to the minimization of data-processing errors in the presence of measurement errors and to questions regarding a correction for possible loss of CO2 in the lung tissue. It is pointed out that the estimations are based on the exact solution of the underlying differential equations which describe the dynamics of gas exchange in the lung. The reported study demonstrates the feasibility of obtaining highly reliable estimates of PBF from expiratory data in the presence of random measurement errors.
NASA Astrophysics Data System (ADS)
Zengmei, L.; Guanghua, Q.; Zishen, C.
2015-05-01
The direct benefit of a waterlogging control project is reflected by the reduction or avoidance of waterlogging losses. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different, and the category, quantity and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under a changing environment, the direct benefit of a waterlogging control project should be the reduction of waterlogging losses relative to conditions without the project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. An estimation model for the direct benefit of waterlogging control is therefore proposed. Firstly, on the basis of a Copula function, the joint distribution of rainstorms and water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the domain of integration is determined and divided into small cells; for each cell, the model calculates its probability and the difference between the average waterlogging losses with and without the project, i.e. the project benefit for that cell, under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the benefit of the waterlogging control project is obtained as the weighted mean of the cell benefits, with the probabilities as weights. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedure of waterlogging control project benefit estimation.
The results show that the constructed benefit estimation model accommodates changes in both the disaster-inducing environment of the waterlogging-prone zone and the disaster-bearing bodies, since it considers all conditions under which rainstorms of all frequencies meet different water levels in the drainage-accepting zone. The estimation method can therefore reflect the actual situation more objectively and offer a scientific basis for rational decision-making on waterlogging control projects.
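The final weighting step described above can be sketched numerically: the (rainstorm, water-level) domain is discretized into cells, each carrying a joint probability and a loss-reduction value, and the project benefit is their probability-weighted sum. The matrices below are made-up stand-ins for the Copula-derived density and the loss estimates.

```python
import numpy as np

# Illustrative sketch of the probability-weighted benefit aggregation.
# p stands in for cell probabilities from the Copula-based joint density;
# the loss matrices are hypothetical expected losses per cell.

p = np.array([[0.10, 0.05],
              [0.25, 0.60]])               # joint probabilities of the cells
loss_without = np.array([[80.0, 120.0],
                         [30.0, 50.0]])    # expected losses without the project
loss_with = np.array([[50.0, 70.0],
                      [20.0, 30.0]])       # expected losses with the project

assert np.isclose(p.sum(), 1.0)            # probabilities cover the whole domain
benefit = float((p * (loss_without - loss_with)).sum())  # weighted mean benefit
print(benefit)
```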
Dow’s fire and explosion index: a case-study in the process unit of an oil extraction factory
Nezamodini, Zeynab Sadat; Rezvani, Zahra; Kian, Kumars
2017-01-01
Introduction The incidence of fires and explosions has led to severe damage in many industries, primarily through financial losses. This study was conducted to estimate losses due to fire and explosion, and the impact of control measures on the magnitude of those losses, by applying Dow’s Fire and Explosion Index (FEI). Methods This is a case study conducted in one of the process units of an oil extraction factory. Dow’s Fire and Explosion Index Hazard Classification Guide, 7th edition, issued by the American Institute of Chemical Engineers, was applied. Data were obtained mainly through interviews and consultation with experts, as well as from reported operating parameters and process documents. Results The Dow Index of the processing unit was estimated to be 243.68, and the most probable base damage was approximately $4.15 million in 2008. The actual damage was estimated to be $2,863,500, and the number of lost workdays to be 64.56. The business-interruption losses were estimated to be $15,817,200 and the total losses to the system to be $18.67 million. These results demonstrate that losses resulting from production interruptions are greater than losses due to the destruction of equipment. A series of corrections was then proposed, and the risk analysis was repeated to examine the effects of these reforms. The comparison shows that by applying the reforms, the FEI can be reduced to 86.62 and the total loss to $9.03 million. Conclusion This study shows that Dow’s Index is a systematic tool for examining the impact of control measures. It can also support resource management when negotiating an optimal insurance contract. Considering the priority of reducing damage factors, several corrective actions were suggested, such as modifying the drainage system and installing hexane detectors, an automatic sprinkler system, fire detectors on the cable tray, and, finally, water-spray washing on the tanks. PMID:28465821
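As a quick arithmetic check of the figures reported above, the quoted total loss should equal the actual (property) damage plus the business-interruption loss:

```python
# Consistency check of the reported Dow F&EI loss figures.

actual_damage = 2_863_500    # USD, estimated actual damage
interruption = 15_817_200    # USD, business-interruption losses
total = actual_damage + interruption
print(total)  # 18680700, i.e. the ~$18.67 million total quoted above
```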
Seismic Risk Assessment and Loss Estimation for Tbilisi City
NASA Astrophysics Data System (ADS)
Tsereteli, Nino; Alania, Victor; Varazanashvili, Otar; Gugeshashvili, Tengiz; Arabidze, Vakhtang; Arevadze, Nika; Tsereteli, Emili; Gaphrindashvili, Giorgi; Gventcadze, Alexander; Goguadze, Nino; Vephkhvadze, Sophio
2013-04-01
The proper assessment of seismic risk is of crucial importance for protecting society and for the sustainable economic development of a city, as it is an essential part of seismic hazard reduction. Estimating seismic risk and losses is a complicated task: there is always a deficiency of knowledge about the real seismic hazard, local site effects, the inventory of elements at risk, and infrastructure vulnerability, especially in developing countries. Recently, great efforts were made in the framework of the EMME (Earthquake Model for the Middle East Region) project, whose work packages WP1 through WP4 addressed gaps in seismic hazard assessment and vulnerability analysis. Finally, in the framework of work package WP5, "City Scenario", additional work in this direction was carried out, including detailed investigation of local site conditions and of the active fault (in 3D) beneath Tbilisi. For estimating economic losses, an algorithm was prepared taking into account the obtained inventory. The long-term usage of a building is complex, relating to its reliability and durability, and is characterized by the concept of depreciation. Depreciation of an entire building is calculated by summing the products of the individual construction units' depreciation rates and the corresponding value of these units within the building. This method of calculation is based on the assumption that depreciation is proportional to the building's (construction's) useful life. We used this methodology to create a matrix that provides a way to evaluate the depreciation rates of buildings of different types and construction periods and to determine their corresponding values. Finally, losses were estimated for shaking with 10%, 5% and 2% exceedance probability in 50 years, as well as for a scenario earthquake (an earthquake with the maximum possible magnitude).
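The depreciation rule stated above (sum over construction units of unit depreciation rate times that unit's share of the building's value) can be sketched directly; the unit names, rates, and value shares below are hypothetical.

```python
# Sketch of whole-building depreciation as a value-weighted sum of unit rates.
# Unit names, rates, and value shares are hypothetical.

units = {                       # unit: (depreciation rate, share of building value)
    "foundation": (0.20, 0.15),
    "walls":      (0.30, 0.40),
    "roof":       (0.50, 0.10),
    "finishes":   (0.60, 0.25),
    "services":   (0.40, 0.10),
}

# The value shares must account for the entire building.
assert abs(sum(share for _, share in units.values()) - 1.0) < 1e-9

depreciation = sum(rate * share for rate, share in units.values())
print(round(depreciation, 3))  # 0.39, i.e. 39% depreciation overall
```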
Re-analysis of Alaskan benchmark glacier mass-balance data using the index method
Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.
2010-01-01
At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass-balance time series from 1966 to 2009 to accomplish our goal of producing more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly site data with a new balance-gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and we suggest the need to add data-collection sites and modernize our weather stations.
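A minimal degree-day ablation model of the kind described above can be sketched in a few lines: ablation is the degree-day factor times the sum of positive daily mean temperatures. The factor and temperatures below are illustrative, not the report's calibrated values.

```python
# Minimal degree-day ablation sketch. The degree-day factor (mm w.e. per
# positive degree-day) and the temperature series are illustrative.

def ablation_mm_we(daily_mean_temps_c, degree_day_factor=4.0):
    """Ablation in mm water equivalent over the period."""
    pdd = sum(t for t in daily_mean_temps_c if t > 0.0)  # positive degree-days
    return degree_day_factor * pdd

temps = [-2.0, 1.5, 3.0, 0.5, -1.0, 4.0]  # daily mean temperatures, deg C
print(ablation_mm_we(temps))  # 36.0 mm w.e. (4.0 x 9.0 positive degree-days)
```

In practice the degree-day factor is derived empirically per glacier and surface type, which is the calibration step the abstract refers to.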
Witoonchart, Peerajak; Chongstitvatana, Prabhas
2017-08-01
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
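The forward/backward behavior of a loss-augmented inference layer can be illustrated with a toy multiclass structured hinge loss (a simplification, not the paper's deformable-part-model layer): the forward pass maximizes score(y) + Δ(y, y_true), and the backward pass produces a subgradient of +1 at the loss-augmented argmax and −1 at the true label.

```python
import numpy as np

# Toy sketch of a loss-augmented inference layer (multiclass structured hinge),
# not the paper's deformable-part-model implementation.

def structured_hinge(scores, y_true, delta=1.0):
    aug = scores + delta                 # add task loss Delta(y, y_true) ...
    aug[y_true] = scores[y_true]         # ... which is 0 for the true label
    y_hat = int(np.argmax(aug))          # loss-augmented inference (forward)
    loss = float(aug[y_hat] - scores[y_true])
    grad = np.zeros_like(scores)         # subgradient w.r.t. the scores (backward)
    grad[y_hat] += 1.0
    grad[y_true] -= 1.0                  # zero vector when y_hat == y_true
    return loss, grad

loss, grad = structured_hinge(np.array([2.0, 1.2, 0.3]), y_true=1)
print(loss, grad)
```

Backpropagating `grad` into the layer below is what couples the SSVM objective to the convolutional parameters.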
Doubly labeled water method: in vivo oxygen and hydrogen isotope fractionation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoeller, D.A.; Leitch, C.A.; Brown, C.
The accuracy and precision of the doubly labeled water method for measuring energy expenditure are influenced by isotope fractionation during evaporative water loss and CO2 excretion. To characterize in vivo isotope fractionation, we collected and isotopically analyzed physiological fluids and gases. Breath and transcutaneous water vapor were isotopically fractionated. The degree of fractionation indicated that the former was fractionated under equilibrium control at 37 °C, and the latter was kinetically fractionated. Sweat and urine were unfractionated. By use of isotopic balance models, the fraction of water lost via fractionating routes was estimated from the isotopic abundances of body water, local drinking water, and dietary solids. Fractionated water loss averaged 23% (SD = 10%) of water turnover, which agreed with our previous estimates based on metabolic rate, but there was a systematic difference between the results based on oxygen and hydrogen. Corrections for isotopic fractionation of water lost in breath and (nonsweat) transcutaneous loss should be made when using labeled water to measure water turnover or CO2 production.
Electrical Load Profile Analysis Using Clustering Techniques
NASA Astrophysics Data System (ADS)
Damayanti, R.; Abdullah, A. G.; Purnama, W.; Nandiyanto, A. B. D.
2017-03-01
Data mining is one of the data processing techniques used to extract information from a set of stored data. The consumption of electricity is recorded daily by the electrical company, usually at intervals of 15 or 30 minutes. This paper uses clustering, one of the data mining techniques, to analyse the electrical load profiles recorded during 2014. Three clustering methods were compared: K-Means (KM), Fuzzy C-Means (FCM), and K-Harmonic Means (KHM). The results show that KHM is the most appropriate method for classifying the electrical load profiles. The optimum number of clusters was determined using the Davies-Bouldin index. By grouping load profiles with similar patterns, demand-variation analysis and estimation of energy losses can be carried out per group. Each cluster yields a load factor and a range of loss factors, which helps establish the range of coefficient values for estimating energy losses without performing load-flow studies.
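The load-factor/loss-factor relation alluded to above can be sketched with a widely used empirical form, LsF = k·LF + (1 − k)·LF², where k is an empirical coefficient often taken around 0.3. The daily profile below is made up for illustration; the coefficient and profile are assumptions, not values from the paper.

```python
# Sketch of the empirical load-factor -> loss-factor relation.
# k ~ 0.3 is a commonly quoted empirical coefficient; the profile is made up.

def load_factor(load_profile):
    """Average load divided by peak load."""
    return sum(load_profile) / (len(load_profile) * max(load_profile))

def loss_factor(lf, k=0.3):
    """Empirical loss factor: k*LF + (1-k)*LF**2."""
    return k * lf + (1.0 - k) * lf ** 2

profile = [40, 35, 30, 50, 80, 100, 90, 60]  # kW samples over one day
lf = load_factor(profile)
print(round(lf, 4), round(loss_factor(lf), 4))
```

Because LF ≤ 1, the loss factor always lies between LF² and LF, which is why a cluster's load factor bounds its range of loss-factor coefficients.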
NASA Astrophysics Data System (ADS)
Delgado, A.; Gertig, C.; Blesa, E.; Loza, A.; Hidalgo, C.; Ron, R.
2016-05-01
Typical plant configurations for Central Receiver Systems (CRS) comprise a large field of heliostats which concentrate solar irradiation onto the receiver, which is elevated hundreds of meters above the ground. Wind speed changes with altitude above ground, affecting the receiver's thermal efficiency through variations of the convective heat losses. In addition, the physical properties of air vary significantly at high altitudes, which should be considered in the thermal-loss calculation. DNV GL has long-standing experience in wind energy assessment, with reliable methodologies to reduce the uncertainty in the determination of the wind regime. As part of this study, DNV GL estimates the wind speed at high altitude for different sites using two methods: a detailed estimation applying the best practices used in the wind energy sector, based on measurements from various wind sensors, and a simplified estimation applying the power law (1, 2), using only one wind measurement and a representative value for the surface roughness. As a result of the study, a comparison of the wind speed estimates from both methods is presented, and the impact on the receiver performance for the evaluated case is estimated.
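The simplified power-law extrapolation mentioned above takes the form v(z) = v_ref · (z/z_ref)^α, with α a shear exponent that encodes surface roughness. The exponent (0.14, a value often quoted for open terrain) and the speeds below are illustrative assumptions:

```python
# Power-law wind shear extrapolation from a single reference measurement.
# alpha = 0.14 is a commonly quoted open-terrain shear exponent; values are
# illustrative, not from the DNV GL study.

def power_law(v_ref, z_ref, z, alpha=0.14):
    """Wind speed at height z from a measurement v_ref at height z_ref."""
    return v_ref * (z / z_ref) ** alpha

v200 = power_law(v_ref=6.0, z_ref=10.0, z=200.0)  # e.g., receiver height 200 m
print(round(v200, 2))
```

The detailed multi-sensor method in the study replaces this single-exponent assumption with measured shear, which is why the two estimates are compared.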
Estimating effectiveness of crop management for reduction of soil erosion and runoff
NASA Astrophysics Data System (ADS)
Hlavcova, K.; Studvova, Z.; Kohnova, S.; Szolgay, J.
2017-10-01
The paper focuses on erosion processes in the Svacenický Creek catchment, a small sub-catchment of the Myjava River basin. To simulate soil loss and sediment transport, the USLE/SDR and WaTEM/SEDEM models were applied. The models were validated by comparing the simulated results with the actual bathymetry of a polder at the catchment outlet. Methods of crop management based on rotation and strip cropping were applied for the reduction of soil loss and sediment transport. The comparison shows that the greatest soil-loss intensities were obtained for bare soil without vegetation and for the planting of maize for corn, and the lowest values for the planting of winter wheat. Finally, the effectiveness of row crops and strip cropping in decreasing design floods from the catchment was estimated.
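The USLE/SDR structure used above multiplies empirical factors, A = R·K·LS·C·P, and then scales gross erosion by a sediment delivery ratio (SDR) to get sediment yield at the outlet. The crop-cover effect enters through the C factor. All factor values below are illustrative, not calibrated for this catchment.

```python
# USLE sketch: annual soil loss A = R * K * LS * C * P, with an SDR scaling
# gross erosion to sediment yield. Factor values are illustrative only.

def usle(R, K, LS, C, P):
    """Mean annual soil loss (e.g., t/ha/yr) as the product of USLE factors."""
    return R * K * LS * C * P

bare_soil = usle(R=45.0, K=0.32, LS=1.8, C=1.0, P=1.0)   # no vegetation cover
maize = usle(R=45.0, K=0.32, LS=1.8, C=0.4, P=1.0)       # row crop
wheat = usle(R=45.0, K=0.32, LS=1.8, C=0.1, P=1.0)       # winter wheat
sdr = 0.3                                                # fraction reaching outlet
print(round(bare_soil, 2), round(maize, 2), round(wheat, 2), round(maize * sdr, 2))
```

The ordering wheat < maize < bare soil reproduces the qualitative ranking reported in the abstract, driven entirely by the cover-management factor C.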
Determining Transmission Loss from Measured External and Internal Acoustic Environments
NASA Technical Reports Server (NTRS)
Scogin, Tyler; Smith, A. M.
2012-01-01
An estimate of the internal acoustic environment in each internal cavity of a launch vehicle is needed to ensure survivability of Space Launch System (SLS) avionics. Currently, this is achieved by using the noise reduction database of heritage flight vehicles such as the Space Shuttle and Saturn V for liftoff and ascent flight conditions. Marshall Space Flight Center (MSFC) is conducting a series of transmission loss tests to verify and augment this method. For this test setup, an aluminum orthogrid curved panel representing 1/8th of the circumference of a section of the SLS main structure was mounted in between a reverberation chamber and an anechoic chamber. Transmission loss was measured across the panel using microphones. Data measured during this test will be used to estimate the internal acoustic environments for several of the SLS launch vehicle internal spaces.
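A basic reduction step for such a two-chamber test is to form the noise reduction (NR) across the panel as the difference between energy-averaged sound pressure levels on the source (reverberant) and receiver (anechoic) sides in each frequency band. This is a hedged sketch of that step only; the SPL values are hypothetical and the full transmission-loss correction terms used at MSFC are not reproduced here.

```python
import math

# Sketch of per-band noise reduction from microphone SPLs on each side of the
# panel. SPL values are hypothetical; corrections relating NR to transmission
# loss are omitted.

def energy_avg_spl(levels_db):
    """Energy (pressure-squared) average of microphone SPLs, in dB."""
    mean_p2 = sum(10.0 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_p2)

source = [110.0, 112.0, 111.0]   # reverberant-side microphones, dB
receiver = [85.0, 83.0, 84.0]    # anechoic-side microphones, dB
nr = energy_avg_spl(source) - energy_avg_spl(receiver)
print(round(nr, 1))
```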
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
Medical Expenditures and Earnings Losses Among US Adults With Arthritis in 2013.
Murphy, Louise B; Cisternas, Miriam G; Pasta, David J; Helmick, Charles G; Yelin, Edward H
2018-06-01
We estimated the economic impact of arthritis using 2013 US Medical Expenditure Panel Survey (MEPS) data. We calculated arthritis-attributable and all-cause medical expenditures for adults age ≥18 years and arthritis-attributable earnings losses among those ages 18-64 years who had ever worked. We calculated arthritis-attributable costs using multistage regression-based methods, and conducted sensitivity analyses to estimate costs for 2 other arthritis definitions in MEPS. In 2013, estimated total national arthritis-attributable medical expenditures were $139.8 billion (range $135.9-$157.5 billion). Across expenditure categories, ambulatory care expenditures accounted for nearly half of arthritis-attributable expenditures. All-cause expenditures among adults with arthritis represented 50% of the $1.2 trillion national medical expenditures among all US adults in MEPS. Estimated total national arthritis-attributable earnings losses were $163.7 billion (range $163.7-$170.0 billion). The percentage of those with arthritis who worked in the past year was 7.2 percentage points lower than of those without arthritis (76.8% [95% confidence interval (95% CI) 75.0-78.6] and 84.0% [95% CI 82.5-85.5], respectively, adjusted for sociodemographics and chronic conditions). Total arthritis-attributable medical expenditures and earnings losses were $303.5 billion (range $303.5-$326.9 billion). Total national arthritis-attributable medical care expenditures and earnings losses among adults with arthritis were $303.5 billion in 2013. High arthritis-attributable medical expenditures might be reduced by greater efforts to reduce pain and improve function. The high earnings losses were largely attributable to the substantially lower prevalence of working among those with arthritis compared to those without, signaling the need for interventions that keep people with arthritis in the workforce. © 2017, American College of Rheumatology.
Modelling rainfall interception by forests: a new method for estimating the canopy storage capacity
NASA Astrophysics Data System (ADS)
Pereira, Fernando; Valente, Fernanda; Nóbrega, Cristina
2015-04-01
Evaporation of rainfall intercepted by forests is usually an important part of a catchment water balance. Recognizing the importance of interception loss, several models of the process have been developed. A key parameter of these models is the canopy storage capacity (S), commonly estimated by the so-called Leyton method. However, this method is somewhat subjective in the selection of the storms used to derive S, which is particularly critical when throughfall is highly variable in space. To overcome these problems, a new method for estimating S was proposed in 2009 by Pereira et al. (Agricultural and Forest Meteorology, 149: 680-688), which uses information from a larger number of storms, is less sensitive to throughfall spatial variability and is consistent with the formulation of the two most widely used rainfall interception models, the Gash analytical model and the Rutter model. However, this method has a drawback: it does not account for stemflow (Sf). To allow a wider use of this methodology, we now propose a revised version that makes the estimation of S independent of the importance of stemflow. For the application of this new version, we only need to establish a linear regression of throughfall vs. gross rainfall using data from all storms large enough to saturate the canopy. Two of the parameters used by the Gash and Rutter models, pd (the drainage partitioning coefficient) and S, are then derived from the regression coefficients: pd is estimated first, allowing the subsequent derivation of S; if Sf is not considered, S can be estimated by setting pd = 0. This new method was tested using data from a eucalyptus plantation, a maritime pine forest and a traditional olive grove, all located in Central Portugal. For both the eucalyptus and the pine forests, pd and S estimated by this new approach were comparable to the values derived in previous studies using the standard procedures.
In the case of the traditional olive grove, the estimates obtained by this methodology for pd and S allowed interception loss to be modelled with a normalized averaged error less than 4%. Globally, these results confirm that the method is more robust and certainly less subjective, providing adequate estimates for pd and S which, in turn, are crucial for a good performance of the interception models.
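The regression step can be sketched as follows. This is an illustrative simplification, not Pereira et al.'s exact estimator: for storms large enough to saturate the canopy, throughfall T is regressed on gross rainfall Pg as T = a·Pg + b, and, with stemflow ignored (pd = 0), one common convention recovers the canopy storage capacity from the intercept as S = −b/a. The storm data are made up.

```python
import numpy as np

# Illustrative regression of throughfall on gross rainfall over saturating
# storms, with S recovered from the fitted coefficients. Data are made up and
# stemflow is ignored (pd = 0); this is not the paper's exact estimator.

pg = np.array([6.0, 9.0, 12.0, 18.0, 25.0])   # gross rainfall per storm, mm
t = np.array([4.3, 6.9, 9.5, 14.7, 20.8])     # measured throughfall, mm

a, b = np.polyfit(pg, t, 1)   # slope a and intercept b of T = a*Pg + b
S = -b / a                    # canopy storage capacity under this convention
print(round(a, 3), round(S, 2))
```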
Kordi, Masoumeh; Fakari, Farzaneh Rashidi; Mazloum, Seyed Reza; Khadivzadeh, Talaat; Akhlaghi, Farideh; Tara, Mahmoud
2016-01-01
Introduction: Delay in the diagnosis of bleeding can be due to underestimation of the actual amount of blood loss during delivery. Therefore, this research aimed to compare the efficacy of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume. Materials and Methods: This three-group randomized clinical trial was performed on 105 midwifery students in Mashhad School of Nursing and Midwifery in 2013. The samples were selected by the convenience method and were randomly divided into three groups of web-based, simulation-based, and conventional training. All three groups took an eight-station practical test before and 1 week after the training course; the students of the web-based group were trained online for 1 week, the students of the simulation-based group were trained in the Clinical Skills Centre for 4 h, and the students of the conventional group were trained through a 4-h presentation by the researchers. The data gathering tools were a demographic questionnaire designed by the researchers and an objective structured clinical examination. Data were analyzed by software version 11.5. Results: The accuracy of visual estimation of postpartum hemorrhage volume after training increased significantly in the three groups at all stations (stations 1, 2, 4, 5, 6 and 7 (P = 0.001), station 8 (P = 0.027)) except station 3 (blood loss of 20 cc, P = 0.095), but the mean score of blood loss estimation after training did not differ significantly among the three groups (P = 0.95). Conclusion: Training increased the accuracy of estimation of postpartum hemorrhage volume, but no significant difference was found among the three training groups. Web-based training can thus be used as a substitute for, or supplement to, the two more common simulation-based and conventional methods. PMID:27500175
NASA Astrophysics Data System (ADS)
Petroselli, A.; Grimaldi, S.; Romano, N.
2012-12-01
The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed, which includes the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is here applied to a real case study, and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall-excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
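The SCS-CN event totals that CN4GA uses as constraints come from the standard curve-number equations, in metric form: S = 25400/CN − 254 (mm), Ia = 0.2·S, and direct runoff Q = (P − Ia)²/(P − Ia + S) for P > Ia. The CN and rainfall depth below are illustrative, not from the case study.

```python
# Standard SCS-CN direct-runoff computation (metric form). CN and rainfall
# depth are illustrative values.

def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) for event rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0      # potential maximum retention, mm
    ia = ia_ratio * s             # initial abstraction, mm
    if p_mm <= ia:
        return 0.0                # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(p_mm=60.0, cn=80)
print(round(q, 2))
```

In CN4GA, Ia and the event runoff volume from these equations are the quantities used to calibrate the Green-Ampt hydraulic conductivity.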
Lee, Karl K.; Risley, John C.
2002-03-19
Precipitation-runoff models, base-flow-separation techniques, and stream gain-loss measurements were used to study recharge and ground-water surface-water interaction as part of a study of the ground-water resources of the Willamette River Basin. The study was a cooperative effort between the U.S. Geological Survey and the State of Oregon Water Resources Department. Precipitation-runoff models were used to estimate the water budget of 216 subbasins in the Willamette River Basin. The models were also used to compute long-term average recharge and base flow. Recharge and base-flow estimates will be used as input to a regional ground-water flow model, within the same study. Recharge and base-flow estimates were made using daily streamflow records. Recharge estimates were made at 16 streamflow-gaging-station locations and were compared to recharge estimates from the precipitation-runoff models. Base-flow separation methods were used to identify the base-flow component of streamflow at 52 currently operated and discontinued streamflow-gaging-station locations. Stream gain-loss measurements were made on the Middle Fork Willamette, Willamette, South Yamhill, Pudding, and South Santiam Rivers, and were used to identify and quantify gaining and losing stream reaches both spatially and temporally. These measurements provide further understanding of ground-water/surface-water interactions.
Negredo, F; Blaicher, M; Nesic, A; Kraft, P; Ott, J; Dörfler, W; Koos, C; Rockstuhl, C
2018-06-01
Photonic wire bonds, i.e., freeform waveguides written by 3D direct laser writing, are emerging as a technology to connect different optical chips in fully integrated photonic devices. With the long-term vision of scaling up this technology to a large-scale fabrication process, the in situ optimization of the trajectory of photonic wire bonds becomes essential. A prerequisite for real-time optimization is the availability of a fast loss estimator for single-mode waveguides of arbitrary trajectory. Losses occur because of the bending of the waveguides and at transitions between sections of the waveguide with different curvatures. Here, we present an approach that relies on the fundamental mode approximation, i.e., the assumption that the photonic wire bonds predominantly carry their energy in a single mode. It allows us to predict the pertinent losses quickly and reliably from pre-computed modal properties of the waveguide, enabling fast design of optimum paths.
NASA Technical Reports Server (NTRS)
Della-Corte, Christopher
2012-01-01
Foil gas bearings are a key technology in many commercial and emerging oil-free turbomachinery systems. These bearings are nonlinear and have been difficult to model analytically in terms of performance characteristics such as load capacity, power loss, stiffness, and damping. Previous investigations led to an empirically derived method to estimate load capacity. This method has been a valuable tool in system development. The current work extends this tool concept to include rules for stiffness and damping coefficient estimation. It is expected that these rules will further accelerate the development and deployment of advanced oil-free machines operating on foil gas bearings.
Estimating economic losses from earthquakes using an empirical approach
Jaiswal, Kishor; Wald, David J.
2013-01-01
We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
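The GDP-scaled exposure idea described above can be illustrated schematically. This is a minimal sketch under stated assumptions: the per-capita GDP, country multiplier alpha, and population counts are hypothetical placeholders, and the intensity-dependent loss ratios of the actual PAGER model are omitted.

```python
def economic_exposure(population_cells, gdp_per_capita, alpha):
    """Total economic exposure: population times per-capita GDP, scaled by a
    country-specific multiplier alpha that corrects the disparity between
    economic exposure and annual per-capita GDP."""
    return sum(p * gdp_per_capita * alpha for p in population_cells)

# Hypothetical example: two exposed population grid cells.
exposure = economic_exposure([1000.0, 2000.0], gdp_per_capita=5000.0, alpha=1.5)
```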
Estimating productivity costs using the friction cost approach in practice: a systematic review.
Kigozi, Jesse; Jowett, Sue; Lewis, Martyn; Barton, Pelham; Coast, Joanna
2016-01-01
The choice of the most appropriate approach to valuing productivity loss has received much debate in the literature. The friction cost approach has been proposed as a more appropriate alternative to the human capital approach when valuing productivity loss, although its application remains limited. This study reviews application of the friction cost approach in health economic studies and examines how its use varies in practice across different country settings. A systematic review was performed to identify economic evaluation studies that estimated productivity costs using the friction cost approach and were published in English from 1996 to 2013. A standard template was developed and used to extract information from studies meeting the inclusion criteria. The search yielded 46 studies from 12 countries. Of these, 28 were from the Netherlands. Thirty-five studies reported the length of friction period used, with only 16 stating explicitly the source of the friction period. Nine studies reported the elasticity correction factor used. The reported friction cost approach methods used to derive productivity costs varied in quality across studies from different countries. Few health economic studies have estimated productivity costs using the friction cost approach. The estimation and reporting of productivity costs using this method appears to differ in quality by country. The review reveals gaps and lack of clarity in reporting of methods for friction cost evaluation. Generating reporting guidelines and country-specific parameters for the friction cost approach is recommended if increased application and accuracy of the method are to be realized.
Losses of Soil Carbon upon a Fire on a Drained Forested Raised Bog
NASA Astrophysics Data System (ADS)
Glukhova, T. V.; Sirin, A. A.
2018-05-01
We studied the consequences of a fire that affected 29 ha of a drained forested raised bog in Tver oblast, Central European Russia. The drainage network consisted of open 1-m-deep ditches with 60 to 160 m ditch spacing. The groundwater level (GWL) varied within the studied drained bog. We used the method of assessing the loss of soil carbon (C) based on the difference between the ash concentration in the burnt peat of the upper layer and underlying unburnt layers. The carbon loss was higher near the drainage ditches than in the sites remote from ditches. The sample median values of carbon loss (kg C/m2) were estimated at 0.37 near the drainage ditches and at 0.22 for the remote sites with a distance of 160 m between ditches. They increased to 2.23 and 0.79 near and far from the drainage ditches for 106 m ditch spacing, and ranged from 1.13 to 2.10 near the drainage ditches and were equal to 0.45 at the remote sites for 60 m ditch spacing. The maximum loss of C was at the bog margin with the 70-cm-deep GWL; the sample median was equal to 2.97 kg C/m2. The results obtained for C loss from the wildfire on the raised bog agree with the estimates obtained by other authors (1.45-4.90 kg C/m2) and confirm the importance of taking such loss into account in the estimates of the carbon budget of peat soils (Histosols).
Energy Deficit Required for Rapid Weight Loss in Elite Collegiate Wrestlers.
Kondo, Emi; Sagayama, Hiroyuki; Yamada, Yosuke; Shiose, Keisuke; Osawa, Takuya; Motonaga, Keiko; Ouchi, Shiori; Kamei, Akiko; Nakajima, Kohei; Higaki, Yasuki; Tanaka, Hiroaki; Takahashi, Hideyuki; Okamura, Koji
2018-04-26
To determine the energy density of rapid weight loss (RWL) in weight-classified sports, eight male elite wrestlers were instructed to lose 6% of body mass (BM) within 53 h. The energy deficit during the RWL was calculated by subtracting total energy expenditure (TEE), determined using the doubly labeled water (DLW) method, from energy intake (EI) assessed with diet records. It was also estimated from the body composition change determined with the four-component model (4C) and other conventional methods. BM decreased significantly by 4.7 ± 0.5 kg (6.4 ± 0.5%). Total body water loss was the major component of the BM loss (71.0 ± 7.6%). TEE was 9446 ± 1422 kcal and EI was 2366 ± 1184 kcal during the 53-h RWL; therefore, the energy deficit was 7080 ± 1525 kcal. Thus, the energy density was 1507 ± 279 kcal/kg ΔBM during the RWL, comparable with values obtained using the 4C, three-component model, dual-energy X-ray absorptiometry, and stable isotope dilution. The energy density for RWL of wrestlers is lower than that commonly used (7400 or 7700 kcal/kg ΔBM). Although RWL is not recommended, we propose that the extreme energy restriction commonly practiced, based on 7400 or 7700 kcal/kg ΔBM, appears inappropriate during RWL.
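The headline arithmetic of the abstract can be reproduced directly from the reported group means (per-subject values in the study will differ):

```python
tee_kcal = 9446.0    # total energy expenditure over 53 h (doubly labeled water)
ei_kcal = 2366.0     # energy intake over the same period (diet records)
bm_loss_kg = 4.7     # mean body-mass loss

energy_deficit = tee_kcal - ei_kcal            # magnitude of deficit: 7080 kcal
energy_density = energy_deficit / bm_loss_kg   # ~1506 kcal per kg of BM lost
```

The per-subject mean reported in the abstract (1507 ± 279 kcal/kg ΔBM) is close to, but not identical with, this ratio of group means.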
Decay in blood loss estimation skills after web-based didactic training.
Toledo, Paloma; Eosakul, Stanley T; Goetz, Kristopher; Wong, Cynthia A; Grobman, William A
2012-02-01
Accuracy in blood loss estimation has been shown to improve immediately after didactic training. The objective of this study was to evaluate retention of blood loss estimation skills 9 months after a didactic web-based training. Forty-four participants were recruited from a cohort that had undergone web-based training and testing in blood loss estimation. The web-based posttraining test, consisting of pictures of simulated blood loss, was repeated 9 months after the initial training and testing. The primary outcome was the difference in accuracy of estimated blood loss (percent error) at 9 months compared with immediately posttraining. At the 9-month follow-up, the median error in estimation worsened to -34.6%. Although better than the pretraining error of -47.8% (P = 0.003), the 9-month error was significantly less accurate than the immediate posttraining error of -13.5% (P = 0.01). Decay in blood loss estimation skills occurs by 9 months after didactic training.
A hybrid frame concealment algorithm for H.264/AVC.
Yan, Bo; Gharavi, Hamid
2010-01-01
In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame; it provides more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
Accelerated West Antarctic ice mass loss continues to outpace East Antarctic gains
NASA Astrophysics Data System (ADS)
Harig, Christopher; Simons, Frederik J.
2015-04-01
While multiple data sources have confirmed that Antarctica is losing ice at an accelerating rate, different measurement techniques estimate the details of its geographically highly variable mass balance with different levels of accuracy, spatio-temporal resolution, and coverage. Some scope remains for methodological improvements using a single data type. In this study we report our progress in increasing the accuracy and spatial resolution of time-variable gravimetry from the Gravity Recovery and Climate Experiment (GRACE). We determine the geographic pattern of ice mass change in Antarctica between January 2003 and June 2014, accounting for glacio-isostatic adjustment (GIA) using the IJ05_R2 model. Expressing the unknown signal in a sparse Slepian basis constructed by optimization to prevent leakage out of the regions of interest, we use robust signal processing and statistical estimation methods. Applying those to the latest time series of monthly GRACE solutions we map Antarctica's mass loss in space and time as well as can be recovered from satellite gravity alone. Ignoring GIA model uncertainty, over the period 2003-2014, West Antarctica has been losing ice mass at a rate of −121 ± 8 Gt/yr and has experienced large acceleration of ice mass losses along the Amundsen Sea coast of −18 ± 5 Gt/yr², doubling the mass loss rate in the past six years. The Antarctic Peninsula shows slightly accelerating ice mass loss, with larger accelerated losses in the southern half of the Peninsula. Ice mass gains due to snowfall in Dronning Maud Land have continued to add about half the amount of West Antarctica's loss back onto the continent over the last decade. We estimate the overall mass losses from Antarctica since January 2003 at −92 ± 10 Gt/yr.
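The rate and acceleration figures above correspond to the linear and quadratic coefficients of a trend fit to a mass time series. The sketch below fits a synthetic, noise-free series built from the reported West Antarctic values; the real analysis uses a Slepian basis and robust estimation, not a plain polynomial fit.

```python
import numpy as np

t = np.arange(0.0, 11.5, 1.0 / 12.0)        # years since January 2003, monthly
mass_gt = -121.0 * t - 0.5 * 18.0 * t**2    # synthetic mass anomaly series (Gt)

c2, c1, c0 = np.polyfit(t, mass_gt, 2)      # quadratic trend fit
rate = c1                                   # mass rate in Gt/yr at t = 0
acceleration = 2.0 * c2                     # acceleration in Gt/yr^2
```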
[Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].
Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling
2013-12-01
Distortion product otoacoustic emission (DPOAE) signals can be used for the diagnosis of hearing loss, so they have important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool to record DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function and locally defined weighting matrices to obtain a smaller estimation variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated according to the least squares principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of clearer DPOAE fine structure.
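The core step of the approach above is a weighted least squares solve with a locally constructed weighting matrix. A generic sketch of that step follows; the grouping of error vectors and the construction of the local weights are specific to the paper and are not reproduced here.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve for beta minimizing (y - X @ beta).T @ W @ (y - X @ beta),
    with W = diag(w), via the weighted normal equations."""
    W = np.diag(np.asarray(w, dtype=float))
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With all weights equal, this reduces to the ordinary least squares estimate used as the first stage of the LWLSE procedure.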
The Cost of Crime to Society: New Crime-Specific Estimates for Policy and Program Evaluation
French, Michael T.; Fang, Hai
2010-01-01
Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than ten years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. PMID:20071107
The cost of crime to society: new crime-specific estimates for policy and program evaluation.
McCollister, Kathryn E; French, Michael T; Fang, Hai
2010-04-01
Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than 10 years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Betanzos Arroyo, L. I.; Prol Ledesma, R. M.; da Silva Pinto da Rocha, F. J. P.
2014-12-01
The Universal Soil Loss Equation (USLE), which is considered a contemporary approach to soil loss assessment, was used to assess soil erosion hazard in the Zacatecas mining district. The purpose of this study is to produce erosion susceptibility maps for an area that is polluted with mining tailings, which are susceptible to erosion and can disperse particles that contain heavy metals and other toxic elements. The USLE method is based on the estimation of soil loss per unit area and takes into account specific parameters such as precipitation data, topography, soil erodibility, erosivity, and runoff. The R-factor (rainfall erosivity) was calculated from monthly and annual precipitation data. The K-factor (soil erodibility) was estimated using soil maps available from CONABIO at a scale of 1:250000. The LS-factor (slope length and steepness) was determined from a 30-m digital elevation model. A raster-based Geographic Information System (GIS) was used to interactively calculate soil loss and map erosion hazard. The results show that estimated erosion rates ranged from 0 to 4770.48 t/ha year. Most of the total area of the Zacatecas mining district has nil to extremely slight erosion severity. Small areas in the central and southern parts of the study area show critical conditions requiring sustainable land management.
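The USLE combines the factors above as a simple product, A = R · K · LS · C · P. A minimal sketch with illustrative factor values; the abstract describes R, K, and LS, so the cover and practice factors C and P default to 1 here as placeholders.

```python
def usle_soil_loss(r, k, ls, c=1.0, p=1.0):
    """Mean annual soil loss A (t/ha/yr) as the product of USLE factors:
    rainfall erosivity R, soil erodibility K, slope length-steepness LS,
    cover management C, and support practice P."""
    return r * k * ls * c * p

# Illustrative values only: R = 100, K = 0.3, LS = 2.0.
a = usle_soil_loss(r=100.0, k=0.3, ls=2.0)
```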
Ultrasound viscoelasticity assessment using an adaptive torsional shear wave propagation method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouared, Abderrahmane; Kazemirad, Siavash; Montagnon, Emmanuel
2016-04-15
Purpose: Different approaches have been used in dynamic elastography to assess mechanical properties of biological tissues. Most techniques are based on a simple inversion based on the measurement of the shear wave speed to assess elasticity, whereas some recent strategies use more elaborate analytical or finite element method (FEM) models. In this study, a new method is proposed for the quantification of both shear storage and loss moduli of confined lesions, in the context of breast imaging, using adaptive torsional shear waves (ATSWs) generated remotely with radiation pressure. Methods: A FEM model was developed to solve the inverse wave propagation problem and obtain viscoelastic properties of interrogated media. The inverse problem was formulated and solved in the frequency domain, and its robustness to noise and geometric constraints was evaluated. The proposed model was validated in vitro with two independent rheology methods on several homogeneous and heterogeneous breast tissue-mimicking phantoms over a broad range of frequencies (up to 400 Hz). Results: Viscoelastic properties matched benchmark rheology methods with discrepancies of 8%–38% for the shear modulus G′ and 9%–67% for the loss modulus G″. The robustness study indicated good estimations of storage and loss moduli (maximum mean errors of 19% on G′ and 32% on G″) for signal-to-noise ratios between 19.5 and 8.5 dB. Larger errors were noticed in the case of biases in lesion dimension and position. Conclusions: The ATSW method revealed that it is possible to estimate the viscoelasticity of biological tissues with torsional shear waves when small biases in lesion geometry exist.
Simultaneous quaternion estimation (QUEST) and bias determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
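Minimizing Wahba's loss function, as in the QUEST-based method above, is classically done with Davenport's q-method: the optimal quaternion is the eigenvector of a 4x4 matrix associated with its largest eigenvalue. The sketch below is that classical solution, not the paper's algorithm (which additionally estimates gyro drift biases).

```python
import numpy as np

def davenport_q_method(body_vecs, ref_vecs, weights):
    """Quaternion (vector part first, scalar last) minimizing Wahba's loss
    for weighted pairs of body-frame and reference-frame unit vectors."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    sigma = np.trace(B)
    K = np.zeros((4, 4))                   # Davenport's K matrix
    K[:3, :3] = B + B.T - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)   # eigenvalues in ascending order
    return eigvecs[:, -1]                  # eigenvector of the largest one
```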
Pérez-Rodríguez, Raquel; Marques, Maria Jose; Bienes, Ramón
2007-05-25
The rate of soil erosion in pine forests (Pinus halepensis) located in the southeast of Madrid has been estimated using dendrochronological analysis, based on the change in ring-growth pattern from concentric to eccentric when a root is exposed. Using 49 roots spread across five inclined areas, it was found that the length and direction of the hillsides, as well as their vegetation cover, affect the rate of erosion, while the slope itself does not. The erosion rates found for the different areas studied vary between 3.5 and 8.8 mm year(-1), that is, between 40 and 101 t ha(-1) year(-1), respectively. These values are between 2 and 3 times greater than those predicted by USLE, indicating that this equation underestimates soil loss under Central Spain's Mediterranean conditions. Nonetheless, both methods (using dendrochronology to determine actual soil loss and theoretical prediction with USLE) establish the same significant differences among the areas studied, allowing for a comparative estimate of the severity of the area's erosion problem.
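The depth-to-mass conversion implied by the figures above is straightforward: 1 mm of soil removed over 1 ha is 10 m³. A bulk density of about 1.15 t/m³ (an assumption; the abstract does not state one) reproduces the reported pairing of 3.5 mm/yr with ~40 t/ha/yr and 8.8 mm/yr with ~101 t/ha/yr.

```python
def soil_loss_t_per_ha(depth_mm_per_yr: float, bulk_density_t_m3: float) -> float:
    """Convert an erosion depth rate (mm/yr) to a mass loss rate (t/ha/yr)."""
    return depth_mm_per_yr * 10.0 * bulk_density_t_m3  # 1 mm over 1 ha = 10 m^3
```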
NASA Astrophysics Data System (ADS)
Hiramatsu, K.; Matsui, T.; Ito, A.; Miyakita, T.; Osada, Y.; Yamamoto, T.
2004-10-01
Aircraft noise measurements were recorded in the residential areas in the vicinity of Kadena Air Base, Okinawa, in 1968 and 1972, at the time of the Vietnam war. The estimated equivalent continuous A-weighted sound pressure level LAeq for 24 h was 85 dB. The time history of the sound level during 24 h was estimated from the measurement conducted in 1968, and the sound level was converted into the spectrum level at the centre frequency of the critical band of temporary threshold shift (TTS) using the results of spectrum analysis of aircraft noise at the airfield. With the spectrum level and its time history, TTS was calculated as a function of time and level change. The permanent threshold shift was also calculated by means of Robinson's method and ISO's method. The results indicate that the noise exposure around Kadena Air Base was hazardous to hearing and is likely to have caused hearing loss in people living in its vicinity.
Kazemipoor, Mahnaz; Hajifaraji, Majid; Radzi, Che Wan Jasimah Bt Wan Mohamed; Shamshirband, Shahaboddin; Petković, Dalibor; Mat Kiah, Miss Laiha
2015-01-01
This research examines the precision of an adaptive neuro-fuzzy computing technique in estimating the anti-obesity property of a potent medicinal plant in a clinical dietary intervention. Even though a number of mathematical functions such as SPSS analysis have been proposed for modeling the anti-obesity properties estimation in terms of reduction in body mass index (BMI), body fat percentage, and body weight loss, there are still disadvantages of the models like very demanding in terms of calculation time. Since it is a very crucial problem, in this paper a process was constructed which simulates the anti-obesity activities of caraway (Carum carvi) a traditional medicine on obese women with adaptive neuro-fuzzy inference (ANFIS) method. The ANFIS results are compared with the support vector regression (SVR) results using root-mean-square error (RMSE) and coefficient of determination (R(2)). The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the ANFIS approach. The following statistical characteristics are obtained for BMI loss estimation: RMSE=0.032118 and R(2)=0.9964 in ANFIS testing and RMSE=0.47287 and R(2)=0.361 in SVR testing. For fat loss estimation: RMSE=0.23787 and R(2)=0.8599 in ANFIS testing and RMSE=0.32822 and R(2)=0.7814 in SVR testing. For weight loss estimation: RMSE=0.00000035601 and R(2)=1 in ANFIS testing and RMSE=0.17192 and R(2)=0.6607 in SVR testing. Because of that, it can be applied for practical purposes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
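The two comparison metrics used above, RMSE and the coefficient of determination, are standard. A minimal sketch of both, assuming nothing about the study's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observations and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    return 1.0 - ss_res / ss_tot
```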
Costs Attributable to Overweight and Obesity in Working Asthma Patients in the United States.
Chang, Chongwon; Lee, Seung Mi; Choi, Byoung Whui; Song, Jong Hwa; Song, Hee; Jung, Sujin; Bai, Yoon Kyeong; Park, Haedong; Jeung, Seungwon; Suh, Dong Churl
2017-01-01
To estimate annual health care and productivity loss costs attributable to overweight or obesity in working asthmatic patients. This study was conducted using the 2003-2013 Medical Expenditure Panel Survey (MEPS) in the United States. Patients aged 18 to 64 years with asthma were identified via self-reported diagnosis, a Clinical Classification Code of 128, or an ICD-9-CM code of 493.xx. All-cause health care costs were estimated using a generalized linear model with a log link function and a gamma distribution. Productivity loss costs were estimated in relation to hourly wages and missed work days, and a two-part model was used to adjust for patients with zero costs. To estimate the costs attributable to overweight or obesity in asthma patients, costs were estimated by the recycled prediction method. Among 11670 working patients with a diagnosis of asthma, 4428 (35.2%) were obese and 3761 (33.0%) were overweight. The health care costs attributable to obesity and overweight in working asthma patients were estimated to be $878 [95% confidence interval (CI): $861-$895] and $257 (95% CI: $251-$262) per person per year, respectively, from 2003 to 2013. The productivity loss costs attributable to obesity and overweight among working asthma patients were $256 (95% CI: $253-$260) and $26 (95% CI: $26-$27) per person per year, respectively. Health care and productivity loss costs attributable to overweight and obesity in asthma patients are substantial. This study's results highlight the importance of effective public health and educational initiatives targeted at reducing overweight and obesity among patients with asthma, which may help lower the economic burden of asthma.
Genetic influences on bone loss in the San Antonio Family Osteoporosis Study
Shaffer, John R.; Kammerer, Candace M.; Bruder, Jan M.; Cole, Shelley A.; Dyer, Thomas D.; Almasy, Laura; MacCluer, Jean W.; Blangero, John; Bauer, Richard L.; Mitchell, Braxton D.
2009-01-01
Summary The genetic contribution to age-related bone loss is not well understood. We estimated that genes accounted for 25–45% of variation in 5-year change in bone mineral density in men and women. An autosome-wide linkage scan yielded no significant evidence for chromosomal regions implicated in bone loss. Introduction The contribution of genetics to acquisition of peak bone mass is well documented, but little is known about the influence of genes on subsequent bone loss with age. We therefore measured 5-year change in bone mineral density (BMD) in 300 Mexican Americans (>45 years of age) from the San Antonio Family Osteoporosis Study to identify genetic factors influencing bone loss. Methods Annualized change in BMD was calculated from measurements taken 5.5 years apart. Heritability (h2) of BMD change was estimated using variance components methods, and autosome-wide linkage analysis was carried out using 460 microsatellite markers at a mean 7.6 cM interval density. Results Rate of BMD change was heritable at the forearm (h2=0.31, p=0.021), hip (h2=0.44, p=0.017), and spine (h2=0.42, p=0.005), but not whole body (h2=0.18, p=0.123). Covariates associated with rapid bone loss (advanced age, baseline BMD, female sex, low baseline weight, postmenopausal status, and interim weight loss) accounted for 10% to 28% of trait variation. No significant evidence of linkage was observed at any skeletal site. Conclusions This is one of the first studies to report significant heritability of BMD change for weight-bearing and non-weight-bearing bones in an unselected population and the first linkage scan for change in BMD. PMID:18414963
Danaei, Goodarz; Robins, James M; Young, Jessica G; Hu, Frank B; Manson, JoAnn E; Hernán, Miguel A
2016-03-01
Evidence for the effect of weight loss on coronary heart disease (CHD) or mortality has been mixed. The effect estimates can be confounded by undiagnosed diseases that may affect weight loss. We used data from the Nurses' Health Study to estimate the 26-year risk of CHD under several hypothetical weight loss strategies. We applied the parametric g-formula and implemented a novel sensitivity analysis for unmeasured confounding due to undiagnosed disease by imposing a lag time for the effect of weight loss on chronic disease. Several sensitivity analyses were conducted. The estimated 26-year risk of CHD did not change under weight loss strategies using lag times from 0 to 18 years. For a 6-year lag time, the risk ratios of CHD for weight loss compared with no weight loss ranged from 1.00 (0.99, 1.02) to 1.02 (0.99, 1.05) for different degrees of weight loss, with and without restricting the weight loss strategy to participants with no major chronic disease. Similarly, no protective effect of weight loss was estimated for mortality risk. In contrast, we estimated a protective effect of weight loss on risk of type 2 diabetes. We estimated that maintaining or losing weight after becoming overweight or obese does not reduce the risk of CHD or death in this cohort of middle-aged US women. Unmeasured confounding, measurement error, and model misspecification are possible explanations, but these did not prevent us from estimating a beneficial effect of weight loss on diabetes.
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil.
Zhang, Xinbao; Long, Yi; He, Xiubin; Fu, Jiexiong; Zhang, Yunqi
2008-08-01
(137)Cs is an artificial radionuclide with a half-life of 30.12 years which was released into the environment as a result of atmospheric testing of thermo-nuclear weapons, primarily during the 1950s-1970s, with the maximum rate of (137)Cs fallout from the atmosphere in 1963. (137)Cs fallout, which reaches the ground mostly with precipitation, is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil. Its subsequent redistribution is associated with movements of the soil or sediment particles. The (137)Cs nuclide tracing technique has been used for assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the (137)Cs depth distribution in the profile, in which the maximum (137)Cs concentration occurs in the surface horizon and decreases exponentially with depth. The model assumed that the total (137)Cs fallout was deposited on the earth's surface in 1963 and that the (137)Cs profile shape has not changed with time. The model has been widely used for assessment of soil losses on undisturbed land. However, temporal variations of the (137)Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the previous simple profile-shape model. Thus, soil losses are overestimated by that model. On the basis of the erosion assessment model developed by Walling and He [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the (137)Cs transport process in the eroded soil profile, simplify the model, and develop a method to estimate the soil erosion rate more expediently.
To compare the soil erosion rates calculated by the simple profile-shape model and the simplified transport model, soil losses corresponding to different (137)Cs loss proportions of the reference inventory at the Kaixian site in the Three Gorges Region, China were estimated with both models. The overestimation of soil loss by the earlier simple profile-shape model clearly increases with both the time elapsed between the sampling year and 1963 and the (137)Cs loss proportion of the reference inventory. For (137)Cs loss proportions of 20-80% of the reference inventory at the Kaixian site in 2004, the annual soil loss depths estimated by the new simplified transport model are only 57.90-56.24% of the values estimated by the earlier model.
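For illustration, the widely used simple profile-shape model described above can be sketched in a few lines. This is a hedged reconstruction, not the authors' simplified transport model: it assumes an exponential (137)Cs depth distribution with a relaxation depth h0 (a site-specific parameter; the value below is assumed), so a fractional inventory loss X implies an eroded depth h = h0·ln(1/(1-X)).

```python
import math

def erosion_depth_profile_shape(loss_fraction, h0_cm, sample_year, ref_year=1963):
    """Annual soil-loss depth (cm/yr) from the simple profile-shape model.

    Assumes the 137Cs depth distribution is exponential,
    C(x) = C(0) * exp(-x / h0), so a fractional inventory loss X
    corresponds to an eroded depth h = h0 * ln(1 / (1 - X)),
    averaged over the years since the 1963 fallout peak.
    """
    if not 0 <= loss_fraction < 1:
        raise ValueError("loss fraction must be in [0, 1)")
    h = h0_cm * math.log(1.0 / (1.0 - loss_fraction))
    return h / (sample_year - ref_year)

# Example: 50% inventory loss, assumed relaxation depth 4 cm, sampled in 2004.
rate = erosion_depth_profile_shape(0.5, 4.0, 2004)
print(f"{rate:.4f} cm/yr")  # ~0.068 cm/yr
```

Because the eroded depth is fixed by the inventory loss while the averaging window grows with time since 1963, the estimated annual rate falls as the sampling year recedes from 1963, which is where the overestimation discussed above enters.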
Evaluation of neural cochlear structures after noise trauma using x-ray tomography
NASA Astrophysics Data System (ADS)
Richter, Claus-Peter; Liddy, Whitney; Vo, Amanda; Young, Hunter; Stock, Stuart; Xiao, Xianghui; Whitlon, Donna
2014-09-01
According to the World Health Organization (WHO), in 2010 hearing loss affected more than 278 million people worldwide. The loss of hearing and communication has significant consequences for the emotional well-being of each affected individual. The estimated socio-economic impact is about $100 billion in unrealized household income per year. Despite this impact on society, no Food and Drug Administration (FDA)-approved drug intervention is available today that would either protect against or reverse the effects of hearing loss. A limiting factor for all efforts to validate drugs for treatment is the time-consuming animal experiments and subsequent histology. Here, we present an imaging method that is superior to current gold-standard methods in flexibility and evaluation time, reducing tissue processing times from weeks to hours. As an example, we show that Brain-Derived Neurotrophic Factor (BDNF) reduces the effect of noise-induced hearing loss.
Nonparametric Conditional Estimation
1987-02-01
...the data because the statistician has complete control over the method. It is especially reasonable when there is a bona fide loss function to which... For example, the sample mean is m(Fn). Most calculations that statisticians perform on a set of data can be expressed as statistical functionals on...
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
The single-index varying-coefficient model is an important mathematical tool for modeling nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, because the check loss function is robust to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
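The robustness claim above rests on the quantile check loss, whose linear tails penalize large residuals far less than squared error does. A minimal sketch of the standard check loss (not the authors' full shrinkage-selection procedure):

```python
import numpy as np

def check_loss(u, tau=0.5):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0}).

    For tau = 0.5 this is half the absolute loss: it grows linearly
    in the residual, so a single outlier contributes far less than
    under the squared-error loss, which grows quadratically.
    """
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0).astype(float))

residuals = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(check_loss(residuals, tau=0.5).tolist())  # [1.0, 0.25, 0.0, 0.25, 1.0]
```

Choosing tau away from 0.5 tilts the penalty, which is what makes the same loss usable for quantile regression.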
Spatial Data Mining for Estimating Cover Management Factor of Universal Soil Loss Equation
NASA Astrophysics Data System (ADS)
Tsai, F.; Lin, T. C.; Chiang, S. H.; Chen, W. W.
2016-12-01
Universal Soil Loss Equation (USLE) is a widely used mathematical model that describes long-term soil erosion processes. Among the six soil erosion risk factors of USLE, the cover-management factor (C-factor) is related to land cover/land use. The C-factor ranges from 0.001 to 1, so it alone can cause a thousandfold difference in a soil erosion analysis using USLE. Traditional methods for estimating the USLE C-factor include in situ experiments, soil physical parameter models, USLE look-up tables with land use maps, and regression models between vegetation indices and C-factors. However, these methods are either difficult or too expensive to implement over large areas, and the C-factor values they produce cannot be updated frequently. To address this issue, this research developed a spatial data mining approach to estimate C-factor values from assorted spatial datasets for a multi-temporal (2004 to 2008) annual soil loss analysis of a reservoir watershed in northern Taiwan. The idea is to establish the relationship between the USLE C-factor and spatial data consisting of vegetation indices and texture features extracted from satellite images, soil and geology attributes, digital elevation models, and road and river distributions. A decision tree classifier was used to rank influential conditional attributes in the preliminary data mining. Factor simplification and separation were then considered to optimize the model, and a random forest classifier was used to analyze nine simplified factor groups. Experimental results indicate that the overall accuracy of the data mining model is about 79%, with a kappa value of 0.76. The estimated soil erosion amounts in 2004-2008 according to the data mining results are about 50.39-74.57 ton/ha-year after applying the sediment delivery ratio and correction coefficient.
Compared with estimates calculated using C-factors from look-up tables, the soil erosion values estimated with C-factors derived from the spatial data mining results agree more closely with the values published by the watershed administration authority.
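The thousandfold leverage of the C-factor noted above follows directly from USLE's multiplicative form, A = R·K·L·S·C·P. A minimal sketch with illustrative (not site-specific) factor values:

```python
def usle_soil_loss(R, K, L, S, C, P):
    """Universal Soil Loss Equation: A = R * K * L * S * C * P.

    A is the long-term average annual soil loss; C (cover-management)
    ranges from about 0.001 to 1, so this one factor can swing the
    estimate by three orders of magnitude.
    """
    return R * K * L * S * C * P

# Illustrative factor values (assumed, not from the study):
base = dict(R=300.0, K=0.3, L=1.2, S=1.5, P=1.0)
bare_soil = usle_soil_loss(C=1.0, **base)
dense_cover = usle_soil_loss(C=0.001, **base)
print(bare_soil / dense_cover)  # roughly a thousandfold difference
```

This is why an automated, frequently updatable C-factor estimate (as in the data mining approach above) matters more than refinements to any of the other five factors.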
Photometry-based estimation of the total number of stars in the Universe.
Manojlović, Lazo M
2015-07-20
A novel photometry-based estimation of the total number of stars in the Universe is presented. The estimation method is based on the energy conservation law and actual measurements of extragalactic background light levels. By assuming that every radiated photon is kept within the Universe's volume, i.e., by approximating the Universe as an integrating cavity without losses, a total of about 6×10²² stars in the Universe has been obtained.
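The "integrating cavity" argument can be sketched as an order-of-magnitude calculation: the total extragalactic background light (EBL) energy stored in the Universe's volume, divided by the energy a typical star radiates over its lifetime. All constants below are round illustrative values, not those used in the paper:

```python
import math

# Order-of-magnitude sketch of the integrating-cavity estimate.
C = 2.998e8              # speed of light, m/s
I_EBL = 100e-9           # EBL intensity, W m^-2 sr^-1 (assumed round value)
R_UNIV = 4.4e26          # comoving radius of observable Universe, m (assumed)
L_STAR = 3.8e26          # typical stellar luminosity ~ L_sun, W (assumed)
T_STAR = 1e10 * 3.156e7  # radiating lifetime ~ 10 Gyr in seconds (assumed)

u = 4 * math.pi * I_EBL / C            # isotropic radiation energy density, J/m^3
V = 4.0 / 3.0 * math.pi * R_UNIV**3    # volume of the "cavity", m^3
E_total = u * V                        # total stored radiant energy, J
E_per_star = L_STAR * T_STAR           # energy radiated by one star, J
N = E_total / E_per_star
print(f"N ~ 10^{math.log10(N):.0f} stars")
```

With these round inputs the sketch lands within an order of magnitude of the paper's 6×10²²; the result is sensitive to the assumed EBL level, mean luminosity, and lifetime.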
Species coextinctions and the biodiversity crisis.
Koh, Lian Pin; Dunn, Robert R; Sodhi, Navjot S; Colwell, Robert K; Proctor, Heather C; Smith, Vincent S
2004-09-10
To assess the coextinction of species (the loss of a species upon the loss of another), we present a probabilistic model, scaled with empirical data. The model examines the relationship between coextinction levels (proportion of species extinct) of affiliates and their hosts across a wide range of coevolved interspecific systems: pollinating Ficus wasps and Ficus, parasites and their hosts, butterflies and their larval host plants, and ant butterflies and their host ants. Applying a nomographic method based on mean host specificity (number of host species per affiliate species), we estimate that 6300 affiliate species are "coendangered" with host species currently listed as endangered. Current extinction estimates need to be recalibrated by taking species coextinctions into account.
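The core probabilistic idea above can be sketched with a toy Monte Carlo: an affiliate with host specificity s goes coextinct only if all s of its hosts are lost. This is a hedged illustration, not the authors' empirically scaled nomographic model:

```python
import random

def expected_coextinctions(host_specificities, p_host_loss, trials=2000, seed=1):
    """Monte Carlo estimate of expected coextinctions.

    host_specificities: number of host species per affiliate species.
    Each host is assumed lost independently with probability
    p_host_loss; an affiliate goes extinct only when all of its
    hosts are lost (probability p**s for specificity s).
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(
            all(rng.random() < p_host_loss for _ in range(s))
            for s in host_specificities
        )
    return total / trials

# 1000 single-host affiliates with 10% host loss: since p**s = 0.1
# for s = 1, roughly 100 coextinctions are expected.
est = expected_coextinctions([1] * 1000, 0.10)
```

Higher host specificity counts (s > 1) drive p**s down rapidly, which is why mean host specificity is the key input to the nomographic method described above.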
Effects of habitat disturbance on tropical forest biodiversity
Alroy, John
2017-01-01
It is widely expected that habitat destruction in the tropics will cause a mass extinction in coming years, but the potential magnitude of the loss is unclear. Existing literature has focused on estimating global extinction rates indirectly or on quantifying effects only at local and regional scales. This paper directly predicts global losses in 11 groups of organisms that would ensue from disturbance of all remaining tropical forest habitats. The results are based on applying a highly accurate method of estimating species richness to 875 ecological samples. About 41% of the tree and animal species in this dataset are absent from disturbed habitats, even though most samples do still represent forests of some kind. The individual figures are 30% for trees and 8–65% for 10 animal groups. Local communities are more robust to disturbance because losses are partially balanced out by gains resulting from homogenization. PMID:28461482
Quantitative Analysis Method of Output Loss due to Restriction for Grid-connected PV Systems
NASA Astrophysics Data System (ADS)
Ueda, Yuzuru; Oozeki, Takashi; Kurokawa, Kosuke; Itou, Takamitsu; Kitamura, Kiyoyuki; Miyamoto, Yusuke; Yokota, Masaharu; Sugihara, Hiroyuki
The voltage of a power distribution line increases due to reverse power flow from grid-connected PV systems. Under high-density grid connection, the voltage rise is greater than for a stand-alone grid-connected system. To prevent overvoltage on the distribution line, a PV system's output is restricted when the line voltage approaches the upper limit of the control range. Because of this interaction, the output loss is larger in the high-density case. This research developed a quantitative analysis method for PV system output and losses to clarify the behavior of grid-connected PV systems. All measured data are classified into loss factors using 1-minute averages of 1-second data instead of the typical 1-hour averages. The operating point on the I-V curve is estimated from module temperature, array output voltage, array output current, and solar irradiance in order to quantify the loss due to output restriction. As a result, the loss due to output restriction is successfully quantified and the behavior of output restriction is clarified.
Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field.
Dimmock, Stephen G; Kouwenberg, Roy; Mitchell, Olivia S; Peijnenburg, Kim
2015-12-01
We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model's estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences.
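The α-MaxMin model described above evaluates an ambiguous prospect as a weighted combination of the worst-case and best-case expected utilities over a set of priors. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def alpha_maxmin_value(outcomes, prior_set, alpha, utility=lambda x: x):
    """alpha-MaxMin utility over a set of priors.

    U = alpha * min_p E_p[u] + (1 - alpha) * max_p E_p[u].
    alpha = 1 recovers pure MaxMin (full ambiguity aversion),
    alpha = 0 recovers pure MaxMax (full ambiguity seeking);
    the spread of the prior set captures perceived ambiguity.
    """
    evs = [sum(p * utility(x) for p, x in zip(prior, outcomes))
           for prior in prior_set]
    return alpha * min(evs) + (1 - alpha) * max(evs)

# Bet paying 100 or 0; perceived win probability lies in [0.4, 0.6].
priors = [(0.4, 0.6), (0.6, 0.4)]
print(alpha_maxmin_value([100.0, 0.0], priors, alpha=0.75))  # 45.0
```

Separating alpha (attitude) from the width of the prior set (perception) is exactly what lets the estimation disentangle ambiguity aversion from perceived ambiguity.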
Diagnostic accuracy of MRI in the measurement of glenoid bone loss.
Gyftopoulos, Soterios; Hasan, Saqib; Bencardino, Jenny; Mayo, Jason; Nayyar, Samir; Babb, James; Jazrawi, Laith
2012-10-01
The purpose of this study is to assess the accuracy of MRI quantification of glenoid bone loss and to compare the diagnostic accuracy of MRI to CT in the measurement of glenoid bone loss. MRI, CT, and 3D CT examinations of 18 cadaveric glenoids were obtained after the creation of defects along the anterior and anteroinferior glenoid. The defects were measured by three readers separately and blindly using the circle method. These measurements were compared with measurements made on digital photographic images of the cadaveric glenoids. Paired sample Student t tests were used to compare the imaging modalities. Concordance correlation coefficients were also calculated to measure interobserver agreement. Our data show that MRI could be used to accurately measure glenoid bone loss with a small margin of error (mean, 3.44%; range, 2.06-5.94%) in estimated percentage loss. MRI accuracy was similar to that of both CT and 3D CT for glenoid loss measurements in our study for the readers familiar with the circle method, with 1.3% as the maximum expected difference in accuracy of the percentage bone loss between the different modalities (95% confidence). Glenoid bone loss can be accurately measured on MRI using the circle method. The MRI quantification of glenoid bone loss compares favorably to measurements obtained using 3D CT and CT. The accuracy of the measurements correlates with the level of training, and a learning curve is expected before mastering this technique.
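As a rough sketch of the width-based variant of the circle method (a best-fit circle is drawn on the inferior glenoid and the defect is expressed against its diameter; some protocols use an area-based variant instead, and this simplified form is an assumption, not necessarily the study's exact procedure):

```python
def glenoid_bone_loss_percent(defect_width_mm, circle_diameter_mm):
    """Width-based circle method for glenoid bone loss.

    A best-fit circle is placed on the inferior portion of the
    glenoid; percentage loss is the anterior defect width divided
    by the circle diameter. (An area-based variant divides the
    area of the missing circular segment by the circle area.)
    """
    if circle_diameter_mm <= 0:
        raise ValueError("diameter must be positive")
    return 100.0 * defect_width_mm / circle_diameter_mm

# Hypothetical measurements: 6 mm defect on a 24 mm best-fit circle.
print(glenoid_bone_loss_percent(6.0, 24.0))  # 25.0
```

The study's reported ~3.4% mean error would correspond to under 1 mm of measurement error at a typical glenoid diameter, consistent with the noted learning curve.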
Evaluation of Amino Acid and Energy Utilization in Feedstuff for Swine and Poultry Diets
Kong, C.; Adeola, O.
2014-01-01
An accurate feed formulation is essential for optimizing feed efficiency and minimizing feed cost for swine and poultry production. Because energy and amino acid (AA) account for the major cost of swine and poultry diets, a precise determination of the availability of energy and AA in feedstuffs is essential for accurate diet formulations. Therefore, the methodology for determining the availability of energy and AA should be carefully selected. The total collection and index methods are 2 major procedures for estimating the availability of energy and AA in feedstuffs for swine and poultry diets. The total collection method is based on the laborious production of quantitative records of feed intake and output, whereas the index method can avoid the laborious work, but greatly relies on accurate chemical analysis of index compound. The direct method, in which the test feedstuff in a diet is the sole source of the component of interest, is widely used to determine the digestibility of nutritional components in feedstuffs. In some cases, however, it may be necessary to formulate a basal diet and a test diet in which a portion of the basal diet is replaced by the feed ingredient to be tested because of poor palatability and low level of the interested component in the test ingredients. For the digestibility of AA, due to the confounding effect on AA composition of protein in feces by microorganisms in the hind gut, ileal digestibility rather than fecal digestibility has been preferred as the reliable method for estimating AA digestibility. Depending on the contribution of ileal endogenous AA losses in the ileal digestibility calculation, ileal digestibility estimates can be expressed as apparent, standardized, and true ileal digestibility, and are usually determined using the ileal cannulation method for pigs and the slaughter method for poultry. 
Among these digestibility estimates, the standardized ileal AA digestibility that corrects apparent ileal digestibility for basal endogenous AA losses, provides appropriate information for the formulation of swine and poultry diets. The total quantity of energy in feedstuffs can be partitioned into different components including gross energy (GE), digestible energy (DE), metabolizable energy (ME), and net energy based on the consideration of sequential energy losses during digestion and metabolism from GE in feeds. For swine, the total collection method is suggested for determining DE and ME in feedstuffs whereas for poultry the classical ME assay and the precision-fed method are applicable. Further investigation for the utilization of ME may be conducted by measuring either heat production or energy retention using indirect calorimetry or comparative slaughter method, respectively. This review provides information on the methodology used to determine accurate estimates of AA and energy availability for formulating swine and poultry diets. PMID:25050031
Williams, R B
1999-08-01
A compartmentalised model is presented for the estimation of the monetary losses suffered by the world's poultry industry as a result of coccidiosis in chickens and the costs of its control. The model is designed so that the major elements of loss may be separately quantified for any chicken-producing entity, e.g., a farm, a poultry company or a country. Examples are presented, and the sources, reliability and geographical relevance of the data used for each parameter are provided. Loss elements for specific geographical areas should be recalculated at appropriate intervals to take into account local and international fluctuations in the costs of chicks, feed and labour, financial inflation and world currency exchange rates. Equations are given for relationships among numbers of chickens, liveweights, weights of carcasses, feed consumption, feed conversion ratio (FCR), prices of feeds, prices of anticoccidial therapeutic and prophylactic drugs, values of chickens, chicken rearing costs, and effects of coccidiosis on mortality, weight gain and FCR. Using these equations, it is theoretically possible for an international team of representatives, each using reliable local data, to calculate simultaneously each relevant loss element for their respective countries. Addition of these elements could give, for the first time, an accurate global estimate of the losses due to chicken coccidiosis. The total cost of coccidiosis in chickens in the United Kingdom in 1995 is estimated to have been at least £38,588,795, of which 98.1% involved broilers (80.6% due to effects on mortality, weight gain and feed conversion, and 17.5% due to the cost of chemoprophylaxis and therapy). The costs of poor performance due to coccidiosis and its chemical control totalled 4.54% of the gross revenue from UK sales of live broilers. This model includes a new method for comparing the profitabilities of different treatments in commercial trials, providing actual costs rather than the arbitrary numerical scores of other methods. Although originally designed for the study of coccidiosis, the model is equally applicable to any disease. It should be of value to agricultural economists, the animal feed and poultry industries, animal health companies, and research scientists (particularly for preparing grant applications).
Quantifying the economic burden of productivity loss in rheumatoid arthritis.
Filipovic, Ivana; Walker, David; Forster, Fiona; Curry, Alistair S
2011-06-01
In light of the large number of recent studies and systematic reviews investigating the cost of RA, this article examines the methods used to assess the impact of RA on employment and work productivity, and provides an overview of the issues surrounding work productivity loss in the RA population. A review of the published literature was conducted to identify relevant articles. These articles were then reviewed and their methodologies compared. The various methods used to calculate economic loss were then explained and discussed. We found that although the methods used to estimate lost productivity and its associated costs varied between studies, all suggest that RA is associated with a significant burden of illness. Economic analyses that exclude indirect costs will therefore underestimate the full economic impact of RA. However, the methods used to calculate productivity loss have a significant impact on the results of indirect cost analyses, and should be selected carefully when designing such studies. Several factors relating to the disease, the job and socio-demographics have been found to predict work disability. Consideration of these factors is vital when measuring the extent of both absenteeism and presenteeism, and will allow for more accurate estimation of the impact of RA on work productivity. This information may also guide interventions aiming to prevent or postpone work disability and job loss.
NASA Technical Reports Server (NTRS)
DellaCorte, Christopher
2010-01-01
Foil gas bearings are a key technology in many commercial and emerging Oil-Free turbomachinery systems. These bearings are nonlinear and have been difficult to model analytically in terms of performance characteristics such as load capacity, power loss, stiffness and damping. Previous investigations led to an empirically derived method, a rule-of-thumb, to estimate load capacity, and this method has been a valuable tool in system development. The current paper extends the tool concept to include rules for stiffness and damping coefficient estimation. It is expected that these rules will further accelerate the development and deployment of advanced Oil-Free machines operating on foil gas bearings.
Sun, Xiaoxiao; Liang, Xinqiang; Zhang, Feng; Fu, Chaodong
2016-11-01
Nutrient runoff losses from cropping fields can lead to nonpoint source pollution; however, the level of nutrient export is difficult to evaluate, particularly at the regional scale. This study aimed to establish a novel yet simple approach for estimating total nitrogen (TN) and total phosphorus (TP) runoff losses from regional paddy fields. In this approach, temporal changes of nutrient concentrations in floodwater were coupled with runoff-processing functions in rice (Oryza sativa L.) fields to calculate nutrient runoff losses for three site-specific field experiments. Validation experiments verified the accuracy of this method. The geographic information system technique was used to upscale and visualize the TN and TP runoff losses from field to regional scales. The results indicated that nutrient runoff losses had significant spatio-temporal variation during rice seasons and were positively related to fertilizer rate and precipitation. The average runoff losses over five study seasons were 20.21 kg N/ha for TN and 0.76 kg P/ha for TP. Scenario analysis showed that TN and TP losses dropped by 7.64 and 3.0%, respectively, for each 10% reduction of fertilizer input. For alternate wetting and drying water management, the corresponding reduction ratios were 24.7 and 14.0%, respectively. Our results suggest that, although both water and fertilizer management can mitigate nutrient runoff losses, the former is significantly more effective.
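The coupling of floodwater concentrations with runoff events can be sketched as a simple mass-balance sum, using the identities 1 mm of runoff over 1 ha = 10 m³ and 1 mg/L = 1 g/m³. The event data below are hypothetical, not from the study:

```python
def runoff_nutrient_loss(concentrations_mg_l, runoff_mm):
    """Seasonal nutrient runoff loss in kg/ha.

    concentrations_mg_l: floodwater nutrient concentration at each
    runoff event; runoff_mm: runoff depth of each event.
    Per event: conc (g/m^3) * runoff_mm * 10 (m^3/ha per mm) / 1000
    converts grams to kilograms per hectare.
    """
    return sum(c * r * 10.0 / 1000.0
               for c, r in zip(concentrations_mg_l, runoff_mm))

# Three hypothetical runoff events; concentration typically decays
# with time after fertilization, so early events dominate losses.
tn = runoff_nutrient_loss([12.0, 6.0, 2.5], [40.0, 25.0, 30.0])
print(f"seasonal TN loss: {tn:.2f} kg/ha")
```

Because early post-fertilization events carry the highest concentrations, delaying drainage or reducing fertilizer input cuts losses disproportionately, consistent with the scenario analysis above.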
Integration, Validation, and Application of a PV Snow Coverage Model in SAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Janine M.; Ryberg, David Severin
2017-08-01
Due to the increasing deployment of PV systems in snowy climates, there is significant interest in a method capable of estimating PV losses resulting from snow coverage that has been verified for a variety of system designs and locations. Many independent snow coverage models have been developed over the last 15 years; however, there has been very little effort verifying these models beyond the system designs and locations on which they were based. Moreover, major PV modeling software products have not yet incorporated any of these models into their workflows. In response to this deficiency, we have integrated the methodology of the snow model developed in the paper by Marion et al. (2013) into the National Renewable Energy Laboratory's (NREL) System Advisor Model (SAM). In this work, we describe how the snow model is implemented in SAM and we discuss our demonstration of the model's effectiveness at reducing error in annual estimations for three PV arrays. Next, we use this new functionality in conjunction with a long-term historical data set to estimate average snow losses across the United States for two typical PV system designs. The open availability of the snow loss estimation capability in SAM to the PV modeling community, coupled with our results of the nationwide study, will better equip the industry to accurately estimate PV energy production in areas affected by snowfall.
Structural and heat-flow implications of infrared anomalies at Mt. Hood, Oregon
Friedman, Jules D.; Frank, David
1977-01-01
Surface thermal features occur in an area of 9700 m² at Mt. Hood, on the basis of an aerial line-scan survey made April 26, 1973. The distribution of the thermal areas below the summit of Mt. Hood, shown on planimetrically corrected maps at 1:12,000, suggests structural control by a fracture system and brecciated zone peripheral to a hornblende-dacite plug dome (Crater Rock), and by a concentric fracture system that may have been associated with development of the present crater. The extent and inferred temperature of the thermal areas permit a preliminary estimate of a heat discharge of 10 megawatts (MW), by analogy with similar fumarole and thermal fields of Mt. Baker, Washington. This figure includes a heat loss of 4 MW via conduction, diffusion, evaporation, and radiation to the atmosphere, and a somewhat less certain loss of 6 MW via fumarolic mass transfer of vapor and advective heat loss from runoff and ice melt. The first part of the estimate is based on two-point models for differential radiant exitance and differential flux via conduction, diffusion, evaporation, and radiation from the heat balance of the ground surface. Alternative methods for estimating volcanogenic geothermal flux that assume a quasi-steady-state heat flow also yield estimates in the 5-11 MW range. Heat loss equivalent to cooling of the dacite plug dome is judged to be insufficient to account for the heat flux at the fumarole fields.
Clinical attachment loss: estimation by direct and indirect methods.
Barbosa, Viviane Leal; Angst, Patricia D Melchiors; Finger Stadler, Amanda; Oppermann, Rui V; Gomes, Sabrina Carvalho
2016-06-01
This observational study aimed to compare the estimation of clinical attachment loss (CAL) as measured by direct (CALD) and indirect (CALI) methods. Periodontitis patients (n = 75; mean age: 50.9 ± 8.02 years; 72.2% women; 50.6% smokers) received a periodontal examination (six sites/tooth) to determine the presence of visible plaque and calculus, the gingival bleeding index (GBI), periodontal probing depth (PPD), bleeding on probing (BOP), CALD and gingival recession (GR). CALI values resulted from the sum of the PPD and GR values. Statistical analysis considered only data from sites with visible GR (i.e., the gingival margin apical to the cemento-enamel junction; n = 4,757 sites) and determined the mean difference between CALI and CALD measurements. Based on the mean difference, univariate and multivariate analyses were also performed. Mean CALD and CALI values were 3.96 ± 2.07 mm and 4.47 ± 2.03 mm, respectively. The indirect method overestimated CAL compared with the direct method (mean difference: 0.51 ± 1.23 mm; P < 0.001). On uni- and multivariate analyses, absence of GBI and BOP, PPD, and proximal site location had significant influences on the overestimation of CAL by the indirect method (all P ≤ 0.01). The indirect method increased the CAL value by 0.38 mm for each additional 1 mm of PPD. To decrease the number of probing errors in daily practice, it is suggested that direct examination is more appropriate than the indirect method for estimating CAL.
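The indirect calculation is simply CALI = PPD + GR, and the study's headline statistic is the mean difference between indirect and direct values across sites. A minimal sketch with hypothetical site data:

```python
def cal_indirect(ppd_mm, gr_mm):
    """Indirect clinical attachment loss: CALI = PPD + GR.

    PPD: periodontal probing depth; GR: visible gingival recession.
    The study above found CALI overestimates the direct measurement
    by about 0.5 mm on average, with the bias growing with PPD.
    """
    return ppd_mm + gr_mm

def mean_difference(indirect, direct):
    """Mean overestimation of the indirect method across sites."""
    diffs = [i - d for i, d in zip(indirect, direct)]
    return sum(diffs) / len(diffs)

# Hypothetical three-site example (PPD, GR) vs. direct readings:
cal_i = [cal_indirect(p, g) for p, g in [(3, 2), (5, 1), (4, 2)]]
cal_d = [4.6, 5.7, 5.2]
print(round(mean_difference(cal_i, cal_d), 2))  # 0.5
```

Summing two separately probed distances compounds their individual probing errors, which is one plausible reason the indirect method runs systematically high.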
Canvasback mortality from illegal hunting on the upper Mississippi River
Korschgen, Carl E.; Kenow, Kevin P.; Nissen, James M.; Wetzel, John F.
1996-01-01
To quantify the consequences of local hunting on illegal kill of canvasbacks (Aythya valisineria), we studied the behavior of hunters on a 646-ha area open to duck hunting (but closed to canvasback hunting) on Lake Onalaska, Navigation Pool 7, Wisconsin, during the 1991 and 1992 waterfowl hunting seasons. Law enforcement officers observed 258 hunting parties for 419 hours. Of 94 hunting parties encountering canvasbacks, 41 (44%) shot at the ducks on 56 occasions, or in 27% of the 207 encounters observed. Based on a ratio estimator, there were 790 (95% CI = 376) attempts to shoot at canvasbacks on the Lake Onalaska study area during 1991 and 837 (95% CI = 390) during 1992. Mortality of canvasbacks, excluding crippling loss, was estimated to be 128 during 1991 and 166 during 1992. Thus, total canvasback losses may be higher than currently estimated on a flyway or national basis. This estimating technique offers a promising method for enumerating hunter take of protected and legal species.
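A ratio estimator of the kind used here scales observed counts by an auxiliary measure of effort. The sketch below uses the abstract's observed counts but an invented auxiliary total (total party-hours of hunting effort), so the result is illustrative only, not the study's calculation:

```python
def ratio_estimate_total(y_observed, x_observed, x_total):
    """Ratio estimator of a population total.

    y_observed: event counts in the observed sample units (e.g.
    shooting attempts); x_observed: the auxiliary measure for the
    same units (e.g. observed party-hours); x_total: the auxiliary
    total for the whole area and season.
    Estimate: (sum y / sum x) * x_total.
    """
    r_hat = sum(y_observed) / sum(x_observed)
    return r_hat * x_total

# 56 attempts seen in 419 observed party-hours, extrapolated to an
# assumed 5900 total party-hours of seasonal hunting effort.
print(round(ratio_estimate_total([56], [419], 5900)))  # 789
```

The estimator's precision depends on how strongly attempts track the auxiliary variable, which is why the reported confidence intervals are wide relative to the point estimates.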
Dickens, Jade M.; Forbes, Brandon T.; Cobean, Dylan S.; Tadayon, Saeid
2011-01-01
An indirect method for estimating irrigation withdrawals is presented, and results are compared with the 2005 USGS-reported irrigation withdrawals for selected States. This method is meant to demonstrate a way to check data reported or received from a third party when metered data are unavailable. Of the 11 States where this method was applied, 8 had estimated irrigation withdrawals within 15 percent of what was reported in the 2005 water-use compilation, and 3 had estimates that differed by more than 20 percent from what was reported in 2005. Recommendations for improving estimates of irrigated acreage and irrigation withdrawals are also presented in this report. Conveyance losses and irrigation-system efficiencies should be considered in order to achieve a more accurate representation of irrigation withdrawals. Better documentation of data sources and methods used can help lead to more consistent information in future irrigation water-use compilations. Finally, a summary of the data sources and methods used to estimate irrigated acreage and irrigation withdrawals for the 2000 and 2005 compilations for each WSC is presented in appendix 1.
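An indirect check of reported withdrawals along these lines can be sketched by scaling irrigated acreage by a consumptive crop requirement and adjusting for on-farm efficiency and conveyance losses. Everything below (function names, example values) is an assumed illustration, not the report's exact procedure:

```python
def estimate_irrigation_withdrawal(acres, crop_req_ft, efficiency,
                                   conveyance_loss=0.0):
    """Indirect irrigation-withdrawal estimate, in acre-feet.

    acres: irrigated acreage; crop_req_ft: consumptive crop water
    requirement in feet; efficiency: on-farm irrigation efficiency
    (0-1); conveyance_loss: fraction of delivered water lost in
    canals and ditches before reaching the field.
    """
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must be in (0, 1]")
    field_delivery = acres * crop_req_ft / efficiency
    return field_delivery / (1.0 - conveyance_loss)

def percent_difference(estimated, reported):
    """Signed percent difference of the estimate from the report."""
    return 100.0 * (estimated - reported) / reported

# Hypothetical State: 100,000 acres, 2 ft requirement, 80% efficiency,
# 10% conveyance loss, checked against a reported 265,000 acre-ft.
est = estimate_irrigation_withdrawal(100000, 2.0, 0.8, conveyance_loss=0.1)
print(round(percent_difference(est, 265000), 1))  # 4.8 (within the 15% band)
```

Omitting the efficiency and conveyance terms understates withdrawals substantially, which is the report's point about including them.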
NASA Astrophysics Data System (ADS)
Molina, S.; Lang, D. H.; Lindholm, C. D.
2010-03-01
The era of earthquake risk and loss estimation basically began with the seminal paper on hazard by Allin Cornell in 1968. Following the 1971 San Fernando earthquake, the first studies placed strong emphasis on the prediction of human losses (numbers of casualties and injured, used to estimate needs for health care and shelter in the immediate aftermath of a strong event). In contrast to these early risk modeling efforts, later studies have focused on the disruption of the serviceability of roads, telecommunications and other important lifeline systems. In the 1990s, the National Institute of Building Sciences (NIBS) developed a tool (HAZUS®99) for the Federal Emergency Management Agency (FEMA), where the goal was to incorporate the best quantitative methodology in earthquake loss estimates. Herein, the current version of the open-source risk and loss estimation software SELENA v4.1 is presented. Using the spectral displacement-based approach (capacity spectrum method), this fully self-contained tool analytically computes the degree of damage on specific building typologies as well as the associated economic losses and number of casualties. The earthquake ground shaking estimates for SELENA v4.1 can be calculated or provided in three different ways: deterministic, probabilistic or based on near-real-time data. The main distinguishing feature of SELENA compared to other risk estimation software tools is that it is implemented in a 'logic tree' computation scheme which accounts for uncertainties of any input (e.g., scenario earthquake parameters, ground-motion prediction equations, soil models) or inventory data (e.g., building typology, capacity curves and fragility functions). Each input used in the analysis is assigned a decimal weighting factor defining the weight of the respective branch of the logic tree.
The weighting of the input parameters accounts for the epistemic and aleatoric uncertainties that will always accompany the necessary parameterization of the different types of input data. Like previous SELENA versions, SELENA v4.1 is coded in MATLAB, which allows for easy dissemination among the scientific-technical community. Furthermore, any user has access to the source code in order to adapt, improve or refine the tool according to his or her particular needs. The handling of SELENA's current version and the provision of input data are customized for an academic environment, but the tool can also support decision-makers in local, state and regional governmental agencies in estimating possible losses from future earthquakes.
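The logic-tree weighting described above can be illustrated with a minimal sketch. SELENA itself is MATLAB code, and the branch values and weights below are invented; this Python fragment only mirrors the idea of decimal branch weights that must sum to one and are used to average branch results.

```python
# Minimal logic-tree combination sketch (illustrative values only).
def combine_branches(branches):
    """branches: list of (weight, loss_estimate). Returns weighted mean."""
    total_w = sum(w for w, _ in branches)
    assert abs(total_w - 1.0) < 1e-9, "logic-tree weights must sum to 1"
    return sum(w * loss for w, loss in branches)

# Two hypothetical ground-motion-model branches, weighted 0.6 / 0.4:
expected_loss = combine_branches([(0.6, 120.0), (0.4, 180.0)])  # 144.0
```

In a full logic tree each level (scenario, ground-motion model, soil model, fragility set) would branch this way, with weights multiplied along each path.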
Groundwater inflow measurements in wetland systems
Hunt, Randy J.; Krabbenhoft, David P.; Anderson, Mary P.
1996-01-01
Our current understanding of wetlands is insufficient to assess the effects of past and future wetland loss. While knowledge of wetland hydrology is crucial, groundwater flows are often neglected or uncertain. In this paper, groundwater inflows were estimated in wetlands in southwestern Wisconsin using traditional Darcy's law calculations and three independent methods that included (1) stable isotope mass balances, (2) temperature profile modeling, and (3) numerical water balance modeling techniques. Inflows calculated using Darcy's law were lower than inflows estimated using the other approaches and ranged from 0.02 to 0.3 cm/d. Estimates obtained using the other methods generally were higher (0.1 to 1.1 cm/d) and showed similar spatial trends. An areal map of groundwater flux generated by the water balance model demonstrated that areas of both recharge and discharge exist in what is considered a regional discharge area. While each method has strengths and weaknesses, the use of more than one method can reduce uncertainty in the estimates.
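The Darcy's-law baseline used in the study amounts to multiplying hydraulic conductivity by the head gradient. A minimal sketch, with illustrative parameter values rather than the Wisconsin field data:

```python
# Darcy's-law groundwater flux (specific discharge). The conductivity,
# head difference, and flowpath length below are assumed values.
def darcy_flux(K_cm_per_day, head_difference_cm, flowpath_length_cm):
    """q = K * i, with hydraulic gradient i = dh / dl; result in cm/d."""
    gradient = head_difference_cm / flowpath_length_cm
    return K_cm_per_day * gradient

# Hypothetical wetland setting: K = 30 cm/d, 10 cm head drop over 10 m.
q = darcy_flux(30.0, 10.0, 1000.0)  # 0.3 cm/d, the upper end of the
                                    # Darcy-based range in the study
```

The isotope, temperature-profile, and water-balance methods in the abstract each estimate the same flux independently, which is why comparing them constrains the uncertainty of any single estimate.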
NASA Astrophysics Data System (ADS)
Wild, B.; Keuper, F.; Kummu, M.; Beer, C.; Blume-Werry, G.; Fontaine, S.; Gavazov, K.; Gentsch, N.; Guggenberger, G.; Hugelius, G.; Jalava, M.; Koven, C.; Krab, E. J.; Kuhry, P.; Monteux, S.; Richter, A.; Shazhad, T.; Dorrepaal, E.
2017-12-01
Predictions of soil organic carbon (SOC) losses in the northern circumpolar permafrost area converge around 15% (± 3% standard error) of the initial C pool by 2100 under the RCP 8.5 warming scenario. Yet, none of these estimates consider plant-soil interactions such as the rhizosphere priming effect (RPE). While laboratory experiments have shown that the input of plant-derived compounds can stimulate SOC losses by up to 1200%, the magnitude of the RPE in natural ecosystems is unknown and no methods for upscaling have existed so far. Here we present the first spatially and depth-explicit RPE model that allows estimates of the RPE on a large scale (PrimeSCale). We combine available spatial data (SOC, C/N, GPP, ALT and ecosystem type) and new ecological insights to assess the importance of the RPE at the circumpolar scale. We use a positive saturating relationship between the RPE and belowground C allocation and two ALT-dependent rooting-depth distribution functions (for tundra and boreal forest) to proportionally assign belowground C allocation and RPE to individual soil depth increments. The model makes it possible to take into account reasonable limiting factors on additional SOC losses by the RPE, including interactions between spatial and/or depth variation in GPP, plant root density, SOC stocks and ALT. We estimate potential RPE-induced SOC losses at 9.7 Pg C (5-95% CI: 1.5-23.2 Pg C) by 2100 (RCP 8.5). This corresponds to an increase of the current permafrost SOC-loss estimate from 15% of the initial C pool to about 16%. If we apply an additional molar C/N threshold of 20 to account for microbial C limitation as a requirement for the RPE, SOC losses by the RPE are further reduced to 6.5 Pg C (5-95% CI: 1.0-16.8 Pg C) by 2100 (RCP 8.5). Although our results show that current estimates of permafrost soil C losses are robust without taking into account the RPE, our model also highlights high-RPE risk in Siberian lowland areas and Alaska north of the Brooks Range.
The small overall impact of the RPE is largely explained by the interaction between belowground plant C allocation and SOC depth distribution. Our findings thus highlight the importance of fine scale interactions between plant and soil properties for large scale carbon fluxes and we provide a first model that bridges this gap and permits the quantification of RPE across a large area.
Serrier, Hassan; Sultan-Taieb, Hélène; Luce, Danièle; Bejean, Sophie
2014-07-01
The objective of this article was to estimate the social cost of respiratory cancer cases attributable to occupational risk factors in France in 2010. Using the attributable fraction method and available epidemiological data from the literature, we estimated the number of respiratory cancer cases due to each identified risk factor. We used the cost-of-illness method with a prevalence-based approach, taking into account both direct and indirect costs. We estimated the cost of production losses due to morbidity (absenteeism and presenteeism) and mortality costs (years of production lost) in the market and nonmarket spheres. The social costs of lung, larynx and sinonasal cancers and mesothelioma caused by exposure to asbestos, chromium, diesel engine exhaust, paint, crystalline silica, and wood and leather dust in France in 2010 were estimated at between 917 and 2,181 million euros. Between 795 and 2,011 million euros (87-92%) of total costs were due to lung cancer alone. Asbestos was by far the risk factor representing the greatest cost to French society in 2010, at between 531 and 1,538 million euros (58-71%), ahead of diesel engine exhaust, with an estimated social cost of between 233 and 336 million euros, and crystalline silica (119-229 million euros). Indirect costs represented about 66% of total costs. Our assessment shows the magnitude of the economic impact of occupational respiratory cancers. It allows comparisons between countries and provides valuable information for policy-makers responsible for defining public health priorities.
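The attributable fraction method referenced above is commonly implemented with Levin's formula; whether the authors used exactly this variant is an assumption here, and the exposure prevalence, relative risk, and case count below are hypothetical, not taken from the French data.

```python
# Levin's population attributable fraction and attributable case count.
# All numeric inputs in the example are invented for illustration.
def attributable_fraction(p_exposed, relative_risk):
    """AF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def attributable_cases(total_cases, p_exposed, relative_risk):
    """Cases attributable to the exposure = AF * total cases."""
    return total_cases * attributable_fraction(p_exposed, relative_risk)

# Hypothetical: 10% exposure prevalence, RR = 3, 30,000 cancer cases.
cases = attributable_cases(30000, 0.10, 3.0)  # AF = 0.2/1.2, so 5000 cases
```

The attributable case counts are then multiplied by per-case direct and indirect costs under the cost-of-illness framework to obtain the social cost range reported.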
Using pairs of physiological models to estimate temporal variation in amphibian body temperature.
Roznik, Elizabeth A; Alford, Ross A
2014-10-01
Physical models are often used to estimate ectotherm body temperatures, but designing accurate models for amphibians is difficult because they can vary in cutaneous resistance to evaporative water loss. To account for this variability, a recently published technique requires a pair of agar models that mimic amphibians with 0% and 100% resistance to evaporative water loss; the temperatures of these models define the lower and upper boundaries of possible amphibian body temperatures for the location in which they are placed. The goal of our study was to develop a method for using these pairs of models to estimate parameters describing the distributions of body temperatures of frogs under field conditions. We radiotracked green-eyed treefrogs (Litoria serrata) and collected semi-continuous thermal data using both temperature-sensitive radiotransmitters with an automated datalogging receiver, and pairs of agar models placed in frog locations, and we collected discrete thermal data using a non-contact infrared thermometer when frogs were located. We first examined the accuracy of temperature-sensitive transmitters in estimating frog body temperatures by comparing transmitter data with direct temperature measurements taken simultaneously for the same individuals. We then compared parameters (mean, minimum, maximum, standard deviation) characterizing the distributions of temperatures of individual frogs estimated from data collected using each of the three methods. We found strong relationships between thermal parameters estimated from data collected using automated radiotelemetry and both types of thermal models. These relationships were stronger for data collected using automated radiotelemetry and impermeable thermal models, suggesting that in the field, L. serrata has a relatively high resistance to evaporative water loss. 
Our results demonstrate that placing pairs of thermal models in frog locations can provide accurate estimates of the distributions of temperatures experienced by individual frogs, and that comparing temperatures from model pairs to direct measurements collected simultaneously on frogs can be used to broadly characterize the skin resistance of a species, and to select which model type is most appropriate for estimating temperature distributions for that species. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mass change distribution inverted from space-borne gravimetric data using a Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhou, X.; Sun, X.; Wu, Y.; Sun, W.
2017-12-01
Mass estimation plays a key role in using time-variable satellite gravimetric data to quantify terrestrial water storage change. GRACE (Gravity Recovery and Climate Experiment) only observes low-degree gravity field changes, which can be used to estimate the total surface density or equivalent water height (EWH) variation, with a limited spatial resolution of 300 km. There are several methods to estimate the mass variation in an arbitrary region, such as averaging kernels, forward modelling and mass concentration (mascon). The mascon method can isolate the local mass from the gravity change at a large scale by solving the observation equation (objective function), which represents the relationship between the unknown masses and the measurements. To avoid unreasonable local masses being inverted from the smoothed gravity change map, regularization has to be used in the inversion. We herein give a Markov chain Monte Carlo (MCMC) method to objectively determine the regularization parameter for the non-negative mass inversion problem. We first apply this approach to mass inversion from synthetic data. Results show that MCMC can effectively reproduce the local mass variation while taking GRACE measurement error into consideration. We then use MCMC to estimate the groundwater change rate of the North China Plain from the GRACE gravity change rate from 2003 to 2014, under the assumption of continuous groundwater loss in this region. Inversion results show that the groundwater loss rate in the North China Plain was 7.6 ± 0.2 Gt/yr over those 12 years, consistent with previous research.
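The regularized non-negative inversion underlying the mascon approach can be sketched with a projected-gradient solver. Note the difference from the abstract: this toy uses a fixed regularization parameter lam, whereas the paper's contribution is to choose that parameter objectively via MCMC. The operator G and data d below are made up.

```python
# Toy non-negative Tikhonov inversion: min ||G m - d||^2 + lam ||m||^2
# subject to m >= 0, solved by projected gradient descent.
# G, d, lam, step, and iteration count are all illustrative choices.
def nonneg_tikhonov(G, d, lam=0.1, step=0.01, iters=5000):
    n = len(G[0])
    m = [0.0] * n
    for _ in range(iters):
        # residual r = G m - d
        r = [sum(G[i][j] * m[j] for j in range(n)) - d[i]
             for i in range(len(d))]
        for j in range(n):
            grad = (2.0 * sum(G[i][j] * r[i] for i in range(len(d)))
                    + 2.0 * lam * m[j])
            m[j] = max(0.0, m[j] - step * grad)  # project onto m >= 0
    return m

# Two mascons observed through a smoothing operator (made-up numbers):
G = [[1.0, 0.5],
     [0.5, 1.0]]
d = [1.5, 1.0]  # smoothed "gravity change" observations
m = nonneg_tikhonov(G, d)
```

In the MCMC approach, lam would instead be sampled jointly with m, so the posterior spread of the recovered masses reflects both measurement error and regularization uncertainty.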
EFFECTIVENESS OF SOIL AND WATER CONSERVATION PRACTICES FOR POLLUTION CONTROL
The potential water quality effects and economic implications of soil and water conservation practices (SWCPs) are identified. Methods for estimating the effects of SWCPs on pollutant losses from croplands are presented. Mathematical simulation and linear programming models were u...
Kuhn, Gerhard; Arnold, L. Rick
2006-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs Utilities, the Colorado Water Conservation Board, and the El Paso County Water Authority, began a study in 2004 to (1) apply a stream-aquifer model to Monument Creek, (2) use the results of the modeling to develop a transit-loss accounting program for Monument Creek, (3) revise the existing transit-loss accounting program for Fountain Creek to incorporate new water-management strategies and allow for incorporation of future changes in water-management strategies, and (4) integrate the two accounting programs into a single program with a Web-based user interface. The purpose of this report is to present the results of applying a stream-aquifer model to the Monument Creek study reach. Transit losses were estimated for reusable-water flows in Monument Creek that ranged from 1 to 200 cubic feet per second (ft3/s) and for native streamflows that ranged from 0 to 1,000 ft3/s. Transit losses were estimated for bank-storage, channel-storage, and evaporative losses. The same stream-aquifer model used in the previously completed (1988) Fountain Creek study was used in the Monument Creek study. Sixteen model nodes were established for the Monument Creek study reach, defining 15 subreaches. Channel length, aquifer length, and aquifer width for the subreaches were estimated from available topographic and geologic maps. Thickness of alluvial deposits and saturated thickness were estimated using lithologic and water-level data from about 100 wells and test holes in or near the Monument Creek study reach. Estimated average transmissivities for the subreaches ranged from 2,000 to 12,000 feet squared per day, and a uniform value of 0.20 was used for storage coefficient. Qualitative comparison of recorded and simulated streamflow at the downstream node for the calibration and verification simulations indicated that the two streamflows compared reasonably well. No adjustments were made to the model parameters.
Differences between recorded and simulated streamflow volumes for all calibration and verification simulations ranged from about –8.8 to 7.5 percent; the total error for all simulations was about –0.7 percent. The model was used to estimate bank-storage losses for 10 to 15 native streamflows for each reusable-water flow of 1, 3, 5, 7, 10, 15, 20, 30, 40, 50, 100, and 200 ft3/s. Then the 10 to 15 bank-storage loss values were used in least-squares linear regression to estimate a relation between bank-storage loss and native streamflow for each of the 12 reusable-water flow rates. The 12 regression relations then were used to develop “look-up” tables of bank-storage loss for reusable-water flows ranging from 1 to 200 ft3/s (in 1-ft3/s increments). Additional model simulations indicated that (1) when the ratio of downstream native streamflow to upstream native streamflow was less than 1, bank-storage loss generally increased and (2) when the ratio of downstream native streamflow to upstream native streamflow was larger than 1, bank-storage loss generally decreased. These results were used to develop a bank-storage loss adjustment factor based on the ratio of native streamflow at the downstream node to native streamflow at the upstream node. The model also was used to estimate a recovery period, which is the length of time needed for the bank-storage loss to return to the stream. The recovery period was 1 day for six subreaches; 2 days for four subreaches; between 3 and 12 days for four subreaches; and 28 days for one subreach. Channel-storage losses are about 10 percent of the reusable-water flow for most of the subreaches, except for two subreaches, where the channel-storage losses are about 20 percent, and one subreach, where the losses are about 30 percent, owing to the greater channel lengths. Evaporative losses were estimated by the use of monthly pan-evaporation data and the incremental increase in stream width resulting from any reusable-water flows.
Monthly pan-evaporation data were converted to a daily rate. The daily rate, when multiplied by the stream-width increase (in feet) that results from reusable-water flow and by the subreach length (in miles) gives the daily evaporative loss in cubic feet per second.
Measuring and correcting wobble in large-scale transmission radiography.
Rogers, Thomas W; Ollier, James; Morton, Edward J; Griffin, Lewis D
2017-01-01
Large-scale transmission radiography scanners are used to image vehicles and cargo containers. Acquired images are inspected for threats by a human operator or a computer algorithm. To make accurate detections, it is important that image values are precise. However, due to the scale (∼5 m tall) of such systems, they can be mechanically unstable, causing the imaging array to wobble during a scan. This leads to an effective loss of precision in the captured image. We consider the measurement of wobble and amelioration of the consequent loss of image precision. Following our previous work, we use Beam Position Detectors (BPDs) to measure the cross-sectional profile of the X-ray beam, allowing for estimation, and thus correction, of wobble. We propose: (i) a model of image formation with a wobbling detector array; (ii) a method of wobble correction derived from this model; (iii) methods for calibrating sensor sensitivities and relative offsets; (iv) a Random Regression Forest based method for instantaneous estimation of detector wobble; and (v) using these estimates to apply corrections to captured images of difficult scenes. We show that these methods are able to correct for 87% of image error due to wobble, and when applied to difficult images, a significant visible improvement in the intensity-windowed image quality is observed. The method improves the precision of wobble-affected images, which should help improve detection of threats and the identification of different materials in the image.
Detilleux, J; Theron, L; Duprez, J-N; Reding, E; Moula, N; Detilleux, M; Bertozzi, C; Hanzen, C; Mainil, J
2016-08-01
Milk losses associated with mastitis can be attributed either to effects of pathogens per se (i.e. direct losses) or to effects of the immune response triggered by the presence of mammary pathogens (i.e. indirect losses). Test-day milk somatic cell counts (SCC) and the number of bacterial colony forming units (CFU) found in milk samples are putative measures of the level of immune response and of the bacterial load, respectively. Mediation models, in which one independent variable affects a second variable which, in turn, affects a third one, are conceivable models to estimate direct and indirect losses. Here, we evaluated the feasibility of a mediation model in which test-day SCC and milk were regressed on bacterial CFU measured at three selected sampling dates, 1 week apart. We applied this method to cows free of clinical signs and with records on up to 3 test-days before and after the date of the first bacteriological samples. Most bacteriological cultures were negative (52.38%); others contained either staphylococci (23.08%), streptococci (9.16%) or mixed bacteria (8.79%), or were contaminated (6.59%). Only losses mediated by an increase in SCC were significantly different from zero. In cows with three consecutive bacteriological positive results, we estimated a decreased milk yield of 0.28 kg per day for each unit increase in log2-transformed CFU that elicited one unit increase in log2-transformed SCC. In cows with one or two bacteriological positive results, indirect milk loss was not significantly different from zero, although test-day milk decreased by 0.74 kg per day for each unit increase of log2-transformed SCC. These results highlight the importance of milk losses that are mediated by an increase in SCC during mammary infection and the feasibility of decomposing total milk loss into its direct and indirect components.
Accuracy of iron loss estimation in induction motors by using different iron loss models
NASA Astrophysics Data System (ADS)
Štumberger, B.; Hamler, A.; Goričan, V.; Jesenik, M.; Trlep, M.
2004-05-01
The paper presents iron loss estimation in a three-phase induction motor using different iron loss models for the subsequent iron loss calculation. The iron losses were determined using modeled properties of the electrical steel used and the calculated distribution of magnetic induction B(t) in all parts of the motor, obtained with 2D finite element software for a complete cycle of field variation. A comparison between estimated and measured core losses for a 4 kW induction motor at no load, as a function of supply voltage, is given.
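One widely used family of iron-loss models separates hysteresis, classical eddy-current, and excess terms (Bertotti-style loss separation). Whether the models compared in the paper take exactly this form is not stated, and the coefficients below are illustrative, not fitted to the 4 kW motor.

```python
# Bertotti-style specific iron loss at a single (f, B) working point.
# kh, ke, kx are assumed coefficients for illustration only.
def iron_loss_w_per_kg(f, B, kh=0.02, ke=1e-4, kx=1e-3):
    """P = kh*f*B^2 + ke*(f*B)^2 + kx*(f*B)^1.5, in W/kg."""
    return kh * f * B**2 + ke * (f * B)**2 + kx * (f * B)**1.5

# 50 Hz, 1.5 T working point with the assumed coefficients:
p = iron_loss_w_per_kg(50.0, 1.5)
```

In a finite-element post-processing step such as the paper describes, this pointwise model would be evaluated on the computed B(t) waveform in each mesh element and integrated over the core volume.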
External Validation of Early Weight Loss Nomograms for Exclusively Breastfed Newborns.
Schaefer, Eric W; Flaherman, Valerie J; Kuzniewicz, Michael W; Li, Sherian X; Walsh, Eileen M; Paul, Ian M
2015-12-01
Nomograms that show hour-by-hour percentiles of weight loss during the birth hospitalization were recently developed to aid clinical care of breastfeeding newborns. The nomograms for breastfed neonates were based on a sample of 108,907 newborns delivered at 14 Kaiser Permanente medical centers in Northern California (United States). The objective of this study was to externally validate the published nomograms for newborn weight loss using data from a geographically distinct population. Data were compiled from the Penn State Milton S. Hershey Medical Center located in Hershey, PA. For singleton neonates delivered at ≥36 weeks of gestation between January 2013 and September 2014, weights were obtained between 6 hours and 48 hours (vaginal delivery) or 60 hours (cesarean delivery) for neonates who were exclusively breastfeeding. Quantile regression methods appropriate for repeated measures were used to estimate 50th, 75th, 90th, and 95th percentiles of weight loss as a function of time after birth. These percentile estimates were compared with the published nomograms. Of the 1,587 newborns who met inclusion criteria, 1,148 were delivered vaginally, and 439 were delivered via cesarean section. These newborns contributed 1,815 weights for vaginal deliveries (1.6 per newborn) and 893 weights for cesarean deliveries (2.0 per newborn). Percentile estimates from this Penn State sample were similar to the published nomograms. Deviations in percentile estimates for the Penn State sample were similar to deviations observed after fitting the same model separately to each medical center that made up the Kaiser Permanente sample. The published newborn weight loss nomograms for breastfed neonates were externally validated in a geographically distinct population.
Cost-effectiveness of a Primary Care Intervention to Treat Obesity
Tsai, Adam G.; Wadden, Thomas A.; Volger, Sheri; Sarwer, David B.; Vetter, Marion; Kumanyika, Shiriki; Berkowitz, Robert I.; Diewald, Lisa; Perez, Joanna; Lavenberg, Jeffrey; Panigrahi, Eva R.; Glick, Henry A.
2013-01-01
Background Data on the cost-effectiveness of the behavioral treatment of obesity are not conclusive. The cost-effectiveness of treatment in primary care settings is particularly relevant. Methods We conducted a within-trial cost-effectiveness analysis of a primary care-based obesity intervention. Study participants were randomized to: Usual Care (quarterly visits with their primary care provider); Brief Lifestyle Counseling (Brief LC; quarterly provider visits plus monthly weight loss counseling visits); or Enhanced Brief Lifestyle Counseling (Enhanced Brief LC; all of the above, plus a choice of meal replacements or weight loss medication). A health care payer perspective was used. Intervention costs were estimated from tracking data obtained prospectively. Quality adjusted life years (QALYs) were estimated with the EuroQol-5D. We estimated cost per kilogram-year of weight loss and cost per QALY. Results Weight losses after 2 years were 1.7, 2.9, and 4.6 kg for Usual Care, Brief LC, and Enhanced Brief LC, respectively (p = 0.003 for comparison of Enhanced Brief LC vs. Usual Care). The incremental cost per kilogram-year lost was $292 for Enhanced Brief LC compared to Usual Care (95% CI $38 to $394). The incremental cost per QALY was $115,397, but the 95% CI was undefined. Comparison of short-term cost per kg with published estimates of longer-term cost per QALY suggested that the intervention could be cost-effective over the long term (≥ 10 years). Conclusions A primary care intervention that included monthly counseling visits and a choice of meal replacements or weight loss medication could be a cost-effective treatment for obesity over the long term. However, additional studies are needed on the cost-effectiveness of behavioral treatment of obesity. PMID:23921780
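The incremental cost-effectiveness arithmetic behind the $292-per-kilogram-year figure can be sketched as a simple ICER. The per-arm costs below are invented; only the weight-loss values (4.6 vs. 1.7 kg) echo the abstract, with the assumed incremental cost chosen so the ratio lands near the reported value.

```python
# Incremental cost-effectiveness ratio (ICER) sketch.
# Per-arm cost figures are hypothetical, not trial data.
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of incremental effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Enhanced Brief LC vs. Usual Care: assumed costs of $1,500 vs. $653,
# weight losses of 4.6 vs. 1.7 kg maintained for the trial horizon.
cost_per_kg_year = icer(1500.0, 653.0, 4.6, 1.7)
```

The same ratio with QALYs in the denominator gives the cost-per-QALY figure; its confidence interval can be undefined when the QALY difference's interval crosses zero, as the abstract notes.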
Davidov, Ori; Rosen, Sophia
2011-04-01
In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds which reflect hearing acuity will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean square error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypotheses testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.
Economic burden of diabetes mellitus in the WHO African region
2009-01-01
Background In 2000, the prevalence of diabetes among the 46 countries of the WHO African Region was estimated at 7.02 million people. Evidence from North America, Europe, Asia, Latin America and the Caribbean indicates that diabetes exerts a heavy health and economic burden on society. Unfortunately, there is a dearth of such evidence in the WHO African Region. The objective of this study was to estimate the economic burden associated with diabetes mellitus in the countries in the African Region. Methods Drawing information from various secondary sources, this study used standard cost-of-illness methods to estimate: (a) the direct costs, i.e. those borne by the health systems and the families in directly addressing the problem; and (b) the indirect costs, i.e. the losses in productivity attributable to premature mortality, permanent disability and temporary disability caused by the disease. Prevalence estimates of diabetes for the year 2000 were used to calculate direct and indirect costs of diabetes mellitus. A discount rate of 3% was used to convert future earnings lost into their present values. The economic burden analysis was done for three groups of countries, i.e. 6 countries whose gross national income (GNI) per capita was greater than 8000 international dollars (i.e. in purchasing power parity), 6 countries with Int$2000–7999 and 33 countries with less than Int$2000. GNI for Zimbabwe was missing. Results The 7.02 million cases of diabetes recorded by countries of the African Region in 2000 resulted in a total economic loss of Int$25.51 billion (PPP). Approximately 43.65%, 10.03% and 46.32% of that loss was incurred by groups 1, 2 and 3 countries, respectively. This translated into a grand total economic loss of Int$11,431.6, Int$4,770.6 and Int$2,144.3 per diabetes case per year in the three groups respectively.
Conclusion In spite of data limitations, the estimates reported here show that diabetes imposes a substantial economic burden on countries of the WHO African Region. That heavy burden underscores the urgent need for increased investments in the prevention and management of diabetes. PMID:19335903
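The 3% discounting of future earnings described in the Methods can be sketched as a present-value sum; the stream of annual losses below is hypothetical.

```python
# Present value of future production losses at a 3% discount rate.
# The annual-loss stream is an invented example, not study data.
def present_value(annual_losses, rate=0.03):
    """PV = sum over t of loss_t / (1 + rate)^t, with t starting at 1."""
    return sum(loss / (1.0 + rate) ** t
               for t, loss in enumerate(annual_losses, start=1))

# Int$1,000 lost each year for 5 years, discounted at 3%:
pv = present_value([1000.0] * 5)  # a bit under Int$4,580
```

Indirect mortality costs in cost-of-illness studies are computed this way over each decedent's remaining working years, which is why the discount rate materially affects the totals.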
Natural disasters: forecasting economic and life losses
Nishenko, Stuart P.; Barton, Christopher C.
1997-01-01
Events such as hurricanes, earthquakes, floods, tsunamis, volcanic eruptions, and tornadoes are natural disasters because they negatively impact society, and so they must be measured and understood in human-related terms. At the U.S. Geological Survey, we have developed a new method to examine fatality and dollar-loss data, and to make probabilistic estimates of the frequency and magnitude of future events. This information is vital to large sectors of society including disaster relief agencies and insurance companies.
NASA Astrophysics Data System (ADS)
Han, Xuebing; Ouyang, Minggao; Lu, Languang; Li, Jianqiu
2014-12-01
Lithium-ion batteries are now widely used in electric vehicles (EVs), and cycle life is among the most important characteristics of an EV power battery. In this report, a battery cycle life experiment is designed according to actual EV operating conditions. Five different commercial lithium-ion cells are cycled alternately at 45 °C and 5 °C, and the test results are compared. Based on the cycle life experiment results and the identified battery aging mechanism, battery cycle life models are built and fitted using a genetic algorithm. The capacity loss follows a power-law relation with the number of cycles and an Arrhenius relation with temperature. For automotive applications, to save cost and testing time, a battery SOH (state of health) estimation method combining on-line model-based capacity estimation with regular calibration is proposed.
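A capacity-fade model of the reported form, a power law in cycle number combined with an Arrhenius dependence on absolute temperature, can be sketched as follows; A, Ea and z are illustrative fit parameters, not the values identified in the study.

```python
import math

# Capacity fade: Q_loss(%) = A * exp(-Ea / (R*T)) * N^z.
# A, Ea, z below are assumed illustrative parameters.
R = 8.314  # gas constant, J/(mol*K)

def capacity_loss_pct(cycles, temp_k, A=2.0e4, Ea=3.1e4, z=0.55):
    """Percent capacity loss after `cycles` cycles at temperature T (K)."""
    return A * math.exp(-Ea / (R * temp_k)) * cycles ** z

# Fade after 1000 cycles at 45 C (318.15 K) vs. 5 C (278.15 K):
loss_hot = capacity_loss_pct(1000, 318.15)
loss_cold = capacity_loss_pct(1000, 278.15)
```

With any positive activation energy the model predicts faster fade at the higher cycling temperature, consistent with the alternating 45 °C / 5 °C test design.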
Estimation of the transmissivity of thin leaky-confined aquifers from single-well pumping tests
NASA Astrophysics Data System (ADS)
Worthington, Paul F.
1981-01-01
Data from the quasi-equilibrium phases of a step-drawdown test are used to evaluate the coefficient of non-linear head losses subject to the assumption of a constant effective well radius. After applying a well-loss correction to the observed drawdowns of the first step, an approximation method is used to estimate a pseudo-transmissivity of the aquifer from a single value of time-variant drawdown. The pseudo-transmissivities computed for each of a sequence of values of time pass through a minimum when there is least manifestation of casing-storage and leakage effects, phenomena to which pumping-test data of this kind are particularly susceptible. This minimum pseudo-transmissivity, adjusted for partial penetration effects where appropriate, constitutes the best possible estimate of aquifer transmissivity. The ease of application of the overall procedure is illustrated by a practical example.
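The coefficient of non-linear head losses is conventionally the C in Jacob's step-drawdown relation s = BQ + CQ², although whether the paper uses exactly this parameterization is an assumption here. Two quasi-equilibrium steps suffice to solve for B and C exactly; the discharges and drawdowns below are synthetic.

```python
# Solve s = B*Q + C*Q^2 exactly from two (Q, s) step-drawdown pairs.
# Input values are synthetic, generated from B = 0.5 and C = 0.01.
def well_loss_coefficients(Q1, s1, Q2, s2):
    """Cramer's-rule solution of the 2x2 system for (B, C)."""
    det = Q1 * Q2**2 - Q2 * Q1**2
    B = (s1 * Q2**2 - s2 * Q1**2) / det
    C = (Q1 * s2 - Q2 * s1) / det
    return B, C

# Steps at Q = 10 and Q = 20 with drawdowns 6 and 14:
B, C = well_loss_coefficients(10.0, 6.0, 20.0, 14.0)  # recovers (0.5, 0.01)
```

With more than two steps, B and C would instead be fitted by least squares; the well-loss term CQ² is what gets subtracted from the observed first-step drawdowns before the pseudo-transmissivity estimation the abstract describes.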
Quantifying Sediment Transport in a Premontane Transitional Cloud Forest
NASA Astrophysics Data System (ADS)
Waring, E. R.; Brumbelow, J. K.
2013-12-01
Quantifying sediment transport is a difficult task in any watershed, and relatively little direct measurement has occurred in tropical, mountainous watersheds. The Howler Monkey Watershed (2.2 hectares) is located in a premontane transitional cloud forest in San Isidro de Peñas Blancas, Costa Rica. In June 2012, a V-notch stream-gaging weir was built in the catchment with an 8 ft by 6 ft by 4 ft concrete stilling basin. Sediment captured by the weir was left untouched for an 11-month period. To collect the contents of the weir, the stream was rerouted and the weir was drained. The stilling basin contents were systematically sampled, and samples were taken to a lab and characterized using sieve and hydrometer tests. The wet volume of the remaining sediment was obtained, and dry mass was estimated. Particle size distributions of the samples were obtained from lab tests, with 96% of sediment trapped by the weir being sand or coarser. The efficiency of the weir as a sediment collector was evaluated by comparing particle fall velocities to the residence time of water in the weir under baseflow conditions. Under these assumptions, only two to three percent of the total mass of soil transported in the stream is thought to have been suspended in the water and lost over the V-notch. Data were compared to the Universal Soil Loss Equation (USLE), a widely accepted method for predicting soil loss in agricultural watersheds. As expected, application of the USLE to a tropical rainforest was problematic, with uncertainty in parameters yielding soil loss estimates varying by a factor of 50. Continued monitoring of sediment transport should yield data for improved methods of soil loss estimation applicable to tropical mountainous forests.
On sweat analysis for quantitative estimation of dehydration during physical exercise.
Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M
2015-08-01
Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 l per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.
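The estimation idea above, predicting TBW loss from sweat chloride concentration, can be sketched as an ordinary least-squares regression. The data points and resulting coefficients below are invented for illustration and are not the study's samples or published fit:

```python
# Hedged sketch: regress total-body-water (TBW) loss on sweat chloride
# concentration with ordinary least squares, then predict from a new sample.
# The data points are hypothetical, not the study's measurements.
def ols_fit(xs, ys):
    """Return (intercept, slope) of a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

chloride = [20.0, 30.0, 40.0, 55.0, 65.0]   # mmol/L (hypothetical)
tbw_loss = [0.8, 1.1, 1.5, 1.9, 2.3]        # litres lost (hypothetical)
a, b = ols_fit(chloride, tbw_loss)

def predict_tbw_loss(chloride_mmol_l):
    return a + b * chloride_mmol_l
```

In practice the reported mean absolute error (0.49 l) would be assessed by cross-validating such a fit against held-out sweat samples.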
Kenny, Sarah J; Palacios-Derflingher, Luz; Whittaker, Jackie L; Emery, Carolyn A
2018-03-01
Study Design Cohort study. Background Multiple operational definitions of injury exist in dance research. The influence that these different injury definitions have on epidemiological estimations of injury burden among dancers warrants investigation. Objective To describe the influence of injury definition on injury prevalence, incidence, and severity in preprofessional ballet and contemporary dancers. Methods Dancers registered in full-time preprofessional ballet (n = 85; 77 female; median age, 15 years; range, 11-19 years) and contemporary (n = 60; 58 female; median age, 19 years; range, 17-30 years) training completed weekly online questionnaires (modified Oslo Sports Trauma Research Centre questionnaire on health problems) using 3 injury definitions: (1) time loss (unable to complete 1 or more classes/rehearsals/performances for 1 or more days beyond onset), (2) medical attention, and (3) any complaint. Physical therapists completed injury report forms to capture dance-related medical attention and time-loss injuries. Percent agreement between injury registration methods was estimated. Injury prevalence (seasonal proportion of dancers injured), incidence rates (count of new injuries per 1000 dance-exposure hours), and severity (total days lost) were examined across each definition, registration method, and dance style. Results Questionnaire response rate was 99%. Agreement between registration methods ranged between 59% (time loss) and 74% (injury location). Depending on definition, registration, and dance style, injury prevalence ranged between 9.4% (95% confidence interval [CI]: 4.1%, 17.7%; time loss) and 82.4% (95% CI: 72.5%, 89.8%; any complaint), incidence rates between 0.1 (95% CI: 0.03, 0.2; time loss) and 4.9 (95% CI: 4.1, 5.8; any complaint) injuries per 1000 dance-hours, and days lost between 111 and 588 days. Conclusion Time-loss and medical-attention injury definitions underestimate the injury burden in preprofessional dancers. 
Accordingly, injury surveillance methodologies should consider more inclusive injury definitions. J Orthop Sports Phys Ther 2018;48(3):185-193. Epub 13 Dec 2017. doi:10.2519/jospt.2018.7542 Level of Evidence Symptom prevalence study, level 1b.
Methods for estimating heterocyclic amine concentrations in cooked meats in the US diet.
Keating, G A; Bogen, K T
2001-01-01
Heterocyclic amines (HAs) are formed in numerous cooked foods commonly consumed in the diet. A method was developed to estimate dietary HA levels using HA concentrations in experimentally cooked meats reported in the literature and meat consumption data obtained from a national dietary survey. Cooking variables (meat internal temperature and weight loss, surface temperature and time) were used to develop relationships for estimating total HA concentrations in six meat types. Concentrations of five individual HAs were estimated for specific meat type/cooking method combinations based on linear regression of total and individual HA values obtained from the literature. Using these relationships, total and individual HA concentrations were estimated for 21 meat type/cooking method combinations at four meat doneness levels. Reported consumption of the 21 meat type/cooking method combinations was obtained from a national dietary survey and the age-specific daily HA intake calculated using the estimated HA concentrations (ng/g) and reported meat intakes. Estimated mean daily total HA intakes for children (to age 15 years) and adults (30+ years) were 11 and 7.0 ng/kg/day, respectively, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise approximately 65% of each intake. Pan-fried meats were the largest source of HA in the diet and chicken the largest source of HAs among the different meat types.
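The intake calculation described above can be sketched directly: daily HA intake (ng/kg/day) is the sum over meat type/cooking method combinations of HA concentration (ng/g) times grams consumed per day, divided by body weight. The concentrations, portions, and body weight below are illustrative placeholders, not the survey's values:

```python
# Hedged sketch of the dietary intake calculation described above.
def daily_ha_intake_ng_per_kg(items, body_weight_kg):
    """items: list of (ha_conc_ng_per_g, grams_consumed_per_day) pairs,
    one per meat type/cooking method combination."""
    total_ng = sum(conc * grams for conc, grams in items)
    return total_ng / body_weight_kg

# e.g. pan-fried beef and grilled chicken for a 70 kg adult (hypothetical):
intake = daily_ha_intake_ng_per_kg([(2.5, 120.0), (1.8, 100.0)], 70.0)
```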
Thomas, Carole L.; Stewart, Amy E.; Constantz, Jim E.
2000-01-01
Two methods, one a surface-water method and the second a ground-water method, were used to determine infiltration and percolation rates along a 2.5-kilometer reach of the Santa Fe River near La Bajada, New Mexico. The surface-water method uses streamflow measurements and their differences along a stream reach, streamflow-loss rates, stream surface area, and evaporation rates to determine infiltration rates. The ground-water method uses heat as a tracer to monitor percolation through shallow streambed sediments. Data collection began in October 1996 and continued through December 1997. During that period the stream reach was instrumented with three streamflow gages, and temperature profiles were monitored from the stream-sediment interface to about 3 meters below the streambed at four sites along the reach. Infiltration is the downward flow of water through the stream-sediment interface. Infiltration rates ranged from 92 to 267 millimeters per day for an intense measurement period during June 26-28, 1997, and from 69 to 256 millimeters per day during September 27-October 6, 1997. Investigators calculated infiltration rates from streamflow loss, stream surface-area measurements, and evaporation-rate estimates. Infiltration rates may be affected by unmeasured irrigation-return flow in the study reach. Although the amount of irrigation-return flow was none to very small, it may result in underestimation of infiltration rates. The infiltration portion of streamflow loss was much greater than the evaporation portion. Infiltration accounted for about 92 to 98 percent of streamflow loss. Evaporation-rate estimates ranged from 3.4 to 7.6 millimeters per day based on pan-evaporation data collected at Cochiti Dam, New Mexico, and accounted for about 2 to 8 percent of streamflow loss. Percolation is the movement of water through saturated or unsaturated sediments below the stream-sediment interface. 
Percolation rates ranged from 40 to 109 millimeters per day during June 26-28, 1997. Percolation rates were not calculated for the September 27-October 6, 1997, period because a late summer flood removed the temperature sensors from the streambed. Investigators used a heat-and-water flow model, VS2DH (variably saturated, two-dimensional heat), to calculate near-surface streambed infiltration and percolation rates from temperatures measured in the stream and streambed. Near the stream-sediment interface, infiltration and percolation rates are comparable. Comparison of infiltration and percolation rates showed that infiltration rates were greater than percolation rates. The method used to calculate infiltration rates accounted for net loss or gain over the entire stream reach, whereas the method used to calculate percolation was dependent on point measurements and, as applied in this study, neglected the nonvertical component of heat and water fluxes. In general, using the ground-water method was less labor intensive than making a series of streamflow measurements and relied on temperature, an easily measured property. The ground-water method also eliminated the difficulty of measuring or estimating evaporation from the water surface and was therefore more direct. Both methods are difficult to use during periods of flood flow. The ground-water method has problems with the thermocouple-wire temperature sensors washing out during flood events. The surface-water method often cannot be used because of safety concerns for personnel making wading streamflow measurements.
Causes and methods to estimate cryptic sources of fishing mortality.
Gilman, E; Suuronen, P; Hall, M; Kennelly, S
2013-10-01
Cryptic, not readily detectable, components of fishing mortality are not routinely accounted for in fisheries management because of a lack of adequate data, and for some components, a lack of accurate estimation methods. Cryptic fishing mortalities can cause adverse ecological effects, are a source of wastage, reduce the sustainability of fishery resources and, when unaccounted for, can cause errors in stock assessments and population models. Sources of cryptic fishing mortality are (1) pre-catch losses, where catch dies from the fishing operation but is not brought onboard when the gear is retrieved, (2) ghost-fishing mortality by fishing gear that was abandoned, lost or discarded, (3) post-release mortality of catch that is retrieved and then released alive but later dies as a result of stress and injury sustained from the fishing interaction, (4) collateral mortalities indirectly caused by various ecological effects of fishing and (5) losses due to synergistic effects of multiple interacting sources of stress and injury from fishing operations, or from cumulative stress and injury caused by repeated sub-lethal interactions with fishing operations. To fill a gap in international guidance on best practices, causes and methods for estimating each component of cryptic fishing mortality are described, and considerations for their effective application are identified. Research priorities to fill gaps in understanding the causes and estimating cryptic mortality are highlighted. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.
Bacic, Janine; Velasquez, Esther; Hammer, Leslie B
2016-01-01
Objectives Qualitative studies have highlighted the possibility of job loss following occupational injuries for some workers, but prospective investigations are scant. We used a sample of nursing home workers from the Work, Family, and Health Network to prospectively investigate the association between occupational injuries and job loss. Methods We merged data on 1331 workers, assessed four times over an 18-month period, with administrative data that include job loss from employers and publicly available data on their workplaces. Workers self-reported occupational injuries in surveys. Multivariable logistic regression models estimated risk ratios for the impact of occupational injuries on overall job loss, whereas multinomial models were used to estimate odds ratios of voluntary and involuntary job loss. Use of marginal structural models allowed adjustment for a multilevel set of confounders that may be time-varying and/or on the causal pathway. Results By 12 months, 30.3% of workers had experienced occupational injury, whereas 24.2% had experienced job loss by 18 months. Comparing workers who reported occupational injuries with those reporting no injuries, the risk ratio of overall job loss within the subsequent 6 months was 1.31 (95% CI=0.93–1.86). Comparing the same groups, injured workers had higher odds of experiencing involuntary job loss (OR:2.19; 95% CI:1.27–3.77). Also, compared with uninjured workers, those injured more than once had higher odds of voluntary job loss (OR:1.95; 95% CI:1.03–3.67), while those injured once had higher odds of involuntary job loss (OR:2.19; 95% CI:1.18–4.05). Conclusions Despite regulatory protections, occupational injuries were associated with increased risk of voluntary and involuntary job loss for nursing home workers. PMID:26786757
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
Features of HF Radio Wave Attenuation in the Midlatitude Ionosphere Near the Skip Zone Boundary
NASA Astrophysics Data System (ADS)
Denisenko, P. F.; Skazik, A. I.
2017-06-01
We briefly describe the history of studying decameter radio wave attenuation in the midlatitude ionosphere by different methods. A new method of estimating the attenuation of HF radio waves in the ionospheric F region near the skip zone boundary is presented. This method is based on an analysis of the time structure of the interference field generated by highly stable monochromatic X-mode radio waves at the observation point. The main parameter is the effective electron collision frequency νeff, which accounts for all energy losses in the form of equivalent heat loss. The frequency νeff is estimated by matching the assumed (model) structure to the experimentally observed one. Model calculations are performed using the geometrical-optics approximation. The spatial attenuation caused by medium-scale traveling ionospheric disturbances is taken into account. The spherical shape of the ionosphere and the Earth's magnetic field are accounted for approximately. Recordings of signal levels from the RWM (Moscow) station at a frequency of 9.996 MHz, made at a receiving point in Rostov, are used.
NASA Astrophysics Data System (ADS)
Yang, Guijun; Yang, Hao; Jin, Xiuliang; Pignatti, Stefano; Casa, Raffaele; Silvestro, Paolo Cosimo
2016-08-01
Drought is among the most costly natural disasters in China and worldwide. It is very important to evaluate drought-induced crop yield losses and to improve water use efficiency at the regional scale. First, crop biomass was estimated by combined use of Synthetic Aperture Radar (SAR) and optical remote sensing data. The estimated biophysical variable was then assimilated into a crop growth model (FAO AquaCrop) by the Particle Swarm Optimization (PSO) method, from farmland scale to regional scale. At the farmland scale, the most important crop parameters of the AquaCrop model were identified to reduce the number of parameters used in the assimilation procedure. The Extended Fourier Amplitude Sensitivity Test (EFAST) method was used to assess the contribution of different crop parameters to model output. Moreover, the AquaCrop model was calibrated using experimental data from Xiaotangshan, Beijing. At the regional scale, spatial application of our methods was carried out and validated in the rural area of Yangling, Shaanxi Province, in 2014. This study will provide guidelines for irrigation decisions that balance water consumption against yield loss.
Monitoring individual tree-based change with airborne lidar.
Duncanson, Laura; Dubayah, Ralph
2018-05-01
Understanding the carbon flux of forests is critical for constraining the global carbon cycle and managing forests to mitigate climate change. Monitoring forest growth and mortality rates is critical to this effort, but has been limited in the past, with estimates relying primarily on field surveys. Advances in remote sensing enable the potential to monitor tree growth and mortality across landscapes. This work presents an approach to measure tree growth and loss using multidate lidar campaigns in a high-biomass forest in California, USA. Individual tree crowns were delineated in 2008 and again in 2013 using a 3D crown segmentation algorithm, with derived heights and crown radii extracted and used to estimate individual tree aboveground biomass. Tree growth, loss, and aboveground biomass were analyzed with respect to tree height and crown radius. Both tree growth and loss rates decrease with increasing tree height, following the expectation that trees slow in growth rate as they age. Additionally, our aboveground biomass analysis suggests that, while the system is a net source of aboveground carbon, these carbon dynamics are governed by size class with the largest sources coming from the loss of a relatively small number of large individuals. This study demonstrates that monitoring individual tree-based growth and loss can be conducted with multidate airborne lidar, but these methods remain relatively immature. Disparities between lidar acquisitions were particularly difficult to overcome and decreased the sample of trees analyzed for growth rate in this study to 21% of the full number of delineated crowns. However, this study illuminates the potential of airborne remote sensing for ecologically meaningful forest monitoring at an individual tree level. As methods continue to improve, airborne multidate lidar will enable a richer understanding of the drivers of tree growth, loss, and aboveground carbon flux.
Measurement of blood loss during postpartum haemorrhage.
Lilley, G; Burkett-St-Laurent, D; Precious, E; Bruynseels, D; Kaye, A; Sanders, J; Alikhan, R; Collins, P W; Hall, J E; Collis, R E
2015-02-01
We set out to validate the accuracy of gravimetric quantification of blood loss during simulated major postpartum haemorrhage and to evaluate the technique in a consecutive cohort of women experiencing major postpartum haemorrhage. The study took place in a large UK delivery suite over a one-year period. All women who experienced major postpartum haemorrhage were eligible for inclusion. For the validation exercise, in a simulated postpartum haemorrhage scenario using known volumes of artificial blood, the accuracy of gravimetric measurement was compared with visual estimation made by delivery suite staff. In the clinical observation study, the blood volume lost during postpartum haemorrhage was measured gravimetrically according to our routine institutional protocol and was correlated with the fall in haemoglobin. The main outcome measure was the accuracy of gravimetric measurement of blood loss. Validation exercise: the mean percentage error of gravimetrically measured blood volume was 4.0±2.7%, compared with visually estimated blood volume with a mean percentage error of 34.7±32.1%. Clinical observation study: 356 of 6187 deliveries were identified as having major postpartum haemorrhage. The correlation coefficient between measured blood loss and corrected fall in haemoglobin for all patients was 0.77; correlation was stronger (0.80) for postpartum haemorrhage >1500 mL, and similar during routine and out-of-hours working. The accuracy of the gravimetric method was confirmed in simulated postpartum haemorrhage. The clinical study shows that gravimetric measurement of blood loss is correlated with the fall in haemoglobin in postpartum haemorrhage where blood loss exceeds 1500 mL. The method is simple to perform, requires only basic equipment, and can be taught and used by all maternity services during major postpartum haemorrhage. Copyright © 2014 Elsevier Ltd. All rights reserved.
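Gravimetric quantification of the kind validated above can be sketched simply: weigh blood-soaked swabs and drapes, subtract their known dry weights, and convert mass to volume via an assumed blood density. The item weights, suction volume, and density constant below are illustrative assumptions, not the institutional protocol's specifics:

```python
# Hedged sketch of gravimetric blood-loss measurement. The density constant
# and all weights are assumptions for illustration; some protocols simply
# treat 1 g of blood as 1 mL.
BLOOD_DENSITY_G_PER_ML = 1.06  # approximate density of whole blood

def gravimetric_blood_loss_ml(wet_weights_g, dry_weights_g, suction_ml=0.0):
    """Blood volume from weighed items plus directly measured suction volume."""
    mass_g = sum(wet - dry for wet, dry in zip(wet_weights_g, dry_weights_g))
    return mass_g / BLOOD_DENSITY_G_PER_ML + suction_ml

# Two weighed items (hypothetical weights) plus 450 mL in the suction canister:
loss_ml = gravimetric_blood_loss_ml([310.0, 240.0], [60.0, 40.0],
                                    suction_ml=450.0)
```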
Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J
2017-07-01
Numerous approaches are used to estimate indirect productivity losses, applying various wage estimates to poor health in working-age adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages for combined male/female earners were obtained from the UK Office of National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wage estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0%, 3%, and 6% were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children, where the average wage overestimated wages by 15%, and for 40-year-olds, where it underestimated wages by 14%. Large differences in projected productivity losses exist when the average wage is applied over a lifetime. Specifically, use of average wages overestimates productivity losses by 8-15% for childhood illnesses. Furthermore, during prime working years, use of average wages will underestimate productivity losses by 14%. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
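The comparison above can be sketched as follows: expand age-banded wages into annual estimates, then project discounted lifetime losses under age-specific versus flat average wages via the human capital approach. For simplicity this sketch interpolates linearly between band midpoints (the paper used a polynomial), and all wage figures are invented for illustration, not the Office of National Statistics data:

```python
# Hedged sketch: age-banded wages (keyed by band midpoint age) expanded to
# annual values, then used in a human-capital lifetime-loss projection.
# Wage figures are hypothetical placeholders.
bands = {20: 18000.0, 25: 24000.0, 30: 29000.0, 35: 32000.0,
         40: 33000.0, 45: 32000.0, 50: 30000.0, 55: 27000.0, 60: 23000.0}

def wage_at(age):
    """Linear interpolation between band midpoints (paper used a polynomial)."""
    keys = sorted(bands)
    if age <= keys[0]:
        return bands[keys[0]]
    if age >= keys[-1]:
        return bands[keys[-1]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= age <= hi:
            t = (age - lo) / (hi - lo)
            return bands[lo] + t * (bands[hi] - bands[lo])

def discounted_loss(onset_age, retire_age=65, rate=0.03, wage_fn=wage_at):
    """Human capital approach: discounted sum of wages lost until retirement."""
    return sum(wage_fn(a) / (1 + rate) ** (a - onset_age)
               for a in range(onset_age, retire_age))

avg_wage = sum(bands.values()) / len(bands)
loss_age_specific = discounted_loss(40)
loss_average = discounted_loss(40, wage_fn=lambda a: avg_wage)
```

Comparing `loss_age_specific` and `loss_average` across onset ages reproduces the qualitative pattern the abstract reports: the flat average wage over- or underestimates depending on where in the wage-age profile the illness falls.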
Dorazio, R.M.; Rago, P.J.
1991-01-01
We simulated mark–recapture experiments to evaluate a method for estimating fishing mortality and migration rates of populations stratified at release and recovery. When fish released in two or more strata were recovered from different recapture strata in nearly the same proportions, conditional recapture probabilities were estimated outside the [0, 1] interval. The maximum likelihood estimates tended to be biased and imprecise when the patterns of recaptures produced extremely "flat" likelihood surfaces. Absence of bias was not guaranteed, however, in experiments where recapture rates could be estimated within the [0, 1] interval. Inadequate numbers of tag releases and recoveries also produced biased estimates, although the bias was easily detected by the high sampling variability of the estimates. A stratified tag–recapture experiment with sockeye salmon (Oncorhynchus nerka) was used to demonstrate procedures for analyzing data that produce biased estimates of recapture probabilities. An estimator was derived to examine the sensitivity of recapture rate estimates to assumed differences in natural and tagging mortality, tag loss, and incomplete reporting of tag recoveries.
NASA Astrophysics Data System (ADS)
Meneghini, Robert
1998-09-01
A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, leads to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
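The thresholding idea above can be sketched numerically: assume area-wide rain rates are lognormal, observe the density only within a window [r_lo, r_hi] (light rain lost to low SNR, heavy rain removed via the attenuation proxy Q), and recover the full distribution's parameters by fitting the model to the truncated portion. The crude grid search and the synthetic "observed" density below are illustrative stand-ins for the paper's nonlinear least-squares fit and real radar data:

```python
import math

# Hedged sketch of recovering lognormal parameters from a truncated
# rain-rate distribution. Parameters and the synthetic sample are
# illustrative, not the paper's retrieval.
def lognorm_pdf(r, mu, sigma):
    return (math.exp(-(math.log(r) - mu) ** 2 / (2 * sigma ** 2))
            / (r * sigma * math.sqrt(2 * math.pi)))

# Synthetic "observed" truncated density from a known lognormal.
true_mu, true_sigma = 0.5, 0.8
r_lo, r_hi = 0.5, 10.0  # mm/h window surviving both thresholds (assumed)
grid_r = [r_lo + i * (r_hi - r_lo) / 50 for i in range(51)]
observed = [lognorm_pdf(r, true_mu, true_sigma) for r in grid_r]

def sse(mu, sigma):
    """Sum of squared errors between candidate model and truncated data."""
    return sum((lognorm_pdf(r, mu, sigma) - o) ** 2
               for r, o in zip(grid_r, observed))

# Grid search over (mu, sigma); a real implementation would use a
# nonlinear least-squares solver.
best = min(((m / 10, s / 10) for m in range(0, 11) for s in range(3, 16)),
           key=lambda p: sse(*p))
```

Once mu and sigma are recovered, moments of the full (untruncated) lognormal give the area-average rain rate, including the contribution of the censored light and heavy rain.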
Experimental estimation of energy absorption during heel strike in human barefoot walking.
Baines, Patricia M; Schwab, A L; van Soest, A J
2018-01-01
Metabolic energy expenditure during human gait is poorly understood. Mechanical energy loss during heel strike contributes to this expenditure. Previous work has estimated the energy absorption during heel strike as 0.8 J using an effective foot mass model. The aim of our study is to investigate the possibility of determining the energy absorption by more directly estimating the work done by the ground reaction force: the force-integral method. A concurrent aim is to compare this direct determination of work with the effective foot mass model. Participants in our experimental study were asked to walk barefoot at preferred speed. Ground reaction force and lower leg kinematics were collected at high sampling frequencies (3000 Hz; 1295 Hz) with tight synchronization. The work done by the ground reaction force, estimated by integrating this force over the foot-ankle deformation, is 3.8 J. The effective mass model is improved by dropping the assumption that foot-ankle deformation is maximal at the instant of the impact force peak; on theoretical grounds it is clear that, in the presence of substantial damping, peak force and peak deformation do not occur simultaneously. The energy absorption due to the vertical force only, estimated with the force-integral method, is similar to the result of the improved application of the effective mass model (2.7 J; 2.5 J). However, the total work done by the ground reaction force calculated by the force-integral method is significantly higher than that of the vertical component alone. We conclude that direct estimation of the work done by the ground reaction force is possible and preferable over use of the effective foot mass model. Assuming that the energy absorbed is lost, the mechanical energy loss of heel strike is around 3.8 J at preferred walking speeds (≈ 1.3 m/s), which contributes about 15-20% to the overall metabolic cost of transport.
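The force-integral method amounts to numerically integrating the ground reaction force over the foot-ankle deformation, W = ∫ F dz. A minimal sketch with a trapezoidal rule follows; the force and deformation samples are synthetic stand-ins, not the study's synchronized measurements:

```python
# Hedged sketch of the force-integral method: trapezoidal integration of
# vertical ground reaction force over foot-ankle deformation. The samples
# below are synthetic, chosen only to land in a plausible few-joule range.
def work_by_force_integral(forces_n, deformations_m):
    """Trapezoidal estimate of the work integral along the deformation path."""
    work = 0.0
    for i in range(1, len(forces_n)):
        dz = deformations_m[i] - deformations_m[i - 1]
        work += 0.5 * (forces_n[i] + forces_n[i - 1]) * dz
    return work

# Synthetic heel strike: force rises to ~700 N while the foot-ankle
# system compresses about 8 mm.
deformation = [i * 0.001 for i in range(9)]                  # 0 to 8 mm
force = [0, 150, 320, 480, 600, 670, 700, 690, 650]          # N
absorbed_j = work_by_force_integral(force, deformation)      # a few joules
```

With real data the integral would be taken over the measured deformation trajectory during the heel-strike interval, and repeated for each component of the ground reaction force.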
The problem of estimating recent genetic connectivity in a changing world.
Samarasin, Pasan; Shuter, Brian J; Wright, Stephen I; Rodd, F Helen
2017-02-01
Accurate understanding of population connectivity is important to conservation because dispersal can play an important role in population dynamics, microevolution, and assessments of extirpation risk and population rescue. Genetic methods are increasingly used to infer population connectivity because advances in technology have made them more advantageous (e.g., cost effective) relative to ecological methods. Given the reductions in wildlife population connectivity since the Industrial Revolution and more recent drastic reductions from habitat loss, it is important to know the accuracy of and biases in genetic connectivity estimators when connectivity has declined recently. Using simulated data, we investigated the accuracy and bias of 2 common estimators of migration (movement of individuals among populations) rate. We focused on the timing of the connectivity change and the magnitude of that change on the estimates of migration by using a coalescent-based method (Migrate-n) and a disequilibrium-based method (BayesAss). Contrary to expectations, when historically high connectivity had declined recently: (i) both methods over-estimated recent migration rates; (ii) the coalescent-based method (Migrate-n) provided better estimates of recent migration rate than the disequilibrium-based method (BayesAss); (iii) the coalescent-based method did not accurately reflect long-term genetic connectivity. Overall, our results highlight the problems with comparing coalescent and disequilibrium estimates to make inferences about the effects of recent landscape change on genetic connectivity among populations. We found that contrasting these 2 estimates to make inferences about genetic-connectivity changes over time could lead to inaccurate conclusions. © 2016 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Kinoshita, Youhei; Tanoue, Masahiro; Watanabe, Satoshi; Hirabayashi, Yukiko
2018-01-01
This study represents the first attempt to quantify the effects of autonomous adaptation on the projection of global flood hazards and to assess future flood risk by including this effect. A vulnerability scenario, which varies according to the autonomous adaptation effect for conventional disaster mitigation efforts, was developed based on historical vulnerability values derived from flood damage records and a river inundation simulation. Coupled with general circulation model outputs and future socioeconomic scenarios, potential future flood fatalities and economic loss were estimated. By including the effect of autonomous adaptation, our multimodel ensemble estimates projected a 2.0% decrease in potential flood fatalities and an 821% increase in potential economic losses by 2100 under the highest emission scenario together with a large population increase. Vulnerability changes reduced potential flood consequences by 64%-72% in terms of potential fatalities and 28%-42% in terms of potential economic losses by 2100. Although socioeconomic changes made the greatest contribution to the potential increased consequences of future floods, about half of the increase in potential economic losses was mitigated by autonomous adaptation. There is a clear and positive relationship between the global temperature increase from the pre-industrial level and the estimated mean potential flood economic loss, while there is a negative relationship with potential fatalities due to the autonomous adaptation effect. A bootstrapping analysis suggests a significant increase in potential flood fatalities (+5.7%) without any adaptation if the temperature increases by 1.5 °C-2.0 °C, whereas the increase in potential economic loss (+0.9%) was not significant. 
Our method enables the effects of autonomous adaptation and additional adaptation efforts on climate-induced hazards to be distinguished, which would be essential for the accurate estimation of the cost of adaptation to climate change.
Earthquake Hazard Mitigation Using a Systems Analysis Approach to Risk Assessment
NASA Astrophysics Data System (ADS)
Legg, M.; Eguchi, R. T.
2015-12-01
The goal of earthquake hazard mitigation is to reduce losses due to severe natural events. The first step is to conduct a Seismic Risk Assessment consisting of 1) hazard estimation, 2) vulnerability analysis, and 3) exposure compilation. Seismic hazards include ground deformation, shaking, and inundation. The hazard estimation may be probabilistic or deterministic. Probabilistic Seismic Hazard Assessment (PSHA) is generally applied to site-specific risk assessments, but may involve large areas as in a National Seismic Hazard Mapping program. Deterministic hazard assessments are needed for geographically distributed exposure such as lifelines (infrastructure), but may also be important for large communities. Vulnerability evaluation includes quantification of fragility for construction or components, including personnel. Exposure represents the existing or planned construction, facilities, infrastructure, and population in the affected area. Risk (expected loss) is the product of the quantified hazard, vulnerability (damage algorithm), and exposure; this estimate may be used to prepare emergency response plans, retrofit existing construction, or guide community planning to avoid hazards. The risk estimate provides data needed to acquire earthquake insurance to assist with effective recovery following a severe event. Earthquake Scenarios used in Deterministic Risk Assessments provide detailed information on where hazards may be most severe, which system components are most susceptible to failure, and the combined effects of a severe earthquake on the whole system or community. Casualties (injuries and deaths) have been the primary factor in defining building codes for seismic-resistant construction. Economic losses may be equally significant factors that can influence proactive hazard mitigation. Large urban earthquakes may produce catastrophic losses due to a cascading of effects often missed in PSHA.
Economic collapse may ensue if damaged workplaces, disruption of utilities, and resultant loss of income produces widespread default on payments. With increased computational power and more complete inventories of exposure, Monte Carlo methods may provide more accurate estimation of severe losses and the opportunity to increase resilience of vulnerable systems and communities.
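The risk relation described above (expected loss as the product of quantified hazard, vulnerability, and exposure) can be sketched in a few lines; the numbers below are hypothetical, not from any actual assessment.

```python
# Sketch of expected annual loss as hazard x vulnerability x exposure.
# All inputs are illustrative assumptions.

def expected_annual_loss(hazard_prob, vulnerability, exposure_value):
    """Expected loss: annual hazard probability times damage ratio times exposed value."""
    return hazard_prob * vulnerability * exposure_value

# Example: a 2% annual chance of damaging shaking, a 30% damage ratio,
# and $10 million of exposed building stock -> about $60,000 per year.
loss = expected_annual_loss(0.02, 0.30, 10_000_000)
print(loss)
```

In a deterministic scenario assessment, the same product is evaluated per scenario rather than weighted by annual probability.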
A remote sensing method for estimating regional reservoir area and evaporative loss
Zhang, Hua; Gorelick, Steven M.; Zimba, Paul V.; ...
2017-10-07
Evaporation from the water surface of a reservoir can significantly affect its function of ensuring the availability and temporal stability of water supply. Current estimations of reservoir evaporative loss are dependent on water area derived from a reservoir storage-area curve. Such curves are unavailable if the reservoir is located in a data-sparse region or questionable if long-term sedimentation has changed the original elevation-area relationship. In this paper, we propose a remote sensing framework to estimate reservoir evaporative loss at the regional scale. This framework uses a multispectral water index to extract reservoir area from Landsat imagery and estimate monthly evaporation volume based on pan-derived evaporative rates. The optimal index threshold is determined based on local observations and extended to unobserved locations and periods. Built on the cloud computing capacity of the Google Earth Engine, this framework can efficiently analyze satellite images at large spatiotemporal scales, where such analysis is infeasible with a single computer. Our study involves 200 major reservoirs in Texas, captured in 17,811 Landsat images over a 32-year period. The results show that these reservoirs contribute to an annual evaporative loss of 8.0 billion cubic meters, equivalent to 20% of their total active storage or 53% of total annual water use in Texas. At five coastal basins, reservoir evaporative losses exceed the minimum freshwater inflows required to sustain ecosystem health and fishery productivity of the receiving estuaries. Reservoir evaporative loss can be significant enough to counterbalance the positive effects of impounding water and to offset the contribution of water conservation and reuse practices. Our results also reveal the spatially variable performance of the multispectral water index and indicate the limitation of using scene-level cloud cover to screen satellite images.
Finally, this study demonstrates the advantage of combining satellite remote sensing and cloud computing to support regional water resources assessment.
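The core volume calculation in this framework (surface area from imagery times a pan-derived evaporation rate) can be sketched as follows; the 0.7 pan coefficient is an illustrative assumption, not a value stated in the abstract.

```python
# Sketch: monthly open-water evaporative loss as
# (reservoir surface area) x (pan evaporation x pan coefficient).
# The pan coefficient of 0.7 is a common textbook assumption, not the paper's value.

def monthly_evaporation_volume_m3(area_m2, pan_evap_m, pan_coefficient=0.7):
    """Open-water evaporation volume (m^3) for one month."""
    return area_m2 * pan_evap_m * pan_coefficient

# Example: a 50 km^2 reservoir with 0.25 m of pan evaporation in one month,
# i.e. about 8.75 million cubic meters lost.
volume = monthly_evaporation_volume_m3(50e6, 0.25)
print(volume / 1e6)  # million cubic meters
```

Summing such monthly volumes over 200 reservoirs and 32 years is the kind of workload the abstract notes is only practical on a platform like Google Earth Engine.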
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimates of the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are also obtained. Both symmetric and asymmetric loss functions are considered in the Bayesian computations. Markov chain Monte Carlo (MCMC) methods are used to compute the Bayes estimates and prediction bounds. The results are specialized to lower record values. Comparisons are made between the Bayesian and maximum likelihood estimators via Monte Carlo simulation.
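As a rough illustration of how Bayes estimates fall out of MCMC output under symmetric versus asymmetric loss (the exponentiated Weibull model itself is not reproduced here), the posterior mean is the Bayes estimate under squared-error loss, while under the asymmetric LINEX loss with shape parameter a it is -(1/a) log E[exp(-a*theta)]. The Gaussian draws below are a stand-in for an MCMC posterior sample.

```python
# Sketch: Bayes estimates from posterior draws under two loss functions.
# The draws are a synthetic stand-in for MCMC output, not the paper's model.
import math
import random

random.seed(0)
draws = [random.gauss(2.0, 0.5) for _ in range(10_000)]  # stand-in posterior sample

# Squared-error (symmetric) loss -> posterior mean.
bayes_sel = sum(draws) / len(draws)

# LINEX (asymmetric) loss with shape a -> -(1/a) * log E[exp(-a*theta)].
a = 1.0
bayes_linex = -math.log(sum(math.exp(-a * t) for t in draws) / len(draws)) / a

print(round(bayes_sel, 3), round(bayes_linex, 3))
```

With a > 0 the LINEX estimate sits below the posterior mean, reflecting the heavier penalty LINEX places on overestimation.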
Inferring species trees from incongruent multi-copy gene trees using the Robinson-Foulds distance
2013-01-01
Background Constructing species trees from multi-copy gene trees remains a challenging problem in phylogenetics. One difficulty is that the underlying genes can be incongruent due to evolutionary processes such as gene duplication and loss, deep coalescence, or lateral gene transfer. Gene tree estimation errors may further exacerbate the difficulties of species tree estimation. Results We present a new approach for inferring species trees from incongruent multi-copy gene trees that is based on a generalization of the Robinson-Foulds (RF) distance measure to multi-labeled trees (mul-trees). We prove that it is NP-hard to compute the RF distance between two mul-trees; however, it is easy to calculate this distance between a mul-tree and a singly-labeled species tree. Motivated by this, we formulate the RF problem for mul-trees (MulRF) as follows: Given a collection of multi-copy gene trees, find a singly-labeled species tree that minimizes the total RF distance from the input mul-trees. We develop and implement a fast SPR-based heuristic algorithm for the NP-hard MulRF problem. We compare the performance of the MulRF method (available at http://genome.cs.iastate.edu/CBL/MulRF/) with several gene tree parsimony approaches using gene tree simulations that incorporate gene tree error, gene duplications and losses, and/or lateral transfer. The MulRF method produces more accurate species trees than gene tree parsimony approaches. We also demonstrate that the MulRF method infers in minutes a credible plant species tree from a collection of nearly 2,000 gene trees. Conclusions Our new phylogenetic inference method, based on a generalized RF distance, makes it possible to quickly estimate species trees from large genomic data sets. 
Since the MulRF method, unlike gene tree parsimony, is based on a generic tree distance measure, it is appealing for analyses of genomic data sets, in which many processes such as deep coalescence, recombination, gene duplication and losses as well as phylogenetic error may contribute to gene tree discord. In experiments, the MulRF method estimated species trees accurately and quickly, demonstrating MulRF as an efficient alternative approach for phylogenetic inference from large-scale genomic data sets. PMID:24180377
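The distance measure that MulRF generalizes can be sketched for ordinary singly-labeled trees: represent each rooted tree by its set of clades and count the clades present in exactly one tree. This illustrates the Robinson-Foulds distance itself, not the NP-hard mul-tree algorithm.

```python
# Sketch of the rooted Robinson-Foulds distance between two singly-labeled
# trees, each represented as a set of clades (frozensets of leaf labels).

def rf_distance(clades_a, clades_b):
    """Number of clades present in exactly one of the two trees."""
    return len(clades_a ^ clades_b)  # symmetric difference

# Trees on leaves {A, B, C, D}: ((A,B),(C,D)) vs ((A,C),(B,D)).
t1 = {frozenset("AB"), frozenset("CD")}
t2 = {frozenset("AC"), frozenset("BD")}
print(rf_distance(t1, t2))  # 4: no internal clade is shared
```

For a mul-tree, leaves can repeat, so clades become multisets and the comparison is no longer a simple set difference; that is the generalization the paper proves NP-hard between two mul-trees.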
An overall estimation of losses caused by diseases in the Brazilian fish farms.
Tavares-Dias, Marcos; Martins, Maurício Laterça
2017-12-01
Parasitic and infectious diseases are common in finfish, but their economic impact on production is difficult to estimate accurately in a country of large dimensions like Brazil. The aim of this study was to estimate the economic losses caused by disease-related mortality of farmed finfish in Brazil. A model for estimating the costs related to parasitic and bacterial diseases in farmed fish is presented, together with an estimate of these economic impacts. We used official production and mortality data for finfish to make a rough estimate of the economic losses. The losses presented here comprise direct and indirect economic costs for freshwater farmed fish, estimated at US$ 84 million per year. It was thus possible to establish, for the first time, an estimate of the overall losses in finfish production in Brazil using available production data. This estimate should help researchers and policy makers approximate the economic costs of diseases for the fish farming industry, and support the development of public policies and priority research lines on disease control measures.
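The kind of rough estimate described above can be sketched as lost tonnage times farm-gate price; the figures below are hypothetical, chosen only to show the arithmetic, and the paper's actual model includes indirect costs as well.

```python
# Sketch of a direct disease-mortality loss estimate.
# Production volume, mortality rate, and price are illustrative assumptions.

def mortality_loss_usd(production_t, mortality_rate, price_usd_per_t):
    """Direct loss from disease mortality: lost tonnage times farm-gate price."""
    return production_t * mortality_rate * price_usd_per_t

# Example: 500,000 t of production, 10% disease mortality, US$ 1,700 per tonne
# -> roughly US$ 85 million in direct losses.
print(mortality_loss_usd(500_000, 0.10, 1_700))
```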
Estimated economic impact of vaccinations in 73 low- and middle-income countries, 2001–2020
Clark, Samantha; Portnoy, Allison; Grewal, Simrun; Stack, Meghan L; Sinha, Anushua; Mirelman, Andrew; Franklin, Heather; Friberg, Ingrid K; Tam, Yvonne; Walker, Neff; Clark, Andrew; Ferrari, Matthew; Suraratdecha, Chutima; Sweet, Steven; Goldie, Sue J; Garske, Tini; Li, Michelle; Hansen, Peter M; Johnson, Hope L; Walker, Damian
2017-01-01
Objective: To estimate the economic impact likely to be achieved by efforts to vaccinate against 10 vaccine-preventable diseases between 2001 and 2020 in 73 low- and middle-income countries largely supported by Gavi, the Vaccine Alliance. Methods: We used health impact models to estimate the economic impact of achieving forecasted coverages for vaccination against Haemophilus influenzae type b, hepatitis B, human papillomavirus, Japanese encephalitis, measles, Neisseria meningitidis serogroup A, rotavirus, rubella, Streptococcus pneumoniae and yellow fever. In comparison with no vaccination, we modelled the costs – expressed in 2010 United States dollars (US$) – of averted treatment, transportation costs, productivity losses of caregivers and productivity losses due to disability and death. We used the value-of-a-life-year method to estimate the broader economic and social value of living longer, in better health, as a result of immunization. Findings: We estimated that, in the 73 countries, vaccinations given between 2001 and 2020 will avert over 20 million deaths and save US$ 350 billion in cost of illness. The deaths and disability prevented by vaccinations given during the two decades will result in estimated lifelong productivity gains totalling US$ 330 billion and US$ 9 billion, respectively. Over the lifetimes of the vaccinated cohorts, the same vaccinations will save an estimated US$ 5 billion in treatment costs. The broader economic and social value of these vaccinations is estimated at US$ 820 billion. Conclusion: By preventing significant costs and potentially increasing economic productivity among some of the world’s poorest countries, the impact of immunization goes well beyond health. PMID:28867843
Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok
2016-05-01
To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Of all the SSFP variants considered, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Amenda, Lisa; Pfurtscheller, Clemens
2013-04-01
Owing to increased settlement in hazardous areas and rising asset values, natural disasters such as floods, landslides and rockfalls cause high economic losses in Alpine lateral valleys. Especially in small municipalities, indirect losses, mainly stemming from a breakdown of transport networks, and the costs of emergency response can reach critical levels. A quantification of these losses is necessary to estimate the worthiness of mitigation measures, to determine the appropriate level of disaster assistance and to improve risk management strategies. Comprehensive approaches are available for assessing direct losses. However, indirect losses and emergency costs are widely unassessed, and the empirical basis for estimating them is weak. To address the resulting uncertainties in project appraisals, a standardized methodology has been developed that deals with local economic effects and the emergency efforts needed. In our approach, the cost-benefit analysis for technical mitigation used by the Austrian Torrent and Avalanche Control (TAC) will be optimized and extended, taking as a design event the 2005 debris flow that struck a small town in the upper Inn valley in southwest Tyrol (Austria). In that event, 84 buildings were affected and 430 people were evacuated, and in response the TAC implemented protection measures costing 3.75 million euros. Upgrading the TAC method and analyzing to what extent the cost-benefit ratio changes is one of the main objectives of this study. For estimating short-run indirect effects and emergency costs at the local level, data was collected via questionnaires, field mapping and guided interviews, as well as extensive literature research. On this basis, up-to-date calculation methods were developed and the cost-benefit analysis of the TAC was recalculated with the new results. This makes the cost-benefit ratio more precise and specific, and hence the decision on which mitigation alternative to carry out.
Based on this, the worthiness of the mitigation measures can be determined in more detail and the proper level of emergency assistance can be calculated more adequately. This study also creates a better data basis for evaluating technical and non-technical mitigation measures, which is useful for government agencies, insurance companies and researchers.
Gómez-Romano, Fernando; Villanueva, Beatriz; Fernández, Jesús; Woolliams, John A; Pong-Wong, Ricardo
2016-01-13
Optimal contribution methods have proved to be very efficient for controlling the rates at which coancestry and inbreeding increase and therefore, for maintaining genetic diversity. These methods have usually relied on pedigree information for estimating genetic relationships between animals. However, with the large amount of genomic information now available such as high-density single nucleotide polymorphism (SNP) chips that contain thousands of SNPs, it becomes possible to calculate more accurate estimates of relationships and to target specific regions in the genome where there is a particular interest in maximising genetic diversity. The objective of this study was to investigate the effectiveness of using genomic coancestry matrices for: (1) minimising the loss of genetic variability at specific genomic regions while restricting the overall loss in the rest of the genome; or (2) maximising the overall genetic diversity while restricting the loss of diversity at specific genomic regions. Our study shows that the use of genomic coancestry was very successful at minimising the loss of diversity and outperformed the use of pedigree-based coancestry (genetic diversity even increased in some scenarios). The results also show that genomic information allows a targeted optimisation to maintain diversity at specific genomic regions, whether they are linked or not. The level of variability maintained increased when the targeted regions were closely linked. However, such targeted management leads to an important loss of diversity in the rest of the genome and, thus, it is necessary to take further actions to constrain this loss. Optimal contribution methods also proved to be effective at restricting the loss of diversity in the rest of the genome, although the resulting rate of coancestry was higher than the constraint imposed. 
The use of genomic matrices when optimising contributions permits the control of genetic diversity and inbreeding at specific regions of the genome through the minimisation of partial genomic coancestry matrices. The formula used to predict coancestry in the next generation produces biased results and therefore it is necessary to refine the theory of genetic contributions when genomic matrices are used to optimise contributions.
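The quantity that optimal-contribution methods control can be sketched directly: the expected mean coancestry of the next generation is the quadratic form c'Ac, where A is a (pedigree- or genomic-based) coancestry matrix and c the vector of parental contributions. The matrix and contribution vectors below are made-up illustrations.

```python
# Sketch of the group coancestry c' A c that optimal-contribution methods
# minimise. A is a hypothetical 3x3 coancestry matrix; c sums to 1.

def group_coancestry(A, c):
    """Expected mean coancestry of the next generation, c' A c."""
    n = len(c)
    return sum(c[i] * A[i][j] * c[j] for i in range(n) for j in range(n))

A = [[0.5, 0.1, 0.0],
     [0.1, 0.5, 0.2],
     [0.0, 0.2, 0.5]]
equal = [1 / 3, 1 / 3, 1 / 3]
skewed = [0.6, 0.2, 0.2]

# Skewing contributions toward one parent raises expected coancestry.
print(group_coancestry(A, equal), group_coancestry(A, skewed))
```

Restricting diversity loss at specific genomic regions, as in the study, amounts to placing constraints on the same quadratic form built from partial (region-specific) genomic coancestry matrices.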
NASA Astrophysics Data System (ADS)
Wald, D. J.; Jaiswal, K. S.; Marano, K.; Hearne, M.; Earle, P. S.; So, E.; Garcia, D.; Hayes, G. P.; Mathias, S.; Applegate, D.; Bausch, D.
2010-12-01
The U.S. Geological Survey (USGS) has begun publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses. These estimates should significantly enhance the utility of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system that has been providing estimated ShakeMaps and computing population exposures to specific shaking intensities since 2007. Quantifying earthquake impacts and communicating loss estimates (and their uncertainties) to the public has been the culmination of several important new and evolving components of the system. First, the operational PAGER system now relies on empirically-based loss models that account for estimated shaking hazard, population exposure, and employ country-specific fatality and economic loss functions derived using analyses of losses due to recent and past earthquakes. In some countries, our empirical loss models are informed in part by PAGER’s semi-empirical and analytical loss models, and building exposure and vulnerability data sets, all of which are being developed in parallel to the empirical approach. Second, human and economic loss information is now portrayed as a supplement to existing intensity/exposure content on both PAGER summary alert (available via cell phone/email) messages and web pages. Loss calculations also include estimates of the economic impact with respect to the country’s gross domestic product. Third, in order to facilitate rapid and appropriate earthquake responses based on our probable loss estimates, in early 2010 we proposed a four-level Earthquake Impact Scale (EIS). Instead of simply issuing median estimates for losses—which can be easily misunderstood and misused—this scale provides ranges of losses from which potential responders can gauge expected overall impact from strong shaking. EIS is based on two complementary criteria: the estimated cost of damage, which is most suitable for U.S. 
domestic events; and estimated ranges of fatalities, which are generally more appropriate for global events, particularly in earthquake-vulnerable countries. Alert levels are characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1000, respectively. For damage impact, yellow, orange, and red thresholds are triggered when estimated US dollar losses reach 1 million, 100 million, and 1 billion+ levels, respectively. Finally, alerting protocols now explicitly support EIS-based alerts. Critical users can receive PAGER alerts i) based on the EIS-based alert level, in addition to or as an alternative to magnitude and population/intensity exposure-based alerts, and ii) optionally, based on user-selected regions of the world. The essence of PAGER’s impact-based alerting is that actionable loss information is now available in the immediate aftermath of significant earthquakes worldwide based on quantifiable, albeit uncertain, loss estimates provided by the USGS.
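The four-level Earthquake Impact Scale described above maps cleanly onto thresholds in code: fatality triggers of 1, 100, and 1,000 and economic-loss triggers of US$ 1 million, 100 million, and 1 billion for yellow, orange, and red. The function below is a sketch of that mapping, not the operational PAGER implementation.

```python
# Sketch of the EIS alerting thresholds stated in the abstract.

def eis_alert(fatalities=0, losses_usd=0.0):
    """Return the higher of the fatality-based and loss-based alert levels."""
    levels = ["green", "yellow", "orange", "red"]
    f_level = sum(fatalities >= t for t in (1, 100, 1000))
    d_level = sum(losses_usd >= t for t in (1e6, 1e8, 1e9))
    return levels[max(f_level, d_level)]

print(eis_alert(fatalities=0, losses_usd=5e8))  # orange: national-scale impact
print(eis_alert(fatalities=1200))               # red: international response
```

In practice PAGER issues ranges of losses rather than point estimates, so the operational alert reflects the probability that losses exceed each threshold rather than a single comparison.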
Operational Risk Measurement of Chinese Commercial Banks Based on Extreme Value Theory
NASA Astrophysics Data System (ADS)
Song, Jiashan; Li, Yong; Ji, Feng; Peng, Cheng
Financial institutions and supervisory institutions agree on the need to strengthen the measurement and management of operational risk. This paper builds a model of operational-risk losses based on the Peaks Over Threshold model, emphasizing a weighted-least-squares refinement of Hill's estimation method. It also discusses the small-sample situation and fixes the sample threshold more objectively, based on media-published data on operational-risk losses at major Chinese banks from 1994 to 2007.
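The classical Hill estimator that the paper refines can be sketched in a few lines: given the k largest losses, it averages the log-ratios against the (k+1)-th largest. This is the standard estimator, not the paper's weighted-least-squares version.

```python
# Sketch of the classical Hill estimator for the tail index of
# heavy-tailed loss data. The Pareto sample is synthetic.
import math
import random

def hill_estimator(losses, k):
    """Hill estimate of the tail index from the k largest observations."""
    x = sorted(losses, reverse=True)
    return sum(math.log(x[i] / x[k]) for i in range(k)) / k

# Pareto(alpha=2) losses have true tail index 1/alpha = 0.5.
random.seed(1)
sample = [random.paretovariate(2.0) for _ in range(5000)]
print(round(hill_estimator(sample, 200), 2))  # close to 0.5
```

The choice of k (equivalently, the sample threshold) drives the bias-variance trade-off, which is exactly where the paper's more objective threshold-fixing procedure comes in.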
NASA Astrophysics Data System (ADS)
Jaiswal, P.; van Westen, C. J.; Jetten, V.
2010-06-01
A quantitative approach for landslide risk assessment along transportation lines is presented and applied to a road and a railway alignment in the Nilgiri hills in southern India. The method allows estimating direct risk affecting the alignments, vehicles and people, and indirect risk resulting from the disruption of economic activities. The data required for the risk estimation were obtained from historical records. A total of 901 landslides were catalogued initiating from cut slopes along the railway and road alignment. The landslides were grouped into three magnitude classes based on the landslide type, volume, scar depth, run-out distance, etc., and their probability of occurrence was obtained using a frequency-volume distribution. Hazard, for a given return period, expressed as the number of landslides of a given magnitude class per kilometre of cut slopes, was obtained using the Gumbel distribution and the probability of landslide magnitude. In total 18 specific hazard scenarios were generated using the three magnitude classes and six return periods (1, 3, 5, 15, 25, and 50 years). The assessment of the vulnerability of the road and railway line was based on damage records, whereas the vulnerability of different types of vehicles and people was subjectively assessed based on limited historic incidents. Direct specific loss for the alignments (railway line and road) and vehicles (train, bus, lorry, car and motorbike) was expressed in monetary value (US$), and direct specific loss of life of commuters was expressed in annual probability of death. Indirect specific loss (US$) derived from the traffic interruption was evaluated considering alternative driving routes, and includes losses resulting from additional fuel consumption, additional travel cost, loss of income to local businesses, and loss of revenue to the railway department.
The results indicate that the total loss, including both direct and indirect loss, for return periods from 1 to 50 years varies from US$ 90,840 to US$ 779,500, and the average annual total loss was estimated as US$ 35,000. The annual probability of death for the person most at risk travelling in a bus, lorry, car, motorbike or train is less than 10⁻⁴ in all the time periods considered. The detailed estimation of direct and indirect risk will facilitate developing landslide risk mitigation and management strategies for transportation lines in the study area.
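The Gumbel step above links return periods to hazard levels via the return-level formula x_T = mu - beta * ln(-ln(1 - 1/T)). The sketch below uses illustrative location and scale parameters, not values fitted to the Nilgiri data; note the formula degenerates at T = 1 (exceedance probability 1), so the 1-year period needs separate handling.

```python
# Sketch of Gumbel return levels for a set of return periods.
# mu (location) and beta (scale) here are illustrative, not fitted values.
import math

def gumbel_return_level(mu, beta, T):
    """Event magnitude expected once per T-year return period (T > 1)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Example: hypothetical landslide counts per km of cut slope for five of the
# six return periods used in the study (the 1-year case is excluded).
for T in (3, 5, 15, 25, 50):
    print(T, round(gumbel_return_level(2.0, 1.5, T), 2))
```

Longer return periods map to higher return levels, which is what generates the increasing loss totals across the 1- to 50-year scenarios.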
Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field
Dimmock, Stephen G.; Kouwenberg, Roy; Mitchell, Olivia S.; Peijnenburg, Kim
2016-01-01
We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model’s estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences. PMID:26924890
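The α-MaxMin rule the data favor can be written down directly: the value of an ambiguous act is a mixture of the worst-case and best-case expected utilities over the set of candidate priors, with α measuring ambiguity aversion. The bet and probability interval below are hypothetical illustrations.

```python
# Sketch of the alpha-MaxMin decision rule:
# value = alpha * (worst case) + (1 - alpha) * (best case) over the prior set.

def alpha_maxmin_value(expected_utilities, alpha):
    """Mix worst- and best-case expected utility; alpha = ambiguity aversion."""
    return alpha * min(expected_utilities) + (1 - alpha) * max(expected_utilities)

# Example: priors imply a winning probability between 0.3 and 0.6 for a $100
# prize; an ambiguity-averse agent (alpha = 0.8) values the bet near the
# worst case (about 36, versus 45 under the midpoint prior).
eus = [p * 100 for p in (0.3, 0.45, 0.6)]
print(alpha_maxmin_value(eus, alpha=0.8))
```

Setting alpha = 1 recovers MaxMin and alpha = 0 recovers MaxMax, the two special cases the paper's data reject; the width of the interval (0.3 to 0.6) plays the role of the perceived degree of ambiguity.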
Insensible perspiration during anaesthesia and surgery.
Reithner, L; Johansson, H; Strouth, L
1980-10-01
Cutaneous and respiratory insensible perspiration were studied in patients during anaesthesia and surgery. The evaporative water loss from the respiratory tract was studied in 27 patients undergoing abdominal surgery. The method used was based on a fast-acting aspiration psychrometer, and the expired gases were heated so that no condensation could occur before the gases reached the psychrometer. The evaporative water loss from the skin was studied in 18 patients undergoing abdominal surgery. The method used was based on estimation of the vapour pressure gradient immediately adjacent to the surface of the skin. It was shown that the evaporative water and heat losses from the respiratory tract during surgery amount to about 10 g·m⁻²·h⁻¹ and about 25 kJ·m⁻²·h⁻¹, respectively, which is a 15% increase in comparison with normal breathing in an indoor environment. The loss from the skin was about 10 g·m⁻²·h⁻¹, which does not differ from results obtained in healthy individuals in a corresponding environment.
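The water and heat figures above are linked by the latent heat of vaporization: the heat carried off equals the evaporated mass times roughly 2.43 kJ/g near body temperature. That coefficient is a standard physiological value assumed here for the cross-check, not one stated in the abstract.

```python
# Cross-check: evaporative heat flux = water flux x latent heat of
# vaporization (~2.43 kJ/g near skin temperature; an assumed constant).

LATENT_HEAT_KJ_PER_G = 2.43

def evaporative_heat_loss(water_loss_g_m2_h):
    """Heat flux (kJ/m^2/h) carried by a given evaporative water flux."""
    return water_loss_g_m2_h * LATENT_HEAT_KJ_PER_G

# 10 g/m^2/h of water corresponds to about 24 kJ/m^2/h of heat,
# consistent with the ~25 kJ/m^2/h reported above.
print(evaporative_heat_loss(10))
```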
[Menstrual blood loss and iron nutritional status in female undergraduate students].
Li, Jing; Gao, Qiang; Tian, Su; Chen, Yuexiao; Ma, Yuxia; Huang, Zhenwu
2011-03-01
To study menstrual blood loss and iron nutritional status in female undergraduate students, thirty female undergraduate students were selected by simple random sampling, and their general information was collected by questionnaire. Menstrual blood was collected by weighing every pad before and after use, and the blood not collected in pads was estimated. Hemoglobin, serum free protoporphyrin and serum ferritin were measured by routine methods. The relationship between menstrual blood loss and iron nutritional status was analyzed by bivariate correlation. The average menstrual period was (4.5 +/- 1.4) days. The average menstrual blood loss was (59.3 +/- 25.1) g, in a range of 24 g to 110 g. The average levels of serum ferritin, free protoporphyrin and hemoglobin were (25.13 +/- 14.33) ng/ml, (0.06 +/- 0.01) microg/ml and (131.61 +/- 9.76) g/L, respectively. Iron stores were depleted (serum ferritin < 12 ng/ml) in 22.58% of subjects. Menstrual blood loss was negatively correlated with serum ferritin. The amount of menstrual blood loss differed significantly among individual students. The absence of clinical anemia does not imply good iron nutritional status; serum ferritin is a sensitive indicator of iron nutritional status.
Indirect cost of maternal deaths in the WHO African Region in 2010.
Kirigia, Joses Muthuri; Mwabu, Germano Mwige; Orem, Juliet Nabyonga; Muthuri, Rosenabi Deborah Karimi
2014-08-31
An estimated 147,741 maternal deaths occurred in 2010 in 45 of the 47 countries in the African Region of the World Health Organization (WHO). The objective of this study was to estimate the indirect cost of maternal deaths in the Region to provide data for use in advocacy for increased domestic and external investment in multisectoral policy interventions to curb maternal mortality. This study used the cost-of-illness method to estimate the indirect cost of maternal mortality, i.e. the loss in non-health gross domestic product (GDP) attributable to maternal deaths. Estimates on maternal mortality for 2010 from Trends in maternal mortality: 1990 to 2010 published by WHO, UNICEF, UNFPA and the World Bank were used in these calculations. Values for future non-health GDP lost were converted into their present values by applying a 3% discount rate. One-way sensitivity analysis at 5% and 10% discount rates assessed the impact on non-health GDP loss. Indirect cost analysis was undertaken for the countries, categorized under three income groups. Group 1 consisted of nine high and upper middle income countries, Group 2 of 12 lower middle income countries, and Group 3 of 26 low income countries. Estimates for Seychelles in Group 1 and South Sudan in Group 3 were not provided in the source used. The 147,741 maternal deaths that occurred in 45 countries in the African Region in 2010 resulted in a total non-health GDP loss of Int$ 4.5 billion (PPP). About 24.5% of the loss was in Group 1 countries, 44.9% in Group 2 countries and 30.6% in Group 3 countries. This translated into losses in non-health GDP of Int$ 139,219, Int$ 35,440 and Int$ 16,397 per maternal death, respectively, for the three groups. Using discount rates of 5% and 10% reduced the total non-health GDP loss by 19.1% and 47.7%, respectively. Maternal mortality is responsible for a noteworthy level of non-health GDP loss among the countries in the African Region. 
There is urgent need, therefore, to increase domestic and external investment to scale up coverage of existing cost-effective, multisectoral women's health interventions to reduce maternal morbidity and mortality.
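The discounting step the study describes (future non-health GDP losses converted to present value at 3%, with 5% and 10% used for sensitivity analysis) can be sketched as follows; the annual loss figure and time horizon below are invented placeholders, not the study's inputs.

```python
# Sketch of the cost-of-illness discounting step described above.
# Figures here are illustrative, not the study's data.

def present_value(annual_loss, years, rate):
    """Discount a constant annual non-health GDP loss to present value."""
    return sum(annual_loss / (1.0 + rate) ** t for t in range(1, years + 1))

pv_3 = present_value(1000.0, 30, 0.03)    # base-case 3% discount rate
pv_5 = present_value(1000.0, 30, 0.05)    # sensitivity: 5%
pv_10 = present_value(1000.0, 30, 0.10)   # sensitivity: 10%

# Higher discount rates shrink the present value, which is why the
# study's 5% and 10% scenarios reduce the estimated total loss.
assert pv_10 < pv_5 < pv_3
```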
Channel estimation in few mode fiber mode division multiplexing transmission system
NASA Astrophysics Data System (ADS)
Hei, Yongqiang; Li, Li; Li, Wentao; Li, Xiaohui; Shi, Guangming
2018-03-01
Obtaining channel state information (CSI) is of great importance for equalization and detection in coherent receivers. However, to the best of the authors' knowledge, most of the existing literature assumes that CSI is perfectly known at the receiver, and few studies discuss the effects on mode division multiplexing (MDM) system performance of imperfect CSI caused by channel estimation. Motivated by this, this paper investigates channel estimation in few mode fiber (FMF) MDM systems, in which two classical channel estimation methods, the least squares (LS) method and the minimum mean square error (MMSE) method, are discussed under the assumption of spatially white noise lumped at the receiver side of the MDM system. Both the capacity and BER performance of the MDM system affected by mode-dependent gain or loss (MDL) are studied under different channel estimation errors. Simulation results show that channel estimation errors further deteriorate the capacity and BER performance of the MDM system, and that a 1e-3 variance of channel estimation error is acceptable in an MDM system with 0-6 dB MDL values.
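The two classical pilot-based estimators the abstract compares can be sketched on a toy linear model Y = HX + N standing in for the MDM channel; the mode count, pilot length, noise variance and unit-variance channel prior are assumptions for illustration, not the paper's simulation setup.

```python
import numpy as np

# Pilot-based LS and MMSE channel estimation for a small MDM-like
# MIMO link, Y = H X + N, with spatially white receiver noise.
rng = np.random.default_rng(0)
n_modes, n_pilots, sigma2 = 4, 16, 0.01

H = (rng.standard_normal((n_modes, n_modes)) +
     1j * rng.standard_normal((n_modes, n_modes))) / np.sqrt(2)
X = (rng.standard_normal((n_modes, n_pilots)) +
     1j * rng.standard_normal((n_modes, n_pilots))) / np.sqrt(2)
N = np.sqrt(sigma2 / 2) * (rng.standard_normal((n_modes, n_pilots)) +
                           1j * rng.standard_normal((n_modes, n_pilots)))
Y = H @ X + N

Xh = X.conj().T
H_ls = Y @ Xh @ np.linalg.inv(X @ Xh)                  # least squares
H_mmse = Y @ Xh @ np.linalg.inv(X @ Xh + sigma2 * np.eye(n_modes))  # MMSE, unit channel prior

err_ls = np.linalg.norm(H - H_ls) ** 2 / np.linalg.norm(H) ** 2
err_mmse = np.linalg.norm(H - H_mmse) ** 2 / np.linalg.norm(H) ** 2
```

With enough pilots and low noise both estimators are close to the true transfer matrix; the MMSE variant adds a noise-dependent regularization term that matters most at low SNR.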
Detilleux, J; Kastelic, J P; Barkema, H W
2015-03-01
Milk losses associated with mastitis can be attributed to either effects of pathogens per se (i.e., direct losses) or effects of the immune response triggered by intramammary infection (indirect losses). The distinction is important in terms of mastitis prevention and treatment. Regardless, the number of pathogens is often unknown (particularly in field studies), making it difficult to estimate direct losses, whereas indirect losses can be approximated by measuring the association between increased somatic cell count (SCC) and milk production. An alternative is to perform a mediation analysis in which changes in milk yield are allocated into their direct and indirect components. We applied this method on data for clinical mastitis, milk and SCC test-day recordings, results of bacteriological cultures (Escherichia coli, Staphylococcus aureus, Streptococcus uberis, coagulase-negative staphylococci, Streptococcus dysgalactiae, and streptococci other than Strep. dysgalactiae and Strep. uberis), and cow characteristics. Following a diagnosis of clinical mastitis, the cow was treated and changes (increase or decrease) in milk production before and after a diagnosis were interpreted counterfactually. On a daily basis, indirect changes, mediated by SCC increase, were significantly different from zero for all bacterial species, with a milk yield decrease (ranging among species from 4 to 33g and mediated by an increase of 1000 SCC/mL/day) before and a daily milk increase (ranging among species from 2 to 12g and mediated by a decrease of 1000 SCC/mL/day) after detection. Direct changes, not mediated by SCC, were only different from zero for coagulase-negative staphylococci before diagnosis (72g per day). We concluded that mixed structural equation models were useful to estimate direct and indirect effects of the presence of clinical mastitis on milk yield. Copyright © 2015 Elsevier B.V. All rights reserved.
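The direct/indirect split the authors estimate with structural equation models can be illustrated with the classic product-of-coefficients mediation decomposition on simulated data; the variable names and effect sizes below are invented for the sketch, not taken from the study.

```python
import numpy as np

# Mediation sketch: the effect of mastitis (T) on milk yield (Y) is
# split into a direct path and an indirect path through somatic cell
# count (M). Data are simulated; coefficients are illustrative.
rng = np.random.default_rng(42)
n = 2000
T = rng.binomial(1, 0.3, n).astype(float)        # mastitis indicator
M = 2.0 * T + rng.standard_normal(n)             # a-path: T raises SCC
Y = -1.0 * T - 0.5 * M + rng.standard_normal(n)  # c'-path (direct) and b-path

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
a = ols(np.column_stack([ones, T]), M)[1]        # T -> M
bc = ols(np.column_stack([ones, T, M]), Y)       # Y ~ T + M
direct, b = bc[1], bc[2]
indirect = a * b                                  # effect mediated by SCC
total = ols(np.column_stack([ones, T]), Y)[1]     # Y ~ T alone

# For linear models the total effect decomposes exactly (up to
# floating point) into direct + indirect components.
assert abs(total - (direct + indirect)) < 1e-6
```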
NASA Astrophysics Data System (ADS)
Morales, Roberto; Barriga-Carrasco, Manuel D.; Casas, David
2017-04-01
The instantaneous charge state of uranium ions traveling through a fully ionized hydrogen plasma has been theoretically studied and compared with one of the first energy loss experiments in plasmas, carried out at GSI-Darmstadt by Hoffmann et al. in the 1990s. For this purpose, two different methods to estimate the instantaneous charge state of the projectile have been employed: (1) rate equations using ionization and recombination cross sections and (2) equilibrium charge state formulas for plasmas. The equilibrium charge state has also been obtained using these ionization and recombination cross sections and compared with the former equilibrium formulas. The equilibrium charge state of projectiles in plasmas is not always reached; it depends mainly on the projectile velocity and the plasma density. Therefore, a non-equilibrium, or instantaneous, description of the projectile charge is necessary. The charge state of projectile ions cannot be measured except after exiting the target, and experimental data remain very scarce. Thus, the validity of our charge state model is checked by comparing the theoretical predictions with an energy loss experiment, as the energy loss has a generally quadratic dependence on the projectile charge state. The dielectric formalism has been used to calculate the plasma stopping power, including the Brandt-Kitagawa (BK) model to describe the charge distribution of the projectile. In this charge distribution, the instantaneous number of bound electrons, instead of the equilibrium number, has been taken into account. Comparison of our theoretical predictions with experiments shows the necessity of including the instantaneous charge state and the BK charge distribution for a correct energy loss estimation. The results also show that the initial charge state has a strong influence on the estimated energy loss of the uranium ions.
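The rate-equation approach (method 1) can be caricatured with a single linear balance between ionization and recombination; the effective rates below are invented constants, not cross sections, and the point is only the relaxation of a non-equilibrium charge toward its equilibrium value.

```python
# Minimal rate-equation sketch for a mean instantaneous charge state:
# ionization drives the charge toward the nuclear charge Z, while
# recombination empties it. Rates are illustrative constants.

Z = 92                     # uranium
r_ion, r_rec = 5.0, 1.0    # effective rates (per time unit), assumed
q, dt = 30.0, 1e-3         # initial charge state, Euler time step

q_eq = Z * r_ion / (r_ion + r_rec)   # equilibrium of this linear model

for _ in range(20000):
    q += dt * (r_ion * (Z - q) - r_rec * q)

# After many collisions the charge relaxes to equilibrium; before
# that, the instantaneous (non-equilibrium) value must be used, as
# the abstract argues.
assert abs(q - q_eq) < 1e-3
```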
An Estimation of Private Household Costs to Receive Free Oral Cholera Vaccine in Odisha, India.
Mogasale, Vittal; Kar, Shantanu K; Kim, Jong-Hoon; Mogasale, Vijayalaxmi V; Kerketta, Anna S; Patnaik, Bikash; Rath, Shyam Bandhu; Puri, Mahesh K; You, Young Ae; Khuntia, Hemant K; Maskery, Brian; Wierzba, Thomas F; Sah, Binod
2015-01-01
Service provider costs for vaccine delivery have been well documented; however, vaccine recipients' costs have drawn less attention. This research explores the private household out-of-pocket and opportunity costs incurred to receive free oral cholera vaccine during a mass vaccination campaign in rural Odisha, India. Following a government-driven oral cholera mass vaccination campaign targeting population over one year of age, a questionnaire-based cross-sectional survey was conducted to estimate private household costs among vaccine recipients. The questionnaire captured travel costs as well as time and wage loss for self and accompanying persons. The productivity loss was estimated using three methods: self-reported, government defined minimum daily wages and gross domestic product per capita in Odisha. On average, families were located 282.7 (SD = 254.5) meters from the nearest vaccination booths. Most family members either walked or bicycled to the vaccination sites and spent on average 26.5 minutes on travel and 15.7 minutes on waiting. Depending upon the methodology, the estimated productivity loss due to potential foregone income ranged from $0.15 to $0.29 per dose of cholera vaccine received. The private household cost of receiving oral cholera vaccine constituted 24.6% to 38.0% of overall vaccine delivery costs. The private household costs resulting from productivity loss for receiving a free oral cholera vaccine is a substantial proportion of overall vaccine delivery cost and may influence vaccine uptake. Policy makers and program managers need to recognize the importance of private costs and consider how to balance programmatic delivery costs with private household costs to receive vaccines.
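The opportunity-cost calculation behind the paper's $0.15-$0.29 range can be sketched as time-per-dose valued at alternative wage rates; the three hourly wages below are hypothetical placeholders standing in for the self-reported, minimum-wage and GDP-per-capita valuations.

```python
# Back-of-the-envelope productivity-loss calculation: time spent
# obtaining one vaccine dose, valued under three wage assumptions.
# All wage figures are hypothetical, not the study's values.

travel_min, wait_min = 26.5, 15.7           # average times from the survey
hours_per_dose = (travel_min + wait_min) / 60.0

wage_per_hour = {
    "self_reported": 0.20,    # USD/hour, assumed
    "minimum_wage": 0.25,     # USD/hour, assumed
    "gdp_per_capita": 0.40,   # USD/hour, assumed
}

cost_per_dose = {k: round(hours_per_dose * w, 3) for k, w in wage_per_hour.items()}

# The spread across valuation methods mirrors the reported range of
# roughly $0.15 to $0.29 per dose.
assert cost_per_dose["self_reported"] < cost_per_dose["gdp_per_capita"]
```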
Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods
NASA Technical Reports Server (NTRS)
Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.
2016-01-01
The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.
Similar estimates of temperature impacts on global wheat yield by three independent methods
NASA Astrophysics Data System (ADS)
Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan
2016-12-01
The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.
Ehieli, Eric I; Howard, Lauren E; Monk, Terri G; Ferrandino, Michael N; Polascik, Thomas J; Walther, Philip J; Freedland, Stephen J
2016-08-01
To study the effect of positive end-expiratory pressure used during anesthesia on blood loss during radical prostatectomy. We evaluated 247 patients who underwent either radical retropubic prostatectomy or robot-assisted laparoscopic prostatectomy at a single institution from 2008 to 2013 by one of four surgeons. Patient characteristics were compared using t-tests, rank sum or χ²-tests as appropriate. The association between positive end-expiratory pressure and estimated blood loss was tested using linear regression. Patients were classified into high (≥4 cmH2O) and low (≤1 cmH2O) positive end-expiratory pressure groups. Estimated blood loss in radical retropubic prostatectomy was higher in the high positive end-expiratory pressure group (1000 mL vs 800 mL, P = 0.042). Estimated blood loss in robot-assisted laparoscopic prostatectomy was lower in the high positive end-expiratory pressure group (150 mL vs 250 mL, P = 0.015). After adjusting for other factors known to influence blood loss, a 5-cmH2O increase in positive end-expiratory pressure was associated with a 34.9% increase in estimated blood loss (P = 0.030) for radical retropubic prostatectomy, and a 33.0% decrease for robot-assisted laparoscopic prostatectomy (P = 0.038). In radical retropubic prostatectomy, high positive end-expiratory pressure was associated with higher estimated blood loss, and the benefits of positive end-expiratory pressure should be weighed against the risk of increased estimated blood loss. In robot-assisted laparoscopic prostatectomy, high positive end-expiratory pressure was associated with lower estimated blood loss, and might have more than just pulmonary benefits. © 2016 The Japanese Urological Association.
Integration, Validation, and Application of a PV Snow Coverage Model in SAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryberg, David; Freeman, Janine
2015-09-01
Due to the increasing deployment of PV systems in snowy climates, there is significant interest in a method capable of estimating PV losses resulting from snow coverage that has been verified for a wide variety of system designs and locations. A scattering of independent snow coverage models have been developed over the last 15 years; however, there has been very little effort spent on verifying these models beyond the system design and location on which they were based. Moreover, none of the major PV modeling software products have incorporated any of these models into their workflow. In response to this deficiency, we have integrated the methodology of the snow model developed in the paper by Marion et al. [1] into the National Renewable Energy Laboratory's (NREL) System Advisor Model (SAM). In this work we describe how the snow model is implemented in SAM and demonstrate the model's effectiveness at reducing error in annual estimations for two PV arrays. Following this, we use this new functionality in conjunction with a long-term historical dataset to estimate average snow losses across the United States for a typical PV system design. The open availability of the snow loss estimation capability in SAM to the PV modeling community, coupled with the results of our nationwide study, will better equip the industry to accurately estimate PV energy production in areas affected by snowfall.
NASA Astrophysics Data System (ADS)
Yin, Shui-qing; Wang, Zhonglei; Zhu, Zhengyuan; Zou, Xu-kai; Wang, Wen-ting
2018-07-01
Extreme precipitation can cause flooding and may result in great economic losses and deaths. The return level is a commonly used measure of extreme precipitation events and is required for hydrological engineering designs, including those of sewerage systems, dams, reservoirs and bridges. In this paper, we propose a two-step method to estimate the return level and its uncertainty for a study region. In the first step, we use the generalized extreme value distribution, the L-moment method and the stationary bootstrap to estimate the return level and its uncertainty at each site with observations. In the second step, a spatial model incorporating the heterogeneous measurement errors and covariates is trained to estimate return levels at sites with no observations and to improve the estimates at sites with limited information. The proposed method is applied to the daily rainfall data from 273 weather stations in the Haihe river basin of North China. We compare the proposed method with two alternatives: the first is based on the ordinary Kriging method without measurement error, and the second smooths the estimated location and scale parameters of the generalized extreme value distribution by the universal Kriging method. Results show that the proposed method outperforms its counterparts. We also propose a novel approach to assess the two-step method by comparing it with the at-site estimation method using observation series of reduced length. Estimates of the 2-, 5-, 10-, 20-, 50- and 100-year return level maps and the corresponding uncertainties are provided for the Haihe river basin, and a comparison with those released by the Hydrology Bureau of Ministry of Water Resources of China is made.
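The first step (an at-site GEV fit by L-moments and the T-year return level read off as the (1 − 1/T) quantile) can be sketched with Hosking-style L-moment estimators; the rainfall series below is simulated Gumbel data, purely for illustration.

```python
import math
import numpy as np

# L-moment fit of a GEV to an annual-maximum rainfall series, then
# T-year return levels as (1 - 1/T) quantiles. Simulated data.
rng = np.random.default_rng(7)
x = np.sort(55.0 + 14.0 * (-np.log(-np.log(rng.uniform(size=60)))))  # Gumbel draws

# Sample probability-weighted moments and L-moments.
n = x.size
i = np.arange(1, n + 1)
b0 = x.mean()
b1 = np.sum((i - 1) / (n - 1) * x) / n
b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
t3 = l3 / l2                                   # L-skewness

# Hosking's approximation for the GEV shape, then scale and location.
c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
k = 7.8590 * c + 2.9554 * c ** 2
alpha = l2 * k / ((1 - 2.0 ** (-k)) * math.gamma(1 + k))
xi = l1 - alpha * (1 - math.gamma(1 + k)) / k

def return_level(T):
    """T-year return level: GEV quantile at non-exceedance 1 - 1/T."""
    return xi + alpha * (1 - (-math.log(1 - 1.0 / T)) ** k) / k

levels = [return_level(T) for T in (2, 5, 10, 20, 50, 100)]
assert all(a < b for a, b in zip(levels, levels[1:]))  # longer period, higher level
```

The paper adds a stationary bootstrap around this fit to quantify at-site uncertainty before the spatial modeling step.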
NASA Astrophysics Data System (ADS)
Paudel, Y.; Botzen, W. J. W.; Aerts, J. C. J. H.
2013-03-01
This study applies Bayesian Inference to estimate flood risk for 53 dyke ring areas in the Netherlands, and focuses particularly on the data scarcity and extreme behaviour of catastrophe risk. The probability density curves of flood damage are estimated through Monte Carlo simulations. Based on these results, flood insurance premiums are estimated using two different practical methods that each account in different ways for an insurer's risk aversion and the dispersion rate of loss data. This study is of practical relevance because insurers have been considering the introduction of flood insurance in the Netherlands, which is currently not generally available.
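Turning simulated damage distributions into premiums with a risk-aversion loading can be sketched with two standard premium principles; the lognormal damage model and the loading factors below are assumptions for illustration, not the study's fitted distributions or pricing rules.

```python
import numpy as np

# Sketch: Monte Carlo flood damages priced as expected loss plus a
# loading that grows with the dispersion of the losses. Illustrative.
rng = np.random.default_rng(3)
damage = rng.lognormal(mean=2.0, sigma=1.5, size=100_000)  # simulated annual loss

expected_loss = damage.mean()

# Two simple pricing rules that weight dispersion differently.
premium_std = expected_loss + 0.3 * damage.std()       # standard-deviation principle
premium_var = expected_loss + 0.001 * damage.var()     # variance principle

# Any positive loading makes the premium exceed the pure expected loss;
# more risk-averse insurers use larger loading factors.
assert premium_std > expected_loss and premium_var > expected_loss
```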
Noninvasive estimation of assist pressure for direct mechanical ventricular actuation
NASA Astrophysics Data System (ADS)
An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing
2018-02-01
Direct mechanical ventricular actuation is effective in reestablishing ventricular function without blood contact. Due to the energy loss within the driveline of the direct cardiac compression device, it is necessary to acquire an accurate value of the assist pressure acting on the heart surface. To avoid myocardial trauma induced by invasive sensors, a noninvasive estimation method is developed, and an experimental device is designed to measure the sample data for fitting the estimation models. Examination of the goodness of fit, both numerically and graphically, shows that the polynomial model behaves best among the four alternative models. Meanwhile, to verify the effect of the noninvasive estimation, a simplified lumped parameter model is utilized to calculate the pre-support and post-support left ventricular pressure. Furthermore, by adjusting the driving pressure beyond the range of the sample data, the assist pressure is estimated with a similar waveform, and the post-support left ventricular pressure approaches the value of an adult healthy heart, indicating the good generalization ability of the noninvasive estimation method.
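The model-screening step (fitting candidate estimation models to measured samples and comparing goodness of fit) can be sketched with polynomial fits of increasing degree; the quadratic ground truth, pressure range and noise level below are invented for the example.

```python
import numpy as np

# Fit candidate models to (driving pressure, assist pressure) samples
# and compare residual error. Data are simulated, for illustration.
rng = np.random.default_rng(5)
drive = np.linspace(10.0, 60.0, 40)                             # kPa, assumed range
assist = 0.8 * drive - 0.004 * drive ** 2 + rng.normal(0, 0.3, drive.size)

def sse(deg):
    """Sum of squared residuals for a polynomial model of given degree."""
    coef = np.polyfit(drive, assist, deg)
    resid = assist - np.polyval(coef, drive)
    return float(resid @ resid)

errors = {deg: sse(deg) for deg in (1, 2, 3)}

# Nested least-squares fits: residual error cannot increase with
# degree, and the big improvement comes when curvature is captured.
assert errors[1] >= errors[2] >= errors[3]
```

In practice the degree would be chosen by weighing this error reduction against model complexity, as the abstract's numerical and graphical comparison does.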
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
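The second step (scoring calibrations that re-estimate different numbers of parameters with the AIC) can be illustrated with the least-squares form of the criterion; the polynomial stand-in for a calibrated model and the noise level are invented for the example.

```python
import math
import numpy as np

# Toy AIC comparison: models differing in parameter count are scored
# by n*ln(SSE/n) + 2k, trading fit quality against complexity.
rng = np.random.default_rng(11)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(0, 0.05, x.size)

def aic_for_degree(deg):
    coef = np.polyfit(x, y, deg)
    sse = float(np.sum((y - np.polyval(coef, x)) ** 2))
    n_par = deg + 1
    return x.size * math.log(sse / x.size) + 2 * n_par  # least-squares AIC

aic_linear = aic_for_degree(1)      # underfits the curvature
aic_quadratic = aic_for_degree(2)   # matches the true structure

# The quadratic calibration earns a lower AIC despite its extra
# parameter, which is the selection logic described above.
assert aic_quadratic < aic_linear
```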
NASA Astrophysics Data System (ADS)
Shimizu, Hiromasa; Shimodaira, Takahiro
2018-04-01
We report on magnetoplasmonic Si waveguides with a ferromagnetic Fe/conductive metal Au multilayer for realizing a sizable magnetooptic effect with a low propagation loss for integrated optical isolators. By combining the ferromagnetic metal Fe with a highly conductive Au layer, the largest nonreciprocal differences in effective index were estimated for propagation lengths of 1-20 µm. Mode analysis with and without a Au layer clarified that the insertion of a Au layer on an Fe layer improves the optical confinement in the Fe layer with reduced propagation loss and is effective in enlarging the magnetooptic effect for the same propagation length. On the basis of the optimized Fe/Au multilayer structure, we designed waveguide optical isolators based on nonreciprocal coupling by the finite difference time domain (FDTD) method. We estimated an optical isolation of 10.8 dB with a forward insertion loss of 13.4 dB in a 34-µm-long nonreciprocal directional coupler.
Faith, Daniel P.
2015-01-01
The phylogenetic diversity measure ('PD') quantifies the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
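The expected-PD calculation the abstract refers to (each branch contributes its length weighted by the probability that at least one descendant taxon survives) can be sketched on a toy three-taxon tree; branch lengths and extinction probabilities below are invented.

```python
# Expected phylogenetic diversity on a toy tree root->(A, (B, C)):
# a branch's features survive if any descendant taxon survives.
# All numbers are invented for illustration.

branch_lengths = {"A": 4.0, "BC": 2.0, "B": 1.0, "C": 1.0}
descendants = {"A": ["A"], "BC": ["B", "C"], "B": ["B"], "C": ["C"]}
p_extinct = {"A": 0.5, "B": 0.2, "C": 0.9}

def expected_pd():
    total = 0.0
    for branch, length in branch_lengths.items():
        p_all_lost = 1.0
        for taxon in descendants[branch]:
            p_all_lost *= p_extinct[taxon]          # all descendants go extinct
        total += length * (1.0 - p_all_lost)        # branch retained otherwise
    return total

epd = expected_pd()
full_pd = sum(branch_lengths.values())
assert 0.0 < epd < full_pd  # extinction risk discounts the full PD of 8.0
```

Note how the long branch to A contributes only half its length here: this is the kind of worst-case long-branch loss the abstract says expected PD can under-protect.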
NASA Astrophysics Data System (ADS)
Daniell, James; Mühr, Bernhard; Kunz-Plapp, Tina; Brink, Susan A.; Kunz, Michael; Khazai, Bijan; Wenzel, Friedemann
2014-05-01
In the aftermath of a disaster, the extent of the socioeconomic loss (fatalities, homelessness and economic losses) is often not known, and it may take days before a reasonable estimate is available. Using the technique of socio-economic fragility functions (Daniell, 2014), developed via a regression of socio-economic indicators through time against historical empirical loss vs. intensity data, a first estimate can be established. With more information from the region as the disaster unfolds, a more detailed estimate can be provided via a calibration of the initial loss estimate parameters. In 2013, two main disasters hit the Philippines: the Bohol earthquake in October and Typhoon Haiyan in November. Although the two disasters were contrasting and hit different regions, the same generalised methodology was used for the initial rapid estimates and for updating the disaster loss estimates through time. The CEDIM Forensic Disaster Analysis Group of KIT and GFZ produced 6 reports for Bohol and 2 reports for Haiyan detailing various aspects of the disasters, from the losses to building damage, the socioeconomic profile and also the social networking and disaster response. This study focusses on the loss analysis undertaken, which used the following technique: (1) A regression of historical earthquake and typhoon losses for the Philippines was examined using the CATDAT Damaging Earthquakes Database and various Philippines databases, respectively. (2) The historical intensity impact of the examined events was placed in a GIS environment in order to allow correlation with the population and capital stock database from 1900-2013 to create a loss function; the modified human development index from 1900-2013 was also used to calibrate events through time. (3) The earthquake intensity and wind speed intensity of the 2013 events, together with the 2013 capital stock and population, were used to calculate the number of fatalities (except in Haiyan), homeless and economic losses. (4) After the initial estimate, damage patterns were examined and the loss estimates calibrated. The economic loss estimates of 9.5 billion USD capital stock and 4.1 billion USD GDP costs, and the estimate of 2.1 million long-term homeless, from the initial model of the Typhoon Haiyan event proved very accurate, with around the same values coming from reports about a month after the event. For the Bohol earthquake, the economic loss estimate was reasonable (around 100 million USD); however, the number of fatalities was slightly underestimated, given that the intensity field was underestimated and due to the number of landslide and other deaths (heart attacks etc.) on the first day. As damage estimates were reported post-disaster over the following days, the fatality function was calibrated and produced results closer to 200 deaths. Such parsimonious modelling in the aftermath of a disaster, together with socioeconomic profiling of the disaster area, can prove useful to relief agencies and governments as well as those on the ground, giving a first estimate of the extent of the damage, and the models will as such continue to be developed in the course of FDA. Daniell J.E. (2014) The development of socio-economic fragility functions for use in worldwide rapid earthquake loss estimation procedures, Ph.D. Thesis (in publishing), Karlsruhe Institute of Technology, Karlsruhe, Germany.
Regionalising MUSLE factors for application to a data-scarce catchment
NASA Astrophysics Data System (ADS)
Gwapedza, David; Slaughter, Andrew; Hughes, Denis; Mantel, Sukhmani
2018-04-01
The estimation of soil loss and sediment transport is important for effective management of catchments. A model for semi-arid catchments in southern Africa has been developed; however, simplification of the model parameters and further testing are required. Soil loss is calculated through the Modified Universal Soil Loss Equation (MUSLE). The aims of the current study were to: (1) regionalise the MUSLE erodibility factors; and (2) perform a sensitivity analysis and validate the soil loss outputs against independently estimated measures. The regionalisation was developed using Geographic Information Systems (GIS) coverages. The model was applied to a high-erosion semi-arid region in the Eastern Cape, South Africa. Sensitivity analysis indicated model outputs to be most sensitive to the vegetation cover factor. The simulated soil loss estimates of 40 t ha⁻¹ yr⁻¹ were within the range of estimates by previous studies. The outcome of the present research is a framework for parameter estimation for the MUSLE through regionalisation. This is part of the ongoing development of a model which can estimate soil loss and sediment delivery at broad spatial and temporal scales.
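The MUSLE relation underlying the model (Williams' event sediment yield, where a runoff-energy term replaces the USLE rainfall-erosivity term) can be sketched directly; the event runoff, peak flow and factor values below are illustrative, not the study's regionalised parameters.

```python
# Sketch of the MUSLE event sediment-yield relation the model uses:
# Y = 11.8 * (Q * qp)^0.56 * K * LS * C * P  (tonnes per event).
# Event values and factors below are invented for illustration.

def musle_sediment_yield(runoff_m3, peak_flow_m3s, K, LS, C, P):
    """Event sediment yield (t): runoff volume Q (m^3), peak flow qp (m^3/s),
    erodibility K, slope length-steepness LS, cover C, practice P."""
    return 11.8 * (runoff_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P

bare = musle_sediment_yield(5000.0, 2.5, K=0.3, LS=1.2, C=0.45, P=1.0)
grassed = musle_sediment_yield(5000.0, 2.5, K=0.3, LS=1.2, C=0.05, P=1.0)

# Yield is linear in the cover factor C, which is why the study's
# sensitivity analysis flags vegetation cover as the dominant input.
assert grassed < bare
```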
The cost of productivity losses associated with allergic rhinitis.
Crystal-Peters, J; Crown, W H; Goetzel, R Z; Schutt, D C
2000-03-01
To measure the cost of absenteeism and reduced productivity associated with allergic rhinitis. The National Health Interview Survey (NHIS) was used to obtain information on days lost from work and lost productivity due to allergic rhinitis. Wage estimates for occupations obtained from the Bureau of Labor Statistics (BLS) were used to calculate the costs. Productivity losses associated with a diagnosis of allergic rhinitis in the 1995 NHIS were estimated to be $601 million. When additional survey information on the use of sedating over-the-counter (OTC) allergy medications, as well as workers' self-assessments of their reduction in at-work productivity due to allergic rhinitis, were considered, the estimated productivity loss increased dramatically. At-work productivity losses were estimated to range from $2.4 billion to $4.6 billion. Despite the inherent difficulty of measuring productivity losses, our lowest estimate is several times higher than previous estimates of the indirect medical costs associated with allergic rhinitis treatment. The most significant productivity losses resulted not from absenteeism but from reduced at-work productivity associated with the use of sedating OTC antihistamines.
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
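A weighted ℓ1 penalty can be reduced to a plain Lasso by rescaling the design columns, which makes the multistage adaptive Lasso straightforward to sketch; this minimal implementation uses scikit-learn with illustrative settings and is not the paper's estimator:

```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, y, alpha=0.05, n_steps=2, eps=1e-6):
    """Multistage adaptive Lasso: repeatedly solve a weighted l1 problem
    with weights 1/(|beta| + eps) taken from the previous stage.
    The weighted problem min ||y - Xb||^2/(2n) + alpha * sum_j w_j|b_j|
    is solved as a plain Lasso on columns rescaled by 1/w_j."""
    weights = np.ones(X.shape[1])
    for _ in range(n_steps):
        Xw = X / weights                       # column j divided by w_j
        fit = Lasso(alpha=alpha, max_iter=10000).fit(Xw, y)
        beta = fit.coef_ / weights             # undo the rescaling
        weights = 1.0 / (np.abs(beta) + eps)   # heavier penalty on small coefs
    return beta
```

The first pass (unit weights) is the ordinary Lasso; later passes approximate concave regularization by penalizing small first-stage coefficients more heavily, in the spirit of the recursive scheme described above.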
Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake
Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J.; Petersen, M.D.
2004-01-01
The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes with similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
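The fitting step can be sketched directly with SciPy: fit a gamma distribution to (synthetic, illustrative) loss ratios for one ZIP code and read off a conditional exceedance probability:

```python
import numpy as np
from scipy import stats

# Synthetic insured-loss ratios (loss / insured value) for one ZIP code;
# the generating parameters are arbitrary, chosen only for illustration.
rng = np.random.default_rng(0)
loss_ratios = rng.gamma(shape=0.8, scale=0.05, size=500)

# Fit a gamma distribution with the location fixed at zero,
# as is natural for non-negative loss ratios.
shape, loc, scale = stats.gamma.fit(loss_ratios, floc=0.0)

# Conditional probability that the loss ratio exceeds 10%
p_exceed = stats.gamma.sf(0.10, shape, loc=loc, scale=scale)
```

Regressing the fitted shape and scale parameters on a ground-motion measure then yields the curves of conditional loss probability given shaking described above.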
Initial Evaluations of LoC Prediction Algorithms Using the NASA Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Krishnakumar, Kalmanje; Stepanyan, Vahram; Barlow, Jonathan; Hardy, Gordon; Dorais, Greg; Poolla, Chaitanya; Reardon, Scott; Soloway, Donald
2014-01-01
Flying near the edge of the safe operating envelope is an inherently unsafe proposition. Edge of the envelope here implies that small changes or disturbances in system state or system dynamics can take the system out of the safe envelope in a short time and could result in loss-of-control events. This study evaluated approaches to predicting loss-of-control safety margins as the aircraft gets closer to the edge of the safe operating envelope. The goal of the approach is to provide the pilot aural, visual, and tactile cues focused on maintaining the pilot's control action within predicted loss-of-control boundaries. Our predictive architecture combines quantitative loss-of-control boundaries, an adaptive prediction method to estimate Markov model parameters and associated stability margins in real time, and a real-time data-based predictive control margins estimation algorithm. The combined architecture is applied to a nonlinear transport class aircraft. Evaluations of various feedback cues using both test and commercial pilots in the NASA Ames Vertical Motion Simulator (VMS) were conducted in the summer of 2013. The paper presents results of this evaluation focused on the effectiveness of these approaches and the cues in preventing the pilots from entering a loss-of-control event.
NASA Astrophysics Data System (ADS)
Pfurtscheller, C.; Lochner, B.; Brucker, A.
2012-04-01
The interaction of relief-driven alpine natural processes with the anthropogenic sphere often leads to natural disasters which significantly impact remote alpine economies. When evaluating the effects of such events for future risk prevention strategies, it is essential to assess indirect losses. While the economic measurement of direct effects - the physical impact on structures and infrastructure - seems fairly manageable, less is known about the dimensions of indirect effects, especially on a local and regional scale within the Alps. The lack of standardized terminology, empirical data and methods to estimate indirect economic effects currently hampers sound decision support. In our study of the 2005 flood event in Tyrol, we surveyed companies from all sectors of the economy to identify the main drivers of indirect effects and interrupted economic flows. In collaboration with the Federal State administration, we extrapolated the total regional economic effects of this catastrophic event. Using quantitative and qualitative methods, we established and analysed a data pool of questionnaire and interview results as well as direct loss data. We mainly focus on the decrease in value creation and the negative impacts on tourism. We observed that disrupted traffic networks can have a highly negative impact, especially on the tourism sector in lateral alpine valleys. Within a month, turnover fell by approximately EUR 3.3 million in the investigated area. In the short run (until August 2006), the shortfall in touristic revenues in the Paznaun valley aggregated to approx. EUR 5.3 million. We observed that overnight stays rebound very quickly, so that long-term effects are marginal. In addition, we tried to identify possible economic losers as well as winners of severe hazard impacts. In response to such flood events, high investments are made to improve disaster and risk management.
Nearly 70% of the respondents specified the (re)construction sector and similar businesses as the main beneficiaries, and about 40% mentioned infrastructural improvements, such as streets or protective measures, as the most positive effect. We present an empirical approach to assess the economic consequences of such events and provide rules of thumb to quickly estimate indirect economic losses from natural disasters, at least for the Alpine Space, at the local and regional level. The methods and results of this study can help to improve ex-post loss estimations, and with them, ex-ante methods for assessing the cost efficiency of risk reduction measures, e.g. cost-benefit analysis.
NASA Astrophysics Data System (ADS)
Mbabazi, D.; Mohanty, B.; Gaur, N.
2017-12-01
Evapotranspiration (ET) is an important component of the water and energy balance and accounts for 60-70% of precipitation losses. However, accurate estimates of ET are difficult to quantify at varying spatial and temporal scales. Eddy covariance methods estimate ET at high temporal resolutions but without capturing the spatial variation in ET within their footprints. On the other hand, remote sensing methods using Landsat imagery provide ET with high spatial resolution but low temporal resolution (16 days). In this study, we used both eddy covariance and remote sensing methods to generate high space-time resolution ET. Daily, monthly and seasonal ET estimates were obtained using the eddy covariance (EC) method, Penman-Monteith (PM) and Mapping Evapotranspiration with Internalized Calibration (METRIC) models to determine cotton and native prairie ET dynamics in the Brazos River basin, characterized by varying hydro-climatic and geological gradients. Daily estimates of spatially distributed ET (30 m resolution) were generated using spatial autocorrelation and temporal interpolations between the EC flux variable footprints and METRIC ET for the 2016 and 2017 growing seasons. A comparison of the 2016 and 2017 preliminary daily ET estimates showed similar ET dynamics/trends among the EC, PM and METRIC methods, and 5-20% differences in seasonal ET estimates. This study will improve the spatial estimates of EC ET and the temporal resolution of satellite-derived ET, thus providing better ET data for water use management.
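The temporal interpolation between Landsat overpasses is commonly done by interpolating the ET fraction and scaling by daily reference ET; a minimal sketch with illustrative inputs (it omits the study's spatial-autocorrelation step and footprint weighting):

```python
import numpy as np

def daily_et(overpass_days, etrf_at_overpass, ref_et_daily):
    """Interpolate the ET fraction (ETrF) linearly between satellite
    overpass days and scale by daily reference ET to fill the 16-day
    Landsat gap. All inputs here are illustrative."""
    days = np.arange(len(ref_et_daily))
    etrf = np.interp(days, overpass_days, etrf_at_overpass)
    return etrf * ref_et_daily

# Two overpasses 16 days apart, constant 5 mm/day reference ET
et_series = daily_et([0, 16], [0.5, 0.7], np.full(17, 5.0))
```

The same interpolation can be anchored instead to the daily EC-derived ET fraction, which is one way to merge the two data streams.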
NASA Technical Reports Server (NTRS)
Klos, Jacob; Palumbo, Daniel L.
2003-01-01
A method intended for in-situ measurement of the insertion loss of an acoustic treatment applied to an aircraft fuselage is documented in this paper. Using this method, the performance of a treatment applied to a limited portion of an aircraft fuselage can be assessed even though the untreated fuselage also radiates into the cabin, corrupting the intensity measurement. Corrupting noise that is incoherent with the panel vibration of interest is removed from the intensity measurement by correlating the intensity with reference transducers such as accelerometers. The insertion loss of the acoustic treatments is estimated from the ratio of correlated intensity measurements with and without a treatment applied. In the case of turbulent boundary layer excitation of the fuselage, this technique can be used to assess the performance of noise control methods without requiring treatment of the entire fuselage. Several experimental studies and numerical simulations have been conducted, and results from three case studies are documented in this paper. Conclusions are drawn about the use of this method to study aircraft sidewall treatments.
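The correlated-intensity idea can be illustrated with the coherent output power relation, in which the coherence between a reference accelerometer and the measured intensity isolates the part of the intensity spectrum attributable to the panel of interest. All signals and constants below are synthetic:

```python
import numpy as np
from scipy import signal

# Synthetic demo: an accelerometer reference and an intensity signal
# corrupted by noise that is incoherent with the panel vibration.
fs = 8192
rng = np.random.default_rng(1)
accel = rng.standard_normal(fs * 8)            # panel vibration reference
noise = 0.2 * rng.standard_normal(fs * 8)      # incoherent corruption
intensity = 0.5 * accel + noise                # measured intensity (toy)

# Coherent output power: the fraction of the intensity spectrum that is
# correlated with the reference, gamma^2(f) * Pyy(f).
f, Cxy = signal.coherence(accel, intensity, fs=fs, nperseg=1024)
_, Pyy = signal.welch(intensity, fs=fs, nperseg=1024)
correlated_intensity = Cxy * Pyy
```

Insertion loss would then follow as 10·log10 of the ratio of correlated intensities measured with and without the treatment applied.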
NASA Astrophysics Data System (ADS)
Broich, Mark
Humid tropical forest cover loss is threatening the sustainability of ecosystem goods and services as vast forest areas are rapidly cleared for industrial scale agriculture and tree plantations. Despite the importance of humid tropical forest in the provision of ecosystem services and economic development opportunities, the spatial and temporal distribution of forest cover loss across large areas is not well quantified. Here I improve the quantification of humid tropical forest cover loss using two remote sensing-based methods: sampling and wall-to-wall mapping. In all of the presented studies, the integration of coarse spatial, high temporal resolution data with moderate spatial, low temporal resolution data enables advances in quantifying forest cover loss in the humid tropics. Imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) is used as the source of coarse spatial resolution, high temporal resolution data, and imagery from the Landsat Enhanced Thematic Mapper Plus (ETM+) sensor is used as the source of moderate spatial, low temporal resolution data. In a first study, I compare the precision of different sampling designs for the Brazilian Amazon, using the annual deforestation maps derived by the Brazilian Space Agency for reference. I show that sampling designs can provide reliable deforestation estimates; furthermore, sampling designs guided by MODIS data can provide more efficient estimates than the systematic design used for the United Nations Food and Agricultural Organization Forest Resource Assessment 2010. Sampling approaches, such as the one demonstrated, are viable in regions where data limitations, such as cloud contamination, limit exhaustive mapping methods. Cloud-contaminated regions experiencing high rates of change include Insular Southeast Asia, specifically Indonesia and Malaysia.
Due to persistent cloud cover, forest cover loss in Indonesia has only been mapped at a 5-10 year interval using photo interpretation of single best Landsat images. Such an approach does not provide timely results, and cloud cover reduces the utility of map outputs. In a second study, I develop a method to exhaustively mine the recently opened Landsat archive for cloud-free observations and automatically map forest cover loss for Sumatra and Kalimantan for the 2000-2005 interval. In a comparison with a reference dataset consisting of 64 Landsat sample blocks, I show that my method, using per-pixel time series, provides more accurate forest cover loss maps for multiyear intervals than approaches using image composites. In a third study, I disaggregate Landsat-mapped forest cover loss, mapped over a multiyear interval, by year using annual forest cover loss maps generated from coarse spatial, high temporal resolution MODIS imagery. I further disaggregate and analyze forest cover loss by forest land use and province. Forest cover loss trends show high spatial and temporal variability. These results underline the importance of annual mapping for the quantification of forest cover loss in Indonesia, specifically in light of the developing Reducing Emissions from Deforestation and Forest Degradation in Developing Countries (REDD) policy framework. All three studies highlight the advances in quantifying forest cover loss in the humid tropics made by integrating coarse spatial, high temporal resolution data with moderate spatial, low temporal resolution data. The three methods presented can be combined into an integrated monitoring strategy.
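A MODIS-guided sampling estimate of total loss area is, at its core, a stratified estimator: strata are defined by MODIS-indicated change intensity, and loss is measured on sampled Landsat blocks within each stratum. A toy sketch with invented strata and sample values (not the design used in these studies):

```python
import numpy as np

# Stratified estimate of total forest cover loss area. Strata (low,
# medium, high MODIS-indicated change), block counts, sampled loss
# fractions and block area are all illustrative.
strata_blocks = np.array([8000, 1500, 500])       # total blocks per stratum
sampled_loss_frac = [np.array([0.00, 0.01]),      # observed loss fractions
                     np.array([0.05, 0.08, 0.06]),
                     np.array([0.20, 0.30, 0.25, 0.27])]
block_area_ha = 1000.0

est_total_ha = sum(n_blocks * frac.mean() * block_area_ha
                   for n_blocks, frac in zip(strata_blocks, sampled_loss_frac))
```

Concentrating the sample in the high-change stratum is what makes such a design more efficient than a systematic grid for the same field effort.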
Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network
NASA Astrophysics Data System (ADS)
Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea
Currently deployed mobile networks, including High Speed Downlink Packet Access (HSDPA), offer only best-effort Quality of Service (QoS). In wireless best-effort networks, bandwidth variation is a critical problem, especially for mobile devices with small buffers. This is because the bandwidth variation leads to packet losses caused by buffer overflow, as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of available bandwidth (AB) estimation for the HSDPA network and transmission rate control to prevent buffer overflows/underflows. In the proposed method, the client estimates the AB, and the estimated AB is fed back to the server through real-time transport control protocol (RTCP) packets. Then, the server adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network, providing higher video quality and lower transmission delay.
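The server-side rate control can be sketched as a simple feedback rule on the estimated AB and the client buffer level; the gain, target buffer, and clamping below are illustrative assumptions, not the paper's algorithm:

```python
def next_send_rate(est_bandwidth_kbps, buffer_ms, target_ms=2000.0, gain=0.5):
    """Adjust the transmission rate toward the estimated available
    bandwidth, biased to refill (or drain) the client buffer toward a
    target level. A buffer below target raises the rate (underflow
    protection); a buffer above target lowers it (overflow protection).
    Constants are illustrative."""
    correction = gain * (target_ms - buffer_ms) / target_ms
    rate = est_bandwidth_kbps * (1.0 + correction)
    return max(0.0, min(rate, 2.0 * est_bandwidth_kbps))  # clamp
```

In the scheme described above, `est_bandwidth_kbps` and `buffer_ms` would arrive at the server inside RTCP feedback packets.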
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, in order to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion approach which allows separating source and path effects. The generalized inversion (Field and Jacob, 1995) represents one of the alternative methods to estimate the local seismic response, which involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. Using VEOX Network velocity records collected from August 2007 to March 2009, we estimated the source, path and site parameters corresponding to the S-wave amplitude spectra of the velocity records. We can establish that the parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of fit between observed and calculated spectra, but also when compared to previous work by several authors.
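A generic simulated-annealing loop of the kind used for such nonlinear multiparametric inversions might look like this (the cooling schedule and step size are illustrative; in practice the misfit compares observed and calculated S-wave spectra):

```python
import numpy as np

def simulated_annealing(misfit, x0, step=0.1, t0=1.0, n_iter=2000, seed=0):
    """Minimize `misfit` by simulated annealing with a 1/(1+k) cooling
    schedule. In the application above, the parameter vector would hold
    source, path and site terms."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = misfit(x)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        t = t0 / (1.0 + k)                            # cooling
        cand = x + step * rng.standard_normal(x.size)  # random perturbation
        fc = misfit(cand)
        # Always accept improvements; accept worsening moves with
        # Boltzmann probability exp(-delta/t), which shrinks as t cools.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest
```

The early high-temperature phase is what widens the range of explored solutions relative to linearized inversion.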
Pandemic risk: how large are the expected losses?
Fan, Victoria Y; Jamison, Dean T; Summers, Lawrence H
2018-02-01
There is an unmet need for greater investment in preparedness against major epidemics and pandemics. The arguments in favour of such investment have been largely based on estimates of the losses in national incomes that might occur as the result of a major epidemic or pandemic. Recently, we extended the estimate to include the valuation of the lives lost as a result of pandemic-related increases in mortality. This produced markedly higher estimates of the full value of loss that might occur as the result of a future pandemic. We parametrized an exceedance probability function for a global influenza pandemic and estimated that the expected number of influenza-pandemic-related deaths is about 720 000 per year. We calculated the expected annual losses from pandemic risk to be about 500 billion United States dollars - or 0.6% of global income - per year. This estimate falls within - but towards the lower end of - the Intergovernmental Panel on Climate Change's estimates of the value of the losses from global warming, which range from 0.2% to 2% of global income. The estimated percentage of annual national income represented by the expected value of losses varied by country income grouping: from a little over 0.3% in high-income countries to 1.6% in lower-middle-income countries. Most of the losses from influenza pandemics come from rare, severe events.
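The expected-loss arithmetic amounts to probability-weighting losses over pandemic severities; a toy discretization with invented numbers (not the paper's exceedance-probability parametrization):

```python
# Probability-weighted expected annual pandemic loss over a toy set of
# severity classes. All numbers are invented for illustration.
severity_classes = [
    # (annual probability, excess deaths, valuation per death in USD)
    (0.10, 1.0e5, 1.0e6),    # mild
    (0.02, 5.0e6, 1.0e6),    # moderate
    (0.001, 3.0e7, 1.0e6),   # severe, 1918-like
]
expected_deaths = sum(p * d for p, d, _ in severity_classes)
expected_loss = sum(p * d * v for p, d, v in severity_classes)
```

Even in this toy version the rare, severe class contributes the bulk of the expected loss, mirroring the paper's conclusion.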
Physics-based approach to color image enhancement in poor visibility conditions.
Tan, K K; Oakley, J P
2001-10-01
Degradation of images by the atmosphere is a familiar problem. For example, when terrain is imaged from a forward-looking airborne camera, atmospheric degradation causes a loss of both contrast and color information. Enhancement of such images is a difficult task because of the complexity of restoring both the luminance and the chrominance while maintaining good color fidelity. One particular problem is the fact that the level of contrast loss depends strongly on wavelength. A novel method is presented for the enhancement of color images. This method is based on the underlying physics of the degradation process, and the parameters required for enhancement are estimated from the image itself.
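Physics-based enhancement of this kind typically inverts the atmospheric scattering model I = J·t + A·(1 − t); a minimal per-channel sketch in which the airlight and transmission are taken as given, whereas the paper estimates them from the image itself:

```python
import numpy as np

def dehaze(image, airlight, transmission, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) for the scene
    radiance J, per channel. In practice the airlight A and the
    (wavelength-dependent) transmission t must be estimated from the
    image; here they are supplied as known quantities."""
    t = np.maximum(transmission, t_min)   # floor t to avoid amplifying noise
    return (image - airlight * (1.0 - t)) / t
```

Because contrast loss depends on wavelength, a per-channel transmission map (rather than a single scalar) is what preserves color fidelity.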
Methods for estimating the amount of vernal pool habitat in the northeastern United States
Van Meter, R.; Bailey, L.L.; Grant, E.H.C.
2008-01-01
The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual-frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.
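The dual-frame idea combines a (possibly incomplete) photo-interpreted list frame with an area frame of randomly sampled plots; a toy estimate with invented counts:

```python
# Dual-frame estimate: a photo-interpreted list frame plus an area frame
# of randomly sampled plots that catches pools the photos missed.
# All counts are invented for illustration.
n_list = 120                          # pools on the photo-interpreted list
plots_sampled, plots_total = 40, 800  # area-frame sample and frame size
new_pools_found = 6                   # pools in sampled plots not on the list

n_unlisted_hat = new_pools_found * plots_total / plots_sampled
n_total_hat = n_list + n_unlisted_hat
```

The expansion term is what corrects the gross underestimate that photographic interpretation alone would produce.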
Dinitz, Laura B.
2008-01-01
With costs of natural disasters skyrocketing and populations increasingly settling in areas vulnerable to natural hazards, society is challenged to better allocate its limited risk-reduction resources. In 2000, Congress passed the Disaster Mitigation Act, amending the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Robert T. Stafford Disaster Relief and Emergency Assistance Act, Pub. L. 93-288, 1988; Federal Emergency Management Agency, 2002, 2008b; Disaster Mitigation Act, 2000), mandating that State, local, and tribal communities prepare natural-hazard mitigation plans to qualify for pre-disaster mitigation grants and post-disaster aid. The Federal Emergency Management Agency (FEMA) was assigned to coordinate and implement hazard-mitigation programs, and it published information about specific mitigation-plan requirements and the mechanisms (through the Hazard Mitigation Grant Program-HMGP) for distributing funds (Federal Emergency Management Agency, 2002). FEMA requires that each community develop a mitigation strategy outlining long-term goals to reduce natural-hazard vulnerability, mitigation objectives and specific actions to reduce the impacts of natural hazards, and an implementation plan for those actions. The implementation plan should explain methods for prioritizing, implementing, and administering the actions, along with a 'cost-benefit review' justifying the prioritization. FEMA, along with the National Institute of Building Sciences (NIBS), supported the development of HAZUS ('Hazards U.S.'), a geospatial natural-hazards loss-estimation tool, to help communities quantify potential losses and to aid in the selection and prioritization of mitigation actions. HAZUS was expanded to a multiple-hazard version, HAZUS-MH, that combines population, building, and natural-hazard science and economic data and models to estimate physical damages, replacement costs, and business interruption for specific natural-hazard scenarios. 
HAZUS-MH currently performs analyses for earthquakes, floods, and hurricane wind. HAZUS-MH loss estimates, however, do not account for some uncertainties associated with the specific natural-hazard scenarios, such as the likelihood of occurrence within a particular time horizon or the effectiveness of alternative risk-reduction options. Because of the uncertainties involved, it is challenging to make informative decisions about how to cost-effectively reduce risk from natural-hazard events. Risk analysis is one approach that decision-makers can use to evaluate alternative risk-reduction choices when outcomes are unknown. The Land Use Portfolio Model (LUPM), developed by the U.S. Geological Survey (USGS), is a geospatial scenario-based tool that incorporates hazard-event uncertainties to support risk analysis. The LUPM offers an approach to estimate and compare risks and returns from investments in risk-reduction measures. This paper describes and demonstrates a hypothetical application of the LUPM for Ventura County, California, and examines the challenges involved in developing decision tools that provide quantitative methods to estimate losses and analyze risk from natural hazards.
Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring
Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat
2015-01-01
We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring the seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863
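The detection-limit argument can be caricatured in a few lines: with per-cell classification error e, the standard error of a map-to-map change estimate scales like sqrt(2e(1−e)/n), which pushes up the minimum detectable change. This is a simplified illustration, not the paper's derivation:

```python
import math

def min_detectable_change(error_rate, n_cells, z=1.645 + 0.842):
    """Approximate minimum detectable cover-change fraction for a
    map-to-map comparison with per-cell classification error
    `error_rate` over n_cells independent cells. The default z
    combines one-sided alpha = 0.05 with power = 0.80."""
    se = math.sqrt(2.0 * error_rate * (1.0 - error_rate) / n_cells)
    return z * se
```

With a 20% per-cell error and 100 independent cells, this threshold is already about 14% cover change, illustrating why error-prone maps cannot resolve small losses.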
NASA Astrophysics Data System (ADS)
Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is especially designed for electric vehicle operating conditions. As the battery's available energy changes with the applied load current profile, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE under different operating conditions and at different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for electric vehicle online applications.
Plot-scale soil loss estimation with laser scanning and photogrammetry methods
NASA Astrophysics Data System (ADS)
Szabó, Boglárka; Szabó, Judit; Jakab, Gergely; Centeri, Csaba; Szalai, Zoltán; Somogyi, Árpád; Barsi, Árpád
2017-04-01
Structure from Motion (SfM) is an automatic feature-matching algorithm that has become a widely used photogrammetric tool in the geosciences. SfM and parallel terrestrial laser scanning measurements are widespread, and both are well suited to quantitative soil erosion measurement. Our main scope was therefore the quantitative and qualitative characterization of soil erosion, 3D visualization, and morphological characterization of soil-erosion dynamics. During a rainfall simulation, the surface was measured and compared before and after the rainfall event by photogrammetry (SfM - Structure from Motion) and laser scanning (TLS - Terrestrial Laser Scanning). The results were validated against the captured runoff and the measured soil-loss value. The laboratory experiment applied rainfall at an intensity of 40 mm/h to a plot of 0.5 m2. Laser scanning was carried out with a Faro Focus 3D 120 S instrument, while the SfM imagery was acquired with two SJCAM SJ4000+ 12 MP 4K action cameras. Photo reconstruction was performed in Agisoft Photoscan, and the resulting point clouds from laser scanning and photogrammetry were evaluated partly in CloudCompare and partly in ArcGIS. The resulting models and the calculated surface changes did not prove suitable for estimating soil loss, only for detecting changes in the vertical surface. Laser scanning produced a quite precise surface model, whereas the SfM surface model was affected by errors due to other factors. The method requires more thorough technical preparation of the laboratory setup.
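Quantifying soil loss from the two surveys reduces to a DEM-of-difference calculation: rasterize the pre- and post-event point clouds, difference them, and convert the lowered volume to mass with an assumed bulk density. The grids and the density value below are illustrative:

```python
import numpy as np

# DEM-of-difference sketch on a 0.5 x 1.0 m plot at 1 cm resolution.
# The surfaces and bulk density are invented for illustration.
cell_size_m = 0.01
rng = np.random.default_rng(3)
dem_before = rng.uniform(0.00, 0.02, (50, 100))   # pre-event surface (m)
dem_after = dem_before - 0.001                    # uniform 1 mm lowering (toy)

dz = dem_after - dem_before                       # negative where soil was lost
eroded_volume_m3 = -dz[dz < 0].sum() * cell_size_m ** 2
bulk_density_kg_m3 = 1300.0                       # assumed soil bulk density
soil_loss_kg = eroded_volume_m3 * bulk_density_kg_m3
```

Comparing `soil_loss_kg` against the trapped sediment mass is the validation step described above; surface detection errors in either DEM propagate directly into the volume term.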
2010-01-01
Background Tomographic imaging has revealed that the body mass index does not give a reliable picture of overall fitness. However, high measurement costs make tomographic imaging unsuitable for large-scale studies or repeated individual use. This paper reports an experimental investigation of a new electromagnetic method and its feasibility for assessing body composition. The method is called body electrical loss analysis (BELA). Methods The BELA method uses a high-Q parallel resonant circuit to produce a time-varying magnetic field. The Q of the resonator changes when the sample is placed in its coil. This is caused by eddy currents induced in the sample. The new idea in the BELA method is the altered spatial distribution of the electrical losses generated by these currents. The distribution of losses is varied using different excitation frequencies. The feasibility of the method was tested using simplified phantoms. Two of these phantoms were rough approximations of the human torso. One had fat in the middle of its volume and saline solution in the outer shell volume. The other had the reverse conductivity distribution. The phantoms were placed in the resonator and the change in the losses was measured. Five different excitation frequencies from 100 kHz to 200 kHz were used. Results The rate of loss as a function of frequency was observed to be approximately three times larger for a phantom with fat in the middle of its volume than for one with fat in its outer shell volume. Conclusions At higher frequencies the major signal contribution can be shifted toward the outer shell volume. This enables probing the conductivity distribution of the subject by weighting outer structural components. The authors expect that the rate of change of loss over frequency can be a potential index for body composition analysis. PMID:21047441
Neckband retention for lesser snow geese in the western Arctic
Samuel, M.D.; Goldberg, Diana R.; Smith, A.E.; Baranyuk, W.; Cooch, E.G.
2001-01-01
Neckbands are commonly used in waterfowl studies (especially geese) to identify individuals for determination of movement and behavior and to estimate population parameters. Substantial neckband loss can adversely affect these research objectives and produce biased survival estimates. We used capture, recovery, and observation histories for lesser snow geese (Chen caerulescens caerulescens) banded in the western Arctic, 1993-1996, to estimate neckband retention. We found that neckband retention differed between snow goose breeding colonies at Wrangel Island, Russia, and Banks Island, Northwest Territories, Canada. Male snow geese had higher neckband loss than females, a pattern similar to that found for Canada geese (Branta canadensis) and lesser snow geese in Alaska. We found that the rate of neckband loss increased with time, suggesting that neckbands are lost as the plastic deteriorates. Survival estimates for geese based on resighting neckbands will be biased unless estimates are corrected for neckband loss. We recommend that neckband loss be estimated using survival estimators that incorporate recaptures, recoveries, and observations of marked birds. Research and management studies using neckbands should be designed to improve neckband retention and to include the assessment of neckband retention.
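The bias correction is simple in principle: neckband-based apparent survival is the product of true survival and band retention, so an independently estimated retention rate can be divided out (the joint recapture-recovery-resight estimators recommended above are more sophisticated than this sketch):

```python
def corrected_survival(apparent_survival, retention):
    """Neckband-based 'survival' confounds true survival with band
    retention (apparent = true x retention). Dividing by an
    independently estimated annual retention rate removes the bias.
    A simplified illustration, not the estimator used in the paper."""
    return apparent_survival / retention
```

For example, an apparent annual survival of 0.72 under 90% neckband retention implies a true survival near 0.80; ignoring retention would understate survival by eight percentage points.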
NASA Astrophysics Data System (ADS)
Kang, S.; Kim, K.
2013-12-01
Regionally varying seismic hazards can be estimated using an earthquake loss estimation system (e.g., HAZUS-MH). Estimates for actual earthquakes help federal and local authorities develop rapid, effective recovery measures, while estimates for scenario earthquakes help in designing a comprehensive earthquake hazard mitigation plan. Local site characteristics influence the ground motion. Although direct measurements are desirable for constructing a site-amplification map, such data are expensive and time consuming to collect. We therefore derived a site classification map of the southern Korean Peninsula using geologic and geomorphologic data, which are readily available for the entire region. Class B sites (mainly rock) are predominant in the area, although localized areas of softer soils are found along major rivers and seashores. The site classification map was compared with independent site classification studies to confirm that it effectively represents the local behavior of site amplification during an earthquake. We then estimated the losses due to a magnitude 6.7 scenario earthquake in Gyeongju, southeastern Korea, with and without the site classification map, and observed significant differences. The loss estimated without the site classification map decreased uniformly with increasing epicentral distance, while the loss estimated with the map varied from region to region, reflecting both epicentral distance and local site effects. The major cause of the large loss expected in Gyeongju is the short epicentral distance. Pohang Nam-Gu is located farther from the earthquake source region; nonetheless, its loss estimates are as large as those in Gyeongju, which is attributed to the site effect of the soft soil found widely in the area.
Zhang, Jie; Fan, Xinghua; Graham, Lisa; Chan, Tak W; Brook, Jeffrey R
2013-01-01
Sampling of particle-phase organic carbon (OC) from diesel engines is complicated by adsorption and evaporation of semivolatile organic carbon (SVOC), defined as positive and negative artifacts, respectively. To explore these artifacts, an integrated organic gas and particle sampler (IOGAPS) was applied, in which an XAD-coated multichannel annular denuder was placed upstream to remove gas-phase SVOC and two downstream sorbent-impregnated filters (SIFs) were employed to capture the evaporated SVOC. Positive artifacts can be reduced by using a denuder, but particle loss also occurs. This paper investigates the IOGAPS with respect to particle loss, denuder efficiency, and particle-phase OC artifacts by comparing OC, elemental carbon (EC), SVOC, and selected organic species, as well as particle size distributions. Compared to the filter-pack methods typically used, the IOGAPS approach yields estimates of both positive and negative artifacts, especially the negative artifact. The positive and negative artifacts were 190 µg/m3 and 67 µg/m3, representing 122% and 43% of the total particle OC measured by the IOGAPS, respectively. However, particle loss and denuder breakthrough were also found. Monitoring particle mass loss by particle number or EC concentration yielded similar results, ranging from 10% to 24% depending upon flow rate. Using measurements of selected particle-phase organic species to infer particle loss gave larger estimates, on the order of 32%. The denuder collection efficiency for SVOCs at 74 L/min was found to be less than 100%, with an average of 84%. In addition to these uncertainties, the IOGAPS method requires considerable extra effort to apply. These disadvantages must be weighed against the benefit of being able to estimate positive artifacts and correct, with some uncertainty, for negative artifacts when selecting a method for sampling diesel emissions.
Measurements of diesel emissions are necessary to understand their adverse impacts. Much of the emitted mass is organic carbon covering a range of volatilities, complicating determination of the particle fraction because of sampling artifacts. In this paper an approach to quantifying artifacts is evaluated for a diesel engine. It showed that 63% of the particle organic carbon typically measured could be the positive artifact, while the negative artifact is about one third of that value. However, this approach adds time and expense and introduces other uncertainties, implying that further effort is needed to develop methods that accurately measure diesel emissions.
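The artifact accounting described above reduces to simple arithmetic: an undenuded front filter overstates particle OC by the adsorbed gas-phase SVOC (positive artifact) and understates it by evaporated SVOC (negative artifact). The sketch below uses the artifact magnitudes reported in the abstract; the front-filter value is an assumed illustrative number, not a measurement from the study.

```python
def iogaps_particle_oc(front_filter_oc, positive_artifact, negative_artifact):
    """Correct an undenuded front-filter OC measurement for sampling
    artifacts: subtract gas adsorption (positive artifact) and add back
    evaporative loss (negative artifact), per the IOGAPS approach.
    All quantities in ug/m3."""
    return front_filter_oc - positive_artifact + negative_artifact

# Artifacts from the abstract (190 and 67 ug/m3); 280 is a hypothetical
# front-filter reading used only to show the bookkeeping.
print(iogaps_particle_oc(280.0, 190.0, 67.0))  # 157.0
```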
Hydrogeology, hydrologic budget, and water chemistry of the Medina Lake area, Texas
Lambert, Rebecca B.; Grimm, Kenneth C.; Lee, Roger W.
2000-01-01
A three-phase study of the Medina Lake area in Texas was done to assess the hydrogeology and hydrology of Medina and Diversion Lakes combined (the lake system) and to determine what fraction of seepage losses from the lake system might enter the regional ground-water-flow system of the Edwards and (or) Trinity aquifers. Phase 1 consisted of revising the geologic framework for the Medina Lake area. Results of field mapping show that the upper member of the Glen Rose Limestone underlies Medina Lake and the intervening stream channel from the outflow of Medina Lake to the midpoint of Diversion Lake, where the Diversion Lake fault intersects Diversion Lake. A thin sequence of strata, consisting primarily of the basal nodular and dolomitic members of the Kainer Formation of the Edwards Group, is present in the southern part of the study area. On the southern side of Medina Lake, the contact between the upper member of the Glen Rose Limestone and the basal nodular member is approximately 1,000 feet above mean sea level, and the contact between the basal nodular member and the dolomitic member is approximately 1,050 feet above mean sea level. The most porous and permeable part of the basal nodular member is about 1,045 feet above mean sea level. At these altitudes, Medina Lake is in hydrologic connection with rocks in the Edwards aquifer recharge zone, and Medina Lake appears to lose water to the ground-water system primarily along this bedding-plane contact. Hydrologic budgets calculated during phase 2 for Medina Lake, Diversion Lake, and Medina/Diversion Lakes combined indicate that: (1) losses from Medina and Diversion Lakes can be quantified; (2) a portion of those losses is entering the Edwards aquifer; and (3) losses to the Trinity aquifer in the Medina Lake area are minimal and within the error of the hydrologic budgets.
Hydrologic budgets based on streamflow, precipitation, evaporation, and change in lake storage were used to quantify losses (recharge) to the ground-water system from Medina Lake, Diversion Lake, and Medina/Diversion Lakes combined during October 1995–September 1996. Water losses from Medina Lake to the Edwards/Trinity aquifers ranged from -14.0 to 135 acre-feet per day; Diversion Lake ranged from -1.2 to 93.1 acre-feet per day; and Medina/Diversion Lakes combined ranged from 36.1 to 119 acre-feet per day. Monthly average recharge during December 1995–July 1996 was estimated using an alternative method developed during this study (current study method) and compared to monthly average recharge during December 1995–July 1996 estimated using the existing USGS method and the Trans-Texas method. Recharge to the Edwards aquifer estimated using the current study method was about 69 and 73 percent of the recharge estimated using the USGS and Trans-Texas methods, respectively. The USGS and Trans-Texas methods overestimated recharge from Medina Lake compared to the recharge estimated with the current study method when Medina Lake stage was between about 1,027 and 1,032 feet above mean sea level and underestimated recharge from Medina Lake when lake stage was between about 1,036 and 1,045 feet above mean sea level. The USGS and Trans-Texas methods underestimated recharge from Diversion Lake compared to the recharge estimated with the current study method when Diversion Lake stage was greater than 913 feet above mean sea level and overestimated recharge from Diversion Lake when lake stage was less than 913 feet above mean sea level. The water quality of Medina Lake and Medina River and in selected wells and springs in the Edwards and Trinity aquifers was characterized during phase 3 of the study. Environmental isotope analyses and geochemical modeling also were used to determine where water losses from the lake system might be entering the ground-water-flow system.
Isotopic ratios of deuterium, oxygen, and strontium were analyzed in selected surface-water, lake-water, and ground-water samples to trace the isotopic “signature” of the lake water as it mixes with the ground water and to determine the fraction of lake water and ground water in selected Edwards aquifer wells. Isotopic data and geochemical modeling were used to show that lake water is moving into the Edwards aquifer in two fault blocks in the eastern Medina storage unit. One fault block is bounded on the north by the Vandenburg School fault and on the south by the Haby Crossing fault, and the second fault block is bounded on the north by the Diversion Lake fault and on the south by the Haby Crossing fault. In selected Edwards aquifer wells located southwest of Medina Lake and west of Diversion Lake, the proportion of lake water ranged from about 10 to 45 percent. Geochemical modeling using NETPATH confirms the degree of mixing between lake water and aquifer water shown by the isotopes.
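The phase-2 budgets treat seepage to the ground-water system as the residual of a daily water balance over the lake. A minimal sketch of that bookkeeping, with entirely hypothetical inputs (the report's actual terms are streamflow in and out, precipitation, evaporation, and change in lake storage):

```python
def seepage_loss(inflow, outflow, precipitation, evaporation, storage_change):
    """Daily hydrologic budget for a reservoir (all terms in acre-feet/day):
    the seepage loss (recharge) to the ground-water system is the residual
    left after accounting for all measured fluxes and storage change."""
    return inflow + precipitation - outflow - evaporation - storage_change

# Hypothetical illustrative values, not measured data from the report.
loss = seepage_loss(inflow=220.0, outflow=90.0, precipitation=15.0,
                    evaporation=25.0, storage_change=30.0)
print(loss)  # 90.0
```

A negative residual (as in some reported values, e.g. -14.0 acre-feet per day) indicates net gain from ground water, or measurement error exceeding the true loss.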
Smartphone-Based Hearing Screening in Noisy Environments
Na, Youngmin; Joo, Hyo Sung; Yang, Hyejin; Kang, Soojin; Hong, Sung Hwa; Woo, Jihwan
2014-01-01
It is important and recommended to detect hearing loss as soon as possible. If it is found early, proper treatment may help improve hearing and reduce the negative consequences of hearing loss. In this study, we developed smartphone-based hearing screening methods that can ubiquitously test hearing. However, environmental noise generally results in the loss of ear sensitivity, which causes a hearing threshold shift (HTS). To overcome this limitation in the hearing screening location, we developed a correction algorithm to reduce the HTS effect. A built-in microphone and headphone were calibrated to provide the standard units of measure. The HTSs in the presence of either white or babble noise were systematically investigated to determine the mean HTS as a function of noise level. When the hearing screening application runs, the smartphone automatically measures the environmental noise and provides the HTS value to correct the hearing threshold. A comparison to pure tone audiometry shows that this hearing screening method in the presence of noise could closely estimate the hearing threshold. We expect that the proposed ubiquitous hearing test method could be used as a simple hearing screening tool and could alert the user if they suffer from hearing loss. PMID:24926692
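The correction algorithm can be sketched as a lookup-plus-interpolation step: the app measures the ambient noise level, interpolates the mean hearing threshold shift (HTS) from a calibration table, and subtracts it from the raw threshold. The table values below are placeholders for illustration, not the HTS curve measured in the study.

```python
import bisect

# Hypothetical calibration table: ambient noise level (dB SPL) -> mean
# HTS (dB).  A real app would use the study's measured values.
NOISE_DB = [30, 40, 50, 60, 70]
HTS_DB = [0.0, 2.0, 5.0, 9.0, 14.0]

def hts(noise_db):
    """Linearly interpolate the mean HTS for a measured noise level,
    clamping outside the calibrated range."""
    if noise_db <= NOISE_DB[0]:
        return HTS_DB[0]
    if noise_db >= NOISE_DB[-1]:
        return HTS_DB[-1]
    i = bisect.bisect_right(NOISE_DB, noise_db)
    x0, x1 = NOISE_DB[i - 1], NOISE_DB[i]
    y0, y1 = HTS_DB[i - 1], HTS_DB[i]
    return y0 + (y1 - y0) * (noise_db - x0) / (x1 - x0)

def corrected_threshold(measured_db, noise_db):
    """Subtract the noise-induced threshold shift from the raw result."""
    return measured_db - hts(noise_db)

print(corrected_threshold(35.0, 55.0))  # 28.0 (HTS = 7.0 dB at 55 dB noise)
```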
NASA Astrophysics Data System (ADS)
Gündoğan, R.; Alma, V.; Dindaroğlu, T.; Günal, H.; Yakupoğlu, T.; Susam, T.; Saltalı, K.
2017-11-01
Calculation of gullies from remote sensing images obtained from satellite or conventional aerial platforms is often not possible because gullies in agricultural fields, known as temporary (ephemeral) gullies, are filled within a very short time by tillage operations. Therefore, fast and accurate estimation of sediment loss due to temporary gully erosion is of great importance. In this study, we aimed to monitor and calculate soil losses caused by gully erosion in agricultural areas using low-altitude unmanned aerial vehicles. According to the calculation with Pix4D, the gully volume was estimated to be 10.41 m3 and the total loss of soil was estimated to be 14.47 Mg. The RMSE of the estimations was 0.89. The results indicated that unmanned aerial vehicles can be used to predict temporary gully erosion and the associated soil losses.
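Converting the photogrammetric volume to a soil mass is a one-line calculation; the bulk density used below (about 1.39 Mg/m3) is inferred from the abstract's two figures rather than stated in it.

```python
def soil_loss_mass(gully_volume_m3, bulk_density_mg_per_m3):
    """Convert a photogrammetric gully volume (m3) to soil mass (Mg)
    using the soil bulk density (Mg/m3)."""
    return gully_volume_m3 * bulk_density_mg_per_m3

# Bulk density back-calculated from the abstract (14.47 Mg / 10.41 m3),
# not an independently measured value.
print(round(soil_loss_mass(10.41, 1.39), 2))  # 14.47
```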
USDA-ARS?s Scientific Manuscript database
Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...
Studies on quantifying evaporation in permeable pavement systems are limited to few laboratory studies that used a scale to weigh evaporative losses and a field application with a tunnel-evaporation gauge. A primary objective of this research was to quantify evaporation for a la...
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that, with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in both the standard L2 norm and the stronger RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
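A minimal NumPy sketch of the class of algorithm studied here: functional gradient descent in an RKHS under a Welsch-type robust loss, where the choice G(u) = exp(-u) makes each residual's contribution to the gradient decay once it exceeds the scale σ, so gross outliers are effectively ignored. The kernel, step size, and iteration count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_kernel(X, Y, width=0.2):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def robust_kernel_gd(X, y, sigma=1.0, step=1.0, n_iters=500):
    """Functional gradient descent in the RKHS under the Welsch-type loss
    l(r) = (sigma**2 / 2) * (1 - exp(-r**2 / sigma**2)).  Each update
    weights residual r_i by exp(-r_i**2 / sigma**2), so gross outliers
    contribute almost nothing; early stopping (finite n_iters) acts as
    the regularizer."""
    K = gaussian_kernel(X, X)
    alpha = np.zeros(len(y))
    for _ in range(n_iters):
        r = y - K @ alpha
        alpha += step / len(y) * r * np.exp(-r ** 2 / sigma ** 2)
    return K @ alpha  # fitted values at the training inputs

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (40, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(40)
y[:3] += 5.0                      # three gross outliers
fit = robust_kernel_gd(X, y)
clean = np.arange(3, 40)
print(np.abs(fit[clean] - y[clean]).mean() < 0.25,  # small error on clean points
      abs(fit[0] - y[0]) > 3.0)                     # outliers left unfitted
```

With a squared loss instead, the same iteration would chase the outliers; the exponential reweighting is what provides the robustness the paper analyzes.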
NASA Astrophysics Data System (ADS)
Lazzari, Maurizio; Danese, Maria; Gioia, Dario; Piccarreta, Marco
2013-04-01
Sedimentary budget estimation is an important topic for both the scientific community and society, because it is crucial for understanding both the dynamics of orogenic belts and many practical problems, such as soil conservation and sediment accumulation in reservoirs. Estimates of sediment yield or denudation rates in southern-central Italy are generally obtained by simple empirical relationships based on statistical regression between geomorphic parameters of the drainage network and the measured suspended sediment yield at the outlet of several drainage basins, or through the use of models based on sediment delivery ratio or on soil loss equations. In this work, we perform a study of catchment dynamics and an estimation of sedimentary yield for several mountain catchments of the central-western sector of the Basilicata region, southern Italy. Sediment yield estimation has been obtained through both an indirect estimation of suspended sediment yield based on the Tu index (mean annual suspension sediment yield, Ciccacci et al., 1980) and the application of the RUSLE (Renard et al., 1997) and the USPED (Mitasova et al., 1996) empirical methods. The preliminary results indicate a notable difference between the RUSLE and USPED methods and the estimation based on the Tu index; a critical analysis of the results has been carried out, considering also the present-day spatial distribution of erosion, transport, and depositional processes in relation to the maps obtained from the application of those different empirical methods. The studied catchments drain an artificial reservoir (the Camastra dam), where a detailed evaluation of the amount of historical sediment storage has been collected. Sediment yield estimates obtained by means of the empirical methods have been compared and checked against historical data of sediment accumulation measured in the artificial reservoir of the Camastra dam.
The validation of such estimations of sediment yield at the scale of large catchments using sediment storage in reservoirs provides a good opportunity: i) to test the reliability of the empirical methods used to estimate the sediment yield; ii) to investigate the catchment dynamics and its spatial and temporal evolution in terms of erosion, transport and deposition. References Ciccacci S., Fredi F., Lupia Palmieri E., Pugliese F., 1980. Contributo dell'analisi geomorfica quantitativa alla valutazione dell'entità dell'erosione nei bacini fluviali. Bollettino della Società Geologica Italiana 99: 455-516. Mitasova H., Hofierka J., Zlocha M., Iverson L.R., 1996. Modeling topographic potential for erosion and deposition using GIS. International Journal of Geographical Information Systems 10: 629-641. Renard K.G., Foster G.R., Weesies G.A., McCool D.K., Yoder D.C., 1997. Predicting soil erosion by water: a guide to conservation planning with the Revised Universal Soil Loss Equation (RUSLE), USDA-ARS, Agricultural Handbook No. 703.
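Of the empirical methods compared, RUSLE is the simplest to state: mean annual soil loss is the product of six factors. A sketch with hypothetical factor values (not the Basilicata catchment parameters):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Revised Universal Soil Loss Equation (Renard et al., 1997):
    A = R * K * LS * C * P, where R is rainfall erosivity, K soil
    erodibility, LS the slope length/steepness factor, C cover
    management, and P support practice.  A comes out in mass per unit
    area per year for consistent metric factor units."""
    return R * K * LS * C * P

# Hypothetical factors for a cultivated Mediterranean hillslope.
print(round(rusle_soil_loss(R=1200, K=0.032, LS=2.5, C=0.20, P=1.0), 1))  # 19.2
```

USPED extends this by differentiating sediment transport capacity over the terrain to separate net erosion from net deposition, which is why the two maps can disagree locally.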
New approach to estimating variability in visual field data using an image processing technique.
Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P
1995-01-01
AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlations were found between LSV and conventional estimates--namely, HFA pattern standard deviation and short term fluctuation. CONCLUSION--LSV does not depend on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
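The abstract does not specify the exact image-processing filter, so the sketch below uses one plausible stand-in: the mean standard deviation over sliding 3x3 neighbourhoods of the sensitivity grid, which is zero for a perfectly uniform field and grows with pointwise fluctuation. It is an assumed illustration of the idea, not the paper's actual LSV definition.

```python
import numpy as np

def local_spatial_variability(field, size=3):
    """Mean standard deviation over all size x size neighbourhoods of a
    visual-field sensitivity grid (a stand-in for the LSV measure)."""
    rows, cols = field.shape
    h = size // 2
    sds = [field[i - h:i + h + 1, j - h:j + h + 1].std()
           for i in range(h, rows - h)
           for j in range(h, cols - h)]
    return float(np.mean(sds))

flat = np.full((6, 6), 30.0)                   # perfectly uniform field (dB)
rng = np.random.default_rng(1)
noisy = flat + rng.normal(0.0, 3.0, (6, 6))    # fluctuating field
print(local_spatial_variability(flat))         # 0.0
print(local_spatial_variability(noisy) > 1.0)  # True
```

Note the measure needs only a single test, consistent with the paper's point that no repeated threshold determinations or normative database are required.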
High-resolution bottom-loss estimation using the ambient-noise vertical coherence function.
Muzi, Lanfranco; Siderius, Martin; Quijano, Jorge E; Dosso, Stan E
2015-01-01
The seabed reflection loss (shortly "bottom loss") is an important quantity for predicting transmission loss in the ocean. A recent passive technique for estimating the bottom loss as a function of frequency and grazing angle exploits marine ambient noise (originating at the surface from breaking waves, wind, and rain) as an acoustic source. Conventional beamforming of the noise field at a vertical line array of hydrophones is a fundamental step in this technique, and the beamformer resolution in grazing angle affects the quality of the estimated bottom loss. Implementation of this technique with short arrays can be hindered by their inherently poor angular resolution. This paper presents a derivation of the bottom reflection coefficient from the ambient-noise spatial coherence function, and a technique based on this derivation for obtaining higher angular resolution bottom-loss estimates. The technique, which exploits the (approximate) spatial stationarity of the ambient-noise spatial coherence function, is demonstrated on both simulated and experimental data.
Rijnsburger, Adriana J.; Severens, Johan L.
2016-01-01
Background Neglected Tropical Diseases (NTDs) not only cause health and life expectancy loss, but can also lead to economic consequences including reduced ability to work. This article describes a systematic literature review of the effect on the economic productivity of individuals affected by one of the five worldwide most prevalent NTDs: lymphatic filariasis, onchocerciasis, schistosomiasis, soil-transmitted helminths (ascariasis, trichuriasis, and hookworm infection) and trachoma. These diseases are amenable to preventive chemotherapy (PCT). Methodology/Principal Findings Eleven bibliographic databases were searched using different names of all NTDs and various keywords relating to productivity. Additional references were identified through reference lists from relevant papers. Of the 5316 unique publications found in the database searches, thirteen papers were identified for lymphatic filariasis, ten for onchocerciasis, eleven for schistosomiasis, six for soil-transmitted helminths and three for trachoma. Besides the scarcity of publications reporting the degree of productivity loss, this review revealed large variation in the estimated productivity loss related to these NTDs. Conclusions It is clear that productivity is affected by NTDs, although the actual impact depends on the type and severity of the NTD as well as on the context where the disease occurs. The largest impact on productivity loss of individuals affected by one of these diseases seems to be due to blindness from onchocerciasis and severe schistosomiasis manifestations; productivity loss due to trachoma-related blindness has never been studied directly. However, productivity loss at an individual level might differ from productivity loss at a population level because of differences in the prevalence of NTDs. Variation in estimated productivity loss between and within diseases is caused by differences in research methods and setting.
Publications should provide enough information to enable readers to assess the quality and relevance of the study for their purposes. PMID:26890487
A micromethod for the enzymatic estimation of the degree of glycogen ramification.
Serafini, M T; Alemany, M
1987-10-01
A comparison of methods for evaluating glycogen content in rat liver tissue has been carried out by determining the recoveries in the differential ethanol precipitation of glycogen from alkaline tissue digests, as well as the actual quantitative equivalence between glycogen content and the glucose measured. Hydrolytic/enzymatic methods gave lower results than non-specific chemical methods such as anthrone. These lower values, combined with losses in the purification process, resulted in glycogen estimates much lower than the actual tissue content. A method has been devised for measuring glycogen ramification in small liver tissue samples, using neutral periodate oxidation of the molecule followed by determination of the formic acid evolved from the branch ends with formic acid dehydrogenase. The method gave results very similar to the classical methods in which the acid formed is measured titrimetrically. Rat liver tissue contained a mean of 323 ± 69 mmol of glucose equivalents of glycogen per gram of tissue; this glycogen had a mean chain length of 11.4 ± 0.8 units.
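The branching estimate rests on simple stoichiometry: neutral periodate oxidation releases roughly one formic acid per non-reducing chain end, so the mean chain length is total glucose units divided by measured formate. The formate value below is back-calculated to reproduce the reported chain length, not a measured datum from the paper.

```python
def mean_chain_length(total_glucose, formate):
    """Average glycogen chain length from neutral periodate oxidation:
    each non-reducing branch end yields ~1 formic acid, so
    chain length ~ total glucose units / formate released
    (same molar units for both arguments)."""
    return total_glucose / formate

# 323 glucose equivalents with ~28.3 formate units would reproduce the
# abstract's ~11.4-unit mean chain length (formate value is inferred).
print(round(mean_chain_length(323, 28.3), 1))  # 11.4
```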
Nutrients: a major consideration in intensive forest management
James W. Hornbeck
1977-01-01
Estimates of nutrient losses are compared for stem-only harvest versus a whole-tree harvest of a clearcut northern hardwood stand. Combined nutrient losses due to increased leaching and removal of vegetation after stem-only harvesting are estimated to be 334 kg/ha for calcium and 265 kg/ha for nitrogen. For a whole-tree harvest, combined losses are estimated at 537 kg/...
Measuring Forest Area Loss Over Time Using FIA Plots and Satellite Imagery
Michael L. Hoppus; Andrew J. Lister
2005-01-01
How accurately can FIA plots, scattered at 1 per 6,000 acres, identify often rare forest land loss, estimated at less than 1 percent per year in the Northeast? Here we explore this question mathematically, empirically, and by comparing FIA plot estimates of forest change with satellite image based maps of forest loss. The mathematical probability of exactly estimating...
Esposito, Felice; Cappabianca, Paolo; Angileri, Filippo F; Cavallo, Luigi M; Priola, Stefano M; Crimi, Salvatore; Solari, Domenico; Germanò, Antonino F; Tomasello, Francesco
2016-07-26
Gelatin-thrombin hemostatic matrix (FloSeal®) use is associated with shorter surgical times and less blood loss, parameters that are highly valued in neurosurgical procedures. We aimed to assess the effectiveness of gelatin-thrombin in neurosurgical procedures and estimate its economic value. In a 6-month retrospective evaluation at 2 hospitals, intraoperative and postoperative information were collected from patients undergoing neurosurgical procedures where bleeding was controlled with gelatin-thrombin matrix or according to local bleeding control guidelines (control group). Study endpoints were: length of surgery, estimated blood loss, hospitalization duration, blood units utilized, intensive care unit days, postoperative complications, and time-to-recovery. Statistical methods compared endpoints between the gelatin-thrombin and control groups and resource utilization costs were estimated. Seventy-eight patients (38 gelatin-thrombin; 40 control) were included. Gelatin-thrombin was associated with a shorter surgery duration than the control group (166±40 versus 185±55; p=0.0839); a lower estimated blood loss (185±80 versus 250±95 ml; p=0.0017); a shorter hospital stay (10±3 versus 13±3 days; p<0.001); fewer intensive care unit days (10 days/3 patients and 20 days/4 patients); and shorter time-to-recovery (3±2.2 versus 4±2.8 weeks; p=0.0861). Fewer gelatin-thrombin patients experienced postoperative complications (3 minor) than the control group (5 minor; 3 major). No gelatin-thrombin patient required blood transfusion; 5 units were administered in the control group. The cost of gelatin-thrombin (€268.40/unit) was offset by the shorter surgery duration (difference of 19 minutes at €858 per hour) and the economic value of the improvements in the other endpoints (i.e., shorter hospital stay, less blood loss/lack of need for transfusion, fewer intensive care unit days, and fewer complications).
Gelatin-thrombin hemostatic matrix use in patients undergoing neurosurgical procedures was associated with better intra- and post-operative parameters than conventional hemostasis methods, with these parameters having substantial economic benefits.
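The cost-offset argument in the abstract is plain arithmetic: the product's unit price is compared against the value of theatre time saved. Using only the figures quoted (EUR 268.40 per unit, 19 minutes saved, EUR 858 per theatre hour), the time savings alone roughly cover the unit cost:

```python
def net_cost_offset(product_cost_eur, minutes_saved, theatre_cost_per_hour_eur):
    """Net saving (EUR) from shorter surgery time alone, before counting
    reduced stay, transfusions, ICU days, and complications."""
    return minutes_saved / 60.0 * theatre_cost_per_hour_eur - product_cost_eur

# Figures taken directly from the abstract.
print(round(net_cost_offset(268.40, 19, 858.0), 2))  # 3.3
```

A positive result means the shorter surgery already pays for the product; the other endpoint improvements add further savings on top.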
The case for earlier cochlear implantation in postlingually deaf adults.
Dowell, Richard C
2016-01-01
This paper aimed to estimate the difference in speech perception outcomes that may occur due to timing of cochlear implantation in relation to the progression of hearing loss. Data from a large population-based sample of adults with acquired hearing loss using cochlear implants (CIs) was used to estimate the effects of duration of hearing loss, age, and pre-implant auditory skills on outcomes for a hypothetical standard patient. A total of 310 adults with acquired severe/profound bilateral hearing loss who received a CI in Melbourne, Australia between 1994 and 2006 provided the speech perception data and demographic information to derive regression equations for estimating CI outcomes. For a hypothetical CI candidate with progressive sensorineural hearing loss, the estimates of speech perception scores following cochlear implantation are significantly better if implantation occurs relatively soon after onset of severe hearing loss and before the loss of all functional auditory skills. Improved CI outcomes and quality of life benefit may be achieved for adults with progressive severe hearing loss if they are implanted earlier in the progression of the pathology.
Jean-Christophe Domec; Ge Sun; Asko Noormets; Michael J. Gavazzi; Emrys A. Treasure; Erika Cohen; Jennifer J. Swenson; Steve G. McNulty; John S. King
2012-01-01
Increasing variability of rainfall patterns requires detailed understanding of the pathways of water loss from ecosystems to optimize carbon uptake and management choices. In the current study we characterized the usability of three alternative methods of different rigor for quantifying stand-level evapotranspiration (ET), partitioned ET into tree transpiration (T),...
NASA Astrophysics Data System (ADS)
Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.
2017-12-01
Rainfall thresholds play an important role in flash flood warning. A simple method of calculating a rainfall threshold using the Rational Equation was proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three components of rainfall loss were considered: depression storage, vegetation interception, and soil infiltration. The critical rainfall is the sum of the net rainfall and the rainfall losses. The rainfall threshold was then estimated from the critical rainfall after accounting for watershed soil moisture. To demonstrate this method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method were close to the thresholds obtained from CFFSE and were consistent with the observed rainfall during flash flood events; the calculated results are therefore reasonable and the method is effective. This study provides a quick and convenient way to calculate rainfall thresholds for flash flood warning for grassroots staff and offers technical support for estimating rainfall thresholds.
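A minimal sketch of the calculation chain: invert the metric Rational Equation Q = C·i·A/3.6 for the net rainfall intensity that just produces the critical flow, convert that intensity to a depth over the design duration, and add back the three loss terms. All numbers are hypothetical, not values for the Zuojiao watershed.

```python
def rainfall_threshold_mm(q_critical_m3s, runoff_coeff, area_km2,
                          duration_h, losses_mm):
    """Rational-Equation sketch: Q = C * i * A / 3.6 with Q in m3/s,
    i in mm/h, and A in km2.  Invert for the net intensity that yields
    the critical flow, turn it into a depth over the event duration,
    then add the losses (depression storage + interception + infiltration)."""
    i_net = 3.6 * q_critical_m3s / (runoff_coeff * area_km2)  # mm/h
    return i_net * duration_h + losses_mm

# Hypothetical watershed: 45 m3/s critical flow, C = 0.6, 18 km2,
# 1-hour event, 12 mm of combined losses.
print(round(rainfall_threshold_mm(45.0, 0.6, 18.0, 1.0, 12.0), 1))  # 27.0
```

Accounting for antecedent soil moisture, as the study does, would amount to adjusting the loss term (wetter soils infiltrate less, lowering the threshold).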
The cost of vision loss in Canada. 1. Methodology.
Gordon, Keith D; Cruess, Alan F; Bellan, Lorne; Mitchell, Scott; Pezzullo, M Lynne
2011-08-01
This paper outlines the methodology used to estimate the cost of vision loss in Canada. The results of this study will be presented in a second paper. The cost of vision loss (VL) in Canada was estimated using a prevalence-based approach. This was done by estimating the number of people with VL in a base period (2007) and the costs associated with treating them. The cost estimates included direct health system expenditures on eye conditions that cause VL, as well as other indirect financial costs such as productivity losses. Estimates were also made of the value of the loss of healthy life, measured in Disability Adjusted Life Years (DALYs). To estimate the number of cases of VL in the population, epidemiological data on prevalence rates were applied to population data. The number of cases of VL was stratified by gender, age, ethnicity, severity and cause. The following sources were used for estimating prevalence: population-based eye studies; Canadian surveys; Canadian journal articles and research studies; and international population-based eye studies. Direct health costs were obtained primarily from Health Canada and Canadian Institute for Health Information (CIHI) sources, while costs associated with productivity losses were based on employment information compiled by Statistics Canada and on economic theory of productivity loss. Costs related to vision rehabilitation (VR) were obtained from Canadian VR organizations. This study shows that it is possible to estimate the costs of VL for a country in the absence of ongoing local epidemiological studies. Copyright © 2011 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
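The prevalence-based approach reduces to multiplying stratum-specific prevalence by population to get case counts, then by per-case direct and indirect costs, and summing over strata. A sketch with entirely hypothetical strata and unit costs (not the study's data):

```python
def prevalence_based_cost(strata):
    """Prevalence-based costing: cases = prevalence x population per
    stratum; cost = cases x (direct + indirect cost per case)."""
    total = 0.0
    for s in strata:
        cases = s["prevalence"] * s["population"]
        total += cases * (s["direct_cost"] + s["indirect_cost"])
    return total

# Hypothetical illustrative strata (e.g. two age bands).
strata = [
    {"prevalence": 0.010, "population": 5_000_000,
     "direct_cost": 2000.0, "indirect_cost": 3500.0},
    {"prevalence": 0.045, "population": 1_200_000,
     "direct_cost": 2600.0, "indirect_cost": 1500.0},
]
print(round(prevalence_based_cost(strata)))  # 496400000
```

The study's actual stratification (gender, age, ethnicity, severity, cause) just means more strata in the same sum, plus a separate DALY valuation for lost healthy life.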
NASA Astrophysics Data System (ADS)
Badjana, Hèou Maléki; Olofsson, Pontus; Woodcock, Curtis E.; Helmschrot, Joerg; Wala, Kpérkouma; Akpagana, Koffi
2017-12-01
In West Africa, accurate classification of land cover and land change remains a big challenge due to the patchy and heterogeneous nature of the landscape. Limited data availability, human resources and technical capacities further exacerbate the challenge. The result is a region that is among the more understudied areas in the world, which in turn has resulted in a lack of appropriate information required for sustainable natural resources management. The objective of this paper is to explore open source software and easy-to-implement approaches to mapping and estimation of land change that are transferable to local institutions to increase capacity in the region, and to provide updated information on the regional land surface dynamics. To achieve these objectives, stable land cover and land change between 2001 and 2013 in the Kara River Basin in Togo and Benin were mapped by direct multitemporal classification of Landsat data by parameterization and evaluation of two machine-learning algorithms. Areas of land cover and change were estimated by application of an unbiased estimator to sample data following international guidelines. A prerequisite for all tools and methods was implementation in an open source environment, and adherence to international guidelines for reporting land surface activities. Findings include a recommendation of the Random Forests algorithm as implemented in Orfeo Toolbox, and a stratified estimation protocol - all executed in the QGIS graphical user interface. It was found that despite an estimated reforestation of 10,727 ± 3480 ha (95% confidence interval), the combined rate of forest and savannah loss amounted to 56,271 ± 9405 ha (representing a 16% loss of the forestlands present in 2001), resulting in a rather sharp net loss of forestlands in the study area.
These dynamics had not been estimated prior to this study, and the results will provide useful information for decision making pertaining to natural resources management, land management planning, and the implementation of the United Nations Collaborative Programme on Reducing Emissions from Deforestation and Forest Degradation in Developing Countries (UN-REDD).
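The stratified estimation protocol mentioned above can be illustrated with a standard stratified estimator of an area proportion and its standard error. The strata weights, sample sizes and change counts below are hypothetical, not the Kara River Basin sample.

```python
import math

def stratified_area_estimate(strata, total_area_ha):
    """Stratified estimator of an area proportion and its standard error.

    strata: list of (W_h, n_h, x_h) with W_h the stratum weight (share of
    total area), n_h sampled units in the stratum, and x_h the number of
    sampled units showing change. Returns (area, standard error) in ha.
    """
    p_hat = sum(W * x / n for W, n, x in strata)
    var = sum(W**2 * (x / n) * (1 - x / n) / (n - 1) for W, n, x in strata)
    return p_hat * total_area_ha, math.sqrt(var) * total_area_ha

# Hypothetical strata: high / medium / low likelihood of change
strata = [(0.10, 40, 20), (0.30, 40, 8), (0.60, 38, 2)]
area, se = stratified_area_estimate(strata, total_area_ha=500_000)
ci95 = 1.96 * se   # half-width of the 95% confidence interval
```

The ± figures reported in the abstract correspond to `ci95`-style half-widths around the stratified area estimates.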
Increased wintertime CO2 loss as a result of sustained tundra warming
NASA Astrophysics Data System (ADS)
Webb, Elizabeth E.; Schuur, Edward A. G.; Natali, Susan M.; Oken, Kiva L.; Bracho, Rosvel; Krapek, John P.; Risk, David; Nickerson, Nick R.
2016-02-01
Permafrost soils currently store approximately 1672 Pg of carbon (C), but as high latitudes warm, this temperature-protected C reservoir will become vulnerable to higher rates of decomposition. In recent decades, air temperatures in the high latitudes have warmed more than any other region globally, particularly during the winter. Over the coming century, the arctic winter is also expected to experience the most warming of any region or season, yet it is notably understudied. Here we present nonsummer season (NSS) CO2 flux data from the Carbon in Permafrost Experimental Heating Research project, an ecosystem warming experiment of moist acidic tussock tundra in interior Alaska. Our goals were to quantify the relationship between environmental variables and winter CO2 production, account for subnivean photosynthesis and late fall plant C uptake in our estimate of NSS CO2 exchange, constrain NSS CO2 loss estimates using multiple methods of measuring winter CO2 flux, and quantify the effect of winter soil warming on total NSS CO2 balance. We measured CO2 flux using four methods: two chamber techniques (the snow pit method and one where a chamber is left under the snow for the entire season), eddy covariance, and soda lime adsorption, and found that NSS CO2 loss varied up to fourfold, depending on the method used. CO2 production was dependent on soil temperature and day of season but atmospheric pressure and air temperature were also important in explaining CO2 diffusion out of the soil. Warming stimulated both ecosystem respiration and productivity during the NSS and increased overall CO2 loss during this period by 14% (this effect varied by year, ranging from 7 to 24%). When combined with the summertime CO2 fluxes from the same site, our results suggest that this subarctic tundra ecosystem is shifting away from its historical function as a C sink to a C source.
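One common empirical form for the temperature dependence of soil CO2 production is a Q10 model; the sketch below uses it for illustration only (the abstract does not specify the study's own flux model), with hypothetical parameter values.

```python
def respiration(t_soil_c, r_ref=1.0, t_ref_c=10.0, q10=2.0):
    """CO2 production as a function of soil temperature via a Q10 model.

    r_ref is the flux at the reference temperature t_ref_c; q10 is the
    factor by which the flux increases per 10 C of warming. All parameter
    values here are hypothetical.
    """
    return r_ref * q10 ** ((t_soil_c - t_ref_c) / 10.0)

# Warming a soil from 0 C to 2 C with Q10 = 2 raises the modeled flux by ~15%
base = respiration(0.0)
warmed = respiration(2.0)
increase_pct = 100.0 * (warmed / base - 1.0)
```

In practice the study also found atmospheric pressure and air temperature important for CO2 diffusion out of the soil, which a pure temperature-response model like this does not capture.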
The Concept Design of a Split Flow Liquid Hydrogen Turbopump
2008-03-01
Nomenclature (from the report excerpt): Oxygen Boost Pump; OTP, Oxygen Turbopump; O/B, Overboard; b, passage depth (inches); Lp, passage loss; Kp, passage loss constant (recommended value = 0.3) ... user or a diffusion model is selected: W₂p = W₁t ∗ DR (Eq. 1.49). There are eight methods within Pumpal® to estimate the value of the ... allows the user to select a tip model secondary mass flow fraction. The mass fraction was set to 0.05; this value is within the range (0.02-0.10).
Kinnell, P I A
2017-10-15
Traditionally, the Universal Soil Loss Equation (USLE) and the revised version of it (RUSLE) have been applied to predicting the long-term average soil loss produced by rainfall erosion in many parts of the world. Over time, it has been recognized that there is a need to predict soil losses over shorter time scales, and this has led to the development of WEPP and RUSLE2, which can be used to predict soil losses generated by individual rainfall events. Data currently exist that enable RUSLE2, WEPP and the USLE-M to estimate historic soil losses from bare fallow runoff and soil loss plots recorded in the USLE database. Comparisons of the abilities of the USLE-M and RUSLE2 to estimate event soil losses from bare fallow were undertaken under circumstances where both models produced the same total soil loss as observed for sets of erosion events on 4 different plots at 4 different locations. Likewise, comparisons of the abilities of the USLE-M and WEPP to model event soil loss from bare fallow were undertaken for sets of erosion events on 4 plots at 4 different locations. Despite being calibrated specifically for each plot, WEPP produced the worst estimates of event soil loss for all 4 plots. Generally, the USLE-M using measured runoff to calculate the product of the runoff ratio, storm kinetic energy and the maximum 30-minute rainfall intensity produced the best estimates. As expected, the ability of the USLE-M to estimate event soil loss was reduced when runoff predicted by either RUSLE2 or WEPP was used. Despite this, the USLE-M using runoff predicted by WEPP estimated event soil loss better than WEPP. RUSLE2 also outperformed WEPP. Copyright © 2017 Elsevier B.V. All rights reserved.
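The USLE-M event erosivity described above (runoff ratio times storm kinetic energy times maximum 30-minute rainfall intensity) can be sketched as follows; the factor values are hypothetical and the units only indicative.

```python
def usle_m_event_erosivity(runoff_ratio, storm_energy, i30):
    """USLE-M event erosivity: runoff ratio x storm kinetic energy x I30."""
    return runoff_ratio * storm_energy * i30

def event_soil_loss(erosivity, k, ls=1.0, c=1.0, p=1.0):
    """Event soil loss as erosivity times the (calibrated) soil-erodibility,
    slope, cover and practice factors. Factor values here are hypothetical;
    bare fallow plots correspond to C = P = 1."""
    return erosivity * k * ls * c * p

# A storm delivering 20 MJ/ha of kinetic energy with I30 = 30 mm/h,
# of which 40% of rainfall ran off, on a plot with K = 0.02:
r_event = usle_m_event_erosivity(0.4, 20.0, 30.0)   # 240.0
loss = event_soil_loss(r_event, k=0.02)             # about 4.8
```

Replacing the measured runoff ratio with one predicted by RUSLE2 or WEPP, as the paper tests, degrades the estimate because errors in predicted runoff propagate directly into the erosivity term.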
Workplace smoking related absenteeism and productivity costs in Taiwan
Tsai, S; Wen, C; Hu, S; Cheng, T; Huang, S
2005-01-01
Objective: To estimate productivity losses and financial costs to employers caused by cigarette smoking in the Taiwan workplace. Methods: The human capital approach was used to calculate lost productivity. Assuming the value of lost productivity was equal to the wage/salary rate and basing the calculations on smoking rate in the workforce, average days of absenteeism, average wage/salary rate, and increased risk and absenteeism among smokers obtained from earlier research, costs due to smoker absenteeism were estimated. Financial losses caused by passive smoking, smoking breaks, and occupational injuries were calculated. Results: Using a conservative estimate of excess absenteeism from work, male smokers took off an average of 4.36 sick days and male non-smokers took off an average of 3.30 sick days. Female smokers took off an average of 4.96 sick days and non-smoking females took off an average of 3.75 sick days. Excess absenteeism caused by employee smoking was estimated to cost US$178 million per annum for males and US$6 million for females at a total cost of US$184 million per annum. The time men and women spent taking smoking breaks amounted to nine days per year and six days per year, respectively, resulting in reduced output productivity losses of US$733 million. Increased sick leave costs due to passive smoking were approximately US$81 million. Potential costs incurred from occupational injuries among smoking employees were estimated to be US$34 million. Conclusions: Financial costs caused by increased absenteeism and reduced productivity from employees who smoke are significant in Taiwan. Based on conservative estimates, total costs attributed to smoking in the workforce were approximately US$1032 million. PMID:15923446
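The human capital approach reduces, at its core, to excess sick days multiplied by the wage rate. A minimal sketch using the paper's reported sick-day averages but a hypothetical workforce size and daily wage:

```python
def excess_absenteeism_cost(n_smokers, days_smoker, days_nonsmoker,
                            daily_wage):
    """Human-capital estimate: excess sick days x wage rate.
    Workforce size and wage here are hypothetical."""
    excess_days = max(days_smoker - days_nonsmoker, 0.0)
    return n_smokers * excess_days * daily_wage

# Hypothetical workforce of 1,000,000 male smokers, using the paper's
# reported averages (4.36 vs 3.30 sick days) and an assumed US$50/day wage:
cost = excess_absenteeism_cost(1_000_000, 4.36, 3.30, daily_wage=50.0)
```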
The relationship between loudness intensity functions and the click-ABR wave V latency.
Serpanos, Y C; O'Malley, H; Gravel, J S
1997-10-01
To assess the relationship of loudness growth and the click-evoked auditory brain stem response (ABR) wave V latency-intensity function (LIF) in listeners with normal hearing or cochlear hearing loss. The effect of hearing loss configuration on the intensity functions was also examined. Behavioral and electrophysiological intensity functions were obtained using click stimuli of comparable intensities in listeners with normal hearing (Group I; n = 10), and cochlear hearing loss of flat (Group II; n = 10) or sloping (Group III; n = 10) configurations. Individual intensity functions were obtained from measures of loudness growth using the psychophysical methods of absolute magnitude estimation and production of loudness (geometrically averaged to provide the measured loudness function), and from the wave V latency measures of the ABR. Slope analyses for the behavioral and electrophysiological intensity functions were separately performed by group. The loudness growth functions for the groups with cochlear hearing loss approximated the normal function at high intensities, with overall slope values consistent with those reported from previous psychophysical research. The ABR wave V LIF for the group with a flat configuration of cochlear hearing loss approximated the normal function at high intensities, and was displaced parallel to the normal function for the group with sloping configuration. The relationship between the behavioral and electrophysiological intensity functions was examined at individual intensities across the range of the functions for each subject. A significant relationship was obtained between loudness and the ABR wave V LIFs for the groups with normal hearing and flat configuration of cochlear hearing loss; the association was not significant (p = 0.10) for the group with a sloping configuration of cochlear hearing loss. 
The results of this study established a relationship between loudness and the ABR wave V latency for listeners with normal hearing, and flat cochlear hearing loss. In listeners with a sloping configuration of cochlear hearing loss, the relationship was not significant. This suggests that the click-evoked ABR may be used to estimate loudness growth at least for individuals with normal hearing and those with a flat configuration of cochlear hearing loss. Predictive equations were derived to estimate loudness growth for these groups. The use of frequency-specific stimuli may provide more precise information on the nature of the relationship between loudness growth and the ABR wave V latency, particularly for listeners with sloping configurations of cochlear hearing loss.
[Original method of extracapsule fragmentation of the lens nucleus during phacoemulsification].
Avetisov, S E; Iusef, Iusef Naim; Mamikonian, V R; Vvedenskiĭ, A S; Iusef, Said Naim; Mutonen, N V
2002-01-01
Clinical evaluation of different modifications of phacoemulsification revealed that formation of a second tunnel in the nucleus, for its division into quadrants in "four-quadrant phaco", increases the required duration of ultrasound (US) and irrigation, which causes greater endothelial losses than nuclear breakdown by means of a chopper through a single tunnel. When the authors used their own method of "extracapsular half-nuclei" fragmentation, endothelial losses were somewhat greater than with the similar "stop & chop" method, which is associated with closer positioning of the US tip to the posterior corneal surface. At the same time, nuclear breakdown with a chopper in the capsular sac by the "stop & chop" method causes stretching of Zinn's ligaments, with a risk of their rupture, particularly in cases of latent weakness or defects of the zonular apparatus, and increases the risk of damage to the posterior capsule by the chopper.
Temporary Losses of Highway Capacity and Impacts on Performance: Phase 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, S.M.
2004-11-10
Traffic congestion and its impacts significantly affect the nation's economic performance and the public's quality of life. In most urban areas, travel demand routinely exceeds highway capacity during peak periods. In addition, events such as crashes, vehicle breakdowns, work zones, adverse weather, railroad crossings, large trucks loading/unloading in urban areas, and other factors such as toll collection facilities and sub-optimal signal timing cause temporary capacity losses, often worsening the conditions on already congested highway networks. The impacts of these temporary capacity losses include delay, reduced mobility, and reduced reliability of the highway system. They can also cause drivers to re-route or reschedule trips. Such information is vital to formulating sound public policies for the highway infrastructure and its operation. In response to this need, Oak Ridge National Laboratory, sponsored by the Federal Highway Administration (FHWA), made an initial attempt to provide nationwide estimates of the capacity losses and delay caused by temporary capacity-reducing events (Chin et al. 2002). This study, called the Temporary Loss of Capacity (TLC) study, estimated capacity loss and delay on freeways and principal arterials resulting from fatal and non-fatal crashes, vehicle breakdowns, and adverse weather, including snow, ice, and fog. In addition, it estimated capacity loss and delay caused by sub-optimal signal timing at intersections on principal arterials. It also included rough estimates of capacity loss and delay on Interstates due to highway construction and maintenance work zones. Capacity loss and delay were estimated for calendar year 1999, except for work zone estimates, which covered May 2001 to May 2002 due to data availability limitations.
Prior to the first phase of this study, which was completed in May of 2002, no nationwide estimates of temporary losses of highway capacity by type of capacity-reducing event had been made. This report describes the second phase of the TLC study (TLC2). TLC2 improves upon the first study by expanding the scope to include delays from rain, toll collection facilities, railroad crossings, and commercial truck pickup and delivery (PUD) activities in urban areas. It includes estimates of work zone capacity loss and delay for all freeways and principal arterials, rather than for Interstates only. It also includes improved estimates of delays caused by fog, snow, and ice, which are based on data not available during the initial phase of the study. Finally, computational errors involving crash and breakdown delay in the original TLC report are corrected.
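Delay from a temporary capacity loss is commonly computed with a deterministic queueing diagram; whether the TLC study used exactly this construction is not stated, so the sketch below is illustrative, with hypothetical traffic volumes.

```python
def incident_delay_veh_hours(demand_vph, capacity_vph, reduced_capacity_vph,
                             duration_h):
    """Total delay from a temporary capacity reduction via deterministic
    queueing: a queue grows while capacity is reduced and discharges at
    full capacity afterwards. All inputs are hypothetical.
    """
    if demand_vph <= reduced_capacity_vph:
        return 0.0                                   # no queue forms
    growth = demand_vph - reduced_capacity_vph       # veh/h while reduced
    queue = growth * duration_h                      # max vehicles queued
    discharge_rate = capacity_vph - demand_vph       # veh/h after the event
    clear_time = queue / discharge_rate              # hours to dissipate
    # area of the triangular queueing diagram = vehicle-hours of delay
    return 0.5 * queue * (duration_h + clear_time)

# A 1-hour incident halving a 4,000 veh/h freeway with 3,000 veh/h demand:
delay = incident_delay_veh_hours(3000, 4000, 2000, 1.0)
```

Summing such per-event delays over the frequency of crashes, breakdowns, weather events and work zones yields nationwide totals of the kind TLC reports.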
Evaluating simplified methods for liquefaction assessment for loss estimation
NASA Astrophysics Data System (ADS)
Kongar, Indranil; Rossetto, Tiziana; Giovinazzi, Sonia
2017-06-01
Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. 
HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at optimal thresholds. This paper also considers two models (HAZUS and EPOLLS) for estimation of the scale of liquefaction in terms of permanent ground deformation but finds that both models perform poorly, with correlations between observations and forecasts lower than 0.4 in all cases. Therefore these models potentially provide negligible additional value to loss estimation analysis outside of the regions for which they have been developed.
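The binary classification performance measures used to compare the liquefaction models can be sketched as follows; the LPI values, observations and threshold below are hypothetical, not the Canterbury data.

```python
def classification_rates(lpi_values, occurred, threshold):
    """True-positive and true-negative rates for a forecast that predicts
    liquefaction wherever LPI exceeds the threshold. Data are hypothetical.
    """
    tp = sum(1 for lpi, occ in zip(lpi_values, occurred)
             if occ and lpi > threshold)
    tn = sum(1 for lpi, occ in zip(lpi_values, occurred)
             if not occ and lpi <= threshold)
    pos = sum(occurred)
    neg = len(occurred) - pos
    return tp / pos, tn / neg

lpi = [2.0, 5.0, 8.0, 6.0, 3.0, 12.0]          # hypothetical site LPIs
occurred = [False, False, True, True, False, True]
tpr, tnr = classification_rates(lpi, occurred, threshold=7)
```

Sweeping the threshold and picking the value that balances the two rates is how per-model optimal thresholds (7 for the measured-Vs LPI model, 4 for the VS30-based one) are selected.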
Parameters estimation of sandwich beam model with rigid polyurethane foam core
NASA Astrophysics Data System (ADS)
Barbieri, Nilson; Barbieri, Renato; Winikes, Luiz Carlos
2010-02-01
In this work, the physical parameters of sandwich beams made with the association of hot-rolled steel, polyurethane rigid foam and high-impact polystyrene, used for the assembly of household refrigerators and food freezers, are estimated using measured and numeric frequency response functions (FRFs). The mathematical models are obtained using the finite element method (FEM) and the Timoshenko beam theory. The physical parameters are estimated using the amplitude correlation coefficient and a genetic algorithm (GA). The experimental data are obtained using an impact hammer and four accelerometers distributed along the sample (a cantilevered beam). The parameters estimated are Young's modulus and the loss factor of the polyurethane rigid foam and the high-impact polystyrene.
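A toy version of the estimation loop is sketched below, using the amplitude correlation coefficient as the objective but a simple random search in place of the paper's GA, and a one-degree-of-freedom hysteretically damped FRF in place of the FEM beam model. All parameter values are hypothetical.

```python
import random

def frf(k, eta, m=1.0, freqs=range(1, 200)):
    """FRF magnitude of a 1-DOF system with hysteretic damping:
    |1 / (k(1 + i*eta) - m*w^2)|. A stand-in for the FEM model."""
    return [abs(1.0 / (k * (1 + 1j * eta) - m * w**2)) for w in freqs]

def correlation(a, b):
    """Amplitude correlation coefficient between two FRF magnitude sets."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

measured = frf(k=10_000.0, eta=0.05)   # synthetic "measured" FRF

# Toy random search over (stiffness, loss factor) -- a simplified
# stand-in for the paper's genetic algorithm.
random.seed(0)
best, best_r = None, -1.0
for _ in range(2000):
    k = random.uniform(5_000, 20_000)
    eta = random.uniform(0.01, 0.2)
    r = correlation(measured, frf(k, eta))
    if r > best_r:
        best, best_r = (k, eta), r
```

A GA replaces the blind sampling with selection, crossover and mutation over a population of (k, eta) candidates, but the objective is the same.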
Johannesen, Peter T.; Pérez-González, Patricia; Lopez-Poveda, Enrique A.
2014-01-01
Identifying the multiple contributors to the audiometric loss of a hearing impaired (HI) listener at a particular frequency is becoming gradually more useful as new treatments are developed. Here, we infer the contribution of inner (IHC) and outer hair cell (OHC) dysfunction to the total audiometric loss in a sample of 68 hearing aid candidates with mild-to-severe sensorineural hearing loss, and for test frequencies of 0.5, 1, 2, 4, and 6 kHz. It was assumed that the audiometric loss (HL_TOTAL) at each test frequency was due to a combination of cochlear gain loss, or OHC dysfunction (HL_OHC), and inefficient IHC processes (HL_IHC), all of them in decibels. HL_OHC and HL_IHC were estimated from cochlear I/O curves inferred psychoacoustically using the temporal masking curve (TMC) method. 325 I/O curves were measured and 59% of them showed a compression threshold (CT). The analysis of these I/O curves suggests that (1) HL_OHC and HL_IHC account on average for 60–70% and 30–40% of HL_TOTAL, respectively; (2) these percentages are roughly constant across frequencies; (3) across-listener variability is large; (4) residual cochlear gain is negatively correlated with hearing loss while residual compression is not correlated with hearing loss. Altogether, the present results support the conclusions from earlier studies and extend them to a wider range of test frequencies and hearing-loss ranges. Twenty-four percent of I/O curves were linear and suggested total cochlear gain loss. The number of linear I/O curves increased gradually with increasing frequency. The remaining 17% of I/O curves suggested audiometric losses due mostly to IHC dysfunction and were more frequent at low (≤1 kHz) than at high frequencies. It is argued that in a majority of listeners, hearing loss is due to a common mechanism that concomitantly alters IHC and OHC function and that IHC processes may be more labile in the apex than in the base. PMID:25100940
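The additive decomposition assumed in the study (total loss = OHC component + IHC component, all in dB) can be sketched directly; the audiogram and gain-loss values below are hypothetical.

```python
def decompose_hearing_loss(hl_total_db, cochlear_gain_loss_db):
    """Split the audiometric loss into OHC and IHC components, assuming
    HL_TOTAL = HL_OHC + HL_IHC with HL_OHC equal to the cochlear gain loss
    inferred from the I/O curve (all values in dB, hypothetical)."""
    hl_ohc = min(cochlear_gain_loss_db, hl_total_db)
    hl_ihc = hl_total_db - hl_ohc
    return hl_ohc, hl_ihc

# A 50 dB HL audiometric loss with 35 dB of inferred cochlear gain loss:
ohc, ihc = decompose_hearing_loss(50.0, 35.0)
share_ohc = 100.0 * ohc / 50.0   # 70% of the total, near the study's mean
```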
NASA Astrophysics Data System (ADS)
Bauer, I. E.; Bhatti, J. S.; Hurdle, P. A.
2004-05-01
Field-based decomposition studies that examine several site types tend to use one of two approaches: Either the decay of one (or more) standard litters is examined in all sites, or litters native to each site type are incubated in the environment they came from. The first of these approaches examines effects of environment on decay, whereas the latter determines rates of mass loss characteristic of each site type. Both methods are usually restricted to a limited number of litters, and neither allows for a direct estimate of ecosystem-level parameters (e.g. heterotrophic respiration). In order to examine changes in total organic matter turnover along forest - peatland gradients in central Saskatchewan, we measured mass loss of native peat samples from six different depths (surface to 50 cm) over one year. Samples were obtained by sectioning short peat cores, and cores and samples were returned to their original position after determining the initial weight of each sample. A standard litter (birch popsicle sticks) was included at each depth, and water tables and soil temperature were monitored over the growing season. After one year, average mass loss in surface peat samples was similar to published values from litter bag studies, ranging from 12 to 21 percent in the environments examined. Native peat mass loss showed few systematic differences between sites or along the forest - peatland gradient, with over 60 percent of the total variability explained by depth alone. Mass loss of standard litter samples was highly variable, with high values in areas at the transition between upland and peatland that may have experienced recent disturbance. In combination, these results suggest strong litter-based control over natural rates of organic matter turnover. 
Estimates of heterotrophic respiration calculated from the mass loss data are higher than values obtained by eddy covariance or static chamber techniques, probably reflecting loss of material during the handling of samples or increased mass loss from manipulated profiles. Nevertheless, the core-based method is a useful tool in examining carbon dynamics of organic soils, since it provides a good relative index of organic matter turnover, and allows for separate examination of environmental and litter-based effects.
Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge
Jaiswal, K.S.; Wald, D.J.
2012-01-01
We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.
Costs of IQ Loss from Leaded Aviation Gasoline Emissions
Wolfe, Philip J.; Giang, Amanda; Ashok, Akshay; Selin, Noelle E.; Barrett, Steven R. H.
2017-01-01
In the United States, general aviation piston-driven aircraft are now the largest source of lead emitted to the atmosphere. Elevated lead concentrations impair children’s IQ and can lead to lower earnings potentials. This study is the first assessment of the nationwide annual costs of IQ losses from aircraft lead emissions. We develop a general aviation emissions inventory for the continental United States and model its impact on atmospheric concentrations using the Community Multi-Scale Air Quality Model (CMAQ). We use these concentrations to quantify the impacts of annual aviation lead emissions on the U.S. population using two methods: through static estimates of cohort-wide IQ deficits and through dynamic economy-wide effects using a computational general equilibrium model. We also examine the sensitivity of these damage estimates to different background lead concentrations, showing the impact of lead controls and regulations on marginal costs. We find that aircraft-attributable lead contributes to $1.06 billion 2006 USD ($0.01 – $11.6) in annual damages from lifetime earnings reductions, and that dynamic economy-wide methods result in damage estimates that are 54% larger. Because the marginal costs of lead are dependent on background concentration, the costs of piston-driven aircraft lead emissions are expected to increase over time as regulations on other emissions sources are tightened. PMID:27494542
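The static cohort method reduces to children exposed times IQ points lost times lifetime earnings loss per IQ point. The sketch below uses hypothetical values, not the paper's CMAQ-derived exposure estimates.

```python
def static_iq_damages(children_exposed, iq_points_lost_per_child,
                      earnings_loss_per_iq_point):
    """Static cohort estimate: lifetime earnings lost from an annual
    exposure cohort's IQ deficit. All inputs are hypothetical."""
    return (children_exposed * iq_points_lost_per_child
            * earnings_loss_per_iq_point)

# Hypothetical cohort: 4 million children, 0.01 IQ points lost each,
# $15,000 of discounted lifetime earnings per IQ point:
damages = static_iq_damages(4_000_000, 0.01, 15_000.0)
```

The paper's dynamic economy-wide estimate, by contrast, propagates the earnings losses through a general equilibrium model, which raised damages by 54%.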
Prediction of overall and blade-element performance for axial-flow pump configurations
NASA Technical Reports Server (NTRS)
Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.
1973-01-01
A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial-flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.
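The iterative radial-equilibrium/continuity solution can be illustrated with a minimal sketch: assuming constant spanwise total enthalpy and entropy (simple radial equilibrium), the mean-line axial velocity is adjusted until the integrated mass flow satisfies continuity. This is a simplified stand-in for the program described, not its actual algorithm, which also couples blade-row turning and loss correlations.

```python
import math

def axial_velocity_profile(radii, v_theta, rho, mdot, iters=80):
    """Sketch of the simple-radial-equilibrium step of a through-flow solve:
    with constant spanwise total enthalpy and entropy,
    d(Vx^2)/dr = -(1/r^2) d[(r*Vtheta)^2]/dr.
    The mean-line axial velocity is bisected until the integrated mass
    flow matches mdot. Illustrative only."""
    n = len(radii)
    rv2 = [(r * vt) ** 2 for r, vt in zip(radii, v_theta)]
    m = n // 2                              # mean-radius index

    def vx_squared(vxm_sq):
        out = [0.0] * n
        out[m] = vxm_sq
        for i in range(m, n - 1):           # integrate outward to the tip
            rmid = 0.5 * (radii[i] + radii[i + 1])
            out[i + 1] = out[i] - (rv2[i + 1] - rv2[i]) / rmid ** 2
        for i in range(m, 0, -1):           # integrate inward to the hub
            rmid = 0.5 * (radii[i] + radii[i - 1])
            out[i - 1] = out[i] + (rv2[i] - rv2[i - 1]) / rmid ** 2
        return out

    def mass_flow(vxm_sq):
        vx = [math.sqrt(max(v, 0.0)) for v in vx_squared(vxm_sq)]
        total = 0.0
        for i in range(n - 1):              # trapezoidal continuity integral
            f0 = rho * vx[i] * 2.0 * math.pi * radii[i]
            f1 = rho * vx[i + 1] * 2.0 * math.pi * radii[i + 1]
            total += 0.5 * (f0 + f1) * (radii[i + 1] - radii[i])
        return total

    lo, hi = 0.0, 1.0e6                     # bracket on Vx(r_mean)^2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mass_flow(mid) < mdot:
            lo = mid
        else:
            hi = mid
    return [math.sqrt(max(v, 0.0)) for v in vx_squared(0.5 * (lo + hi))]
```

For a free-vortex swirl distribution (r·Vθ constant) the right-hand side vanishes and the solver recovers a uniform axial velocity, a useful sanity check.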
Does choice of estimators influence conclusions from true metabolizable energy feeding trials?
Sherfy, M.H.; Kirkpatrick, R.L.; Webb, K.E.
2005-01-01
True metabolizable energy (TME) is a measure of avian dietary quality that accounts for metabolic fecal and endogenous urinary energy losses (EL) of non-dietary origin. The TME is calculated using a bird fed the test diet and an estimate of EL derived from another bird (Paired Bird Correction), the same bird (Self Correction), or several other birds (Group Mean Correction). We evaluated precision of these estimators by using each to calculate TME of three seed diets in blue-winged teal (Anas discors). The TME varied by <2% among estimators for all three diets, and Self Correction produced the least variable TMEs for each. The TME did not differ between estimators in nine paired comparisons within diets, but variation between estimators within individual birds was sufficient to be of practical consequence. Although differences in precision among methods were slight, Self Correction required the lowest sample size to achieve a given precision. Feeding trial methods that minimize variation among individuals have several desirable properties, including higher precision of TME estimates and more rigorous experimental control. Consequently, we believe that Self Correction is most likely to accurately represent nutritional value of food items and should be considered the standard method for TME feeding trials. © Dt. Ornithologen-Gesellschaft e.V. 2005.
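The three estimators differ only in which endogenous-loss (EL) value is credited back. A minimal sketch with hypothetical trial numbers (a standard TME formulation, simplified from the feeding-trial details):

```python
def tme(gross_energy_intake_kj, excreta_energy_kj, endogenous_loss_kj,
        feed_intake_g):
    """TME (kJ/g): gross energy intake minus excreta energy, after crediting
    back endogenous/metabolic losses (EL) of non-dietary origin."""
    return (gross_energy_intake_kj
            - (excreta_energy_kj - endogenous_loss_kj)) / feed_intake_g

# Hypothetical trial: a bird eats 30 g of seed at 18 kJ/g, excretes 140 kJ.
# The estimators differ only in the EL value applied:
el_self = 90.0      # same bird, measured while fasted (Self Correction)
el_paired = 80.0    # a paired fasted bird (Paired Bird Correction)
el_group = 85.0     # mean of several fasted birds (Group Mean Correction)

tme_self = tme(30 * 18.0, 140.0, el_self, 30.0)
tme_paired = tme(30 * 18.0, 140.0, el_paired, 30.0)
```

Because EL is measured on the same individual, Self Correction removes between-bird variation in EL from the estimate, which is why it yielded the least variable TMEs.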
Combining MODIS and Landsat imagery to estimate and map boreal forest cover loss
Potapov, P.; Hansen, Matthew C.; Stehman, S.V.; Loveland, Thomas R.; Pittman, K.
2008-01-01
Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to its unique role in global timber stock, carbon sequestration and deposition, and high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% to 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss.
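The stratified sampling design (high/medium/low likelihood strata from the MODIS hotspot maps, with Landsat interpretation of sampled blocks) leads to a standard stratified estimator with a sample-based standard error. The block counts, sample sizes and per-stratum statistics below are hypothetical.

```python
import math

def loss_estimate(strata):
    """Stratified estimate of gross forest cover loss as a fraction of
    total area, with its standard error.

    strata: list of (N_h, n_h, mean_h, var_h) giving the number of blocks
    in the stratum, the number sampled, and the sample mean and variance
    of the block-level loss fraction. Numbers here are hypothetical."""
    N = sum(s[0] for s in strata)
    est = sum(s[0] / N * s[2] for s in strata)
    var = sum((s[0] / N) ** 2 * s[3] / s[1] for s in strata)
    return est, math.sqrt(var)

# Hypothetical high / medium / low likelihood strata:
strata = [(500, 40, 0.08, 0.004),
          (2000, 40, 0.02, 0.001),
          (7500, 38, 0.002, 0.0001)]
loss_frac, se = loss_estimate(strata)
```

Concentrating the sample in the high-likelihood stratum, as the MODIS hotspot stratification allows, is what keeps the standard error small for a rare class like forest loss.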
Burden of disease and costs of aneurysmal subarachnoid haemorrhage (aSAH) in the United Kingdom
2010-01-01
Background To estimate life years and quality-adjusted life years (QALYs) lost and the economic burden of aneurysmal subarachnoid haemorrhage (aSAH) in the United Kingdom, including healthcare and non-healthcare costs from a societal perspective. Methods All UK residents in 2005 with aSAH (International Classification of Diseases 10th revision (ICD-10) code I60). Sex- and age-specific abridged life tables were generated for a general population and aSAH cohorts. QALYs in each cohort were calculated adjusting the life tables with health-related quality of life (HRQL) data. Healthcare costs included hospital expenditure, cerebrovascular rehabilitation, primary care and community health and social services. Non-healthcare costs included informal care and productivity losses arising from morbidity and premature death. Results A total of 80,356 life years and 74,807 quality-adjusted life years were estimated to be lost due to aSAH in the UK in 2005. aSAH costs the National Health Service (NHS) £168.2 million annually, with hospital inpatient admissions accounting for 59%, community health and social services for 18%, aSAH-related operations for 15% and cerebrovascular rehabilitation for 6% of the total NHS estimated costs. The average per-patient cost for the NHS was estimated to be £23,294. The total economic burden (including informal care and using the human capital method to estimate production losses) of aSAH in the United Kingdom was estimated to be £510 million annually. Conclusion The economic and disease burden of aSAH in the United Kingdom is reported in this study. Decision-makers can use these results to complement other information when informing prevention policies in this field and to relate health care expenditures to disease categories. PMID:20423472
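Life-years and QALY losses from life tables reduce, in essence, to deaths times remaining life expectancy, with QALYs additionally weighting each remaining year by an HRQL utility. A minimal sketch with hypothetical inputs, not the UK aSAH figures:

```python
def years_and_qalys_lost(deaths_by_age, life_expectancy_by_age, hrql_weight):
    """Life years lost = sum over ages of deaths x remaining life
    expectancy; QALYs lost additionally weight each remaining year by an
    HRQL utility. All inputs are hypothetical."""
    ly = sum(d * life_expectancy_by_age[age]
             for age, d in deaths_by_age.items())
    return ly, ly * hrql_weight

deaths = {45: 100, 60: 200}          # hypothetical premature deaths by age
remaining_le = {45: 35.0, 60: 22.0}  # hypothetical remaining life expectancy
life_years, qalys = years_and_qalys_lost(deaths, remaining_le,
                                         hrql_weight=0.9)
```

The study's abridged life tables refine this by using sex- and age-specific survival and by applying HRQL weights to survivors with morbidity as well as to years lost to premature death.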
Determination of tropical deforestation rates and related carbon losses from 1990 to 2010
Achard, Frédéric; Beuchle, René; Mayaux, Philippe; Stibig, Hans-Jürgen; Bodart, Catherine; Brink, Andreas; Carboni, Silvia; Desclée, Baudouin; Donnay, François; Eva, Hugh D; Lupi, Andrea; Raši, Rastislav; Seliger, Roman; Simonetti, Dario
2014-01-01
We estimate changes in forest cover (deforestation and forest regrowth) in the tropics for the last two decades (1990–2000 and 2000–2010) based on a sample of 4000 units of 10 × 10 km size. Forest cover is interpreted from satellite imagery at 30 × 30 m resolution. Forest cover changes are then combined with pan-tropical biomass maps to estimate carbon losses. We show that there was a gross loss of tropical forests of 8.0 million ha yr−1 in the 1990s and 7.6 million ha yr−1 in the 2000s (a 0.49% annual rate), with no statistically significant difference. Humid forests account for 64% of the total forest cover in 2010 and 54% of the net forest loss during the second study decade. Losses of forest cover and Other Wooded Land (OWL) cover result in estimates of carbon losses that are similar for the 1990s and 2000s, at 887 MtC yr−1 (range: 646–1238) and 880 MtC yr−1 (range: 602–1237) respectively, with humid regions contributing two-thirds. The estimates of forest area change have small statistical standard errors due to the large sample size. We also reduce the uncertainties of previous estimates of carbon losses and removals. Our estimates of forest area change are significantly lower than national survey data. We reconcile recent low estimates of carbon emissions from tropical deforestation for the early 2000s and show that carbon loss rates did not change between the last two decades. Carbon losses from deforestation represent circa 10% of carbon emissions from fossil fuel combustion and cement production during the last decade (2000–2010). Our estimates of annual removals of carbon by forest regrowth, at 115 MtC yr−1 (range: 61–168) and 97 MtC yr−1 (range: 53–141) for the 1990s and 2000s respectively, are five to fifteen times lower than earlier published estimates. PMID:24753029
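The two headline quantities, the annual loss rate and the carbon loss obtained by overlaying area change on a biomass map, reduce to simple arithmetic. The total forest area and mean carbon density below are rough illustrative values chosen only to land near the reported figures, not numbers taken from the study.

```python
# Back-of-envelope sketch of the study's two combined quantities.
# Inputs are illustrative, not the study's sampled data.

def annual_rate(area_start_mha, gross_loss_mha, years):
    """Annual gross loss rate (% of initial forest area per year)."""
    return 100.0 * gross_loss_mha / years / area_start_mha

def carbon_loss(loss_mha_per_yr, biomass_tc_per_ha):
    """Carbon loss (MtC/yr) = area lost (Mha/yr) x carbon density (tC/ha)."""
    return loss_mha_per_yr * biomass_tc_per_ha

# ~1550 Mha of tropical forest losing 76 Mha over the 2000s gives ~0.49%/yr
rate_2000s = annual_rate(area_start_mha=1550.0, gross_loss_mha=76.0, years=10)
# 7.6 Mha/yr at an assumed ~116 tC/ha mean density gives ~880 MtC/yr
c_loss = carbon_loss(7.6, 116.0)
```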
Combining statistical inference and decisions in ecology
Williams, Perry J.; Hooten, Mevin B.
2016-01-01
Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation, and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem.
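A minimal illustration of SDT's central object, the loss function, in the Bayesian point-estimation setting mentioned above: for the same sample of posterior draws, the Bayes point estimate changes with the loss chosen. This is the standard result that squared-error loss is minimized by the posterior mean and absolute-error loss by the posterior median; the draws are toy numbers.

```python
# The Bayes estimate is whatever action minimizes posterior expected loss;
# for two classic losses the minimizers have closed forms.
import statistics

def bayes_estimate(posterior_draws, loss="squared"):
    if loss == "squared":
        return statistics.mean(posterior_draws)    # minimizes E[(a - theta)^2]
    if loss == "absolute":
        return statistics.median(posterior_draws)  # minimizes E[|a - theta|]
    raise ValueError("unknown loss")

draws = [1.0, 2.0, 2.5, 3.0, 10.0]          # skewed posterior sample
est_sq = bayes_estimate(draws, "squared")    # 3.7
est_abs = bayes_estimate(draws, "absolute")  # 2.5
```

The skew makes the point: the two losses give visibly different "best" estimates, which is why constructing the loss function is central to an SDT problem.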
NASA Technical Reports Server (NTRS)
Klos, J.; Brown, S. A.
2002-01-01
A technique to measure the radiated acoustic intensity and transmission loss of panels is documented in this paper. The transmission loss test facility has been upgraded to include a fixture that scans the acoustic intensity radiated from a panel on the anechoic receiving-room side of the transmission loss window. The acoustic intensity incident on the panel from the reverberant side of the transmission loss window is estimated from measurements made with six stationary microphones in the reverberant source room. From the measured incident and radiated intensities, the sound power transmission loss is calculated. The setup of the facility and the data acquisition system are documented. A transmission loss estimate for a typical panel is shown. The measurement-to-measurement and setup-to-setup repeatability of the transmission loss estimate is assessed. Conclusions are drawn about the ability to measure changes in transmission loss due to changes in panel construction.
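The final computation described, sound power transmission loss from the measured incident and radiated powers, is a one-line formula; the power values below are illustrative.

```python
# Transmission loss in dB from incident and transmitted (radiated) sound power.
import math

def transmission_loss_db(incident_power_w, radiated_power_w):
    """TL = 10 log10(Pi / Pt), in dB."""
    return 10.0 * math.log10(incident_power_w / radiated_power_w)

tl = transmission_loss_db(1e-3, 1e-6)  # a 1000:1 power ratio is 30 dB
```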
Welfare analysis of a zero-smoking policy - A case study in Japan.
Nakamura, Yuuki; Takahashi, Kenzo; Nomura, Marika; Kamei, Miwako
2018-03-19
Smoking cessation efforts in Japan have reduced smoking rates. A future zero-smoking policy would completely prohibit smoking (a 0% rate). We therefore analyzed the social welfare of smokers and non-smokers under a hypothetical zero-smoking policy. The demand curve for smoking from 1990 to 2014 was estimated by defining quantity as the number of cigarettes smoked and price as total tobacco sales/total cigarettes smoked, using the two-stage least squares method with the tax on tobacco as the instrumental variable. In the estimation equation (calculated using the ordinary least squares method), the price of tobacco was the dependent variable and tobacco quantity the explanatory variable. The estimated constant was 31.90, the estimated coefficient of quantity was −0.0061 (both p < 0.0004), and the coefficient of determination was 0.9187. Thus, the 2015 consumer surplus was 1.08 trillion yen (US$9.82 billion) (95% confidence interval (CI), 889 billion yen (US$8.08 billion) to 1.27 trillion yen (US$11.6 billion)). Because tax revenue from tobacco in 2011 was 2.38 trillion yen (US$21.6 billion), the estimated deadweight loss if smoking had been prohibited in 2014 was 3.31 trillion yen (US$30.2 billion) (95% CI, 3.13 trillion yen (US$28.5 billion) to 3.50 trillion yen (US$31.8 billion)), about 0.6 trillion yen (US$5.45 billion) below the 2014 disease burden (4.10–4.12 trillion yen (US$37.3–37.5 billion)). We conclude that a zero-smoking policy would improve social welfare in Japan.
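The welfare figures rest on the estimated linear inverse demand curve P = 31.90 − 0.0061·Q. Consumer surplus at an observed quantity is the triangle between the demand curve and the market price. The quantity plugged in below is hypothetical (the paper's quantity units are not reproduced here); only the intercept and slope come from the abstract.

```python
# Consumer surplus under a linear inverse demand curve P = A - B*Q.
A, B = 31.90, 0.0061  # constant and quantity coefficient from the 2SLS fit

def price(q):
    return A - B * q

def consumer_surplus(q_star):
    """Triangle area: 0.5 * Q* * (choke price - market price)."""
    return 0.5 * q_star * (A - price(q_star))

cs = consumer_surplus(2000.0)  # hypothetical quantity of 2000 units
```

With linear demand the triangle simplifies to 0.5·B·Q*², which is why the surplus (and hence the deadweight loss of prohibition) scales with the square of the quantity removed from the market.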
Prosdocimi, Massimo; Burguet, Maria; Di Prima, Simone; Sofia, Giulia; Terol, Enric; Rodrigo Comino, Jesús; Cerdà, Artemi; Tarolli, Paolo
2017-01-01
Soil water erosion is a serious problem, especially in agricultural lands. Among these, vineyards deserve attention because, in Mediterranean areas, they constitute a land use affected by high soil losses. A significant problem in the study of soil water erosion in these areas is the lack of a standardized procedure for collecting data and reporting results, mainly due to the variability among the measurement methods applied. Given this issue and the seriousness of soil water erosion in Mediterranean vineyards, this work aims to quantify the soil losses caused by simulated rainstorms and to compare them using two different methodologies: (i) rainfall simulation and (ii) a surface-elevation-change method relying on high-resolution Digital Elevation Models (DEMs) derived from a photogrammetric technique (Structure-from-Motion, or SfM). The experiments were carried out at very fine scales in a typical Mediterranean vineyard located in eastern Spain. SfM data were obtained from one reflex camera and a smartphone built-in camera. An index of sediment connectivity was also applied to evaluate the potential effect of connectivity within the plots. DEMs derived from the smartphone and the reflex camera were comparable in terms of accuracy and capability of estimating soil loss. Furthermore, soil loss estimated with the surface-elevation-change method was of the same order of magnitude as that obtained with rainfall simulation, as long as the sediment connectivity within the plot was considered. High-resolution topography derived from SfM proved essential in the sediment connectivity analysis and, therefore, in the estimation of eroded material when comparing the two methodologies. The fact that smartphone built-in cameras can produce results as satisfactory as those from reflex cameras adds considerable value to the SfM approach.
Copyright © 2016 Elsevier B.V. All rights reserved.
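The surface-elevation-change method above amounts to differencing pre- and post-event DEMs (a DEM of Difference) and converting elevation drops to eroded volume cell by cell. A minimal sketch follows; the grids, cell size, and the choice to ignore deposition are illustrative simplifications, not the study's workflow.

```python
# Soil loss volume from a DEM of Difference (DoD), cell by cell.
# Deposition (elevation gains) is ignored in this simple sketch.

def dod_soil_loss(dem_before, dem_after, cell_area_m2):
    """Return eroded volume (m^3): sum of elevation drops times cell area."""
    loss = 0.0
    for row_b, row_a in zip(dem_before, dem_after):
        for zb, za in zip(row_b, row_a):
            if za < zb:
                loss += (zb - za) * cell_area_m2
    return loss

before = [[1.00, 1.02], [1.01, 1.03]]
after = [[0.99, 1.02], [1.00, 1.04]]  # 1 cm lost in two cells, 1 cm gained in one
volume = dod_soil_loss(before, after, cell_area_m2=0.01)  # m^3
```

Multiplying the eroded volume by a bulk density then gives a soil loss mass comparable with the sediment collected in the rainfall simulations.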
Remote sensing as a tool for estimating soil erosion potential
NASA Technical Reports Server (NTRS)
Morris-Jones, D. R.; Morgan, K. M.; Kiefer, R. W.
1979-01-01
The Universal Soil Loss Equation (USLE) is a frequently used methodology for estimating soil erosion potential. It requires several types of geographic information (e.g. topographic slope, soil erodibility, land use, crop type, and soil conservation practice). This information is traditionally gathered from topographic maps, soil surveys, field surveys, and interviews with farmers. Remote sensing data sources and interpretation techniques provide an alternative method for collecting information on land use, crop type, and soil conservation practice. Airphoto interpretation techniques and medium-altitude, multi-date color and color infrared positive transparencies (70 mm) were used in this study to determine their effectiveness for gathering the desired land use/land cover data. Successful results were obtained within the test site, a 6136-hectare watershed in Dane County, Wisconsin.
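The USLE itself is a simple multiplicative model, A = R·K·LS·C·P; remote sensing chiefly informs the cover (C) and practice (P) factors named above. The factor values below are purely illustrative.

```python
# Universal Soil Loss Equation: average annual soil loss as a product of factors.

def usle(r, k, ls, c, p):
    """A (t/ha/yr) from rainfall erosivity R, soil erodibility K,
    slope length-steepness LS, cover-management C, support practice P."""
    return r * k * ls * c * p

# Illustrative factor values; C and P are the ones remote sensing can supply.
a = usle(r=100.0, k=0.3, ls=1.2, c=0.2, p=0.5)  # 3.6 t/ha/yr
```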
Shi, Zhonglin; Wen, Anbang; Zhang, Xinbao; Yan, Dongchun
2011-10-01
The potential for using ⁷Be measurements to document soil redistribution associated with heavy rainfall was assessed on a bare purple-soil plot in the Three Gorges Reservoir region of China. The results were compared with direct measurements from the traditional approaches of erosion pins and runoff plots. The study shows that estimates of soil losses from ⁷Be are comparable with the monitoring results provided by erosion pins and runoff plots, and are also in agreement with the existing knowledge provided by ¹³⁷Cs measurements. The results demonstrate the potential of the ⁷Be technique for quantifying short-term erosion rates in these areas. Copyright © 2011 Elsevier Ltd. All rights reserved.
Braeye, Toon; Verheagen, Jan; Mignon, Annick; Flipse, Wim; Pierard, Denis; Huygen, Kris; Schirvel, Carole; Hens, Niel
2016-01-01
Introduction Surveillance networks are often neither exhaustive nor completely complementary. In such situations, capture-recapture methods can be used for incidence estimation. The choice of estimator and the estimators' robustness with respect to the homogeneity and independence assumptions are, however, not well documented. Methods We investigated the performance of five different capture-recapture estimators in a simulation study. Eight different scenarios were used to detect and combine case information. The scenarios increasingly violated the assumptions of independence of samples and homogeneity of detection probabilities. Belgian datasets on invasive pneumococcal disease (IPD) and pertussis provided motivating examples. Results No estimator was unbiased in all scenarios. The performance of the parametric estimators depended on how much of the dependency and heterogeneity was correctly modelled. Model building was limited by parameter estimability, the availability of additional information (e.g. covariates) and the possibilities inherent to the method. In the most complex scenario, methods that allowed for detection probabilities conditional on previous detections estimated the total population size within a 20–30% error range. Parametric estimators remained stable if individual data sources lost up to 50% of their data. The investigated non-parametric methods were more susceptible to data loss, and their performance was linked to the dependence between samples: overestimating in scenarios with little dependence, underestimating in others. Issues with parameter estimability made it impossible to model all suggested relations between samples for the IPD and pertussis datasets. For IPD, the estimates of the Belgian incidence for cases aged 50 years and older ranged from 44 to 58/100,000 in 2010. The estimates for pertussis (all ages, Belgium, 2014) ranged from 24.2 to 30.8/100,000.
Conclusion We encourage the use of capture-recapture methods, but epidemiologists should preferably include datasets for which the underlying dependency structure is not too complex, a priori investigate this structure, compensate for it within the model and interpret the results with the remaining unmodelled heterogeneity in mind. PMID:27529167
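The simplest member of the estimator family compared above is the two-source Lincoln-Petersen estimator, shown here in Chapman's bias-corrected form. It assumes exactly what the simulation study deliberately violates: independent sources and homogeneous detection probabilities.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator for two sources.

def chapman_estimate(n1, n2, m):
    """n1, n2: cases detected by source 1 and 2; m: cases detected by both.
    Returns the estimated total number of cases in the population."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Illustrative counts: two surveillance systems with 60 cases in common.
n_hat = chapman_estimate(n1=150, n2=120, m=60)  # ~298.5 total cases
```

When the two sources are positively dependent (e.g. a severe case is likely to be reported to both), m is inflated and this estimator underestimates the total, which is the kind of bias the multi-source parametric models try to compensate for.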
Evaluation of the percentage of ganglion cells in the ganglion cell layer of the rodent retina
Schlamp, Cassandra L.; Montgomery, Angela D.; Mac Nair, Caitlin E.; Schuart, Claudia; Willmer, Daniel J.
2013-01-01
Purpose Retinal ganglion cells comprise only a percentage of the neurons residing in the ganglion cell layer (GCL) of the rodent retina. This estimate is useful for extrapolating ganglion cell loss in models of optic nerve disease, but the values reported in the literature are highly variable depending on the methods used to obtain them. Methods We tested three retrograde labeling methods and two immunostaining methods to calculate ganglion cell number in the mouse retina (C57BL/6). Additionally, a double-stain retrograde labeling method was used in rats (Long-Evans). The total number of neurons was estimated using a nuclear stain and selecting for nuclei that met specific criteria. Cholinergic amacrine cells were identified using transgenic mice expressing Tomato fluorescent protein. Total neurons and total ganglion cells were counted in microscopic fields of 10⁴ µm² to determine the percentage of neurons comprising ganglion cells in each field. Results Historical estimates of the percentage of ganglion cells in the mouse GCL range from 36.1% to 67.5% depending on the method used. Experimentally, retrograde labeling methods yielded a combined estimate of 50.3% in mice. A retrograde method also yielded a value of 50.2% for rat retinas. Immunolabeling estimates were higher, at 64.8%. Immunolabeling may introduce overestimates, however, through non-specific labeling effects or ectopic expression of antigens in neurons other than ganglion cells. Conclusions Since immunolabeling methods may overestimate ganglion cell numbers, we conclude that 50%, a value consistently derived from retrograde labeling methods, is a reliable estimate of the proportion of ganglion cells in the neuronal population of the GCL. PMID:23825918
NASA Astrophysics Data System (ADS)
Florian, Ehmele; Michael, Kunz
2016-04-01
Several major flood events occurred in Germany in the past 15-20 years, especially in the eastern parts along the rivers Elbe and Danube. Examples include the major floods of 2002 and 2013, with estimated losses of about 2 billion Euros each. The last major flood events in the State of Baden-Württemberg in southwest Germany occurred in 1978 and 1993/1994 along the rivers Rhine and Neckar, with estimated total losses of about 150 million Euros (converted) each. Flood hazard originates from a combination of different meteorological, hydrological and hydraulic processes. Currently there is no established methodology for evaluating and quantifying the flood hazard and related risk for larger areas or whole river catchments rather than single gauges. In order to estimate the probable maximum loss for higher return periods (e.g. 200 years, PML200), a stochastic model approach is designed, since observational data are limited in time and space. In our approach, precipitation is linearly composed of three elements: background precipitation, orographically-induced precipitation, and a convectively-driven part. We use the linear theory of orographic precipitation formation for the stochastic precipitation model (SPM), which is based on fundamental statistics of the relevant atmospheric variables. For an adequate number of historic flood events, the corresponding atmospheric conditions and parameters are determined in order to calculate a probability density function (pdf) for each variable. This method encompasses all theoretically possible scenarios, including those that have not yet occurred. This work is part of the FLORIS-SV (FLOod RISk Sparkassen Versicherung) project and establishes the first step of a complete modelling chain of the flood risk. On the basis of the generated stochastic precipitation event set, hydrological and hydraulic simulations will be performed to estimate discharge and water level.
The resulting stochastic flood event set will be used to quantify the flood risk and to estimate probable maximum loss (e.g. PML200) for a given property (buildings, industry) portfolio.
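The stochastic precipitation model composes each event linearly from the three parts named above, each drawn from a pdf fitted to historical events. The toy version below uses assumed lognormal and exponential distributions purely for illustration; the actual SPM fits its own distributions to atmospheric variables.

```python
# Toy stochastic precipitation generator: linear composition of three parts,
# each sampled from an assumed (illustrative) distribution.
import random

def sample_precipitation(rng):
    background = rng.lognormvariate(1.0, 0.5)  # mm, stratiform background
    orographic = rng.lognormvariate(0.5, 0.7)  # mm, terrain-induced part
    convective = rng.expovariate(0.2)          # mm, convective contribution
    return background + orographic + convective

rng = random.Random(42)  # seeded for reproducibility
events = [sample_precipitation(rng) for _ in range(10_000)]
```

Such a synthetic event set is far larger than the observational record, which is what makes estimating high return periods like PML200 feasible downstream.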
[Estimation of N loss loading by runoff from paddy field during submersed period in Hangjiahu area].
Tian, Ping; Chen, Yingxu; Tian, Guangming; Liang, Xinqiang; Zhang, Qiuling; Yu, Qiaogang; Li, Hua
2006-10-01
As the largest bread basket in Zhejiang Province, the Hangjiahu area is facing increasingly serious water pollution, and N loss loading by runoff from paddy fields during the submersed period is the main cause. Through field experiments and fixed-spot observation, the precipitation-runoff model for the Yangtze delta was tested; the results showed that the precipitation-runoff model of HE Baogen accorded well with the observations once the effect of the field overflow mouth was considered, with errors between −19.9% and +18.0%. A model of N concentration with precipitation-runoff in paddy fields during the submersed period was put forward, with an R value of 0.948. These two models together constituted the model of N loss loading by runoff from paddy fields during the submersed period. Based on this model, the past 30 years of fertilization and precipitation data, and 1:250,000 topographic, land use, and water system maps, the N loss loading and its distribution were estimated using GIS methods. The results showed that the N loss loading varied from place to place, averaging 35.26 kg N·hm⁻² and accounting for 12.69% of the applied N. The N loss loading in Anji and Yuhang, which receive markedly more precipitation, was higher than elsewhere, while Haining also had a serious N loss problem because of its huge amount of applied N.
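The loading model couples a rainfall-runoff model with an N-concentration model, and the final loading is the runoff-weighted sum over events. A minimal version of that bookkeeping, with illustrative event values rather than the study's data:

```python
# N loss loading as the sum over events of runoff depth times concentration.

def n_loss_loading(events):
    """events: list of (runoff_mm, n_conc_mg_per_l). Returns kg N per ha.
    1 mm of runoff over 1 ha = 10 m^3 and 1 mg/L = 1 g/m^3, hence the 0.01."""
    return sum(runoff * conc * 0.01 for runoff, conc in events)

# Two hypothetical runoff events during the submersed period
load = n_loss_loading([(30.0, 4.0), (12.0, 6.5)])  # kg N/ha
```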
Monitoring of prestress losses using long-gauge fiber optic sensors
NASA Astrophysics Data System (ADS)
Abdel-Jaber, Hiba; Glisic, Branko
2017-04-01
Prestressed concrete has been increasingly used in the construction of bridges due to its superiority as a building material. This has necessitated better assessment of its on-site performance. One of the most important indicators of structural integrity and performance of prestressed concrete structures is the spatial distribution of prestress forces over time, i.e. prestress losses along the structure. Time-dependent prestress losses occur due to dimensional changes in the concrete caused by creep and shrinkage, in addition to strand relaxation. Maintaining certain force levels in the strands, and thus the concrete cross-sections, is essential to ensuring stresses in the concrete do not exceed design stresses, which could cause malfunction or failure of the structure. This paper presents a novel method for monitoring prestress losses based on long-gauge fiber optic sensors embedded in the concrete during construction. The method includes the treatment of varying environmental factors such as temperature to ensure accuracy of results in on-site applications. The method is presented as applied to a segment of a post-tensioned pedestrian bridge on the Princeton University campus, Streicker Bridge. The segment is a three-span continuous girder supported on steel columns, with sensors embedded at key locations along the structure during construction in October 2009. Temperature and strain measurements have been recorded intermittently since construction. The prestress loss results are compared to estimates from design documents.
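A heavily simplified version of the sensor-based estimate described above: subtract the thermal strain from the total measured strain, then convert the remaining mechanical strain into a force change over the cross-section. The real method in the paper must also separate creep and shrinkage effects; all values and the linear-elastic conversion below are illustrative assumptions.

```python
# Simplified prestress-force change from a long-gauge strain measurement
# with temperature compensation. Illustrative only; creep/shrinkage
# separation, as required on a real structure, is omitted.

def prestress_force_change(total_strain, delta_t, alpha_t, e_mpa, area_mm2):
    mech_strain = total_strain - alpha_t * delta_t  # remove thermal strain
    return mech_strain * e_mpa * area_mm2 / 1000.0  # force change in kN

# -250 microstrain measured, 5 C warming, E = 35 GPa, 0.5 m^2 cross-section
dp_kn = prestress_force_change(-250e-6, 5.0, 10e-6, 35_000.0, 500_000.0)
```

A negative result (here a compressive-force reduction) tracked over months at several cross-sections gives the spatial distribution of prestress losses the paper is after.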
Earthquake Loss Scenarios: Warnings about the Extent of Disasters
NASA Astrophysics Data System (ADS)
Wyss, M.; Tolis, S.; Rosset, P.
2016-12-01
It is imperative that losses expected due to future earthquakes be estimated. Officials and the public need to be aware of what disaster is likely in store for them in order to reduce fatalities and efficiently help the injured. Scenarios for earthquake parameters can be constructed to reasonable accuracy in highly active earthquake belts, based on knowledge of seismotectonics and history. Because of the inherent uncertainties of loss estimates, however, it would be desirable that more than one group calculate an estimate for the same area. By discussing these estimates, one may find a consensus on the range of potential disasters and persuade officials and residents of the reality of the earthquake threat. Modeling a scenario and estimating earthquake losses requires sufficiently accurate data sets on the number of people present, the built environment, and, if possible, the transmission of seismic waves. As examples we use loss estimates for possible repeats of historic earthquakes in Greece that occurred between 464 BC and AD 700. We model future large Greek earthquakes as having M6.8 and rupture lengths of 60 km. In four locations where historic earthquakes with serious losses have occurred, we estimate that 1,000 to 1,500 people might perish, with about four times as many injured. Defining the area of influence of these earthquakes as that with shaking intensities of V and larger, we estimate that 1.0 to 2.2 million people in about 2,000 settlements may be affected. We calibrate the QLARM tool for calculating intensities and losses in Greece using the M6 1999 Athens earthquake, matching the isoseismal information for six earthquakes that occurred in Greece during the last 140 years.
Comparing fatality numbers that would occur theoretically today with the numbers reported, and correcting for the increase in population, we estimate that the improvement of the building stock has reduced the mortality and injury rate in Greek earthquakes by average factors of 3.0 and 1.9, respectively. In addition, it would be desirable to estimate the expected monetary losses by adding a data layer for values of the various building types present.
Analytical Method to Estimate the Complex Permittivity of Oil Samples.
Su, Lijuan; Mata-Contreras, Javier; Vélez, Paris; Fernández-Prieto, Armando; Martín, Ferran
2018-03-26
In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by such LUT, and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated from the measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.
USDA-ARS?s Scientific Manuscript database
Current restoration efforts for the Chesapeake Bay watershed mandate a timeline for reducing the load of nutrients and sediment to receiving waters. The Chesapeake Bay Watershed Model (WSM) has been used for two decades to simulate hydrology and nutrient and sediment transport; however, spatial limi...
Wooden breast condition results in reduced thaw loss in frozen-thawed broiler breast fillets
USDA-ARS?s Scientific Manuscript database
Wooden breast condition (WBC) is an emerging quality issue with broiler breast meat that significantly affects both raw and cooked meat properties. The objective of this study was to investigate the effects of WBC on meat water-holding capacity (WHC) estimated with different methods. Broiler breast ...
Floods and food security: A method to estimate the effect of inundation on crops availability
NASA Astrophysics Data System (ADS)
Pacetti, Tommaso; Caporali, Enrica; Rulli, Maria Cristina
2017-12-01
The connections between floods and food security are highly relevant, especially in developing countries, where food availability can be severely jeopardized by extreme events that damage the primary means of access to food, i.e. agriculture. A method for evaluating the effects of floods on food supply, consisting of the integration of remote sensing data, agricultural statistics and water footprint databases, is proposed and applied to two case studies. Based on the existing literature on extreme floods, the events in Bangladesh (2007) and in Pakistan (2010) were selected as exemplary case studies. Results show that the use of remote sensing data combined with other sources of on-site information is particularly useful for assessing the effects of flood events on food availability. The damage caused by floods to agricultural areas is estimated in terms of crop losses and then converted into lost calories and water footprint as complementary indicators. The method is fully repeatable: the remote sensing data sources are valid worldwide, whereas the data on land use and crop characteristics are strongly site specific and need to be evaluated carefully. A sensitivity analysis was carried out on the critical water depth for crops in Bangladesh, varying the assumed level by ±20%. The results show a difference of 12% in the estimated energy content losses, underlining the importance of accurate data choice.
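The conversion chain described above runs from flooded cropland to lost crop mass, and from there to lost calories and embedded water footprint. A sketch of that chain follows; all coefficients are illustrative, not the study's values.

```python
# Flood impact on food availability: flooded area -> lost crop mass ->
# lost calories and lost (embedded) water footprint. Illustrative coefficients.

def crop_loss_indicators(flooded_ha, yield_t_per_ha, kcal_per_t, wf_m3_per_t):
    lost_tonnes = flooded_ha * yield_t_per_ha
    return {
        "lost_tonnes": lost_tonnes,
        "lost_kcal": lost_tonnes * kcal_per_t,        # energy indicator
        "lost_water_m3": lost_tonnes * wf_m3_per_t,   # water footprint indicator
    }

# Hypothetical rice example: 10,000 ha fully lost at 4 t/ha
impact = crop_loss_indicators(10_000, 4.0, 3.6e6, 1_300.0)
```

The sensitivity analysis in the abstract corresponds to perturbing the flooded area (via the critical water depth) and propagating the ±20% change through this chain.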
Application of MUSLE for the prediction of phosphorus losses.
Noor, Hamze; Mirnia, Seyed Khalagh; Fazli, Somaye; Raisi, Mohamad Bagher; Vafakhah, Mahdi
2010-01-01
Soil erosion in forestlands affects not only land productivity but also downstream water bodies. The Universal Soil Loss Equation (USLE) has been applied broadly for the prediction of soil loss from upland fields. However, there are few reports concerning the prediction of nutrient (P) losses based on the USLE and its variants. The present study was conducted to evaluate the applicability of the deterministic Modified Universal Soil Loss Equation (MUSLE) to the estimation of phosphorus losses in the Kojor forest watershed, northern Iran. The model was tested and calibrated using accurate, continuous P loss data collected during seven storm events in 2008. Results of the original model simulations for storm-wise P loss did not match the observed data, while the revised version of the model reproduced the observed values well. The results support the application of the revised MUSLE for estimating storm-wise P losses in the study area, with a level of agreement beyond 93% and an acceptable estimation error of some 35%.
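The MUSLE replaces USLE's rainfall-erosivity factor with a runoff term, which is what makes it event-based: Y = 11.8·(Q·q_p)^0.56·K·LS·C·P, with Y the event sediment yield (t), Q the runoff volume (m³) and q_p the peak discharge (m³/s). Extending it to phosphorus, as in the study, amounts to multiplying the sediment yield by a P content factor; the factor values below are illustrative only.

```python
# Event-based MUSLE sediment yield and a simple sediment-bound P export.

def musle_sediment_yield(q_m3, qp_m3s, k, ls, c, p):
    """Event sediment yield Y (t) = 11.8 * (Q * q_p)^0.56 * K * LS * C * P."""
    return 11.8 * (q_m3 * qp_m3s) ** 0.56 * k * ls * c * p

def storm_p_loss(sediment_t, p_mg_per_kg):
    """Phosphorus exported with sediment (kg) for a given P content."""
    return sediment_t * 1000.0 * p_mg_per_kg * 1e-6

y = musle_sediment_yield(q_m3=5000.0, qp_m3s=2.0, k=0.3, ls=1.5, c=0.1, p=1.0)
p_loss_kg = storm_p_loss(y, p_mg_per_kg=800.0)
```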
Tiwari, Ashwani; VanLeeuwen, John A.; Dohoo, Ian R.; Keefe, Greg P.; Weersink, Alfons
2008-01-01
The objective of this study was to estimate the annual losses from Mycobacterium avium subspecies paratuberculosis (MAP) for an average, MAP-seropositive, Canadian dairy herd. A partial-budget simulation model was developed with 4 components of direct production losses (decreased milk production, premature voluntary culling, mortality, and reproductive losses). Input values were obtained primarily from a national seroprevalence survey of 373 Canadian dairy farms in 8 of 10 provinces. The model took into account the variability and uncertainty of the required input values; consequently, it produced probability distributions of the estimated losses. For an average Canadian dairy herd with 12.7% of 61 cows seropositive for MAP, the mean loss was $2992 (95% C.I., $143 to $9741) annually, or $49 per cow per year. Additional culling, decreased milk production, mortality, and reproductive losses accounted for 46%, 9%, 16%, and 29% of the losses, respectively. Canadian dairy producers should use best management practices to reduce these substantial annual losses. PMID:18624066
Analysis of temperature profiles for investigating stream losses beneath ephemeral channels
Constantz, Jim; Stewart, Amy E.; Niswonger, Richard G.; Sarma, Lisa
2002-01-01
Continuous estimates of streamflow are challenging in ephemeral channels. The extremely transient nature of ephemeral streamflow results in shifting channel geometry and degradation of the calibration of streamflow stations. Earlier work suggests that analysis of streambed temperature profiles is a promising technique for estimating streamflow patterns in ephemeral channels. The present work provides a detailed examination of the basis for using heat as a tracer of stream/groundwater exchanges, followed by a description of an appropriate heat and water transport simulation code for ephemeral channels, as well as a discussion of several types of temperature analysis techniques for determining streambed percolation rates. Temperature-based percolation rates for three ephemeral stream sites are compared with available surface-water estimates of channel loss for these sites. These results are combined with published results to develop conclusions regarding the accuracy of using vertical temperature profiles to estimate channel losses. Comparisons of temperature-based streambed percolation rates with surface-water-based channel losses indicate that percolation represented 30% to 50% of the total channel loss. The difference is reasonable, since channel losses include both vertical and nonvertical components as well as potential evapotranspiration losses. The most significant advantage of sediment-temperature profiles is their robust and continuous nature, leading to a long-term record of the timing and duration of channel losses and continuous estimates of streambed percolation. The primary disadvantage is that temperature profiles represent the continuous percolation rate at a single point in an ephemeral channel rather than an average seepage loss from the entire channel.
Molecular dynamics simulations of classical sound absorption in a monatomic gas
NASA Astrophysics Data System (ADS)
Ayub, M.; Zander, A. C.; Huang, D. M.; Cazzolato, B. S.; Howard, C. Q.
2018-05-01
Sound wave propagation in argon gas is simulated using molecular dynamics (MD) in order to determine the attenuation of acoustic energy due to classical (viscous and thermal) losses at high frequencies. In addition, a method is described to estimate attenuation of acoustic energy using the thermodynamic concept of exergy. The results are compared against standing wave theory and the predictions of the theory of continuum mechanics. Acoustic energy losses are studied by evaluating various attenuation parameters and by comparing the changes in behavior at three different frequencies. This study demonstrates acoustic absorption effects in a gas simulated in a thermostatted molecular simulation and quantifies the classical losses in terms of the sound attenuation constant. The approach can be extended to further understanding of acoustic loss mechanisms in the presence of nanoscale porous materials in the simulation domain.
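For context on the "classical (viscous and thermal) losses" the simulations target, the continuum-mechanics prediction is the Stokes-Kirchhoff attenuation constant. The snippet below evaluates it for argon; the gas property values are approximate round numbers, not parameters taken from the study.

```python
import math

def classical_attenuation(freq_hz, rho, c, mu, k_th, gamma, cp):
    """Stokes-Kirchhoff classical absorption coefficient (Np/m) for a plane
    wave: viscous (4*mu/3) plus thermal-conduction ((gamma-1)*k/cp) losses."""
    omega = 2.0 * math.pi * freq_hz
    return (omega ** 2 / (2.0 * rho * c ** 3)) * (
        4.0 * mu / 3.0 + (gamma - 1.0) * k_th / cp)

# Approximate argon properties near 300 K and 1 atm (illustrative values):
rho, c = 1.62, 322.0            # density kg/m^3, sound speed m/s
mu, k_th = 2.27e-5, 0.0177      # shear viscosity Pa*s, conductivity W/m/K
gamma, cp = 5.0 / 3.0, 520.3    # heat-capacity ratio, cp in J/(kg K)

alpha_1ghz = classical_attenuation(1e9, rho, c, mu, k_th, gamma, cp)
alpha_2ghz = classical_attenuation(2e9, rho, c, mu, k_th, gamma, cp)
```

Because the coefficient scales with the square of frequency, doubling the frequency quadruples the classical attenuation, which is why these losses become significant at the high (GHz-scale) frequencies accessible to molecular dynamics.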
NASA Astrophysics Data System (ADS)
Kuttippurath, J.; Godin-Beekmann, S.; Lefevre, F.; Pazmino, A.
2009-04-01
Ozone loss in recent Antarctic winters was high enough to cause a lag in the recovery phase of stratospheric ozone above this continent. We quantitatively examine the extent of ozone loss variability during 2005-2008 with simulations from a high-resolution chemical transport model, MIMOSA-CHIM. The simulated results are cross-checked against the observed loss derived from Microwave Limb Sounder (MLS) satellite data. This study uses the vortex-averaged data at the potential temperature level of 475 K from both MIMOSA and MLS to estimate the ozone loss by the transport method. Minimum temperatures calculated from ECMWF analyses over 50-90°S at 475 K are coldest in 2008 during June-July and in 2006 during September-November. In general, Antarctic winters experience NAT temperatures from mid-May to mid-October and ICE temperatures from June to September. Because chemical ozone loss saturates, the year-to-year differences in temperature do not have a large effect. The estimated cumulative ozone loss from MIMOSA-CHIM at 475 K is 3.2 ppm in 2005, 2.9 ppm in 2006, 2.8 ppm in 2007 and 2.0 ppm in 2008. The measured cumulative loss in the respective years shows similar values: 3.3, 3.2, 2.8 and 2.2 ppm in 2005, 2006, 2007 and 2008, respectively. Both data sets show the same loss trend, with the cumulative loss highest in 2005, followed by 2006, and lowest in 2008, in accord with the chlorine activation and denitrification found in the respective winters. The simulations in 2008 lack adequate diabatic descent, as assessed from tracer simulations in comparison with measurements. This eventually produced relatively lower ozone loss values in 2008 in both data sets, even though the observed chlorine activation was similar to that in previous winters.
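The vortex-averaged "transport method" is commonly implemented as the difference between a passively advected ozone tracer and the ozone actually simulated or observed; the sketch below follows that reading, with synthetic mixing ratios rather than MLS or MIMOSA-CHIM data.

```python
def cumulative_ozone_loss(passive_ppm, observed_ppm):
    """Chemical ozone loss (ppm) at each time step as the difference between
    a passively transported ozone tracer (dynamics only, no chemistry) and
    the ozone actually present; cumulative loss is the final value."""
    series = [p - o for p, o in zip(passive_ppm, observed_ppm)]
    return series, series[-1]

# Synthetic vortex-averaged mixing ratios at 475 K (ppm), winter progression:
passive  = [2.8, 2.8, 2.8, 2.8, 2.8]   # tracer preserved by transport alone
observed = [2.8, 2.4, 1.6, 0.9, 0.6]   # chemistry destroys ozone
loss_series, cumulative = cumulative_ozone_loss(passive, observed)
```

In practice the passive tracer also evolves with descent and mixing, which is why the abstract notes that inadequate diabatic descent in the 2008 simulation lowers the diagnosed loss.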
Kirigia, Joses Muthuri; Muthuri, Rosenabi Deborah Karimi
2016-06-01
In 2014, almost half of the global tuberculosis deaths occurred in the World Health Organization (WHO) African Region. Approximately 21.5 % of the 6 060 742 TB cases (new and relapse) reported to the WHO in 2014 were in the African Region. The specific objective of this study was to estimate future gross domestic product (GDP) losses associated with TB deaths in the African Region for use in advocating for better strategies to prevent and control tuberculosis. The cost-of-illness method was used to estimate non-health GDP losses associated with TB deaths. Future non-health GDP losses were discounted at 3 %. The analysis was conducted for three income groups of countries. One-way sensitivity analysis at 5 and 10 % discount rates was undertaken to assess the impact on the expected non-health GDP loss. The 0.753 million tuberculosis deaths that occurred in the African Region in 2014 would be expected to decrease future non-health GDP by International Dollars (Int$) 50.4 billion. Nearly 40.8, 46.7 and 12.5 % of that loss would come from high- and upper-middle-income, lower-middle-income, and low-income countries, respectively. The average total non-health GDP loss would be Int$66 872 per tuberculosis death. The average non-health GDP loss per TB death was Int$167 592 for Group 1, Int$69 808 for Group 2 and Int$21 513 for Group 3. Tuberculosis exerts a sizeable economic burden on the economies of the WHO AFR countries. This implies the need to strongly advocate for better strategies to prevent and control tuberculosis and to help countries end the tuberculosis epidemic by 2030, as envisioned in the United Nations General Assembly resolution on the Sustainable Development Goals (SDGs).
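The cost-of-illness calculation behind these figures can be sketched as a discounted sum of the non-health share of per capita GDP over the years of life lost per death. All inputs below are hypothetical placeholders, not the study's country data.

```python
def nonhealth_gdp_loss_per_death(gdp_per_capita, nonhealth_share,
                                 years_of_life_lost, discount_rate):
    """Present value of future non-health GDP lost per death: each remaining
    life year contributes the non-health share of per capita GDP,
    discounted back to the year of death."""
    return sum(gdp_per_capita * nonhealth_share / (1.0 + discount_rate) ** t
               for t in range(1, years_of_life_lost + 1))

# Hypothetical inputs: Int$5000 per capita GDP, 90% non-health share,
# 30 discounted life years lost, at the base-case and a sensitivity rate.
base = nonhealth_gdp_loss_per_death(5000.0, 0.9, 30, 0.03)
high = nonhealth_gdp_loss_per_death(5000.0, 0.9, 30, 0.10)
```

A higher discount rate shrinks the present value, which is exactly what the study's one-way sensitivity analysis at 5 and 10 % probes.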
Estimating annualized earthquake losses for the conterminous United States
Jaiswal, Kishor S.; Bausch, Douglas; Chen, Rui; Bouabid, Jawhar; Seligson, Hope
2015-01-01
We make use of the most recent National Seismic Hazard Maps (the years 2008 and 2014 cycles), updated census data on population, and economic exposure estimates of general building stock to quantify annualized earthquake loss (AEL) for the conterminous United States. The AEL analyses were performed using the Federal Emergency Management Agency's (FEMA) Hazus software, which facilitated a systematic comparison of the influence of the 2014 National Seismic Hazard Maps in terms of annualized loss estimates in different parts of the country. The losses from an individual earthquake could easily exceed many tens of billions of dollars, and the long-term averaged value of losses from all earthquakes within the conterminous U.S. has been estimated to be a few billion dollars per year. This study estimated nationwide losses to be approximately $4.5 billion per year (in 2012$), roughly 80% of which can be attributed to the States of California, Oregon and Washington. We document the change in estimated AELs arising solely from the change in the assumed hazard map. The change from the 2008 map to the 2014 map results in a 10 to 20% reduction in AELs for the highly seismic States of the Western United States, whereas the reduction is even more significant for Central and Eastern United States.
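Conceptually, an annualized loss is a rate-weighted sum of scenario losses; Hazus aggregates this over its full hazard and exposure model, but the core arithmetic can be shown with a hypothetical toy scenario set.

```python
def annualized_loss(scenarios):
    """Annualized loss: sum of (annual occurrence rate x loss) over a set of
    scenario earthquakes, i.e. the long-term average loss per year."""
    return sum(rate * loss for rate, loss in scenarios)

# Hypothetical scenario set: (annual rate, loss in billions of dollars).
# A 1-in-50-year event causing $10B, a 1-in-200-year event causing $60B,
# and a 1-in-1000-year event causing $200B.
scenarios = [(1 / 50.0, 10.0), (1 / 200.0, 60.0), (1 / 1000.0, 200.0)]
ael = annualized_loss(scenarios)   # billions of dollars per year
```

This is why an individual event can cost tens of billions while the long-term average stays at a few billion per year: rare large losses are diluted by their low annual rates.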
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortin, Dominique; Ng, Angela; Tsang, Derek
Purpose: The increased sparing of normal tissues in intensity modulated proton therapy (IMPT) in pediatric brain tumor treatments should translate into improved neurocognitive outcomes. Models were used to estimate the intelligence quotient (IQ) and the risk of hearing loss 5 years post radiotherapy and to compare outcomes of proton against photon therapy in pediatric brain tumors. Methods: Patients who had received intensity modulated radiotherapy (IMRT) were randomly selected from our retrospective database. The existing planning CT and contours were used to generate IMPT plans. The RBE-corrected dose was calculated for both IMPT and IMRT. For each patient, the IQ was estimated via a Monte Carlo technique, whereas the reported incidence of hearing loss as a function of cochlear dose was used to estimate the probability of occurrence. Results: The integrated brain dose was reduced in all IMPT plans, translating into a gain of 2 IQ points on average for protons for the whole cohort at 5 years post-treatment. In terms of specific diseases, the gains in IQ ranged from 0.8 points for medulloblastoma to 2.7 points for craniopharyngioma. Hearing loss probability was evaluated on a per-ear basis and was found to be systematically lower for proton versus photon therapy: overall 2.9% versus 7.2%. Conclusions: A method was developed to predict IQ and hearing outcomes in pediatric brain tumor patients on a case-by-case basis. A modest gain was systematically observed for protons in all patients. Given the uncertainties within the model used and our reinterpretation, these gains may be underestimated.
Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR
Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.
2015-01-01
Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
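The elevation-differencing and error-propagation steps can be sketched as below. The cell size, volumetric carbon density, LiDAR error, and grid values are illustrative stand-ins, not the study's inputs.

```python
import random

def carbon_loss(dz_m, cell_area_m2, carbon_density):
    """Total soil carbon loss (kg C): elevation drop per cell (m) times cell
    area (m^2) times carbon mass per unit volume of consumed peat (kg C/m^3)."""
    return sum(dz * cell_area_m2 * carbon_density for dz in dz_m)

def loss_uncertainty(dz_m, cell_area_m2, carbon_density, sigma_z, n_iter=1000):
    """Propagate LiDAR elevation error: perturb each cell's elevation change
    with N(0, sigma_z) noise, recompute the loss, and report the spread."""
    rng = random.Random(0)                  # fixed seed for reproducibility
    samples = []
    for _ in range(n_iter):
        perturbed = [dz + rng.gauss(0.0, sigma_z) for dz in dz_m]
        samples.append(carbon_loss(perturbed, cell_area_m2, carbon_density))
    mean = sum(samples) / n_iter
    var = sum((s - mean) ** 2 for s in samples) / (n_iter - 1)
    return mean, var ** 0.5

# Hypothetical grid: four 1 m^2 cells burned ~0.47 m deep; ~94 kg C/m^3 of
# consumed peat makes 0.47 m of loss roughly 44 kg C/m^2, as in the abstract.
dz = [0.45, 0.50, 0.46, 0.47]
total = carbon_loss(dz, 1.0, 94.0)
mc_mean, mc_sd = loss_uncertainty(dz, 1.0, 94.0, sigma_z=0.1)
```

Because independent per-cell errors average out over many cells, the standard deviation of the total loss stays small relative to the total, which mirrors the paper's conclusion that LiDAR error contributes little uncertainty when consumption is deep.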
Enhanced Assimilation of InSAR Displacement and Well Data for Groundwater Monitoring
NASA Astrophysics Data System (ADS)
Abdullin, A.; Jonsson, S.
2016-12-01
Ground deformation related to aquifer exploitation can cause damage to buildings and infrastructure, leading to major economic losses and sometimes even loss of human lives. Understanding reservoir behavior helps in assessing possible future ground movement and the water depletion hazard of a region under study. We have developed an InSAR-based data assimilation framework for groundwater reservoirs that efficiently incorporates InSAR data for improved reservoir management and forecasts. InSAR displacement data are integrated with the groundwater modeling software MODFLOW using ensemble-based assimilation approaches. We have examined several ensemble methods for updating model parameters such as hydraulic conductivity and model variables such as pressure head while simultaneously providing an estimate of the uncertainty. A realistic three-dimensional aquifer model was built to demonstrate the capability of the ensemble methods incorporating InSAR-derived displacement measurements. We find from these numerical tests that including both ground deformation and well water level data as observations improves the RMSE of the hydraulic conductivity estimate by up to 20% compared to using only one type of observation. The RMSE of this estimate after the final time step is similar for the Ensemble Kalman Filter (EnKF), Ensemble Smoother (ES) and ES with multiple data assimilation (ES-MDA) methods. The results suggest that the high spatial and temporal resolution subsidence observations from InSAR are very helpful for accurately quantifying hydraulic parameters. We have tested the framework on several different examples and have found good performance in improving aquifer property estimation, which should prove useful for groundwater management. Our ongoing work focuses on assimilating real InSAR-derived time series and hydraulic head data for calibrating and predicting aquifer properties of basin-wide groundwater systems.
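A scalar, perturbed-observation EnKF analysis step, the core update shared by the ensemble methods compared here, can be sketched as follows. The ensemble values and observation are hypothetical; a real MODFLOW-coupled implementation updates full parameter and state fields.

```python
import random

def enkf_update(ensemble, obs, obs_err_sd, h=lambda x: x, seed=1):
    """One scalar EnKF analysis step with perturbed observations: the Kalman
    gain is formed from ensemble sample (co)variances, so no explicit model
    covariance matrix is ever built."""
    rng = random.Random(seed)
    n = len(ensemble)
    hx = [h(x) for x in ensemble]                 # predicted observations
    xm = sum(ensemble) / n
    hm = sum(hx) / n
    p_xh = sum((x - xm) * (y - hm) for x, y in zip(ensemble, hx)) / (n - 1)
    p_hh = sum((y - hm) ** 2 for y in hx) / (n - 1)
    gain = p_xh / (p_hh + obs_err_sd ** 2)
    # Each member is nudged toward its own perturbed copy of the observation.
    return [x + gain * (obs + rng.gauss(0.0, obs_err_sd) - y)
            for x, y in zip(ensemble, hx)]

# Hypothetical example: a spread-out log-conductivity prior ensemble pulled
# toward an accurate observation (identity observation operator).
prior = [-1.0, 0.0, 1.0, 2.0, -2.0, 0.5, 1.5, -0.5]
posterior = enkf_update(prior, obs=1.2, obs_err_sd=0.1)
```

The smoother variants (ES, ES-MDA) use the same gain structure but apply it over whole time windows or in several inflated-error passes.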
Partial knowledge, entropy, and estimation
MacQueen, James; Marschak, Jacob
1975-01-01
In a growing body of literature, available partial knowledge is used to estimate the prior probability distribution p≡(p1,...,pn) by maximizing entropy H(p)≡-Σpi log pi, subject to constraints on p which express that partial knowledge. The method has been applied to distributions of income, of traffic, of stock-price changes, and of types of brand-article purchases. We shall respond to two justifications given for the method: (α) It is “conservative,” and therefore good, to maximize “uncertainty,” as (uniquely) represented by the entropy parameter. (β) One should apply the mathematics of statistical thermodynamics, which implies that the most probable distribution has highest entropy. Reason (α) is rejected. Reason (β) is valid when “complete ignorance” is defined in a particular way and both the constraint and the estimator's loss function are of certain kinds. PMID:16578733
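For a finite support with a single mean constraint, the entropy-maximizing distribution takes the familiar Gibbs/exponential form; the sketch below solves the classic loaded-die example, which is an illustration and not one of the applications cited in the abstract.

```python
import math

def maxent_distribution(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution on a finite support subject to a fixed
    mean: the solution is the Gibbs form p_i = exp(-lam * x_i) / Z, with
    the multiplier lam found here by bisection on the implied mean."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid        # larger lam shifts weight toward smaller values
        else:
            hi = mid
        if hi - lo < tol:
            break
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Die faces 1..6 with constrained mean 4.5: above the uniform mean of 3.5,
# so the maxent weights tilt monotonically toward the larger faces.
p = maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)
```

With no constraint beyond normalization the same machinery returns the uniform distribution, which is the "complete ignorance" case the abstract's justification (β) turns on.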
Proportional hazards model with varying coefficients for length-biased data.
Zhang, Feipeng; Chen, Xuerong; Zhou, Yong
2014-01-01
Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss of follow-up or the end of the study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study the nonlinear interaction effects of covariates with an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate the applications of the proposed method.
Exergetic analysis of autonomous power complex for drilling rig
NASA Astrophysics Data System (ADS)
Lebedev, V. A.; Karabuta, V. S.
2017-10-01
The article considers the issue of increasing the energy efficiency of the power equipment of a drilling rig. At present, diverse types of power plants are used in power supply systems. When designing and choosing a power plant, one of the main criteria is its energy efficiency, the usual indicator being the effective efficiency factor calculated by the method of thermal balances. The article suggests instead using the exergy method to determine energy efficiency, which makes it possible to estimate the degree of thermodynamic perfection of the system, illustrated here for a gas turbine plant through both a relative estimate (the exergetic efficiency factor) and an absolute one. An exergetic analysis of a gas turbine plant operating in a simple cycle was carried out using the program WaterSteamPro, and the exergy losses in the equipment elements are calculated.
Faith, Daniel P
2015-02-19
The phylogenetic diversity measure ('PD') quantifies the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
An Improved Mathematical Scheme for LTE-Advanced Coexistence with FM Broadcasting Service
Shamsan, Zaid Ahmed; Al-Hetar, Abdulaziz M.
2016-01-01
Power spectral density (PSD) overlapping analysis is considered the surest approach to evaluate feasibility of compatibility between wireless communication systems. In this paper, a new closed-form for the Interference Signal Power Attenuation (ISPA) is mathematically derived to evaluate interference caused from Orthogonal Frequency Division Multiplexing (OFDM)-based Long Term Evolution (LTE)-Advanced into Frequency Modulation (FM) broadcasting service. In this scheme, ISPA loss due to PSD overlapping of both OFDM-based LTE-Advanced and FM broadcasting service is computed. The proposed model can estimate power attenuation loss more precisely than the Advanced Minimum Coupling Loss (A-MCL) and approximate-ISPA methods. Numerical results demonstrate that the interference power is less than that obtained using the A-MCL and approximate ISPA methods by 2.8 and 1.5 dB at the co-channel and by 5.2 and 2.2 dB at the adjacent channel with null guard band, respectively. The outperformance of this scheme over the other methods leads to a greater reduction in the required physical separation distance between the two systems, which ultimately supports efficient use of the radio frequency spectrum. PMID:27855216
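The PSD-overlap idea can be illustrated by numerically integrating a simplified transmitter PSD over the victim receiver's band. The sinc-squared OFDM spectrum and all frequencies below are generic illustrations, not the closed-form ISPA expression derived in the paper.

```python
import math

def ofdm_psd(f, subcarrier_freqs, df):
    """Simplified OFDM power spectral density: a sum of sinc^2 shapes, one
    per subcarrier with spacing df (unit power per subcarrier)."""
    total = 0.0
    for fc in subcarrier_freqs:
        u = (f - fc) / df
        s = 1.0 if abs(u) < 1e-12 else math.sin(math.pi * u) / (math.pi * u)
        total += s * s
    return total

def interference_power(subcarrier_freqs, df, victim_lo, victim_hi, n=2000):
    """Midpoint-rule integral of the transmitter PSD over the victim band:
    the PSD-overlap quantity behind ISPA-type estimates."""
    step = (victim_hi - victim_lo) / n
    return sum(ofdm_psd(victim_lo + (i + 0.5) * step, subcarrier_freqs, df)
               for i in range(n)) * step

# Hypothetical numbers: 16 subcarriers at 15 kHz spacing; a 200 kHz-wide
# victim channel, evaluated co-channel and offset by 400 kHz.
df = 15e3
subs = [i * df for i in range(16)]
co = interference_power(subs, df, -100e3, 100e3)
adj = interference_power(subs, df, 300e3, 500e3)
```

Co-channel overlap captures subcarrier main lobes while the offset band only sees sidelobe roll-off, which is why interference (and hence required separation distance) falls as the frequency offset grows.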
Measuring food intake with digital photography
Martin, Corby K.; Nicklas, Theresa; Gunturk, Bahadir; Correa, John B.; Allen, H. Raymond; Champagne, Catherine
2014-01-01
The Digital Photography of Foods Method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared to images of "standard" portions of food using a computer application. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. Herein, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on Smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a Smartphone and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analyzed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behavior and to receive dietary recommendations to achieve weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children will also be reviewed. The body of research reviewed herein demonstrates that digital imaging accurately estimates food intake in many environments and it has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and incorporation of computer automation to improve the accuracy, efficiency, and the cost-effectiveness of the method. PMID:23848588
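The arithmetic at the core of the method, comparing selection and leftover images against a standard portion, reduces to a simple ratio calculation; the portion ratios and nutrient values below are hypothetical illustrations.

```python
def estimate_intake(selected_fraction, leftover_fraction, standard_portion):
    """Digital-photography intake estimate: the fraction of a standard
    portion selected minus the fraction left over, applied to each
    nutrient of the standard portion."""
    eaten = selected_fraction - leftover_fraction
    return {nutrient: eaten * amount
            for nutrient, amount in standard_portion.items()}

# Hypothetical standard portion (from a reference image) and image-based
# ratings: the tray shows 1.5 standard portions selected, 0.25 left over.
standard = {"energy_kcal": 250.0, "protein_g": 12.0}
intake = estimate_intake(1.5, 0.25, standard)
```

In the real method the two fractions come from trained raters (or automated comparison) of the captured images, and the nutrient table comes from a food database; the computation after that point is exactly this multiplication.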
Productivity losses and public finance burden attributable to breast cancer in Poland, 2010-2014.
Łyszczarz, Błażej; Nojszewska, Ewelina
2017-10-10
Apart from the health and social burden of the disease, breast cancer (BC) has important economic implications for the sick, health system and whole economy. There has been a growing interest in the economic aspects of breast cancer and analyses of the disease costs seem to be the most explored topic. However, the results from these studies are hardly comparable. With this study we aim to contribute to the field by providing estimates of productivity losses and public finance burden attributable to BC in Poland. We used retrospective prevalence-based top-down approach to estimate the productivity losses (indirect costs) of BC in Poland in the period 2010-2014. Human capital method (HCM) and societal perspective were used to estimate the costs of: absenteeism of the sick and caregivers, presenteeism of the sick and caregivers, disability, and premature mortality. We also used figures illustrating public finance burden attributable to the disease. Deterministic sensitivity analysis was performed to assess the stability of the estimates. A variety of data sources were used with the social insurance system and Polish National Cancer Registry being the most important ones. Productivity losses associated with BC in Poland were €583.7 million in 2010 and they increased to €699.7 million in 2014. Throughout the period these costs accounted for 0.162-0.171% of GDP, an equivalent of 62,531-65,816 per capita GDP. Losses attributable to disability and premature mortality proved to be the major cost drivers with 27.6%-30.6% and 22.0%-24.6% of the total costs respectively. The costs due to caregivers' presenteeism were negligible (0.1% of total costs). Public finance expenditure for social insurance benefits to BC sufferers ranged from €50.2 million (2010) to €56.6 million (2014), an equivalent of 0.72-0.79% of expenditures for all diseases. Potential losses in public finance revenues accounted for €173.9 million in 2010 and €211.0 million in 2014. 
Sensitivity analysis showed that the results were robust to changes in the model parameters. The productivity losses attributable to BC in Poland were a sizable burden for the society. They contributed both to decreased economy output and to public finance deficit.
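The human capital method and the one-way sensitivity analysis mentioned above can be sketched as a present-value calculation re-run at several discount rates. The earnings level, horizon, and growth rate below are hypothetical, not the study's inputs.

```python
def mortality_cost(annual_earnings, years_lost, growth, discount):
    """Human capital method: present value of earnings lost to one premature
    death, assuming wage growth g and discount rate r over the remaining
    working years."""
    return sum(annual_earnings * (1.0 + growth) ** t / (1.0 + discount) ** t
               for t in range(1, years_lost + 1))

# One-way sensitivity analysis on the discount rate (hypothetical inputs):
base_case = mortality_cost(12000.0, 20, growth=0.02, discount=0.03)
sensitivity = {r: mortality_cost(12000.0, 20, 0.02, r)
               for r in (0.0, 0.03, 0.05, 0.10)}
```

Summing such per-case costs over all deaths (and adding absenteeism, presenteeism and disability components) yields the kind of aggregate productivity-loss figures reported in the abstract.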
Regional soil erosion assessment based on a sample survey and geostatistics
NASA Astrophysics Data System (ADS)
Yin, Shuiqing; Zhu, Zhengyuan; Wang, Li; Liu, Baoyuan; Xie, Yun; Wang, Guannan; Li, Yishan
2018-03-01
Soil erosion is one of the most significant environmental problems in China. From 2010 to 2012, the fourth national census for soil erosion sampled 32 364 PSUs (Primary Sampling Units, small watersheds) with areas of 0.2-3 km2. Land use and soil erosion controlling factors including rainfall erosivity, soil erodibility, slope length, slope steepness, biological practice, engineering practice, and tillage practice for the PSUs were surveyed, and the soil loss rate for each land use in the PSUs was estimated using an empirical model, the Chinese Soil Loss Equation (CSLE). Although the information collected from the sample units can be aggregated to estimate soil erosion conditions on a large scale, the problem of estimating soil erosion condition on a regional scale has not been addressed well. The aim of this study is to introduce a new model-based regional soil erosion assessment method combining a sample survey and geostatistics. We compared seven spatial interpolation models based on the bivariate penalized spline over triangulation (BPST) method to generate a regional soil erosion assessment from the PSUs. Shaanxi Province (3116 PSUs) in China was selected for the comparison and assessment as it is one of the areas with the most serious erosion problems. Ten-fold cross-validation based on the PSU data showed the model assisted by the land use, rainfall erosivity factor (R), soil erodibility factor (K), slope steepness factor (S), and slope length factor (L) derived from a 1 : 10 000 topography map is the best one, with the model efficiency coefficient (ME) being 0.75 and the MSE being 55.8 % of that for the model assisted by the land use alone. Among the four erosion factors used as covariates, the S factor contributed the most information, followed by the K and L factors, while the R factor made almost no contribution to the spatial estimation of soil loss. 
The LS factor derived from 30 or 90 m Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) data worsened the estimation when used as the covariates for the interpolation of soil loss. Due to the unavailability of a 1 : 10 000 topography map for the entire area in this study, the model assisted by the land use, R, and K factors, with a resolution of 250 m, was used to generate the regional assessment of the soil erosion for Shaanxi Province. It demonstrated that 54.3 % of total land in Shaanxi Province had annual soil loss equal to or greater than 5 t ha-1 yr-1. High (20-40 t ha-1 yr-1), severe (40-80 t ha-1 yr-1), and extreme ( > 80 t ha-1 yr-1) erosion occupied 14.0 % of the total land. The dry land and irrigated land, forest, shrubland, and grassland in Shaanxi Province had mean soil loss rates of 21.77, 3.51, 10.00, and 7.27 t ha-1 yr-1, respectively. Annual soil loss was about 207.3 Mt in Shaanxi Province, with 68.9 % of soil loss originating from the farmlands and grasslands in Yan'an and Yulin districts in the northern Loess Plateau region and Ankang and Hanzhong districts in the southern Qingba mountainous region. This methodology provides a more accurate regional soil erosion assessment and can help policymakers to take effective measures to mediate soil erosion risks.
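The PSU-level soil loss in this census is computed with the multiplicative CSLE. The sketch below uses hypothetical factor values, and the intensity classes follow the bands quoted above; the "moderate" band is an assumption filling the unstated 5-20 t/ha/yr range.

```python
def csle_soil_loss(r, k, l, s, b, e, t):
    """CSLE (USLE-style multiplicative model): annual soil loss A (t/ha/yr)
    as the product of rainfall erosivity R, soil erodibility K, slope
    length L, slope steepness S, biological practice B, engineering
    practice E and tillage practice T factors."""
    return r * k * l * s * b * e * t

def erosion_class(a):
    """Map an annual soil loss rate (t/ha/yr) to intensity classes."""
    if a < 5:
        return "slight"
    if a < 20:
        return "moderate"   # assumed band between the quoted thresholds
    if a < 40:
        return "high"
    if a < 80:
        return "severe"
    return "extreme"

# Hypothetical factor values for one land-use parcel within a PSU.
a = csle_soil_loss(r=2500.0, k=0.004, l=1.4, s=2.0, b=0.4, e=1.0, t=0.8)
```

Because the model is purely multiplicative, halving any single factor (for example B, through better vegetation cover) halves the predicted loss, which is what makes the factor surveys in each PSU directly usable.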
Method of estimation of scanning system quality
NASA Astrophysics Data System (ADS)
Larkin, Eugene; Kotov, Vladislav; Kotova, Natalya; Privalov, Alexander
2018-04-01
Estimation of scanner parameters is an important part of developing an electronic document management system. This paper suggests considering the scanner as a system that contains two main channels: a photoelectric conversion channel and a channel for measuring the spatial coordinates of objects. Although both channels consist of the same elements, the testing of their parameters should be executed separately. A special structure of the two-dimensional reference signal is offered for this purpose. In this structure, the fields for testing various parameters of the scanner are spatially separated. Characteristics of the scanner are associated with the loss of information when a document is digitized. Methods to test grayscale transfer ability, resolution and aberration level are offered.
Well-being losses due to care-giving
van den Berg, Bernard; Fiebig, Denzil G.; Hall, Jane
2014-01-01
This paper estimates the impact of informal caregiving on self-reported well-being. It uses a sample of 23,285 respondents of the first eleven waves of the Household, Income and Labour Dynamics in Australia (HILDA). We apply a relatively new analytical method that enables us to estimate fixed effects ordered logit to analyse subjective well-being. The econometric estimates show that providing informal care has a negative effect on subjective well-being. The empirical evidence of our paper could be helpful to inform policy makers to better understand the impact of caregiving and design the appropriate long term care policies and support services. PMID:24662888
An Establishment of Rainfall-induced Soil Erosion Index for the Slope Land in Watershed
NASA Astrophysics Data System (ADS)
Tsai, Kuang-Jung; Chen, Yie-Ruey; Hsieh, Shun-Chieh; Shu, Chia-Chun; Chen, Ying-Hui
2014-05-01
With extreme rainfall events becoming more concentrated as a result of climate change, mass soil erosion has occurred frequently in Taiwan and has led to sediment-related disasters in high-intensity precipitation regions during typhoons or torrential rainstorms. These disasters cause severe losses of property and public infrastructure and even casualties among residents of the affected areas. Therefore, we collected soil-loss data through field investigations in upstream watersheds near specific rivers to explore the soil erosion caused by heavy rainfall under different natural environments. Soil losses induced by rainfall and runoff were obtained from long-term soil depth measurements at erosion plots established in the field and were used to estimate the total volume of soil erosion. Furthermore, the soil erosion index was obtained by referring to the natural environment of the erosion test plots and the Universal Soil Loss Equation (USLE). All data collected in the field were compared with data obtained from the laboratory test recommended by the Technical Regulation for Soil and Water Conservation in Taiwan. With MATLAB as a modeling platform, an evaluation model for soil erodibility factors was obtained by the golden section search method, considering factors contributing to soil erosion such as degree of slope, soil texture, slope aspect, distance from the water system, topographic elevation, and normalized difference vegetation index (NDVI). The distribution map of the soil erosion index was developed in this project and used to estimate rainfall-induced soil losses from the erosion plots that have been established in the study area since 2008. All results indicated that soil erodibility increases with accumulated rainfall amount regardless of the soil characteristics measured in the field. 
Under the same accumulated rainfall amount, the volume of soil erosion also increases with the degree of slope and soil permeability, but decreases with the shear strength of the top soil within 30 cm and the coverage of vegetation. The slope plays a more important role than soil permeability in soil erosion. However, soil losses are not proportional to the hardness of the top soil or subsurface soil. The empirical formula, integrated with the soil erosion index map for evaluating soil erodibility obtained from the optimal numerical search method, can be used to estimate the soil losses induced by rainfall and runoff erosion on slope land in Taiwan. Keywords: Erosion Test Plot, Soil Erosion, Optimal Numerical Search, Universal Soil Loss Equation.
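The golden section search used to fit the erodibility model is a standard bracketing minimizer for a unimodal one-dimensional objective. The sketch below applies it to a toy least-squares calibration, not the project's actual multi-factor model.

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal function on
    [a, b]: each iteration discards the sub-interval that cannot contain
    the minimum, keeping the bracket ratio at the golden ratio."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0      # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Toy calibration: choose the erodibility coefficient k that minimizes the
# sum of squared differences between modeled (k * driver) and observed loss.
drivers = [1.0, 2.0, 3.0, 4.0]
observed = [2.1, 3.9, 6.2, 7.8]
sse = lambda k: sum((k * x - y) ** 2 for x, y in zip(drivers, observed))
k_best = golden_section_min(sse, 0.0, 10.0)
```

This version re-evaluates both interior points each iteration for clarity; the classical formulation reuses one evaluation per step, halving the cost without changing the result.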
Boulanger, Guillaume; Bayeux, Thomas; Mandin, Corinne; Kirchner, Séverine; Vergriette, Benoit; Pernelet-Joly, Valérie; Kopp, Pierre
2017-07-01
An evaluation of the socio-economic costs of indoor air pollution can facilitate the development of appropriate public policies. For the first time in France, such an evaluation was conducted for six selected pollutants: benzene, trichloroethylene, radon, carbon monoxide, particles (PM 2.5 fraction), and environmental tobacco smoke (ETS). The health impacts of indoor exposure were either already available in published works or were calculated. For these calculations, two approaches were followed depending on the available data: the first followed the principles of quantitative health risk assessment, and the second was based on concepts and methods related to the health impact assessment. For both approaches, toxicological data and indoor concentrations related to each target pollutant were used. External costs resulting from mortality, morbidity (life quality loss) and production losses attributable to these health impacts were assessed. In addition, the monetary costs for the public were determined. Indoor pollution associated with the selected pollutants was estimated to have cost approximately €20 billion in France in 2004. Particles contributed the most to the total cost (75%), followed by radon. Premature death and the costs of the quality of life loss accounted for approximately 90% of the total cost. Despite the use of different methods and data, similar evaluations previously conducted in other countries yielded figures within the same order of magnitude. Copyright © 2017 Elsevier Ltd. All rights reserved.
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error of estimated to test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
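The step-forward idea behind SPRE, the next outcome estimated as a ratio of two-parameter Weibull densities times the prior outcome, can be sketched as below. Function names and parameter values are illustrative assumptions, not taken from the paper:

```python
import math

def weibull_pdf(t, shape, scale):
    """Two-parameter Weibull density."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def spre_step(prev_outcome, t_prev, t_next, shape, scale):
    """Step an estimated outcome forward in time as a ratio of Weibull
    densities times the prior outcome (illustrative form of the SPRE idea);
    shape and scale would be estimated at the change point."""
    return prev_outcome * weibull_pdf(t_next, shape, scale) / weibull_pdf(t_prev, shape, scale)
```

Iterating `spre_step` from the change point produces the long-term trajectory from short-term parameters.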
Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2018-06-11
We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and save cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In a numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field could sufficiently achieve minimal boundary losses and phase error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM. Copyright © 2017. Published by Elsevier Inc.
Estimation of mean response via effective balancing score
Hu, Zonghui; Follmann, Dean A.; Wang, Naisyin
2015-01-01
We introduce effective balancing scores for estimation of the mean response under a missing at random mechanism. Unlike conventional balancing scores, the effective balancing scores are constructed via dimension reduction free of model specification. Three types of effective balancing scores are introduced: those that carry the covariate information about the missingness, the response, or both. They lead to consistent estimation with little or no loss in efficiency. Compared to existing estimators, the effective balancing score based estimator relieves the burden of model specification and is the most robust. It is a near-automatic procedure which is most appealing when high dimensional covariates are involved. We investigate both the asymptotic and the numerical properties, and demonstrate the proposed method in a study on Human Immunodeficiency Virus disease. PMID:25797955
Jones, Kelly W; Lewis, David J
2015-01-01
Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. 
The Ecuador case illustrates that if time-invariant unobservables are not present, matching combined with differences in means or cross-sectional regression leads to similar estimates of program effectiveness as matching combined with fixed effects panel regression. These results highlight the importance of considering observable and unobservable forms of bias and the methodological assumptions across estimators when designing an impact evaluation of conservation programs.
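The contrast between a cross-sectional difference in means and a fixed-effects (first-difference) panel estimator can be shown on toy data in which a time-invariant unobservable is correlated with treatment. This is a minimal sketch of the general idea, not the authors' estimation code; the numbers are invented:

```python
def diff_in_means(outcomes, treated):
    """Cross-sectional estimator: treated mean minus control mean."""
    t = [y for y, d in zip(outcomes, treated) if d]
    c = [y for y, d in zip(outcomes, treated) if not d]
    return sum(t) / len(t) - sum(c) / len(c)

def two_period_fe(y0, y1, treated):
    """Two-period fixed-effects estimator: first-differencing removes
    unit-level time-invariant unobservables, then compare differences."""
    d = [b - a for a, b in zip(y0, y1)]
    return diff_in_means(d, treated)

# Toy panel: unit fixed effects (10, 12 vs 2, 4) are correlated with
# treatment; the true treatment effect on forest cover is -5.
y0 = [10.0, 12.0, 2.0, 4.0]              # pre-treatment outcomes
y1 = [5.0, 7.0, 2.0, 4.0]                # post-treatment outcomes
treated = [1, 1, 0, 0]
```

On these data the fixed-effects estimate recovers -5 exactly, while the post-period difference in means is biased by the unobservables.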
Effects of Statewide Job Losses on Adolescent Suicide-Related Behaviors
Ananat, Elizabeth Oltmans; Gibson-Davis, Christina M.
2014-01-01
Objectives. We investigated the impact of statewide job loss on adolescent suicide-related behaviors. Methods. We used 1997 to 2009 data from the Youth Risk Behavior Survey and the Bureau of Labor Statistics to estimate the effects of statewide job loss on adolescents’ suicidal ideation, suicide attempts, and suicide plans. Probit regression models controlled for demographic characteristics, state of residence, and year; samples were divided according to gender and race/ethnicity. Results. Statewide job losses during the year preceding the survey increased girls’ probability of suicidal ideation and suicide plans and non-Hispanic Black adolescents’ probability of suicidal ideation, suicide plans, and suicide attempts. Job losses among 1% of a state’s working-age population increased the probability of girls and Blacks reporting suicide-related behaviors by 2 to 3 percentage points. Job losses did not affect the suicide-related behaviors of boys, non-Hispanic Whites, or Hispanics. The results were robust to the inclusion of other state economic characteristics. Conclusions. Like adults, adolescents are affected by economic downturns. Our findings show that statewide job loss increases adolescent girls’ and non-Hispanic Blacks’ suicide-related behaviors. PMID:25122027
De la Torre, Daniel; Sierra, Maria Jose
2007-01-01
The approach developed by Fuhrer in 1995 to estimate wheat yield losses induced by ozone and modulated by the soil water content (SWC) was applied to the data on Catalonian wheat yields. The aim of our work was to apply this approach and adjust it to Mediterranean environmental conditions by means of the necessary corrections. The main objective pursued was to prove the importance of soil water availability in the estimation of relative wheat yield losses as a factor that modifies the effects of tropospheric ozone on wheat, and to develop the algorithms required for the estimation of relative yield losses, adapted to the Mediterranean environmental conditions. The results show that this is an easy way to estimate relative yield losses using only meteorological data, without ozone fluxes, which are much more difficult to calculate. Soil water availability is very important as a modulating factor of the effects of ozone on wheat; when soil water availability decreases, almost twice the amount of accumulated exposure to ozone is required to induce the same percentage of yield loss as in years when soil water availability is high. PMID:17619747
Navrud, Ståle; Tuan, Tran Huu; Tinh, Bui Duc
2012-01-01
Background Natural disasters have severe impacts on the health and well-being of affected households. However, we find evidence that official damage cost assessments for floods and other natural disasters in Vietnam, where households have little or no insurance, clearly underestimate the total economic damage costs of these events as they do not include the welfare loss from mortality, morbidity and well-being experienced by the households affected by the floods. This should send a message to the local communities and national authorities that higher investments in flood alleviation, reduction and adaptive measures can be justified since the social benefits of these measures in terms of avoided damage costs are higher than previously thought. Methods We pioneer the use of the contingent valuation (CV) approach of willingness-to-contribute (WTC) labour to a flood prevention program, as a measure of the welfare loss experienced by households due to a flooding event. In a face-to-face household survey of 706 households in the Quang Nam province in Central Vietnam, we applied this approach together with reported direct physical damage in order to shed light on the welfare loss experienced by the households. We asked about households’ WTC labour and multiplied their WTC person-days of labour by an estimate for their opportunity cost of time in order to estimate the welfare loss to households from the 2007 floods. Results The results showed that this contingent valuation (CV) approach of asking about willingness-to-pay in-kind avoided the main problems associated with applying CV in developing countries. Conclusion Thus, the CV approach of WTC labour instead of money is promising in terms of capturing the total welfare loss of natural disasters, and promising in terms of further application in other developing countries and for other types of natural disasters. PMID:22761603
Yelin, Edward; Murphy, Louise; Cisternas, Miriam G.; Foreman, Aimee J.; Pasta, David J.; Helmick, Charles G.
2010-01-01
Objective To obtain estimates of medical care expenditures and earnings losses associated with arthritis and other rheumatic conditions and the increment in such costs attributable to arthritis and other rheumatic conditions in the US in 2003, and to compare these estimates with those from 1997. Methods Estimates for 2003 were derived from the Medical Expenditures Panel Survey (MEPS), a national probability sample of households. We tabulated medical care expenditures of adult MEPS respondents, stratified by arthritis and comorbidity status, and used regression techniques to estimate the increment of medical care expenditures attributable to arthritis and other rheumatic conditions. We also estimated the earnings losses sustained by working-age adults with arthritis and other rheumatic conditions. Estimates for 2003 were compared with those from 1997, inflated to 2003 terms. Results In 2003, there were 46.1 million adults with arthritis and other rheumatic conditions (versus 36.8 million in 1997). Adults with arthritis and other rheumatic conditions incurred mean medical care expenditures of $6,978 in 2003 (versus $6,346 in 1997), of which $1,635 was for prescriptions ($899 in 1997). Expenditures for adults with arthritis and other rheumatic conditions totaled $321.8 billion in 2003 ($233.5 billion in 1997). In 2003, the mean increment in medical care expenditures attributable to arthritis and other rheumatic conditions was $1,752 ($1,762 in 1997), for a total of $80.8 billion ($64.8 billion in 1997). Persons with arthritis and other rheumatic conditions ages 18–64 years earned $3,613 less than other persons (versus $4,551 in 1997), for a total of $108.0 billion (versus $99.0 billion). Of this amount, $1,590 was attributable to arthritis and other rheumatic conditions (versus $1,946 in 1997), for a total of $47.0 billion ($43.3 billion in 1997). 
Conclusion Our findings indicate that the increase in medical care expenditures and earnings losses between 1997 and 2003 is due more to an increase in the number of persons with arthritis and other rheumatic conditions than to costs per case. PMID:17469096
Projection-based circular constrained state estimation and fusion over long-haul links
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Qiang; Rao, Nageswara S.
In this paper, we consider a scenario where sensors are deployed over a large geographical area for tracking a target with circular nonlinear constraints on its motion dynamics. The sensor state estimates are sent over long-haul networks to a remote fusion center for fusion. We are interested in different ways to incorporate the constraints into the estimation and fusion process in the presence of communication loss. In particular, we consider closed-form projection-based solutions, including rules for fusing the estimates and for incorporating the constraints, which jointly can guarantee timely fusion often required in real-time systems. We test the performance of these methods in the long-haul tracking environment using a simple example.
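The projection step for a circular constraint has a simple closed form: map the unconstrained position estimate to the nearest point on the constraint circle. The sketch below shows that generic nearest-point projection (the paper's fusion rules are not reproduced here):

```python
import math

def project_to_circle(x, y, cx, cy, r):
    """Project an unconstrained position estimate (x, y) onto the circular
    constraint ||p - c|| = r, i.e. the nearest point on the circle of
    radius r centered at (cx, cy)."""
    dx, dy = x - cx, y - cy
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return cx + r, cy  # estimate at the center: any circle point works
    return cx + r * dx / norm, cy + r * dy / norm
```

Applying this after each fusion update keeps the fused track on the known circular trajectory even when some sensor messages are lost.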
Quantification of the precipitation loss of radiation belt electrons observed by SAMPEX
NASA Astrophysics Data System (ADS)
Tu, Weichao; Selesnick, Richard; Li, Xinlin; Looper, Mark
2010-07-01
Based on SAMPEX/PET observations, the rates and the spatial and temporal variations of electron loss to the atmosphere in the Earth's radiation belt were quantified using a drift diffusion model that includes the effects of azimuthal drift and pitch angle diffusion. The electrons measured by SAMPEX can be distinguished as trapped, quasi-trapped (in the drift loss cone), and precipitating (in the bounce loss cone). The drift diffusion model simulates the low-altitude electron distribution from SAMPEX. After fitting the model results to the data, the magnitudes and variations of the electron lifetime can be quantitatively determined based on the optimum model parameter values. Three magnetic storms of different magnitudes were selected to estimate the various loss rates of ˜0.5-3 MeV electrons during different phases of the storms and at L shells ranging from L = 3.5 to L = 6.5 (L represents the radial distance in the equatorial plane under a dipole field approximation). The storms represent a small storm, a moderate storm from the current solar minimum, and an intense storm right after the previous solar maximum. Model results for the three individual events showed that fast precipitation losses of relativistic electrons, on timescales as short as hours, persistently occurred in the storm main phases, with more efficient loss at higher energies over a wide range of L regions and over all the SAMPEX-covered local times. In addition to this newly discovered common feature of main-phase electron loss for all the storm events and at all L locations, other properties of the electron loss rates, such as the local time and energy dependence that vary with time or location, were also estimated and discussed. This method combining the model with low-altitude observations provides direct quantification of the electron loss rate, a prerequisite for any comprehensive modeling of radiation belt electron dynamics.
Estimating Tool–Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool
Zhao, Baoliang; Nelson, Carl A.
2016-01-01
Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool–tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool–tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool–tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool–tissue interaction forces in real time, thereby increasing surgical efficiency and safety. PMID:27303591
Estimating Tool-Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool.
Zhao, Baoliang; Nelson, Carl A
2016-10-01
Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool-tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool-tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool-tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool-tissue interaction forces in real time, thereby increasing surgical efficiency and safety.
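The sensorless idea, inferring tool-tissue force from the driving motor's current, can be sketched as a static torque balance. The parameter names and values below are illustrative assumptions, not the prototype's calibration:

```python
def estimate_tip_force(current_a, torque_const_nm_per_a, gear_ratio,
                       friction_torque_nm, moment_arm_m):
    """Estimate grasp force from motor current: motor torque (torque
    constant times current) through the gear train, minus a constant
    friction term, divided by the jaw moment arm. A static sketch;
    a real implementation would also model dynamics and latency."""
    motor_torque = torque_const_nm_per_a * current_a
    joint_torque = gear_ratio * motor_torque - friction_torque_nm
    return max(joint_torque, 0.0) / moment_arm_m
```

With a calibrated torque constant and friction term, this gives a force estimate in real time without any sensor on the sterilizable tool.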
Li, Songhai; Wang, Ding; Wang, Kexiong; Hoffmann-Kuhnt, Matthias; Fernando, Nimal; Taylor, Elizabeth A; Lin, Wenzhi; Chen, Jialin; Ng, Timothy
2016-01-01
The hearing of a stranded Indo-Pacific humpback dolphin (Sousa chinensis) in Zhuhai, China, was measured. The age of this animal was estimated to be ~40 years. The animal's hearing was measured using a noninvasive auditory evoked potential (AEP) method. The results showed that the high-frequency hearing cutoff frequency of the studied dolphin was ~30-40 kHz lower than that of a younger conspecific ~13 years old. The lower high-frequency hearing range in the older dolphin was explained as a likely result of age-related hearing loss (presbycusis).
NASA Astrophysics Data System (ADS)
Fresnay, S.; Ponte, A. L.; Le Gentil, S.; Le Sommer, J.
2018-03-01
Several methods that reconstruct the three-dimensional ocean dynamics from sea level are presented and evaluated in the Gulf Stream region with a 1/60° realistic numerical simulation. The use of sea level is motivated by its better correlation with interior pressure or quasi-geostrophic potential vorticity (PV) compared to sea surface temperature and sea surface salinity, and, by its observability via satellite altimetry. The simplest method of reconstruction relies on a linear estimation of pressure at depth from sea level. Another method consists in linearly estimating PV from sea level first and then performing a PV inversion. The last method considered, labeled SQG for surface quasi-geostrophy, relies on a PV inversion but assumes no PV anomalies. The first two methods show comparable skill at levels above -800 m. They moderately outperform SQG which emphasizes the difficulty of estimating interior PV from surface variables. Over the 250-1,000 m depth range, the three methods skillfully reconstruct pressure at wavelengths between 500 and 200 km whereas they exhibit a rapid loss of skill between 200 and 100 km wavelengths. Applicability to a real case scenario and leads for improvements are discussed.
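The simplest reconstruction method described, a linear estimate of pressure at depth from sea level, amounts to fitting a per-depth least-squares regression on training data and applying it to new sea level fields. A generic sketch (the data and coefficients are illustrative, not from the 1/60° simulation):

```python
def fit_linear_transfer(eta, p_depth):
    """Least-squares slope/intercept mapping sea level (eta) to pressure
    at one depth level, fitted on a training sample."""
    n = len(eta)
    mx, my = sum(eta) / n, sum(p_depth) / n
    sxx = sum((x - mx) ** 2 for x in eta)
    sxy = sum((x - mx) * (y - my) for x, y in zip(eta, p_depth))
    a = sxy / sxx
    return a, my - a * mx

def reconstruct_pressure(eta, a, b):
    """Apply the fitted linear transfer to new sea level values."""
    return [a * x + b for x in eta]
```
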
Health technology assessment of non-invasive interventions for weight loss and body shape in Iran
Nojomi, Marzieh; Moradi-Lakeh, Maziar; Velayati, Ashraf; Naghibzadeh-Tahami, Ahmad; Dadgostar, Haleh; Ghorabi, Gholamhossein; Moradi-Joo, Mohammad; Yaghoubi, Mohsen
2016-01-01
Background: The burden of obesity and diet-related chronic diseases is increasing in Iran, and prevention and treatment strategies are needed to address this problem. The aim of this study was to determine the outcome, cost, safety and cost-consequence of non-invasive weight loss interventions in Iran. Methods: We performed a systematic review to compare non-invasive interventions (cryolipolysis and radiofrequency/ultrasonic cavitation) with semi-invasive (lipolysis) and invasive (liposuction) interventions. A sensitive electronic search was conducted to find available interventional studies. Reduction of abdomen circumference (cm), reduction in fat layer thickness (%) and weight reduction (kg) were the efficacy outcomes. Meta-analysis with random-effects models was used for pooling efficacy estimates among studies with the same follow-up duration. Average cost per intervention was estimated based on the capital, maintenance, staff, consumable and purchase costs. Results: Of 3,111 studies identified in our review, 13 studies assessed lipolysis, 10 cryolipolysis and 8 radiofrequency. Nine studies with the same follow-up duration in three different outcome groups were included in the meta-analysis. Radiofrequency showed an overall pooled estimate of 2.7 cm (95% CI; 2.3-3.1) mean reduction in abdomen circumference after intervention. The pooled estimate of reduction in fat layer thickness was 78% (95% CI; 73%-83%) after lipolysis, and the pooled estimate of weight loss was 3.01 kg (95% CI; 2.3-3.6) after liposuction. The cost analysis revealed no significant differences between the costs of these interventions. Conclusion: The present study showed that non-invasive interventions appear to have better clinical efficacy, specifically in body shape measurement, and lower cost compared to the invasive intervention (liposuction). PMID:27390717
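Random-effects pooling of effect sizes, as used in this meta-analysis, is commonly implemented with the DerSimonian-Laird estimator. The sketch below is a minimal generic implementation, not the authors' code, and the inputs are invented:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2:
    inverse-variance fixed-effect mean, Cochran's Q for heterogeneity,
    then re-weighting by 1/(v_i + tau^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)           # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se
```
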
Attitude-Independent Magnetometer Calibration for Spin-Stabilized Spacecraft
NASA Technical Reports Server (NTRS)
Natanson, Gregory
2005-01-01
The paper describes a three-step estimator to calibrate a Three-Axis Magnetometer (TAM) using TAM and slit Sun or star sensor measurements. In the first step, the Calibration Utility forms a loss function from the residuals of the magnitude of the geomagnetic field. This loss function is minimized with respect to biases, scale factors, and nonorthogonality corrections. The second step minimizes residuals of the projection of the geomagnetic field onto the spin axis under the assumption that spacecraft nutation has been suppressed by a nutation damper. Minimization is done with respect to various directions of the body spin axis in the TAM frame. The direction of the spin axis in the inertial coordinate system required for the residual computation is assumed to be unchanged with time. It is either determined independently using other sensors or included in the estimation parameters. In both cases all estimation parameters can be found using simple analytical formulas derived in the paper. The last step is to minimize a third loss function formed by residuals of the dot product between the geomagnetic field and Sun or star vector with respect to the misalignment angle about the body spin axis. The method is illustrated by calibrating TAM for the Fast Auroral Snapshot Explorer (FAST) using in-flight TAM and Sun sensor data. The estimated parameters include magnetic biases, scale factors, and misalignment angles of the spin axis in the TAM frame. Estimation of the misalignment angle about the spin axis was inconclusive since (at least for the selected time interval) the Sun vector was about 15 degrees from the direction of the spin axis; as a result residuals of the dot product between the geomagnetic field and Sun vectors were to a large extent minimized as a by-product of the second step.
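The first calibration step, minimizing a loss function built from residuals of the geomagnetic field magnitude, can be sketched as below. The per-axis bias/scale model is a simplified stand-in (nonorthogonality corrections omitted), and the values in the test are invented:

```python
def magnitude_residuals(meas, ref_mags, bias, scale):
    """Step-1 residuals: corrected TAM magnitude minus the reference
    geomagnetic field magnitude, for each sample."""
    res = []
    for (bx, by, bz), m_ref in zip(meas, ref_mags):
        cx = (bx - bias[0]) * scale[0]
        cy = (by - bias[1]) * scale[1]
        cz = (bz - bias[2]) * scale[2]
        res.append((cx * cx + cy * cy + cz * cz) ** 0.5 - m_ref)
    return res

def magnitude_loss(meas, ref_mags, bias, scale):
    """Sum-of-squares loss minimized with respect to biases and scales."""
    return sum(r * r for r in magnitude_residuals(meas, ref_mags, bias, scale))
```

With the correct bias vector the loss vanishes; a minimizer over `bias` and `scale` recovers the calibration parameters.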
Jerszurki, Daniela; Souza, Jorge L. M.; Silva, Lucas C. R.
2017-01-01
The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil, as a model system, which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which combined with Ra data resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zones. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement “d”, mean absolute error (MAE) and mean ratio (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent. PMID:28658324
Jerszurki, Daniela; Souza, Jorge L M; Silva, Lucas C R
2017-01-01
The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil, as a model system, which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which combined with Ra data resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zones. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement "d", mean absolute error (MAE) and mean ratio (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent.
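Ψair can be computed from air temperature and relative humidity via the Kelvin relation; the sketch below is a standard form of that relation and may differ in detail from the paper's exact formulation:

```python
import math

R = 8.314       # J mol^-1 K^-1, universal gas constant
V_W = 1.8e-5    # m^3 mol^-1, molar volume of liquid water

def psi_air_mpa(temp_c, rh_fraction):
    """Atmospheric water potential (MPa) from air temperature (°C) and
    relative humidity (fraction, 0-1) via the Kelvin relation:
    psi = (R * T / V_w) * ln(RH)."""
    t_k = temp_c + 273.15
    return (R * t_k / V_W) * math.log(rh_fraction) / 1e6  # Pa -> MPa
```

Saturated air (RH = 1) gives Ψair = 0; drier air gives increasingly negative potentials, the driving gradient for evapotranspiration.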
A method for vibrational assessment of cortical bone
NASA Astrophysics Data System (ADS)
Song, Yan; Gunaratne, Gemunu H.
2006-09-01
Large bones from many anatomical locations of the human skeleton consist of an outer shaft (cortex) surrounding a highly porous internal region (trabecular bone) whose structure is reminiscent of a disordered cubic network. Age-related degradation of cortical and trabecular bone takes different forms. Trabecular bone weakens primarily by loss of connectivity of the porous network, and recent studies have shown that vibrational response can be used to obtain reliable estimates for loss of its strength. In contrast, cortical bone degrades via the accumulation of long fractures and changes in the level of mineralization of the bone tissue. In this paper, we model cortical bone by an initially solid specimen with uniform density to which long fractures are introduced; we find that, as in the case of trabecular bone, vibrational assessment provides more reliable estimates of residual strength in cortical bone than is possible using measurements of density or porosity.
Andronowski, Janna M; Crowder, Christian
2018-05-21
The amount of cortical bone loss is one variable used in histological methods of adult age estimation. Measurements of cortical area tend to be subjective, and because cancellous bone is disregarded, additional information regarding bone loss is not captured. We describe whether measuring bone area (cancellous + cortical area) rather than cortical area may improve histological age estimation for the sixth rib. Mid-shaft rib cross-sections (n = 114) with a skewed sex distribution were analyzed. Ages range from 16 to 87 years. Variables included: total cross-sectional area, cortical area, bone area, relative bone area, relative cortical area, and endosteal area. Males have larger mean total cross-sectional area, bone area, and cortical area than females. Females display a larger mean endosteal area and greater mean relative measure values. Relative bone area significantly correlates with age. The relative bone area variable provides researchers with a less subjective and more accurate measure than cortical area. © 2018 American Academy of Forensic Sciences.
Interactive computation of coverage regions for indoor wireless communication
NASA Astrophysics Data System (ADS)
Abbott, A. Lynn; Bhat, Nitin; Rappaport, Theodore S.
1995-12-01
This paper describes a system which assists in the strategic placement of RF base stations within buildings. Known as the site modeling tool (SMT), this system allows the user to display graphical floor plans and to select base station transceiver parameters, including location and orientation, interactively. The system then computes and highlights estimated coverage regions for each transceiver, enabling the user to assess the total coverage within the building. For single-floor operation, the user can choose between distance-dependent and partition-dependent path-loss models. Similar path-loss models are also available for the case of multiple floors. This paper describes the method used by the system to estimate coverage for both directional and omnidirectional antennas. The site modeling tool is intended to be simple to use by individuals who are not experts at wireless communication system design, and is expected to be very useful in the specification of indoor wireless systems.
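A distance-dependent (log-distance) path-loss model with optional per-partition attenuation terms, of the kind such a tool offers, can be sketched as follows. The reference loss, path-loss exponent, and partition losses below are illustrative defaults, not SMT's calibrated values:

```python
import math

def indoor_path_loss_db(d_m, d0_m=1.0, pl_d0_db=31.7, n=3.0, partitions=()):
    """Log-distance path loss: PL(d) = PL(d0) + 10 n log10(d/d0), plus
    per-partition attenuation (dB) for walls/floors crossed by the path."""
    pl = pl_d0_db + 10.0 * n * math.log10(d_m / d0_m)
    return pl + sum(partitions)

def in_coverage(tx_power_dbm, d_m, sensitivity_dbm, **kw):
    """A point is covered if predicted received power meets the receiver
    sensitivity threshold."""
    return tx_power_dbm - indoor_path_loss_db(d_m, **kw) >= sensitivity_dbm
```

Sweeping `in_coverage` over floor-plan grid points, with partition losses taken from walls intersected by each transmitter-receiver ray, yields the highlighted coverage region for a candidate base station.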
Gastro-Intestinal Blood Loss Measured by Radioactive Chromium
Cameron, A. D.
1960-01-01
A new technique is described for the measurement of blood loss in the faeces of patients whose blood has been labelled with radioactive chromium (51Cr). The method is simple and is probably more accurate at low levels of faecal radioactivity than those previously described. The method will measure as little as 0.02 μCi of 51Cr in whole blood in a 24-hour stool. The apparent daily blood loss in a series of 10 normal people averaged 0.6 ml, with a range of 0.3 to 1.3 ml. Estimations of plasma and salivary radioactivity have been made in an attempt to assess the importance of contamination from eluted 51Cr. Minor radioactivity in plasma but none in saliva was recorded. Stool contamination from such sources is thought to be insignificant. No significant correlation existed between chemical occult blood tests and isotope measurements where there was less than 10 ml of whole blood in a 24-hour stool. PMID:13807135
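The ratio principle behind the measurement can be sketched as follows; the counting-geometry and decay corrections used in the paper are omitted, and the function name and arguments are illustrative:

```python
def daily_blood_loss_ml(stool_cpm, blood_cpm_per_ml, background_cpm=0.0):
    """Estimate ml of whole blood in a 24-hour stool from 51Cr activity.

    Net stool activity (counts per minute) divided by the activity per ml
    of the patient's own labelled whole blood gives the volume of blood
    lost into the gut over the collection period.
    """
    return (stool_cpm - background_cpm) / blood_cpm_per_ml
```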
Methods for measuring risk-aversion: problems and solutions
NASA Astrophysics Data System (ADS)
Thomas, P. J.
2013-09-01
Risk-aversion is a fundamental parameter determining how humans act when required to operate in situations of risk. Its general applicability has been discussed in a companion presentation, and this paper examines methods that have been used in the past to measure it and their attendant problems. It needs to be borne in mind that risk-aversion varies with the size of the possible loss, growing strongly as the possible loss becomes comparable with the decision maker's assets. Hence measuring risk-aversion when the potential loss or gain is small will produce values close to the risk-neutral value of zero, irrespective of who the decision maker is. It will also be shown how the generally accepted practice of basing a measurement on the results of a three-term Taylor series will estimate a limiting value, minimum or maximum, rather than the value utilised in the decision. A solution is to match the correct utility function to the results instead.
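The Taylor-series measurements the paper critiques amount to estimating local utility curvature, i.e. the Arrow-Pratt coefficient. A small numerical sketch (the finite-difference scheme and the log-utility check are illustrative assumptions, not the paper's method):

```python
import math

def relative_risk_aversion(u, w, h=None):
    """Arrow-Pratt relative risk-aversion, -w * u''(w) / u'(w), estimated
    by central finite differences from any utility function u.

    For stakes small relative to wealth w, decisions are governed by this
    local curvature, which is why small-stakes experiments yield values
    close to risk-neutral regardless of the decision maker.
    """
    h = h or 1e-3 * w  # step scaled to wealth for numerical stability
    u1 = (u(w + h) - u(w - h)) / (2.0 * h)             # approximates u'(w)
    u2 = (u(w + h) - 2.0 * u(w) + u(w - h)) / (h * h)  # approximates u''(w)
    return -w * u2 / u1
```

Log utility has constant relative risk-aversion of 1, a convenient sanity check.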
Estimating plant available water content from remotely sensed evapotranspiration
NASA Astrophysics Data System (ADS)
van Dijk, A. I. J. M.; Warren, G.; Doody, T.
2012-04-01
Plant available water content (PAWC) is an emergent soil property that is a critical variable in hydrological modelling. PAWC determines the active soil water storage and, in water-limited environments, is the main cause of different ecohydrological behaviour between (deep-rooted) perennial vegetation and (shallow-rooted) seasonal vegetation. Conventionally, PAWC is estimated for a combination of soil and vegetation from three variables: maximum rooting depth and the volumetric water content at field capacity and permanent wilting point, respectively. Without elaborate local field observation, large uncertainties in PAWC occur due to the assumptions associated with each of the three variables. We developed an alternative, observation-based method to estimate PAWC from precipitation observations and CSIRO MODIS Reflectance-based Evapotranspiration (CMRSET) estimates. Processing steps include (1) removing residual systematic bias in the CMRSET estimates, (2) making spatially appropriate assumptions about local water inputs and surface runoff losses, (3) using mean seasonal patterns in precipitation and CMRSET to estimate the seasonal pattern in soil water storage changes, (4) from these, calculating the mean seasonal storage range, which can be treated as an estimate of PAWC. We evaluate the resulting PAWC estimates against those determined in field experiments for 180 sites across Australia. We show that the method produces better estimates of PAWC than conventional techniques. In addition, the method provides detailed information with full continental coverage at moderate resolution (250 m) scale. The resulting maps can be used to identify likely groundwater dependent ecosystems and to derive PAWC distributions for each combination of soil and vegetation type.
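Processing steps (3) and (4) can be sketched as follows, assuming the bias correction and runoff adjustments of steps (1)-(2) have already been applied to the input series; the function name is illustrative:

```python
def pawc_from_seasonal_fluxes(precip_mm, et_mm):
    """Estimate PAWC (mm) from mean seasonal precipitation and ET series
    (e.g., 12 long-term monthly means).

    The cumulative sum of P - ET traces the mean seasonal pattern of soil
    water storage change; the range (max - min) of that cumulative series
    is taken as the seasonal storage range, i.e., the PAWC estimate.
    """
    storage, cum = [0.0], 0.0
    for p, e in zip(precip_mm, et_mm):
        cum += p - e
        storage.append(cum)
    return max(storage) - min(storage)
```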
Overview and Assessment of Antarctic Ice-Sheet Mass Balance Estimates: 1992-2009
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Giovinetto, Mario B.
2011-01-01
Mass balance estimates for the Antarctic Ice Sheet (AIS) in the 2007 report by the Intergovernmental Panel on Climate Change and in more recent reports lie between approximately +50 and -250 Gt/year for 1992 to 2009. The 300 Gt/year range is approximately 15% of the annual mass input and 0.8 mm/year Sea Level Equivalent (SLE). Two estimates from radar altimeter measurements of elevation change by European Remote-sensing Satellites (ERS) (+28 and -31 Gt/year) lie in the upper part, whereas estimates from the Input-minus-Output Method (IOM) and the Gravity Recovery and Climate Experiment (GRACE) lie in the lower part (-40 to -246 Gt/year). We compare the various estimates, discuss the methodology used, and critically assess the results. We also modify the IOM estimate using (1) an alternate extrapolation to estimate the discharge from the non-observed 15% of the periphery, and (2) substitution of input from a field data compilation for input from an atmospheric model in 6% of the area. The modified IOM estimate reduces the loss from 136 Gt/year to 13 Gt/year. Two ERS-based estimates, the modified IOM, and a GRACE-based estimate for observations within 1992-2005 lie in a narrowed range of +27 to -40 Gt/year, which is about 3% of the annual mass input and only 0.2 mm/year SLE. Our preferred estimate for 1992-2001 is -47 Gt/year for West Antarctica, +16 Gt/year for East Antarctica, and -31 Gt/year overall (+0.1 mm/year SLE), not including part of the Antarctic Peninsula (1.07% of the AIS area). Although recent reports of large and increasing rates of mass loss with time from GRACE-based studies cite agreement with IOM results, our evaluation does not support that conclusion.
NASA Astrophysics Data System (ADS)
Shanafield, Margaret; Cook, Peter G.
2014-04-01
Aquifer recharge through ephemeral streambeds is believed to be a major source of groundwater recharge in arid areas; however, comparatively few studies quantify this streamflow recharge. This review synthesizes the available field-based aquifer recharge literature from arid regions around the world. Seven methods for quantifying ephemeral and intermittent stream infiltration and aquifer recharge are reviewed: controlled infiltration experiments, monitoring changes in water content, heat as a tracer of infiltration, reach length water balances, floodwave front tracking, groundwater mounding, and groundwater dating. The pertinent temporal and spatial scales, as well as the advantages and limitations of each method, are illustrated with examples from the literature. Comparisons between the methods are used to highlight appropriate uses of each field method, with emphasis on the advantages of using multiple methods within a study in order to avoid the potential drawbacks inherent in any single method. Research needs are identified, including quantitative uncertainty analysis, long-term data collection and analysis, understanding of the role of riparian vegetation, and reconciliation of transmission losses and infiltration estimates with actual aquifer recharge.
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and an efficient numerical method that improves the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets but also improves robustness.
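The classical Tikhonov step that the paper's loss function generalizes can be sketched as a regularized normal-equations solve. This is plain TR for a linearized problem, not the authors' robust low-rank split-Bregman scheme; matrix shapes and the lambda value are illustrative:

```python
import numpy as np

def tikhonov_solve(A, b, lam=1e-2):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 for a linearized ECT
    sensitivity matrix A and capacitance measurement vector b.

    The regularizer stabilizes the ill-posed inversion at the cost of
    smoothing; lam trades data fit against solution norm.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[1]
    # Normal equations with lam added to the diagonal keep the system
    # well conditioned even when A.T @ A is nearly singular.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```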
Prediction method of sediment discharge from forested basin
Kazutoki Abe; Ushio Kurokawa; Robert R. Ziemer
2000-01-01
An estimation model for sediment discharge from a forested basin using the Universal Soil Loss Equation and a delivery ratio was developed. The study basins are the North Fork and South Fork of Caspar Creek, northern California, where the USDA Forest Service has been measuring water and sediment discharge from both basins since 1962. The whole basin is covered with forest, mainly...
22 CFR 211.9 - Liability for loss damage or improper distribution of commodities.
Code of Federal Regulations, 2010 CFR
2010-04-01
... cargo; (B) Report on discharging method (including whether a scale was used, its type and calibration and other factors affecting its accuracy, or an explanation of why a scale was not used and how weight... customs; (D) Provide actual or estimated (if scales not used) quantity of cargo lost during discharge and...
Strength loss in southern pine poles damaged by woodpeckers
R.W. Rumsey; G.E. Woodson
1973-01-01
Woodpecker damage caused extensive reductions in strength of 50-foot, class-2 utility poles, the amount depending on the cross-sectional area of wood removed and its distance from the apex. Two methods for estimating when damaged poles should be replaced proved to be conservative when applied to results of field tests. Such conservative predictions of failing loads...
Effect of sequential surface irrigations on field-scale emissions of 1,3-dichloropropene.
Yates, S R; Knuteson, J; Ernst, F F; Zheng, W; Wang, Q
2008-12-01
A field experiment was conducted to measure subsurface movement and volatilization of 1,3-dichloropropene (1,3-D) after shank injection into an agricultural soil. The goal of this study was to evaluate the effect of sprinkler irrigation on the emissions of 1,3-D to the atmosphere, and it builds on recent research showing that saturating the soil pore space reduces gas-phase diffusion and leads to reduced volatilization rates. Aerodynamic, integrated horizontal flux, and theoretical profile shape methods were used to estimate fumigant volatilization rates and total emission losses. These methods provide estimates of the volatilization rate based on measurements of wind speed, temperature, and 1,3-D concentration in the atmosphere. The volatilization rate was measured continuously for 16 days, and the daily peak volatilization rates for the three methods ranged from 18 to 60 microg m(-2) s(-1). The total 1,3-D mass entering the atmosphere was approximately 44-68 kg ha(-1), or 10-15% of the applied active ingredient. This represents approximately a 30-50% reduction in total emission losses compared to conventional fumigant applications in field and field-plot studies. Significant reduction in volatilization of 1,3-D was observed when five surface irrigations were applied to the field, one immediately after fumigation followed by daily irrigations.
Cubí-Mollá, Patricia; Peña-Longobardo, Luz María; Casal, Bruno; Rivera, Berta; Oliva-Moreno, Juan
2015-09-01
To estimate the years of potential life lost, years of potential productive life lost and the labor productivity losses attributable to premature deaths due to traffic injuries between 2002 and 2012 in Spain. Several statistical sources were combined (Spanish Registry of Deaths, Labor Force Survey and Wage Structure Survey) to develop a simulation model based on the human capital approach. This model allowed us to estimate the loss of labor productivity caused by premature deaths following traffic injuries from 2002 to 2012. In addition, mortality tables with life expectancy estimates were used to compute years of potential life lost and years of potential productive life lost. The estimated loss of labor productivity caused by fatal traffic injuries between 2002 and 2012 in Spain amounted to 9,521 million euros (baseline year 2012). The aggregate number of years of potential life lost in the period amounted to 1,433,103, whereas the years of potential productive life lost amounted to 875,729. Throughout the period analyzed, labor productivity losses and years of life lost diminished substantially. Labor productivity losses due to fatal traffic injuries decreased throughout the period analyzed. Nevertheless, the cumulative loss was alarmingly high. Estimation of the economic impact of health problems can complement conventional indicators of distinct dimensions and be used to support public policy making. Copyright © 2014 SESPAS. Published by Elsevier España. All rights reserved.
Jaiswal, Kishor; Wald, D.J.
2013-01-01
This chapter summarizes the state-of-the-art for rapid earthquake impact estimation. It details the needs and challenges associated with quick estimation of earthquake losses following global earthquakes, and provides a brief literature review of various approaches that have been used in the past. With this background, the chapter introduces the operational earthquake loss estimation system developed by the U.S. Geological Survey (USGS) known as PAGER (for Prompt Assessment of Global Earthquakes for Response). It also details some of the ongoing developments of PAGER’s loss estimation models to better supplement the operational empirical models, and to produce value-added web content for a variety of PAGER users.
Estimation of the energy loss at the blades in rowing: common assumptions revisited.
Hofmijster, Mathijs; De Koning, Jos; Van Soest, A J
2010-08-01
In rowing, power is inevitably lost as kinetic energy is imparted to the water during push-off with the blades. Power loss is estimated from reconstructed blade kinetics and kinematics. Traditionally, it is assumed that the oar is completely rigid and that force acts strictly perpendicular to the blade. The aim of the present study was to evaluate how reconstructed blade kinematics, kinetics, and average power loss are affected by these assumptions. A calibration experiment with instrumented oars and oarlocks was performed to establish relations between measured signals and oar deformation and blade force. Next, an on-water experiment was performed with a single female world-class rower rowing at constant racing pace in an instrumented scull. Blade kinematics, kinetics, and power loss under different assumptions (rigid versus deformable oars; absence or presence of a blade force component parallel to the oar) were reconstructed. Estimated power losses at the blades are 18% higher when parallel blade force is incorporated. Incorporating oar deformation affects reconstructed blade kinematics and instantaneous power loss, but has no effect on estimation of power losses at the blades. Assumptions on oar deformation and blade force direction have implications for the reconstructed blade kinetics and kinematics. Neglecting parallel blade forces leads to a substantial underestimation of power losses at the blades.
NASA Astrophysics Data System (ADS)
Hergoualc'h, Kristell; Verchot, Louis V.
2011-06-01
The increasing and alarming trend of degradation and deforestation of tropical peat swamp forests may contribute greatly to climate change. Estimates of carbon (C) losses associated with land use change in tropical peatlands are needed. To assess these losses we examined C stocks and peat C fluxes in virgin peat swamp forests and tropical peatlands affected by six common types of land use. Phytomass C loss from the conversion of virgin peat swamp forest to logged forest, fire-damaged forest, mixed croplands and shrublands, rice field, oil palm plantation, and Acacia plantation were calculated using the stock difference method and estimated at 116.9 ± 39.8, 151.6 ± 36.0, 204.1 ± 28.6, 214.9 ± 28.4, 188.1 ± 29.8, and 191.7 ± 28.5 Mg C ha-1, respectively. Total C loss from uncontrolled fires ranged from 289.5 ± 68.1 Mg C ha-1 in rice fields to 436.2 ± 77.0 Mg C ha-1 in virgin peat swamp forest. We assessed the effects of land use change on C stocks in the peat by looking at how the change in vegetation cover altered the main C inputs (litterfall and root mortality) and outputs (heterotrophic respiration, CH4 flux, fires, and soluble and physical removal) before and after conversion. The difference between the soil input-output balances in the virgin peat swamp forest and in the oil palm plantation gave an estimate of peat C loss of 10.8 ± 3.5 Mg C ha-1 yr-1. Peat C loss from other land use conversions could not be assessed due to lack of data, principally on soil heterotrophic respiration rates. Over 25 years, the conversion of tropical virgin peat swamp forest into oil palm plantation represents a total C loss from both biomass and peat of 427.2 ± 90.7 Mg C ha-1 or 17.1 ± 3.6 Mg C ha-1 yr-1. In all situations, peat C loss contributed more than 63% to total C loss, demonstrating the urgent need in terms of the atmospheric greenhouse gas burden to protect tropical virgin peat swamp forests from land use change and fires.
NASA Astrophysics Data System (ADS)
Wood, W. W.; Wood, W. W.
2001-05-01
Evaluation of ground-water supply in arid areas requires estimation of annual recharge. Traditional physical-based hydrologic estimates of ground-water recharge result in large uncertainties when applied in arid, mountainous environments because of infrequent, intense rainfall events, destruction of water-measuring structures associated with those events, and consequent short periods of hydrologic records. To avoid these problems and reduce the uncertainty of recharge estimates, a chloride mass-balance (CMB) approach was used to provide a time-integrated estimate. Seven basins exhibiting dry-stream beds (wadis) in the Asir and Hijaz Mountains, western Saudi Arabia, were selected to evaluate the method. Precipitation among the basins ranged from less than 70 mm/y to nearly 320 mm/y. Rain collected from 35 locations in these basins averaged 2.0 mg/L chloride. Ground water from 140 locations in the wadi alluvium averaged 200 mg/L chloride. This chloride concentration ratio of precipitation to ground water suggests that on average, approximately 1 percent of the rainfall is recharged, while the remainder is lost to evaporation. Ground-water recharge from precipitation in individual basins ranged from less than 1 to nearly 4 percent and was directly proportional to total precipitation. Independent calculations of recharge using Darcy's Law were consistent with these findings and are within the range typically found in other arid areas of the world. Development of ground water has lowered the water level beneath the wadis and provided more storage thus minimizing chloride loss from the basin by river discharge. Any loss of chloride from the basin results in an overestimate of the recharge flux by the chloride-mass balance approach. 
In well-constrained systems in arid, mountainous areas, where the mass of chloride entering and leaving the basin is known or can be reasonably estimated, the CMB approach provides a rapid, inexpensive method for estimating time-integrated ground-water recharge.
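With the mean values reported above, the CMB arithmetic is simple; a sketch under the stated assumption that chloride arrives only in precipitation and leaves only with recharge (function name is illustrative):

```python
def cmb_recharge_mm_per_yr(precip_mm_per_yr, cl_precip_mg_l, cl_groundwater_mg_l):
    """Chloride mass-balance recharge: R = P * Cl_precip / Cl_groundwater.

    Any chloride exported from the basin by river discharge violates the
    balance and makes this an overestimate, as the abstract notes.
    """
    return precip_mm_per_yr * cl_precip_mg_l / cl_groundwater_mg_l
```

With the reported means (2.0 mg/L in rain, 200 mg/L in wadi ground water), a basin receiving 320 mm/y recharges about 3.2 mm/y, i.e., roughly 1 percent of rainfall.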
Analysis of backward error recovery for concurrent processes with recovery blocks
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1982-01-01
Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models were developed for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The models estimate the interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when pseudo recovery points are used.
Stochastic Modeling of Empirical Storm Loss in Germany
NASA Astrophysics Data System (ADS)
Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.
2012-04-01
Based on German insurance loss data for residential property, we derive storm damage functions that relate daily loss to maximum gust wind speed. Over a wide range of loss, steep power law relationships are found, with spatially varying exponents ranging between approximately 8 and 12. Global correlations between parameters and socio-demographic data are employed to reduce the number of local parameters to 3. We apply a Monte Carlo approach to calculate German loss estimates, including confidence bounds, at daily and annual resolution. Our model reproduces the annual progression of winter storm losses and enables estimation of daily losses over a wide range of magnitudes.
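The damage-function shape can be sketched as follows; only the exponent range comes from the paper, while the threshold and scale values are illustrative placeholders:

```python
def daily_storm_loss(gust_ms, v_thresh=20.0, exponent=10.0, scale=1.0):
    """Power-law storm damage function: daily loss grows as
    (v / v_thresh) ** exponent above a threshold gust speed, zero below.

    Exponents of roughly 8-12 follow the paper; v_thresh and scale are
    assumptions, not fitted German insurance parameters.
    """
    if gust_ms <= v_thresh:
        return 0.0
    return scale * (gust_ms / v_thresh) ** exponent
```

The steepness is the point: with an exponent of 10, doubling the gust speed multiplies the loss by about a thousand, which is why extreme gusts dominate annual totals.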
An Assessment of the Ozone Loss During the 1999-2000 SOLVE Arctic Campaign
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.; Newman, Paul A.; Lait, Leslie R.; McGee, Thomas J.; Burris, John F.; Browell, Edward V.; Grant, William B.; Richard, Eric; VonderGathen, Peter; Bevilacqua, Richard;
2001-01-01
Ozone observations from ozonesondes, the lidars aboard the DC-8, in situ ozone measurements from the ER-2, and satellite ozone measurements from Polar Ozone and Aerosol Measurement III (POAM) were used to assess ozone loss during the SAGE III Ozone Loss and Validation Experiment (SOLVE) 1999-2000 Arctic campaign. Two methods of analysis were used. In the first method, a simple regression analysis is performed on the ozonesonde and POAM measurements within the vortex. In the second method, the ozone measurements from all available ozone data were injected into a free-running diabatic trajectory model and carried forward in time from December 1 to March 15. Vortex ozone loss was then estimated by comparing the ozone values of those parcels initiated early in the campaign with those parcels injected later in the campaign. Despite the variety of observational techniques used during SOLVE, the measurements provide a fairly consistent picture. Over the whole vortex, the largest ozone loss occurs between 550 and 400 K potential temperatures (approximately 23-16 km), with over 1.5 ppmv lost by March 15, the end of the SOLVE mission period. An ozone loss rate of 0.04-0.05 ppmv/day was computed for March 15. Ozonesondes launched after March 15 suggest that an additional 0.5 ppmv or more ozone was lost between March 15 and April 1. The small disagreement between ozonesonde and POAM analyses of January ozone loss is found to be due to biases in vortex sampling. POAM made most of its solar occultation measurements at the vortex edge during January 2000, which biases the sample toward air parcels that have been exposed to sunlight and thus likely experienced ozone loss. Ozonesonde measurements and the trajectory technique use observations that are more distributed within the interior of the vortex. Thus the regression analysis of the POAM measurements tends to overestimate mid-winter vortex ozone loss.
Finally, our loss calculations are broadly consistent with other loss computations using ER-2 tracer data and MLS satellite data, but we find no evidence for the 1992 high mid-January loss reported using the Match technique.
[Medical computer-aided detection method based on deep learning].
Tao, Pan; Fu, Zhongliang; Zhu, Kai; Wang, Lili
2018-03-01
This paper presents a comprehensive study of computer-aided detection for medical diagnosis with deep learning. Based on the region convolutional neural network and prior knowledge of the target, the algorithm uses a region proposal network and a region-of-interest pooling strategy, introduces a multi-task loss function (classification loss, bounding-box localization loss, and object rotation loss), and optimizes it end-to-end. For medical images, it locates the target automatically and provides the localization result for the subsequent segmentation task. For the detection of the left ventricle in echocardiography, additional landmarks such as the mitral annulus, endocardial pad, and apical position were proposed to estimate the left ventricular posture effectively. To verify the robustness and effectiveness of the algorithm, experimental data from ultrasonic and nuclear magnetic resonance images were selected. Experimental results show that the algorithm is fast, accurate, and effective.
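A minimal sketch of such a multi-task objective; the smooth-L1 choice and the weights are assumptions in line with common region-CNN practice, not details taken from the paper:

```python
def smooth_l1(x, beta=1.0):
    # Standard smooth-L1: quadratic near zero, linear for large residuals.
    ax = abs(x)
    return 0.5 * x * x / beta if ax < beta else ax - 0.5 * beta

def multitask_loss(cls_loss, box_residuals, rot_residual, w_box=1.0, w_rot=1.0):
    """Joint detection objective: classification loss plus weighted
    bounding-box localization and object-rotation regression terms,
    suitable for end-to-end optimization.
    """
    box_term = sum(smooth_l1(r) for r in box_residuals)
    return cls_loss + w_box * box_term + w_rot * smooth_l1(rot_residual)
```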
Lamb mode selection for accurate wall loss estimation via guided wave tomography
NASA Astrophysics Data System (ADS)
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-01
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses. Their performance is compared using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.
Markewich, Helaine W.; Buell, Gary R.; Britsch, Louis D.; McGeehin, John P.; Robbins, John A.; Wrenn, John H.; Dillon, Douglas L.; Fries, Terry L.; Morehead, Nancy R.
2007-01-01
Soil/sediment of the Mississippi River deltaic plain (MRDP) in southeastern Louisiana is rich in organic carbon (OC). The MRDP contains about 2 percent of all OC in the surface meter of soil/sediment in the Mississippi River Basin (MRB). Environments within the MRDP differ in soil/sediment organic carbon (SOC) accumulation rate, storage, and inventory. The focus of this study was twofold: (1) develop a database for OC and bulk density for MRDP soil/sediment; and (2) estimate SOC storage, inventory, and accumulation rates for the dominant environments (brackish, intermediate, and fresh marsh; natural levee; distributary; backswamp; and swamp) in the MRDP. Comparative studies were conducted to determine which field and laboratory methods result in the most accurate and reproducible bulk-density values for each marsh environment. Sampling methods included push-core, vibracore, peat borer, and Hargis sampler. Bulk-density data for cores taken by the 'short push-core method' proved to be more internally consistent than data for samples collected by other methods. Laboratory methods to estimate OC concentration and inorganic-constituent concentration included mass spectrometry, coulometry, and loss-on-ignition. For the sampled MRDP environments, these methods were comparable. SOC storage was calculated for each core with adequate OC and bulk-density data. SOC inventory was calculated using core-specific data from this study and available published and unpublished pedon data linked to SSURGO map units. Sample age was estimated using isotopic cesium (137Cs), lead (210Pb), and carbon (14C), elemental Pb, palynomorphs, other stratigraphic markers, and written history. SOC accumulation rates were estimated for each core with adequate age data. Cesium-137 profiles for marsh soil/sediment are the least ambiguous. Levee and distributary 137Cs profiles show the effects of intermittent allochthonous input and/or sediment resuspension.
Cesium-137 and 210Pb data gave the most consistent and interpretable information for age estimations of soil/sediment deposited during the 1900s. For several cores, isotopic 14C and 137Cs data allowed the 1963-64 nuclear weapons testing (NWT) peak-activity datum to be placed within a few-centimeter depth interval. In some cores, a 14C age that is too old (when compared to 137Cs and microstratigraphic-marker data) is the probable result of old carbon bound to clay minerals incorporated into the organic soil/sediment. Elemental Pb coupled with Pb source-function data allowed age estimation for soil/sediment that accumulated during the late 1920s through the 1980s. Exotic pollen (for example, Vigna unguiculata and Alternanthera philoxeroides) and other microstratigraphic indicators (for example, carbon spherules) allowed age estimations for marsh soil/sediment deposited during the settlement of New Orleans (1717-20) through the early 1900s. For this study, MRDP distributary and swamp environments were each represented by only one core, the backswamp environment by two cores, and all other environments by three or more cores. MRDP core data for the surface meter soil/sediment indicate that (1) coastal marshes, abandoned distributaries, and swamps have regional SOC-storage values >16 kg m-2; (2) swamps and abandoned distributaries have the highest SOC storage values (swamp, 44.8 kg m-2; abandoned distributary, 50.9 kg m-2); (3) fresh-to-brackish marsh environments have the second highest site-specific SOC-storage values; and (4) site-specific marsh SOC storage values decrease as the salinity of the environment increases (fresh-marsh, 36.2 kg m-2; intermediate marsh, 26.2 kg m-2; brackish marsh, 21.5 kg m-2).
This inverse relation between salinity and SOC storage is opposite to the regional systematic increase in SOC storage with increasing salinity that is evident when SOC storage is mapped by linking pedon data to SSURGO map units (fresh marsh, 47 kg m-2; intermediate marsh, 67 kg m-2; brackish marsh, 75 kg m-2; and salt marsh, 80 kg m-2). MRDP core data for this study also indicate that levees and backswamp have regional SOC-storage values <16 kg m-2. Group-mean SOC-storage values for cores from these environments are 17.0 kg m-2 (natural levee) and 14.1 kg m-2 (backswamp). An estimate for the SOC inventory in the surface meter of soil/sediment in the MRDP can be made using the SSURGO mapped portion of the coastal-marsh vegetative-type map (13,236 km2, land-only area) published by the Louisiana Department of Wildlife and Fisheries and U.S. Geological Survey (1997). This area has a SOC inventory (surface meter) of 677 Tg (slightly more than 2 percent of the 30,289 Tg SOC inventory for the MRB). The MRDP (6,180 km2, land-only area) has an estimated SOC inventory of 397 Tg. Most of the MRDP is located within the SSURGO mapped coastal marshlands. The entire MRDP, including water, has an area of about 10,800 km2. Using the ratio of total MRDP area to SSURGO mapped MRDP area as an adjustment, the MRDP SOC inventory is estimated at 694 Tg. This larger estimate of 694 Tg for the SOC inventory is probably more realistic, because it is reasonable to assume that the marsh sediments overlain by shallow water have comparable SOC storage to that of the adjacent land areas. MRDP core data for this study indicate that there is some variability in long-term SOC mass-accumulation rates for centuries and millennia and that this variability may indicate important geologic changes or changes in land use.
However, the consistency of the range in rates of SOC accumulation through time suggests a remarkable degree of marsh sustainability throughout the Holocene, including the recent period of significant marsh modification/channelization for human use. One example of marsh sustainability is its present ability to function as a SOC sink even with Louisiana's large-scale coastal land loss during the last several decades. With coastal-marsh restoration efforts, this sink potential will increase. Looking to the future, SOC is projected to be lost through coastal erosion from all of coastal Louisiana (U.S. Army Corps of Engineers, Louisiana Coastal Area (LCA) subprovinces 1-4; not just the MRDP) at a rate of 1,101 g m-2 yr-1 from year 2000 to 2050. This translates to a projected SOC-loss rate of about 0.20 percent per year. The recent Hurricanes Katrina and Rita, which devastated the Louisiana coast during late August and late September 2005, transformed about 259 km2 (100 mi2) of marsh to open water (U.S. Geological Survey, 2005). To the extent that some or all of this land loss is permanent, it equates to a SOC loss of about 15 Tg. This estimate is based on the year-2000 15,153-km2 land area for the LCA study area, which includes LCA subprovince 4. Using the year-2000 land area, the LCA study area had an estimated SOC inventory of 858 Tg. The estimated 15 Tg SOC loss attributable to Hurricanes Katrina and Rita is 1.7 percent of the year-2000 LCA inventory and 2.3 percent of the year-2000 MRDP inventory. If this SOC loss is included in the projection for the year 2050, then the MRDP would either remain a source with a net SOC loss of 3 Tg or become a weak sink with a net SOC gain of 4 Tg. These estimates are lower bounds for potential SOC flux because they cover only the surface meter of landmass.
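The inventory arithmetic above can be reproduced in a few lines. A minimal sketch, using the figures from the text; the unit conversion, the area-ratio adjustment, and the helper name are the only additions:

```python
def soc_inventory_tg(area_km2, storage_kg_per_m2):
    """SOC inventory in teragrams: (kg/m^2) * (km^2 -> m^2) gives kg; /1e9 gives Tg."""
    return storage_kg_per_m2 * area_km2 * 1e6 / 1e9

# MRDP land-only inventory reported as 397 Tg over 6,180 km^2; adjust by the
# ratio of total MRDP area (including water) to the SSURGO-mapped land area
adjusted_tg = 397.0 * 10_800 / 6_180
print(round(adjusted_tg))  # 694, matching the adjusted estimate in the text

# Hurricane-related loss of ~15 Tg as a fraction of the 858 Tg LCA inventory
print(round(100 * 15 / 858, 1))  # 1.7 (percent)
```

The same density-times-area helper recovers, for example, a 677 Tg inventory from 13,236 km2 at roughly 51 kg m-2.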
Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model
Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David
2012-01-01
The Atlantic Horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast which has, in recent years, experienced new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting or complete loss of tags could be occurring. As is apparent from the assumptions of the Jolly-Seber model, tag loss biases survival estimates downward. Given that uncertainty, as a first step towards an unbiased estimate of adult survival, we estimated the rate of tag loss. Using data from a double-tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model that allows the loss of each tag to be estimated separately and simultaneously.
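The study fits a multi-state model in Program MARK; the reason double tagging identifies tag loss at all can be shown with a much simpler classical moment estimator, assuming each tag is lost independently with the same probability p (an assumption the multi-state model relaxes). Among resighted animals, the share with both tags is proportional to (1-p)^2 and the share with exactly one to 2p(1-p), so p is recoverable from the two counts. The counts below are hypothetical:

```python
def tag_loss_estimate(n_both, n_one):
    """Per-tag loss probability from double- vs. single-tag resight counts:
    n_one / n_both = 2p(1-p) / (1-p)^2  =>  p = n_one / (n_one + 2 * n_both)."""
    return n_one / (n_one + 2 * n_both)

# Hypothetical resight counts, for illustration only
print(tag_loss_estimate(n_both=450, n_one=100))  # 0.1
```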
NASA Astrophysics Data System (ADS)
Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim
2013-02-01
Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages, and hence residence times, of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter in the comminution approach, we use two suggested methods (weighted geometric and surface-area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and to determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on the absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil-lost 234Th and the initial (234U/238U) ratio of the source material. 
To directly compare calculated comminution ages produced by different research groups, standardisation of pre-treatment procedures, recoil loss factor estimation, and assumed input parameter values is required. We suggest a set of input parameter values for this purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
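A minimal sketch of the Monte Carlo exercise described above, assuming the standard comminution-age relation in which the measured (234U/238U) activity ratio relaxes from its source value toward the recoil-depleted steady state (1 - f). The input means, the uncertainties, and the 234U half-life used here are illustrative assumptions, not the paper's values:

```python
import math
import random

L234 = math.log(2) / 245_250.0  # 234U decay constant (yr^-1); half-life assumed

def comminution_age(a_meas, a0, f):
    """Age (yr) from (234U/238U) ratios: a_meas relaxes from a0 toward (1 - f)."""
    return -math.log((a_meas - (1.0 - f)) / (a0 - (1.0 - f))) / L234

# Propagate assumed input uncertainties by simple Monte Carlo sampling
random.seed(1)
ages = []
for _ in range(10_000):
    f = random.gauss(0.10, 0.01)     # recoil loss factor and its uncertainty
    a0 = random.gauss(1.00, 0.005)   # initial (234U/238U) of the source rock
    am = random.gauss(0.96, 0.002)   # measured (234U/238U) of the sediment
    ages.append(comminution_age(am, a0, f))

mean = sum(ages) / len(ages)
sigma = (sum((a - mean) ** 2 for a in ages) / len(ages)) ** 0.5
print(round(mean / 1000), round(sigma / 1000))  # mean age and 1-sigma spread, ka
```

Even with these modest input uncertainties, the 1-sigma age spread is large, which is the qualitative point the abstract makes.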
Doutres, Olivier; Atalla, Noureddine
2010-08-01
The objective of this paper is to propose a simple tool to estimate the absorption vs. transmission loss contributions of a multilayered blanket unbonded in a double panel structure and thus guide its optimization. The normal-incidence airborne sound transmission loss of the double panel structure, without structure-borne connections, is written in terms of three main contributions: (i) the sound transmission loss of the panels, (ii) the sound transmission loss of the blanket, and (iii) the sound absorption due to multiple reflections inside the cavity. The method is applied to four different blankets frequently used in automotive and aeronautic applications: a non-symmetric multilayer made of a screen sandwiched between two porous layers, and three symmetric porous layers having different pore geometries. It is shown that the absorption behavior of the blanket controls the acoustic behavior of the treatment at low and medium frequencies and its transmission loss at high frequencies. An acoustic treatment with poor sound absorption can thus degrade the performance of the double panel structure.
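The paper's decomposition treats the blanket and cavity terms explicitly; as a point of reference, the panel contribution alone is often approximated by the normal-incidence mass law. A sketch of that baseline only (the air properties and the limp-panel idealization are my assumptions, not the paper's model):

```python
import math

def mass_law_tl(freq_hz, surface_mass, rho0=1.21, c0=343.0):
    """Normal-incidence mass-law transmission loss (dB) of a single limp panel:
    TL = 10 log10(1 + (pi * f * m / (rho0 * c0))^2)."""
    x = math.pi * freq_hz * surface_mass / (rho0 * c0)
    return 10.0 * math.log10(1.0 + x * x)

# Well above the mass-controlled corner, TL rises ~6 dB per doubling of
# frequency (or of surface mass)
print(round(mass_law_tl(2000, 10.0) - mass_law_tl(1000, 10.0), 1))  # 6.0
```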
Single- and dual-photon absorptiometry in osteoporosis and osteomalacia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahner, H.W.
Single- and dual-photon absorptiometric methods have been used in the past to identify populations at risk for bone loss, to define the osteoporotic syndrome in terms of bone mass, and to evaluate treatment regimens to prevent bone loss. Technical improvements have made these procedures available for the nontraumatic measurement of bone mineral in the management of the individual patient suspected of having osteoporosis or other bone loss. This requires a different approach to data interpretation because decisions have to be made on the basis of a single measurement. Osteoporosis and osteomalacia cannot be distinguished by bone mineral measurements because both are characterized by a decrease in content of bone mineral. Bone mineral measurements can be used to assess the risk of fracture and, with it, the severity of bone loss. This allows treatment decisions to be made. Repeated measurements made under well-defined conditions allow estimation of the long-term rate of bone loss and monitoring of treatment effect. 38 references.
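As a minimal numerical illustration of the last point, the long-term rate of loss from serial scans can be taken as a least-squares slope of bone mineral density against time, normalized to the baseline value. The scan values below are hypothetical, and this is an arithmetic sketch, not clinical methodology:

```python
def annual_loss_rate(times_yr, bmd):
    """Least-squares slope of BMD vs. time, as a fraction of baseline per year."""
    n = len(times_yr)
    mt = sum(times_yr) / n
    mb = sum(bmd) / n
    slope = sum((t - mt) * (b - mb) for t, b in zip(times_yr, bmd)) \
            / sum((t - mt) ** 2 for t in times_yr)
    return slope / bmd[0]

# Hypothetical serial scans losing 1% of baseline per year
rate = annual_loss_rate([0, 1, 2, 3], [1.00, 0.99, 0.98, 0.97])
print(round(100 * rate, 2))  # -1.0 (percent per year)
```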
Rank-based methods for modeling dependence between loss triangles.
Côté, Marie-Pier; Genest, Christian; Abdallah, Anas
2016-01-01
In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.
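The second stage of the proposed strategy, linking standardized residuals through a rank-based copula, can be sketched in a toy form: transform residuals to pseudo-observations, compute Kendall's tau, and invert it for a copula parameter. The residual values and the choice of a Clayton family (with theta = 2*tau / (1 - tau)) are illustrative assumptions, not the paper's fitted hierarchical models:

```python
def pseudo_obs(x):
    """Rank-transform a sample to (0, 1) via rank / (n + 1)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    u = [0.0] * len(x)
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (len(x) + 1)
    return u

def kendall_tau(x, y):
    """Concordant-minus-discordant pairs over total pairs (no ties assumed)."""
    n, s = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
    return 2.0 * s / (n * (n - 1))

# Hypothetical standardized residuals from two lines of business
resid_a = [0.3, -1.2, 0.8, 2.1, -0.4]
resid_b = [0.1, -0.9, 1.3, 1.8, -1.1]
tau = kendall_tau(pseudo_obs(resid_a), pseudo_obs(resid_b))
theta = 2 * tau / (1 - tau)  # Clayton parameter via tau inversion
print(round(tau, 3), round(theta, 3))  # 0.8 8.0
```

Because tau is rank-invariant, computing it on the pseudo-observations or on the raw residuals gives the same value, which is the appeal of the rank-based approach.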
Land Use Change and Soil Organic Carbon Dynamics in China
NASA Astrophysics Data System (ADS)
Peng, C.; Wu, H.; Guo, Z.
2004-05-01
The changes of soil organic carbon depend not only on biogeochemical and climatological processes, but also on human activities and their interaction with the carbon cycle. A long history of agricultural exploitation, forest management practice, rapid change in land use, forestry policies, and economic growth suggests that Chinese terrestrial ecosystems play an important role in the global carbon cycle. Using data compiled from China's second national soil survey and an improved method of soil carbon bulk density, we have estimated the changes of soil organic carbon due to land use and compared the spatial distribution and storage of soil organic carbon (SOC) in cultivated and non-cultivated soils in China. The results reveal that ~57% of the cultivated soil subgroups (~31% of the total soil surface) have experienced a significant carbon loss, ranging from 10% to 40% relative to their non-cultivated counterparts. The most significant carbon loss is observed for the non-irrigated soils (dry farmland) within a semi-arid/semi-humid belt from northeastern to southwestern China, with the maximum loss occurring in northeast China. Our results suggest that total organic carbon storage in soils in China is about 70.31 Pg, representing 4.7% of the world storage. The results also indicate a soil organic carbon loss of 7.1 Pg primarily due to human activity, of which the loss from organic horizons contributed 77%. This total loss of soil organic carbon in China induced by land use represents 9.5% of the world's soil organic carbon decrease.
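The percentages quoted above can be cross-checked with back-of-the-envelope arithmetic. The figures come from the text; the implied world total is an inference from those figures, not a reported number:

```python
china_soc_pg = 70.31          # total SOC storage in China (Pg)
world_share = 0.047           # "4.7% of the world storage"
print(round(china_soc_pg / world_share))  # 1496 Pg implied world SOC storage

loss_pg = 7.1                 # SOC loss attributed to human activity (Pg)
organic_horizon_share = 0.77  # share of that loss from organic horizons
print(round(loss_pg * organic_horizon_share, 1))  # 5.5 Pg from organic horizons
```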