Sample records for estimation method employed

  1. Work productivity loss from depression: evidence from an employer survey.

    PubMed

    Rost, Kathryn M; Meng, Hongdao; Xu, Stanley

    2014-12-18

    National working groups identify the need for return on investment research conducted from the purchaser perspective; however, the field has not developed standardized methods for measuring the basic components of return on investment, including costing out the value of work productivity loss due to illness. Recent literature is divided on whether the most commonly used method underestimates or overestimates this loss. The goal of this manuscript is to characterize the between- and within-method variation in the cost of work productivity loss from illness estimated by the most commonly used method and its two refinements. One senior health benefit specialist from each of 325 companies employing 100+ workers completed a cross-sectional survey describing their company's size, industry and policies/practices regarding work loss, which allowed the research team to derive the variables needed to estimate work productivity loss from illness using three methods. Compensation estimates were derived by multiplying lost work hours from presenteeism and absenteeism by wage/fringe. Disruption correction adjusted this estimate to account for co-worker disruption, while friction correction accounted for labor substitution. The analysis compared bootstrapped means and medians between and within these three methods. The average company realized an annual $617 (SD = $75) per capita loss from depression by the compensation method and a $649 (SD = $78) loss by disruption correction, compared to a $316 (SD = $58) loss by friction correction (p < .0001). Agreement across estimates was 0.92 (95% CI 0.90, 0.93). Although the methods identify similar companies with high costs from lost productivity, friction correction reduces the size of compensation estimates of productivity loss by one half. In analyzing the potential consequences of method selection for the dissemination of interventions to employers, intervention developers are encouraged to include friction methods in their estimates of the economic value of interventions designed to improve absenteeism and presenteeism. Business leaders in industries where labor substitution is common are encouraged to seek friction-corrected estimates of return on investment. Health policy analysts are encouraged to target the dissemination of productivity-enhancing interventions to employers with high losses rather than all employers. NCT01013220.
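
    The three costing approaches in this record reduce to simple arithmetic on lost work hours. Below is a minimal sketch of that arithmetic; the wage, fringe rate, disruption multiplier, and substitutable fraction are illustrative assumptions, not figures from the study.

```python
# Illustrative sketch of the three costing methods described above.
# All numbers (wage, fringe rate, disruption multiplier, friction fraction)
# are hypothetical assumptions, not values from the study.

def compensation_cost(lost_hours: float, wage: float, fringe_rate: float = 0.3) -> float:
    """Compensation method: lost hours x (wage + fringe benefits)."""
    return lost_hours * wage * (1.0 + fringe_rate)

def disruption_corrected(base_cost: float, disruption_multiplier: float = 1.05) -> float:
    """Disruption correction: scale the compensation estimate upward to
    account for co-worker disruption caused by the absent/impaired worker."""
    return base_cost * disruption_multiplier

def friction_corrected(base_cost: float, substitutable_fraction: float = 0.5) -> float:
    """Friction correction: only the share of output that cannot be
    recovered through labor substitution counts as a loss."""
    return base_cost * (1.0 - substitutable_fraction)

if __name__ == "__main__":
    base = compensation_cost(lost_hours=24, wage=20.0)  # hypothetical per-worker loss
    print(f"compensation: ${base:.0f}")
    print(f"disruption-corrected: ${disruption_corrected(base):.0f}")
    print(f"friction-corrected: ${friction_corrected(base):.0f}")
```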

  2. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  3. Methods for estimating the labour force insured by the Ontario Workplace Safety and Insurance Board: 1990-2000.

    PubMed

    Smith, Peter M; Mustard, Cameron A; Payne, Jennifer I

    2004-01-01

    This paper presents a methodology for estimating the size and composition of the Ontario labour force eligible for coverage under the Ontario Workplace Safety & Insurance Act (WSIA). Using customized tabulations from Statistics Canada's Labour Force Survey (LFS), we made adjustments for self-employment, unemployment, part-time employment and employment in specific industrial sectors excluded from insurance coverage under the WSIA. Each adjustment to the LFS reduced the estimates of the insured labour force relative to the total Ontario labour force. These estimates were then developed for major occupational and industrial groups stratified by gender. Additional estimates created to test assumptions used in the methodology produced similar results. The methods described in this paper advance those previously used to estimate the insured labour force, providing researchers with a useful tool to describe trends in the rate of injury across differing occupational, industrial and gender groups in Ontario.

  4. Comparison of employer productivity metrics to lost productivity estimated by commonly used questionnaires

    PubMed Central

    Gardner, Bethany T.; Dale, Ann Marie; Buckner-Petty, Skye; Van Dillen, Linda; Amick, Benjamin C.; Evanoff, Bradley

    2016-01-01

    Objective: To assess construct and discriminant validity of four health-related work productivity loss questionnaires in relation to employer productivity metrics, and to describe variation in economic estimates of productivity loss provided by the questionnaires in healthy workers. Methods: 58 billing office workers completed surveys including health information and four productivity loss questionnaires. Employer productivity metrics and work hours were also obtained. Results: Productivity loss questionnaires were weakly to moderately correlated with employer productivity metrics. Workers with more health complaints reported greater health-related productivity loss than healthier workers, but showed no loss on employer productivity metrics. Economic estimates of productivity loss showed wide variation among questionnaires, yet no loss of actual productivity. Conclusions: Additional studies are needed comparing questionnaires with objective measures in larger samples and other industries, to improve measurement methods for health-related productivity loss. PMID:26849261

  5. An Employee Total Health Management–Based Survey of Iowa Employers

    PubMed Central

    Merchant, James A.; Lind, David P.; Kelly, Kevin M.; Hall, Jennifer L.

    2015-01-01

    Objective: To implement an Employee Total Health Management (ETHM) model-based questionnaire and provide estimates of model program elements among a statewide sample of Iowa employers. Methods: Survey a stratified random sample of Iowa employers; characterize and estimate employer participation in ETHM program elements. Results: Iowa employers are implementing fewer than 30% of all 12 components of ETHM, with the exception of occupational safety and health (46.6%) and workers' compensation insurance coverage (89.2%), but intend modest expansion of all components in the coming year. Conclusions: The Employee Total Health Management questionnaire-based survey provides estimates of the progress Iowa employers are making toward implementing components of total worker health programs. PMID:24284757

  6. Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.

    PubMed

    Song, Xuegang; Zhang, Yuexin; Liang, Dakai

    2017-10-10

    This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the process of input force estimation, the Runge-Kutta fourth-order algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of input forces using a nonlinear estimator based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.

  7. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence.

    PubMed

    Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have previously been employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates the inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer, while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and the flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. Power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also achieved sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.

  8. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence

    PubMed Central

    Taylor, Terence E.; Lacalle Muls, Helena; Costello, Richard W.; Reilly, Richard B.

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have previously been employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates the inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer, while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and the flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. Power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also achieved sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence. PMID:29346430
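
    A minimal sketch of the single-recording calibration these two records describe: fit a power-law relationship between the acoustic envelope and the measured flow of one calibration inhalation, then apply it to the envelope of a new inhalation. The signals below are synthetic stand-ins, and fit_power_law is a hypothetical helper, not code from the study.

```python
# Sketch of the single-recording calibration: fit Q = a * env**b between the
# acoustic envelope and the measured flow of ONE calibration inhalation, then
# apply the model to the envelopes of other inhalations. Data are synthetic.
import numpy as np

def fit_power_law(envelope: np.ndarray, flow: np.ndarray) -> tuple[float, float]:
    """Fit flow = a * envelope**b via linear regression in log-log space."""
    mask = (envelope > 0) & (flow > 0)  # log requires positive samples
    b, log_a = np.polyfit(np.log(envelope[mask]), np.log(flow[mask]), 1)
    return np.exp(log_a), b

def estimate_flow(envelope: np.ndarray, a: float, b: float) -> np.ndarray:
    return a * envelope ** b

# Hypothetical calibration data: envelope from audio, flow from spirometer.
rng = np.random.default_rng(0)
env_cal = np.linspace(0.05, 1.0, 200)
flow_cal = 90.0 * env_cal ** 0.6 + rng.normal(0, 1.0, env_cal.size)  # L/min

a, b = fit_power_law(env_cal, flow_cal)
env_test = np.linspace(0.1, 0.9, 100)  # envelope of a new inhalation
flow_est = estimate_flow(env_test, a, b)
print(f"fitted model: flow ~= {a:.1f} * env^{b:.2f}; PIFR ~= {flow_est.max():.1f} L/min")
```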

  9. 49 CFR Appendix B to Part 227 - Methods for Estimating the Adequacy of Hearing Protector Attenuation

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    49 CFR Part 227, Appendix B—Methods for Estimating the Adequacy of Hearing Protector Attenuation. This appendix is mandatory. Employers must select one of the following three methods by which to…

  10. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines has been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  11. Improved and standardized method for assessing years lived with disability after injury

    PubMed Central

    Polinder, S; Lyons, RA; Lund, J; Ditsuwan, V; Prinsloo, M; Veerman, JL; van Beeck, EF

    2012-01-01

    Objective: To develop a standardized method for calculating years lived with disability (YLD) after injury. Methods: The method developed consists of obtaining data on injury cases seen in emergency departments as well as injury-related hospital admissions, using the EUROCOST system to link the injury cases to disability information and employing empirical data to describe functional outcomes in injured patients. Findings: Overall, 87 weights and proportions for 27 injury diagnoses involving lifelong consequences were included in the method. Almost all of the injuries investigated (96–100%) could be assigned to EUROCOST categories. The mean number of YLD per case of injury varied with the country studied. Use of the novel method resulted in estimated burdens of injury that were 3 to 8 times higher, in terms of YLD, than the corresponding estimates produced using the conventional methods employed in global burden of disease studies, which employ disability-adjusted life years. Conclusion: The novel method for calculating YLD after injury can be applied in different settings, overcomes some limitations of the method used to calculate the global burden of disease, and allows more accurate estimates of the population burden of injury. PMID:22807597
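
    A sketch of the YLD bookkeeping this record describes, splitting each diagnosis group into a short-term recovery term and a lifelong-consequences term. The weights, durations, and proportions below are illustrative placeholders, not the 87 EUROCOST values from the study.

```python
# Minimal YLD sketch: injury cases are linked to disability weights and
# durations and summed over diagnosis groups. All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class DiagnosisGroup:
    cases: int              # injury cases (ED visits and admissions)
    weight_short: float     # disability weight during recovery
    duration_yrs: float     # average recovery duration in years
    p_lifelong: float       # proportion with lifelong consequences
    weight_lifelong: float  # disability weight for permanent disability
    remaining_life: float   # mean remaining life expectancy in years

def yld(group: DiagnosisGroup) -> float:
    short_term = group.cases * group.weight_short * group.duration_yrs
    lifelong = (group.cases * group.p_lifelong
                * group.weight_lifelong * group.remaining_life)
    return short_term + lifelong

groups = [
    DiagnosisGroup(cases=1200, weight_short=0.2, duration_yrs=0.25,
                   p_lifelong=0.05, weight_lifelong=0.3, remaining_life=40.0),
    DiagnosisGroup(cases=300, weight_short=0.4, duration_yrs=0.5,
                   p_lifelong=0.20, weight_lifelong=0.5, remaining_life=35.0),
]
print(f"total YLD: {sum(yld(g) for g in groups):,.0f}")
```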

  12. Regional economic activity and absenteeism: a new approach to estimating the indirect costs of employee productivity loss.

    PubMed

    Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron

    2015-02-01

    This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers, and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output-per-labor-hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output-per-labor-hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.
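
    The core of the output-per-labor-hour approach can be stated in a few lines: value each absent hour at the output an hour of labor produces in the employer's industry, rather than at the wage. A hedged sketch with hypothetical figures:

```python
# Sketch of output-per-labor-hour absenteeism costing. All figures are
# hypothetical; the wage-based (human capital) approach would instead
# multiply absent hours by the wage rate.
def absenteeism_cost(absent_hours: float, industry_output: float,
                     industry_labor_hours: float) -> float:
    output_per_hour = industry_output / industry_labor_hours
    return absent_hours * output_per_hour

# Hypothetical employer: 50,000 absent hours in an industry producing
# $120 of output per labor hour.
cost = absenteeism_cost(50_000, industry_output=1.2e9, industry_labor_hours=1.0e7)
print(f"estimated loss: ${cost:,.0f}")
```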

  13. Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits

    NASA Astrophysics Data System (ADS)

    Vellingiri, Govindaraj; Jayabalan, Ramesh

    2018-03-01

    Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less tedious, low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results, with testing error ranging from 0% to 0.86%, compared to BPNN, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient-descent BP and mean least-squares optimisation algorithms. ANFIS is best suited to the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.

  14. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion, optical blurring, facial expressions, and gender. Motion blurring can be introduced into face images by movement of the camera sensor and/or movement of the face during image acquisition. Therefore, the facial features in captured images can be transformed according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ our method. PMID:26334282

  15. Winter bird population studies and project prairie birds for surveying grassland birds

    USGS Publications Warehouse

    Twedt, D.J.; Hamel, P.B.; Woodrey, M.S.

    2008-01-01

    We compared 2 survey methods for assessing winter bird communities in temperate grasslands: Winter Bird Population Study surveys are area searches that have long been used in a variety of habitats, whereas Project Prairie Birds surveys employ active-flushing techniques on strip transects and are intended for use in grasslands. We used both methods to survey birds on 14 herbaceous reforested sites and 9 coastal pine savannas during winter and compared the resultant estimates of species richness and relative abundance. These techniques did not yield similar estimates of avian populations. We found Winter Bird Population Studies consistently produced higher estimates of species richness, whereas Project Prairie Birds produced higher estimates of avian abundance for some species. When it is important to identify all species within the winter bird community, Winter Bird Population Studies should be the survey method of choice. If estimates of the abundance of relatively secretive grassland bird species are desired, the use of Project Prairie Birds protocols is warranted. However, we suggest that both survey techniques, as currently employed, are deficient and recommend that distance-based survey methods providing species-specific estimates of detection probabilities be incorporated into these survey methods.

  16. Comparison of Employer Factors in Disability and Other Employment Discrimination Charges

    ERIC Educational Resources Information Center

    Nazarov, Zafar E.; von Schrader, Sarah

    2014-01-01

    Purpose: We explore whether certain employer characteristics predict Americans with Disabilities Act (ADA) charges and whether the same characteristics predict receipt of the Age Discrimination in Employment Act and Title VII of the Civil Rights Act charges. Method: We estimate a set of multivariate regressions using the ordinary least squares…

  17. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.

  18. DETECTORS AND EXPERIMENTAL METHODS: Heuristic approach for peak regions estimation in gamma-ray spectra measured by a NaI detector

    NASA Astrophysics Data System (ADS)

    Zhu, Meng-Hua; Liu, Liang-Gang; You, Zhong; Xu, Ao-Ao

    2009-03-01

    In this paper, a heuristic approach based on Slavic's peak searching method has been employed to estimate the width of peak regions for background removal. Synthetic and experimental data are used to test this method. Using the peak regions estimated by the proposed method over the whole spectrum, we find it simple and effective enough to be used together with the Statistics-sensitive Nonlinear Iterative Peak-Clipping method.

  19. Comparison of Efficiency of Jackknife and Variance Component Estimators of Standard Errors. Program Statistics Research. Technical Report.

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    Large-scale surveys usually employ a complex sampling design and, as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as the jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…

  20. Comparison of Employer Productivity Metrics to Lost Productivity Estimated by Commonly Used Questionnaires.

    PubMed

    Gardner, Bethany T; Dale, Ann Marie; Buckner-Petty, Skye; Van Dillen, Linda; Amick, Benjamin C; Evanoff, Bradley

    2016-02-01

    The aim of the study was to assess construct and discriminant validity of four health-related work productivity loss questionnaires in relation to employer productivity metrics, and to describe variation in economic estimates of productivity loss provided by the questionnaires in healthy workers. Fifty-eight billing office workers completed surveys including health information and four productivity loss questionnaires. Employer productivity metrics and work hours were also obtained. Productivity loss questionnaires were weakly to moderately correlated with employer productivity metrics. Workers with more health complaints reported greater health-related productivity loss than healthier workers, but showed no loss on employer productivity metrics. Economic estimates of productivity loss showed wide variation among questionnaires, yet no loss of actual productivity. Additional studies are needed comparing questionnaires with objective measures in larger samples and other industries, to improve measurement methods for health-related productivity loss.

  21. Learning Multiple Band-Pass Filters for Sleep Stage Estimation: Towards Care Support for Aged Persons

    NASA Astrophysics Data System (ADS)

    Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo

    This paper proposes a sleep stage estimation method that can provide an accurate estimation for each person without connecting any devices to the human body. In particular, our method learns appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in varying states of health confirm the following implications: (1) the proposed method provides more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.

  22. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    USGS Publications Warehouse

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then applied to Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  23. Method and system for diagnostics of apparatus

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry (Inventor)

    2012-01-01

    Proposed is a method, implemented in software, for estimating fault state of an apparatus outfitted with sensors. At each execution period the method processes sensor data from the apparatus to obtain a set of parity parameters, which are further used for estimating fault state. The estimation method formulates a convex optimization problem for each fault hypothesis and employs a convex solver to compute fault parameter estimates and fault likelihoods for each fault hypothesis. The highest likelihoods and corresponding parameter estimates are transmitted to a display device or an automated decision and control system. The obtained accurate estimate of fault state can be used to improve safety, performance, or maintenance processes for the apparatus.

  24. Multilevel Modeling with Correlated Effects

    ERIC Educational Resources Information Center

    Kim, Jee-Seon; Frees, Edward W.

    2007-01-01

    When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…

  25. A Method for Estimating Zero-Flow Pressure and Intracranial Pressure

    PubMed Central

    Marzban, Caren; Illian, Paul Raymond; Morison, David; Moore, Anne; Kliot, Michel; Czosnyka, Marek; Mourad, Pierre

    2012-01-01

    Background: It has been hypothesized that critical closing pressure of cerebral circulation, or zero-flow pressure (ZFP), can estimate intracranial pressure (ICP). One ZFP estimation method employs extrapolation of arterial blood pressure versus blood-flow velocity. The aim of this study is to improve ICP predictions. Methods: Two revisions are considered: (1) the linear model employed for extrapolation is extended to a nonlinear equation, and (2) the parameters of the model are estimated by an alternative criterion (not least-squares). The method is applied to data on transcranial Doppler measurements of blood-flow velocity, arterial blood pressure, and ICP, from 104 patients suffering from closed traumatic brain injury, sampled across the United States and England. Results: The revisions lead to qualitative (e.g., precluding negative ICP) and quantitative improvements in ICP prediction. In going from the original to the revised method, the ±2 standard deviation of error is reduced from 33 to 24 mm Hg; the root-mean-squared error (RMSE) is reduced from 11 to 8.2 mm Hg. The distribution of RMSE is tighter as well; for the revised method the 25th and 75th percentiles are 4.1 and 13.7 mm Hg, respectively, as compared to 5.1 and 18.8 mm Hg for the original method. Conclusions: Proposed alterations to a procedure for estimating ZFP lead to more accurate and more precise estimates of ICP, thereby offering improved means of estimating it noninvasively. The quality of the estimates is inadequate for many applications, but further work is proposed which may lead to clinically useful results. PMID:22824923
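
    The extrapolation underlying ZFP estimation can be sketched as a regression of arterial blood pressure on blood-flow velocity, read off at zero flow. The linear variant below is the original method this record sets out to revise, not the revised nonlinear estimator; data are synthetic.

```python
# Sketch of linear ZFP extrapolation: regress ABP against flow velocity over
# a cardiac cycle and take the intercept as the zero-flow pressure.
import numpy as np

def zfp_linear(abp: np.ndarray, fv: np.ndarray) -> float:
    """Zero-flow pressure = intercept of the ABP-vs-velocity regression."""
    slope, intercept = np.polyfit(fv, abp, 1)
    return intercept  # pressure where extrapolated flow velocity is zero

rng = np.random.default_rng(1)
fv = 40 + 15 * np.sin(np.linspace(0, 2 * np.pi, 100))      # cm/s over a beat
abp = 12 + 1.5 * fv + rng.normal(0, 2.0, fv.size)          # mm Hg (synthetic)
print(f"estimated ZFP ~= {zfp_linear(abp, fv):.1f} mm Hg")  # proxy for ICP
```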

  26. Maternal Employment and Child Development: A Fresh Look Using Newer Methods

    ERIC Educational Resources Information Center

    Hill, Jennifer L.; Waldfogel, Jane; Brooks-Gunn, Jeanne; Han, Wen-Jui

    2005-01-01

    The employment rate for mothers with young children has increased dramatically over the past 25 years. Estimating the effects of maternal employment on children's development is challenged by selection bias and the missing data endemic to most policy research. To address these issues, this study uses propensity score matching and multiple…

  27. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
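
    The record's modified least squares is not spelled out in the abstract, so the sketch below instead shows Deming regression, a classical errors-in-variables fit that is likewise parameterized by the ratio of error variances; it illustrates how a variance ratio enters a fit, and is not the paper's estimator.

```python
# Deming regression: a straight-line fit when both x and y carry error,
# parameterized by delta = var(response error) / var(measurement error).
# Shown only to illustrate the variance-ratio idea; data are synthetic.
import numpy as np

def deming_fit(x: np.ndarray, y: np.ndarray, delta: float) -> tuple[float, float]:
    """Return (slope, intercept) given error-variance ratio delta."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

rng = np.random.default_rng(2)
truth = np.linspace(0, 10, 50)
x = truth + rng.normal(0, 0.5, truth.size)            # factor measured with error
y = 2 * truth + 1 + rng.normal(0, 1.0, truth.size)    # noisy response
slope, intercept = deming_fit(x, y, delta=(1.0 / 0.5) ** 2)
print(f"slope ~= {slope:.2f}, intercept ~= {intercept:.2f}")
```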

  28. Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation

    Treesearch

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine

    2002-01-01

    New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxiliary methods must be employed. We describe a two-stage procedure where the...

  29. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method incorporating the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM and serves as an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also offers favorable computational efficiency. PMID:29194393

  30. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection.

    PubMed

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-12-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method incorporating the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM and serves as an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also offers favorable computational efficiency.
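
    One concrete piece of the history-matching procedure in these two records is the implausibility screen. A common form, sketched below with a stand-in emulator in place of the trained GAM, keeps an input only if the emulated output lies within a few standard deviations of the observation; the cutoff I < 3 is a convention, not a value from the paper.

```python
# Implausibility screen used in history matching:
# I(x) = |z_obs - E[f(x)]| / sqrt(Var_emulator(x) + Var_observation).
# The emulator here is a stand-in function, not the paper's trained GAM.
import numpy as np

def implausibility(x, z_obs, emulator, emu_var, obs_var):
    return np.abs(z_obs - emulator(x)) / np.sqrt(emu_var(x) + obs_var)

emulator = lambda x: 2.0 * x + 1.0       # stand-in for the trained GAM
emu_var = lambda x: 0.05 + 0.0 * x       # stand-in emulator variance
candidates = np.linspace(0, 5, 1000)     # candidate parameter values
I = implausibility(candidates, z_obs=6.0, emulator=emulator,
                   emu_var=emu_var, obs_var=0.1)
non_implausible = candidates[I < 3.0]    # conventional cutoff I < 3
print(f"retained {non_implausible.size} of {candidates.size} candidate inputs")
```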

  31. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to high accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites must estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate their angular rate from blurred star images by employing a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization under the strict constraints faced by small satellites. The research studied the relationship between estimation accuracy and the parameters used to achieve an attitude rate estimation with a precision greater than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors that use optics systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems expected to arise with real small satellites by performing numerical simulations.

  32. Feature Grouping and Selection Over an Undirected Graph.

    PubMed

    Yang, Sen; Yuan, Lei; Lai, Ying-Cheng; Shen, Xiaotong; Wonka, Peter; Ye, Jieping

    2012-01-01

    High-dimensional regression/classification continues to be an important and challenging problem, especially when features are highly correlated. Feature selection, combined with additional structure information on the features, has been considered to be promising in promoting regression/classification performance. Graph-guided fused lasso (GFlasso) has recently been proposed to facilitate feature selection and graph structure exploitation when features exhibit certain graph structures. However, the formulation in GFlasso relies on pairwise sample correlations to perform feature grouping, which could introduce additional estimation bias. In this paper, we propose three new feature grouping and selection methods to resolve this issue. The first method employs a convex function to penalize the pairwise ℓ∞ norm of connected regression/classification coefficients, achieving simultaneous feature grouping and selection. The second method improves the first one by utilizing a non-convex function to reduce the estimation bias. The third one extends the second method, using a truncated ℓ1 regularization to further reduce the estimation bias. The proposed methods combine feature grouping and feature selection to enhance estimation accuracy. We employ the alternating direction method of multipliers (ADMM) and difference of convex functions (DC) programming to solve the proposed formulations. Our experimental results on synthetic data and two real datasets demonstrate the effectiveness of the proposed methods.

  33. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
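
    The covariance point in this record can be made concrete with first-order error propagation for a ratio. The sketch below compares the standard error of ϕ with and without the covariance term; all numbers are illustrative, and the paper's chevron-based derivation is not reproduced.

```python
# First-order error propagation for phi = DDG_ts / DDG_eq. Ignoring the
# covariance between the two DDG estimates (cov = 0) typically overstates
# the standard error, i.e. understates the precision of phi.
import math

def phi_and_se(ddg_ts: float, ddg_eq: float,
               var_ts: float, var_eq: float, cov: float) -> tuple[float, float]:
    phi = ddg_ts / ddg_eq
    # var(a/b) ~= (a/b)^2 * (var_a/a^2 + var_b/b^2 - 2*cov/(a*b))
    rel_var = var_ts / ddg_ts**2 + var_eq / ddg_eq**2 - 2 * cov / (ddg_ts * ddg_eq)
    return phi, abs(phi) * math.sqrt(max(rel_var, 0.0))

phi_indep = phi_and_se(1.2, 2.0, 0.04, 0.09, cov=0.0)
phi_dep = phi_and_se(1.2, 2.0, 0.04, 0.09, cov=0.05)  # positive covariance
print(f"independent: phi = {phi_indep[0]:.2f} +/- {phi_indep[1]:.2f}")
print(f"with cov:    phi = {phi_dep[0]:.2f} +/- {phi_dep[1]:.2f}")
```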

  34. Unbiased estimation of oceanic mean rainfall from satellite borne radiometer measurements

    NASA Technical Reports Server (NTRS)

    Mittal, M. C.

    1981-01-01

    The statistical properties of radar-derived rainfall obtained during the GARP Atlantic Tropical Experiment (GATE) are used to derive quantitative estimates of the spatial and temporal sampling errors associated with estimating rainfall from brightness temperature measurements such as would be obtained from a satellite-borne microwave radiometer employing a practical-size antenna aperture. A basis is provided for a method of correcting the so-called beam filling problem, i.e., the effect of nonuniformity of rainfall over the radiometer beamwidth. The method presented employs the statistical properties of the observations themselves, without need for physical assumptions beyond those associated with the radiative transfer model. The simulation results presented offer a validation of the estimation accuracy that can be achieved, and the graphs included permit evaluation of the effect of the antenna resolution on both the temporal and spatial sampling errors.

  35. Multiple input electrode gap controller

    DOEpatents

    Hysinger, C.L.; Beaman, J.J.; Melgaard, D.K.; Williamson, R.L.

    1999-07-27

    A method and apparatus for controlling vacuum arc remelting (VAR) furnaces by estimation of electrode gap based on a plurality of secondary estimates derived from furnace outputs. The estimation is preferably performed by Kalman filter. Adaptive gain techniques may be employed, as well as detection of process anomalies such as glows. 17 figs.

  36. Multiple input electrode gap controller

    DOEpatents

    Hysinger, Christopher L.; Beaman, Joseph J.; Melgaard, David K.; Williamson, Rodney L.

    1999-01-01

    A method and apparatus for controlling vacuum arc remelting (VAR) furnaces by estimation of electrode gap based on a plurality of secondary estimates derived from furnace outputs. The estimation is preferably performed by Kalman filter. Adaptive gain techniques may be employed, as well as detection of process anomalies such as glows.
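
    A minimal sketch of the fusion idea in these two patent records: a scalar Kalman filter on the electrode gap that folds in several noisy secondary gap estimates at each step. The process and measurement noise levels are hypothetical, and the adaptive-gain and glow-detection logic is omitted.

```python
# Scalar Kalman filter fusing multiple secondary gap estimates per step.
# Models and noise levels are hypothetical stand-ins.
import numpy as np

def kalman_step(gap: float, var: float, measurements: np.ndarray,
                meas_vars: np.ndarray, process_var: float = 0.01) -> tuple[float, float]:
    # Predict: gap assumed locally constant, uncertainty grows.
    var += process_var
    # Update: fold in each secondary estimate, weighted by its variance.
    for z, r in zip(measurements, meas_vars):
        k = var / (var + r)          # Kalman gain
        gap += k * (z - gap)
        var *= (1 - k)
    return gap, var

gap, var = 1.0, 1.0                  # initial guess (cm) and variance
rng = np.random.default_rng(3)
for _ in range(20):
    z = 1.5 + rng.normal(0, [0.2, 0.4])          # two noisy secondary estimates
    gap, var = kalman_step(gap, var, z, np.array([0.04, 0.16]))
print(f"fused gap estimate: {gap:.2f} cm (var {var:.4f})")
```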

  37. A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items

    ERIC Educational Resources Information Center

    Lee, Young-Sun

    2007-01-01

    This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…

  38. Estimation of Biomass and Canopy Height in Bermudagrass, Alfalfa, and Wheat Using Ultrasonic, Laser, and Spectral Sensors

    PubMed Central

    Pittman, Jeremy Joshua; Arnall, Daryl Brian; Interrante, Sindy M.; Moffet, Corey A.; Butler, Twain J.

    2015-01-01

    Non-destructive biomass estimation of vegetation has been performed via remote sensing as well as physical measurements. An effective method for estimating biomass must have accuracy comparable to the accepted standard of destructive removal. Estimation or measurement of height is commonly employed to create a relationship between height and mass. This study examined several types of ground-based mobile sensing strategies for forage biomass estimation. Forage production experiments consisting of alfalfa (Medicago sativa L.), bermudagrass [Cynodon dactylon (L.) Pers.], and wheat (Triticum aestivum L.) were employed to examine sensor biomass estimation (laser, ultrasonic, and spectral) as compared to physical measurements (plate meter and meter stick) and the traditional harvest method (clipping). Predictive models were constructed via partial least squares regression, and modeled estimates were compared to the physically measured biomass. Mean estimates separated by least significant difference were examined to evaluate differences between the physical measurements and sensor estimates for canopy height and biomass. Differences between methods were minimal (average percent error of 11.2% between predicted values and machine- and quadrat-harvested biomass values of 1.64 and 4.91 t·ha⁻¹, respectively), except at the lowest measured biomass (average percent error of 89% for harvester- and quadrat-harvested biomass < 0.79 t·ha⁻¹) and the greatest measured biomass (average percent error of 18% for harvester- and quadrat-harvested biomass > 6.4 t·ha⁻¹). These data suggest that mobile sensor-based biomass estimation models could be an effective alternative to the traditional clipping method for rapid, accurate in-field biomass estimation. PMID:25635415

  39. Estimation of the spatial autocorrelation function: consequences of sampling dynamic populations in space and time

    Treesearch

    Patrick C. Tobin

    2004-01-01

    The estimation of spatial autocorrelation in spatially- and temporally-referenced data is fundamental to understanding an organism's population biology. I used four sets of census field data, and developed an idealized space-time dynamic system, to study the behavior of spatial autocorrelation estimates when a practical method of sampling is employed. Estimates...

  40. A novel application of artificial neural network for wind speed estimation

    NASA Astrophysics Data System (ADS)

    Fang, Da; Wang, Jianzhou

    2017-05-01

    Providing accurate multi-step wind speed estimation models has increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANN). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach employs one of the ANNs to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed using the pre-processed samples. The proposed method is demonstrated in terms of the predictive improvements of the hybrid models compared with a single ANN and the typical forecasting method. To give sufficient cases for the study, four observation sites with monthly average wind speed for four given years in Western China were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.

  41. Reconstruction of an acoustic pressure field in a resonance tube by particle image velocimetry.

    PubMed

    Kuzuu, K; Hasegawa, S

    2015-11-01

    A technique for estimating an acoustic field in a resonance tube is suggested. The estimation of an acoustic field in a resonance tube is important for the development of thermoacoustic engines and can be conducted by employing two sensors to measure pressure. While this measurement technique is known as the two-sensor method, care must be taken with the location of the pressure sensors. In the present study, particle image velocimetry (PIV) is employed instead of pressure sensors, and two-dimensional velocity vector images are extracted as sequential data from a single recording made by the PIV video camera. The spatial velocity amplitude is obtained from those images, and a pressure distribution is calculated from the velocity amplitudes at two points by extending the equations derived for the two-sensor method. By this means, problems relating to the locations and calibrations of multiple pressure sensors are avoided. Furthermore, to verify the accuracy of the present method, experiments were conducted employing the conventional two-sensor method and laser Doppler velocimetry (LDV), and results from the proposed method are compared with those obtained with the two-sensor method and LDV.

  42. [Research & development on computer expert system for forensic bones estimation].

    PubMed

    Zhao, Jun-ji; Zhang, Jan-zheng; Liu, Nin-guo

    2005-08-01

    Objective: To build an expert system for forensic bone estimation. Methods: An object-oriented method was used, employing statistical data from forensic anthropology and combining frame-based knowledge representation of the statistical data with production rules, together with fuzzy matching and the Dempster-Shafer (DS) evidence theory method. Results: Software for forensic estimation of sex, age, and height with an open knowledge base was designed. Conclusion: The system is reliable and effective, and it would be a good assistant for forensic technicians.

  43. On the more accurate channel model and positioning based on time-of-arrival for visible light localization

    NASA Astrophysics Data System (ADS)

    Amini, Changeez; Taherpour, Abbas; Khattab, Tamer; Gazor, Saeed

    2017-01-01

    This paper presents an improved propagation channel model for visible light in indoor environments. We employ this model to derive an enhanced positioning algorithm using the relation between time-of-arrivals (TOAs) and distances, for two cases: assuming the transmitter-receiver vertical distance is either known or unknown. We propose two estimators, namely the maximum likelihood estimator and an estimator employing the method of moments. To provide an evaluation baseline for these methods, we calculate the Cramer-Rao lower bound (CRLB) on the performance of the estimators. We show that the proposed model and estimators yield superior positioning performance when the transmitter and receiver are perfectly synchronized, in comparison with existing state-of-the-art counterparts. Moreover, the corresponding CRLB of the proposed model represents roughly a 20 dB reduction in the localization error bound compared with the previous model for some practical scenarios.

  44. Absenteeism and Employer Costs Associated With Chronic Diseases and Health Risk Factors in the US Workforce

    PubMed Central

    Roy, Kakoli; Lang, Jason E.; Payne, Rebecca L.; Howard, David H.

    2016-01-01

    Introduction: Employers may incur costs related to absenteeism among employees who have chronic diseases or unhealthy behaviors. We examined the association between employee absenteeism and 5 conditions: 3 risk factors (smoking, physical inactivity, and obesity) and 2 chronic diseases (hypertension and diabetes). Methods: We identified 5 chronic diseases or risk factors from 2 data sources: MarketScan Health Risk Assessment and the Medical Expenditure Panel Survey (MEPS). Absenteeism was measured as the number of workdays missed because of sickness or injury. We used zero-inflated Poisson regression to estimate excess absenteeism as the difference in the number of days missed from work by those who reported having a risk factor or chronic disease and those who did not. Covariates included demographics (eg, age, education, sex) and employment variables (eg, industry, union membership). We quantified absenteeism costs in 2011 and adjusted them to reflect growth in employment costs to 2015 dollars. Finally, we estimated absenteeism costs for a hypothetical small employer (100 employees) and a hypothetical large employer (1,000 employees). Results: Absenteeism estimates ranged from 1 to 2 days per individual per year depending on the risk factor or chronic disease. Except for the physical inactivity and obesity estimates, disease- and risk-factor–specific estimates were similar in MEPS and MarketScan. Absenteeism increased with the number of risk factors or diseases reported. Nationally, each risk factor or disease was associated with annual absenteeism costs greater than $2 billion. Absenteeism costs ranged from $16 to $81 (small employer) and $17 to $286 (large employer) per employee per year. Conclusion: Absenteeism costs associated with chronic diseases and health risk factors can be substantial. Employers may incur these costs through lower productivity, and employees could incur costs through lower wages. PMID:27710764

  45. MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples.

    PubMed

    Sivaganesan, Mano; Siefring, Shawn; Varma, Manju; Haugland, Richard A

    2011-12-01

    DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from different studies by this approach, either a consistent source of calibrator cells must be used or the estimates must account for any differences in target sequence recoveries from different sources of calibrator cells. In this report we describe two methods for estimating target sequence recoveries from whole cell calibrator samples based on qPCR analyses of their serially diluted DNA extracts and most probable number (MPN) calculation. The first method employed a traditional MPN calculation approach. The second method employed a Bayesian hierarchical statistical modeling approach and a Markov chain Monte Carlo (MCMC) simulation method to account for the uncertainty in these estimates associated with different individual samples of the cell preparations, different dilutions of the DNA extracts and different qPCR analytical runs. The two methods were applied to estimate mean target sequence recoveries per cell from two different lots of a commercially available source of enumerated Enterococcus cell preparations. The mean target sequence recovery estimates (and standard errors) per cell from Lot A and B cell preparations by the Bayesian method were 22.73 (3.4) and 11.76 (2.4), respectively, when the data were adjusted for potential false positive results. Means were similar for the traditional MPN approach, which cannot comparably assess uncertainty in the estimates. Cell numbers and estimates of recoverable target sequences in calibrator samples prepared from the two cell sources were also used to estimate cell equivalent and target sequence quantities recovered from surface water samples in a comparative Ct method. Our results illustrate the utility of the Bayesian method in accounting for uncertainty, the high degree of precision attainable by the MPN approach and the need to account for the differences in target sequence recoveries from different calibrator sample cell sources when they are used in the comparative Ct method.
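
    The traditional MPN calculation mentioned in this record amounts to maximum-likelihood estimation from the pattern of positive and negative reactions across serial dilutions. Below is a sketch under assumed dilution volumes and outcomes; the paper's Bayesian hierarchical model is not reproduced.

```python
# MPN via maximum likelihood: find the target concentration lam that best
# explains the positives observed at each dilution, assuming each reaction
# is positive with probability 1 - exp(-lam * volume). Data are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

volumes = np.array([1.0, 0.1, 0.01])   # relative template volume per dilution
n_reps = np.array([5, 5, 5])           # replicate reactions per dilution
n_pos = np.array([5, 3, 1])            # observed positives per dilution

def neg_log_lik(log_lam: float) -> float:
    lam = np.exp(log_lam)
    p = 1.0 - np.exp(-lam * volumes)   # P(at least one copy in the reaction)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(n_pos * np.log(p) + (n_reps - n_pos) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(-5, 10), method="bounded")
print(f"MPN ~= {np.exp(res.x):.1f} target sequences per unit volume")
```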

  6. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Incorrect estimates are then removed according to spectral irregularity and knowledge of the harmonic structures of the notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.

  7. Estimating Pay Gaps for Workers with Disabilities: Implications from Broadening Definitions and Data Sets

    ERIC Educational Resources Information Center

    Hallock, Kevin F.; Jin, Xin; Barrington, Linda

    2014-01-01

    Purpose: To compare pay gap estimates across 3 different national survey data sets for people with disabilities relative to those without disabilities when pay is measured as wage and salary alone versus a (total compensation) definition that includes an estimate of the value of benefits. Method: Estimates of the cost to the employers of employee…

  8. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    NASA Astrophysics Data System (ADS)

    Gorgees, Hazim Mansoor; Mahdi, Fatimah Assim

    2018-05-01

    This article compares the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation, we employ data obtained from the tagi gas filling company during the period 2008-2010. The main result is that the method based on the condition number performs better than the other stated methods, since it has a smaller mean square error (MSE).
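
    A minimal sketch of a condition-number-driven choice of the ridge parameter is given below; the design matrix is simulated to be nearly collinear (the company data are not available here), and the threshold of 100 is an illustrative choice rather than one taken from the article:

        import numpy as np

        rng = np.random.default_rng(0)
        x1 = rng.normal(size=50)
        X = np.column_stack([x1,                       # nearly collinear pair
                             x1 + 0.01 * rng.normal(size=50),
                             rng.normal(size=50)])
        y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=50)

        XtX, I = X.T @ X, np.eye(X.shape[1])

        # Smallest k that brings cond(X'X + kI) under the chosen threshold
        k_star = next(k for k in np.logspace(-4, 2, 200)
                      if np.linalg.cond(XtX + k * I) <= 100)
        beta_ridge = np.linalg.solve(XtX + k_star * I, X.T @ y)
        print("chosen k:", k_star, "ridge coefficients:", beta_ridge)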

  9. Taking up or turning down: new estimates of household demand for employer-sponsored health insurance.

    PubMed

    Abraham, Jean Marie; Feldman, Roger

    2010-01-01

    This study provides new estimates of demand for employer-sponsored health insurance, using the 1997-2001 linked Household Component-Insurance Component of the Medical Expenditure Panel Survey (MEPS). Our focus is on households' decisions to take up coverage through a worker's employer. We found a significant inverse relationship between the out-of-pocket premium and the probability of taking up coverage, with the price effect considerably larger when we used instrumental variables methods to account for endogenous out-of-pocket premiums. Additionally, workers in families with more children eligible for Medicaid were less likely to take up coverage.

  10. Optimizing hidden layer node number of BP network to estimate fetal weight

    NASA Astrophysics Data System (ADS)

    Su, Juan; Zou, Yuanwen; Lin, Jiangli; Wang, Tianfu; Li, Deyu; Xie, Tao

    2007-12-01

    The ultrasonic estimation of fetal weight before delivery is of great significance in obstetrical clinics. Estimating fetal weight more accurately is crucial for prenatal care, obstetrical treatment, choosing appropriate delivery methods, monitoring fetal growth, and reducing the risk of newborn complications. In this paper, we introduce a method that combines golden section search and an artificial neural network (ANN) to estimate fetal weight. The golden section search is employed to optimize the hidden layer node number of the back propagation (BP) neural network. The method greatly improves the accuracy of fetal weight estimation and simultaneously avoids choosing the hidden layer node number by subjective experience. The estimation coincidence rate achieves 74.19%, and the mean absolute error is 185.83 g.
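
    The golden-section idea is simply a bracketing search over the integer hidden-node count, scored by validation error. A hedged sketch follows, with simulated data and sklearn's MLPRegressor standing in for the paper's BP network:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 5))        # stand-ins for ultrasound measures
        y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=300)
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

        def val_error(n_hidden):
            net = MLPRegressor(hidden_layer_sizes=(int(n_hidden),),
                               max_iter=2000, random_state=0).fit(X_tr, y_tr)
            return np.mean(np.abs(net.predict(X_va) - y_va))

        phi = (np.sqrt(5) - 1) / 2           # golden-section ratio
        a, b = 2, 40                         # search range for node count
        while b - a > 1:
            c = round(b - phi * (b - a))
            d = round(a + phi * (b - a))
            if val_error(c) < val_error(d):
                b = d
            else:
                a = c
        print("selected hidden-node count:", a)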

  11. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Park, E.; Choi, J.; Han, W. S.; Yun, S. T.

    2016-12-01

    A subagging regression (SBR) method for the analysis of groundwater data, pertaining to the estimation of trends and the associated uncertainty, is proposed. The SBR method is validated on synthetic data in comparison with other conventional robust and non-robust methods. The results verify that the estimation accuracy of the SBR method is consistent and superior to that of the other methods, and that the uncertainties are reasonably estimated, whereas the other methods offer no uncertainty analysis option. For further validation, real quantitative and qualitative data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by SBR, whereas GPR has limitations in representing the variability of non-Gaussian skewed data. From these implementations, it is determined that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data.
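
    Subagging itself is easy to sketch: fit many base learners on subsamples drawn without replacement and aggregate. The band below (the spread of member predictions) is a heuristic stand-in for the paper's uncertainty measure, and the groundwater-level series is simulated:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0, 10, 200))                 # sampling times
        y = 0.3 * t + np.sin(t) + rng.normal(0, 0.5, 200)    # noisy level data

        n, m, B = len(t), 100, 200      # n samples; subsample size m; B members
        preds = np.empty((B, n))
        for b in range(B):
            idx = rng.choice(n, size=m, replace=False)       # subsample
            tree = DecisionTreeRegressor(max_depth=3).fit(t[idx, None], y[idx])
            preds[b] = tree.predict(t[:, None])

        trend = preds.mean(axis=0)                           # subagged trend
        lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)   # member spread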

  12. 15 CFR 90.8 - Evidence required.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., DEPARTMENT OF COMMERCE PROCEDURE FOR CHALLENGING POPULATION ESTIMATES § 90.8 Evidence required. (a) The... the criteria, standards, and regular processes the Census Bureau employs to generate the population... uses a cohort-component of change method to produce population estimates. Each year, the components of...

  13. Formula Feedback and Central Cities: The Case of the Comprehensive Employment and Training Act.

    ERIC Educational Resources Information Center

    Jones, E. Terrence; Phares, Donald

    1978-01-01

    This study critically examines the measurement of the Comprehensive Employment and Training Act's key allocation variable, unemployment. The analysis indicates that (1) unemployment rates are higher than government estimates and (2) the methods used to measure state and local unemployment have several weaknesses. (Author/RLV)

  14. Future Labor Market Demand and Vocational Education.

    ERIC Educational Resources Information Center

    Goldstein, Harold

    Review of the methods for estimating future employment opportunities shows that there is an ongoing system, involving the Department of Labor and state employment agencies, for making projections for the United States as a whole and for states and major metropolitan areas. This system combines national research on economic growth, technological…

  15. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  16. Attitude determination and parameter estimation using vector observations - Theory

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
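
    For reference, the q-method step described above reduces to an eigen-decomposition of Davenport's K matrix; the sketch below implements that step only (not the iterative parameter loop or the covariance expressions of the paper):

        import numpy as np

        def skew(v):
            return np.array([[0, -v[2], v[1]],
                             [v[2], 0, -v[0]],
                             [-v[1], v[0], 0]])

        def q_method(body_vecs, ref_vecs, weights):
            """Optimal attitude for Wahba's problem via Davenport's q-method."""
            B = sum(w * np.outer(b, r)
                    for w, b, r in zip(weights, body_vecs, ref_vecs))
            z = sum(w * np.cross(b, r)
                    for w, b, r in zip(weights, body_vecs, ref_vecs))
            K = np.zeros((4, 4))
            K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
            K[:3, 3] = K[3, :3] = z
            K[3, 3] = np.trace(B)
            vals, vecs = np.linalg.eigh(K)
            q = vecs[:, np.argmax(vals)]          # optimal quaternion [qv, q4]
            qv, q4 = q[:3], q[3]
            # Attitude matrix mapping reference vectors into the body frame
            return ((q4**2 - qv @ qv) * np.eye(3)
                    + 2 * np.outer(qv, qv) - 2 * q4 * skew(qv))

    Note that, consistent with the abstract, no a priori attitude estimate enters this computation.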

  17. Application of Kalman filter in frequency offset estimation for coherent optical quadrature phase-shift keying communication system

    NASA Astrophysics Data System (ADS)

    Jiang, Wen; Yang, Yanfu; Zhang, Qun; Sun, Yunxu; Zhong, Kangping; Zhou, Xian; Yao, Yong

    2016-09-01

    Frequency offset estimation (FOE) schemes based on the Kalman filter are proposed and investigated in detail via numerical simulation and experiment. The schemes consist of a modulation-phase-removal stage and a Kalman filter estimation stage. In the second stage, the Kalman filters are employed for tracking either the differential angles or the differential data between two successive symbols. Several implementations of the proposed FOE scheme are compared by employing different modulation-removal methods and two Kalman algorithms. The optimal FOE implementation is suggested for different operating conditions, including the optical signal-to-noise ratio and the number of available data symbols.
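
    A minimal single-polarization sketch of this two-stage idea for QPSK is shown below: the 4th-power operation removes the modulation phase, and a scalar random-walk Kalman filter tracks the differential angle between successive symbols. The symbol rate, offset and noise level are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(3)
        N, f_off, Ts = 4000, 1e6, 1 / 28e9        # 1 MHz offset, 28 GBd assumed
        data = np.exp(1j * (np.pi/4 + np.pi/2 * rng.integers(0, 4, N)))
        rx = (data * np.exp(1j * 2*np.pi*f_off*Ts*np.arange(N))
              + 0.05 * (rng.normal(size=N) + 1j*rng.normal(size=N)))

        # Stage 1: remove the QPSK modulation phase (4th-power method)
        s4 = rx**4
        z = np.angle(s4[1:] * np.conj(s4[:-1]))   # differential angles

        # Stage 2: scalar Kalman filter on the differential angle
        x, P, Q, R = 0.0, 1.0, 1e-8, np.var(z)
        for zk in z:
            P += Q                                 # predict (random-walk model)
            K = P / (P + R)                        # Kalman gain
            x += K * (zk - x)                      # update
            P *= (1 - K)
        print(f"estimated offset: {x / (4 * 2*np.pi*Ts) / 1e6:.3f} MHz")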

  18. Analysis of longitudinal marginal structural models.

    PubMed

    Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J

    2004-07-01

    In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.

  19. VO2 estimation using 6-axis motion sensor with sports activity classification.

    PubMed

    Nagata, Takashi; Nakamura, Naoteru; Miyatake, Masato; Yuuki, Akira; Yomo, Hiroyuki; Kawabata, Takashi; Hara, Shinsuke

    2016-08-01

    In this paper, we focus on oxygen consumption (VO2) estimation using a 6-axis motion sensor (3-axis accelerometer and 3-axis gyroscope) for people playing sports with diverse intensities. The VO2 estimated with a small motion sensor can be used to calculate energy expenditure; however, its accuracy depends on the intensities of the various types of activities. In order to achieve high accuracy over a wide range of intensities, we employ an estimation framework that first classifies activities with a simple machine-learning based classification algorithm. We prepare different coefficients of a linear regression model for different types of activities, determined with training data obtained by experiments. The best-suited model is then used for each type of activity when VO2 is estimated. The accuracy of the employed framework depends on the trade-off between the degradation due to classification errors and the improvement brought by applying a separate, optimal model to VO2 estimation. Taking this trade-off into account, we evaluate the accuracy of the employed estimation framework using a set of experimental data consisting of VO2 and motion data of people over a wide range of exercise intensities, measured by a VO2 meter and a motion sensor, respectively. Our numerical results show that the employed framework can improve the estimation accuracy in comparison to a reference method that uses a common regression model for all types of activities.
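
    The classify-then-regress framework can be sketched in a few lines; the features, labels and VO2 values below are simulated placeholders, with a random forest standing in for the paper's unspecified simple classifier:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(4)
        activity = rng.integers(0, 3, 600)            # 3 activity types
        X = rng.normal(size=(600, 6))                 # 6-axis motion features
        X[:, 0] += 2.0 * activity                     # intensity-like feature
        vo2 = 5 + 4*activity + 2*X[:, 1] + 0.5*rng.normal(size=600)

        clf = RandomForestClassifier(random_state=0).fit(X, activity)
        models = {a: LinearRegression().fit(X[activity == a], vo2[activity == a])
                  for a in np.unique(activity)}       # one model per activity

        def estimate_vo2(x):
            a = clf.predict(x.reshape(1, -1))[0]      # stage 1: classify
            return models[a].predict(x.reshape(1, -1))[0]  # stage 2: regress

        print(estimate_vo2(X[0]))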

  20. Dental age estimation employing CBCT scans enhanced with Mimics software: Comparison of two different approaches using pulp/tooth volumetric analysis.

    PubMed

    Asif, Muhammad Khan; Nambiar, Phrabhakaran; Mani, Shani Ann; Ibrahim, Norliza Binti; Khan, Iqra Muhammad; Sukumaran, Prema

    2018-02-01

    The methods of dental age estimation and identification of unknown deceased individuals are evolving with the introduction of advanced innovative imaging technologies in forensic investigations. However, assessing small structures like root canal volumes can be challenging in spite of using highly advanced technology. The aim of the study was to investigate which of the two methods of volumetric analysis of maxillary central incisors displayed the higher strength of correlation between chronological age and pulp/tooth volume ratio for Malaysian adults. Volumetric analysis of the pulp cavity/tooth ratio was employed in Method 1, and the pulp chamber/crown ratio (up to the cemento-enamel junction) was analysed in Method 2. The images were acquired employing CBCT scans and enhanced by manipulating them with the Mimics software. These scans belonged to 56 males and 54 females whose ages ranged from 16 to 65 years. Pearson correlation and regression analysis indicated that both methods used for volumetric measurements showed strong correlation between chronological age and pulp/tooth volume ratio. However, Method 2 gave a higher coefficient of determination (R² = 0.78) than Method 1 (R² = 0.64). Moreover, manipulation in Method 2 was less time consuming and revealed higher inter-examiner reliability (0.982), as no manual intervention during the 'multiple slice editing phase' of the software was required. In conclusion, this study showed that volumetric analysis of the pulp cavity/tooth ratio is a valuable gender-independent technique, and the Method 2 regression equation should be recommended for dental age estimation. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  1. [Employment "survival" among nursing workers at a public hospital].

    PubMed

    Anselmi, M L; Duarte, G G; Angerami, E L

    2001-07-01

    This study aimed at estimating the employment "survival" time of nursing workers after their admission to a public hospital, as a turnover index. The Life Table method was used to calculate the probability of remaining employed for X years for each of the categories of workers. The results showed a marked turnover of the work force in the period studied. The nursing auxiliary and nurse categories presented low employment stability, while the nursing technician category was more stable.

  2. Logistic regression for circular data

    NASA Astrophysics Data System (ADS)

    Al-Daffaie, Kadhem; Khan, Shahjahan

    2017-05-01

    This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
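
    The linear-circular embedding amounts to entering cos(theta) and sin(theta) as covariates and maximizing the likelihood by Newton-Raphson, as the abstract describes. A self-contained sketch on simulated data (the Toowoomba records are not reproduced here):

        import numpy as np

        def fit_circular_logistic(theta, y, n_iter=25):
            """ML fit of logit P(y=1) = b0 + b1*cos(theta) + b2*sin(theta)."""
            X = np.column_stack([np.ones_like(theta),
                                 np.cos(theta), np.sin(theta)])
            beta = np.zeros(3)
            for _ in range(n_iter):
                p = 1.0 / (1.0 + np.exp(-X @ beta))
                W = p * (1.0 - p)                      # IRLS weights
                # Newton-Raphson step: (X'WX)^{-1} X'(y - p)
                beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
            return beta

        rng = np.random.default_rng(5)
        theta = rng.uniform(0, 2*np.pi, 400)           # circular predictor
        p_true = 1 / (1 + np.exp(-(0.2 + 1.5*np.cos(theta - 1.0))))
        y = rng.binomial(1, p_true)                    # binary response
        print(fit_circular_logistic(theta, y))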

  3. Comparing exposure assessment methods for traffic-related air pollution in an adverse pregnancy outcome study

    PubMed Central

    Wu, Jun; Wilhelm, Michelle; Chung, Judith; Ritz, Beate

    2011-01-01

    Background Previous studies reported adverse impacts of traffic-related air pollution exposure on pregnancy outcomes. Yet, little information exists on how effect estimates are impacted by the different exposure assessment methods employed in these studies. Objectives To compare effect estimates for traffic-related air pollution exposure and preeclampsia, preterm birth (gestational age less than 37 weeks), and very preterm birth (gestational age less than 30 weeks) based on four commonly-used exposure assessment methods. Methods We identified 81,186 singleton births during 1997–2006 at four hospitals in Los Angeles and Orange Counties, California. Exposures were assigned to individual subjects based on residential address at delivery using the nearest ambient monitoring station data [carbon monoxide (CO), nitrogen dioxide (NO2), nitric oxide (NO), nitrogen oxides (NOx), ozone (O3), and particulate matter less than 2.5 (PM2.5) or less than 10 (PM10) μm in aerodynamic diameter], both unadjusted and temporally-adjusted land-use regression (LUR) model estimates (NO, NO2, and NOx), CALINE4 line-source air dispersion model estimates (NOx and PM2.5), and a simple traffic-density measure. We employed unconditional logistic regression to analyze preeclampsia in our birth cohort, while for gestational age-matched risk sets with preterm and very preterm birth we employed conditional logistic regression. Results We observed elevated risks for preeclampsia, preterm birth, and very preterm birth from maternal exposures to traffic air pollutants measured at ambient stations (CO, NO, NO2, and NOx) and modeled through CALINE4 (NOx and PM2.5) and LUR (NO2 and NOx). Increased risk of preterm birth and very preterm birth were also positively associated with PM10 and PM2.5 air pollution measured at ambient stations. For LUR-modeled NO2 and NOx exposures, elevated risks for all the outcomes were observed in Los Angeles only – the region for which the LUR models were initially developed. Unadjusted LUR models often produced odds ratios somewhat larger in size than temporally-adjusted models. The size of effect estimates was smaller for exposures based on simpler traffic density measures than the other exposure assessment methods. Conclusion We generally confirmed that traffic-related air pollution was associated with adverse reproductive outcomes regardless of the exposure assessment method employed, yet the size of the estimated effect depended on how both temporal and spatial variations were incorporated into exposure assessment. The LUR model was not transferable even between two contiguous areas within the same large metropolitan area in Southern California. PMID:21453913

  4. Adaptive UAV Attitude Estimation Employing Unscented Kalman Filter, FOAM and Low-Cost MEMS Sensors

    PubMed Central

    de Marina, Héctor García; Espinosa, Felipe; Santos, Carlos

    2012-01-01

    Navigation employing low cost MicroElectroMechanical Systems (MEMS) sensors in Unmanned Aerial Vehicles (UAVs) is an emerging challenge. One important part of this navigation is the correct estimation of the attitude angles. Most existing algorithms handle the sensor readings in a fixed way, leading to large errors in certain mission stages, such as take-off or aerobatic maneuvers. This paper presents an adaptive method to estimate these angles using off-the-shelf components. It introduces an Attitude Heading Reference System (AHRS) based on the Unscented Kalman Filter (UKF), using the Fast Optimal Attitude Matrix (FOAM) algorithm as the observation model. The performance of the method is assessed through simulations. Moreover, field experiments are presented using a real fixed-wing UAV. The proposed low cost solution, implemented in a microcontroller, shows satisfactory real time performance. PMID:23012559

  5. The a priori SDR Estimation Techniques with Reduced Speech Distortion for Acoustic Echo and Noise Suppression

    NASA Astrophysics Data System (ADS)

    Thoonsaengngam, Rattapol; Tangsangiumvisai, Nisachon

    This paper proposes an enhanced method for estimating the a priori Signal-to-Disturbance Ratio (SDR) to be employed in the Acoustic Echo and Noise Suppression (AENS) system for full-duplex hands-free communications. The proposed a priori SDR estimation technique is modified based upon the Two-Step Noise Reduction (TSNR) algorithm to suppress the background noise while preserving speech spectral components. In addition, a practical approach to determine accurately the Echo Spectrum Variance (ESV) is presented based upon the linear relationship assumption between the power spectrum of far-end speech and acoustic echo signals. The ESV estimation technique is then employed to alleviate the acoustic echo problem. The performance of the AENS system that employs these two proposed estimation techniques is evaluated through the Echo Attenuation (EA), Noise Attenuation (NA), and two speech distortion measures. Simulation results based upon real speech signals guarantee that our improved AENS system is able to mitigate efficiently the problem of acoustic echo and background noise, while preserving the speech quality and speech intelligibility.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chhiber, R; Usmanov, AV; Matthaeus, WH

    Simple estimates of the number of Coulomb collisions experienced by the interplanetary plasma to the point of observation, i.e., the “collisional age”, can be usefully employed in the study of non-thermal features of the solar wind. Usually these estimates are based on local plasma properties at the point of observation. Here we improve the method of estimation of the collisional age by employing solutions obtained from global three-dimensional magnetohydrodynamics simulations. This enables evaluation of the complete analytical expression for the collisional age without using approximations. The improved estimation of the collisional timescale is compared with turbulence and expansion timescales to assess the relative importance of collisions. The collisional age computed using the approximate formula employed in previous work is compared with the improved simulation-based calculations to examine the validity of the simplified formula. We also develop an analytical expression for the evaluation of the collisional age and we find good agreement between the numerical and analytical results. Finally, we briefly discuss the implications for an improved estimation of collisionality along spacecraft trajectories, including Solar Probe Plus.

  7. A Robust Real Time Direction-of-Arrival Estimation Method for Sequential Movement Events of Vehicles.

    PubMed

    Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang

    2018-03-27

    Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals for their high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.

  8. Nursing Home Work Practices and Nursing Assistants' Job Satisfaction

    ERIC Educational Resources Information Center

    Bishop, Christine E.; Squillace, Marie R.; Meagher, Jennifer; Anderson, Wayne L.; Wiener, Joshua M.

    2009-01-01

    Purpose: To estimate the impact of nursing home work practices, specifically compensation and working conditions, on job satisfaction of nursing assistants employed in nursing homes. Design and Methods: Data are from the 2004 National Nursing Assistant Survey, responses by the nursing assistants' employers to the 2004 National Nursing Home Survey,…

  9. US hardwood lumber consumption and international trade from 1991 to 2014

    Treesearch

    William G. Luppold; Matt Bumgardner

    2016-01-01

    Apparent US hardwood lumber consumption (developed from production, import, and export data) was contrasted with estimated consumption based on employment data and lumber utilization coefficients. The two methods of measuring domestic consumption provided similar results, but the use of employment data allowed for a comparison of appearance lumber vs industrial lumber...

  10. Making the Ineffable Explicit: Estimating the Information Employed for Face Classifications

    ERIC Educational Resources Information Center

    Mangini, Michael C.; Biederman, Irving

    2004-01-01

    When we look at a face, we readily perceive that person's gender, expression, identity, age, and attractiveness. Perceivers as well as scientists have hitherto had little success in articulating just what information we are employing to achieve these subjectively immediate and effortless classifications. We describe here a method that estimates…

  11. The Effect of Environmental Regulation on Employment in Resource-Based Areas of China-An Empirical Research Based on the Mediating Effect Model.

    PubMed

    Cao, Wenbin; Wang, Hui; Ying, Huihui

    2017-12-19

    While environmental pollution is becoming more and more serious, many countries are adopting policies to control pollution. At the same time, the environmental regulation will inevitably affect economic and social development, especially employment growth. The environmental regulation will not only affect the scale of employment directly, but it will also have indirect effects by stimulating upgrades in the industrial structure and in technological innovation. This paper examines the impact of environmental regulation on employment, using a mediating model based on the data from five typical resource-based provinces in China from 2000 to 2015. The estimation is performed based on the system GMM (Generalized Method of Moments) estimator. The results show that the implementation of environmental regulation in resource-based areas has both a direct effect and a mediating effect on employment. These findings provide policy implications for these resource-based areas to promote the coordinating development between the environment and employment.

  12. The Effect of Environmental Regulation on Employment in Resource-Based Areas of China—An Empirical Research Based on the Mediating Effect Model

    PubMed Central

    Cao, Wenbin; Wang, Hui; Ying, Huihui

    2017-01-01

    While environmental pollution is becoming more and more serious, many countries are adopting policies to control pollution. At the same time, the environmental regulation will inevitably affect economic and social development, especially employment growth. The environmental regulation will not only affect the scale of employment directly, but it will also have indirect effects by stimulating upgrades in the industrial structure and in technological innovation. This paper examines the impact of environmental regulation on employment, using a mediating model based on the data from five typical resource-based provinces in China from 2000 to 2015. The estimation is performed based on the system GMM (Generalized Method of Moments) estimator. The results show that the implementation of environmental regulation in resource-based areas has both a direct effect and a mediating effect on employment. These findings provide policy implications for these resource-based areas to promote the coordinating development between the environment and employment. PMID:29257068

  13. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method appears advantageous, although it requires more measurements than the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  14. A Simple Method for Estimating Informative Node Age Priors for the Fossil Calibration of Molecular Divergence Time Analyses

    PubMed Central

    Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.

    2013-01-01

    Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303

  15. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameters estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least square algorithm and the off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended-Kalman-filter (EKF) is applied to on-line estimate battery SOC and model parameters. Considering that the EKF is essentially a first-order Taylor approximation of battery model, which contains inevitable model errors, thus, a proportional integral-based error adjustment technique is employed to improve the performance of EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide robust and accurate battery model and on-line parameter estimation.

  16. Missing-value estimation using linear and non-linear regression with Bayesian gene selection.

    PubMed

    Zhou, Xiaobo; Wang, Xiaodong; Dougherty, Edward R

    2003-11-22

    Data from microarray experiments are usually in the form of large matrices of expression levels of genes under different experimental conditions. Owing to various reasons, there are frequently missing values. Estimating these missing values is important because they affect downstream analysis, such as clustering, classification and network design. Several methods of missing-value estimation are in use. The problem has two parts: (1) selection of genes for estimation and (2) design of an estimation rule. We propose Bayesian variable selection to obtain genes to be used for estimation, and employ both linear and nonlinear regression for the estimation rule itself. Fast implementation issues for these methods are discussed, including the use of QR decomposition for parameter estimation. The proposed methods are tested on data sets arising from hereditary breast cancer and small round blue-cell tumors. The results compare very favorably with currently used methods based on the normalized root-mean-square error. The appendix is available from http://gspsnap.tamu.edu/gspweb/zxb/missing_zxb/ (user: gspweb; passwd: gsplab).

  17. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.

    PubMed

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1(-)) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct image. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla

    NASA Astrophysics Data System (ADS)

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1-) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct image. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI.

  19. An Analysis of Costs in Institutions of Higher Education in England

    ERIC Educational Resources Information Center

    Johnes, Geraint; Johnes, Jill; Thanassoulis, Emmanuel

    2008-01-01

    Cost functions are estimated, using random effects and stochastic frontier methods, for English higher education institutions. The article advances on existing literature by employing finer disaggregation by subject, institution type and location, and by introducing consideration of quality effects. Estimates are provided of average incremental…

  20. Getting a Job is Only Half the Battle: Maternal Job Loss and Child Classroom Behavior in Low-Income Families

    PubMed Central

    Hill, Heather D.; Morris, Pamela A.; Castells, Nina; Walker, Jessica Thornton

    2011-01-01

    This study uses data from an experimental employment program and instrumental variables (IV) estimation to examine the effects of maternal job loss on child classroom behavior. Random assignment to the treatment at one of three program sites is an exogenous predictor of employment patterns. Cross-site variation in treatment-control differences is used to identify the effects of employment levels and transitions. Under certain assumptions, this method controls for unobserved correlates of job loss and child well-being, as well as measurement error and simultaneity. IV estimates suggest that maternal job loss sharply increases problem behavior but has neutral effects on positive social behavior. Current employment programs concentrate primarily on job entry, but these findings point to the importance of promoting job stability for workers and their children. PMID:22162901

  1. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.

  2. Grip Force and 3D Push-Pull Force Estimation Based on sEMG and GRNN

    PubMed Central

    Wu, Changcheng; Zeng, Hong; Song, Aiguo; Xu, Baoguo

    2017-01-01

    The estimation of the grip force and the 3D push-pull force (push and pull force in three-dimensional space) from the electromyogram (EMG) signal is of great importance in the dexterous control of the EMG prosthetic hand. In this paper, an action force estimation method based on eight channels of surface EMG (sEMG) and the Generalized Regression Neural Network (GRNN) is proposed to meet the requirements of the force control of the intelligent EMG prosthetic hand. First, the experimental platform, the acquisition of the sEMG, the feature extraction of the sEMG and the construction of the GRNN are described. Then, multiple channels of sEMG produced while the hand is moving are captured by EMG sensors attached at eight different positions on the arm skin surface. Meanwhile, a grip force sensor and a three-dimensional force sensor are adopted to measure the output force of the human hand. The characteristic matrix of the sEMG and the force signals are used to construct the GRNN. The mean absolute value and the root mean square of the estimation errors, as well as the correlation coefficients between the actual force and the estimated force, are employed to assess the accuracy of the estimation. Analysis of variance (ANOVA) is also employed to test for differences in the force estimation. Experiments are implemented to verify the effectiveness of the proposed estimation method, and the results show that the output force of the human hand can be correctly estimated using the sEMG and GRNN method. PMID:28713231
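
    At its core, a GRNN is a Gaussian-kernel weighted average of the training targets, which makes the regression stage short to sketch. The sEMG features and forces below are simulated placeholders, and the smoothing parameter is an arbitrary assumption:

        import numpy as np

        def grnn_predict(X_train, y_train, X_query, sigma=0.5):
            """GRNN: Gaussian-kernel weighted average over training patterns."""
            d2 = ((X_query[:, None, :] - X_train[None, :, :])**2).sum(-1)
            w = np.exp(-d2 / (2.0 * sigma**2))      # pattern-layer activations
            return (w @ y_train) / w.sum(axis=1)    # normalized weighted sum

        rng = np.random.default_rng(6)
        emg = rng.normal(size=(200, 8))             # 8-channel sEMG features
        force = emg @ rng.normal(size=8) + 0.1 * rng.normal(size=200)
        print(grnn_predict(emg, force, emg[:3]))    # estimates for 3 queries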

  3. Grip Force and 3D Push-Pull Force Estimation Based on sEMG and GRNN.

    PubMed

    Wu, Changcheng; Zeng, Hong; Song, Aiguo; Xu, Baoguo

    2017-01-01

    The estimation of the grip force and the 3D push-pull force (push and pull force in three-dimensional space) from the electromyogram (EMG) signal is of great importance in the dexterous control of the EMG prosthetic hand. In this paper, an action force estimation method based on eight channels of surface EMG (sEMG) and the Generalized Regression Neural Network (GRNN) is proposed to meet the requirements of the force control of the intelligent EMG prosthetic hand. First, the experimental platform, the acquisition of the sEMG, the feature extraction of the sEMG and the construction of the GRNN are described. Then, multiple channels of sEMG produced while the hand is moving are captured by EMG sensors attached at eight different positions on the arm skin surface. Meanwhile, a grip force sensor and a three-dimensional force sensor are adopted to measure the output force of the human hand. The characteristic matrix of the sEMG and the force signals are used to construct the GRNN. The mean absolute value and the root mean square of the estimation errors, as well as the correlation coefficients between the actual force and the estimated force, are employed to assess the accuracy of the estimation. Analysis of variance (ANOVA) is also employed to test for differences in the force estimation. Experiments are implemented to verify the effectiveness of the proposed estimation method, and the results show that the output force of the human hand can be correctly estimated using the sEMG and GRNN method.

  4. A robust nonparametric framework for reconstruction of stochastic differential equation models

    NASA Astrophysics Data System (ADS)

    Rajabzadeh, Yalda; Rezaie, Amir Hossein; Amindavar, Hamidreza

    2016-05-01

    In this paper, we employ a nonparametric framework to robustly estimate the functional forms of drift and diffusion terms from discrete stationary time series. The proposed method significantly improves the accuracy of the parameter estimation. In this framework, the drift and diffusion coefficients are modeled through orthogonal Legendre polynomials. We employ the least squares regression approach along with the Euler-Maruyama approximation method to learn the coefficients of the stochastic model. Next, a numerical discrete construction of the mean squared prediction error (MSPE) is established to select the order of the Legendre polynomials in the drift and diffusion terms. We show numerically that the new method is robust against variation in sample size and sampling rate. The performance of our method in comparison with the kernel-based regression (KBR) method is demonstrated through simulation and real data. In the case of real data, we test our method on discriminating healthy electroencephalogram (EEG) signals from epileptic ones. We also demonstrate the efficiency of the method through prediction on financial data. In both simulation and real data, our algorithm outperforms the KBR method.
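
    The estimation step itself is two least-squares fits on Euler-Maruyama moments with a Legendre design matrix. A sketch on a simulated Ornstein-Uhlenbeck path (drift -x, diffusion 0.5), with the polynomial order fixed at 3 rather than chosen by the MSPE criterion of the paper:

        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(7)
        dt, n = 1e-3, 100000
        x = np.empty(n); x[0] = 0.0
        for k in range(n - 1):                       # Euler-Maruyama simulation
            x[k+1] = x[k] - x[k]*dt + 0.5*np.sqrt(dt)*rng.normal()

        dx, u = np.diff(x), x[:-1]
        s = 2*(u - u.min())/(u.max() - u.min()) - 1  # map state into [-1, 1]
        V = legendre.legvander(s, 3)                 # Legendre design matrix

        # Drift: f(x) ~ E[dX | x]/dt; squared diffusion: g2(x) ~ E[resid^2]/dt
        cf, *_ = np.linalg.lstsq(V, dx/dt, rcond=None)
        resid = dx - (V @ cf)*dt
        cg, *_ = np.linalg.lstsq(V, resid**2/dt, rcond=None)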

  5. Estimation of the sugar cane cultivated area from LANDSAT images using the two phase sampling method

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Cappelletti, C. A.; Mendonca, F. J.; Lee, D. C. L.; Shimabukuro, Y. E.

    1982-01-01

    A two phase sampling method and the optimal sampling segment dimensions for the estimation of sugar cane cultivated area were developed. This technique employs visual interpretations of LANDSAT images and panchromatic aerial photographs considered as the ground truth. The estimates, as a mean value of 100 simulated samples, represent 99.3% of the true value with a CV of approximately 1%; the relative efficiency of the two phase design was 157% when compared with a one phase aerial photographs sample.

  6. A Survey of Methods for Computing Best Estimates of Endoatmospheric and Exoatmospheric Trajectories

    NASA Technical Reports Server (NTRS)

    Bernard, William P.

    2018-01-01

    Beginning with the mathematical prediction of planetary orbits in the early seventeenth century up through the most recent developments in sensor fusion methods, many techniques have emerged that can be employed on the problem of endo- and exoatmospheric trajectory estimation. Although early methods were ad hoc, the twentieth century saw the emergence of many systematic approaches to estimation theory that produced a wealth of useful techniques. The broad genesis of estimation theory has resulted in an equally broad array of mathematical principles, methods and vocabulary. Among the fundamental ideas and methods that are briefly touched on are batch and sequential processing; smoothing, estimation, and prediction; sensor fusion and sensor fusion architectures; data association; Bayesian and non-Bayesian filtering; the family of Kalman filters; models of the dynamics of the phases of a rocket's flight; and asynchronous, delayed, and asequent data. Along the way, a few trajectory estimation issues are addressed and much of the vocabulary is defined.

  7. Employment and the Risk of Domestic Abuse among Low-Income Women

    ERIC Educational Resources Information Center

    Gibson-Davis, Christina M.; Magnuson, Katherine; Gennetian, Lisa A.; Duncan, Greg J.

    2005-01-01

    This paper uses data from 2 randomized evaluations of welfare-to-work programs--the Minnesota Family Investment Program and the National Evaluation of Welfare-to-Work Strategies--to estimate the effect of employment on domestic abuse among low-income single mothers. Unique to our analysis is the application of a 2-stage least squares method, in…

  8. Estimating the Octanol/Water Partition Coefficient for Aliphatic Organic Compounds Using Semi-Empirical Electrotopological Index

    PubMed Central

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the ISET in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P. PMID:22072945

  9. Estimating the octanol/water partition coefficient for aliphatic organic compounds using semi-empirical electrotopological index.

    PubMed

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the I(SET) in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P.

  10. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d⁴), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
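
    For a single qubit the linear-regression formulation is transparent: the measured Pauli expectations are linear in the Bloch vector, so least squares recovers the state. The sketch below simulates shot noise; for general measurement sets the identity design matrix used here would be replaced by the measurement-basis matrix:

        import numpy as np

        sx = np.array([[0, 1], [1, 0]], complex)
        sy = np.array([[0, -1j], [1j, 0]])
        sz = np.array([[1, 0], [0, -1]], complex)
        paulis = [sx, sy, sz]

        rho_true = 0.5*(np.eye(2) + 0.3*sx + 0.5*sy - 0.4*sz)

        rng = np.random.default_rng(8)
        shots = 2000
        r_hat = np.array([2*rng.binomial(shots,
                          (1 + np.real(np.trace(rho_true @ s)))/2)/shots - 1
                          for s in paulis])          # noisy Pauli expectations

        # LRE step: least squares on the linear model r_hat = A r + noise
        A = np.eye(3)                                # design matrix (Pauli case)
        r_ls, *_ = np.linalg.lstsq(A, r_hat, rcond=None)
        rho_est = 0.5*(np.eye(2) + sum(c*s for c, s in zip(r_ls, paulis)))
        print(np.round(rho_est, 3))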

  11. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.

    PubMed

    Li, Qiang; Doi, Kunio

    2006-04-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.

  12. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Time series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.

  13. Uncertainty of fast biological radiation dose assessment for emergency response scenarios.

    PubMed

    Ainsbury, Elizabeth A; Higueras, Manuel; Puig, Pedro; Einbeck, Jochen; Samaga, Daniel; Barquinero, Joan Francesc; Barrios, Lleonard; Brzozowska, Beata; Fattibene, Paola; Gregoire, Eric; Jaworska, Alicja; Lloyd, David; Oestreicher, Ursula; Romm, Horst; Rothkamm, Kai; Roy, Laurence; Sommer, Sylwester; Terzoudi, Georgia; Thierens, Hubert; Trompier, Francois; Vral, Anne; Woda, Clemens

    2017-01-01

    Reliable dose estimation is an important factor in appropriate dosimetric triage categorization of exposed individuals to support radiation emergency response. Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and combining assay results to produce single dose and uncertainty estimates is further discussed. Finally, it is demonstrated that whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.
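
    As a toy illustration of why the Bayesian route is attractive here, the sketch below applies a conjugate Gamma-Poisson model to simulated aberration counts. The linear calibration coefficient and all counts are invented, and real cytogenetic dose-response curves are typically linear-quadratic rather than linear; the point is only that the posterior carries the uncertainty a bare point estimate would hide:

    ```python
    import numpy as np
    from scipy import stats

    # Toy conjugate sketch: aberration yield per cell assumed Poisson with mean
    # c * D for dose D (calibration slope c assumed known). A Gamma prior on D
    # then yields a Gamma posterior, whose width quantifies the dose uncertainty.
    c = 0.05                 # hypothetical aberrations per cell per Gy
    cells, count = 500, 30   # scored cells and total observed aberrations

    prior_a, prior_b = 1.0, 0.5          # weak Gamma(shape, rate) prior on dose
    post_a = prior_a + count             # conjugate update for Poisson counts
    post_b = prior_b + c * cells
    posterior = stats.gamma(post_a, scale=1 / post_b)

    print(f"dose MAP  {(post_a - 1) / post_b:.2f} Gy")
    print(f"95% CrI   {posterior.ppf([0.025, 0.975]).round(2)} Gy")
    ```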

  14. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  15. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    PubMed

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty in oscillometric blood pressure (BP) measurements using bootstrap and Monte Carlo approaches. The former is used to estimate systolic and diastolic blood pressure (SBP and DBP), while the latter determines confidence intervals (CIs) for SBP and DBP based on the oscillometric measurements. This work employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs from oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and valuable information might be lost when selecting one DBN-DNN estimator and discarding the others. Additionally, the input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. The AdaBoost approach is then employed for the second-stage SBP and DBP estimation. In the third stage, the bootstrap and Monte Carlo techniques are used to determine the CIs based on the target BP estimated by the DBN-DNN ensemble regression estimator. The proposed method mitigates estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the single DBN-DNN regression estimator, the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. This corresponds to performance improvements of 9.18% and 10.88% over the single estimator. The proposed methodology improves the accuracy of BP estimation and reduces its uncertainty. Copyright © 2017 Elsevier B.V. All rights reserved.
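
    The percentile-bootstrap idea behind such CIs fits in a few lines. The sketch below resamples a small set of per-subject readings and reads the interval off the resampled means; the five readings are made up, and the paper's pipeline of course wraps this around the DBN-DNN estimates rather than raw readings:

    ```python
    import numpy as np

    # Percentile bootstrap for a small-sample mean: resample with replacement,
    # recompute the mean, and take percentiles of the bootstrap distribution.
    rng = np.random.default_rng(1)
    sbp_estimates = np.array([118.0, 122.5, 120.1, 125.3, 119.7])  # invented

    boots = np.array([rng.choice(sbp_estimates, size=sbp_estimates.size,
                                 replace=True).mean()
                      for _ in range(10_000)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"SBP {sbp_estimates.mean():.1f} mmHg, 95% CI [{lo:.1f}, {hi:.1f}]")
    ```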

  16. An empirical Bayes approach to analyzing recurring animal surveys

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
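
    The flavor of a limited-translation EB estimate can be conveyed in a few lines: shrink each raw estimate toward the ensemble mean, but cap the adjustment so no estimate moves more than a fixed multiple of its standard error. Everything below (the survey indices, the common SE, the one-SE cap) is an invented illustration, not the report's estimator:

    ```python
    import numpy as np

    # Empirical Bayes shrinkage toward the grand mean, with a "limited
    # translation" cap of one standard error on how far any estimate may move.
    counts = np.array([52.0, 88.0, 61.0, 40.0, 75.0])   # raw survey indices
    se = 15.0                                            # assumed sampling SE

    grand = counts.mean()
    tau2 = max(counts.var(ddof=1) - se**2, 1e-9)         # between-site variance
    shrink = tau2 / (tau2 + se**2)
    eb = grand + shrink * (counts - grand)               # ordinary EB estimate
    eb_lt = counts + np.clip(eb - counts, -se, se)       # limited translation
    print(np.round(eb_lt, 1))
    ```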

  17. THE UNIQUE VALUE OF BREATH BIOMARKERS FOR ESTIMATING PHAMACOKINETIC RATE CONSTANTS AND BODY BURDEN FROM RANDOM/INTERMITTENT DOSE

    EPA Science Inventory

    Biomarker measurements are used in three ways: 1) evaluating the time course and distribution of a chemical in the body, 2) estimating previous exposure or dose, and 3) assessing disease state. Blood and urine measurements are the primary methods employed. Of late, it has been ...

  18. An ecoregional approach to the economic valuation of land- and water-based recreation in the United States

    Treesearch

    Gajana Bhat; John Bergsrom; R. Jeff Teasley

    1998-01-01

    This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characters. The individual travel cost method was employed to estimate recreation demand functions for activities such...

  19. Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei

    2017-10-01

    To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares algorithm and an unscented Kalman filter are employed to estimate the system matrices and the state of charge (SOC) at every prediction point. Besides, a discrete wavelet transform technique is employed to capture statistical information about the past dynamics of the input currents, which is used to predict the future battery currents. Finally, the RDT is predicted from the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and that the predicted RDT can help alleviate range anxiety.
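
    The recursive least-squares half of that pairing fits in a dozen lines. The sketch below identifies the coefficients of a hypothetical first-order discrete model from simulated input/output data; the model structure, forgetting factor, and all numbers are assumptions for illustration, not the paper's battery model:

    ```python
    import numpy as np

    # Recursive least squares with a forgetting factor for y[k] = phi[k]' theta.
    rng = np.random.default_rng(0)
    theta_true = np.array([0.9, 0.05])        # hypothetical model coefficients
    n = 200
    u = rng.normal(size=n)                    # synthetic input signal
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = theta_true @ np.array([y[k - 1], u[k]]) + 0.01 * rng.normal()

    theta = np.zeros(2)
    P = np.eye(2) * 1e3                       # large initial covariance
    lam = 0.99                                # forgetting factor
    for k in range(1, n):
        phi = np.array([y[k - 1], u[k]])
        gain = P @ phi / (lam + phi @ P @ phi)
        theta += gain * (y[k] - phi @ theta)  # innovation update
        P = (P - np.outer(gain, phi) @ P) / lam
    print(np.round(theta, 3))                 # converges near theta_true
    ```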

  20. Improved alternatives for estimating in-use material stocks.

    PubMed

    Chen, Wei-Qiang; Graedel, T E

    2015-03-03

    Determinations of in-use material stocks are useful for exploring past patterns and future scenarios of materials use, for estimating end-of-life flows of materials, and thereby for guiding policies on recycling and sustainable management of materials. This is especially true when those determinations are conducted for individual products or product groups such as "automobiles" rather than general (and sometimes nebulous) sectors such as "transportation". We propose four alternatives to the existing top-down and bottom-up methods for estimating in-use material stocks, with the choice depending on the focus of the study and on the available data. Using aluminum in automobiles as an example, we illustrate the robustness of, and the consistencies and differences among, these four alternatives, and demonstrate that a suitable combination of the four methods permits estimation of the in-use stock of a material contained in all products employing that material, or of the in-use stocks of different materials contained in a particular product. We therefore anticipate the future estimation of in-use stocks for many materials in many products or product groups, for many regions, and over longer time periods, by taking advantage of methodologies that fully employ the detailed data sets now becoming available.

  1. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    NASA Astrophysics Data System (ADS)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

    Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least-squares ellipse fitting and the convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in some poor conditions.
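
    A compressed sketch of the first step, using the two OpenCV primitives the abstract names (direct least-squares ellipse fitting and the convex hull). The synthetic LED ring, the interference points, and the residual threshold are all invented for illustration; a real pipeline would refit after rejecting outliers:

    ```python
    import cv2
    import numpy as np

    # Fit an ellipse to candidate LED centroids, reject interference points by
    # their ellipse-equation residual, then take the convex hull of the inliers.
    rng = np.random.default_rng(2)
    t = rng.uniform(0, 2 * np.pi, 30)
    ring = np.stack([200 + 80 * np.cos(t), 150 + 50 * np.sin(t)], axis=1)
    ring += rng.normal(scale=1.0, size=ring.shape)         # noisy LED ring
    clutter = rng.uniform(0, 400, size=(5, 2))             # interference points
    cloud = np.vstack([ring, clutter]).astype(np.float32)

    (cx, cy), (w, h), ang = cv2.fitEllipse(cloud.reshape(-1, 1, 2))
    ca, sa = np.cos(np.radians(ang)), np.sin(np.radians(ang))
    x, y = cloud[:, 0] - cx, cloud[:, 1] - cy
    v = ((x * ca + y * sa) / (w / 2)) ** 2 + ((-x * sa + y * ca) / (h / 2)) ** 2
    inliers = cloud[np.abs(v - 1) < 0.2]                   # near the ellipse
    hull = cv2.convexHull(inliers.reshape(-1, 1, 2))
    print(len(inliers), "inliers,", len(hull), "hull vertices")
    ```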

  2. Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls

    PubMed Central

    Chae, Jeongsook; Jin, Yong; Sung, Yunsick

    2018-01-01

    Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot controls, limited data should be analyzed and utilized efficiently. For example, the motions of a wearable device called the Myo can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional data types. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable of the estimated motions is treated as the final estimate. Thus, recognition accuracy can be improved compared to traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When only orientation is used, as in the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error is also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by about 12 percentage points. PMID:29324641

  3. Tidal frequency estimation for closed basins

    NASA Technical Reports Server (NTRS)

    Eades, J. B., Jr.

    1978-01-01

    A method was developed for determining the fundamental tidal frequencies of closed basins of water by means of an eigenvalue analysis. The mathematical model employed was the Laplace tidal equations.

  4. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  5. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  6. A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data are acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. Experimental results on over 1300 natural images show the effectiveness of the proposed method. Compared with an existing gradient-based detection method, our method achieves an average accuracy improvement of over 39%.

  7. Evaluation of volatile organic emissions from hazardous waste incinerators.

    PubMed Central

    Sedman, R M; Esparza, J R

    1991-01-01

    Conventional methods of risk assessment typically employed to evaluate the impact of hazardous waste incinerators on public health must rely on somewhat speculative emissions estimates or on complicated and expensive sampling and analytical methods. The limited amount of toxicological information concerning many of the compounds detected in stack emissions also complicates the evaluation of the public health impacts of these facilities. An alternative approach for evaluating the public health impacts associated with volatile organic stack emissions is presented that relies on a screening criterion applied to total stack hydrocarbon emissions. If the concentration of hydrocarbons in ambient air is below the screening criterion, volatile emissions from the incinerator are judged not to pose a significant threat to public health. Both the screening criterion and a conventional method of risk assessment were employed to evaluate the emissions from 20 incinerators. Use of the screening criterion always yielded a substantially greater estimate of risk than that derived by the conventional method. Because the screening criterion is conservative in this sense, and because measuring total hydrocarbon emissions is a relatively simple analytical procedure, the screening criterion would appear to facilitate the evaluation of operating hazardous waste incinerators. PMID:1954928

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

    In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
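
    The TWT idea can be sketched with the PyWavelets package: histogram the samples onto a grid, soft-threshold the detail coefficients, and invert. The grid size, wavelet family, and the universal-threshold rule below are assumptions for illustration, not necessarily the paper's choices:

    ```python
    import numpy as np
    import pywt

    # Bin mock particle samples, wavelet-transform the histogram, zero small
    # coefficients (noise), and reconstruct a denoised density estimate.
    rng = np.random.default_rng(0)
    samples = rng.normal(0.0, 1.0, 100_000)
    hist, edges = np.histogram(samples, bins=256, range=(-5, 5), density=True)

    coeffs = pywt.wavedec(hist, 'db4', level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise-scale estimate
    thresh = sigma * np.sqrt(2 * np.log(hist.size))       # universal threshold
    denoised = pywt.waverec(
        [coeffs[0]] + [pywt.threshold(c, thresh, 'soft') for c in coeffs[1:]],
        'db4')[:hist.size]
    print(f"average change after denoising: {np.abs(hist - denoised).mean():.4f}")
    ```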

  9. The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method

    PubMed Central

    2015-01-01

    Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein’s Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or “directional” contrasts, and draw parallels between IE and previous PCMs such as “minimum evolution”. Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses. PMID:26683838

  10. Recruitment, Job Search, and the United States Employment Service. Volume II: Tables and Methods.

    ERIC Educational Resources Information Center

    Camil Associates, Inc., Philadelphia, PA.

    This volume contains the appendixes to Volume I of the report on recruitment, job search, and the United States Employment Service in 20 middle-sized American cities. Appendix A contains 165 pages of tables. Appendix B (63 pages) contains details of sample design, data analysis, and estimate precision under the categories of: Overview of the study…

  11. Functional Linear Model with Zero-value Coefficient Function at Sub-regions.

    PubMed

    Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin

    2013-01-01

    We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inference for the nonparametrically estimated coefficient function without over-shrinking its values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. A one-stage procedure would need a large number of knots to precisely identify the zero-coefficient region; however, variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the oracle property: it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator does when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies. We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.

  12. Slice sampling technique in Bayesian extreme of gold price modelling

    NASA Astrophysics Data System (ADS)

    Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham

    2013-09-01

    In this paper, a simulation study of Bayesian extreme values using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with that of other methods for a Gumbel model. The study revealed that the slice sampling algorithm offers more accurate and closer estimates, with lower RMSE, than the other methods. Finally, we successfully employed this procedure to estimate the parameters of Malaysian extreme gold prices from 2000 to 2011.
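
    For reference, the core of a univariate slice sampler is only a few lines. The sketch below targets a Gumbel density with illustrative parameters and uses a fixed bracket plus shrinkage instead of Neal's full stepping-out procedure, so it is a simplification of what the paper would have used:

    ```python
    import numpy as np

    def log_gumbel(x, mu=0.0, beta=1.0):
        # Log-density of the Gumbel distribution (illustrative target).
        z = (x - mu) / beta
        return -z - np.exp(-z) - np.log(beta)

    rng = np.random.default_rng(0)
    x, draws = 0.0, []
    for _ in range(5000):
        log_u = log_gumbel(x) + np.log(rng.uniform())   # vertical slice level
        lo, hi = x - 10.0, x + 10.0                     # fixed bracket (assumption)
        while True:                                     # shrinkage sampling
            x_new = rng.uniform(lo, hi)
            if log_gumbel(x_new) > log_u:
                x = x_new
                break
            lo, hi = (x_new, hi) if x_new < x else (lo, x_new)
        draws.append(x)
    # Gumbel(0, 1) mean is the Euler-Mascheroni constant, about 0.577.
    print(f"sample mean after burn-in: {np.mean(draws[1000:]):.3f}")
    ```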

  13. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ; thus, the deviation of the estimated θ̂ from the true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂ which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  14. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm^-1] to the curved CSP and MLP path estimates (5 lp cm^-1). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
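
    The ART update shared by all three variants is the classical Kaczmarz sweep; only the path model, and hence the system matrix, differs. A minimal sketch on a small random dense system, standing in for the sparse path-length matrix of a real pCT scan:

    ```python
    import numpy as np

    # Kaczmarz/ART: sweep the rows of Ax = b, projecting the current estimate
    # onto each row's hyperplane with a relaxation factor.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 20))      # stand-in for the path-length matrix
    x_true = rng.normal(size=20)
    b = A @ x_true

    x = np.zeros(20)
    lam = 0.5                          # relaxation factor
    for sweep in range(200):
        for a_i, b_i in zip(A, b):
            x += lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"relative error {err:.2e}")
    ```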

  15. Improvement of Vehicle Positioning Using Car-to-Car Communications in Consideration of Communication Delay

    NASA Astrophysics Data System (ADS)

    Hontani, Hidekata; Higuchi, Yuya

    In this article, we propose a vehicle positioning method that can estimate the positions of cars even in areas where GPS is not available. For the estimation, each car measures the relative distance to the car running in front, communicates the measurements to other cars, and uses the received measurements to estimate its position. In order to estimate the position even if the measurements are received with time delay, we employed time-delay-tolerant Kalman filtering. For sharing the measurements, it is assumed that a car-to-car communication system is used. The measurements sent from farther cars are then received with larger time delay, so the accuracy of the estimates for farther cars becomes worse. Hence, the proposed method manages only the states of nearby cars to reduce computing effort. We simulated the proposed filtering method and found that it estimates the positions of nearby cars as accurately as distributed Kalman filtering.

  16. State estimation for autopilot control of small unmanned aerial vehicles in windy conditions

    NASA Astrophysics Data System (ADS)

    Poorman, David Paul

    The use of small unmanned aerial vehicles (UAVs) both in the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large scale aircraft have been well known and understood for decades, which usually involve a complex array of expensive high accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than state estimation for larger UAVs due to small UAVs employing limited sensor suites due to cost, and the fact that small UAVs are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings level ascent. It is shown that in zero wind, the first method produces significant steady state attitude errors in both a coordinated turn and in a wings level ascent. In Dryden wind, it produces large noise on the estimates for its attitude states, and has a non-zero mean error that increases when gyro bias is increased. The second method is shown to not exhibit any steady state error in the tested scenarios that is inherent to its design. The second method can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from lack of attitude error observability. The attitude errors are shown to be more observable in wind, but increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights many technical deficiencies of both of these methods of state estimation that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.

  17. Born iterative reconstruction using perturbed-phase field estimates.

    PubMed

    Astheimer, Jeffrey P; Waag, Robert C

    2008-10-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.

  18. Attention control learning in the decision space using state estimation

    NASA Astrophysics Data System (ADS)

    Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid

    2016-05-01

    The main goal of this paper is modelling attention while using it for efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing-power limitations that inherently exist in a typical multi-sensor real-world robotic application. To efficiently recognise the environment under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method using an estimated mixture-of-experts task and attention learning in perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space is performed in a learning-task problem examined in the Webots™ simulator. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.

  19. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique to gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized here. The method adjusts the data weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights, the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.

  20. Titrimetric and photometric methods for determination of hypochlorite in commercial bleaches.

    PubMed

    Jonnalagadda, Sreekanth B; Gengan, Prabhashini

    2010-01-01

    Two methods for the determination of hypochlorite, one titrimetric and one photometric, are developed based on its reaction with hydrogen peroxide and titration of the residual peroxide with acidic permanganate. In the titrimetric method, the residual hydrogen peroxide is titrated with standard permanganate solution to estimate the hypochlorite concentration. The photometric method is devised to measure the concentration of the remaining permanganate after its reaction with the residual hydrogen peroxide. It employs four ranges of calibration curves to enable accurate determination of hypochlorite. The new photometric method measures hypochlorite in the range 1.90 × 10^-3 to 1.90 × 10^-2 M with high accuracy and low variance. The concentrations of hypochlorite in diverse commercial bleach samples and in seawater enriched with hypochlorite were estimated using the proposed methods and compared with the arsenite method. Statistical analysis validates the superiority of the proposed methods.

  1. Ex Post Facto Monte Carlo Variance Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, Thomas E.

    The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance into the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to turn a point detector estimator from an infinite-variance estimator into a finite-variance estimator.

  2. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409

  3. A decentralized square root information filter/smoother

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Belzer, M. R.

    1985-01-01

    A number of developments have recently led to considerable interest in the decentralization of linear least squares estimators. These developments are partly related to the impending emergence of VLSI technology, the realization of parallel processing, and the need for algorithmic ways to speed the solution of dynamically decoupled, high-dimensional estimation problems. A new method is presented for combining Square Root Information Filter (SRIF) estimates obtained from independent data sets. The new method involves an orthogonal transformation, and an information matrix filter 'homework' problem discussed by Schweppe (1973) is generalized. The SRIF orthogonal transformation methodology employed has been described by Bierman (1977).

  4. Estimating dead wood during national forest inventories: a review of inventory methodologies and suggestions for harmonization.

    PubMed

    Woodall, Christopher W; Rondeux, Jacques; Verkerk, Pieter J; Ståhl, Göran

    2009-10-01

    Efforts to assess forest ecosystem carbon stocks, biodiversity, and fire hazards have spurred the need for comprehensive assessments of forest ecosystem dead wood (DW) components around the world. Currently, information regarding the prevalence, status, and methods of DW inventories occurring in the world's forested landscapes is scattered. The goal of this study is to describe the status, DW components measured, sample methods employed, and DW component thresholds used by national forest inventories that currently inventory DW around the world. Study results indicate that most countries do not inventory forest DW. Globally, we estimate that about 13% of countries inventory DW using a diversity of sample methods and DW component definitions. A common feature among DW inventories was that most countries had only just begun DW inventories and employ very low sample intensities. There are major hurdles to harmonizing national forest inventories of DW: differences in population definitions, lack of clarity on sample protocols/estimation procedures, and sparse availability of inventory data/reports. Increasing database/estimation flexibility, developing common dimensional thresholds of DW components, publishing inventory procedures/protocols, releasing inventory data/reports to international peer review, and increasing communication (e.g., workshops) among countries inventorying DW are suggestions forwarded by this study to increase DW inventory harmonization.

  5. Mixture distributions of wind speed in the UAE

    NASA Astrophysics Data System (ADS)

    Shin, J.; Ouarda, T.; Lee, T. S.

    2013-12-01

    Wind speed probability distributions are commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it cannot properly model wind speed regimes whose distributions present bimodal or kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without first investigating the wind speed distribution. Because of these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and some detected mixture distributional characteristics of wind speed. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. To improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used. The Weibull and Kappa distributions were employed as representatives of conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: Normal, Gamma, Weibull, and extreme value type-one (EV-1). Three parameter estimation methods, the expectation-maximization algorithm, the least squares method, and the meta-heuristic maximum likelihood (MHML) method, were employed to estimate the parameters of the mixture distributions. In order to compare the goodness-of-fit of the tested distributions and parameter estimation methods for the sample wind data, the adjusted coefficient of determination, the Bayesian information criterion (BIC), and the Chi-squared statistic were computed. Results indicate that MHML gives the best parameter estimation performance for the mixture distributions. At most of the 7 stations, mixture distributions give the best fit. When a wind speed regime shows mixture distributional characteristics, it is usually also kurtotic, and applications of mixture distributions at these stations show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
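
    As a sketch of what fitting such a mixture involves, the snippet below fits a two-component Weibull-Weibull mixture to synthetic wind speeds by direct numerical maximum likelihood; the paper's EM and MHML procedures are more elaborate, and all numbers here are invented:

    ```python
    import numpy as np
    from scipy import stats, optimize

    # Synthetic bimodal wind speeds from two Weibull components.
    rng = np.random.default_rng(0)
    v = np.concatenate([
        stats.weibull_min.rvs(2.0, scale=4.0, size=700, random_state=rng),
        stats.weibull_min.rvs(3.5, scale=9.0, size=300, random_state=rng)])

    def nll(p):
        # Negative log-likelihood of the two-component Weibull mixture.
        w, k1, s1, k2, s2 = p
        pdf = (w * stats.weibull_min.pdf(v, k1, scale=s1)
               + (1 - w) * stats.weibull_min.pdf(v, k2, scale=s2))
        return -np.log(pdf + 1e-300).sum()

    res = optimize.minimize(nll, x0=[0.5, 2.0, 3.0, 3.0, 8.0],
                            bounds=[(0.01, 0.99)] + [(0.1, 20)] * 4)
    print(np.round(res.x, 2))   # weight, then shape/scale of each component
    ```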

  6. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method in conjunction with the finite element large deflection modal formulation are used to estimate fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.

  7. Vision for action and perception elicit dissociable adherence to Weber's law across a range of 'graspable' target objects.

    PubMed

    Heath, Matthew; Manzone, Joseph; Khan, Michaela; Davarpanah Jazi, Shirin

    2017-10-01

    A number of studies have reported that grasps and manual estimations of differently sized target objects (e.g., 20 through 70 mm) violate and adhere to Weber's law, respectively (e.g., Ganel et al. 2008a, Curr Biol 18:R599-R601), a result interpreted as evidence that separate visual codes support actions (i.e., absolute) and perceptions (i.e., relative). More recent work employing a broader range of target objects (i.e., 5 through 120 mm) has called this claim into question and proposed that grasps for 'larger' target objects (i.e., >20 mm) elicit an inverse relationship to Weber's law and that manual estimations for target objects greater than 40 mm violate the law (Bruno et al. 2016, Neuropsychologia 91:327-334). In accounting for this finding, it was proposed that biomechanical limits in aperture shaping preclude the application of Weber's law for larger target objects. It is, however, important to note that the work supporting a biomechanical account may have employed target objects that approached, or were beyond, some participants' maximal aperture separation. The present investigation examined whether grasps and manual estimations differentially adhere to Weber's law across a continuous range of functionally 'graspable' target objects (i.e., 10,…,80% of participant-specific maximal aperture separation). In addition, we employed a method of adjustment task to examine whether manual estimation provides a valid proxy for a traditional measure of perceptual judgment. Manual estimation and method of adjustment tasks demonstrated adherence to Weber's law across the continuous range of target objects used here, whereas grasps violated the law. Thus, the results show that grasps and manual estimations of graspable target objects are, respectively, mediated via absolute and relative visual information.

  8. Molar axis estimation from computed tomography images.

    PubMed

    Dongxia Zhang; Yangzhou Gan; Zeyang Xia; Xinwen Zhou; Shoubin Liu; Jing Xiong; Guanglin Li

    2016-08-01

    Estimation of the tooth axis is needed for some clinical dental treatments. Existing methods require segmenting the tooth volume from computed tomography (CT) images and then estimating the axis from the tooth volume. However, they may fail when estimating the molar axis because tooth segmentation from CT images is challenging, and current segmentation methods may produce poor results, especially for angled molars, which leads to failure of the axis estimation. To resolve this problem, this paper proposes a new method for molar axis estimation from CT images. The key innovation is that, instead of estimating the 3D axis of each molar from the segmented volume, the method estimates the 3D axis from two projection images. The method includes three steps. (1) The 3D images of each molar are projected onto two 2D image planes. (2) The molar contour is segmented and the contour's 2D axis is extracted in each 2D projection image. Principal component analysis (PCA) and a modified symmetry axis detection algorithm are employed to extract the 2D axis from the segmented molar contour. (3) A 3D molar axis is obtained by combining the two 2D axes. Experimental results verified that the proposed method is effective for estimating the axis of a molar from CT images.
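
    The PCA portion of step (2) reduces to one SVD on the centered contour points: the first principal direction is the 2D axis estimate in that projection. A sketch on a synthetic elongated contour (the tilt angle and point cloud are made up):

    ```python
    import numpy as np

    # PCA axis of a 2D point set: dominant right-singular vector of the
    # centered coordinates.
    rng = np.random.default_rng(0)
    t = rng.uniform(-1, 1, 300)
    contour = np.stack([2.0 * t, 0.3 * rng.normal(size=300)], axis=1)
    theta = np.radians(25)                       # tilt the synthetic "molar"
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    contour = contour @ R.T

    centered = contour - contour.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis_2d = vt[0]                              # first principal direction
    ang = np.degrees(np.arctan2(axis_2d[1], axis_2d[0]))
    print(f"estimated axis angle: {ang:.1f} deg (true tilt 25 deg, up to sign)")
    ```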

  9. Trends in study design and the statistical methods employed in a leading general medicine journal.

    PubMed

    Gosho, M; Sato, Y; Nagashima, K; Takahashi, S

    2018-02-01

    Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. A comprehensive review of current trends in study design and statistical methods is required to support the future implementation of well-planned clinical studies that provide evidence-based medical information. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension of the study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Owing to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (e.g., the Kaplan-Meier estimator and the Cox regression model) were most frequently applied, the Gray test and the Fine-Gray proportional hazards model for competing risks were sometimes used for more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were used more frequently than single imputation methods; the latter are not recommended as a primary analysis but have been applied in many clinical trials. Group sequential designs with interim analyses were among the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analyses in light of the information found in some publications. Use of adaptive designs with interim analyses has been increasing since the presentation of the FDA guidance on adaptive design. © 2017 John Wiley & Sons Ltd.

  10. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of other methods, and the uncertainties are reasonably estimated; the others have no uncertainty analysis option. To validate further, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR regardless of Gaussian or non-Gaussian skewed data. However, it is expected that GPR has a limitation in applications to severely corrupted data by outliers owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool of anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
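
    The subagging recipe itself is short: fit the same simple estimator on many random half-samples and read the trend and its uncertainty off the distribution of fits. In the sketch below the declining groundwater series is synthetic and a straight-line trend serves as the base estimator for brevity:

    ```python
    import numpy as np

    # Subagging a linear trend: repeated half-size subsamples without
    # replacement, aggregated by the median, with percentile uncertainty.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 120)
    level = 3.0 - 0.08 * t + rng.normal(scale=0.3, size=t.size)  # declining
    level[::17] += 2.0                                            # outliers

    slopes = []
    for _ in range(500):
        idx = rng.choice(t.size, size=t.size // 2, replace=False)
        slopes.append(np.polyfit(t[idx], level[idx], 1)[0])
    slopes = np.array(slopes)
    print(f"trend {np.median(slopes):.3f} per time unit, 90% band "
          f"[{np.percentile(slopes, 5):.3f}, {np.percentile(slopes, 95):.3f}]")
    ```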

  11. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness. © 2012, The International Biometric Society.
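
    A toy version of the trapezoidal variant makes the two-step structure concrete: smooth the noisy state with a penalized spline, then regress the discretized differences on the discretized right-hand side. The scalar ODE, noise level, and spline smoothing parameter below are illustrative assumptions, not the article's HIV model:

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Toy ODE dy/dt = -theta * y. Trapezoid rule gives
    # y[k+1] - y[k] = -(theta * h / 2) * (y[k] + y[k+1]),
    # which is linear in theta once the states are smoothed.
    rng = np.random.default_rng(0)
    theta_true, h = 0.7, 0.1
    t = np.arange(0, 5, h)
    y_obs = np.exp(-theta_true * t) + rng.normal(scale=0.02, size=t.size)

    y_s = UnivariateSpline(t, y_obs, s=len(t) * 0.02**2)(t)  # smoothed states
    dy = y_s[1:] - y_s[:-1]
    sy = y_s[1:] + y_s[:-1]
    theta_hat = -2.0 / h * (sy @ dy) / (sy @ sy)             # least squares
    print(f"theta estimate {theta_hat:.3f} (true {theta_true})")
    ```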

  12. An economic analysis of localized pollution: rendering emissions in a residential setting

    Treesearch

    J. Michael Bowker; H.F. MacDonald

    1991-01-01

    The contingent value method is employed to estimate economic damages to households resulting from rendering plant emissions in a small town. Household willingness to accept (WTA) and willingness to pay (WTP) are estimated individually and in aggregate. The influence of household characteristics on WTP and WTA is examined via regression models. The perception of health...

  13. Sine Rotation Vector Method for Attitude Estimation of an Underwater Robot

    PubMed Central

    Ko, Nak Yong; Jeong, Seokki; Bae, Youngchul

    2016-01-01

    This paper describes a method for estimating the attitude of an underwater robot. The method employs a new concept of the sine rotation vector and uses both an attitude and heading reference system (AHRS) and a Doppler velocity log (DVL) for measurement. First, the acceleration and magnetic-field measurements are transformed into sine rotation vectors and combined. The combined sine rotation vector is then transformed into the differences between the Euler angles of the measured attitude and the predicted attitude; the differences are used to correct the predicted attitude. The method was evaluated on field-test data and simulation data and compared to existing methods that calculate angular differences directly, without a preceding sine rotation vector transformation. The comparison verifies that the proposed method improves the attitude estimation performance. PMID:27490549

  14. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.

    PubMed

    Pillai, S; Singhvi, I

    2008-09-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for the simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for the estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The developed HPLC method is a reverse-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v), pH 7.0, as the mobile phase. All developed methods obey Beer's law in the concentration ranges employed. Results of analysis were validated statistically and by recovery studies.

  15. Quantitative Estimation of Itopride Hydrochloride and Rabeprazole Sodium from Capsule Formulation

    PubMed Central

    Pillai, S.; Singhvi, I.

    2008-01-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for the simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for the estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The developed HPLC method is a reverse-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v), pH 7.0, as the mobile phase. All developed methods obey Beer's law in the concentration ranges employed. Results of analysis were validated statistically and by recovery studies. PMID:21394269

  16. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    NASA Astrophysics Data System (ADS)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of tray distillation columns. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate tray interfacial area, which may be its most important advantage over the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is distinct from estimating that of a column being designed.

  17. The Foraging Ecology of Royal and Sandwich Terns in North Carolina, USA

    USGS Publications Warehouse

    McGinnis, T.W.; Emslie, S.D.

    2001-01-01

  18. Determining population size of territorial red-winged blackbirds

    USGS Publications Warehouse

    Albers, P.H.

    1976-01-01

    Population sizes of territorial male red-winged blackbirds (Agelaius phoeniceus) were determined with counts of territorial males (area count) and a Petersen-Lincoln Index method for roadsides (roadside estimate). Weather conditions and time of day did not influence either method. Combined roadside estimates had smaller error bounds than the individual transect estimates and were not hindered by the problem of zero recaptures. Roadside estimates were usually one-half as large as the area counts, presumably due to an observer bias for marked birds. The roadside estimate provides only an index of major changes in populations of territorial male redwings. When the roadside estimate is employed, the area count should be used to determine the amount and nature of observer bias. For small population surveys, the area count is probably more reliable and accurate than the roadside estimate.
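
    The roadside estimate above rests on the Petersen-Lincoln index; a worked sketch follows, with hypothetical counts. Chapman's bias-corrected form, a standard remedy, is included because it remains defined when zero recaptures occur, the problem the abstract mentions.

        # Petersen-Lincoln index: N = M*C/R, with M marked, C caught on the
        # second occasion, and R recaptured. Counts below are hypothetical.
        def petersen_lincoln(marked, caught, recaptured):
            if recaptured == 0:
                raise ValueError("zero recaptures: index undefined")
            return marked * caught / recaptured

        def chapman(marked, caught, recaptured):
            # Bias-corrected form; defined even when recaptured == 0.
            return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

        print(petersen_lincoln(40, 35, 7))  # -> 200.0
        print(chapman(40, 35, 7))           # -> 183.5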

  19. An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers

    USGS Publications Warehouse

    Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.

    2016-01-01

    Here we present a new empirical method to estimate the SCL for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.

  20. Estimating the Wind Resource in Uttarakhand: Comparison of Dynamic Downscaling with Doppler Lidar Wind Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundquist, J. K.; Pukayastha, A.; Martin, C.

    Previous estimates of the wind resources in Uttarakhand, India, suggest minimal wind resources in this region. To explore whether the complex terrain in fact provides localized regions of wind resource, the authors of this study employed a dynamic downscaling method with the Weather Research and Forecasting model, providing detailed estimates of winds at approximately 1 km resolution in the finest nested simulation.

  1. Monte Carlo approaches to sampling forested tracts with lines or points

    Treesearch

    Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire

    2001-01-01

    Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...

  2. Economic Impacts of Wind Turbine Development in U.S. Counties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J., Brown; B., Hoen; E., Lantz

    2011-07-25

    The objective is to address the research question using post-construction, county-level data and econometric evaluation methods. Wind energy is expanding rapidly in the United States: over the last 4 years, wind power has contributed approximately 35 percent of all new electric power capacity. Wind power plants are often developed in rural areas where local economic development impacts from the installation are projected, including land lease and property tax payments and employment growth during plant construction and operation. Wind energy represented 2.3 percent of the U.S. electricity supply in 2010, but studies show that penetrations of at least 20 percent are feasible. Several studies have used input-output models to predict direct, indirect, and induced economic development impacts. These analyses have often been completed prior to project construction. Available studies have not yet investigated the economic development impacts of wind development at the county level using post-construction econometric evaluation methods. Analysis of county-level impacts is limited. However, previous county-level analyses have estimated operation-period employment at 0.2 to 0.6 jobs per megawatt (MW) of power installed and earnings at $9,000/MW to $50,000/MW. We find statistically significant evidence of positive impacts of wind development on county-level per capita income from the OLS and spatial lag models when they are applied to the full set of wind and non-wind counties. The total impact on annual per capita income of wind turbine development (measured in MW per capita) in the spatial lag model was $21,604 per MW. This estimate is within the range of values estimated in the literature using input-output models. OLS results for the wind-only counties and matched samples are similar in magnitude, but are not statistically significant at the 10-percent level. We find a statistically significant impact of wind development on employment in the OLS analysis for wind counties only, but not in the other models. Our estimates of employment impacts are not precise enough to assess the validity of employment impacts from input-output models applied in advance of wind energy project construction. The analysis provides empirical evidence of positive income effects at the county level from cumulative wind turbine development, consistent with the range of impacts estimated using input-output models. Employment impacts are less clear.

  3. Receiver Statistics for Cognitive Radios in Dynamic Spectrum Access Networks

    DTIC Science & Technology

    2012-02-28

    (SNR) are employed by many protocols and processes in direct-sequence (DS) spread-spectrum packet radio networks, including soft-decision decoding... adaptive modulation protocols, and power adjustment protocols. For DS spread spectrum, we have introduced and evaluated SNR estimators that employ... obtained during demodulation in a binary CDMA receiver. We investigated several methods to apply the proposed metric to the demodulator's soft-decision

  4. Fitting Residual Error Structures for Growth Models in SAS PROC MCMC

    ERIC Educational Resources Information Center

    McNeish, Daniel

    2017-01-01

    In behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat small samples common with longitudinal data. Although Mplus is becoming an increasingly common program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…

  5. In vitro to In vivo extrapolation of hepatic metabolism in fish: An inter-laboratory comparison of In vitro methods

    EPA Science Inventory

    Chemical biotransformation represents the single largest source of uncertainty in chemical bioaccumulation assessments for fish. In vitro methods employing isolated hepatocytes and liver subcellular fractions (S9) can be used to estimate whole-body rates of chemical metabolism, ...

  6. A hybrid localization technique for patient tracking.

    PubMed

    Rodionov, Denis; Kolev, George; Bushminkin, Kirill

    2013-01-01

    Nowadays numerous technologies are employed for tracking patients and assets in hospitals or nursing homes. Each of them has advantages and drawbacks. For example, WiFi localization has relatively good accuracy but cannot be used in case of power outage or in areas with poor WiFi coverage. Magnetometer positioning or cellular networks do not have such problems, but they are not as accurate as localization with WiFi. This paper describes a technique that simultaneously employs different localization technologies to enhance the stability and average accuracy of localization. The proposed algorithm is based on a fingerprinting method paired with data fusion and prediction algorithms for estimating the object location. The core idea of the algorithm is technology fusion using error estimation methods. To test the accuracy and performance of the algorithm, a simulation environment was implemented. Significant accuracy improvement was shown in practical scenarios.

  7. [Influenza vaccination of hospital healthcare staff from the perspective of the employer: a positive balance].

    PubMed

    Hak, Eelko; Knol, Lisanne M; Wilschut, Jan C; Postma, Maarten J

    2010-01-01

    To assess the annual productivity loss among hospital healthcare workers attributable to influenza and to estimate the costs and economic benefits of a vaccination programme from the perspective of the employer. Cost-benefit analysis. The percentage of work loss due to influenza was determined using monthly age- and gender-specific figures for productivity loss among healthcare workers of the University Medical Center Groningen (UMCG), the Netherlands, over the period January 2006-June 2008. Influenza periods were determined on the basis of national surveillance data. The average increase in productivity loss in these periods was estimated by comparison with the periods outside influenza seasons. The direct costs of productivity loss from the perspective of the employer were estimated using the friction cost method. In the sensitivity analyses various modelling parameters were varied, such as the vaccination coverage. In the UMCG, with approximately 9,400 employees, the estimated annual costs associated with productivity loss due to influenza before the introduction of the yearly influenza vaccination programme were € 675,242 or, on average, € 72 per employee. The economic benefits of the current vaccination programme, with a vaccination coverage of 24% and a vaccine effectiveness of 71%, were estimated at € 89,858 or € 10 per employee. The net economic benefits of a vaccination programme with a target vaccination coverage of 70% and a vaccine effectiveness of 71% were estimated at € 244,325 or € 26 per employee. This modelling study performed from the perspective of the employer showed that an annual influenza vaccination programme for hospital personnel can save costs.
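
    A minimal sketch of the employer-perspective balance follows, assuming a deliberately simple model: avoided loss equals the annual influenza loss times coverage times vaccine effectiveness, minus programme costs. The per-dose cost is a hypothetical placeholder; the abstract does not report it, and the paper's friction-cost model is more detailed.

        # Back-of-envelope employer cost-benefit balance (all values in euro).
        employees     = 9400
        annual_loss   = 675_242.0  # influenza productivity loss (from abstract)
        coverage      = 0.24
        effectiveness = 0.71
        cost_per_dose = 10.0       # vaccine + administration (assumed)

        gross_benefit  = annual_loss * coverage * effectiveness
        programme_cost = employees * coverage * cost_per_dose
        net_benefit    = gross_benefit - programme_cost
        print(f"net benefit: {net_benefit:,.0f} euro "
              f"({net_benefit / employees:.1f} per employee)")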

  8. Estimation of slip distribution using an inverse method based on spectral decomposition of Green's function utilizing Global Positioning System (GPS) data

    NASA Astrophysics Data System (ADS)

    Jin, Honglin; Kato, Teruyuki; Hori, Muneo

    2007-07-01

    An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters which are the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for three years covering 1997-1999 to estimate interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.

  9. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
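
    A minimal particle swarm optimization sketch of the kind applied above is given below; it minimises a toy two-dimensional function rather than an actual CMB likelihood, and the swarm constants are conventional choices, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)

        def f(x):                       # toy objective (sphere function)
            return np.sum(x**2, axis=-1)

        n, d, steps = 30, 2, 200
        w, c1, c2 = 0.7, 1.5, 1.5       # inertia and acceleration constants

        x = rng.uniform(-5, 5, (n, d))  # particle positions
        v = np.zeros((n, d))            # particle velocities
        pbest, pbest_f = x.copy(), f(x)
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(steps):
            r1, r2 = rng.random((n, d)), rng.random((n, d))
            v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
            x = x + v
            fx = f(x)
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[np.argmin(pbest_f)].copy()

        print(gbest, f(gbest))          # swarm converges toward the origin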

  10. Validity and reliability of dental age estimation of teeth root translucency based on digital luminance determination.

    PubMed

    Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A

    2014-01-01

    In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single-root teeth employing a digital camera and imaging software for the measurement of the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas the requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance on the regression formula. For the use of this primary multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to linear regression analysis employing the best contributor, "arithmetic mean" of luminance, yielded a regression equation with a standard error of 6.72 years (p < 0.001). The results of this study not only support the premise of root translucency as an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitates measurement and age classification due to its low dependence on observer experience.

  11. Free Energy Reconstruction from Logarithmic Mean-Force Dynamics Using Multiple Nonequilibrium Trajectories.

    PubMed

    Morishita, Tetsuya; Yonezawa, Yasushige; Ito, Atsushi M

    2017-07-11

    Efficient and reliable estimation of the mean force (MF), the derivatives of the free energy with respect to a set of collective variables (CVs), has been a challenging problem because free energy differences are often computed by integrating the MF. Among various methods for computing free energy differences, logarithmic mean-force dynamics (LogMFD) [ Morishita et al., Phys. Rev. E 2012 , 85 , 066702 ] invokes the conservation law in classical mechanics to integrate the MF, which allows us to estimate the free energy profile along the CVs on-the-fly. Here, we present a method called parallel dynamics, which improves the estimation of the MF by employing multiple replicas of the system and is straightforwardly incorporated in LogMFD or a related method. In the parallel dynamics, the MF is evaluated by a nonequilibrium path-ensemble using the multiple replicas based on the Crooks-Jarzynski nonequilibrium work relation. Thanks to the Crooks relation, realizing full-equilibrium states is no longer mandatory for estimating the MF. Additionally, sampling in the hidden subspace orthogonal to the CV space is highly improved with appropriate weights for each metastable state (if any), which is hardly achievable by typical free energy computational methods. We illustrate how to implement parallel dynamics by combining it with LogMFD, which we call logarithmic parallel dynamics (LogPD). Biosystems of alanine dipeptide and adenylate kinase in explicit water are employed as benchmark systems to which LogPD is applied to demonstrate the effect of multiple replicas on the accuracy and efficiency in estimating the free energy profiles using parallel dynamics.

  12. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data

    PubMed Central

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells, as well as using the input and output of ABM to build up a Loess regression for key parameter estimation. Next, we employed the greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters like the DE model. Therefore, this study innovatively developed a complex-system development mechanism that can simulate the complicated immune system in detail like ABM and validate the reliability and efficiency of the model like DE by fitting the experimental data. PMID:26535589

  13. Comparing exposure assessment methods for traffic-related air pollution in an adverse pregnancy outcome study.

    PubMed

    Wu, Jun; Wilhelm, Michelle; Chung, Judith; Ritz, Beate

    2011-07-01

    Previous studies reported adverse impacts of traffic-related air pollution exposure on pregnancy outcomes. Yet, little information exists on how effect estimates are impacted by the different exposure assessment methods employed in these studies. To compare effect estimates for traffic-related air pollution exposure and preeclampsia, preterm birth (gestational age less than 37 weeks), and very preterm birth (gestational age less than 30 weeks), four commonly used exposure assessment methods were examined. We identified 81,186 singleton births during 1997-2006 at four hospitals in Los Angeles and Orange Counties, California. Exposures were assigned to individual subjects based on residential address at delivery using the nearest ambient monitoring station data [carbon monoxide (CO), nitrogen dioxide (NO(2)), nitric oxide (NO), nitrogen oxides (NO(x)), ozone (O(3)), and particulate matter less than 2.5 (PM(2.5)) or less than 10 (PM(10)) μm in aerodynamic diameter], both unadjusted and temporally adjusted land-use regression (LUR) model estimates (NO, NO(2), and NO(x)), CALINE4 line-source air dispersion model estimates (NO(x) and PM(2.5)), and a simple traffic-density measure. We employed unconditional logistic regression to analyze preeclampsia in our birth cohort, while for gestational age-matched risk sets with preterm and very preterm birth we employed conditional logistic regression. We observed elevated risks for preeclampsia, preterm birth, and very preterm birth from maternal exposures to traffic air pollutants measured at ambient stations (CO, NO, NO(2), and NO(x)) and modeled through CALINE4 (NO(x) and PM(2.5)) and LUR (NO(2) and NO(x)). Increased risks of preterm birth and very preterm birth were also positively associated with PM(10) and PM(2.5) air pollution measured at ambient stations. For LUR-modeled NO(2) and NO(x) exposures, elevated risks for all the outcomes were observed in Los Angeles only, the region for which the LUR models were initially developed. Unadjusted LUR models often produced odds ratios somewhat larger in size than those from temporally adjusted models. The size of effect estimates was smaller for exposures based on the simpler traffic-density measure than for the other exposure assessment methods. We generally confirmed that traffic-related air pollution was associated with adverse reproductive outcomes regardless of the exposure assessment method employed, yet the size of the estimated effect depended on how both temporal and spatial variations were incorporated into exposure assessment. The LUR model was not transferable even between two contiguous areas within the same large metropolitan area in Southern California.

  14. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and neurofuzzy (NF) techniques to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.
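
    A sketch of the BIC-weighting and averaging described above follows; the BIC values, per-model estimates, and within-model variances are hypothetical stand-ins for the TS-FL, ANN, and NF outputs.

        import numpy as np

        bic   = np.array([112.0, 113.5, 121.0])  # per-model BIC (assumed)
        est   = np.array([3.1, 4.0, 3.4])        # K estimates, m/day (assumed)
        var_w = np.array([0.20, 0.25, 0.30])     # within-model variances (assumed)

        w = np.exp(-0.5 * (bic - bic.min()))     # BIC-based model weights
        w /= w.sum()

        k_bma   = np.sum(w * est)                # weighted-average estimate
        between = np.sum(w * (est - k_bma)**2)   # between-model variance
        total   = np.sum(w * var_w) + between    # within + between
        print(w, k_bma, total)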

  15. Flood Frequency Analysis With Historical and Paleoflood Information

    NASA Astrophysics Data System (ADS)

    Stedinger, Jery R.; Cohn, Timothy A.

    1986-05-01

    An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
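
    A sketch of the likelihood structure follows for the "binomial" case: a synthetic 50-year gaged record plus the information that threshold T was exceeded k times in h historical years, under a two-parameter lognormal. The data and threshold are synthetic; the paper's Monte Carlo design is not reproduced here.

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(1)
        gaged = rng.lognormal(mean=6.0, sigma=0.5, size=50)  # systematic record
        T, h, k = 900.0, 100, 6   # threshold, historical years, exceedances

        def neg_log_lik(theta):
            mu, sigma = theta
            if sigma <= 0:
                return np.inf
            z = (np.log(gaged) - mu) / sigma
            ll = np.sum(stats.norm.logpdf(z) - np.log(sigma * gaged))
            p = stats.norm.sf((np.log(T) - mu) / sigma)   # exceedance prob.
            ll += k * np.log(p) + (h - k) * np.log1p(-p)  # binomial term
            return -ll

        res = optimize.minimize(neg_log_lik, x0=[6.0, 0.5], method="Nelder-Mead")
        print(res.x)   # MLE of (mu, sigma) on the log scale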

  16. Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review

    NASA Technical Reports Server (NTRS)

    Tripp, John S.

    1999-01-01

    This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.

  17. Value of Landsat in urban water resources planning

    NASA Technical Reports Server (NTRS)

    Jackson, T. J.; Ragan, R. M.

    1977-01-01

    The reported investigation had the objective of evaluating the utility of satellite multispectral remote sensing in urban water resources planning. The results are presented of a study which was conducted to determine the economic impact of Landsat data. The use of Landsat data to estimate hydrologic model parameters employed in urban water resources planning is discussed. A decision regarding employment of the Landsat data has to consider the tradeoff between data accuracy and cost; Bayesian decision theory is used in this connection. It is concluded that computer-aided interpretation of Landsat data is a highly cost-effective method of estimating the percentage of impervious area.

  18. A Method for Semi-quantitative Assessment of Exposure to Pesticides of Applicators and Re-entry Workers: An Application in Three Farming Systems in Ethiopia.

    PubMed

    Negatu, Beyene; Vermeulen, Roel; Mekonnen, Yalemtshay; Kromhout, Hans

    2016-07-01

    To develop an inexpensive and easily adaptable semi-quantitative exposure assessment method to characterize exposure to pesticides in applicators and re-entry farmers and farm workers in Ethiopia. Two specific semi-quantitative exposure algorithms for pesticide applicators and re-entry workers were developed and applied to 601 farm workers employed in 3 distinctly different farming systems [small-scale irrigated, large-scale greenhouses (LSGH), and large-scale open (LSO)] in Ethiopia. The algorithm for applicators was based on exposure-modifying factors including application methods, farm layout (open or closed), pesticide mixing conditions, cleaning of spraying equipment, intensity of pesticide application per day, utilization of personal protective equipment (PPE), personal hygienic behavior, annual frequency of application, and duration of employment at the farm. The algorithm for re-entry work was based on an expert-based re-entry exposure intensity score, utilization of PPE, personal hygienic behavior, annual frequency of re-entry work, and duration of employment at the farm. The algorithms allowed estimation of daily, annual, and cumulative lifetime exposure for applicators and re-entry workers by farming system, by gender, and by age group. For all metrics, the highest exposures occurred in LSGH for both applicators and female re-entry workers. For male re-entry workers, the highest cumulative exposure occurred in LSO farms. Female re-entry workers appeared to be more highly exposed on a daily or annual basis than male re-entry workers, but their cumulative exposures were similar due to the fact that on average males had longer tenure. Factors related to intensity of exposure (like application method and farm layout) were indicated as the main driving factors for estimated potential exposure. Use of personal protection, hygienic behavior, and duration of employment in the surveyed farm workers contributed less to the contrast in exposure estimates. This study indicated that farmers' and farm workers' exposure to pesticides can be inexpensively characterized, ranked, and classified. Our method could be extended to assess exposure to specific active ingredients provided that detailed information on the pesticides used is available. The resulting exposure estimates will consequently be used in occupational epidemiology studies in Ethiopia and other similar countries with few resources.

  19. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study will provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445

  20. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error.
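
    The isoconversion extrapolation can be sketched as follows: fit ln(t_iso) against 1/T for the accelerated conditions, then predict the time to reach the specification limit at 25 C. The isoconversion times below are hypothetical, not the paper's simulated data.

        import numpy as np

        R = 8.314                                  # J/(mol K)
        T = np.array([50.0, 60.0, 70.0]) + 273.15  # accelerated temps, K
        t_iso = np.array([120.0, 42.0, 16.0])      # days to reach spec (assumed)

        # Time to a fixed conversion scales as exp(+Ea/RT), so ln t is linear in 1/T.
        slope, intercept = np.polyfit(1.0 / T, np.log(t_iso), 1)
        Ea = slope * R                             # apparent activation energy

        t_25 = np.exp(intercept + slope / 298.15)  # extrapolate to 25 C
        print(f"Ea = {Ea/1000:.1f} kJ/mol, predicted shelf-life = {t_25:.0f} days")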

  1. Born iterative reconstruction using perturbed-phase field estimates

    PubMed Central

    Astheimer, Jeffrey P.; Waag, Robert C.

    2008-01-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements. PMID:19062873

  2. Composite Partial Likelihood Estimation Under Length-Biased Sampling, With Application to a Prevalent Cohort Study of Dementia

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing

    2013-01-01

    The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265

  3. A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement

    NASA Astrophysics Data System (ADS)

    Sun, Hui; Liu, Ji-Gou

    2018-07-01

    This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor. It is used with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C estimation method, a final equation is obtained. Thus, the estimation of C only involves a few simple calculations. It successfully retrieves the sinusoidal and aleatory displacement by means of simulated self-mixing signals in both weak and moderate feedback regimes. To deal with the errors resulting from noise and estimate bias of C and to further improve the retrieval precision, a Kalman filter is employed following the general phase unwrapping method. The simulation and experiment results show that the retrieved displacement using the C obtained with the proposed method is comparable to the joint estimation of C and α. Besides, the Kalman filter can significantly decrease measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.
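
    Not the paper's estimator, but a generic scalar Kalman filter illustrating the smoothing stage, assuming a random-walk displacement model; the noise variances and the synthetic "unwrapped-phase" measurement below are assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        t = np.linspace(0.0, 1.0, 500)
        truth = 2e-6 * np.sin(2 * np.pi * t)             # slow sinusoid, metres
        z = truth + rng.normal(scale=2e-7, size=t.size)  # noisy displacement

        q, r = 1e-14, (2e-7)**2   # process/measurement variances (assumed)
        x_hat, p = 0.0, 1e-12
        out = np.empty_like(z)
        for i, zi in enumerate(z):
            p = p + q                         # predict (random-walk model)
            k = p / (p + r)                   # Kalman gain
            x_hat = x_hat + k * (zi - x_hat)  # measurement update
            p = (1.0 - k) * p
            out[i] = x_hat

        print(f"rms error raw {np.std(z - truth):.2e}, "
              f"filtered {np.std(out - truth):.2e}")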

  4. On state-of-charge determination for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Huang, Jun; Liaw, Bor Yann; Zhang, Jianbo

    2017-04-01

    Accurate estimation of the state-of-charge (SOC) of a battery through its life remains challenging in battery research. Although improved precision continues to be reported, almost all estimates are based on empirical regression methods, while the accuracy is often not properly addressed. Here, a comprehensive review is set to address such issues, from the fundamental principles that are supposed to define SOC to methodologies to estimate SOC for practical use. It covers topics from calibration and regression (including modeling methods) to validation in terms of precision and accuracy. At the end, we intend to answer the following questions: 1) Can SOC estimation be self-adaptive without bias? 2) Why is Ah-counting a necessity in almost all battery-model-assisted regression methods? 3) How can a consistent framework of coupling in multi-physics battery models be established? 4) To assess the accuracy in SOC estimation, which statistical methods should be employed to analyze the factors that contribute to the uncertainty? We hope that, through this proper discussion of the principles, accurate SOC estimation can be widely achieved.
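
    Since the review asks why Ah-counting underlies nearly all regression methods, a minimal Coulomb-counting sketch is given below; the capacity, initial SOC, and current trace are synthetic assumptions.

        import numpy as np

        capacity_ah = 2.5                   # rated capacity, Ah (assumed)
        dt_s = 1.0                          # sample period, s
        current_a = np.full(3600, -1.25)    # 1 h discharge at 0.5 C (negative)

        soc0 = 0.9                          # initial SOC from a calibration
        soc = soc0 + np.cumsum(current_a) * dt_s / (capacity_ah * 3600.0)
        print(f"final SOC: {soc[-1]:.3f}")  # ~0.400 after one hour at 0.5 C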

  5. New density estimation methods for charged particle beams with applications to microbunching instability

    NASA Astrophysics Data System (ADS)

    Terzić, Balša; Bassi, Gabriele

    2011-07-01

    In this paper we discuss representations of charged-particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
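
    A one-dimensional sketch of the truncated fast cosine transform idea follows: bin the particles, transform, keep only low-order modes, and invert. The paper's code is two-dimensional and also offers the thresholded wavelet variant; the sample here is synthetic.

        import numpy as np
        from scipy.fft import dct, idct

        rng = np.random.default_rng(2)
        particles = rng.normal(0.0, 1.0, 100_000)

        hist, edges = np.histogram(particles, bins=256, range=(-5, 5),
                                   density=True)
        coeffs = dct(hist, norm="ortho")
        coeffs[32:] = 0.0                    # truncate: keep 32 low-order modes
        smooth = idct(coeffs, norm="ortho")  # noise-reduced density estimate
        print(np.abs(smooth - hist).max())   # residual treated as sampling noise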

  6. Population estimation with sparse data: The role of estimators versus indices revisited

    Treesearch

    Kevin S. McKelvey; Dean E. Pearson

    2001-01-01

    The use of indices to evaluate small-mammal populations has been heavily criticized, yet a review of small-mammal studies published from 1996 through 2000 indicated that indices are still the primary methods employed for measuring populations. The literature review also found that 98% of the samples collected in these studies were too small for reliable...

  7. Stability indicating high performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in combined dosage form

    PubMed Central

    Bageshwar, Deepak; Khanvilkar, Vineeta; Kadam, Vilasrao

    2011-01-01

    A specific, precise and stability indicating high-performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in pharmaceutical formulations was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F254 as the stationary phase. The solvent system consisted of methanol:water:ammonium acetate; 4.0:1.0:0.5 (v/v/v). This system was found to give compact and dense spots for both itopride hydrochloride (Rf value of 0.55±0.02) and pantoprazole sodium (Rf value of 0.85±0.04). Densitometric analysis of both drugs was carried out in the reflectance–absorbance mode at 289 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9988±0.0012 in the concentration range of 100–400 ng for pantoprazole sodium. Also, the linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9990±0.0008 in the concentration range of 200–1200 ng for itopride hydrochloride. The method was validated for specificity, precision, robustness and recovery. Statistical analysis proves that the method is repeatable and selective for the estimation of both the said drugs. As the method could effectively separate the drug from its degradation products, it can be employed as a stability indicating method. PMID:29403710

  8. Stability indicating high performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in combined dosage form.

    PubMed

    Bageshwar, Deepak; Khanvilkar, Vineeta; Kadam, Vilasrao

    2011-11-01

    A specific, precise and stability indicating high-performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in pharmaceutical formulations was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F254 as the stationary phase. The solvent system consisted of methanol:water:ammonium acetate; 4.0:1.0:0.5 (v/v/v). This system was found to give compact and dense spots for both itopride hydrochloride (Rf value of 0.55±0.02) and pantoprazole sodium (Rf value of 0.85±0.04). Densitometric analysis of both drugs was carried out in the reflectance-absorbance mode at 289 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9988±0.0012 in the concentration range of 100-400 ng for pantoprazole sodium. Also, the linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9990±0.0008 in the concentration range of 200-1200 ng for itopride hydrochloride. The method was validated for specificity, precision, robustness and recovery. Statistical analysis proves that the method is repeatable and selective for the estimation of both the said drugs. As the method could effectively separate the drug from its degradation products, it can be employed as a stability indicating method.

  9. Estimation of Qualitative and Quantitative Parameters of Air Cleaning by a Pulsed Corona Discharge Using Multicomponent Standard Mixtures

    NASA Astrophysics Data System (ADS)

    Filatov, I. E.; Uvarin, V. V.; Kuznetsov, D. L.

    2018-05-01

    The efficiency of removal of volatile organic impurities from air by a pulsed corona discharge is investigated using model mixtures. Based on the method of competing reactions, an approach to estimating the qualitative and quantitative parameters of the employed electrophysical technique is proposed. The concept of the "toluene coefficient," characterizing the relative reactivity of a component as compared to toluene, is introduced. It is proposed that the energy efficiency of the electrophysical method be estimated using the concept of diversified yield of the removal process. Such an approach makes it possible to substantially accelerate the determination of the energy parameters of impurity removal and can also serve as a criterion for estimating the effectiveness of various methods in which a nonequilibrium plasma is used for air cleaning from volatile impurities.
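
    Reading the competing-reactions method as the usual relative-rate relation, the "toluene coefficient" can be sketched as below. The relation, the ratio of ln(C0/C) for a component to ln(C0/C) for toluene approximating the ratio of rate constants, is our assumption (the abstract does not give the formula), and the concentrations are hypothetical.

        import numpy as np

        c0_x, c_x = 100.0, 55.0   # component X before/after treatment, ppm
        c0_t, c_t = 100.0, 70.0   # toluene before/after treatment, ppm

        toluene_coefficient = np.log(c0_x / c_x) / np.log(c0_t / c_t)
        print(f"toluene coefficient: {toluene_coefficient:.2f}")
        # > 1 suggests X is more reactive than toluene under the discharge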

  10. [The estimation of possibilities for the application of the laser capture microdissection technology for the molecular-genetic expert analysis (genotyping) of human chromosomal DNA].

    PubMed

    Ivanov, P L; Leonov, S N; Zemskova, E Iu

    2012-01-01

    The present study was designed to estimate the possibilities of application of the laser capture microdissection (LCM) technology for the molecular-genetic expert analysis (genotyping) of human chromosomal DNA. The experimental method employed for the purpose was the multiplex multilocus analysis of autosomal DNA polymorphism in preparations of buccal epitheliocytes obtained by LCM. The key principles of the study were the application of physical methods for contrast enhancement of the micropreparations (such as phase-contrast microscopy and dark-field microscopy) and PCR-compatible cell lysis. Genotyping was carried out with the use of AmpFlSTR MiniFiler PCR Amplification Kits (Applied Biosystems, USA). It was shown that the technique employed in the present study ensures reliable genotyping of human chromosomal DNA in pooled preparations containing 10-20 dissected diploid cells each. This result agrees fairly well with the calculated sensitivity of the method. A few practical recommendations are offered.

  11. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282

  12. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, S; Wang, Y; Lue, K

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.

  13. Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities

    NASA Astrophysics Data System (ADS)

    Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan

    2017-07-01

    We estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV’s ability to meet estimated city electricity consumption varies widely—from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city’s estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publically available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.

  14. Fisher Scoring Method for Parameter Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia

    2017-06-01

    The GWOLR model combines the geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameter estimation employs maximum likelihood estimation. Such parameter estimation, however, yields a difficult-to-solve system of nonlinear equations, and therefore a numerical approximation approach is required. The iterative approximation approach, in general, uses the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix must be recomputed from second derivatives at every iteration, so it does not always produce converging results. With regard to this matter, the NR method is modified by substituting the Fisher information matrix for its Hessian matrix, which is termed Fisher scoring (FS). The present research seeks to determine GWOLR model parameter estimates using the Fisher scoring method and to apply the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities make the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.
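
    The Fisher scoring update, beta updated by the inverse information matrix times the score, can be sketched on plain binary logistic regression (a simplification of the GWOLR setting; data are synthetic). For the canonical logistic link, Fisher scoring coincides with Newton-Raphson, which is why only the information matrix changes in the general case.

        import numpy as np

        rng = np.random.default_rng(3)
        X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
        beta_true = np.array([-0.5, 1.0, 2.0])
        y = rng.random(500) < 1.0 / (1.0 + np.exp(-X @ beta_true))

        beta = np.zeros(3)
        for _ in range(25):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            U = X.T @ (y - p)               # score vector
            W = p * (1.0 - p)
            I = X.T @ (X * W[:, None])      # Fisher information matrix
            step = np.linalg.solve(I, U)    # scoring step
            beta = beta + step
            if np.max(np.abs(step)) < 1e-10:
                break

        print(beta)   # converges near beta_true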

  15. Assessment of Thematic Mapper Band-to-band Registration by the Block Correlation Method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1984-01-01

    The design of the Thematic Mapper (TM) multispectral radiometer makes it susceptible to band-to-band misregistration. To estimate band-to-band misregistration, a block correlation method is employed. This method was chosen over other possible techniques (band differencing and flickering) because it produces quantitative results. The method correlates rectangular blocks of pixels from one band against blocks centered on identical pixels from a second band. The block pairs are shifted in pixel increments both vertically and horizontally with respect to each other, and the correlation coefficient for each shift position is computed. The displacement corresponding to the maximum correlation is taken as the best estimate of registration error for each block pair. Subpixel shifts are estimated by a bi-quadratic interpolation of the correlation values surrounding the maximum correlation. To obtain statistical summaries for each band combination, post-processing of the block correlation results is performed. The method results in estimates of registration error that are consistent with expectations.
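
    A sketch of the block-correlation step follows: normalized cross-correlation of one block over integer shifts, with a one-dimensional parabolic refinement along each axis standing in for the paper's bi-quadratic interpolation. The images are synthetic, with a known one-line, two-pixel misregistration.

        import numpy as np

        rng = np.random.default_rng(4)
        band_a = rng.normal(size=(64, 64))
        band_b = np.roll(band_a, shift=(1, 2), axis=(0, 1))  # known shift

        def ncc(a, b):   # normalized cross-correlation coefficient
            a, b = a - a.mean(), b - b.mean()
            return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

        r = 3
        block = band_a[16:48, 16:48]
        corr = np.array([[ncc(block, band_b[16+dy:48+dy, 16+dx:48+dx])
                          for dx in range(-r, r + 1)]
                         for dy in range(-r, r + 1)])

        iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

        def subpixel(c, i):   # parabolic peak refinement in one dimension
            return 0.5 * (c[i-1] - c[i+1]) / (c[i-1] - 2*c[i] + c[i+1])

        dy = iy - r + subpixel(corr[:, ix], iy)
        dx = ix - r + subpixel(corr[iy, :], ix)
        print(f"estimated shift: dy={dy:.2f}, dx={dx:.2f}")  # ~ (1, 2)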

  16. 42 CFR 82.27 - How can claimants obtain reviews of their NIOSH dose reconstruction results by NIOSH?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION... estimated in the completed dose reconstructions; or (2) NIOSH changes a scientific element underlying dose..., the methods employed in the review, and the review findings to the claimant, DOL, and DOE. ...

  17. 42 CFR 82.27 - How can claimants obtain reviews of their NIOSH dose reconstruction results by NIOSH?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION... estimated in the completed dose reconstructions; or (2) NIOSH changes a scientific element underlying dose..., the methods employed in the review, and the review findings to the claimant, DOL, and DOE. ...

  18. 42 CFR 82.27 - How can claimants obtain reviews of their NIOSH dose reconstruction results by NIOSH?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION... estimated in the completed dose reconstructions; or (2) NIOSH changes a scientific element underlying dose..., the methods employed in the review, and the review findings to the claimant, DOL, and DOE. ...

  19. 42 CFR 82.27 - How can claimants obtain reviews of their NIOSH dose reconstruction results by NIOSH?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION... estimated in the completed dose reconstructions; or (2) NIOSH changes a scientific element underlying dose..., the methods employed in the review, and the review findings to the claimant, DOL, and DOE. ...

  20. Accurate acceleration of kinetic Monte Carlo simulations through the modification of rate constants.

    PubMed

    Chatterjee, Abhijit; Voter, Arthur F

    2010-05-21

    We present a novel computational algorithm called the accelerated superbasin kinetic Monte Carlo (AS-KMC) method that enables a more efficient study of rare-event dynamics than the standard KMC method while maintaining control over the error. In AS-KMC, the rate constants for processes that are observed many times are lowered during the course of a simulation. As a result, rare processes are observed more frequently than in KMC and the time progresses faster. We first derive error estimates for AS-KMC when the rate constants are modified. These error estimates are next employed to develop a procedure for lowering process rates with control over the maximum error. Finally, numerical calculations are performed to demonstrate that the AS-KMC method captures the correct dynamics, while providing significant CPU savings over KMC in most cases. We show that the AS-KMC method can be employed with any KMC model, even when no time scale separation is present (although in such cases no computational speed-up is observed), without requiring the knowledge of various time scales present in the system.
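
    The core acceleration idea can be caricatured in a few lines: run standard (Gillespie-type) KMC, and every time a process has fired some additional number of times, scale its rate down. In the sketch below, n_obs and alpha are illustrative knobs; the paper instead derives the rate reduction from an explicit error bound. Python:

      import numpy as np

      def as_kmc_toy(rates, steps=5000, n_obs=100, alpha=0.5):
          # Standard KMC selection plus a crude superbasin heuristic:
          # frequently executed processes get their rates lowered.
          rng = np.random.default_rng(0)
          rates = np.array(rates, dtype=float)
          counts = np.zeros(len(rates), dtype=int)
          t = 0.0
          for _ in range(steps):
              total = rates.sum()
              i = rng.choice(len(rates), p=rates / total)  # pick a process
              t += rng.exponential(1.0 / total)            # advance the clock
              counts[i] += 1
              if counts[i] % n_obs == 0:                   # seen many times:
                  rates[i] *= alpha                        # damp its rate
          return t, rates, counts

      # two fast intra-basin processes and one rare escape process
      print(as_kmc_toy([1000.0, 1000.0, 1.0]))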

  1. Estimating short-run and long-run interaction mechanisms in interictal state.

    PubMed

    Ozkaya, Ata; Korürek, Mehmet

    2010-04-01

    We address the issue of analyzing electroencephalogram (EEG) recordings from seizure patients in order to test, model, and determine the statistical properties that distinguish between EEG states (interictal, pre-ictal, ictal) by introducing a new class of time series analysis methods. In the present study we first employ statistical methods to determine the non-stationary behavior of focal interictal epileptiform series within very short time intervals; second, for intervals deemed non-stationary, we suggest Autoregressive Integrated Moving Average (ARIMA) process modelling, well known in time series analysis. We finally address the question of causal relationships between epileptic states and between brain areas during epileptiform activity. We estimate the interaction between different EEG series (channels) in short time intervals by performing Granger-causality analysis, and estimate such interaction in long time intervals by employing cointegration analysis; both methods are well known in econometrics. We find, first, that the causal relationship between neuronal assemblies can be identified according to the duration and direction of their possible mutual influences; second, that although the estimated bidirectional causality in short time intervals shows the neuronal ensembles positively affecting each other, in long time intervals neither is affected by this relationship (in terms of increasing amplitudes). Moreover, cointegration analysis of the EEG series enables us to identify whether there is a causal link from the interictal state to the ictal state.
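
    Both econometric tools are available off the shelf; a sketch with synthetic stand-ins for two EEG channels (statsmodels required, variable names illustrative):

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests, coint

      rng = np.random.default_rng(1)
      n = 500
      x = rng.standard_normal(n)            # stand-in for one EEG channel
      y = np.zeros(n)                       # second channel, driven by lagged x
      for t in range(2, n):
          y[t] = 0.6 * x[t - 2] + 0.3 * y[t - 1] + rng.standard_normal()

      # Short-run interaction: does the second column (x) help predict the
      # first (y)? Prints an F-test summary for each lag up to maxlag.
      grangercausalitytests(np.column_stack([y, x]), maxlag=4)

      # Long-run interaction: Engle-Granger cointegration test between two
      # nonstationary series sharing a common stochastic trend.
      z = rng.standard_normal(n).cumsum()
      t_stat, p_value, _ = coint(z, 0.8 * z + rng.standard_normal(n))
      print("cointegration p-value:", p_value)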

  2. Estimating the abundance of mouse populations of known size: promises and pitfalls of new methods

    USGS Publications Warehouse

    Conn, P.B.; Arthur, A.D.; Bailey, L.L.; Singleton, G.R.

    2006-01-01

    Knowledge of animal abundance is fundamental to many ecological studies. Frequently, researchers cannot determine true abundance, and so must estimate it using a method such as mark-recapture or distance sampling. Recent advances in abundance estimation allow one to model heterogeneity with individual covariates or mixture distributions and to derive multimodel abundance estimators that explicitly address uncertainty about which model parameterization best represents truth. Further, it is possible to borrow information on detection probability across several populations when data are sparse. While promising, these methods have not been evaluated using mark-recapture data from populations of known abundance, and thus far have largely been overlooked by ecologists. In this paper, we explored the utility of newly developed mark-recapture methods for estimating the abundance of 12 captive populations of wild house mice (Mus musculus). We found that mark-recapture methods employing individual covariates yielded satisfactory abundance estimates for most populations. In contrast, model sets with heterogeneity formulations consisting solely of mixture distributions did not perform well for several of the populations. We show through simulation that a higher number of trapping occasions would have been necessary to achieve good estimator performance in this case. Finally, we show that simultaneous analysis of data from low abundance populations can yield viable abundance estimates.
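
    The covariate and mixture models evaluated here have no one-line form, but the closed-population logic they build on can be seen in the classical two-occasion Lincoln-Petersen estimator with Chapman's correction; a sketch:

      def chapman_abundance(n_marked, n_caught, n_recaptured):
          # Chapman-corrected Lincoln-Petersen abundance estimate from one
          # marking occasion and one recapture occasion.
          return (n_marked + 1) * (n_caught + 1) / (n_recaptured + 1) - 1

      # mark 40 mice, later catch 50 of which 16 carry marks
      print(chapman_abundance(40, 50, 16))  # about 122 animals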

  3. Fusion of electromagnetic trackers to improve needle deflection estimation: simulation study.

    PubMed

    Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor

    2013-10-01

    We present a needle deflection estimation method to anticipate needle bending during insertion into deformable tissue. Using limited additional sensory information, our approach reduces the estimation error caused by uncertainties inherent in the conventional needle deflection estimation methods. We use Kalman filters to combine a kinematic needle deflection model with the position measurements of the base and the tip of the needle taken by electromagnetic (EM) trackers. One EM tracker is installed on the needle base and estimates the needle tip position indirectly using the kinematic needle deflection model. Another EM tracker is installed on the needle tip and estimates the needle tip position through direct, but noisy measurements. Kalman filters are then employed to fuse these two estimates in real time and provide a reliable estimate of the needle tip position, with reduced variance in the estimation error. We implemented this method to compensate for needle deflection during simulated needle insertions and performed sensitivity analysis for various conditions. At an insertion depth of 150 mm, we observed needle tip estimation error reductions in the range of 28% (from 1.8 to 1.3 mm) to 74% (from 4.8 to 1.2 mm), which demonstrates the effectiveness of our method, offering a clinically practical solution.
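
    The per-measurement fusion step at the heart of such a scheme can be sketched as static inverse-variance weighting, i.e. the Kalman update with the dynamics stripped out (the actual method also propagates a kinematic deflection model through full filters):

      def fuse(mu1, var1, mu2, var2):
          # Combine two independent estimates of the same tip position; the
          # fused variance is never larger than the smaller input variance.
          w = var2 / (var1 + var2)
          return w * mu1 + (1.0 - w) * mu2, var1 * var2 / (var1 + var2)

      # model-based estimate from the base tracker vs a noisy tip measurement
      print(fuse(10.2, 4.0, 11.0, 1.0))  # -> (10.84, 0.8)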

  4. Groundwater Evapotranspiration from Diurnal Water Table Fluctuation: a Modified White Based Method Using Drainable and Fillable Porosity

    NASA Astrophysics Data System (ADS)

    Acharya, S.; Mylavarapu, R.; Jawitz, J. W.

    2012-12-01

    In shallow unconfined aquifers, the water table usually shows a distinct diurnal fluctuation pattern corresponding to the twenty-four hour solar radiation cycle. This diurnal water table fluctuation (DWTF) signal can be used to estimate groundwater evapotranspiration (ETg) by vegetation, a technique known as the White [1932] method. Water table fluctuations in shallow phreatic aquifers are controlled by two distinct storage parameters, drainable porosity (or specific yield) and fillable porosity. Yet most studies implicitly assume that these two parameters are equal, unless the hysteresis effect is considered. The White-based method available in the literature likewise relies on a single drainable porosity parameter to estimate ETg. In this study, we present a modification of the White-based method to estimate ETg from the DWTF using separate drainable (λd) and fillable porosity (λf) parameters. Separate analytical expressions based on successive steady-state moisture profiles are used to estimate λd and λf, instead of the commonly employed hydrostatic moisture profile approach. The modified method is then applied to estimate ETg using DWTF data observed in a field in northeast Florida, and the results are compared with ET estimates from the standard Penman-Monteith equation. The modified method yielded significantly better estimates of ETg than the previously available method that used only a single, hydrostatic-moisture-profile-based λd. Furthermore, the modified method was also used to estimate ETg during rainfall events, again producing significantly better estimates than the single-λd method.
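
    For orientation, the classic single-parameter White [1932] computation looks roughly as follows; the modification in this abstract would, broadly speaking, weight the night-time recovery term with λf and the net-decline term with λd, but the exact two-parameter formulation and sign conventions are the paper's. This sketch assumes hourly water-table elevations in meters:

      def white_etg(h, sy):
          # h: 25 hourly water-table elevations h[0]..h[24] for one day.
          # White (1932): ETg = sy * (24 * r + s), with r the pre-dawn
          # recovery rate (00:00-04:00) and s the net daily decline.
          r = (h[4] - h[0]) / 4.0
          s = h[0] - h[24]
          return sy * (24.0 * r + s)

      # pre-dawn rise of 2 mm/h, then a steady daytime decline
      h = [10.000, 10.002, 10.004, 10.006, 10.008] + [9.99 - 0.002 * k for k in range(20)]
      print(white_etg(h, sy=0.05))  # ~0.0048 m/day, i.e. ~4.8 mm/day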

  5. Terrain feature recognition for synthetic aperture radar (SAR) imagery employing spatial attributes of targets

    NASA Astrophysics Data System (ADS)

    Iisaka, Joji; Sakurai-Amano, Takako

    1994-08-01

    This paper describes an integrated approach to terrain feature detection and several methods for estimating spatial information from SAR (synthetic aperture radar) imagery. Spatial information about image features, as well as spatial association, are key elements in terrain feature detection. After applying a small-feature-preserving despeckling operation, spatial information such as edginess, texture (smoothness), region-likeness and line-likeness of objects, target sizes, and target shapes was estimated. A trapezoid-shaped fuzzy membership function was then assigned to each spatial feature attribute, and fuzzy classification logic was employed to detect terrain features. Terrain features such as urban areas, mountain ridges, lakes and other water bodies, as well as vegetated areas, were successfully identified from a sub-image of a JERS-1 SAR image. In the course of shape analysis, a quantitative method was developed to classify spatial patterns by expanding a spatial pattern into a series of pattern primitives.
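
    The trapezoid membership function itself is a four-parameter ramp; a minimal sketch (threshold values illustrative, not those used for the JERS-1 scene):

      def trapezoid(x, a, b, c, d):
          # 0 below a, ramp up on [a, b], 1 on [b, c], ramp down on [c, d].
          if x <= a or x >= d:
              return 0.0
          if b <= x <= c:
              return 1.0
          return (x - a) / (b - a) if x < b else (d - x) / (d - c)

      # e.g. membership of a 120-pixel region in an "urban-sized" class
      print(trapezoid(120, 50, 100, 400, 600))  # -> 1.0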

  6. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have meaningful scientific interpretations, but their values are typically unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solution and can only be solved with numerical methods. Current methods for estimating PDE parameters require repeatedly solving the PDE numerically under thousands of candidate parameter values, so the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled by the PDE is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for the data and the PDE and a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques for posterior inference. Simulation studies show that the Bayesian and parameter cascading methods are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
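
    The computational burden the authors aim to avoid is easy to reproduce: the conventional approach re-solves the PDE for every candidate parameter value inside an optimizer. A small sketch for a heat equation with unknown diffusivity D (grid sizes and noise level illustrative):

      import numpy as np
      from scipy.optimize import minimize_scalar

      def solve_heat(D, nx=50, nt=200, dt=1e-4):
          # Explicit finite differences for u_t = D * u_xx on [0, 1] with
          # zero boundaries and a sine initial condition (stable for D <= 2).
          x = np.linspace(0.0, 1.0, nx)
          dx = x[1] - x[0]
          u = np.sin(np.pi * x)
          for _ in range(nt):
              u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
          return u

      rng = np.random.default_rng(0)
      data = solve_heat(1.0) + 0.01 * rng.standard_normal(50)  # noisy observations

      # One full PDE solve per candidate D -- the cost the basis-expansion
      # methods in the abstract are designed to avoid.
      fit = minimize_scalar(lambda D: np.sum((solve_heat(D) - data) ** 2),
                            bounds=(0.1, 2.0), method="bounded")
      print(fit.x)  # close to the true D = 1.0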

  7. Improved accuracy of aboveground biomass and carbon estimates for live trees in forests of the eastern United States

    Treesearch

    Philip Radtke; David Walker; Jereme Frank; Aaron Weiskittel; Clara DeYoung; David MacFarlane; Grant Domke; Christopher Woodall; John Coulston; James Westfall

    2017-01-01

    Accurate estimation of forest biomass and carbon stocks at regional to national scales is a key requirement in determining terrestrial carbon sources and sinks on United States (US) forest lands. To that end, comprehensive assessment and testing of alternative volume and biomass models were conducted for individual tree models employed in the component ratio method (...

  8. Wood Specific Gravity Variation with Height and Its Implications for Biomass Estimation

    Treesearch

    Michael C. Wiemann; G. Bruce Williamson

    2014-01-01

    Wood specific gravity (SG) is widely employed by ecologists as a key variable in estimates of biomass. When it is important to have nondestructive methods for sampling wood for SG measurements, cores are extracted with an increment borer. While boring is a relatively difficult task even at breast height sampling, it is impossible at ground level and arduous at heights...

  9. Evaluation of Linking Methods for Placing Three-Parameter Logistic Item Parameter Estimates onto a One-Parameter Scale

    ERIC Educational Resources Information Center

    Karkee, Thakur B.; Wright, Karen R.

    2004-01-01

    Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…

  10. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating the mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions, and the corresponding average relative error (ARE) approaches zero as sample size increases. For data generated from the normal distribution, our ABC method performs well, although the Wan et al. method is best for estimating the standard deviation under normality. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can also be applied using other reported summary statistics, such as the posterior mean and 95% credible interval, when Bayesian analysis has been employed.
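
    The ABC logic is simple to sketch: draw candidate (mu, sigma) pairs from broad priors, simulate a sample of the reported size, and keep the draws whose simulated summaries best match the reported (min, median, max). The priors, distance and acceptance fraction below are illustrative choices, not the tuned procedure of the paper:

      import numpy as np

      def abc_mean_sd(median, minimum, maximum, n, n_draws=20000, keep=0.01):
          rng = np.random.default_rng(42)
          mus = rng.uniform(minimum, maximum, n_draws)           # prior for mu
          sigmas = rng.uniform(0.0, maximum - minimum, n_draws)  # prior for sigma
          target = np.array([minimum, median, maximum])
          dist = np.empty(n_draws)
          for i in range(n_draws):
              sim = rng.normal(mus[i], sigmas[i], n)
              summ = np.array([sim.min(), np.median(sim), sim.max()])
              dist[i] = np.linalg.norm(summ - target)            # match quality
          kept = np.argsort(dist)[: int(keep * n_draws)]         # accepted draws
          return mus[kept].mean(), sigmas[kept].mean()

      # a study reporting median 12, range 3-40, sample size 52
      print(abc_mean_sd(median=12.0, minimum=3.0, maximum=40.0, n=52))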

  11. Occupational Diesel Exposure, Duration of Employment, and Lung Cancer

    PubMed Central

    Picciotto, Sally; Costello, Sadie; Eisen, Ellen A.

    2016-01-01

    Background: If less healthy workers terminate employment earlier, thus accumulating less exposure, yet remain at greater risk of the health outcome, estimated health effects of cumulative exposure will be biased downward. If exposure also affects termination of employment, then the bias cannot be addressed using conventional methods. We examined these conditions as a prelude to a reanalysis of lung cancer mortality in the Diesel Exhaust in Miners Study. Methods: We applied an accelerated failure time model to assess the effect of exposures to respirable elemental carbon (a surrogate for diesel) on time to termination of employment among nonmetal miners who ever worked underground (n = 8,307). We then applied the parametric g-formula to assess how possible interventions setting respirable elemental carbon exposure limits would have changed lifetime risk of lung cancer, adjusting for time-varying employment status. Results: Median time to termination was 36% shorter (95% confidence interval = 33%, 39%), per interquartile range width increase in respirable elemental carbon exposure. Lung cancer risk decreased with more stringent interventions, with a risk ratio of 0.8 (95% confidence interval = 0.5, 1.1) comparing a limit of ≤25 µg/m3 respirable elemental carbon to no intervention. The fraction of cases attributable to diesel exposure was 27% in this population. Conclusions: The g-formula controlled for time-varying confounding by employment status, the signature of healthy worker survivor bias, which was also affected by diesel exposure. It also offers an alternative approach to risk assessment for estimating excess cumulative risk, and the impact of interventions based entirely on an observed population. PMID:26426944

  12. Semiempirical method for prediction of aerodynamic forces and moments on a steadily spinning light airplane

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Taylor, Lawrence W., Jr.

    1987-01-01

    A semi-empirical method is presented for the estimation of aerodynamic forces and moments acting on a steadily spinning (rotating) light airplane. The airplane is divided into wing, body, and tail surfaces. The effect of power is ignored. Strip theory is employed for each component of the spinning airplane to determine its contribution to the total aerodynamic coefficients. Increments to some of the coefficients, which account for the centrifugal effect, are then estimated. The results are compared with spin tunnel rotary balance test data.

  13. Informal employment in high-income countries for a health inequalities research: A scoping review.

    PubMed

    Julià, Mireia; Tarafa, Gemma; O'Campo, Patricia; Muntaner, Carles; Jódar, Pere; Benach, Joan

    2015-01-01

    Informal employment (IE) is one of the least studied employment conditions in public health research, mainly because of the difficulty of its conceptualization and measurement, resulting in the lack of a unified concept and a common method of measurement. The aim of this review is to identify literature on IE in order to improve its definition and methods of measurement, with special attention given to high-income countries, so that the possible impact on health inequalities within and between countries can be studied. A scoping review of definitions and methods of measurement of IE was conducted by reviewing relevant databases and grey literature and analyzing selected articles. We found a wide spectrum of terms for describing IE, as well as of definitions and methods of measurement. We provide a definition of IE to be used in health inequalities research in high-income countries. Direct methods such as surveys can capture more information about workers and firms in order to estimate IE. These results can be used in further investigations of the impacts of IE on health inequalities. Public health research must improve the monitoring and analysis of IE in order to understand the impacts of this employment condition on health inequalities.

  14. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable derivative estimates. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFGS update employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate parameter sensitivities. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  15. Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.

    PubMed

    Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela

    Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold (PT) estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for PT estimation leaves unresolved how reliable those methods are for setting TMS doses. The present work aims to fill this gap. We compared the most common methods for PT calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception, and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures.

  16. Hospitalization costs of severe bacterial pneumonia in children: comparative analysis considering different costing methods

    PubMed Central

    Nunes, Sheila Elke Araujo; Minamisava, Ruth; Vieira, Maria Aparecida da Silva; Itria, Alexander; Pessoa, Vicente Porfirio; de Andrade, Ana Lúcia Sampaio Sgambatti; Toscano, Cristiana Maria

    2017-01-01

    ABSTRACT Objective To determine and compare hospitalization costs of bacterial community-acquired pneumonia cases via different costing methods under the Brazilian Public Unified Health System perspective. Methods Cost-of-illness study based on primary data collected from a sample of 59 children aged between 28 days and 35 months and hospitalized due to bacterial pneumonia. Direct medical and non-medical costs were considered and three costing methods employed: micro-costing based on medical record review, micro-costing based on therapeutic guidelines, and gross-costing based on the Brazilian Public Unified Health System reimbursement rates. Cost estimates obtained via the different methods were compared using the Friedman test. Results Cost estimates of inpatient cases of severe pneumonia amounted to R$ 780,70/$Int. 858.7 (medical record review), R$ 641,90/$Int. 706.90 (therapeutic guidelines) and R$ 594,80/$Int. 654.28 (Brazilian Public Unified Health System reimbursement rates). Costs estimated via micro-costing (medical record review or therapeutic guidelines) did not differ significantly (p=0.405), while estimates based on reimbursement rates were significantly lower than estimates based on therapeutic guidelines (p<0.001) or record review (p=0.006). Conclusion Brazilian Public Unified Health System costs estimated via different costing methods differ significantly, with gross-costing yielding lower cost estimates. Given that costs estimated by different micro-costing methods are similar, and that costing based on therapeutic guidelines is easier to apply and less expensive, this method may be a valuable alternative for estimating hospitalization costs of bacterial community-acquired pneumonia in children. PMID:28767921

  17. Incident CTS in a large pooled cohort study: associations obtained by a Job Exposure Matrix versus associations obtained from observed exposures.

    PubMed

    Dale, Ann Marie; Ekenga, Christine C; Buckner-Petty, Skye; Merlino, Linda; Thiese, Matthew S; Bao, Stephen; Meyers, Alysha Rose; Harris-Adamson, Carisa; Kapellusch, Jay; Eisen, Ellen A; Gerr, Fred; Hegmann, Kurt T; Silverstein, Barbara; Garg, Arun; Rempel, David; Zeringue, Angelique; Evanoff, Bradley A

    2018-03-29

    There is growing use of a job exposure matrix (JEM) to provide exposure estimates in studies of work-related musculoskeletal disorders; few studies have examined the validity of such estimates or compared associations obtained with a JEM against those obtained using directly observed exposures. This study estimated upper extremity exposures using a JEM derived from a publicly available data set (the Occupational Information Network, O*NET), and compared exposure-disease associations for incident carpal tunnel syndrome (CTS) with those obtained using observed physical exposure measures in a large prospective study. 2393 workers from several industries were followed for up to 2.8 years (5.5 person-years). Standard Occupational Classification (SOC) codes were assigned to the job at enrolment. SOC codes linked to physical exposures for forceful hand exertion and repetitive activities were extracted from O*NET. We used multivariable Cox proportional hazards regression models to describe exposure-disease associations for incident CTS for individually observed physical exposures and JEM exposures from O*NET. Both exposure methods found associations between incident CTS and exposures of force and repetition, with evidence of dose-response. Observed associations were similar across the two methods, with somewhat wider CIs for HRs calculated using the JEM method. Exposures estimated using a JEM provided similar exposure-disease associations for CTS when compared with associations obtained using the 'gold standard' method of individual observation. While JEMs have a number of limitations, in some studies they can provide useful exposure estimates in the absence of individual-level observed exposures.

  18. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least-squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.

  19. Spatial hydrological drought characteristics in Karkheh River basin, southwest Iran using copulas

    NASA Astrophysics Data System (ADS)

    Dodangeh, Esmaeel; Shahedi, Kaka; Shiau, Jenq-Tzong; MirAkbari, Maryam

    2017-08-01

    Investigation of drought characteristics such as severity, duration, and frequency is crucial for water resources planning and management in a river basin. While copula-based methodology for multivariate drought frequency analysis is well established, the effect of the choice of parameter estimation method on the resulting estimates has not yet been investigated. This research conducts a comparative analysis between the parametric maximum likelihood method and the non-parametric Kendall τ method for copula parameter estimation. The methods were employed to study joint severity-duration probability and recurrence intervals in the Karkheh River basin (southwest Iran), which is facing severe water-deficit problems. Daily streamflow data at three hydrological gauging stations (Tang Sazbon, Huleilan and Polchehr) near the Karkheh dam were used to draw flow duration curves (FDC) for these three stations. The Q_{75} index extracted from the FDC was set as the threshold level to extract drought characteristics such as drought duration and severity on the basis of run theory. Drought duration and severity were separately modeled using univariate probability distributions, and gamma-GEV, LN2-exponential, and LN2-gamma were selected as the best paired drought severity-duration inputs for copulas according to the Akaike Information Criterion (AIC), Kolmogorov-Smirnov and chi-square tests. The Archimedean Clayton and Frank copulas and the extreme-value Gumbel copula were employed to construct joint cumulative distribution functions (JCDF) of droughts for each station. The Frank copula at Tang Sazbon and the Gumbel copula at Huleilan and Polchehr were identified as the best copulas based on performance evaluation criteria including AIC, BIC, log-likelihood and root mean square error (RMSE) values. Based on the RMSE values, the nonparametric Kendall τ method is preferred to the parametric maximum likelihood estimation method. The results showed greater drought return periods for the parametric ML method than for the nonparametric Kendall τ method. The results also showed that the stations located on tributaries (Huleilan and Polchehr) have similar return periods, while the station along the main river (Tang Sazbon) has smaller return periods for drought events with identical duration and severity.
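
    The non-parametric route exploits closed-form links between Kendall's τ and the copula parameter, so no likelihood needs to be maximized; a sketch for the Gumbel and Clayton families (Frank's link requires inverting a Debye-function relation and is omitted), with synthetic data standing in for the observed drought pairs:

      import numpy as np
      from scipy.stats import kendalltau

      rng = np.random.default_rng(0)
      duration = rng.gamma(2.0, 3.0, 200)                    # synthetic durations
      severity = 0.8 * duration + rng.gamma(1.5, 2.0, 200)   # correlated severities

      tau, _ = kendalltau(duration, severity)
      theta_gumbel = 1.0 / (1.0 - tau)          # Gumbel:  tau = 1 - 1/theta
      theta_clayton = 2.0 * tau / (1.0 - tau)   # Clayton: tau = theta/(theta + 2)
      print(tau, theta_gumbel, theta_clayton)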

  20. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended-sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
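
    Of the estimators compared, Beale's ratio estimator is compact enough to sketch: scale the mean sampled load by the ratio of long-term to sampled mean flow, with a bias-correction factor. Units and the concentration-flow conversion are assumed consistent here, which is a simplification:

      import numpy as np

      def beale_load(sample_conc, sample_flow, mean_flow_all, n_days):
          l = sample_conc * sample_flow            # loads on sampled days
          q = sample_flow
          n = len(l)
          lbar, qbar = l.mean(), q.mean()
          s_lq = np.cov(l, q)[0, 1]                # load-flow covariance
          s_qq = q.var(ddof=1)                     # flow variance
          correction = (1.0 + s_lq / (n * lbar * qbar)) / \
                       (1.0 + s_qq / (n * qbar ** 2))
          return n_days * mean_flow_all * (lbar / qbar) * correction

      rng = np.random.default_rng(0)
      flow = rng.lognormal(2.0, 0.5, 36)                   # 36 sampled days
      conc = 5.0 + 0.3 * flow + rng.normal(0.0, 1.0, 36)   # concentration vs flow
      print(beale_load(conc, flow, mean_flow_all=flow.mean() * 1.1, n_days=3652))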

  1. An evaluation of methods for estimating decadal stream loads

    USGS Publications Warehouse

    Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-01-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended-sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.

  2. Power system frequency estimation based on an orthogonal decomposition method

    NASA Astrophysics Data System (ADS)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several frequency estimation techniques have been proposed for estimating frequency variations in power systems. To properly identify power quality issues in asynchronously sampled signals contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator must estimate both the frequency and the rate of frequency change precisely. However, accurately estimating the fundamental frequency becomes very difficult without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. The method employs a reconstruction technique in combination with orthogonal filters, which maintains the required frequency characteristics of the orthogonal filters and improves the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results show that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.

  3. Multi-Unmanned Aerial Vehicle (UAV) Cooperative Fault Detection Employing Differential Global Positioning (DGPS), Inertial and Vision Sensors.

    PubMed

    Heredia, Guillermo; Caballero, Fernando; Maza, Iván; Merino, Luis; Viguria, Antidio; Ollero, Aníbal

    2009-01-01

    This paper presents a method to increase the reliability of Unmanned Aerial Vehicle (UAV) sensor Fault Detection and Identification (FDI) in a multi-UAV context. Differential Global Positioning System (DGPS) and inertial sensors are used for sensor FDI in each UAV. The method uses additional position estimates that augment each UAV's individual FDI system. These additional estimates are obtained using images of the same planar scene taken from two different UAVs. Since the accuracy and noise level of the estimates depend on several factors, dynamic replanning of the multi-UAV team can be used to obtain a better estimate in the case of faults caused by slowly growing errors in absolute position estimation that cannot be detected by local FDI in the UAVs. Experimental results with data from two real UAVs are also presented.

  4. Bruise chromophore concentrations over time

    NASA Astrophysics Data System (ADS)

    Duckworth, Mark G.; Caspall, Jayme J.; Mappus, Rudolph L., IV; Kong, Linghua; Yi, Dingrong; Sprigle, Stephen H.

    2008-03-01

    During investigations of potential child and elder abuse, clinicians and forensic practitioners are often asked to offer opinions about the age of a bruise. A commonality between existing methods of bruise aging is the analysis of bruise color or the estimation of chromophore concentration; relative chromophore concentration is an underlying factor that determines bruise color. We investigate a method of chromophore concentration estimation that can be employed in a handheld imaging spectrometer with a small number of wavelengths. The method, based on absorbance properties defined by the Beer-Lambert law, allows estimation of the differential chromophore concentration between bruised and normal skin. Absorption coefficient data for each chromophore are required to make the estimation. Two different sources of these data are used in the analysis: coefficients generated using Independent Component Analysis and coefficients taken from published values. Differential concentration values over time, generated using both sources, correlate with published models of bruise color change and total chromophore concentration over time.

  5. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain a closed-form solution for the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. Theoretical analyses and real-world experiments show that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.

  6. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  7. Distributed control of large space structures

    NASA Technical Reports Server (NTRS)

    Schaechter, D. B.

    1981-01-01

    Theoretical developments and the results of laboratory experiments are treated as they apply to active attitude and vibration control, as well as static shape control. Modern control theory was employed throughout as the method for obtaining estimation and control laws.

  8. Can We Spin Straw Into Gold? An Evaluation of Immigrant Legal Status Imputation Approaches

    PubMed Central

    Van Hook, Jennifer; Bachmeier, James D.; Coffman, Donna; Harel, Ofer

    2014-01-01

    Researchers have developed logical, demographic, and statistical strategies for imputing immigrants’ legal status, but these methods have never been empirically assessed. We used Monte Carlo simulations to test whether, and under what conditions, legal status imputation approaches yield unbiased estimates of the association of unauthorized status with health insurance coverage. We tested five methods under a range of missing data scenarios. Logical and demographic imputation methods yielded biased estimates across all missing data scenarios. Statistical imputation approaches yielded unbiased estimates only when unauthorized status was jointly observed with insurance coverage; when this condition was not met, these methods overestimated insurance coverage for unauthorized relative to legal immigrants. We next showed how bias can be reduced by incorporating prior information about unauthorized immigrants. Finally, we demonstrated the utility of the best-performing statistical method for increasing power. We used it to produce state/regional estimates of insurance coverage among unauthorized immigrants in the Current Population Survey, a data source that contains no direct measures of immigrants’ legal status. We conclude that commonly employed legal status imputation approaches are likely to produce biased estimates, but data and statistical methods exist that could substantially reduce these biases. PMID:25511332

  9. RESEARCH: An Ecoregional Approach to the Economic Valuation of Land- and Water-Based Recreation in the United States

    PubMed

    Bhat; Bergstrom; Teasley; Bowker; Cordell

    1998-01-01

    This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with previous travel cost method valuation studies. KEY WORDS: Recreation; Ecoregion; Travel cost method; Truncated Poisson model

  10. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and computation of related confidence limits. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.

  11. 75 FR 46958 - Proposed Fair Market Rents for the Housing Choice Voucher Program and Moderate Rehabilitation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-04

    ... program staff. Questions on how to conduct FMR surveys or concerning further methodological explanations... insufficient sample sizes. The areas covered by this estimation method had less than the HUD standard of 200...-bedroom FMR for that area's CBSA as calculated using methods employed for past metropolitan area FMR...

  12. Critique of Sikkink and Keane's comparison of surface fuel sampling techniques

    Treesearch

    Clinton S. Wright; Roger D. Ottmar; Robert E. Vihnanek

    2010-01-01

    The 2008 paper of Sikkink and Keane compared several methods to estimate surface fuel loading in western Montana: two widely used inventory techniques (planar intersect and fixed-area plot) and three methods that employ photographs as visual guides (photo load, photoload macroplot and photo series). We feel, however, that their study design was inadequate to evaluate...

  13. A Model for Minimizing Numeric Function Generator Complexity and Delay

    DTIC Science & Technology

    2007-12-01

    allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise...Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This...thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods

  14. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination change and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise in depth data. Since it is hard to discriminate the full 360° of orientation using static cues or motion cues independently, we propose a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. In order to verify the proposed method, we built an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.

  15. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.

  16. Branching-ratio approximation for the self-exciting Hawkes process

    NASA Astrophysics Data System (ADS)

    Hardiman, Stephen J.; Bouchaud, Jean-Philippe

    2014-12-01

    We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
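
    The estimator follows from the count statistics of a Hawkes process: in windows much longer than the kernel's memory, the Fano factor is Var/Mean = 1/(1-n)^2, so n = 1 - sqrt(mean/variance). A sketch assuming stationarity and sufficiently long windows:

      import numpy as np

      def branching_ratio(event_times, window, t_max):
          # Bin the events, then invert the large-window Fano factor.
          edges = np.arange(0.0, t_max + window, window)
          counts, _ = np.histogram(event_times, bins=edges)
          m, v = counts.mean(), counts.var()
          return 1.0 - np.sqrt(m / v)

      # sanity check: a plain Poisson process has branching ratio ~ 0
      rng = np.random.default_rng(0)
      times = np.sort(rng.uniform(0.0, 10000.0, 20000))
      print(branching_ratio(times, window=100.0, t_max=10000.0))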

  17. Estimating the cost of a smoking employee.

    PubMed

    Berman, Micah; Crane, Rob; Seiber, Eric; Munur, Mehmet

    2014-09-01

    We attempted to estimate the excess annual costs that a US private employer may attribute to employing an individual who smokes tobacco, as compared to a non-smoking employee. Reviewing and synthesising previous literature estimating certain discrete costs associated with smoking employees, we developed a cost estimation approach that approximates the total of such costs for US employers. We examined absenteeism, presenteeism, smoking breaks, healthcare costs and pension benefits for smokers. Our best estimate of the annual excess cost to employ a smoker is $5816. This estimate should be taken as a general indicator of the extent of excess costs, not as a predictive point value. Employees who smoke impose significant excess costs on private employers. The results of this study may help inform employer decisions about tobacco-related policies.

  18. Tire Force Estimation using a Proportional Integral Observer

    NASA Astrophysics Data System (ADS)

    Farhat, Ahmad; Koenig, Damien; Hernandez-Alcantara, Diana; Morales-Menendez, Ruben

    2017-01-01

    This paper addresses a method for detecting critical stability situations in the lateral vehicle dynamics by estimating the non-linear part of the tire forces, which indicate the road-holding performance of the vehicle. The estimation method is based on a robust fault detection and estimation approach that minimizes the sensitivity of the residual to disturbances and uncertainties. It consists of the design of a Proportional Integral Observer (PIO) that minimizes the well-known H∞ norm for worst-case uncertainty and disturbance attenuation, combined with a transient response specification. This multi-objective problem is formulated as a Linear Matrix Inequality (LMI) feasibility problem in which a cost function subject to LMI constraints is minimized. The approach is employed to generate a set of switched robust observers for uncertain switched systems, where the convergence of the observer is ensured using a Multiple Lyapunov Function (MLF). Since the forces to be estimated cannot be measured physically, a simulation scenario with CarSim is presented to illustrate the developed method.

  19. Advantage of multiple spot urine collections for estimating daily sodium excretion: comparison with two 24-h urine collections as reference.

    PubMed

    Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi

    2016-02-01

    Several methods for estimating 24-h sodium excretion from spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimates excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employs linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increased number of spot urine samples, at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
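    A minimal sketch of the 'simple mean method' as described, assuming consistent units across inputs (variable names are illustrative):

      import numpy as np

      def simple_mean_sodium(spot_na, spot_cr, predicted_cr_24h):
          """Average the spot sodium-to-creatinine ratios over the available
          samples and scale by the individual's predicted 24-h creatinine
          excretion to estimate 24-h sodium excretion."""
          ratios = np.asarray(spot_na, float) / np.asarray(spot_cr, float)
          return ratios.mean() * predicted_cr_24h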

  20. The impacts of non-renewable and renewable energy on CO2 emissions in Turkey.

    PubMed

    Bulut, Umit

    2017-06-01

    As a result of great increases in CO2 emissions in the last few decades, many papers have examined the relationship between renewable energy and CO2 emissions in the energy economics literature, because as a clean energy source, renewable energy can reduce CO2 emissions and solve environmental problems stemming from increases in CO2 emissions. These papers, however, employ fixed-parameter estimation methods and thus ignore time-varying effects of non-renewable and renewable energy consumption/production on greenhouse gas emissions. To fill this gap in the literature, this paper examines the effects of non-renewable and renewable energy on CO2 emissions in Turkey over the period 1970-2013 by employing both fixed-parameter and time-varying-parameter estimation methods. The estimation methods reveal that CO2 emissions are positively related to both non-renewable and renewable energy in Turkey. Since policy makers expect renewable energy to decrease CO2 emissions, this paper argues that renewable energy is not able to satisfy those expectations, even though fewer CO2 emissions arise from producing electricity with renewable sources. In conclusion, the paper argues that policy makers should implement long-term energy policies in Turkey.

  1. Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines

    NASA Astrophysics Data System (ADS)

    Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2016-10-01

    Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between the temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low- or moderate-resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied to Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine-learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top-of-atmosphere reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than the ANN model.
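    A minimal sketch of the MARS regression step, assuming the third-party py-earth package (an open-source MARS implementation); the file names and the max_degree setting are illustrative, not the study's configuration:

      import numpy as np
      from pyearth import Earth

      X = np.load("modis_bands_1to7.npy")       # hypothetical file: n x 7 reflectances
      y = np.load("landsat_snow_fraction.npy")  # hypothetical file: n fractions in [0, 1]

      model = Earth(max_degree=2)  # allow pairwise interactions of basis functions
      model.fit(X, y)
      snow_fraction = np.clip(model.predict(X), 0.0, 1.0)  # keep estimates in [0, 1]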

  2. A method for estimating cost savings for population health management programs.

    PubMed

    Murphy, Shannon M E; McGready, John; Griswold, Michael E; Sylvia, Martha L

    2013-04-01

    To develop a quasi-experimental method for estimating Population Health Management (PHM) program savings that mitigates common sources of confounding, supports regular updates for continued program monitoring, and estimates model precision. Administrative, program, and claims records from January 2005 through June 2009. Data are aggregated by member and month. Study participants include chronically ill adult commercial health plan members. The intervention group consists of members currently enrolled in PHM, stratified by intensity level. Comparison groups include (1) members never enrolled, and (2) PHM participants not currently enrolled. Mixed model smoothing is employed to regress monthly medical costs on time (in months), a history of PHM enrollment, and monthly program enrollment by intensity level. Comparison group trends are used to estimate expected costs for intervention members. Savings are realized when PHM participants' costs are lower than expected. This method mitigates many of the limitations faced using traditional pre-post models for estimating PHM savings in an observational setting, supports replication for ongoing monitoring, and performs basic statistical inference. This method provides payers with a confident basis for making investment decisions. © Health Research and Educational Trust.
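    A minimal sketch of the regression step with statsmodels, assuming a member-month table; the column names are hypothetical, and the paper's actual model specification may differ:

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("member_months.csv")  # hypothetical: one row per member per month
      # Regress monthly medical cost on time, PHM enrollment history, and
      # current enrollment by intensity level, with a random intercept per member.
      model = smf.mixedlm(
          "monthly_cost ~ month + ever_enrolled + C(intensity_level)",
          data=df,
          groups=df["member_id"],
      )
      result = model.fit()
      print(result.summary())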

  3. Discharge estimation from H-ADCP measurements in a tidal river subject to sidewall effects and a mobile bed

    NASA Astrophysics Data System (ADS)

Sassi, M. G.; Hoitink, A. J. F.; Vermeulen, B.; Hidayat

    2011-06-01

    Horizontal acoustic Doppler current profilers (H-ADCPs) can be employed to estimate river discharge based on water level measurements and flow velocity array data across a river transect. A new method is presented that accounts for the dip in velocity near the water surface, which is caused by sidewall effects that decrease with the width to depth ratio of a channel. A boundary layer model is introduced to convert single-depth velocity data from the H-ADCP to specific discharge. The parameters of the model include the local roughness length and a dip correction factor, which accounts for the sidewall effects. A regression model is employed to translate specific discharge to total discharge. The method was tested in the River Mahakam, representing a large river of complex bathymetry, where part of the flow is intrinsically three-dimensional and discharge rates exceed 8000 m3 s-1. Results from five moving boat ADCP campaigns covering separate semidiurnal tidal cycles are presented, three of which are used for calibration purposes, whereas the remaining two served for validation of the method. The dip correction factor showed a significant correlation with distance to the wall and bears a strong relation to secondary currents. The sidewall effects appeared to remain relatively constant throughout the tidal cycles under study. Bed roughness length is estimated at periods of maximum velocity, showing more variation at subtidal than at intratidal time scales. Intratidal variations were particularly obvious during bidirectional flow conditions, which occurred only during conditions of low river discharge. The new method was shown to outperform the widely used index velocity method by systematically reducing the relative error in the discharge estimates.

  4. A head motion estimation algorithm for motion artifact correction in dental CT imaging

    NASA Astrophysics Data System (ADS)

    Hernandez, Daniel; Elsayed Eldib, Mohamed; Hegazy, Mohamed A. A.; Hye Cho, Myung; Cho, Min Hyoung; Lee, Soo Yeol

    2018-03-01

    A small head motion of the patient can compromise image quality in dental CT, in which a slow cone-beam scan is adopted. We introduce a retrospective head motion estimation method by which we can estimate the motion waveform from the projection images without employing any external motion monitoring devices. We compute the cross-correlation between every two successive projection images, which results in a sinusoid-like displacement curve over the projection view when there is no patient motion. However, the displacement curve deviates from the sinusoid-like form when patient motion occurs. We develop a method to estimate the motion waveform with a single parameter derived from the displacement curve with the aid of image entropy minimization. To verify the motion estimation method, we use a lab-built micro-CT that can emulate major head motions during dental CT scans, such as tilting and nodding, in a controlled way. We find that the estimated motion waveform conforms well to the actual motion waveform. To further verify the motion estimation method, we correct the motion artifacts with the estimated motion waveform. After motion artifact correction, the corrected images look almost identical to the reference images, with structural similarity index values greater than 0.81 in the phantom and rat imaging studies.
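    The displacement curve can be built with off-the-shelf phase correlation. A minimal sketch using scikit-image, as an illustrative substitute for the authors' cross-correlation implementation:

      import numpy as np
      from skimage.registration import phase_cross_correlation

      def displacement_curve(projections):
          """projections: array of shape (n_views, rows, cols). Returns the
          subpixel shift between every two successive projection images."""
          shifts = []
          for a, b in zip(projections[:-1], projections[1:]):
              shift, _, _ = phase_cross_correlation(a, b, upsample_factor=10)
              shifts.append(shift)  # (row_shift, col_shift) in pixels
          return np.array(shifts)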

  5. State-Level Estimates of Cancer-Related Absenteeism Costs

    PubMed Central

    Tangka, Florence K.; Trogdon, Justin G.; Nwaise, Isaac; Ekwueme, Donatus U.; Guy, Gery P.; Orenstein, Diane

    2016-01-01

    Background Cancer is one of the top five most costly diseases in the United States and leads to substantial work loss. Nevertheless, limited state-level estimates of cancer absenteeism costs have been published. Methods In analyses of data from the 2004–2008 Medical Expenditure Panel Survey, the 2004 National Nursing Home Survey, the U.S. Census Bureau for 2008, and the 2009 Current Population Survey, we used regression modeling to estimate annual state-level absenteeism costs attributable to cancer from 2004 to 2008. Results We estimated that the state-level median number of days of absenteeism per year among employed cancer patients was 6.1 days and that annual state-level cancer absenteeism costs ranged from $14.9 million to $915.9 million (median = $115.9 million) across states in 2010 dollars. Absenteeism costs are approximately 6.5% of the costs of premature cancer mortality. Conclusions The results from this study suggest that lost productivity attributable to cancer is a substantial cost to employees and employers and contributes to estimates of the overall impact of cancer in a state population. PMID:23969498

  6. A review of sex estimation techniques during examination of skeletal remains in forensic anthropology casework.

    PubMed

    Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K

    2016-04-01

    Sex estimation is considered one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain imperative in the identification process despite the advent and success of molecular techniques. The growing use of imaging techniques in forensic anthropology research has helped to derive and revise the available population data. Imaging-based methods, however, are less reliable owing to high variance and indistinct landmark detail. The present review discusses the reliability and reproducibility of the morphological, metric, molecular and radiographic approaches to sex estimation of skeletal remains. Numerous studies have shown higher reliability and reproducibility for measurements taken directly on the bones; such direct methods of sex estimation are therefore considered more reliable than the others. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. In addition, newer 3D methods have been shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue as forensic researchers strive for more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Medical Malpractice Reform and Employer-Sponsored Health Insurance Premiums

    PubMed Central

    Morrisey, Michael A; Kilgore, Meredith L; Nelson, Leonard (Jack)

    2008-01-01

    Objective Tort reform may affect health insurance premiums both by reducing medical malpractice premiums and by reducing the extent of defensive medicine. The objective of this study is to estimate the effects of noneconomic damage caps on the premiums for employer-sponsored health insurance. Data Sources/Study Setting Employer premium data and plan/establishment characteristics were obtained from the 1999 through 2004 Kaiser/HRET Employer Health Insurance Surveys. Damage caps were obtained and dated based on state annotated codes, statutes, and judicial decisions. Study Design Fixed effects regression models were run to estimate the effects of the size of inflation-adjusted damage caps on the weighted average single premiums. Data Collection/Extraction Methods State tort reform laws were identified using Westlaw, LEXIS, and statutory compilations. Legislative repeal and amendment of statutes and court decisions resulting in the overturning or repealing state statutes were also identified using LEXIS. Principal Findings Using a variety of empirical specifications, there was no statistically significant evidence that noneconomic damage caps exerted any meaningful influence on the cost of employer-sponsored health insurance. Conclusions The findings suggest that tort reforms have not translated into insurance savings. PMID:18522666

  8. Comparative studies of copy number variation detection methods for next-generation sequencing technologies.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping

    2013-01-01

    Copy number variation (CNV) has played an important role in studies of susceptibility or resistance to complex diseases. Traditional methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution of genomic regions. Following the emergence of next-generation sequencing (NGS) technologies, CNV detection methods based on short-read data have recently been developed. However, because these procedures are relatively young, their performance is not fully understood. To help investigators choose suitable methods to detect CNVs, comparative studies are needed. We compared six publicly available CNV detection methods: CNV-seq, FREEC, readDepth, CNVnator, SegSeq and event-wise testing (EWT). They are evaluated on both simulated and real data with different experiment settings. The receiver operating characteristic (ROC) curve is employed to demonstrate detection performance in terms of sensitivity and specificity, box plots are employed to compare performance in breakpoint and copy number estimation, a Venn diagram is employed to show the consistency among the methods, and the F-score is employed to show the overlapping quality of detected CNVs. The computational demands are also studied. The results of our work provide a comprehensive evaluation of the performance of the selected CNV detection methods, which will help biological investigators choose the best possible method.

  9. Relationships between autofocus methods for SAR and self-survey techniques for SONAR. [Synthetic Aperture Radar (SAR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.

    1991-01-01

    Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.
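    A generic correlation-based time-delay estimator of the kind the passage refers to (an illustrative sketch, not the authors' SAR or SONAR code):

      import numpy as np

      def estimate_delay(x, y, fs):
          """Delay (in seconds) of signal y relative to x, taken from the
          peak of their full cross-correlation; fs is the sample rate in Hz.
          Assumes equal-length, real-valued signals."""
          corr = np.correlate(y, x, mode="full")
          lag = int(np.argmax(corr)) - (len(x) - 1)  # lag in samples
          return lag / fs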

  10. Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities

    DOE PAGES

    Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; ...

    2017-07-06

    Here, we estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV's ability to meet estimated city electricity consumption varies widely - from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city's estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.

  11. Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolis, Robert; Gagnon, Pieter; Melius, Jennifer

    Here, we estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV's ability to meet estimated city electricity consumption varies widely - from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city's estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.

  12. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
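    A minimal sketch of a shrinkage-regularized Tyler fixed-point iteration, in the spirit of the hybrid Tyler/Ledoit-Wolf estimator studied here; the trace normalization and stopping rule are illustrative choices, not the paper's exact algorithm:

      import numpy as np

      def shrunk_tyler(X, rho, n_iter=100, tol=1e-8):
          """X: (n, p) centered returns; rho in (0, 1] is the shrinkage
          intensity toward the identity. Returns a p x p scatter estimate."""
          n, p = X.shape
          C = np.eye(p)
          for _ in range(n_iter):
              inv_C = np.linalg.inv(C)
              w = np.einsum("ij,jk,ik->i", X, inv_C, X)  # x_i' C^-1 x_i per sample
              S = (p / n) * (X.T / w) @ X                # reweighted scatter matrix
              C_new = (1.0 - rho) * S + rho * np.eye(p)
              C_new *= p / np.trace(C_new)               # fix the trace normalization
              if np.linalg.norm(C_new - C, "fro") < tol:
                  return C_new
              C = C_new
          return C

    The minimum-variance portfolio weights then follow as C⁻¹1 / (1ᵀC⁻¹1), with the shrinkage intensity rho tuned online as the paper describes.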

  13. An inverse hyper-spherical harmonics-based formulation for reconstructing 3D volumetric lung deformations

    NASA Astrophysics Data System (ADS)

    Santhanam, Anand P.; Min, Yugang; Mudur, Sudhir P.; Rastogi, Abhinav; Ruddy, Bari H.; Shah, Amish; Divo, Eduardo; Kassab, Alain; Rolland, Jannick P.; Kupelian, Patrick

    2010-07-01

    A method to estimate the deformation operator for the 3D volumetric lung dynamics of human subjects is described in this paper. For known values of air flow and volumetric displacement, the deformation operator and subsequently the elastic properties of the lung are estimated in terms of a Green's function. A Hyper-Spherical Harmonic (HSH) transformation is employed to compute the deformation operator. The hyper-spherical coordinate transformation method discussed in this paper facilitates accounting for the heterogeneity of the deformation operator using a finite number of frequency coefficients. Spirometry measurements are used to provide values for the airflow inside the lung. Using a 3D optical flow-based method, the 3D volumetric displacement of the left and right lungs, which represents the local anatomy and deformation of a human subject, was estimated from a 4D-CT dataset. Results from an implementation of the method show the estimation of the deformation operator for the left and right lungs of a human subject with non-small cell lung cancer. Validation of the proposed method shows that Young's modulus of each voxel can be estimated within a 2% error level.

  14. Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy

    PubMed Central

    Li, Zhaohui; Li, Xiaoli

    2013-01-01

    Estimating the causal interaction between neurons is very important for better understanding the functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains, which quantifies the fraction of ordinal information in one neuron that is present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons regardless of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on simulated biophysically realistic synapses, an Izhikevich cortical network based on the neuronal model is employed. It is found that the NPTE method is able to characterize mutual interactions and to exactly identify spurious causality in a network of three neurons. We conclude that the proposed method provides a more reliable comparison of interactions between different pairs of neurons and is a promising tool to uncover more details of neural coding. PMID:23940662

  15. Sampling studies to estimate the HIV prevalence rate in female commercial sex workers.

    PubMed

    Pascom, Ana Roberta Pati; Szwarcwald, Célia Landmann; Barbosa Júnior, Aristides

    2010-01-01

    We investigated sampling methods being used to estimate the HIV prevalence rate among female commercial sex workers. The studies were classified according to the adequacy or not of the sample size to estimate HIV prevalence rate and according to the sampling method (probabilistic or convenience). We identified 75 studies that estimated the HIV prevalence rate among female sex workers. Most of the studies employed convenience samples. The sample size was not adequate to estimate HIV prevalence rate in 35 studies. The use of convenience sample limits statistical inference for the whole group. It was observed that there was an increase in the number of published studies since 2005, as well as in the number of studies that used probabilistic samples. This represents a large advance in the monitoring of risk behavior practices and HIV prevalence rate in this group.

  16. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    PubMed

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of correctly selected true ODE models, the efficiency of the parameter estimation compared with simply using the full and true models, and the coverage probabilities of the estimated confidence intervals for ODE parameters, all of which perform satisfactorily. Our method is also demonstrated by selecting the best predator-prey ODE for modeling a lynx and hare population dynamical system from among several well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.
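    A minimal sketch of the two-step idea, with an ordinary least squares fit standing in for the full-model estimate and the adaptive Lasso implemented via the standard column-rescaling trick (an illustrative reduction; the paper applies the penalty to a least squares approximation of the ODE fit rather than to raw data):

      import numpy as np
      from sklearn.linear_model import Lasso, LinearRegression

      def adaptive_lasso(X, y, gamma=1.0, alpha=0.1):
          """Fit the full model, scale each column by |beta_full|**gamma,
          run a plain Lasso on the rescaled design, and map the
          coefficients back; zeroed terms drop out of the model."""
          beta_full = LinearRegression(fit_intercept=False).fit(X, y).coef_
          w = np.abs(beta_full) ** gamma   # adaptive weights
          fit = Lasso(alpha=alpha, fit_intercept=False).fit(X * w, y)
          return fit.coef_ * w             # undo the rescaling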

  17. Methods for estimating missing human skeletal element osteometric dimensions employed in the revised Fully technique for estimating stature.

    PubMed

    Auerbach, Benjamin M

    2011-05-01

    One of the greatest limitations to the application of the revised Fully anatomical stature estimation method is the inability to measure some of the skeletal elements required in its calculation. These element dimensions cannot be obtained due to taphonomic factors, incomplete excavation, or disease processes, and result in missing data. This study examines methods of imputing these missing dimensions using observable Fully measurements from the skeleton, and the accuracy of incorporating these missing element estimations into anatomical stature reconstruction. These are further assessed against stature estimations obtained from mathematical regression formulae for the lower limb bones (femur and tibia). Two thousand seven hundred and seventeen North and South American indigenous skeletons were measured, and subsets of these with observable Fully dimensions were used to simulate missing elements and create estimation methods and equations. Comparisons were made directly between anatomically reconstructed statures and mathematically derived statures, as well as with anatomically derived statures with imputed missing dimensions. These analyses demonstrate that, while mathematical stature estimations are more accurate, anatomical statures incorporating missing dimensions are not appreciably less accurate and are more precise. The anatomical stature estimation method using imputed missing dimensions is therefore supported. Missing element estimation, however, is limited to the vertebral column (only when lumbar vertebrae are present) and to talocalcaneal height (only when femora and tibiae are present). Crania, entire vertebral columns, and femoral or tibial lengths cannot be reliably estimated. The broader applicability of these methods is also discussed. Copyright © 2011 Wiley-Liss, Inc.

  18. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In the case studies evaluated here, the MCC method led to a 10% higher estimate of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding the implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.

  19. Implications of employer coverage of contraception: Cost-effectiveness analysis of contraception coverage under an employer mandate.

    PubMed

    Canestaro, W; Vodicka, E; Downing, D; Trussell, J

    2017-01-01

    Mandatory employer-based insurance coverage of contraception in the US has been a controversial component of the Affordable Care Act (ACA). Prior research has examined the cost-effectiveness of contraception in general; however, no studies have developed a formal decision model in the context of the new ACA provisions. As such, this study aims to estimate the relative cost-effectiveness of contraception coverage under employer-sponsored insurance, taking into consideration newer regulations allowing for religious exemptions. A decision model was developed from the employer perspective to simulate pregnancy costs and outcomes associated with insurance coverage. Method-specific estimates of contraception failure rates, outcomes and costs were derived from the literature. Uptake by marital status and age was drawn from a nationally representative database. Providing no contraception coverage resulted in 33 more unintended pregnancies per 1000 women (95% confidence range: 22.4, 44.0). This subsequently significantly increased the number of unintended births and terminations. Total costs were higher among uninsured women owing to the higher costs of pregnancy outcomes. The effect of no insurance was greatest on unmarried women 20-29 years old. Denying female employees full coverage of contraceptives increases total costs from the employer perspective, as well as the total number of terminations. Insurance coverage was found to be significantly associated with women's choice of contraceptive method in a large nationally representative sample. Using a decision model to extrapolate to pregnancy outcomes, we found a large and statistically significant difference in unintended pregnancies and terminations. Denying women contraception coverage may have significant consequences for pregnancy outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Robust Abundance Estimation in Animal Abundance Surveys with Imperfect Detection

    EPA Science Inventory

    Surveys of animal abundance are central to the conservation and management of living natural resources. However, detection uncertainty complicates the sampling process of many species. One sampling method employed to deal with this problem is depletion (or removal) surveys in whi...

  1. Robust Abundance Estimation in Animal Surveys with Imperfect Detection

    EPA Science Inventory

    Surveys of animal abundance are central to the conservation and management of living natural resources. However, detection uncertainty complicates the sampling process of many species. One sampling method employed to deal with this problem is depletion (or removal) surveys in whi...

  2. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    PubMed

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. CRACLE validates long-hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.

  3. A Bayesian estimation of the helioseismic solar age

    NASA Astrophysics Data System (ADS)

    Bonanno, A.; Fröhlich, H.-E.

    2015-08-01

    Context. The helioseismic determination of the solar age has been the subject of several studies because it provides an independent estimate of the age of the solar system. Aims: We present Bayesian estimates of the helioseismic age of the Sun, determined by means of calibrated solar models that employ different equations of state and nuclear reaction rates. Methods: We use 17 frequency separation ratios r02(n) = (ν(n, l=0) − ν(n−1, l=2)) / (ν(n, l=1) − ν(n−1, l=1)) from 8640 days of low-ℓ BiSON frequencies and consider three likelihood functions that depend on the handling of the errors of these r02(n) ratios. Moreover, we employ the 2010 CODATA recommended values for Newton's constant, solar mass, and radius to calibrate a large grid of solar models spanning a conceivable range of solar ages. Results: It is shown that the most constrained posterior distribution of the solar age for models employing the Irwin EOS with NACRE reaction rates leads to t⊙ = 4.587 ± 0.007 Gyr, while models employing the Irwin EOS and the Adelberger et al. (2011, Rev. Mod. Phys., 83, 195) reaction rates give t⊙ = 4.569 ± 0.006 Gyr. Implementing the OPAL EOS in the solar models results in reduced evidence ratios (Bayes factors) and leads to an age that is not consistent with the meteoritic dating of the solar system. Conclusions: An estimate of the solar age that relies on a helioseismic age indicator such as r02(n) turns out to be essentially independent of the type of likelihood function. However, with respect to model selection, abandoning any information concerning the errors of the r02(n) ratios leads to inconclusive results, and this stresses the importance of evaluating the trustworthiness of error estimates.

  4. A physical mechanism for the prediction of the sunspot number during solar cycle 21. [graphs (charts)

    NASA Technical Reports Server (NTRS)

    Schatten, K. H.; Scherrer, P. H.; Svalgaard, L.; Wilcox, J. M.

    1978-01-01

    On physical grounds it is suggested that the sun's polar field strength near a solar minimum is closely related to the following cycle's solar activity. Four methods of estimating the sun's polar magnetic field strength near solar minimum are employed to provide an estimate of cycle 21's yearly mean sunspot number at solar maximum of 140 plus or minus 20. This estimate is considered to be a first order attempt to predict the cycle's activity using one parameter of physical importance.

  5. Electrochemical reduction of the carbon-fluorine bond in 4-fluorobenzonitrile: Mechanistic analysis employing the Marcus-Hush quadratic activation-driving force relation

    NASA Astrophysics Data System (ADS)

    Muthukrishnan, A.; Sangaranarayanan, M. V.

    2007-10-01

    The reduction of the carbon-fluorine bond in 4-fluorobenzonitrile, with acetonitrile as the solvent, is analyzed using convolution potential sweep voltammetry, and the dependence of the transfer coefficient on potential is investigated within the framework of the Marcus-Hush quadratic activation-driving force theory. The validity of a stepwise mechanism is inferred from solvent reorganization energy estimates as well as bond length calculations using the B3LYP/6-31g(d) method. A novel method of estimating the standard reduction potential of 4-fluorobenzonitrile in acetonitrile is proposed.

  6. The forgotten component of sub-glacial heat flow: Upper crustal heat production and resultant total heat flux on the Antarctic Peninsula

    NASA Astrophysics Data System (ADS)

    Burton-Johnson, Alex; Halpin, Jacqueline; Whittaker, Joanne; Watson, Sally

    2017-04-01

    Seismic and magnetic geophysical methods have both been employed to produce estimates of heat flux beneath the Antarctic ice sheet. However, both methods use a homogeneous upper crustal model, despite the variable concentration of heat-producing elements within its composite lithologies. Using geological and geochemical datasets from the Antarctic Peninsula, we have developed a new methodology for incorporating upper crustal heat production in heat flux models, and we show the greater variability this introduces into estimates of crustal heat flux, with implications for glaciological modelling.

  7. Actuarial calculation for PSAK-24 purposes post-employment benefit using market-consistent approach

    NASA Astrophysics Data System (ADS)

    Effendie, Adhitya Ronnie

    2015-12-01

    In this paper we use a market-consistent approach to calculate the present value of a company's post-employment benefit obligation in accordance with PSAK-24 (the Indonesian accounting standard). We set actuarial assumptions such as the Indonesian TMI 2011 mortality tables for mortality, an accumulated salary function for wages, a disability assumption scaled to mortality, and a pre-defined turnover rate for termination. For the economic assumption, we use a binomial tree method with the estimated discount rate as its average movement. In accordance with PSAK-24, the Projected Unit Credit method is adopted to determine the present value of the obligation (actuarial liability), and we use this method with a modification in its discount function.
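    A minimal sketch of a Projected Unit Credit present value for a single employee; all inputs are placeholders for the assumptions listed above (mortality, salary scale, disability, turnover, discount rate), not values from the paper:

      def puc_obligation(projected_benefit, past_service, total_service,
                         years_to_retirement, discount_rate, p_stay_to_retirement):
          """Attribute the projected benefit pro rata to service rendered to
          date, then discount it and weight by the probability of remaining
          in service to retirement (mortality, disability and turnover
          decrements combined)."""
          accrued = projected_benefit * past_service / total_service
          v = (1.0 + discount_rate) ** (-years_to_retirement)
          return accrued * v * p_stay_to_retirement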

  8. Design of Passive Power Filter for Hybrid Series Active Power Filter using Estimation, Detection and Classification Method

    NASA Astrophysics Data System (ADS)

    Swain, Sushree Diptimayee; Ray, Pravat Kumar; Mohanty, K. B.

    2016-06-01

    This research paper presents the design of a shunt Passive Power Filter (PPF) in a Hybrid Series Active Power Filter (HSAPF) that employs a novel analytic methodology superior to FFT analysis. This approach consists of the estimation, detection and classification of the signals. The proposed method is applied to estimate, detect and classify power quality (PQ) disturbances such as harmonics. The work deals with three methods: harmonic detection through the wavelet transform, harmonic estimation with a Kalman filter algorithm, and harmonic classification with a decision tree. Among the candidate mother wavelets for the wavelet transform, db8 is selected as the most suitable because of its strong transient response and low oscillation in the frequency domain. In the harmonic compensation process, the detected harmonic is compensated through the Hybrid Series Active Power Filter (HSAPF) based on Instantaneous Reactive Power Theory (IRPT). The efficacy of the proposed method is verified in MATLAB/Simulink as well as with an experimental setup. The obtained results confirm the superiority of the proposed methodology over FFT analysis. The newly proposed PPF makes the conventional HSAPF more robust and stable.

  9. Graph reconstruction using covariance-based methods.

    PubMed

    Sulaimanov, Nurgazy; Koeppl, Heinz

    2016-12-01

    Methods based on correlation and partial correlation are today employed in the reconstruction of a statistical interaction graph from high-throughput omics data. These dedicated methods work well even when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related, using Neumann series and transitive closure and by discussing small concrete examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are estimated directly from the data.
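    A minimal sketch of the partial-correlation side of this comparison: edges are read off a thresholded partial-correlation matrix computed from the inverse covariance (the threshold is an illustrative tuning parameter):

      import numpy as np

      def partial_correlation_graph(cov, threshold=0.05):
          P = np.linalg.inv(cov)           # concentration matrix
          d = np.sqrt(np.diag(P))
          pcor = -P / np.outer(d, d)       # rho_ij = -P_ij / sqrt(P_ii * P_jj)
          np.fill_diagonal(pcor, 1.0)
          adj = np.abs(pcor) > threshold   # thresholded adjacency
          np.fill_diagonal(adj, False)
          return pcor, adj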

  10. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or other parameters than the gravity model.

  11. Combined state and parameter identification of nonlinear structural dynamical systems based on Rao-Blackwellization and Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Abhinav, S.; Manohar, C. S.

    2018-03-01

    The problem of combined state and parameter estimation in nonlinear state space models, based on Bayesian filtering methods, is considered. A novel approach, which combines Rao-Blackwellized particle filters for state estimation with Markov chain Monte Carlo (MCMC) simulations for parameter identification, is proposed. In order to ensure successful performance of the MCMC samplers in situations involving large amounts of dynamic measurement data and (or) low measurement noise, the study employs a modified measurement model combined with an importance sampling based correction. The parameters of the process noise covariance matrix are also included as quantities to be identified. The study employs the Rao-Blackwellization step at two stages: first, in the state estimation problem within the particle filtering step, and second, in the evaluation of the ratio of likelihoods in the MCMC run. The satisfactory performance of the proposed method is illustrated on three dynamical systems: (a) a computational model of a nonlinear beam-moving oscillator system, (b) a laboratory scale beam traversed by a loaded trolley, and (c) an earthquake shake table study on a bending-torsion coupled nonlinear frame subjected to uniaxial support motion.

  12. Automatic creation of object hierarchies for ray tracing

    NASA Technical Reports Server (NTRS)

    Goldsmith, Jeffrey; Salmon, John

    1987-01-01

    Various methods for evaluating generated trees are proposed. The use of the hierarchical extent method of Rubin and Whitted (1980) to find the objects that will be hit by a ray is examined. This method employs tree searching; the construction of a tree of bounding volumes in order to determine the number of objects that will be hit by a ray is discussed. A tree generation algorithm, which uses a heuristic tree search strategy, is described. The effects of shuffling and sorting of the input data are investigated. The cost of inserting an object into the hierarchy during tree construction is estimated. The steps involved in estimating the number of intersection calculations are presented.
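    The heuristic rests on a surface-area argument: for convex bounding volumes, the probability that a ray hitting a parent volume also hits a child is roughly the ratio of their surface areas. A minimal sketch of the resulting expected-cost estimate for an axis-aligned box hierarchy (the dictionary representation of nodes is illustrative):

      def box_area(lo, hi):
          dx, dy, dz = hi[0] - lo[0], hi[1] - lo[1], hi[2] - lo[2]
          return 2.0 * (dx * dy + dy * dz + dz * dx)

      def expected_tests(node):
          """Expected number of intersection tests, conditional on a ray
          hitting this node's bounding box: every child box is tested, and
          each child subtree is charged in proportion to its hit probability."""
          children = node.get("children", [])
          area = box_area(node["lo"], node["hi"])
          cost = float(len(children))
          for child in children:
              p = min(1.0, box_area(child["lo"], child["hi"]) / area)
              cost += p * expected_tests(child)
          return cost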

  13. Quantifying uncertainty in discharge measurements: A new approach

    USGS Publications Warehouse

    Kiang, J.E.; Cohn, T.A.; Mason, R.R.

    2009-01-01

    The accuracy of discharge measurements using velocity meters and the velocity-area method is typically assessed based on empirical studies that may not correspond to conditions encountered in practice. In this paper, a statistical approach for assessing uncertainty based on interpolated variance estimation (IVE) is introduced. The IVE method quantifies all sources of random uncertainty in the measured data. This paper presents results employing data from sites where substantial over-sampling allowed for the comparison of IVE-estimated uncertainty with the observed variability among repeated measurements. These results suggest that the IVE approach can provide approximate estimates of measurement uncertainty. The use of IVE to estimate the uncertainty of a discharge measurement would provide the hydrographer an immediate determination of uncertainty and help determine whether there is a need for additional sampling in problematic river cross sections. © 2009 ASCE.

  14. The impact of prenatal employment on breastfeeding intentions and breastfeeding status at one week postpartum

    PubMed Central

    Attanasio, Laura; Kozhimannil, Katy Backes; McGovern, Patricia; Gjerdingen, Dwenda; Johnson, Pamela Jo

    2013-01-01

    Background Postpartum employment is associated with non-initiation and early cessation of breastfeeding, but less is known about the relationship between prenatal employment and breastfeeding intentions and behaviors. Objective To estimate the relationship between prenatal employment status, a strong predictor of postpartum return to work, and breastfeeding intentions and behaviors. Methods Using data from the Listening to Mothers II national survey (N = 1573), we used propensity score matching methods to account for non-random selection into employment patterns and to measure the impact of prenatal employment status on breastfeeding intentions and behaviors. We also examined whether hospital practices consistent with the Baby Friendly Hospital Initiative (BFHI), assessed based on maternal perception, were differentially associated with breastfeeding by employment status. Results Women who were employed (vs. unemployed) during pregnancy were older, more educated, less likely to have had a previous cesarean delivery, and had fewer children. After matching, these differences were eliminated. Although breastfeeding intention did not differ by employment, full-time employment (vs. no employment) during pregnancy was associated with decreased odds of exclusive breastfeeding one week postpartum (adjusted odds ratio [AOR] 0.48; 95% CI [0.25, 0.92]; p=0.028). Higher BFHI scores were associated with higher odds of breastfeeding at one week, but did not differentially impact women by employment status. Conclusions Women employed full-time during pregnancy were less likely to fulfill their intention to exclusively breastfeed, compared to women who were not employed during pregnancy. Clinicians should be aware that employment circumstances may impact women’s breastfeeding decisions; this may help guide discussions during clinical encounters. PMID:24047641
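    A minimal sketch of propensity-score matching of the kind described: a logistic model for prenatal employment, then 1-nearest-neighbor matching on the estimated scores (the file and column names are hypothetical, and the survey's weighting and matching details are not reproduced):

      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.neighbors import NearestNeighbors

      df = pd.read_csv("mothers_survey.csv")  # hypothetical file
      X = df[["age", "education", "parity", "prior_cesarean"]].to_numpy()
      treated = df["employed_prenatal"].to_numpy().astype(bool)

      # Propensity scores: probability of employment given covariates.
      ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
      nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
      _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

      controls = df.loc[~treated].iloc[idx.ravel()]
      effect = (df.loc[treated, "exclusive_bf_1wk"].mean()
                - controls["exclusive_bf_1wk"].mean())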

  15. Uncertainty evaluation in the chloroquine phosphate potentiometric titration: application of three different approaches.

    PubMed

    Rodomonte, Andrea Luca; Montinaro, Annalisa; Bartolomei, Monica

    2006-09-11

    A measurement result cannot be properly interpreted if it is not accompanied by its uncertainty. Several methods to estimate uncertainty have been developed. Three of these were chosen in this work to estimate the uncertainty of the Eu. Ph. chloroquine phosphate assay, a potentiometric titration commonly used in medicines control laboratories. The well-known error-budget approach (also called bottom-up or step-by-step) described by the ISO Guide to the expression of Uncertainty in Measurement (GUM) was the first method chosen. It is based on the combination of uncertainty contributions derived directly from the measurement process. The second method employed was the Analytical Methods Committee top-down approach, which estimates uncertainty through the reproducibility obtained during inter-laboratory studies. Data for its application were collected in a proficiency testing study carried out by over 50 laboratories throughout Europe. The last method chosen was the one proposed by Barwick and Ellison. It uses a combination of precision, trueness and ruggedness data to estimate uncertainty. These data were collected from a validation process specifically designed for uncertainty estimation. All three approaches presented a distinctive set of advantages and drawbacks in their implementation. An expanded uncertainty of about 1% was assessed for the assay investigated.

  16. Innovation Analysis | Energy Analysis | NREL

    Science.gov Websites

    Describes NREL's empirical methods for estimating the technical and commercial impact of energy innovations, based on patent citations and web presence; regression models and multivariate simulations were employed to compare commercial breakthroughs in the marketplace. (Fragment of a web page; the remainder of the text is not recoverable.)

  17. Predictive Model Development for Aviation Black Carbon Mass Emissions from Alternative and Conventional Fuels at Ground and Cruise.

    PubMed

    Abrahamson, Joseph P; Zelina, Joseph; Andac, M Gurhan; Vander Wal, Randy L

    2016-11-01

    The first order approximation (FOA3) currently employed to estimate BC mass emissions underpredicts BC emissions due to inaccuracies in measuring the low smoke numbers (SNs) produced by modern high-bypass-ratio engines. The recently developed Formation and Oxidation (FOX) method removes the need for, and hence the uncertainty associated with, SNs, instead relying upon engine conditions to predict BC mass. Using the true engine operating conditions from proprietary engine cycle data, an improved FOX (ImFOX) predictive relation is developed. Still, the current methods are neither optimized to estimate cruise emissions nor able to account for the use of alternative jet fuels with reduced aromatic content. Here, improved correlations are developed to predict engine conditions and BC mass emissions at ground level and at cruise altitude. This new ImFOX is paired with a newly developed hydrogen relation to predict emissions from alternative fuels and fuel blends. The ImFOX is designed for the rich-quench-lean style combustor technologies employed predominantly in the current aviation fleet.

  18. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative and Hybrid. Software architectures were sought that employ the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common across a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  19. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299

  20. Estimating the prevalence of negative attitudes towards people with disability: a comparison of direct questioning, projective questioning and randomised response.

    PubMed

    Ostapczuk, Martin; Musch, Jochen

    2011-01-01

    Despite being susceptible to social desirability bias, attitudes towards people with disabilities are traditionally assessed via self-report. We investigated two methods presumably providing more valid prevalence estimates of sensitive attitudes than direct questioning (DQ). Most people projective questioning (MPPQ) attempts to reduce bias by asking interviewees to estimate the number of other people holding a sensitive attribute, rather than confirming or denying the attribute for themselves. The randomised-response technique (RRT) tries to reduce bias by assuring confidentiality through a random scrambling of the respondent's answers. We assessed negative attitudes towards people with physical and mental disability via MPPQ, RRT and DQ to compare the resulting estimates. The MPPQ estimates exceeded the DQ estimates. Employing a cheating-detection extension of the RRT, we determined the proportion of respondents disregarding the RRT instructions and computed an upper bound for the prevalence of negative attitudes. MPPQ estimates exceeded this upper bound and were thus shown to overestimate the prevalence. Furthermore, we found more negative attitudes towards people with mental disabilities than towards those with physical disabilities in all three questioning conditions. We recommend employing the cheating-detection variant of the RRT to gain additional insight into attitudes towards people with disabilities in future studies.
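
    A minimal sketch may help show how a randomised-response design recovers a prevalence estimate from scrambled answers. The forced-response variant below is a generic RRT, not the cheating-detection extension used in the study, and the counts are invented.

```python
import numpy as np

def rrt_prevalence(yes_count, n, p_truth=0.75):
    """Forced-response randomised-response estimator.

    Each respondent truthfully answers the sensitive item with probability
    p_truth and is forced to answer 'yes' otherwise, so
        P(yes) = p_truth * pi + (1 - p_truth).
    Inverting this gives an estimate of the prevalence pi.
    """
    lam = yes_count / n                        # observed 'yes' proportion
    pi_hat = (lam - (1 - p_truth)) / p_truth   # prevalence estimate
    se = np.sqrt(lam * (1 - lam) / n) / p_truth
    return pi_hat, se

pi_hat, se = rrt_prevalence(yes_count=211, n=500)
print(f"estimated prevalence: {pi_hat:.3f} +/- {1.96 * se:.3f}")
```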

  1. Two phase sampling for wheat acreage estimation. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Hay, C. M.

    1977-01-01

    A two phase LANDSAT-based sample allocation and wheat proportion estimation method was developed. This technique employs manual, LANDSAT full frame-based wheat or cultivated land proportion estimates from a large number of segments comprising the first sample phase to optimally allocate a smaller phase two sample of computer or manually processed segments. Application to the Kansas Southwest CRD for 1974 produced a wheat acreage estimate for that CRD within 2.42 percent of the USDA SRS-based estimate, using a lower CRD inventory budget than a simulated reference LACIE system. Improvements in cost or precision by a factor of 2 or more relative to the reference system were obtained.
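
    The allocation logic aside, the core of two phase (double) sampling is a regression estimator that corrects the accurate phase-two mean using the cheap phase-one auxiliary mean. A minimal sketch with synthetic segment proportions follows; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase 1: cheap full-frame proportion estimates x for many segments.
n1 = 400
x = rng.uniform(0.1, 0.6, n1)

# Phase 2: a subsample with accurate (expensive) wheat proportions y.
n2 = 40
idx = rng.choice(n1, n2, replace=False)
y = np.clip(x[idx] + rng.normal(0, 0.03, n2), 0, 1)

# Double-sampling regression estimator: adjust the phase-2 mean by the
# difference between the phase-1 and phase-2 means of the auxiliary variable.
b = np.polyfit(x[idx], y, 1)[0]
y_bar_reg = y.mean() + b * (x.mean() - x[idx].mean())
print(f"regression estimate of wheat proportion: {y_bar_reg:.4f}")
```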

  2. Are There Long-Run Effects of the Minimum Wage?

    PubMed Central

    Sorkin, Isaac

    2014-01-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790

  3. Are There Long-Run Effects of the Minimum Wage?

    PubMed

    Sorkin, Isaac

    2015-04-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.

  4. A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Hezarkhani, Ardeshir

    2012-05-01

    Grade estimation is an important but costly and time-consuming stage in a mine project, and it is a challenge for geologists and mining engineers due to the structural complexities of mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. For example, one major problem in FL is the difficulty of constructing the membership functions (MFs). Other problems, such as the choice of architecture and local minima, arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, based on ANN and FL, is called "Coactive Neuro-Fuzzy Inference System" (CANFIS), which combines the two approaches. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA) - a well-known technique for solving complex optimization problems - is also employed to optimize the network parameters, including the learning rate, the momentum of the network and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with the new method (CANFIS-GA) is also carried out through a case study of the Sungun copper deposit, located in East-Azerbaijan, Iran. The results show that CANFIS-GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation and is therefore suggested for grade estimation in similar problems.

  5. A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation

    PubMed Central

    Tahmasebi, Pejman; Hezarkhani, Ardeshir

    2012-01-01

    Grade estimation is an important but costly and time-consuming stage in a mine project, and it is a challenge for geologists and mining engineers due to the structural complexities of mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. For example, one major problem in FL is the difficulty of constructing the membership functions (MFs). Other problems, such as the choice of architecture and local minima, arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, based on ANN and FL, is called “Coactive Neuro-Fuzzy Inference System” (CANFIS), which combines the two approaches. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA) – a well-known technique for solving complex optimization problems – is also employed to optimize the network parameters, including the learning rate, the momentum of the network and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with the new method (CANFIS–GA) is also carried out through a case study of the Sungun copper deposit, located in East-Azerbaijan, Iran. The results show that CANFIS–GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation and is therefore suggested for grade estimation in similar problems. PMID:25540468

  6. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm.

    PubMed

    Cuevas, Erik; Díaz, Margarita

    2015-01-01

    In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sample consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic, and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
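
    To make the sampling strategy concrete, the sketch below applies harmony-search-driven hypothesis generation to a robust line fit, with inlier count as the RANSAC-style quality measure. It is a toy stand-in for homography estimation, and the parameter ranges and HS constants (HMCR, PAR, bandwidth) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def inlier_count(theta, pts, tol=0.05):
    a, b = theta                         # candidate line y = a*x + b
    return int(np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol))

# Synthetic data: a line plus 40% gross outliers.
x = rng.uniform(-1, 1, 200)
y = 0.7 * x + 0.2 + rng.normal(0, 0.01, 200)
mask = rng.random(200) < 0.4
y[mask] = rng.uniform(-2, 2, mask.sum())
pts = np.column_stack([x, y])

# Harmony search over (slope, intercept); quality = RANSAC-style inlier count.
HM = rng.uniform(-2, 2, (20, 2))         # harmony memory of candidate models
fit = np.array([inlier_count(h, pts) for h in HM])
hmcr, par, bw = 0.9, 0.3, 0.05           # memory rate, pitch rate, bandwidth
for _ in range(2000):
    new = np.empty(2)
    for d in range(2):
        if rng.random() < hmcr:          # draw a component from memory ...
            new[d] = HM[rng.integers(20), d]
            if rng.random() < par:       # ... with a small pitch adjustment
                new[d] += rng.uniform(-bw, bw)
        else:                            # or explore the range at random
            new[d] = rng.uniform(-2, 2)
    f = inlier_count(new, pts)
    worst = int(fit.argmin())
    if f > fit[worst]:                   # replace the worst harmony
        HM[worst], fit[worst] = new, f
print("best (slope, intercept):", HM[fit.argmax()])
```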

  7. A Comparison Between Heliosat-2 and Artificial Neural Network Methods for Global Horizontal Irradiance Retrievals over Desert Environments

    NASA Astrophysics Data System (ADS)

    Ghedira, H.; Eissa, Y.

    2012-12-01

    Global horizontal irradiance (GHI) retrievals at the surface of any given location can be used for preliminary solar resource assessments. For more accurate assessments, the direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) are also required to estimate the global tilt irradiance, mainly used for fixed flat plate collectors. Two different satellite-based models for solar irradiance retrievals have been applied over the desert environment of the United Arab Emirates (UAE). Both models employ channels of the SEVIRI instrument, onboard the geostationary satellite Meteosat Second Generation, as their main inputs. The satellite images used in this study have a temporal resolution of 15 min and a spatial resolution of 3 km. The objective of this study is to compare the GHI retrieved using the Heliosat-2 method with that retrieved using an artificial neural network (ANN) ensemble method over the UAE. The high-resolution visible channel of SEVIRI is used in the Heliosat-2 method to derive the cloud index. The cloud index is then used to compute the cloud transmission, while the cloud-free GHI is computed from the Linke turbidity factor. The product of the cloud transmission and the cloud-free GHI gives the estimated GHI. A constant underestimation is observed in the estimated GHI over the dataset available in the UAE. Therefore, the cloud-free DHI equation in the model was recalibrated to remove the bias. After recalibration, results over the UAE show a root mean square error (RMSE) of 10.1% and a mean bias error (MBE) of -0.5%. As for the ANN approach, six thermal channels of SEVIRI were used to estimate the DHI and the total optical depth of the atmosphere (δ). An ensemble approach is employed to obtain better generalizability of the results, as opposed to using a single weak network. The DNI is then computed from the estimated δ using the Beer-Bouguer-Lambert law. The GHI is computed from the DNI and DHI estimates. The RMSE for the estimated GHI obtained over an independent dataset over the UAE is 7.2% and the MBE is +1.9%. The results obtained by the two methods show that both the recalibrated Heliosat-2 and the ANN ensemble methods estimate the GHI at a 15-min resolution with high accuracy. The advantage of the ANN ensemble approach is that it derives the GHI from accurate DNI and DHI estimates. The DNI and DHI estimates are valuable when computing the global tilt irradiance. Also, accurate DNI estimates are beneficial for preliminary site selection for concentrating solar power plants.
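
    A minimal sketch of the Heliosat-style chain from reflectance to GHI follows, under the simplifying assumption that cloud transmission equals one minus the cloud index (Heliosat-2 actually uses a piecewise relation); the function name, clamp values and numeric inputs are all invented for illustration.

```python
def heliosat_ghi(reflectance, rho_ground, rho_cloud, ghi_clear):
    """Heliosat-style GHI estimate from a visible-channel reflectance.

    n is the cloud index; the clear-sky index k_c (cloud transmission) is
    approximated here as 1 - n, a simplification of Heliosat-2's relation.
    """
    n = (reflectance - rho_ground) / (rho_cloud - rho_ground)
    k_c = max(0.05, min(1.2, 1.0 - n))   # illustrative physical clamps
    return k_c * ghi_clear               # W/m^2, given ghi_clear in W/m^2

print(heliosat_ghi(reflectance=0.35, rho_ground=0.2, rho_cloud=0.8, ghi_clear=900.0))
```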

  8. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate these uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing the chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analyses for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
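
    As a small illustration of the verification idea, the sketch below runs a random-walk Metropolis chain (a simpler relative of DRAM/DREAM) on a toy one-parameter calibration problem and compares the sampled posterior with a direct grid evaluation of Bayes' formula; the data and flat prior are invented stand-ins for the heat or HIV model posteriors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy calibration problem: infer mu from noisy data under a flat prior.
data = rng.normal(1.5, 0.5, 30)

def log_post(mu):
    return -0.5 * np.sum((data - mu) ** 2) / 0.5 ** 2

# Random-walk Metropolis chain.
chain, mu, lp = [], 0.0, log_post(0.0)
for _ in range(20000):
    prop = mu + rng.normal(0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        mu, lp = prop, lp_prop
    chain.append(mu)
chain = np.asarray(chain[2000:])              # discard burn-in

# Verification: direct evaluation of Bayes' formula on a grid.
grid = np.linspace(0.5, 2.5, 400)
lp_grid = np.array([log_post(g) for g in grid])
dens = np.exp(lp_grid - lp_grid.max())
dens /= np.trapz(dens, grid)
print("MCMC mean/sd:", chain.mean(), chain.std())
print("grid mean   :", np.trapz(grid * dens, grid))
```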

  9. On the inverse problem of blade design for centrifugal pumps and fans

    NASA Astrophysics Data System (ADS)

    Kruyt, N. P.; Westra, R. W.

    2014-06-01

    The inverse problem of blade design for centrifugal pumps and fans has been studied. The solution to this problem provides the geometry of rotor blades that realize specified performance characteristics, together with the corresponding flow field. Here a three-dimensional solution method is described in which the so-called meridional geometry is fixed and the distribution of the azimuthal angle at the three-dimensional blade surface is determined for blades of infinitesimal thickness. The developed formulation is based on potential-flow theory. Besides the blade impermeability condition at the pressure and suction side of the blades, an additional boundary condition at the blade surface is required in order to fix the unknown blade geometry. For this purpose the mean-swirl distribution is employed. The iterative numerical method is based on a three-dimensional finite element method approach in which the flow equations are solved on the domain determined by the latest estimate of the blade geometry, with the mean-swirl distribution boundary condition at the blade surface being enforced. The blade impermeability boundary condition is then used to find an improved estimate of the blade geometry. The robustness of the method is increased by specific techniques, such as spanwise-coupled solution of the discretized impermeability condition and the use of under-relaxation in adjusting the estimates of the blade geometry. Various examples are shown that demonstrate the effectiveness and robustness of the method in finding a solution for the blade geometry of different types of centrifugal pumps and fans. The influence of the employed mean-swirl distribution on the performance characteristics is also investigated.

  10. Machine Learning Methods for Attack Detection in the Smart Grid.

    PubMed

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
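
    A minimal supervised instance of this framework, assuming scikit-learn is available: synthetic measurements z = Hx + noise, half of them corrupted by a sparse injection, are classified with logistic regression. The measurement model and attack pattern are illustrative, not the IEEE test-system setup used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_meas, n_state, n_samples = 20, 8, 2000

H = rng.normal(size=(n_meas, n_state))        # toy measurement matrix
X = np.empty((n_samples, n_meas))
y = np.empty(n_samples, dtype=int)
for k in range(n_samples):
    z = H @ rng.normal(size=n_state) + rng.normal(0, 0.1, n_meas)
    y[k] = int(rng.random() < 0.5)            # label: 1 = attacked
    if y[k]:                                  # sparse injection on a few meters
        idx = rng.choice(n_meas, 3, replace=False)
        z[idx] += rng.uniform(0.5, 1.5, 3)
    X[k] = z

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("detection accuracy:", clf.score(X_te, y_te))
```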

  11. The two-phase flow IPTT method for measurement of nonwetting-wetting liquid interfacial areas at higher nonwetting saturations in natural porous media

    PubMed Central

    Zhong, Hua; Ouni, Asma El; Lin, Dan; Wang, Bingguo; Brusseau, Mark L

    2017-01-01

    Interfacial areas between nonwetting-wetting (NW-W) liquids in natural porous media were measured using a modified version of the interfacial partitioning tracer test (IPTT) method that employed simultaneous two-phase flow conditions, which allowed measurement at NW saturations higher than trapped residual saturation. Measurements were conducted over a range of saturations for a well-sorted quartz sand under three wetting scenarios of primary drainage (PD), secondary imbibition (SI), and secondary drainage (SD). Limited sets of experiments were also conducted for a model glass-bead medium and for a soil. The measured interfacial areas were compared to interfacial areas measured using the standard IPTT method for liquid-liquid systems, which employs residual NW saturations. In addition, the theoretical maximum interfacial areas estimated from the measured data are compared to specific solid surface areas measured with the N2/BET method and estimated based on geometrical calculations for smooth spheres. Interfacial areas increase linearly with decreasing water saturation over the range of saturations employed. The maximum interfacial areas determined for the glass beads, which have no surface roughness, are 32±4 and 36±5 cm−1 for PD and SI cycles, respectively. The values are similar to the geometric specific solid surface area (31±2 cm−1) and the N2/BET solid surface area (28±2 cm−1). The maximum interfacial areas are 274±38, 235±27, and 581±160 cm−1 for the sand for PD, SI, and SD cycles, respectively, and ~7625 cm−1 for the soil for PD and SI. The maximum interfacial areas for the sand and soil are significantly larger than the estimated smooth-sphere specific solid surface areas (107±8 cm−1 and 152±8 cm−1, respectively), but much smaller than the N2/BET solid surface area (1387±92 cm−1 and 55224 cm−1, respectively). The NW-W interfacial areas measured with the two-phase flow method compare well to values measured using the standard IPTT method. PMID:28959079

  12. Cascaded Kalman and particle filters for photogrammetry based gyroscope drift and robot attitude estimation.

    PubMed

    Sadaghzadeh N, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin

    2014-03-01

    A gyroscope drift and robot attitude estimation method based on cascaded Kalman and particle filtering is proposed in this paper. Because the measurements of a MEMS gyroscope are noisy and erroneous, the gyroscope is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, with augmented gyroscope drift dynamics, are employed as the system state space model. Nonlinear attitude kinematics, drift and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman Filtering (KF) is employed for the linear and Gaussian subsystem consisting of angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as input to the second, Particle Filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system are used to demonstrate the efficiency of the method.

  13. Macroeconomic costs of the unmet burden of surgical disease in Sierra Leone: a retrospective economic analysis.

    PubMed

    Grimes, Caris E; Quaife, Matthew; Kamara, Thaim B; Lavy, Christopher B D; Leather, Andy J M; Bolkan, Håkon A

    2018-03-14

    The Lancet Commission on Global Surgery estimated that low/middle-income countries will lose a cumulative US$12.3 trillion of gross domestic product (GDP) due to the unmet burden of surgical disease. However, no country-specific data currently exist. We aimed to estimate the costs to the Sierra Leone economy from death and disability which may have been averted by surgical care. We used estimates of total, met and unmet need from two main sources: a cluster randomised, cross-sectional, countrywide survey and a retrospective, nationwide study on surgery in Sierra Leone. We calculated estimated disability-adjusted life years from morbidity and mortality for the estimated unmet burden and modelled the likely economic impact using three different methods: gross national income per capita, lifetime earnings foregone and the value of a statistical life. In 2012, the estimated, discounted lifetime loss to the Sierra Leone economy from the unmet burden of surgical disease was between US$1.1 and US$3.8 billion, depending on the economic method used. These lifetime losses equate to between 23% and 100% of the annual GDP of Sierra Leone. 80% of economic losses were due to mortality. The incremental losses averted by scaling up surgical provision to the Lancet Commission target of 80% were calculated to be between US$360 million and US$2.9 billion. There is a large economic loss from the unmet need for surgical care in Sierra Leone, and there is an immediate need for massive investment to counteract ongoing economic losses.

  14. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    PubMed

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. The response may depend not on a covariate directly but on some function of it. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, one often employs the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model that incorporates the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study.
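
    For reference, PAVA itself is short; the sketch below is a standard weighted pool-adjacent-violators implementation for a nondecreasing fit, not the paper's empirical-likelihood extension.

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators algorithm for a nondecreasing isotonic fit."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    values, weights, sizes = [], [], []
    for yi, wi in zip(y, w):
        values.append(yi)
        weights.append(wi)
        sizes.append(1)
        # Merge blocks while the monotonicity constraint is violated.
        while len(values) > 1 and values[-2] > values[-1]:
            wsum = weights[-2] + weights[-1]
            vnew = (weights[-2] * values[-2] + weights[-1] * values[-1]) / wsum
            snew = sizes[-2] + sizes[-1]
            values[-2:] = [vnew]
            weights[-2:] = [wsum]
            sizes[-2:] = [snew]
    return np.repeat(values, sizes)

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))  # [1.  2.5  2.5  3.75 3.75]
```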

  15. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Estimating human affective states from direct observations and from facial, vocal, gestural, physiological, and central nervous signals through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly require complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal originate in the brain’s motivational circuit. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states.
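
    A minimal sketch of the underlying idea follows, fitting a second-order two-input polynomial by least squares to synthetic stand-ins for the skin-conductance features and valence ratings; all data and the feature layout are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins for two skin-conductance features and valence ratings.
n = 300
X = rng.uniform(-1, 1, (n, 2))
valence = (0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2
           + 0.3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, n))

def design(X, order=2):
    """Expand two inputs into all monomials up to the given total order."""
    x1, x2 = X[:, 0], X[:, 1]
    cols = [np.ones(len(X))]                 # intercept
    for i in range(order + 1):
        for j in range(order + 1 - i):
            if i + j > 0:
                cols.append(x1 ** i * x2 ** j)
    return np.column_stack(cols)

A = design(X, order=2)
coef, *_ = np.linalg.lstsq(A, valence, rcond=None)
pred = A @ coef
print("correlation:", np.corrcoef(pred, valence)[0, 1])
```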

  16. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    PubMed Central

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    Estimating human affective states from direct observations and from facial, vocal, gestural, physiological, and central nervous signals through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly require complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal originate in the brain’s motivational circuit. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states. PMID:26996254

  17. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    NASA Astrophysics Data System (ADS)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to numerous planning and executive challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted values with the actual data from instrumentation was employed to quantify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.

  18. Trade-offs in experimental designs for estimating post-release mortality in containment studies

    USGS Publications Warehouse

    Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L

    2014-01-01

    Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance for experimental design of PRM studies with containers is lacking. We used simulations to evaluate trade-offs in the number of containers (replicates) employed versus the number of fish-per container when estimating tagging mortality. We also investigated effects of control fish survival and how among container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among container variability in control fish mortality exceeded 10% of the mean. We provided programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
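
    In the spirit of the simulation code the authors provide, the hedged sketch below compares power across allocations of a fixed number of fish between containers. The mortality rates, between-container variability and one-sided z-test are illustrative choices, not the authors' exact design.

```python
import numpy as np

rng = np.random.default_rng(5)

def power(n_containers, fish_per, p_control=0.10, add_mort=0.10,
          sd_between=0.03, reps=2000):
    """Approximate power to detect additive treatment mortality with a
    container design: container-level mortality varies between containers
    (sd_between); a one-sided two-sample z-test on container means is used."""
    hits = 0
    for _ in range(reps):
        pc = np.clip(rng.normal(p_control, sd_between, n_containers), 0, 1)
        pt = np.clip(rng.normal(p_control + add_mort, sd_between, n_containers), 0, 1)
        ctrl = rng.binomial(fish_per, pc) / fish_per   # observed mortality rates
        trt = rng.binomial(fish_per, pt) / fish_per
        se = np.sqrt(trt.var(ddof=1) / n_containers + ctrl.var(ddof=1) / n_containers)
        if se > 0 and (trt.mean() - ctrl.mean()) / se > 1.645:
            hits += 1
    return hits / reps

# Fixed total of 200 fish per arm, allocated differently.
for n_c, f in [(4, 50), (10, 20), (20, 10)]:
    print(f"{n_c} containers x {f} fish: power ~ {power(n_c, f):.2f}")
```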

  19. An employee total health management-based survey of Iowa employers.

    PubMed

    Merchant, James A; Lind, David P; Kelly, Kevin M; Hall, Jennifer L

    2013-12-01

    To implement an Employee Total Health Management (ETHM) model-based questionnaire and provide estimates of model program elements among a statewide sample of Iowa employers. Survey a stratified random sample of Iowa employers, and characterize and estimate employer participation in ETHM program elements. Iowa employers are implementing less than 30% of all 12 components of ETHM, with the exception of occupational safety and health (46.6%) and workers' compensation insurance coverage (89.2%), but intend modest expansion of all components in the coming year. The ETHM questionnaire-based survey provides estimates of progress Iowa employers are making toward implementing components of Total Worker Health programs.

  20. Modal vector estimation for closely spaced frequency modes

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.; Blair, M.

    1982-01-01

    Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies. This makes it difficult to determine valid mode shapes using single shaker test methods. By employing band selectable analysis (zoom) techniques and Kennedy-Pancu circle fitting or some multiple degree of freedom (MDOF) curve fit procedure, the usefulness of the single shaker approach can be extended.

  1. Antioxidant capacity of human blood plasma and human urine: simultaneous evaluation of the ORAC index and ascorbic acid concentration employing pyrogallol red as probe.

    PubMed

    Torres, P; Galleguillos, P; Lissi, E; López-Alarcón, C

    2008-10-15

    The oxygen radical absorbance capacity (ORAC) methodology has been employed to estimate the antioxidant capacity of human blood plasma and human urine using pyrogallol red (ORAC-PGR) as the target molecule. Uric acid, reduced glutathione, human serum albumin, and ascorbic acid (ASC) inhibited the consumption of pyrogallol red, but only ASC generated an induction time. Human blood plasma and human urine efficiently protected pyrogallol red. In these assays, both biological fluids generated clear induction times that were removed by ascorbate oxidase. From these results, the ORAC-PGR method can be proposed as a simple alternative for evaluating an ORAC index and, simultaneously, estimating the concentration of ascorbic acid in human blood plasma or human urine.

  2. Incorporating spatial context into statistical classification of multidimensional image data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.

    1981-01-01

    Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.

  3. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
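
    A minimal sketch of generalised cross-correlation for time-delay estimation follows; the PHAT weighting used here is a common stand-in for the paper's modified ML prefilter, which additionally regularises the spectral weights using the estimated PSDs and CSD. The signal model and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n, true_delay = 8192, 37                      # delay in samples

src = rng.normal(size=n + 200)                # common source signal
s2 = src[100:100 + n] + 0.5 * rng.normal(size=n)              # reference sensor
s1 = src[100 - true_delay:100 - true_delay + n] + 0.5 * rng.normal(size=n)  # lags s2

# Generalised cross-correlation with a PHAT-style spectral weighting.
S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
cross = S1 * np.conj(S2)
gcc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
lag = int(np.argmax(np.abs(gcc)))
if lag > n // 2:                              # unwrap circular lag
    lag -= n
print("estimated delay:", lag, "samples (true:", true_delay, ")")
```

    The leak position then follows from the estimated time difference, the sensor spacing and the wave propagation speed in the pipe.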

  4. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.

  5. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as the regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and motion discontinuities, and produces accurate piecewise-smooth motion fields.

  6. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  7. Use of statistical and pharmacokinetic-pharmacodynamic modeling and simulation to improve decision-making: A section summary report of the trends and innovations in clinical trial statistics conference.

    PubMed

    Kimko, Holly; Berry, Seth; O'Kelly, Michael; Mehrotra, Nitin; Hutmacher, Matthew; Sethuraman, Venkat

    2017-01-01

    The application of modeling and simulation (M&S) methods to improve decision-making was discussed during the Trends & Innovations in Clinical Trial Statistics Conference held in Durham, North Carolina, USA on May 1-4, 2016. Uses of both pharmacometric and statistical M&S were presented during the conference, highlighting the diversity of the methods employed by pharmacometricians and statisticians to address a broad range of quantitative issues in drug development. Five presentations are summarized herein, which cover the development strategy of employing M&S to drive decision-making; European initiatives on best practice in M&S; case studies of pharmacokinetic/pharmacodynamics modeling in regulatory decisions; estimation of exposure-response relationships in the presence of confounding; and the utility of estimating the probability of a correct decision for dose selection when prior information is limited. While M&S has been widely used during the last few decades, it is expected to play an essential role as more quantitative assessments are employed in the decision-making process. By integrating M&S as a tool to compile the totality of evidence collected throughout the drug development program, more informed decisions will be made.

  8. Correction of odds ratios in case-control studies for exposure misclassification with partial knowledge of the degree of agreement among experts who assessed exposures.

    PubMed

    Burstyn, Igor; Gustafson, Paul; Pintos, Javier; Lavoué, Jérôme; Siemiatycki, Jack

    2018-02-01

    Estimates of association between exposures and diseases are often distorted by error in exposure classification. When the validity of exposure assessment is known, this can be used to adjust these estimates. When exposure is assessed by experts, even if validity is not known, we sometimes have information about interrater reliability. We present a Bayesian method for translating the knowledge of interrater reliability, which is often available, into knowledge about validity, which is often needed but not directly available, and applying this to correct odds ratios (OR). The method allows for inclusion of observed potential confounders in the analysis, as is common in regression-based control for confounding. Our method uses a novel type of prior on sensitivity and specificity. The approach is illustrated with data from a case-control study of lung cancer risk and occupational exposure to diesel engine emissions, in which exposure assessment was made by detailed job history interviews with study subjects followed by expert judgement. Using interrater agreement measured by kappas (κ), we estimate the sensitivity and specificity of exposure assessment and derive misclassification-corrected confounder-adjusted OR. The misclassification-corrected and confounder-adjusted OR obtained with the most defensible prior had a posterior distribution centre of 1.6 with 95% credible interval (CrI) 1.1 to 2.6. This was on average greater in magnitude than the frequentist point estimate of 1.3 (95% CI 1.0 to 1.7). The method yields insights into the degree of exposure misclassification and appears to reduce attenuation bias due to misclassification of exposure, while the estimated uncertainty increased.
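
    A rough Monte Carlo version of the back-correction idea is sketched below: draws of sensitivity and specificity (stand-ins for priors informed by interrater kappa) correct the observed exposure prevalences and hence the OR. The counts and Beta priors are invented, and the paper's full Bayesian model with confounders is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

def corrected_or(a, b, c, d, se_draws, sp_draws):
    """Monte Carlo misclassification correction of a 2x2 odds ratio.

    (a, b) = exposed/unexposed cases, (c, d) = exposed/unexposed controls,
    assuming nondifferential exposure misclassification.
    """
    ors = []
    for se, sp in zip(se_draws, sp_draws):
        # Back-correct observed prevalences: p_obs = se*p + (1 - sp)*(1 - p).
        p1 = (a / (a + b) - (1 - sp)) / (se + sp - 1)
        p0 = (c / (c + d) - (1 - sp)) / (se + sp - 1)
        if 0 < p1 < 1 and 0 < p0 < 1:
            ors.append((p1 / (1 - p1)) / (p0 / (1 - p0)))
    return np.percentile(ors, [2.5, 50, 97.5])

se = rng.beta(20, 5, 5000)   # illustrative prior: sensitivity around 0.8
sp = rng.beta(30, 3, 5000)   # illustrative prior: specificity around 0.9
print("corrected OR (2.5/50/97.5%):", corrected_or(120, 380, 80, 420, se, sp))
```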

  9. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion, or of hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
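
    As a point of reference, the sketch below implements plain Orthogonal Matching Pursuit, the simpler relative of the StOMP solver extended in this work; the non-negativity and domain-restriction steps described above are not included, and the problem sizes are invented.

```python
import numpy as np

def omp(A, b, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select columns of A and
    re-solve a least-squares problem restricted to the selected support."""
    residual, support = b.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(60, 200))
A /= np.linalg.norm(A, axis=0)                      # unit-norm columns
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
b = A @ x_true + rng.normal(0, 0.01, 60)
print("recovered support:", np.nonzero(omp(A, b, 3))[0])
```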

  10. A novel method for state of charge estimation of lithium-ion batteries using a nonlinear observer

    NASA Astrophysics Data System (ADS)

    Xia, Bizhong; Chen, Chaoren; Tian, Yong; Sun, Wei; Xu, Zhihui; Zheng, Weiwei

    2014-12-01

    The state of charge (SOC) is important for the safety and reliability of battery operation since it indicates the remaining capacity of a battery. However, as the internal state of each cell cannot be directly measured, the value of the SOC has to be estimated. In this paper, a novel method for SOC estimation in electric vehicles (EVs) using a nonlinear observer (NLO) is presented. One advantage of this method is that it does not need complicated matrix operations, so the computation cost can be reduced. As a key step in the design of the nonlinear observer, the state-space equations based on the equivalent circuit model are derived. The Lyapunov stability theory is employed to prove the convergence of the nonlinear observer. Four experiments are carried out to evaluate the performance of the presented method. The results show that the SOC estimation error converges to 3% within 130 s even when the initial SOC error is as large as 20%, and does not exceed 4.5% when the measurement suffers both 2.5% voltage noise and 5% current noise. Besides, the presented method has advantages over the extended Kalman filter (EKF) and sliding mode observer (SMO) algorithms in terms of computation cost, estimation accuracy and convergence rate.
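
    A toy version of the idea follows: an output-injection observer on a one-state coulomb-counting model with an invented open-circuit-voltage curve. The paper's full equivalent-circuit model and Lyapunov-based gain design are not reproduced; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy cell: soc' = -I/Q, terminal voltage v = ocv(soc) - R*I.
Q, R, dt = 2.0 * 3600.0, 0.05, 1.0      # 2 Ah capacity, 50 mOhm, 1 s step

def ocv(soc):                           # invented open-circuit-voltage curve
    return 3.0 + 1.2 * soc - 0.2 * np.sin(2.0 * np.pi * soc)

soc_true, soc_hat, L = 0.8, 0.6, 0.5    # 20% initial estimation error
for _ in range(600):
    I = 2.0                             # constant 2 A discharge current
    soc_true += -I / Q * dt
    v_meas = ocv(soc_true) - R * I + rng.normal(0, 0.002)
    # Observer: model prediction plus nonlinear output-error injection.
    v_hat = ocv(soc_hat) - R * I
    soc_hat += (-I / Q + L * (v_meas - v_hat)) * dt
print(f"true SOC {soc_true:.4f}, estimated SOC {soc_hat:.4f}")
```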

  11. Monitoring vegetation conditions from LANDSAT for use in range management

    NASA Technical Reports Server (NTRS)

    Haas, R. H.; Deering, D. W.; Rouse, J. W., Jr.; Schell, J. A.

    1975-01-01

    A summary of the LANDSAT Great Plains Corridor projects and the principal results are presented. Emphasis is given to the use of satellite acquired phenological data for range management and agri-business activities. A convenient method of reducing LANDSAT MSS data to provide quantitative estimates of green biomass on rangelands in the Great Plains is explained. Suggestions for the use of this approach for evaluating range feed conditions are presented. A LANDSAT Follow-on project has been initiated which will employ the green biomass estimation method in a quasi-operational monitoring of range readiness and range feed conditions on a regional scale.

  12. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    PubMed

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions.
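
    For concreteness, the standard biserial estimator (continuous y, artificially dichotomised d) can be written directly from the group means, the split proportion and the normal density at the threshold. The sketch below checks it on synthetic data with a known underlying correlation of 0.5; it illustrates only the point estimate, not the sampling-variance methods compared in the article.

```python
import numpy as np
from scipy.stats import norm

def biserial(y, d):
    """Biserial correlation between continuous y and artificially
    dichotomised d (0/1), assuming an underlying normal variable."""
    y, d = np.asarray(y, float), np.asarray(d, int)
    p = d.mean()                       # proportion in the '1' group
    m1, m0 = y[d == 1].mean(), y[d == 0].mean()
    s = y.std(ddof=0)
    h = norm.pdf(norm.ppf(p))          # normal density at the threshold
    return (m1 - m0) / s * p * (1 - p) / h

rng = np.random.default_rng(9)
x = rng.normal(size=5000)
y = 0.5 * x + rng.normal(0, np.sqrt(1 - 0.25), 5000)   # true r = 0.5
d = (x > 0.3).astype(int)              # dichotomise x at a cut point
print("biserial estimate:", biserial(y, d))
```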

  13. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time over the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution for the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The proposed method can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.

  14. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
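
    The plain (fully absorbing) walk-on-spheres step is easy to sketch for a disk with Dirichlet data; the partially reflecting variant of the paper additionally handles Neumann boundary parts with reflection steps, which this toy omits. The domain, boundary data and tolerances are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

def wos_disk(x0, boundary_value, radius=1.0, eps=1e-4, n_walks=5000):
    """Estimate the harmonic function u at x0 in a disk with Dirichlet data."""
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            r = radius - np.linalg.norm(x)          # largest inscribed sphere
            if r < eps:                             # absorbed near the boundary
                break
            phi = rng.uniform(0.0, 2.0 * np.pi)     # uniform exit point
            x = x + r * np.array([np.cos(phi), np.sin(phi)])
        total += boundary_value(x / np.linalg.norm(x) * radius)
    return total / n_walks

# Check against the exact harmonic function u(x, y) = x.
print("WoS estimate:", wos_disk([0.3, 0.2], lambda p: p[0]), " exact: 0.3")
```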

  15. Compressed Symmetric Nested Arrays and Their Application for Direction-of-Arrival Estimation of Near-Field Sources.

    PubMed

    Li, Shuang; Xie, Dongfeng

    2016-11-17

    In this paper, a new sensor array geometry, called a compressed symmetric nested array (CSNA), is designed to increase the degrees of freedom in the near field. As its name suggests, a CSNA is constructed by removing some elements from two identical nested arrays. Closed-form expressions are also presented for the sensor locations and the largest number of degrees of freedom obtainable as a function of the total number of sensors. Furthermore, a novel DOA estimation method is proposed that utilizes the CSNA in the near field. By employing this new array geometry, our method can identify more sources than sensors. Compared with other existing methods, the proposed method achieves higher resolution because of the increased array aperture. Simulation results are presented to verify the effectiveness of the proposed method.

  16. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    PubMed Central

    Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun

    2017-01-01

    To address the inaccuracy in estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation from low-resolution (LR) remote sensing images. The proposed algorithm improves the spatial resolution of the image and benefits visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality; therefore, analyzing the relationship between the PSFs of the HR and LR images is important for estimating the PSF of the HR image from multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation for LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and the bicubic interpolation method. PMID:28208837

  17. Scale of association: hierarchical linear models and the measurement of ecological systems

    Treesearch

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  18. A SPATIAL ANALYSIS OF FINE-ROOT BIOMASS FROM STAND DATA IN OREGON AND WASHINGTON

    EPA Science Inventory

    Because of the high spatial variability of fine roots in natural forest stands, accurate estimates of stand-level fine root biomass are difficult and expensive to obtain by standard coring methods. This study compares two different approaches that employ aboveground tree metrics...

  19. Overskilling Dynamics and Education Pathways

    ERIC Educational Resources Information Center

    Mavromaras, Kostas; McGuinness, Seamus

    2012-01-01

    This paper uses panel data and econometric methods to estimate the incidence and the dynamic properties of overskilling among employed individuals. The paper begins by asking whether there is extensive overskilling in the labour market, and whether overskilling differs by education pathway. The answer to both questions is yes. The paper continues…

  20. Mimicking multivalent protein-carbohydrate interactions for monitoring the glucosamine level in biological fluids and pharmaceutical tablets.

    PubMed

    Dey, Nilanjan; Bhattacharya, Santanu

    2017-05-11

    An easily synthesizable probe has been employed for dual mode sensing of glucosamine in pure water. The method was also applied for glucosamine estimation in blood serum samples and pharmaceutical tablets. Further, selective detection of glucosamine was also achieved using portable color strips.

  1. Retirement Patterns from Career Employment

    ERIC Educational Resources Information Center

    Cahill, Kevin E.; Giandrea, Michael D.; Quinn, Joseph F.

    2006-01-01

    Purpose: This article investigates how older Americans leave their career jobs and estimates the extent of intermediate labor force activity (bridge jobs) between full-time work on a career job and complete labor-force withdrawal. Design and Methods: Using data from the Health and Retirement Study, we explored the work histories and retirement…

  2. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.

  3. Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.

    PubMed

    Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z

    2012-07-01

    Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.

  4. Efficiency of personal dosimetry methods in vascular interventional radiology.

    PubMed

    Bacchim Neto, Fernando Antonio; Alves, Allan Felipe Fattori; Mascarenhas, Yvone Maria; Giacomini, Guilherme; Maués, Nadine Helena Pelegrino Bastos; Nicolucci, Patrícia; de Freitas, Carlos Clayton Macedo; Alvarez, Matheus; Pina, Diana Rodrigues de

    2017-05-01

    The aim of the present study was to determine the efficiency of six methods for calculating the effective dose (E) received by health professionals during vascular interventional procedures. We evaluated the efficiency of six methods that are currently used to estimate professionals' E, based on national and international recommendations for interventional radiology. Equivalent doses on the head, neck, chest, abdomen, feet, and hands of seven professionals were monitored during 50 vascular interventional radiology procedures. Professionals' E was calculated for each procedure according to six methods that are commonly employed internationally. The most efficient E calculation method was used to determine the reference value (reference E) for comparison. The highest equivalent doses were found for the hands (0.34 ± 0.93 mSv). The two methods described by Brazilian regulations overestimated E by approximately 100% and 200%. The most efficient method was the one recommended by the United States National Council on Radiation Protection and Measurements (NCRP). The mean and median differences of this method relative to reference E were close to 0%, and its standard deviation was the lowest among the six methods. The present study showed that the most precise method was the one recommended by the NCRP, which uses two dosimeters (one over and one under protective aprons). Methods that employ at least two dosimeters are more efficient and provide better information regarding estimates of E and doses for shielded and unshielded regions. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
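
    As a hedged sketch of the two-dosimeter approach the study favors, the snippet below uses the weighting commonly attributed to NCRP Report No. 122; the exact formula applied in the study is not given in the abstract, so treat the coefficients as an assumption for illustration.

```python
def effective_dose_two_dosimeter(h_under_mSv, h_over_mSv):
    """Two-dosimeter effective-dose estimate in the form commonly attributed
    to NCRP Report No. 122: E = 0.5*H_under + 0.025*H_over, with H_under the
    under-apron (waist) badge and H_over the over-apron (collar) badge.
    Coefficients are quoted from memory of that report, not from this study."""
    return 0.5 * h_under_mSv + 0.025 * h_over_mSv

# Hypothetical monthly readings: 0.10 mSv under apron, 2.0 mSv over apron
print(effective_dose_two_dosimeter(0.10, 2.0))  # -> 0.10 mSv
```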

  5. Design rainfall depth estimation through two regional frequency analysis methods in Hanjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Ping; Yu, Chaofeng; Zhang, Xujie; Zhang, Qingqing; Xu, Xiao

    2012-02-01

    Hydrological predictions in ungauged basins are of significant importance for water resources management. In hydrological frequency analysis, regional methods are regarded as useful tools for estimating design rainfall/flood in areas with only little data available. The purpose of this paper is to investigate the performance of two regional methods, namely Hosking's approach and the cokriging approach, in hydrological frequency analysis. These two methods are employed to estimate 24-h design rainfall depths in Hanjiang River Basin, one of the largest tributaries of the Yangtze River, China. Validation is made by comparing the results to those calculated from the provincial handbook approach, which uses hundreds of rainfall gauge stations. Also for validation purposes, five hypothetically ungauged sites from the middle basin are chosen. The final results show that, compared to the provincial handbook approach, Hosking's approach often overestimated the 24-h design rainfall depths while the cokriging approach usually underestimated them. Overall, Hosking's approach produced more accurate results than the cokriging approach.

  6. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.

    PubMed

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-05-31

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach allows building-independent estimation and lower computing power at the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We show that robustification can indeed increase the performance of RSS-based floor detection algorithms.
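
    The paper's exact weighting and robustification are not reproduced in the abstract; the sketch below is one plausible robustified RSS-weighted centroid for the vertical coordinate, with a median-absolute-deviation trim standing in for the authors' robust statistics. All numbers are illustrative.

```python
import numpy as np

def robust_weighted_centroid_z(ap_heights, rss_dbm, trim=2.5):
    """Estimate the vertical position of a mobile from heard APs.

    Weights follow received signal strength (stronger AP -> larger weight);
    APs whose RSS deviates from the median by more than `trim` MADs are
    discarded before the weighted average (a simple robustification)."""
    z = np.asarray(ap_heights, float)
    rss = np.asarray(rss_dbm, float)
    mad = np.median(np.abs(rss - np.median(rss))) + 1e-9
    keep = np.abs(rss - np.median(rss)) <= trim * mad
    w = 10 ** (rss[keep] / 10.0)  # convert dBm to linear power weights
    return np.sum(w * z[keep]) / np.sum(w)

# APs on floors at 0, 3, 6, 9 m; one corrupted reading (-20 dBm) is trimmed
print(robust_weighted_centroid_z([0, 3, 6, 9], [-70, -55, -60, -20]))
```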

  7. Bounded Kalman filter method for motion-robust, non-contact heart rate estimation

    PubMed Central

    Prakash, Sakthi Kumar Arul; Tucker, Conrad S.

    2018-01-01

    The authors of this work present a real-time measurement of heart rate across different lighting conditions and motion categories. This is an advancement over existing remote Photo Plethysmography (rPPG) methods that require a static, controlled environment for heart rate detection, making them impractical for real-world scenarios wherein a patient may be in motion, or remotely connected to a healthcare provider through telehealth technologies. The algorithm aims to minimize motion artifacts such as blurring and noise due to head movements (uniform, random) by employing i) a blur identification and denoising algorithm for each frame and ii) a bounded Kalman filter technique for motion estimation and feature tracking. A case study is presented that demonstrates the feasibility of the algorithm in non-contact estimation of the pulse rate of subjects performing everyday head and body movements. The method in this paper outperforms state-of-the-art rPPG methods in heart rate detection, as revealed by the benchmark results. PMID:29552419

  8. Radiation exposure assessment for portsmouth naval shipyard health studies.

    PubMed

    Daniels, R D; Taulbee, T D; Chen, P

    2004-01-01

    Occupational radiation exposures of 13,475 civilian nuclear shipyard workers were investigated as part of a retrospective mortality study. Estimates of annual, cumulative and collective doses were tabulated for future dose-response analysis. Record sets were assembled and amended through range checks, examination of distributions and inspection. Methods were developed to adjust for administrative overestimates and dose from previous employment. Uncertainties from doses below the recording threshold were estimated. Low-dose protracted radiation exposures from submarine overhaul and repair predominated. Cumulative doses are best approximated by a hybrid log-normal distribution with arithmetic mean and median values of 20.59 and 3.24 mSv, respectively. The distribution is highly skewed with more than half the workers having cumulative doses <10 mSv and >95% having doses <100 mSv. The maximum cumulative dose is estimated at 649.39 mSv from 15 person-years of exposure. The collective dose was 277.42 person-Sv with 96.8% attributed to employment at Portsmouth Naval Shipyard.

  9. Health care demand elasticities by type of service.

    PubMed

    Ellis, Randall P; Martins, Bruno; Zhu, Wenjia

    2017-09-01

    We estimate within-year price elasticities of demand for detailed health care services using an instrumental variable strategy, in which individual monthly cost shares are instrumented by employer-year-plan-month average cost shares. A specification using backward myopic prices gives more plausible and stable results than using forward myopic prices. Using 171 million person-months spanning 73 employers from 2008 to 2014, we estimate that the overall demand elasticity by backward myopic consumers is -0.44, with higher elasticities of demand for pharmaceuticals (-0.44), specialists visits (-0.32), MRIs (-0.29) and mental health/substance abuse (-0.26), and lower elasticities for prevention visits (-0.02) and emergency rooms (-0.04). Demand response is lower for children, in larger firms, among hourly waged employees, and for sicker people. Overall the method appears promising for estimating elasticities for highly disaggregated services although the approach does not work well on services that are very expensive or persistent. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.

  11. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Accordingly, the window effects on these errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can be further reduced by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window, whose non-zero discrete Fourier transform bins lie at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while increasing the uncertainty only slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, small-damping, non-minimum-phase system, is employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans method and the LPM method, has the advantages of simple computation, less time consumption, and short data requirements; the calculation results on actual balance FRF data are consistent with the simulation results.
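
    The paper's window coefficients are not given in the abstract. A cosine-sum window of the form w[n] = a0 − a1·cos(2πn/N) − a3·cos(6πn/N) has exactly the stated non-zero DFT bins (0, ±1, ±3); the sketch below verifies this sparsity pattern with hypothetical coefficients chosen so the front-end value w[0] is near zero, in line with the transient-error argument above.

```python
import numpy as np

N = 256
n = np.arange(N)
a0, a1, a3 = 0.5, 0.4, 0.1  # hypothetical coefficients, not the paper's
w = a0 - a1 * np.cos(2 * np.pi * n / N) - a3 * np.cos(6 * np.pi * n / N)

W = np.fft.fft(w) / N
nonzero = np.nonzero(np.abs(W) > 1e-12)[0]
print(nonzero)  # [0, 1, 3, 253, 255], i.e. DFT bins 0, +/-1, +/-3 only
print(w[0])     # ~0: a small front-end value limits the transient error
```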

  12. Calculation of photoionization cross section near auto-ionizing lines and magnesium photoionization cross section near threshold

    NASA Technical Reports Server (NTRS)

    Moore, E. N.; Altick, P. L.

    1972-01-01

    The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.

  13. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
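
    CatchAll fits full parametric mixture models; as a minimal non-parametric point of reference computed from the same 'frequency count' input, the classical Chao1 lower bound takes only a few lines (this is an illustration, not CatchAll's algorithm).

```python
def chao1(counts):
    """Chao1 lower-bound estimate of total species richness from per-species
    sample counts; singletons f1 and doubletons f2 drive the correction for
    unseen species."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:  # bias-corrected form avoids division by zero
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

print(chao1([5, 3, 1, 1, 1, 2, 2, 1]))  # 8 observed + 4^2/(2*2) = 12
```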

  14. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    PubMed

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems subject to disturbances, an original output feedback sliding mode control method is presented, based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing while their bounds are unknown. By constructing a differential sliding surface and employing the reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a novel tracking error observer is produced. By using the observed tracking error and the estimated disturbance, the sliding mode controller becomes implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Improving RNA-Seq expression estimation by modeling isoform- and exon-specific read sequencing rate.

    PubMed

    Liu, Xuejun; Shi, Xinxin; Chen, Chunlin; Zhang, Li

    2015-10-16

    The high-throughput sequencing technology RNA-Seq has been widely used in recent years to quantify gene and isoform expression in transcriptome studies. Accurate expression measurement from the millions or billions of short reads generated is hindered by two main difficulties. One is the ambiguous mapping of reads to the reference transcriptome caused by alternative splicing, which increases the uncertainty in estimating isoform expression. The other is the non-uniformity of the read distribution along the reference transcriptome due to positional, sequencing, mappability and other undiscovered sources of bias. This violates the uniform read-distribution assumption of many expression calculation approaches, such as direct RPKM calculation and Poisson-based models. Many methods have been proposed to address these difficulties. Some approaches employ latent variable models to discover the underlying pattern of read sequencing. However, most of these methods perform bias correction based on surrounding sequence content and share the bias models across all genes. They therefore cannot estimate gene- and isoform-specific biases, as revealed by recent studies. We propose a latent variable model, NLDMseq, to estimate gene and isoform expression. Our method adopts latent variables to model the unknown isoforms from which reads originate and the underlying percentages of multiple spliced variants. The isoform- and exon-specific read sequencing biases are modeled to account for the non-uniformity of the read distribution, and are identified by utilizing the replicate information of multiple lanes of a single library run. We employ simulated and real data to verify the performance of our method in terms of the accuracy of gene and isoform expression calculation. Results show that NLDMseq obtains competitive gene and isoform expression estimates compared to popular alternatives. Finally, the proposed method is applied to the detection of differential expression (DE) to show its usefulness in downstream analyses. The proposed NLDMseq method accurately estimates gene and isoform expression from RNA-Seq data by modeling the isoform- and exon-specific read sequencing biases. It makes use of a latent variable model to discover the hidden pattern of read sequencing. We have shown that it works well on both simulated and real datasets, and has competitive performance compared to popular methods. The method has been implemented as freely available software which can be found at https://github.com/PUGEA/NLDMseq.

  16. Estimating seat belt effectiveness using matched-pair cohort methods.

    PubMed

    Cummings, Peter; Wells, James D; Rivara, Frederick P

    2003-01-01

    Using US data for 1986-1998 fatal crashes, we employed matched-pair analysis methods to estimate that the relative risk of death among belted compared with unbelted occupants was 0.39 (95% confidence interval (CI) 0.37-0.41). This differs from relative risk estimates of about 0.55 in studies that used crash data collected prior to 1986. Using 1975-1998 data, we examined and rejected three theories that might explain the difference between our estimate and older estimates: (1) differences in the analysis methods; (2) changes related to car model year; (3) changes in crash characteristics over time. A fourth theory, that the introduction of seat belt laws would induce some survivors to claim belt use when they were not restrained, could explain part of the difference in our estimate and older estimates; but even in states without seat belt laws, from 1986 through 1998, the relative risk estimate was 0.45 (95% CI 0.39-0.52). All of the difference between our estimate and older estimates could be explained by some misclassification of seat belt use. Relative risk estimates would move away from 1, toward their true value, if misclassification of both the belted and unbelted decreased over time, or if the degree of misclassification remained constant, as the prevalence of belt use increased. We conclude that estimates of seat belt effects based upon data prior to 1986 may be biased toward 1 by misclassification.

  17. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
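
    As a hedged sketch of the side-spectrum idea, the snippet below fits a single-mode PSD shape to one side of a synthetic peak, using a derivative-free simplex search in place of the authors' pattern-search-plus-clustering stages; the model, data and starting point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def sdof_psd(f, fn, zeta, a):
    """Magnitude-squared FRF shape of a single mode with natural frequency fn
    and damping ratio zeta."""
    return a / ((fn**2 - f**2)**2 + (2.0 * zeta * fn * f)**2)

def fit_side_spectrum(f, psd, x0):
    """Least-squares fit of (fn, zeta, log-amplitude) to one side of a modal
    PSD peak, in log space to balance the dynamic range."""
    def cost(x):
        fn, zeta, loga = x
        return np.sum((np.log(psd) - np.log(sdof_psd(f, fn, zeta, np.exp(loga))))**2)
    return minimize(cost, x0, method="Nelder-Mead").x

# Synthetic left-side spectrum of a 2 Hz mode with 1.5% damping
f = np.linspace(1.0, 2.0, 200)
psd = sdof_psd(f, 2.0, 0.015, 1.0) * np.exp(0.05 * np.random.default_rng(2).normal(size=f.size))
print(fit_side_spectrum(f, psd, x0=[1.9, 0.05, 0.0]))  # ~ [2.0, 0.015, 0.0]
```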

  18. Comparing epidemiologically estimated treatment need with treatment provided in two dental schemes in Ireland

    PubMed Central

    2012-01-01

    Background: Valid estimation of dental treatment need at population level is important for service planning. In many instances, planning is informed by survey data, which provide epidemiologically estimated need from the dental fieldworkers' perspective. The aim of this paper is to determine the validity of this type of information for planning. A comparison of normative (epidemiologically estimated) need for selected treatments, as measured on a randomly selected representative sample, is made with treatment actually provided in the population from which the sample was drawn. Methods: This paper compares dental treatment need estimates from a national survey with treatment provided within two choice-of-dentist schemes: Scheme 1, a co-payment scheme for employed adults, and Scheme 2, a 'free' service for less-well-off adults. Epidemiologically estimated need for extractions, restorations, advanced restorations and denture treatments was recorded for a nationally representative sample in 2000/02. Treatments provided to employed and less-well-off adults were retrieved from the claims databases of both schemes. We used the chi-square test to compare proportions, and Student's t-test to compare means between the survey and claims databases. Results: Among employed adults, the proportion of 35-44-year-olds whose teeth had restorations was greater than the need estimated in the survey (55.7% vs. 36.7%; p < 0.0001). The mean number of teeth extracted was less than the need estimated among 35-44 and 65+ year-olds. Among less-well-off adults, the proportion of 16-24-year-olds who had teeth extracted was greater than the need estimated in the survey (27.4% vs. 7.9%; p < 0.0001). The mean number of restorations provided was greater than the need estimated in the survey for 16-24-year-olds (3.0 vs. 0.9; p < 0.0001) and 35-44-year-olds (2.7 vs. 1.4; p < 0.01). Conclusions: Significant differences were found between epidemiologically estimated need and treatment provided for selected treatments, which may be accounted for by measurement differences. The gap between epidemiologically estimated need and treatment provided appears greatest for less-well-off adults. PMID:22898307

  19. Box-Counting Dimension Revisited: Presenting an Efficient Method of Minimizing Quantization Error and an Assessment of the Self-Similarity of Structural Root Systems

    PubMed Central

    Bouda, Martin; Caplan, Joshua S.; Saiers, James E.

    2016-01-01

    Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
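
    A minimal 2D sketch of box-counting with offset optimization: the quantization error is reduced by taking the minimum box count over candidate grid translations (a cheap random-offset stand-in for the authors' pattern search), and the FD estimate is the log-log slope. All data below are synthetic.

```python
import numpy as np

def box_count(points, size, offset):
    """Number of grid boxes of side `size` (origin shifted by `offset`)
    that contain at least one point."""
    idx = np.floor((points - offset) / size).astype(int)
    return len({tuple(i) for i in idx})

def min_box_count(points, size, n_offsets=16, rng=None):
    """Reduce quantization error by taking the minimum count over candidate
    grid offsets, a cheap stand-in for the authors' pattern search."""
    rng = rng or np.random.default_rng(0)
    offsets = rng.uniform(0.0, size, size=(n_offsets, points.shape[1]))
    return min(box_count(points, size, o) for o in offsets)

# Slope of log(count) vs log(1/size) approximates the box-counting dimension
pts = np.random.default_rng(3).uniform(size=(4000, 2))  # filled unit square, FD ~ 2
sizes = np.array([0.2, 0.1, 0.05, 0.025])
counts = [min_box_count(pts, s) for s in sizes]
slope = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
print(slope)  # close to 2 for a space-filling point set
```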

  20. Determining the in-hospital cost of bleeding in patients undergoing percutaneous coronary intervention.

    PubMed

    Ewen, Edward F; Zhao, Liping; Kolm, Paul; Jurkovitz, Claudine; Fidan, Dogan; White, Harvey D; Gallo, Richard; Weintraub, William S

    2009-06-01

    The economic impact of bleeding in the setting of nonemergent percutaneous coronary intervention (PCI) is poorly understood and complicated by the variety of bleeding definitions currently employed. This retrospective analysis examines and contrasts the in-hospital cost of bleeding associated with this procedure using six bleeding definitions employed in recent clinical trials. All nonemergent PCI cases at Christiana Care Health System not requiring a subsequent coronary artery bypass were identified between January 2003 and March 2006. Bleeding events were identified by chart review, registry, laboratory, and administrative data. A microcosting strategy was applied utilizing hospital charges converted to costs using departmental level direct cost-to-charge ratios. The independent contributions of bleeding, both major and minor, to cost were determined by multiple regression. Bootstrap methods were employed to obtain estimates of regression parameters and their standard errors. A total of 6,008 cases were evaluated. By GUSTO definitions there were 65 (1.1%) severe, 52 (0.9%) moderate, and 321 (5.3%) mild bleeding episodes with estimated bleeding costs of $14,006; $6,980; and $4,037, respectively. When applying TIMI definitions there were 91 (1.5%) major and 178 (3.0%) minor bleeding episodes with estimated costs of $8,794 and $4,310, respectively. In general, the four additional trial-specific definitions identified more bleeding events, provided lower estimates of major bleeding cost, and similar estimates of minor bleeding costs. Bleeding is associated with considerable cost over and above interventional procedures; however, the choice of bleeding definition impacts significantly on both the incidence and economic consequences of these events.

  1. A Bayesian approach to estimate evoked potentials.

    PubMed

    Sparacino, Giovanni; Milani, Stefano; Arslan, Edoardo; Cobelli, Claudio

    2002-06-01

    Several approaches, based on different assumptions and with various degrees of theoretical sophistication and implementation complexity, have been developed for improving the measurement of evoked potentials (EP) performed by conventional averaging (CA). In many of these methods, one of the major challenges is the exploitation of a priori knowledge. In this paper, we present a new method in which the second-order statistical information on the background EEG and on the unknown EP, necessary for the optimal filtering of each sweep in a Bayesian estimation framework, is, respectively, estimated from pre-stimulus data and obtained through a multiple integration of a white noise process model. The latter model is flexible (i.e. it can be employed for a large class of EP) and simple enough to be easily identifiable from the post-stimulus data thanks to a smoothing criterion. The mean EP is determined as the weighted average of the filtered sweeps, where each weight is inversely proportional to the expected value of the norm of the corresponding filter error, a quantity determinable thanks to the Bayesian approach. The performance of the new approach is shown on both simulated and real auditory EP. A signal-to-noise ratio enhancement is obtained that can allow the (possibly automatic) identification of peak latencies and amplitudes with fewer sweeps than required by CA. For cochlear EP, the method also allows the audiology investigator to gather new and clinically important information. The possibility of handling single-sweep analysis with further development of the method is also addressed.

  2. Theoretical basis to measure the impact of short-lasting control of an infectious disease on the epidemic peak

    PubMed Central

    2011-01-01

    Background: While many pandemic preparedness plans have promoted disease control efforts to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be sought. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods: We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for the magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) from Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results: Our model estimates that the epidemic peak of the 2009 pandemic was delayed by 21 days due to the summer holiday. The decline in the peak appears to be a nonlinear function of the control-associated reduction in the reproduction number. The peak delay is shown to depend critically on the fraction of initially immune individuals. Conclusions: The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate the control effort required to lower and delay an epidemic peak. Analytical findings support a critical need to conduct population-wide serological surveys as a prior requirement for estimating the time of the peak. PMID:21269441

  3. Deaths and tumours among workers grinding stainless steel: a follow up.

    PubMed Central

    Jakobsson, K; Mikoczy, Z; Skerfving, S

    1997-01-01

    OBJECTIVE: To study cause specific mortality and cancer morbidity in workers exposed to the dust of grinding materials, grinding agents, and stainless steel, especially with regard to a possibly increased risk of respiratory, stomach, and colorectal cancer. METHODS: Retrospective cohort study, using reference cohorts of blue collar workers and population rates for comparison. The exposed cohort comprises workers with at least 12 months employment time at two plants, producing stainless steel sinks and saucepans (n = 727). Also, reference cohorts of other industrial workers (n = 3965) and fishermen (n = 8092) were analysed. The observation period began 15 years after the start of employment. Standardised mortality or incidence ratios (SMRs, SIRs; county reference rates) were calculated for cause-specific mortality between 1952 and 1993, and for cancer morbidity between 1958 and 1992. RESULTS: In the exposed cohort, overall mortality, cardiovascular mortality, and all malignant mortality and morbidity were slightly lower than expected. Also, the risk estimates for cancer in the upper and lower respiratory tracts and for stomach cancer were lower than expected. There was an increase in morbidity from colon cancer, which was explained by an excess of tumours in the sigmoid part only. Here, the risk estimates were higher in workers with long employment time (1-14 years: four observed cases, SIR 1.7, 95% confidence interval (95% CI) 0.4 to 4.5; > or = 15 years: three observed cases, SIR 4.3, 95% CI 0.9 to 13) and the increased risk was especially pronounced among those first employed before 1942. A slight nominal excess of rectal cancers (nine observed cases, SIR 1.4, 95% CI 0.6 to 2.6), and a significant excess of prostate cancer morbidity (36 observed cases, SIR 1.7, 95% CI 1.2 to 2.4) were found. These risk estimates did not, however, increase with employment time. CONCLUSIONS: The finding of an increased risk of cancer in the sigmoid part of the colon, which was not found in the reference cohorts, and with indication of a relation between duration of employment and response, is consistent with a causal relation. The limited size of the exposed cohort makes a detailed exposure-response analysis unstable, and the confidence limits are wide. Albeit slightly raised, the risk estimate for rectal cancer in the exposed cohort was not different from the estimate among the other industrial workers. PMID:9538356

  4. The burden of cancer attributable to modifiable risk factors: the Australian cancer-PAF cohort consortium.

    PubMed

    Arriaga, Maria E; Vajdic, Claire M; Canfell, Karen; MacInnis, Robert; Hull, Peter; Magliano, Dianna J; Banks, Emily; Giles, Graham G; Cumming, Robert G; Byles, Julie E; Taylor, Anne W; Shaw, Jonathan E; Price, Kay; Hirani, Vasant; Mitchell, Paul; Adelstein, Barbara-Ann; Laaksonen, Maarit A

    2017-06-14

    To estimate the Australian cancer burden attributable to lifestyle-related risk factors and their combinations using a novel population attributable fraction (PAF) method that accounts for competing risk of death, risk factor interdependence and statistical uncertainty. 365 173 adults from seven Australian cohort studies. We linked pooled harmonised individual participant cohort data with population-based cancer and death registries to estimate exposure-cancer and exposure-death associations. Current Australian exposure prevalence was estimated from representative external sources. To illustrate the utility of the new PAF method, we calculated fractions of cancers causally related to body fatness or both tobacco and alcohol consumption avoidable in the next 10 years by risk factor modifications, comparing them with fractions produced by traditional PAF methods. Over 10 years of follow-up, we observed 27 483 incident cancers and 22 078 deaths. Of cancers related to body fatness (n=9258), 13% (95% CI 11% to 16%) could be avoided if those currently overweight or obese had body mass index of 18.5-24.9 kg/m². Of cancers causally related to both tobacco and alcohol (n=4283), current or former smoking explains 13% (11% to 16%) and consuming more than two alcoholic drinks per day explains 6% (5% to 8%). The two factors combined explain 16% (13% to 19%): 26% (21% to 30%) in men and 8% (4% to 11%) in women. Corresponding estimates using the traditional PAF method were 20%, 31% and 10%. Our PAF estimates translate to 74 000 avoidable body fatness-related cancers and 40 000 avoidable tobacco- and alcohol-related cancers in Australia over the next 10 years (2017-2026). Traditional PAF methods not accounting for competing risk of death and interdependence of risk factors may overestimate PAFs and avoidable cancers. We will rank the most important causal factors and their combinations for a spectrum of cancers and inform cancer control activities. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
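
    For contrast with the cohort-based method, the traditional (Levin) PAF that the authors compare against can be written in one line; the prevalence and relative risk below are hypothetical, not the study's estimates.

```python
def levin_paf(prevalence, rr):
    """Traditional (Levin) population attributable fraction:
    PAF = p*(RR - 1) / (1 + p*(RR - 1)). This form ignores competing risk of
    death and risk-factor interdependence, which the cohort-based method
    described above addresses."""
    x = prevalence * (rr - 1.0)
    return x / (1.0 + x)

# Illustrative numbers only: 30% exposed, relative risk 1.8
print(f"{levin_paf(0.30, 1.8):.1%}")  # ~19.4%
```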

  5. Direct Parametric Image Reconstruction in Reduced Parameter Space for Rapid Multi-Tracer PET Imaging.

    PubMed

    Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu

    2015-02-12

    The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to the kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.

  6. Accurate determination of electronic transport properties of silicon wafers by nonlinear photocarrier radiometry with multiple pump beam sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qian; University of the Chinese Academy of Sciences, Beijing 100039; Li, Bincheng, E-mail: bcli@uestc.ac.cn

    2015-12-07

    In this paper, the photocarrier radiometry (PCR) technique with multiple pump beam sizes is employed to determine simultaneously the electronic transport parameters (the carrier lifetime, the carrier diffusion coefficient, and the front surface recombination velocity) of silicon wafers. By employing multiple pump beam sizes, the influence of the instrumental frequency response on the multi-parameter estimation is totally eliminated. A nonlinear PCR model is developed to interpret the PCR signal. Theoretical simulations are performed to investigate the uncertainties of the estimated parameter values by studying the dependence of a mean square variance on the corresponding transport parameters, and the results are compared to those obtained by the conventional frequency-scan method, in which only the frequency dependences of the PCR amplitude and phase are recorded at a single pump beam size. Simulation results show that the proposed multiple-pump-beam-size method can significantly improve the accuracy of the determination of the electronic transport parameters. Comparative experiments on a p-type silicon wafer with resistivity 0.1–0.2 Ω·cm are performed, and the electronic transport properties are determined simultaneously. The estimated uncertainties of the carrier lifetime, diffusion coefficient, and front surface recombination velocity are approximately ±10.7%, ±8.6%, and ±35.4% with the proposed multiple-pump-beam-size method, a marked improvement over ±15.9%, ±29.1%, and >±50% with the conventional frequency-scan method. The transport parameters determined by the proposed multiple-pump-beam-size PCR method are in good agreement with those obtained by a steady-state PCR imaging technique.

  7. A comparison of two methods for quantifying soil organic carbon of alpine grasslands on the Tibetan Plateau.

    PubMed

    Chen, Litong; Flynn, Dan F B; Jing, Xin; Kühn, Peter; Scholten, Thomas; He, Jin-Sheng

    2015-01-01

    As CO2 concentrations continue to rise and drive global climate change, much effort has been put into estimating soil carbon (C) stocks and their dynamics over time. However, the inconsistent methods employed by researchers hamper the comparability of such works, creating a pressing need to standardize soil organic C (SOC) quantification across the various methods. Here, we collected 712 soil samples from 36 sites of alpine grasslands on the Tibetan Plateau, covering different soil depths and vegetation and soil types. We used an elemental analyzer for soil total C (STC) and an inorganic carbon analyzer for soil inorganic C (SIC), and defined the difference between STC and SIC as SOC_CNS. In addition, we employed the modified Walkley-Black (MWB) method, hereafter SOC_MWB. Our results showed a strong correlation between SOC_CNS and SOC_MWB across the data set, given the application of a correction factor of 1.103. Soil depth and soil type significantly influenced the recovery, defined as the ratio of SOC_MWB to SOC_CNS, and the recovery was closely associated with soil carbonate content and pH as well. The differences in recovery between alpine meadow and steppe were largely driven by soil pH. In addition, a relatively strong correlation between SOC_CNS and STC was also found, suggesting that it is feasible to estimate SOC_CNS stocks from STC data across the Tibetan grasslands. Therefore, our results suggest that, in order to accurately estimate absolute SOC stocks and their change in the Tibetan alpine grasslands, adequate correction of the modified WB measurements is essential, with proper consideration of the effects of soil type, vegetation, soil pH and soil depth.

  8. A Comparison of Two Methods for Quantifying Soil Organic Carbon of Alpine Grasslands on the Tibetan Plateau

    PubMed Central

    Chen, Litong; Flynn, Dan F. B.; Jing, Xin; Kühn, Peter; Scholten, Thomas; He, Jin-Sheng

    2015-01-01

    As CO2 concentrations continue to rise and drive global climate change, much effort has been put into estimating soil carbon (C) stocks and their dynamics over time. However, the inconsistent methods employed by researchers hamper the comparability of such works, creating a pressing need to standardize soil organic C (SOC) quantification across the various methods. Here, we collected 712 soil samples from 36 sites of alpine grasslands on the Tibetan Plateau, covering different soil depths and vegetation and soil types. We used an elemental analyzer for soil total C (STC) and an inorganic carbon analyzer for soil inorganic C (SIC), and defined the difference between STC and SIC as SOC_CNS. In addition, we employed the modified Walkley-Black (MWB) method, hereafter SOC_MWB. Our results showed a strong correlation between SOC_CNS and SOC_MWB across the data set, given the application of a correction factor of 1.103. Soil depth and soil type significantly influenced the recovery, defined as the ratio of SOC_MWB to SOC_CNS, and the recovery was closely associated with soil carbonate content and pH as well. The differences in recovery between alpine meadow and steppe were largely driven by soil pH. In addition, a relatively strong correlation between SOC_CNS and STC was also found, suggesting that it is feasible to estimate SOC_CNS stocks from STC data across the Tibetan grasslands. Therefore, our results suggest that, in order to accurately estimate absolute SOC stocks and their change in the Tibetan alpine grasslands, adequate correction of the modified WB measurements is essential, with proper consideration of the effects of soil type, vegetation, soil pH and soil depth. PMID:25946085

  9. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm

    PubMed Central

    Cuevas, Erik; Díaz, Margarita

    2015-01-01

    In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely random as it is the case of RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228
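
    As a hedged toy illustration of the sampling idea (on a line-fitting problem rather than homography estimation), the sketch below biases RANSAC's minimal-sample draws toward points that appeared in high-inlier models, loosely mimicking a harmony memory; all names and numbers are illustrative.

```python
import numpy as np

def hs_ransac_line(pts, iters=200, tol=0.05, rng=None):
    """Toy RANSAC line fit with harmony-search-flavored sampling: points that
    appeared in high-inlier models are preferred when drawing new minimal
    samples, instead of RANSAC's purely uniform draws."""
    rng = rng or np.random.default_rng(0)
    n = len(pts)
    score = np.ones(n)                       # per-point 'harmony memory'
    best, best_inl = None, -1
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False, p=score / score.sum())
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(x2 - x1) < 1e-12:
            continue
        a = (y2 - y1) / (x2 - x1)            # candidate line y = a*x + b
        b = y1 - a * x1
        inliers = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol
        if inliers.sum() > best_inl:
            best, best_inl = (a, b), inliers.sum()
        score[inliers] += inliers.sum() / n  # reward points in good models
    return best, best_inl

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 100)
pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.01, 100)])
pts[:30] = rng.uniform(0, 3, (30, 2))        # 30% gross outliers
print(hs_ransac_line(pts))                   # ~(2.0, 1.0) with ~70 inliers
```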

  10. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    PubMed Central

    Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki

    2015-01-01

    This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of space-variant point-spread-function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented in the form of a combination of multiple FIR filters, which guarantees the fast image restoration without the need of iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in the sense of both objective and subjective performance measures. The proposed algorithm can be employed to a wide area of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760

  11. Extension of biomass estimates to pre-assessment periods using density dependent surplus production approach.

    PubMed

    Horbowy, Jan; Tomczak, Maciej T

    2017-01-01

    Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for the periods before surveys were conducted. Methods for the backward extension of analytical biomass assessments to years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics are governed by classical stock-production models. The other approach uses a modified form of the Schaefer production model that allows backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (when analytical biomass estimates become available) back to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
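
    A minimal sketch of the constant-SPR backward extension used for comparison: from the surplus-production identity B[t+1] = B[t]·(1 + SPR) − C[t], the pre-assessment biomass follows by solving backward, B[t] = (B[t+1] + C[t]) / (1 + SPR). The figures below are hypothetical.

```python
def backcast_biomass(b_first_assessed, catches, spr):
    """Backward extension of biomass using a constant surplus production rate.

    From B[t+1] = B[t]*(1 + SPR) - C[t], solve backward:
    B[t] = (B[t+1] + C[t]) / (1 + SPR).
    `catches` lists total catch in the pre-assessment years, latest year last."""
    b = [b_first_assessed]
    for c in reversed(catches):
        b.append((b[-1] + c) / (1.0 + spr))
    return list(reversed(b))  # earliest year first; last element is b_first_assessed

# Hypothetical: assessment starts at 500 kt; three earlier years of catches
print(backcast_biomass(500.0, [40.0, 55.0, 60.0], spr=0.25))
```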

  12. Extension of biomass estimates to pre-assessment periods using density dependent surplus production approach

    PubMed Central

    Horbowy, Jan

    2017-01-01

    Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for the periods before surveys were conducted. Methods for the backward extension of analytical biomass assessments to years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics are governed by classical stock-production models. The other approach uses a modified form of the Schaefer production model that allows backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (when analytical biomass estimates become available) back to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low. PMID:29131850

  13. A parameter estimation algorithm for LFM/BPSK hybrid modulated signal intercepted by Nyquist folding receiver

    NASA Astrophysics Data System (ADS)

    Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin

    2016-12-01

    The Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture that can realize wideband receiving with a small amount of equipment. The linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of low-probability-of-interception signal with wide bandwidth. The NYFR is an effective architecture for intercepting the LFM/BPSK signal, and a signal intercepted by the NYFR acquires an additional local oscillator modulation. A parameter estimation algorithm for the NYFR output signal is proposed. Based on the NYFR prior information, the chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the output self-characteristic, a matching component function is designed to estimate the Nyquist zone (NZ) index. Finally, matching code and a subspace method are employed to estimate the phase change points and code length. Compared with the existing methods, the proposed algorithm performs better, and it requires no multi-channel structure, which keeps the computational complexity of the NZ index estimation small. The simulation results demonstrate the efficacy of the proposed algorithm.

  14. A Differential Evolution Based Approach to Estimate the Shape and Size of Complex Shaped Anomalies Using EIT Measurements

    NASA Astrophysics Data System (ADS)

    Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn

    EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, raw EIT conductivity estimates cannot be used to correctly estimate the shape and size of complex-shaped regional anomalies; an efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms, such as the Newton-Raphson method, used for this purpose is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features for solving global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
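
    A minimal, self-contained DE/rand/1/bin sketch is given below. The EIT forward model is replaced by a toy quadratic misfit over five Fourier boundary coefficients; the population size, mutation weight, and crossover rate are typical textbook choices, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def de_minimize(cost, bounds, pop_size=30, f_w=0.7, cr=0.9, n_gen=200):
        """Basic DE/rand/1/bin; `bounds` is an array of (low, high) per parameter."""
        lo, hi = bounds[:, 0], bounds[:, 1]
        dim = len(lo)
        pop = lo + rng.random((pop_size, dim)) * (hi - lo)
        costs = np.array([cost(x) for x in pop])
        for _ in range(n_gen):
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = np.clip(a + f_w * (b - c), lo, hi)
                cross = rng.random(dim) < cr
                cross[rng.integers(dim)] = True        # at least one gene from mutant
                trial = np.where(cross, mutant, pop[i])
                tc = cost(trial)
                if tc < costs[i]:                      # greedy selection
                    pop[i], costs[i] = trial, tc
        return pop[np.argmin(costs)], costs.min()

    # Toy stand-in for the EIT boundary misfit: recover 5 Fourier coefficients.
    true_coeffs = np.array([1.0, 0.3, -0.2, 0.1, 0.05])
    cost = lambda x: np.sum((x - true_coeffs) ** 2)   # placeholder for ||V_meas - V_model||^2
    best, best_cost = de_minimize(cost, np.tile([-2.0, 2.0], (5, 1)))
    print(best, best_cost)
    ```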

  15. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for constructing helicopter source noise models, for use in mission planning tools, from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This approach identifies individual rotor harmonic noise sources and characterizes each in terms of its non-dimensional governing parameters. The method is applied to both wind tunnel and ground noise measurements of two-bladed rotors. It is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for new operating conditions from a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  16. Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind.

    PubMed

    Matthaeus, W H; Weygand, J M; Dasso, S

    2016-06-17

    Single-point measurements of turbulence cannot distinguish variations in space from variations in time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both the correlation time and the correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter, for which the frozen-in flow hypothesis may not be a useful approximation.

  17. Production of NOx by Lightning and its Effects on Atmospheric Chemistry

    NASA Technical Reports Server (NTRS)

    Pickering, Kenneth E.

    2009-01-01

    Production of NO(x) by lightning remains the NO(x) source with the greatest uncertainty. Current estimates of the global source strength range over a factor of four (from 2 to 8 TgN/year). Ongoing efforts to reduce this uncertainty through field programs, cloud-resolved modeling, global modeling, and satellite data analysis will be described in this seminar. Representation of the lightning source in global or regional chemical transport models requires three types of information: the distribution of lightning flashes as a function of time and space, the production of NO(x) per flash, and the effective vertical distribution of the lightning-injected NO(x). Methods of specifying these items in a model will be discussed. For example, the current method of specifying flash rates in NASA's Global Modeling Initiative (GMI) chemical transport model will be discussed, as well as work underway in developing algorithms for use in the regional models CMAQ and WRF-Chem. A number of methods have been employed to estimate either production per lightning flash or the production per unit flash length. Such estimates derived from cloud-resolved chemistry simulations and from satellite NO2 retrievals will be presented as well as the methodologies employed. Cloud-resolved model output has also been used in developing vertical profiles of lightning NO(x) for use in global models. Effects of lightning NO(x) on O3 and HO(x) distributions will be illustrated regionally and globally.

  18. The Expense Tied to Secondary Course Failure: The Case of Ontario

    ERIC Educational Resources Information Center

    Faubert, Brenton

    2016-01-01

    This article describes a study that examined the volume of secondary course failure and its direct budget impact on Ontario's K-12 public education system. The study employed a straightforward, descriptive accounting method to estimate the annual expenditure tied to secondary course failure, taking into account some factors known to be…

  19. A Careful Look at Modern Case Selection Methods

    ERIC Educational Resources Information Center

    Herron, Michael C.; Quinn, Kevin M.

    2016-01-01

    Case studies appear prominently in political science, sociology, and other social science fields. A scholar employing a case study research design in an effort to estimate causal effects must confront the question, how should cases be selected for analysis? This question is important because the results derived from a case study research program…

  20. Restoration and economics: A union waiting to happen?

    Treesearch

    Alicia S.T. Robbins; Jean M. Daniels

    2012-01-01

    In this article, our objective is to introduce economics as a tool for the planning, prioritization, and evaluation of restoration projects. Studies that develop economic estimates of public values for ecological restoration employ methods that may be unfamiliar to practitioners. We hope to address this knowledge gap by describing economic concepts in the context of...

  1. STATISTICAL ESTIMATES OF VARIANCE FOR 15N ISOTOPE DILUTION MEASUREMENTS OF GROSS RATES OF NITROGEN CYCLE PROCESSES

    EPA Science Inventory

    It has been fifty years since Kirkham and Bartholomew (1954) presented the conceptual framework and derived the mathematical equations that formed the basis of the now commonly employed method of 15N isotope dilution. Although many advances in methodology and analysis have been ma...

  2. Evaluation of hyperspectral reflectance for estimating dry matter and sugar concentration in processing potatoes

    USDA-ARS?s Scientific Manuscript database

    The measurement of sugar concentration and dry matter in processing potatoes is a time and resource intensive activity, cannot be performed in the field, and does not easily measure within tuber variation. A proposed method to improve the phenotyping of processing potatoes is to employ hyperspectral...

  3. Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2010-01-01

    A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…

  4. Legal Status and Wage Disparities for Mexican Immigrants

    ERIC Educational Resources Information Center

    Hall, Matthew; Greenman, Emily; Farkas, George

    2010-01-01

    This article employs a unique method of inferring the legal status of Mexican immigrants in the Survey of Income and Program Participation to offer new evidence of the role of legal authorization in the United States on workers' wages. We estimate wage trajectories for four groups: documented Mexican immigrants, undocumented Mexican immigrants,…

  5. A Critical Look at Methodologies Used to Evaluate Charter School Effectiveness

    ERIC Educational Resources Information Center

    Ackerman, Matthew; Egalite, Anna J.

    2017-01-01

    There is no consensus among researchers on charter school effectiveness in the USA, in part because of discrepancies in the research methods employed across various studies. Causal impact estimates from experimental studies demonstrate large positive impacts, but concerns about the generalizability of these results have prompted the development of…

  6. Premixed Digestion Salts for Kjeldahl Determination of Total Nitrogen in Selected Forest Soils

    Treesearch

    B. G. Blackmon

    1971-01-01

    Estimates of total soil nitrogen by a standard Kjeldahl procedure and a modified procedure employing packets of premixed digestion salts were closely correlated (r² = 0.983). The modified procedure appears to be as reliable as the standard method for determining total nitrogen in southern alluvial forest soils.

  7. Evaluation of three aging techniques and back-calculated growth for introduced Blue Catfish from Lake Oconee, Georgia

    USGS Publications Warehouse

    Homer, Michael D.; Peterson, James T.; Jennings, Cecil A.

    2015-01-01

    Back-calculation of length-at-age from otoliths and spines is a common technique employed in fisheries biology, but few studies have compared the precision of data collected with this method for catfish populations. We compared the precision of back-calculated lengths-at-age for an introduced Ictalurus furcatus (Blue Catfish) population among 3 commonly used cross-sectioning techniques. We used gillnets to collect Blue Catfish (n = 153) from Lake Oconee, GA. We estimated ages from a basal recess, articulating process, and otolith cross-section from each fish. We employed the Fraser-Lee method to back-calculate length-at-age for each fish and compared the precision of back-calculated lengths among techniques using hierarchical linear models. Precision in age assignments was highest for otoliths (83.5%) and lowest for basal recesses (71.4%). Back-calculated lengths were variable among fish ages 1–3 for the techniques compared; otoliths and basal recesses yielded variable lengths at age 8. We concluded that otoliths and articulating processes are adequate for age estimation of Blue Catfish.

  8. Quantification and Compensation of Eddy-Current-Induced Magnetic Field Gradients

    PubMed Central

    Spees, William M.; Buhl, Niels; Sun, Peng; Ackerman, Joseph J.H.; Neil, Jeffrey J.; Garbow, Joel R.

    2011-01-01

    Two robust techniques for quantification and compensation of eddy-current-induced magnetic-field gradients and static magnetic-field shifts (ΔB0) in MRI systems are described. Purpose-built 1-D or 6-point phantoms are employed. Both procedures involve measuring the effects of a prior magnetic-field-gradient test pulse on the phantom’s free induction decay (FID). Phantom-specific analysis of the resulting FID data produces estimates of the time-dependent, eddy-current-induced magnetic field gradient(s) and ΔB0 shift. Using Bayesian methods, the time dependencies of the eddy-current-induced decays are modeled as sums of exponentially decaying components, each defined by an amplitude and time constant. These amplitudes and time constants are employed to adjust the scanner’s gradient pre-emphasis unit and eliminate undesirable eddy-current effects. Measurement with the six-point sample phantom allows for simultaneous, direct estimation of both on-axis and cross-term eddy-current-induced gradients. The two methods are demonstrated and validated on several MRI systems with actively-shielded gradient coil sets. PMID:21764614

  9. Quantification and compensation of eddy-current-induced magnetic-field gradients.

    PubMed

    Spees, William M; Buhl, Niels; Sun, Peng; Ackerman, Joseph J H; Neil, Jeffrey J; Garbow, Joel R

    2011-09-01

    Two robust techniques for quantification and compensation of eddy-current-induced magnetic-field gradients and static magnetic-field shifts (ΔB0) in MRI systems are described. Purpose-built 1-D or six-point phantoms are employed. Both procedures involve measuring the effects of a prior magnetic-field-gradient test pulse on the phantom's free induction decay (FID). Phantom-specific analysis of the resulting FID data produces estimates of the time-dependent, eddy-current-induced magnetic field gradient(s) and ΔB0 shift. Using Bayesian methods, the time dependencies of the eddy-current-induced decays are modeled as sums of exponentially decaying components, each defined by an amplitude and time constant. These amplitudes and time constants are employed to adjust the scanner's gradient pre-emphasis unit and eliminate undesirable eddy-current effects. Measurement with the six-point sample phantom allows for simultaneous, direct estimation of both on-axis and cross-term eddy-current-induced gradients. The two methods are demonstrated and validated on several MRI systems with actively-shielded gradient coil sets. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. A novel approach of battery pack state of health estimation using artificial intelligence optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Wang, Yujie; Liu, Chang; Chen, Zonghai

    2018-02-01

    An accurate battery pack state of health (SOH) estimation is important for characterizing the dynamic responses of a battery pack and ensuring that the battery operates safely and reliably. However, differences in discharge/charge characteristics and working conditions among the cells of a battery pack make battery pack SOH estimation difficult. In this paper, the battery pack SOH is defined as the change in the battery pack's maximum energy storage. It incorporates all the cells' information, including battery capacity, the relationship between state of charge (SOC) and open circuit voltage (OCV), and battery inconsistency. To predict the battery pack SOH, a particle swarm optimization-genetic algorithm is applied to battery pack model parameter identification. Based on the results, a particle filter is employed in battery SOC and OCV estimation to avoid the noise influence occurring in battery terminal voltage measurement and current drift. Moreover, a recursive least squares method is used to update the cells' capacities. Finally, the proposed method is verified with profiles from the New European Driving Cycle and dynamic test profiles. The experimental results indicate that the proposed method can estimate the battery states with high accuracy in actual operation. In addition, the factors affecting the change of SOH are analyzed.
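
    The recursive least squares capacity update can be sketched as a scalar RLS with a forgetting factor, under the assumption that the charge passed over a window equals the capacity times the SOC change over that window. The numbers below are synthetic; the paper's model structure and tuning are not reproduced here.

    ```python
    import numpy as np

    def rls_capacity(delta_soc, coulombs, q0=2.0, p0=10.0, lam=0.98):
        """Scalar RLS with forgetting factor `lam`.
        Model: coulombs[k] = Q * delta_soc[k]; estimates capacity Q in Ah."""
        q_hat, p = q0, p0
        history = []
        for phi, y in zip(delta_soc, coulombs):
            k = p * phi / (lam + phi * p * phi)     # gain
            q_hat = q_hat + k * (y - phi * q_hat)   # update with prediction error
            p = (p - k * phi * p) / lam             # covariance update
            history.append(q_hat)
        return np.array(history)

    # Synthetic fade: true capacity drifts from 2.5 Ah down to 2.3 Ah.
    rng = np.random.default_rng(1)
    true_q = np.linspace(2.5, 2.3, 200)
    dsoc = rng.uniform(0.05, 0.2, 200)              # per-window SOC change
    q_meas = true_q * dsoc + rng.normal(0, 0.005, 200)
    print(rls_capacity(dsoc, q_meas)[-1])           # should approach ~2.3
    ```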

  11. Improving Hip-Worn Accelerometer Estimates of Sitting Using Machine Learning Methods.

    PubMed

    Kerr, Jacqueline; Carlson, Jordan; Godbole, Suneeta; Cadmus-Bertram, Lisa; Bellettiere, John; Hartman, Sheri

    2018-02-13

    To improve estimates of sitting time from hip-worn accelerometers used in large cohort studies by employing machine learning methods developed on free-living activPAL data. Thirty breast cancer survivors concurrently wore a hip-worn accelerometer and a thigh-worn activPAL for 7 days. A random forest classifier, trained on the activPAL data, was employed to detect sitting, standing, and sit-stand transitions in 5-second windows of the hip-worn accelerometer data. The classifier estimates were compared to the standard accelerometer cut point, and significant differences across different bout lengths were investigated using mixed effect models. Overall, the algorithm predicted the postures with moderate accuracy (stepping 77%, standing 63%, sitting 67%, sit-to-stand 52%, and stand-to-sit 51%). Daily-level analyses indicated that errors in transition estimates occurred only during sitting bouts of 2 minutes or less. The standard cut point was significantly different from the activPAL across all bout lengths, overestimating short bouts and underestimating long bouts. This is among the first algorithms for sitting and standing for hip-worn accelerometer data to be trained entirely on free-living activPAL data. The new algorithm detected prolonged sitting, which has been shown to be most detrimental to health. Further validation and training in larger cohorts is warranted. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
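
    A hedged sketch of the classification stage is shown below using scikit-learn's RandomForestClassifier on per-window features. The features and labels are synthetic stand-ins; the study's actual activPAL-derived training data and feature engineering are not specified here.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Illustrative 5-second-window features (e.g. means/variances of three
    # accelerometer axes); the study's actual feature set is not given here.
    n = 2000
    X = rng.normal(size=(n, 6))
    y = rng.choice(["sit", "stand", "step"], size=n)          # placeholder labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))        # ~chance on random labels
    ```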

  12. Price, tax and tobacco product substitution in Zambia.

    PubMed

    Stoklosa, Michal; Goma, Fastone; Nargis, Nigar; Drope, Jeffrey; Chelwa, Grieve; Chisha, Zunda; Fong, Geoffrey T

    2018-03-24

    In Zambia, the number of cigarette users is growing, and the lack of strong tax policies is likely an important cause. When adjusted for inflation, levels of tobacco tax have not changed since 2007. Moreover, roll-your-own (RYO) tobacco, a less-costly alternative to factory-made (FM) cigarettes, is highly prevalent. We modelled the probability of FM and RYO cigarette smoking using individual-level data obtained from the 2012 and 2014 waves of the International Tobacco Control (ITC) Zambia Survey. We used two estimation methods: the standard estimation method involving separate random effects probit models and a method involving a system of equations (incorporating bivariate seemingly unrelated random effects probit) to estimate price elasticities of FM and RYO cigarettes and their cross-price elasticities. The estimated price elasticities of smoking prevalence are -0.20 and -0.03 for FM and RYO cigarettes, respectively. FM and RYO are substitutes; that is, when the price of one of the products goes up, some smokers switch to the other product. The effects are stronger for substitution from FM to RYO than vice versa. This study affirms that increasing cigarette tax with corresponding price increases could significantly reduce cigarette use in Zambia. Furthermore, reducing between-product price differences would reduce substitution from FM to RYO. Since RYO use is associated with lower socioeconomic status, efforts to decrease RYO use, including through tax/price approaches and cessation assistance, would decrease health inequalities in Zambian society and reduce the negative economic consequences of tobacco use experienced by the poor. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. How low can you go? The impact of reduced benefits and increased cost sharing.

    PubMed

    Lee, Jason S; Tollen, Laura

    2002-01-01

    Amid escalating health care costs and a managed care backlash, employers are considering traditional cost control methods from the pre-managed care era. We use an actuarial model to estimate the premium-reducing effects of two such methods: increasing employee cost sharing and reducing benefits. Starting from a baseline plan with rich benefits and low cost sharing, estimated premium savings as a result of eliminating five specific benefits were about 22 percent. The same level of savings was also achieved by increasing cost sharing from a 15 dollars copayment with no deductible to 20 percent coinsurance and a 250 dollars deductible. Further increases in cost sharing produced estimated savings of up to 50 percent. We discuss possible market- and individual-level effects of the proliferation of plans with high cost sharing and low benefits.

  14. The effect of different methods to compute N on estimates of mixing in stratified flows

    NASA Astrophysics Data System (ADS)

    Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey

    2017-11-01

    The background stratification is typically well defined in idealized numerical models of stratified flows, but it is more difficult to define in observations. This has important ramifications for estimates of mixing, which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates depending on how the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N: a three-dimensionally re-sorted density field (often used in numerical models) and a locally re-sorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and on the turbulence activity number Gi, which leads to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
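
    The field-style calculation can be illustrated in a few lines: re-sort a single vertical density profile into a statically stable background and differentiate it to obtain N² = -(g/ρ₀) dρ/dz. This is a sketch only, with made-up profile values; a three-dimensional re-sort would instead sort every grid point in the whole domain before forming the background profile.

    ```python
    import numpy as np

    g, rho0 = 9.81, 1025.0   # gravity (m/s^2) and a reference density (kg/m^3)

    def n_squared(rho_sorted, z):
        """N^2 = -(g/rho0) * d(rho_sorted)/dz, with z positive upward."""
        return -(g / rho0) * np.gradient(rho_sorted, z)

    z = np.linspace(-50.0, 0.0, 101)                          # vertical coordinate (m)
    rho_profile = 1026.0 - 0.02 * z + 0.1 * np.sin(z / 3.0)   # toy instantaneous profile

    # "Field" style: re-sort the local vertical profile into a stable background
    # (densest water at the bottom, i.e. at the most negative z).
    rho_local = np.sort(rho_profile)[::-1]
    print(n_squared(rho_local, z).mean())
    ```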

  15. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
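
    Of the listed corrections, regression calibration is the simplest to illustrate: fit E[x|w] in a validation subsample where both the true and error-prone exposures are known, then substitute the calibrated exposure into the health model. The data below are synthetic and the model is a plain linear regression, chosen only to show the attenuation and its correction.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, n_val = 5000, 500

    # Synthetic truth: health outcome depends on true exposure x with slope 0.3.
    x = rng.normal(10.0, 2.0, n)              # true exposure (unobserved in the main study)
    w = x + rng.normal(0.0, 2.0, n)           # error-prone modeled exposure
    y = 0.3 * x + rng.normal(0.0, 1.0, n)     # health outcome

    # Naive estimate is attenuated toward zero.
    beta_naive = np.polyfit(w, y, 1)[0]

    # Regression calibration: fit E[x | w] in a validation subsample, then
    # replace w by the calibrated exposure in the health model.
    a, b = np.polyfit(w[:n_val], x[:n_val], 1)   # validation data has both x and w
    x_hat = a * w + b
    beta_rc = np.polyfit(x_hat, y, 1)[0]
    print(f"naive: {beta_naive:.3f}, calibrated: {beta_rc:.3f} (true 0.3)")
    ```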

  16. Stacked Weak Lensing Mass Calibration: Estimators, Systematics, and Impact on Cosmological Parameter Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozo, Eduardo; /U. Chicago /Chicago U., KICP; Wu, Hao-Yi

    2011-11-04

    When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.

  17. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary; otherwise, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.

  18. Limb muscle sound speed estimation by ultrasound computed tomography excluding receivers in bone shadow

    NASA Astrophysics Data System (ADS)

    Qu, Xiaolei; Azuma, Takashi; Lin, Hongxiang; Takeuchi, Hideki; Itani, Kazunori; Tamano, Satoshi; Takagi, Shu; Sakuma, Ichiro

    2017-03-01

    Sarcopenia is the degenerative loss of skeletal muscle function associated with aging. One contributing factor is an increasing adipose ratio in muscle, which can be estimated from the speed of sound (SOS), since the SOSs of muscle and adipose tissue differ by about 7%. For SOS imaging, the conventional bent-ray method iteratively finds ray paths and corrects the SOS along them using travel-times. However, the iteration is difficult to converge for soft tissue with bone inside, because of the large speed variation. In this study, the bent-ray method is modified to produce SOS images of limb muscle with bone inside. The modified method comprises three steps. First, travel-time is picked by a proposed Akaike Information Criterion (AIC) with energy term (AICE) method. The energy term is employed to detect and discard waves transmitted through bone (low-energy waves). This results in failed reconstruction for bone, but makes the iteration converge and gives correct SOS for skeletal muscle. Second, ray paths are traced using Fermat's principle. Finally, the simultaneous algebraic reconstruction technique (SART) is employed to correct the SOS along ray paths, excluding paths carrying low-energy waves that may have passed through bone. The simulation evaluation was implemented with the k-Wave toolbox using a model of an upper arm. As a result, the SOS of muscle was 1572.0 ± 7.3 m/s, close to the 1567.0 m/s in the model. For in vivo evaluation, a ring transducer prototype was employed to scan cross sections of the lower arm and leg of a healthy volunteer, yielding skeletal muscle SOSs of 1564.0 ± 14.8 m/s and 1564.1 ± 18.0 m/s, respectively.

  19. Estimation of additive forces and moments for supersonic inlets

    NASA Technical Reports Server (NTRS)

    Perkins, Stanley C., Jr.; Dillenius, Marnix F. E.

    1991-01-01

    A technique for estimating the additive forces and moments associated with supersonic, external compression inlets as a function of mass flow ratio has been developed. The technique makes use of a low order supersonic paneling method for calculating minimum additive forces at maximum mass flow conditions. A linear relationship between the minimum additive forces and the maximum values for fully blocked flow is employed to obtain the additive forces at a specified mass flow ratio. The method is applicable to two-dimensional inlets at zero or nonzero angle of attack, and to axisymmetric inlets at zero angle of attack. Comparisons with limited available additive drag data indicate fair to good agreement.

  20. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal-to-Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant principal components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889

  1. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.

  2. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Baier, W.G.

    1997-01-01

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
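
    A stripped-down EMA iteration is sketched below for a normal distribution (the actual method fits the three-parameter log-Pearson type III): moments of the below-threshold historical years are replaced by truncated-normal expected moments under the current parameter estimates, and the two steps alternate until convergence. All data values are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    def ema_normal(sys_x, hist_x, n_below, thresh, n_iter=50):
        """Expected-moments iteration sketch for a normal distribution.
        sys_x: systematic record; hist_x: known historical peaks (> thresh);
        n_below: years in the historical period with no peak above thresh."""
        mu, sd = np.mean(sys_x), np.std(sys_x, ddof=1)
        known = np.concatenate([sys_x, hist_x])
        n_tot = len(known) + n_below
        for _ in range(n_iter):
            a = (thresh - mu) / sd
            lam = norm.pdf(a) / norm.cdf(a)                # inverse Mills ratio
            e1 = mu - sd * lam                             # E[X | X < thresh]
            e2 = sd**2 * (1.0 - a * lam - lam**2) + e1**2  # E[X^2 | X < thresh]
            m1 = (known.sum() + n_below * e1) / n_tot
            m2 = (np.sum(known**2) + n_below * e2) / n_tot
            mu, sd = m1, np.sqrt(m2 - m1**2)
        return mu, sd

    # Illustrative log-flow data; the real method fits log-Pearson III instead.
    rng = np.random.default_rng(3)
    sys_x = rng.normal(3.0, 0.5, 40)      # 40 years of gaged log-flows
    hist_x = np.array([4.4, 4.6])         # two recorded historical floods
    print(ema_normal(sys_x, hist_x, n_below=98, thresh=4.3))
    ```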

  3. Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images

    NASA Astrophysics Data System (ADS)

    Dallal, Ahmed H.; Agarwal, Chirag; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory

    2017-03-01

    Cardiothoracic ratio (CTR) is a widely used radiographic index to assess heart size on chest X-rays (CXRs). Recent studies have suggested that the two-dimensional CTR might also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time consuming. This study proposes a fast algorithm to automatically estimate CTR indices from CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground truth heart markings. We used two datasets: a publicly available dataset with 247 images and a clinical dataset with 167 studies from Geisinger Health System. The models of lung fields are learned from both datasets. The lung regions in a given test image are estimated by registering the learned models to the patient's CXR. Then, the heart region is estimated by applying the Harris operator to the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices: CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs, achieving average error rates of 7.9%, 25.5%, and 26.4% (for CTR1D, CTR2D, and CTAR, respectively). The proposed method outperforms previous CTR estimation methods without using any heart templates. This method can have important clinical implications, as it provides fast and accurate estimates of cardiothoracic indices.
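
    Step 3 reduces to simple geometry once binary masks are available. The sketch below computes the classic one-dimensional CTR (maximum heart width over maximum thoracic width, here proxied by the lung-field extent) and an area ratio; the paper's exact definitions of CTR2D and CTAR may differ from the assumed ones here.

    ```python
    import numpy as np

    def cardiothoracic_indices(lung_mask, heart_mask):
        """CTR1D: max heart width over max thoracic width (classic index).
        CTAR is taken here as heart area over thoracic area -- an assumed
        definition; the paper's exact formula may differ. The thoracic
        extent is approximated by the lung-field mask."""
        def width(mask):
            cols = np.where(mask.any(axis=0))[0]
            return cols.max() - cols.min() + 1
        ctr_1d = width(heart_mask) / width(lung_mask)
        ctar = heart_mask.sum() / lung_mask.sum()
        return ctr_1d, ctar

    # Toy masks: a 200x200 "thorax" with a centered "heart".
    lungs = np.zeros((200, 200), dtype=bool); lungs[40:180, 20:180] = True
    heart = np.zeros((200, 200), dtype=bool); heart[90:170, 70:150] = True
    print(cardiothoracic_indices(lungs, heart))   # (~0.5, ~0.29)
    ```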

  4. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    NASA Astrophysics Data System (ADS)

    Cohn, T. A.; Lane, W. L.; Baier, W. G.

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.

  5. Enhancing the estimation of blood pressure using pulse arrival time and two confounding factors.

    PubMed

    Baek, Hyun Jae; Kim, Ko Keun; Kim, Jung Soo; Lee, Boreom; Park, Kwang Suk

    2010-02-01

    A new method of blood pressure (BP) estimation using multiple regression with pulse arrival time (PAT) and two confounding factors was evaluated in clinical and unconstrained monitoring situations. For the first analysis with clinical data, electrocardiogram (ECG), photoplethysmogram (PPG) and invasive BP signals were obtained by a conventional patient monitoring device during surgery. In the second analysis, ECG, PPG and non-invasive BP were measured using systems developed to obtain data under conditions in which the subject was not constrained. To enhance the performance of BP estimation methods, heart rate (HR) and arterial stiffness were considered as confounding factors in regression analysis. The PAT and HR were easily extracted from ECG and PPG signals. For arterial stiffness, the duration from the maximum derivative point to the maximum of the dicrotic notch in the PPG signal, a parameter called TDB, was employed. In two experiments that normally cause BP variation, the correlation between measured BP and the estimated BP was investigated. Multiple-regression analysis with the two confounding factors improved correlation coefficients for diastolic blood pressure and systolic blood pressure to acceptable confidence levels, compared to existing methods that consider PAT only. In addition, reproducibility for the proposed method was determined using constructed test sets. Our results demonstrate that non-invasive, non-intrusive BP estimation can be obtained using methods that can be applied in both clinical and daily healthcare situations.
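
    The regression stage can be sketched directly: stack PAT, HR, and the TDB stiffness surrogate into a design matrix and fit systolic BP by least squares. All coefficients and signal values below are fabricated for illustration; only the model form (multiple regression on PAT plus two confounders) follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 300

    # Synthetic predictors: pulse arrival time (s), heart rate (bpm), and the
    # PPG-derived stiffness surrogate TDB (s); the coefficients are made up.
    pat = rng.normal(0.25, 0.02, n)
    hr = rng.normal(70.0, 8.0, n)
    tdb = rng.normal(0.18, 0.015, n)
    sbp = 160.0 - 180.0 * pat + 0.25 * hr - 50.0 * tdb + rng.normal(0, 3, n)

    # Multiple regression SBP ~ PAT + HR + TDB, fit by least squares.
    X = np.column_stack([pat, hr, tdb, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)
    sbp_hat = X @ coef
    print("r =", np.corrcoef(sbp, sbp_hat)[0, 1])
    ```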

  6. Did Better Colleges Bring Better Jobs? Estimating the Effects of College Quality on Initial Employment for College Graduates in China

    ERIC Educational Resources Information Center

    Yu, Li

    2017-01-01

    The unemployment problem of college students in China has drawn much attention from academics and society. Using the 2011 College Student Labor Market (CSLM) survey data from Tsinghua University, this paper estimated the effects of college quality on initial employment, including employment status and employment unit ownership for fresh college…

  7. Assessing the performance of dynamical trajectory estimates

    NASA Astrophysics Data System (ADS)

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  8. Assessing the performance of dynamical trajectory estimates.

    PubMed

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  9. A meta-analysis of the worldwide prevalence of pica during pregnancy and the postpartum period.

    PubMed

    Fawcett, Emily J; Fawcett, Jonathan M; Mazmanian, Dwight

    2016-06-01

    Although pica has long been associated with pregnancy, the exact prevalence in this population remains unknown. To estimate the prevalence of pica during pregnancy and the postpartum period, and to explain variations in prevalence estimates by examining potential moderating variables. PsycARTICLES, PsycINFO, PubMed, and Google Scholar were searched from inception to February 2014 using the keywords pica, prevalence, and epidemiology. Articles estimating pica prevalence during pregnancy and/or the postpartum period using a self-report questionnaire or interview were included. Study characteristics, pica prevalence, and eight potential moderating variables were recorded (parity, anemia, duration of pregnancy, mean maternal age, education, sampling method employed, region, and publication date). Random-effects models were employed. In total, 70 studies were included, producing an aggregate prevalence estimate of 27.8% (95% confidence interval 22.8-33.3). In light of substantial heterogeneity within the study model, the primary focus was identifying moderator variables. Pica prevalence was higher in Africa compared with elsewhere in the world, increased as the prevalence of anemia increased, and decreased as educational attainment increased. Geographical region, anemia, and education were found to moderate pica prevalence, partially explaining the heterogeneity in prevalence estimates across the literature. Copyright © 2016 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
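
    A generic random-effects pooling of prevalences, in the DerSimonian-Laird style on the logit scale, can be sketched as follows. This is a plausible stand-in, not necessarily the meta-analytic model the authors used, and the study counts are invented.

    ```python
    import numpy as np

    def dl_pooled_prevalence(events, totals):
        """Random-effects (DerSimonian-Laird) pooling of prevalences on the
        logit scale; a generic sketch, not the paper's exact model."""
        p = events / totals
        y = np.log(p / (1 - p))                        # logit prevalence
        v = 1.0 / (totals * p * (1 - p))               # approximate variance of logit
        w = 1.0 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)             # heterogeneity statistic
        tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
        w_star = 1.0 / (v + tau2)                      # random-effects weights
        y_re = np.sum(w_star * y) / np.sum(w_star)
        return 1.0 / (1.0 + np.exp(-y_re))             # back-transform to a proportion

    events = np.array([30, 120, 55, 10])               # illustrative study counts
    totals = np.array([100, 400, 150, 80])
    print(dl_pooled_prevalence(events, totals))
    ```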

  10. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  11. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  12. Population assessment of tropical tuna based on their associative behavior around floating objects.

    PubMed

    Capello, M; Deneubourg, J L; Robert, M; Holland, K N; Schaefer, K M; Dagorn, L

    2016-11-03

    Estimating the abundance of pelagic fish species is a challenging task, due to their vast and remote habitat. Despite the development of satellite, archival and acoustic tagging techniques that allow the tracking of marine animals in their natural environments, these technologies have so far been underutilized in developing abundance estimations. We developed a new method for estimating the abundance of tropical tuna that employs these technologies and exploits the aggregative behavior of tuna around floating objects (FADs). We provided estimates of abundance indices based on a simulated set of tagged fish and studied the sensitivity of our method to different association dynamics, FAD numbers, population sizes and heterogeneities of the FAD-array. Taking the case study of yellowfin tuna (Thunnus albacares) acoustically-tagged in Hawaii, we implemented our approach on field data and derived for the first time the ratio between the associated and the total population. With more extensive and long-term monitoring of FAD-associated tunas and good estimates of the numbers of fish at FADs, our method could provide fisheries-independent estimates of populations of tropical tuna. The same approach can be applied to obtain population assessments for any marine and terrestrial species that display associative behavior and from which behavioral data have been acquired using acoustic, archival or satellite tags.

  13. A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging

    PubMed Central

    Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.

    2014-01-01

    Purpose: Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion-weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods: Weighted linear least squares (WLLS) fitting, together with a voxel-wise, subtraction-based noise correction from multiple independent acquisitions, is employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain using DKI-derived parameter estimates. Results: As long as the signal-to-noise ratio (SNR) of the most heavily diffusion-weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion: The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
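
    The flavor of a voxel-wise, subtraction-based correction can be conveyed with two repeated acquisitions: estimate the noise variance from their difference and subtract it from the mean squared signal. This is a generic sketch under Gaussian-noise assumptions, not the authors' exact scheme.

    ```python
    import numpy as np

    def noise_corrected_signal(acq1, acq2):
        """Voxel-wise subtraction-based correction (generic sketch): estimate
        the noise variance from the difference of two independent acquisitions
        and subtract it from the mean squared signal, rectifying at zero."""
        sigma2 = 0.5 * (acq1 - acq2) ** 2            # voxel-wise noise-variance estimate
        s2 = 0.5 * (acq1**2 + acq2**2) - sigma2      # bias-corrected squared signal
        return np.sqrt(np.clip(s2, 0.0, None))

    rng = np.random.default_rng(5)
    truth = np.full((64, 64), 10.0)                  # flat phantom signal
    a1 = truth + rng.normal(0.0, 2.0, truth.shape)
    a2 = truth + rng.normal(0.0, 2.0, truth.shape)
    print(noise_corrected_signal(a1, a2).mean())     # close to 10; sqrt(mean S^2) alone ~10.2
    ```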

  14. A transverse oscillation approach for estimation of three-dimensional velocity vectors, part I: concept and simulation study.

    PubMed

    Pihl, Michael Johannes; Jensen, Jørgen Arendt

    2014-10-01

    A method for 3-D velocity vector estimation using transverse oscillations is presented. The method employs a 2-D transducer and decouples the velocity estimation into three orthogonal components, which are estimated simultaneously and from the same data. The validity of the method is investigated by conducting simulations emulating a 32 × 32 matrix transducer. The results are evaluated using two performance metrics related to precision and accuracy. The study covers several parameters, including 49 flow directions, the SNR, steering angle, and apodization types. The 49 flow directions cover the positive octant of the unit sphere. In terms of accuracy, the median bias is -2%. The precision of v(x) and v(y) depends on the flow angle ß and ranges from 5% to 31% relative to the peak velocity magnitude of 1 m/s. For comparison, the range is 0.4% to 2% for v(z). The parameter study also reveals that the velocity estimation breaks down at an SNR between -6 and -3 dB. In terms of computational load, the estimation of the three velocity components requires 0.75 billion floating point operations per second (0.75 Gflops) for a realistic setup. This is well within the capability of modern scanners.

  15. Force estimation from OCT volumes using 3D CNNs.

    PubMed

    Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander

    2018-07-01

    Estimating the interaction forces between instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicone tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods, which achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization across subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.

  16. Evaluation of the pulse-contour method of determining stroke volume in man.

    NASA Technical Reports Server (NTRS)

    Alderman, E. L.; Branzi, A.; Sanders, W.; Brown, B. W.; Harrison, D. C.

    1972-01-01

    The pulse-contour method for determining stroke volume has been employed as a continuous rapid method of monitoring the cardiovascular status of patients. Twenty-one patients with ischemic heart disease and 21 patients with mitral valve disease were subjected to a variety of hemodynamic interventions. The pulse-contour estimations, using three different formulas derived by Warner, Kouchoukos, and Herd, were compared with indicator-dilution outputs. A comparison of the results of the two methods for determining stroke volume yielded correlation coefficients ranging from 0.59 to 0.84. The better performing Warner formula yielded a coefficient of variation of about 20%. The type of hemodynamic interventions employed did not significantly affect the results using the pulse-contour method. Although the correlation of the pulse-contour and indicator-dilution stroke volumes is high, the coefficient of variation is such that small changes in stroke volume cannot be accurately assessed by the pulse-contour method. However, the simplicity and rapidity of this method compared to determination of cardiac output by Fick or indicator-dilution methods makes it a potentially useful adjunct for monitoring critically ill patients.

  17. The relationship of age and place of delivery with postpartum contraception prior to discharge in Mexico: A retrospective cohort study.

    PubMed

    Darney, Blair G; Sosa-Rubi, Sandra G; Servan-Mori, Edson; Rodriguez, Maria I; Walker, Dilys; Lozano, Rafael

    2016-06-01

    To test the association of age (adolescents vs. older women) and place of delivery with receipt of immediate postpartum contraception in Mexico. Retrospective cohort study in Mexico with a nationally representative sample of women 12-39 years old at last delivery. We used multivariable logistic regression to test the association of self-reported receipt of postpartum contraception prior to discharge with age and place of delivery (public, employment-based, private, or out of facility). We included individual- and household-level confounders and calculated relative and absolute multivariable estimates of association. Our analytic sample included 7022 women (population, N=9,881,470). Twenty percent of the population was 12-19 years old at last birth, 55% was 20-29, and 25% was 30-39 years old. Overall, 43% of women reported no postpartum contraceptive method. Age was not significantly associated with receipt of a method, controlling for covariates. Women delivering in public facilities had lower odds of receiving a method (odds ratio = 0.52; 95% confidence interval (CI) = 0.40-0.68) compared with employment-based insurance facilities. We estimated that 76% (95% CI = 74-78%) of adolescents (12-19 years) who deliver in employment-based insurance facilities leave with a method, compared with 59% (95% CI = 56-62%) who deliver in public facilities. Both adolescents and women ages 20-39 receive postpartum contraception, but nearly half of all women receive no method. Place of delivery is correlated with receipt of postpartum contraception, with lower rates in the public sector. Lessons learned from Mexico are relevant to other countries seeking to improve adolescent health through reducing unintended pregnancy. Adolescents receive postpartum contraception as often as older women in Mexico, but half of all women receive no method. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. A novel control architecture for physiological tremor compensation in teleoperated systems.

    PubMed

    Ghorbanian, A; Zareinejad, M; Rezaei, S M; Sheikhzadeh, H; Baghestan, K

    2013-09-01

Telesurgery delivers surgical care to a 'remote' patient by means of robotic manipulators. When accurate positioning of the surgeon's tool is required, as in microsurgery, physiological tremor causes unwanted imprecision during a surgical operation. Accurate estimation/compensation of physiological tremor in teleoperation systems has been shown to improve performance during telesurgery. A new control architecture is proposed for estimation and compensation of physiological tremor in the presence of communication time delays. This control architecture guarantees stability with satisfactory transparency. In addition, the proposed method can be used for applications that require modification of the signals transmitted through communication channels. Stability of the bilateral tremor-compensated teleoperation is preserved by extending the bilateral teleoperation to an equivalent trilateral dual-master/single-slave teleoperation. The band-limited multiple Fourier linear combiner (BMFLC) algorithm is employed for real-time estimation of the operator's physiological tremor. Two kinds of stability analysis are employed. For the model-based controller, Llewellyn's criterion is used to analyze the absolute stability of the teleoperation. In the second method, a non-model-based controller is proposed and the stability of the time-delayed teleoperated system is proved by employing a Lyapunov function. Experimental results are presented to validate the effectiveness of the new control architecture. The tremorous motion is measured by an accelerometer and compensated in real time. In addition, a needle-insertion setup is proposed as a slave robot for the application of brachytherapy, in which the needle penetrates to the desired position. The slave performs the desired task in two classes of environments (free motion of the slave and in soft tissue). Experiments show that the proposed control architecture effectively compensates the user's tremorous motion and the slave follows only the master's voluntary motion in a stable manner. Copyright © 2012 John Wiley & Sons, Ltd.
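
    The BMFLC algorithm named above is well documented in the tremor-estimation literature; a minimal sketch of its LMS-adapted form follows. The frequency band, step, and adaptation gain are illustrative defaults, not the paper's settings.

```python
import numpy as np

def bmflc_estimate(signal, fs, f_min=6.0, f_max=14.0, df=0.5, mu=0.01):
    """Band-limited multiple Fourier linear combiner (BMFLC).

    Adaptively estimates a band-limited (quasi-)periodic component such as
    physiological tremor. Frequencies span [f_min, f_max] Hz in df steps;
    mu is the LMS adaptation gain.
    """
    freqs = np.arange(f_min, f_max + df, df)
    w = np.zeros(2 * len(freqs))                 # sine and cosine weights
    estimate = np.zeros_like(signal)
    t = np.arange(len(signal)) / fs
    for k, s in enumerate(signal):
        x = np.concatenate([np.sin(2 * np.pi * freqs * t[k]),
                            np.cos(2 * np.pi * freqs * t[k])])
        y = x @ w                                # current tremor estimate
        err = s - y
        w += 2 * mu * x * err                    # LMS weight update
        estimate[k] = y
    return estimate

# Usage: an 8 Hz synthetic tremor plus noise, sampled at 250 Hz.
fs = 250
t = np.arange(0, 4, 1 / fs)
tremor = 0.5 * np.sin(2 * np.pi * 8 * t)
measured = tremor + 0.05 * np.random.default_rng(1).normal(size=t.size)
est = bmflc_estimate(measured, fs)
print(f"RMS error: {np.sqrt(np.mean((est[fs:] - tremor[fs:])**2)):.3f}")
```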

  19. Method for Estimating Three-Dimensional Knee Rotations Using Two Inertial Measurement Units: Validation with a Coordinate Measurement Machine

    PubMed Central

    Vitali, Rachel V.; Cain, Stephen M.; Zaferiou, Antonia M.; Ojeda, Lauro V.; Perkins, Noel C.

    2017-01-01

    Three-dimensional rotations across the human knee serve as important markers of knee health and performance in multiple contexts including human mobility, worker safety and health, athletic performance, and warfighter performance. While knee rotations can be estimated using optical motion capture, that method is largely limited to the laboratory and small capture volumes. These limitations may be overcome by deploying wearable inertial measurement units (IMUs). The objective of this study is to present a new IMU-based method for estimating 3D knee rotations and to benchmark the accuracy of the results using an instrumented mechanical linkage. The method employs data from shank- and thigh-mounted IMUs and a vector constraint for the medial-lateral axis of the knee during periods when the knee joint functions predominantly as a hinge. The method is carefully validated using data from high precision optical encoders in a mechanism that replicates 3D knee rotations spanning (1) pure flexion/extension, (2) pure internal/external rotation, (3) pure abduction/adduction, and (4) combinations of all three rotations. Regardless of the movement type, the IMU-derived estimates of 3D knee rotations replicate the truth data with high confidence (RMS error < 4° and correlation coefficient r≥0.94). PMID:28846613

  20. The efficacy of respondent-driven sampling for the health assessment of minority populations.

    PubMed

    Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao

    2017-10-01

Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey to utilize this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511) and population estimates were compared with 2012 BRFSS data (n=2031) and 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods, and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method that can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.
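
    The RDS inference the authors refer to typically weights each respondent by the inverse of their self-reported network size, as in the RDS-II (Volz-Heckathorn) estimator. A minimal sketch with invented data:

```python
import numpy as np

def rds_ii_estimate(outcome, degree):
    """Volz-Heckathorn (RDS-II) estimator: weight each respondent by the
    inverse of their self-reported network size ("degree"), since
    well-connected people are more likely to be recruited."""
    outcome = np.asarray(outcome, dtype=float)
    weights = 1.0 / np.asarray(degree, dtype=float)
    return np.sum(weights * outcome) / np.sum(weights)

# Hypothetical data: a binary health indicator and reported network sizes.
y = [1, 0, 1, 1, 0, 0, 1, 0]
d = [10, 50, 5, 8, 60, 40, 12, 30]
print(f"naive mean:      {np.mean(y):.3f}")
print(f"RDS-II estimate: {rds_ii_estimate(y, d):.3f}")
```

    Note how the estimate hinges entirely on the self-reported degrees d, which is exactly why the abstract cautions that these inference methods "may not always work."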

  1. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
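
    The inversion step, recovering a source time function from a recorded trace given a modeled Green's function, can be sketched as a damped linear least-squares deconvolution. This is a toy stand-in for the paper's 3D finite-difference Green's functions; all signals below are synthetic.

```python
import numpy as np
from scipy.linalg import toeplitz

def invert_source_time_function(observed, greens, n_src, damping=1e-3):
    """Least-squares inversion for a source time function s given an
    observed trace d and a Green's function g, where d = g * s
    (discrete convolution). A small damping term regularizes the solve."""
    # Convolution matrix G so that G @ s == np.convolve(g, s)[:len(d)].
    col = np.zeros(len(observed))
    col[:len(greens)] = greens
    row = np.zeros(n_src)
    row[0] = col[0]
    G = toeplitz(col, row)
    A = G.T @ G + damping * np.eye(n_src)
    return np.linalg.solve(A, G.T @ observed)

# Usage with synthetic data: recover a known pulse.
rng = np.random.default_rng(2)
g = np.exp(-np.arange(40) / 8.0) * np.sin(np.arange(40) / 3.0)  # toy Green's fn
s_true = np.zeros(30)
s_true[5:10] = [0.2, 0.8, 1.0, 0.6, 0.2]
d = np.convolve(g, s_true)
d = d + 0.01 * rng.normal(size=d.size)
s_est = invert_source_time_function(d, g, n_src=30)
print(f"peak recovered at sample {np.argmax(s_est)} (true: {np.argmax(s_true)})")
```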

  2. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and discuss the current challenges in generating fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take Gibbs energy errors into account. Better estimates of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
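
    The Gibbs energy adjustment mentioned above follows the standard relation dG' = dG'0 + RT ln Q, where Q is the reaction quotient built from measured metabolite concentrations. A minimal sketch with invented numbers:

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol*K)
T = 310.15    # an assumed physiological temperature, K

def gibbs_energy(delta_g0_prime, product_concs, substrate_concs):
    """Transformed Gibbs energy of reaction at given metabolite
    concentrations (M): dG' = dG'0 + R*T*ln(Q)."""
    Q = np.prod(product_concs) / np.prod(substrate_concs)
    return delta_g0_prime + R * T * np.log(Q)

# Illustrative (hypothetical) example: a reaction with dG'0 = +5 kJ/mol
# becomes thermodynamically feasible (dG' < 0) when the product is held
# at a much lower concentration than the substrate.
dg = gibbs_energy(5.0, product_concs=[1e-5], substrate_concs=[1e-3])
print(f"dG' = {dg:.1f} kJ/mol")  # negative => forward direction feasible
```

    Propagating concentration and dG'0 uncertainties through this relation is what the abstract's point (2), accounting for Gibbs energy errors, refers to.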

  3. Conducting Meta-Analyses Based on p Values

    PubMed Central

    van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.

    2016-01-01

    Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466

  4. Waveform inversion of acoustic waves for explosion yield estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Rodgers, A. J.

We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.

  5. A line transect model for aerial surveys

    USGS Publications Warehouse

    Quang, Pham Xuan; Lanctot, Richard B.

    1991-01-01

We employ a line transect method to estimate the density of the common and Pacific loon in the Yukon Flats National Wildlife Refuge from aerial survey data. Line transect methods have the advantage of automatically taking into account "visibility bias" due to differences in the detectability of animals at different distances from the transect line. However, line transect methods applied to aerial surveys must overcome two difficulties: sighting distances are recorded inaccurately because of high travel speeds, so that in practice only a few reliable distance-class counts are available, and a blind strip beneath the aircraft obscures part of the transect. We propose a unimodal detection function that provides an estimate of the effective area lost due to the blind strip, under the assumption that a line of perfect detection exists parallel to the transect line. The unimodal detection function can also be applied when a blind strip is absent, and in certain instances when the maximum probability of detection is less than 100%. A simple bootstrap procedure to estimate standard error is illustrated. Finally, we present results from a small set of Monte Carlo experiments.
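
    For context, the conventional line-transect density estimator with a half-normal detection function (a standard textbook model, not the paper's unimodal function for blind strips) can be sketched as follows; the distances and line length are invented.

```python
import numpy as np

def halfnormal_density_estimate(distances, line_length):
    """Line-transect density estimate with a half-normal detection
    function g(x) = exp(-x^2 / (2 sigma^2)). sigma is fit by maximum
    likelihood, the effective strip half-width is mu = sigma*sqrt(pi/2),
    and density is D = n / (2 * L * mu)."""
    x = np.asarray(distances, dtype=float)
    sigma = np.sqrt(np.mean(x**2))          # MLE of the half-normal scale
    mu = sigma * np.sqrt(np.pi / 2)         # effective strip half-width
    return len(x) / (2 * line_length * mu)  # animals per unit area

# Hypothetical perpendicular sighting distances (km) along 100 km of line.
d = [0.02, 0.05, 0.11, 0.08, 0.30, 0.17, 0.04, 0.22]
print(f"density: {halfnormal_density_estimate(d, 100.0):.2f} per km^2")
```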

  6. Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.

    PubMed

    Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke

    2015-04-01

Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to the intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring can adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty frames of continuous 2D coronal dMRI using a steady-state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequence using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian-matrix-filtered images. The accuracy of automated extraction was calculated using manual segmentation of the blood vessels as the ground truth. In this pilot study, LDDL based on the blurring kernel estimated from the sparse component led to performance superior to the other kernel estimation approaches. LDDL consistently improved image contrast and fine-feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth. Image blurring in dMRI images can be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.

  7. The effects of health information technology adoption and hospital-physician integration on hospital efficiency.

    PubMed

    Cho, Na-Eun; Chang, Jongwha; Atems, Bebonchu

    2014-11-01

To determine the impact of health information technology (HIT) adoption and hospital-physician integration on hospital efficiency. Using 2010 data from the American Hospital Association's (AHA) annual survey, the AHA IT survey, supplemented by the CMS Case Mix Index, and the US Census Bureau's small area income and poverty estimates, we examined how the adoption of HIT and the employment of physicians affected hospital efficiency and whether they were substitutes or complements. The sample included 2173 hospitals. We employed a 2-stage approach. In the first stage, data envelopment analysis was used to estimate the technical efficiency of hospitals. In the second stage, we used instrumental variable approaches, notably 2-stage least squares and the generalized method of moments, to examine the effects of IT adoption and integration on hospital efficiency. We found that HIT adoption and hospital-physician integration, when considered separately, each have statistically significant positive impacts on hospital efficiency. We also found that hospitals that adopted HIT with employed physicians achieved lower efficiency than hospitals that adopted HIT without employed physicians. Although HIT adoption and hospital-physician integration both seem to be key parts of improving hospital efficiency when utilized individually, they can hurt hospital efficiency when utilized together.

  8. Precarious Employment, Bad Jobs, Labor Unions, and Early Retirement

    PubMed Central

    Warren, John R.; Sweeney, Megan M.; Hauser, Robert M.; Ho, Jeong-Hwa

    2011-01-01

    Objectives. We examined the extent to which involuntary job loss, exposure to “bad jobs,” and labor union membership across the life course are associated with the risk of early retirement. Methods. Using data from the Wisconsin Longitudinal Study, a large (N = 8,609) sample of men and women who graduated from high school in 1957, we estimated discrete-time event history models for the transition to first retirement through age 65. We estimated models separately for men and women. Results. We found that experience of involuntary job loss and exposure to bad jobs are associated with a lower risk of retiring before age 65, whereas labor union membership is associated with a higher likelihood of early retirement. These relationships are stronger for men than for women and are mediated to some extent by pre-retirement differences in pension eligibility, wealth, job characteristics, and health. Discussion. Results provide some support for hypotheses derived from theories of cumulative stratification, suggesting that earlier employment experiences should influence retirement outcomes indirectly through later-life characteristics. However, midlife employment experiences remain associated with earlier retirement, net of more temporally proximate correlates, highlighting the need for further theorization and empirical evaluation of the mechanisms through which increasingly common employment experiences influence the age at which older Americans retire. PMID:21310772

  9. The Demirjian versus the Willems method for dental age estimation in different populations: A meta-analysis of published studies

    PubMed Central

    2017-01-01

Background: The accuracy of radiographic methods for dental age estimation is important for biological growth research and forensic applications. The accuracy of the two most commonly used systems (Demirjian and Willems) has been evaluated with conflicting results. This study investigates the accuracy of these methods for dental age estimation in different populations. Methods: A search of PubMed, Scopus, Ovid, the Database of Open Access Journals and Google Scholar was undertaken. Eligible studies published before December 28, 2016 were reviewed and analyzed. Meta-analysis was performed on 28 published articles using the Demirjian and/or Willems methods to estimate chronological age in 14,109 children (6,581 males, 7,528 females) age 3–18 years in studies using Demirjian’s method and 10,832 children (5,176 males, 5,656 females) age 4–18 years in studies using Willems’ method. The weighted mean difference with a 95% confidence interval was used to assess the accuracy of the two methods in predicting chronological age. Results: The Demirjian method significantly overestimated chronological age (p<0.05) in males age 3–15 and females age 4–16 when studies were pooled by age cohorts and sex. The majority of studies using Willems’ method did not report significant overestimation of age in either sex. Overall, Demirjian’s method significantly overestimated chronological age compared to the Willems method (p<0.05). The weighted mean difference for the Demirjian method was 0.62 for males and 0.72 for females, while that of the Willems method was 0.26 for males and 0.29 for females. Conclusion: The Willems method provides more accurate estimation of chronological age in different populations, while Demirjian’s method has broad application in terms of determining maturity scores. However, the accuracy of Demirjian age estimations is confounded by population variation when converting maturity scores to dental ages. For the highest accuracy of age estimation, population-specific standards, rather than a universal standard or methods developed on other populations, need to be employed. PMID:29117240
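
    The pooling itself is standard fixed-effect inverse-variance weighting. A minimal sketch with invented per-study values (overestimate in years, with standard errors):

```python
import numpy as np

def weighted_mean_difference(effects, std_errors):
    """Fixed-effect inverse-variance pooling of per-study mean differences
    (here: estimated age minus chronological age), with a 95% CI."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    wmd = np.sum(weights * effects) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return wmd, (wmd - 1.96 * se, wmd + 1.96 * se)

# Hypothetical per-study overestimates (years) and their standard errors.
wmd, ci = weighted_mean_difference([0.55, 0.70, 0.48, 0.81],
                                   [0.10, 0.15, 0.12, 0.20])
print(f"WMD = {wmd:.2f} years, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```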

  10. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion that the proposed method requires is the approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935

  11. Simultaneous and Continuous Estimation of Shoulder and Elbow Kinematics from Surface EMG Signals

    PubMed Central

    Zhang, Qin; Liu, Runfeng; Chen, Wenbin; Xiong, Caihua

    2017-01-01

In this paper, we present a simultaneous and continuous kinematics estimation method for multiple DoFs across the shoulder and elbow joints. Although simultaneous and continuous kinematics estimation from surface electromyography (EMG) is a feasible way to achieve natural and intuitive human-machine interaction, few works have investigated multi-DoF estimation across the most significant joints of the upper limb, the shoulder and elbow. This paper evaluates the feasibility of estimating 4-DoF kinematics at the shoulder and elbow during coordinated arm movements. Considering the potential applications of this method in exoskeletons, prosthetics and other arm rehabilitation techniques, the estimation performance is presented with different muscle activity decomposition and learning strategies. Principal component analysis (PCA) and independent component analysis (ICA) are respectively employed for EMG mode decomposition, with an artificial neural network (ANN) for learning the electromechanical association. Four joint angles across the shoulder and elbow are simultaneously and continuously estimated from EMG in four coordinated arm movements. By using ICA (PCA) and a single ANN, an average estimation accuracy of 91.12% (90.23%) is obtained in 70-s intra-cross validation and 87.00% (86.30%) in 2-min inter-cross validation. These results suggest it is feasible and effective to use ICA (PCA) with a single ANN for multi-joint kinematics estimation under varying application conditions. PMID:28611573

  12. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    PubMed

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate the infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on the expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
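
    Under 100% specificity and conditional independence, the moment equations have a closed form. A minimal sketch, simplified to the case where both tests are applied to the whole sample (the paper's design is a stratified two-stage version of this); the proportions are invented:

```python
def mom_prevalence(p1, p2, p12):
    """Method-of-moments solution assuming 100% specificity for both tests
    and conditional independence given infection status:
        P(T1+) = pi*s1,  P(T2+) = pi*s2,  P(T1+ & T2+) = pi*s1*s2,
    which solves to pi = p1*p2/p12, s1 = p12/p2, s2 = p12/p1."""
    prevalence = p1 * p2 / p12
    sens1, sens2 = p12 / p2, p12 / p1
    return prevalence, sens1, sens2

# Hypothetical observed proportions positive on test 1, test 2, and both.
pi_hat, s1_hat, s2_hat = mom_prevalence(p1=0.12, p2=0.09, p12=0.06)
print(f"prevalence {pi_hat:.3f}, sensitivities {s1_hat:.2f} / {s2_hat:.2f}")
```

    A parametric bootstrap, resampling test results under the fitted parameters and re-solving, would then give the standard errors the abstract describes.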

  13. Source Detection with Bayesian Inference on ROSAT All-Sky Survey Data Sample

    NASA Astrophysics Data System (ADS)

    Guglielmetti, F.; Voges, W.; Fischer, R.; Boese, G.; Dose, V.

    2004-07-01

We employ Bayesian inference for the joint estimation of sources and background on ROSAT All-Sky Survey (RASS) data. The probabilistic method improves the detection of faint extended celestial sources compared to the Standard Analysis Software System (SASS). Background maps were estimated in a single step together with the detection of sources, without pixel censoring. Consistent uncertainties of background and sources are provided. The source probability is evaluated for single pixels as well as for pixel domains to enhance the detection of weak and extended sources.

  14. Computational tools for multi-linked flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.

    1990-01-01

    A software module which designs and tests controllers and filters in Kalman Estimator form, based on a polynomial state-space model is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.

  15. Drifter-based estimate of the 5 year dispersal of Fukushima-derived radionuclides

    NASA Astrophysics Data System (ADS)

    Rypina, I. I.; Jayne, S. R.; Yoshida, S.; Macdonald, A. M.; Buesseler, K.

    2014-11-01

Employing some 40 years of North Pacific drifter-track observations from the Global Drifter Program database, statistics defining the horizontal spread of radionuclides from the Fukushima nuclear power plant into the Pacific Ocean are investigated over a time scale of 5 years. A novel two-iteration method is employed to make the best use of the available drifter data. Drifter-based predictions of the temporal progression of the leading edge of the radionuclide distribution are compared to observed radionuclide concentrations from research surveys occupied in 2012 and 2013. Good agreement between the drifter-based predictions and the observations is found.

  16. Current-induced alternating reversed dual-echo-steady-state for joint estimation of tissue relaxation and electrical properties.

    PubMed

    Lee, Hyunyeol; Sohn, Chul-Ho; Park, Jaeseok

    2017-07-01

To develop a current-induced, alternating reversed dual-echo-steady-state-based magnetic resonance electrical impedance tomography method for joint estimation of tissue relaxation and electrical properties. The proposed method reverses the readout gradient configuration of the conventional dual-echo steady state, in which steady-state-free-precession (SSFP)-ECHO is produced earlier than SSFP-free-induction-decay (FID), while alternating current pulses are applied between the two SSFP signals to secure high sensitivity of SSFP-FID to the injected current. Additionally, the alternating reversed dual-echo-steady-state signals are modulated by employing variable flip angles over two orthogonal injections of current pulses. Ratiometric signal models are analytically constructed, from which T1, T2, and current-induced Bz are jointly estimated by solving a nonlinear inverse problem for conductivity reconstruction. Numerical simulations and experimental studies are performed to investigate the feasibility of the proposed method in estimating relaxation parameters and conductivity. The proposed method, compared with conventional magnetic resonance electrical impedance tomography, enables rapid data acquisition and simultaneous estimation of T1, T2, and current-induced Bz, yielding a comparable level of signal-to-noise ratio in the parameter estimates while retaining a relative conductivity contrast. We successfully demonstrated the feasibility of the proposed method in jointly estimating tissue relaxation parameters as well as conductivity distributions. It can be a promising, rapid imaging strategy for quantitative conductivity estimation. Magn Reson Med 78:107-120, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  17. Reachable set estimation for Takagi-Sugeno fuzzy systems against unknown output delays with application to tracking control of AUVs.

    PubMed

    Zhong, Zhixiong; Zhu, Yanzheng; Ahn, Choon Ki

    2018-07-01

In this paper, we address the problem of reachable set estimation for continuous-time Takagi-Sugeno (T-S) fuzzy systems subject to unknown output delays. Based on the reachable set concept, a new controller design method is also discussed for such systems. An effective method is developed to attenuate the negative impact of the unknown output delays, which can degrade the performance and stability of the system. First, an augmented fuzzy observer is proposed to enable synchronous estimation of the system state and the disturbance term owing to the unknown output delays, which ensures that the reachable set of the estimation error is bounded via the intersection operation of ellipsoids. Then, a compensation technique is employed to eliminate the influence on system performance stemming from the unknown output delays. Finally, the effectiveness and correctness of the obtained theory are verified on the tracking control of autonomous underwater vehicles. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  18. A nondestructive method for estimation of the fracture toughness of CrMoV rotor steels based on ultrasonic nonlinearity.

    PubMed

    Jeong, Hyunjo; Nahm, Seung-Hoon; Jhang, Kyung-Young; Nam, Young-Hyun

    2003-09-01

The objective of this paper is to develop a nondestructive method for estimating the fracture toughness (K_IC) of CrMoV steels used as the rotor material of steam turbines in power plants. To achieve this objective, a number of CrMoV steel samples were heat-treated, and the fracture appearance transition temperature (FATT) was determined as a function of aging time. Nonlinear ultrasonics was employed as the theoretical basis to explain harmonic generation in a damaged material, and the nonlinearity parameter of the second harmonic wave was the experimental measure correlated with the fracture toughness of the rotor steel. The nondestructive procedure for estimating K_IC consists of two steps. First, the correlations between the nonlinearity parameter and the FATT are sought. The FATT values are then used to estimate K_IC using the K_IC versus excess temperature (i.e., T-FATT) correlation available in the literature for CrMoV rotor steel.
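
    The nonlinearity parameter is conventionally taken proportional to A2/A1^2, the second-harmonic amplitude over the squared fundamental. A minimal sketch on a synthetic waveform (sampling rate, frequency, and harmonic level all invented):

```python
import numpy as np

def relative_nonlinearity(signal, fs, f0):
    """Relative ultrasonic nonlinearity parameter beta' ~ A2 / A1**2,
    where A1 and A2 are the fundamental and second-harmonic amplitudes
    read from the spectrum of the received waveform."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    a1 = spectrum[np.argmin(np.abs(freqs - f0))]
    a2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
    return a2 / a1**2

# Synthetic example: a 5 MHz tone carrying a weak second harmonic.
fs, f0 = 100e6, 5e6
t = np.arange(0, 20e-6, 1 / fs)
wave = np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)
print(f"beta' = {relative_nonlinearity(wave, fs, f0):.4f}")
```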

  19. Experimental measure of arm stiffness during single reaching movements with a time-frequency analysis

    PubMed Central

    Pierobon, Alberto; DiZio, Paul; Lackner, James R.

    2013-01-01

    We tested an innovative method to estimate joint stiffness and damping during multijoint unfettered arm movements. The technique employs impulsive perturbations and a time-frequency analysis to estimate the arm's mechanical properties along a reaching trajectory. Each single impulsive perturbation provides a continuous estimation on a single-reach basis, making our method ideal to investigate motor adaptation in the presence of force fields and to study the control of movement in impaired individuals with limited kinematic repeatability. In contrast with previous dynamic stiffness studies, we found that stiffness varies during movement, achieving levels higher than during static postural control. High stiffness was associated with elevated reflexive activity. We observed a decrease in stiffness and a marked reduction in long-latency reflexes around the reaching movement velocity peak. This pattern could partly explain the difference between the high stiffness reported in postural studies and the low stiffness measured in dynamic estimation studies, where perturbations are typically applied near the peak velocity point. PMID:23945781

  20. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful for estimating metabolite levels in studies done with non-radiolabeled compounds where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by validated LC-MS/MS assays. This approach does not require synthesis of authentic metabolite standards or knowledge of the exact structures of the metabolites, and therefore should provide a useful method for obtaining early estimates of circulating metabolites in early clinical or toxicological studies.

  1. Population Estimation in Singapore Based on Remote Sensing and Open Data

    NASA Astrophysics Data System (ADS)

    Guo, H.; Cao, K.; Wang, P.

    2017-09-01

Population estimation statistics are widely used in government, commercial and educational sectors for a variety of purposes. With a growing emphasis on real-time and detailed population information, data users nowadays have switched from traditional census data to more technology-based data sources such as LiDAR point clouds and high-resolution satellite imagery. Nevertheless, such data are costly and periodically unavailable. In this paper, the authors use West Coast District, Singapore as a case study to investigate the applicability and effectiveness of using satellite imagery from Google Earth for the extraction of building footprints and population estimation. At the same time, volunteered geographic information (VGI) is utilized as ancillary data for building footprint extraction; open data such as OpenStreetMap (OSM) can be employed to enhance the extraction process. In view of the challenges in building shadow extraction, this paper discusses several methods, including buffer, mask and shape index, to improve accuracy. It also illustrates population estimation methods based on building height and number-of-floor estimates. The results show that the accuracy of the housing unit method for population estimation can reach 92.5%, which is remarkably accurate. This paper thus provides insights into techniques for building extraction and fine-scale population estimation, which will benefit users such as urban planners in terms of policymaking and urban planning of Singapore.
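
    The housing unit method referenced above scales building footprint and floor counts into dwelling units and then into persons. A minimal sketch; the per-unit area, household size, and building list below are assumptions for illustration, not the paper's calibrated values.

```python
# Housing unit method: footprint * floors -> gross floor area -> housing
# units -> population. All constants are hypothetical.
AVG_UNIT_AREA = 90.0         # m^2 per housing unit (assumed)
PERSONS_PER_HOUSEHOLD = 3.3  # average household size (assumed)

buildings = [
    {"footprint_m2": 900.0, "floors": 12},
    {"footprint_m2": 600.0, "floors": 4},
    {"footprint_m2": 1500.0, "floors": 20},
]

population = 0.0
for b in buildings:
    gross_floor_area = b["footprint_m2"] * b["floors"]
    housing_units = gross_floor_area / AVG_UNIT_AREA
    population += housing_units * PERSONS_PER_HOUSEHOLD

print(f"estimated population: {population:.0f}")
```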

  2. Contrasting Causal Effects of Workplace Interventions.

    PubMed

    Izano, Monika A; Brown, Daniel M; Neophytou, Andreas M; Garcia, Erika; Eisen, Ellen A

    2018-07-01

    Occupational exposure guidelines are ideally based on estimated effects of static interventions that assign constant exposure over a working lifetime. Static effects are difficult to estimate when follow-up extends beyond employment because their identifiability requires additional assumptions. Effects of dynamic interventions that assign exposure while at work, allowing subjects to leave and become unexposed thereafter, are more easily identifiable but result in different estimates. Given the practical implications of exposure limits, we explored the drivers of the differences between static and dynamic interventions in a simulation study where workers could terminate employment because of an intermediate adverse health event that functions as a time-varying confounder. The two effect estimates became more similar with increasing strength of the health event and outcome relationship and with increasing time between health event and employment termination. Estimates were most dissimilar when the intermediate health event occurred early in employment, providing an effective screening mechanism.

  3. Noise Estimation in Electroencephalogram Signal by Using Volterra Series Coefficients

    PubMed Central

    Hassani, Malihe; Karami, Mohammad Reza

    2015-01-01

The Volterra model is widely used for nonlinearity identification in practical applications. In this paper, we employed the Volterra model to find the nonlinear relation between the electroencephalogram (EEG) signal and the noise, which is a novel approach to estimating noise in the EEG signal. We show that by employing this method we can considerably improve the signal-to-noise ratio, by a factor of at least 1.54. An important issue in implementing the Volterra model is its computational complexity, especially when the degree of nonlinearity is increased. Hence, in many applications it is important to reduce the complexity of computation. In this paper, we use the properties of the EEG signal and propose a new and good approximation of the delayed input signal by its adjacent samples in order to reduce the computation involved in finding the Volterra series coefficients. The computational complexity is reduced by a ratio of at least 1/3 when the filter memory is 3. PMID:26284176
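
    A discrete second-order Volterra filter, the model class used above, can be sketched directly; the kernels and signals below are invented for illustration.

```python
import numpy as np

def volterra_2nd_order(x, h1, h2):
    """Evaluate a discrete second-order Volterra model
        y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]
    with memory M = len(h1); h2 is an (M, M) quadratic kernel."""
    M = len(h1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        # Delayed input vector [x[n], x[n-1], ..., x[n-M+1]], zero-padded.
        xv = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])
        y[n] = h1 @ xv + xv @ h2 @ xv
    return y

# Usage with memory 3. By symmetry the quadratic kernel has only
# M*(M+1)/2 = 6 unique terms, which is where approximating delayed samples
# by their neighbors can further cut the cost of coefficient estimation.
rng = np.random.default_rng(3)
x = rng.normal(size=200)
h1 = np.array([0.8, -0.3, 0.1])
h2 = 0.05 * np.eye(3)
y = volterra_2nd_order(x, h1, h2)
print(y[:5])
```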

  4. Development and evaluation of an acoustic device to estimate size distribution of channel catfish in commercial ponds

    USDA-ARS?s Scientific Manuscript database

As one step in the continued effort to apply acoustic methods and techniques to the betterment of catfish aquaculture, an acoustic “catfish sizer” was designed to determine the size distribution of Channel Catfish Ictalurus punctatus in commercial ponds. The catfish sizer employed a custom-built 4...

  5. Influencing Transfer and Baccalaureate Attainment for Community College Students through State Grant Aid: Quasi-Experimental Evidence from Texas

    ERIC Educational Resources Information Center

    Bordoloi Pazich, Loni

    2014-01-01

    This study uses statewide longitudinal data from Texas to estimate the impact of a state grant program intended to encourage low-income community college students to transfer to four-year institutions and complete the baccalaureate. Quasi-experimental methods employed include propensity score matching and regression discontinuity. Results indicate…

  6. Quantitative PCR Method for Diagnosis of Citrus Bacterial Canker†

    PubMed Central

    Cubero, J.; Graham, J. H.; Gottwald, T. R.

    2001-01-01

    For diagnosis of citrus bacterial canker by PCR, an internal standard is employed to ensure the quality of the DNA extraction and that proper requisites exist for the amplification reaction. The ratio of PCR products from the internal standard and bacterial target is used to estimate the initial bacterial concentration in citrus tissues with lesions. PMID:11375206

  7. Do They Stay or Do They Go? Residents Who Become Faculty at Their Training Institutions

    ERIC Educational Resources Information Center

    Gathright, Molly Marie; Krain, Lewis P.; Sparks, Shane E.; Thrush, Carol R.

    2012-01-01

    Objective: Few data exist on the topic of internal hiring of trainees in academic medicine. This study examines nationally representative data to determine the frequency of faculty psychiatrists who are employed in the same department in which they completed their residency training. Method: Estimates of internal faculty hiring were obtained by…

  8. Faculty and Staff Health Promotion: Results from the School Health Policies and Programs Study 2006

    ERIC Educational Resources Information Center

    Eaton, Danice K.; Marx, Eva; Bowie, Sara E.

    2007-01-01

    Background: US schools employ an estimated 6.7 million workers and are thus an ideal setting for employee wellness programs. This article describes the characteristics of school employee wellness programs in the United States, including state-, district-, and school-level policies and programs. Methods: The Centers for Disease Control and…

  9. Classical Item Analysis Using Latent Variable Modeling: A Note on a Direct Evaluation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2011-01-01

    A directly applicable latent variable modeling procedure for classical item analysis is outlined. The method allows one to point and interval estimate item difficulty, item correlations, and item-total correlations for composites consisting of categorical items. The approach is readily employed in empirical research and as a by-product permits…

  10. Lung Cancer and Elemental Carbon Exposure in Trucking Industry Workers

    PubMed Central

    Laden, Francine; Hart, Jaime E.; Davis, Mary E.; Eisen, Ellen A.; Smith, Thomas J.

    2012-01-01

    Background: Diesel exhaust has been considered to be a probable lung carcinogen based on studies of occupationally exposed workers. Efforts to define lung cancer risk in these studies have been limited in part by lack of quantitative exposure estimates. Objective: We conducted a retrospective cohort study to assess lung cancer mortality risk among U.S. trucking industry workers. Elemental carbon (EC) was used as a surrogate of exposure to engine exhaust from diesel vehicles, traffic, and loading dock operations. Methods: Work records were available for 31,135 male workers employed in the unionized U.S. trucking industry in 1985. A statistical model based on a national exposure assessment was used to estimate historical work-related exposures to EC. Lung cancer mortality was ascertained through the year 2000, and associations with cumulative and average EC were estimated using proportional hazards models. Results: Duration of employment was inversely associated with lung cancer risk consistent with a healthy worker survivor effect and a cohort composed of prevalent hires. After adjusting for employment duration, we noted a suggestion of a linear exposure–response relationship. For each 1,000-µg/m3 months of cumulative EC, based on a 5-year exposure lag, the hazard ratio (HR) was 1.07 [95% confidence interval (CI): 0.99, 1.15] with a similar association for a 10-year exposure lag [HR = 1.09 (95% CI: 0.99, 1.20)]. Average exposure was not associated with relative risk. Conclusions: Lung cancer mortality in trucking industry workers increased in association with cumulative exposure to EC after adjusting for negative confounding by employment duration. PMID:22739103

  11. A computerized recognition system for the home-based physiotherapy exercises using an RGBD camera.

    PubMed

    Ar, Ilktan; Akgul, Yusuf Sinan

    2014-11-01

Computerized recognition of home-based physiotherapy exercises has many benefits and has attracted considerable interest in the computer vision community. However, most methods in the literature view this task as a special case of motion recognition. In contrast, we propose to employ the three main components of a physiotherapy exercise (the motion patterns, the stance knowledge, and the exercise object) as different recognition tasks and embed them separately into the recognition system. The low-level information about each component is gathered using machine learning methods. Then, we use a generative Bayesian network to recognize the exercise types by combining the information from these sources at an abstract level, which takes advantage of domain knowledge for a more robust system. Finally, a novel postprocessing step is employed to estimate the exercise repetition counts. The performance evaluation of the system is conducted with a new dataset which contains RGB (red, green, and blue) and depth videos of home-based exercise sessions for commonly applied shoulder and knee exercises. The proposed system works without any body-part segmentation, body-part tracking, joint detection, or temporal segmentation methods. In the end, favorable exercise recognition rates and encouraging results on the estimation of repetition counts are obtained.

  12. Close-in characteristics of LH2/LOX reactions

    NASA Technical Reports Server (NTRS)

    Riehl, W. A.; Ullian, L. J.

    1985-01-01

In deriving shock overpressures from space vehicles employing LH2 and LOX, separate methods of analysis and prediction are recommended as a function of distance. Three methods of treatment are recommended. For the Far Field - where the expected shock overpressure is less than 40 psi (lambda = 5) - use the classical PYRO approach to determine TNT yield, and employ the classical ordnance (Kingery) curve to obtain the overall value. For the Close-In Range, a suggested limit is 3D, i.e., a zone extending from the tank wall to a distance of three times the tank diameter. Rather than estimate a specific distance from the center of explosion to the target, it is only necessary to estimate whether this could be within one, two, or three diameters of the wall, i.e., in the 1, 2, or 3D zone. Then assess whether the mixing mode is the PYRO CBGS (spill) mode or the CBM (internal mixing) mode. From the zone and mixing mode, the probability of attaining various shock overpressures is determined from the plots provided herein. For the transition zone, between 40 psi and the 3D distance, it is tentatively recommended that both of the preceding methods be used and, to be conservative, the higher resulting value be adopted.

  13. Estimation of size of red blood cell aggregates using backscattering property of high-frequency ultrasound: In vivo evaluation

    NASA Astrophysics Data System (ADS)

    Kurokawa, Yusaku; Taki, Hirofumi; Yashiro, Satoshi; Nagasawa, Kan; Ishigaki, Yasushi; Kanai, Hiroshi

    2016-07-01

We propose a method for assessing the degree of red blood cell (RBC) aggregation using the backscattering properties of high-frequency ultrasound. In this method, the scattering property of RBCs is extracted from the power spectrum of RBC echoes normalized by that from the posterior wall of a vein. In a phantom experiment employing the proposed method, the sizes of microspheres 5 and 20 µm in diameter were estimated to have mean values of 4.7 and 17.3 µm and standard deviations of 1.9 and 1.4 µm, respectively. In an in vivo experimental study, we compared results between three healthy subjects and four diabetic patients. The average estimated scatterer diameters in healthy subjects at rest and during avascularization were 7 and 28 µm, respectively. In contrast, those in diabetic patients receiving both antithrombotic therapy and insulin therapy were 11 and 46 µm, respectively. These results show that the proposed method has high potential for clinical application in assessing RBC aggregation, which may be related to the progression of diabetes.

  14. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  15. Derivative Spectrophotometric Method for Estimation of Antiretroviral Drugs in Fixed Dose Combinations

    PubMed Central

    P.B., Mohite; R.B., Pandhare; S.G., Khanage

    2012-01-01

Purpose: Lamivudine is a cytidine analogue and zidovudine a thymidine analogue; both are used as antiretroviral agents. Both drugs are available in tablet dosage forms at doses of 150 mg for LAM and 300 mg for ZID, respectively. Method: The method employed is based on first-order derivative spectroscopy. Wavelengths of 279 nm and 300 nm were selected for the estimation of Lamivudine and Zidovudine, respectively, from the first-order derivative spectra. The concentration of both drugs was determined by the proposed method. The results of the analysis have been validated statistically and by recovery studies as per ICH guidelines. Result: Both drugs obey Beer’s law in the concentration range 10-50 μg/mL for LAM and ZID, with regression coefficients of 0.9998 and 0.9999, intercepts of -0.0677 and -0.0043, and slopes of 0.0457 and 0.0391, respectively. The accuracy and reproducibility results are close to 100% with 2% RSD. Conclusion: A simple, accurate, precise, sensitive and economical procedure for the simultaneous estimation of Lamivudine and Zidovudine in tablet dosage form has been developed. PMID:24312779

  16. Parental employment, family structure, and child's health insurance.

    PubMed

    Rolett, A; Parker, J D; Heck, K E; Makuc, D M

    2001-01-01

    To examine the impact of family structure on the relationship between parental employment characteristics and employer-sponsored health insurance coverage among children with employed parents in the United States. National Health Interview Survey data for 1993-1995 was used to estimate proportions of children without employer-sponsored health insurance, by family structure, separately according to maternal and paternal employment characteristics. In addition, relative odds of being without employer-sponsored insurance were estimated, controlling for family structure and child's age, race, and poverty status. Children with 2 employed parents were more likely to have employer-sponsored health insurance coverage than children with 1 employed parent, even among children in 2-parent families. However, among children with employed parents, the percentage with employer-sponsored health insurance coverage varied widely, depending on the hours worked, employment sector, occupation, industry, and firm size. Employer-sponsored health insurance coverage for children is extremely variable, depending on employment characteristics and marital status of the parents.

  17. Database and new models based on a group contribution method to predict the refractive index of ionic liquids.

    PubMed

    Wang, Xinxin; Lu, Xingmei; Zhou, Qing; Zhao, Yongsheng; Li, Xiaoqian; Zhang, Suojiang

    2017-08-02

Refractive index is an important physical property that is widely used in separation and purification. In this study, refractive index data for ILs were collected to establish a comprehensive database comprising some 2138 data points published from 1996 to 2014. A Group Contribution-Artificial Neural Network (GC-ANN) model and a Group Contribution (GC) method were employed to predict the refractive index of ILs at temperatures from 283.15 K to 368.15 K. The average absolute relative deviations (AARD) of the GC-ANN model and the GC method were 0.179% and 0.628%, respectively. The results showed that the GC-ANN model provides an effective way to estimate the refractive index of ILs, whereas the GC method is simple and broadly applicable. In summary, both models are accurate and efficient approaches for estimating the refractive indices of ILs.

  18. Comparison between prediction and experiment for all-movable wing and body combinations at supersonic speeds : lift, pitching moment, and hinge moment

    NASA Technical Reports Server (NTRS)

    Nielsen, Jack N; Kaattari, George E; Drake, William C

    1952-01-01

    A simple method is presented for estimating lift, pitching-moment, and hinge-moment characteristics of all-movable wings in the presence of a body as well as the characteristics of wing-body combinations employing such wings. In general, good agreement between the method and experiment was obtained for the lift and pitching moment of the entire wing-body combination and for the lift of the wing in the presence of the body. The method is valid for moderate angles of attack, wing deflection angles, and width of gap between wing and body. The method of estimating hinge moment was not considered sufficiently accurate for triangular all-movable wings. An alternate procedure is proposed based on the experimental moment characteristics of the wing alone. Further theoretical and experimental work is required to substantiate fully the proposed procedure.

  19. A joint tracking method for NSCC based on WLS algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Ruidan; Xu, Ying; Yuan, Hong

    2017-12-01

Navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and various configurable scheme parameters, which enable it to deliver significant navigation augmentation, in terms of spectral efficiency, tracking accuracy, multipath mitigation and anti-jamming capability, compared with legacy navigation signals. Meanwhile, its characteristic scheme structure can provide auxiliary information for signal synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the Weighted Least Squares (WLS) algorithm. In this method, the WLS algorithm is employed to jointly estimate each sub-carrier frequency shift through the linear frequency-Doppler relationship, utilizing the known sub-carrier frequencies. The weighting matrix is set adaptively according to sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results illustrate that the tracking accuracy and sensitivity of this method outperform the single-carrier algorithm at low SNR.
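
    The WLS step can be sketched for the scalar case: a common Doppler-induced velocity inferred from per-sub-carrier frequency shifts, with weights proportional to sub-carrier power. Frequencies, powers, and noise levels below are assumptions, not the paper's configuration.

```python
import numpy as np

C = 2.99792458e8  # speed of light, m/s

def wls_doppler(subcarrier_freqs, measured_shifts, powers):
    """Weighted least-squares fit of a common radial velocity v from
    per-sub-carrier frequency shifts, using the linear Doppler relation
    shift_i = (f_i / c) * v and weights proportional to sub-carrier power."""
    H = np.asarray(subcarrier_freqs, dtype=float) / C  # design vector
    W = np.diag(np.asarray(powers, dtype=float))
    y = np.asarray(measured_shifts, dtype=float)
    return (H @ W @ y) / (H @ W @ H)                   # scalar WLS solution

# Hypothetical sub-carriers near L band, a true velocity of 120 m/s, and
# noisier shift measurements on the weaker sub-carriers.
f = np.array([1561e6, 1575e6, 1589e6, 1602e6])
p = np.array([4.0, 1.0, 1.0, 4.0])                     # relative powers
rng = np.random.default_rng(4)
shifts = f / C * 120.0 + rng.normal(scale=0.05 / np.sqrt(p))
print(f"WLS velocity estimate: {wls_doppler(f, shifts, p):.1f} m/s")
```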

  20. Texture feature extraction based on a uniformity estimation method for local brightness and structure in chest CT images.

    PubMed

    Peng, Shao-Hu; Kim, Deok-Hwan; Lee, Seok-Lyong; Lim, Myung-Kwan

    2010-01-01

Texture features are among the most important feature analysis methods in computer-aided diagnosis (CAD) systems for disease diagnosis. In this paper, we propose a Uniformity Estimation Method (UEM) for local brightness and structure to detect pathological changes in chest CT images. Based on the characteristics of chest CT images, we extract texture features through a proposed extension of the rotation-invariant LBP (ELBP(riu4)) together with the gradient orientation difference, so as to represent uniform patterns of brightness and structure in the image. The use of ELBP(riu4) and the gradient orientation difference allows us to extract rotation-invariant texture features in multiple directions. Beyond this, we propose to employ the integral image technique to speed up the texture feature computation of the spatial gray level dependence method (SGLDM). Copyright © 2010 Elsevier Ltd. All rights reserved.
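
    The integral-image speedup mentioned above reduces any rectangular window sum to four table lookups, regardless of window size. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[r, c] = sum of img[:r, :c]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# Usage: constant-time window sums, the kind of operation repeated over
# many local windows in SGLDM-style texture statistics.
img = np.arange(25, dtype=float).reshape(5, 5)
ii = integral_image(img)
assert region_sum(ii, 1, 1, 4, 4) == img[1:4, 1:4].sum()
print(region_sum(ii, 1, 1, 4, 4))
```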

  1. Efficient Wide Baseline Structure from Motion

    NASA Astrophysics Data System (ADS)

    Michelini, Mario; Mayer, Helmut

    2016-06-01

This paper presents a Structure from Motion approach for complex unorganized image sets. To achieve high accuracy and robustness, image triplets are employed and (approximate) camera calibration is assumed to be known. The focus lies on completely linking images even in the presence of large image distortions, e.g., those caused by wide baselines, as well as weak baselines. A method for embedding image descriptors into Hamming space is proposed for fast image similarity ranking. The latter is employed to limit the number of pairs to be matched by a wide baseline method. An iterative graph-based approach is proposed that formulates image linking as the search for a terminal Steiner minimum tree in a line graph. Finally, additional links are determined and employed to improve the accuracy of the pose estimation. By this means, loops in long image sequences are implicitly closed. The potential of the proposed approach is demonstrated by results for several complex image sets, also in comparison with VisualSFM.
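
    The abstract does not spell out the embedding, but a common way to map real-valued descriptors into Hamming space for fast similarity ranking is sign-random-projection hashing; the sketch below (descriptor and code sizes are assumptions) shows the XOR-and-popcount distance that makes the ranking cheap.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def embed_hamming(desc, planes):
        """Sign of random projections -> packed binary codes (one row per descriptor)."""
        bits = (desc @ planes.T) > 0.0
        return np.packbits(bits, axis=1)

    def hamming_distance(codes_a, codes_b):
        """Pairwise Hamming distances via XOR plus bit counting."""
        x = np.bitwise_xor(codes_a[:, None, :], codes_b[None, :, :])
        return np.unpackbits(x, axis=2).sum(axis=2)

    # 128-D SIFT-like descriptors embedded into 64-bit codes (sizes are assumptions).
    planes = rng.normal(size=(64, 128))
    a = embed_hamming(rng.normal(size=(5, 128)), planes)
    b = embed_hamming(rng.normal(size=(7, 128)), planes)
    print(hamming_distance(a, b).shape)   # (5, 7) distance matrix
    ```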

  2. Measurement of two-photon-absorption spectra through nonlinear fluorescence produced by a line-shaped excitation beam.

    PubMed

    Hasani, E; Parravicini, J; Tartara, L; Tomaselli, A; Tomassini, D

    2018-05-01

We propose an innovative experimental approach to estimating the two-photon absorption (TPA) spectrum of a fluorescent material. Our method extends the standard indirect fluorescence-based TPA measurement by employing a line-shaped excitation beam, generating a line-shaped fluorescence emission. Such a configuration, which requires a relatively high amount of optical power, yields a greatly increased fluorescence signal, thus avoiding the photon-counting detection devices usually used in these measurements and allowing ordinary detectors such as charge-coupled device (CCD) cameras to be employed. The method is finally tested on a fluorescein isothiocyanate sample, whose TPA spectrum, measured with the proposed technique, is compared with TPA spectra reported in the literature, confirming the validity of our experimental approach. © 2018 The Authors. Journal of Microscopy © 2018 Royal Microscopical Society.

  3. DIRBoost-an algorithm for boosting deformable image registration: application to lung CT intra-subject registration.

    PubMed

    Muenzing, Sascha E A; van Ginneken, Bram; Viergever, Max A; Pluim, Josien P W

    2014-04-01

    We introduce a boosting algorithm to improve on existing methods for deformable image registration (DIR). The proposed DIRBoost algorithm is inspired by the theory on hypothesis boosting, well known in the field of machine learning. DIRBoost utilizes a method for automatic registration error detection to obtain estimates of local registration quality. All areas detected as erroneously registered are subjected to boosting, i.e. undergo iterative registrations by employing boosting masks on both the fixed and moving image. We validated the DIRBoost algorithm on three different DIR methods (ANTS gSyn, NiftyReg, and DROP) on three independent reference datasets of pulmonary image scan pairs. DIRBoost reduced registration errors significantly and consistently on all reference datasets for each DIR algorithm, yielding an improvement of the registration accuracy by 5-34% depending on the dataset and the registration algorithm employed. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. VAMPnets for deep learning of molecular kinetics.

    PubMed

    Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank

    2018-01-02

There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering of the dimension-reduced data, and estimation of a Markov state model or a related model of the interconversion rates between molecular structures. This handcrafted approach demands substantial modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs as well as or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.
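
    For contrast with the end-to-end VAMPnet, the final step of the handcrafted pipeline, estimating a Markov state model from an already-clustered trajectory, reduces to counting lagged transitions and row-normalizing. A toy sketch (the discrete state assignments are assumed given):

    ```python
    import numpy as np

    def msm_transition_matrix(dtraj, n_states, lag=1):
        """Maximum-likelihood Markov state model from a discrete trajectory:
        count transitions at the given lag time, then row-normalize."""
        counts = np.zeros((n_states, n_states))
        for i, j in zip(dtraj[:-lag], dtraj[lag:]):
            counts[i, j] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return counts / np.where(rows == 0, 1.0, rows)   # guard empty rows

    # Toy discrete trajectory over 3 states (clustering output is assumed given).
    dtraj = np.array([0, 0, 1, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1, 0])
    print(msm_transition_matrix(dtraj, n_states=3, lag=1))
    ```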

  5. A Comparison of the Cheater Detection and the Unrelated Question Models: A Randomized Response Survey on Physical and Cognitive Doping in Recreational Triathletes

    PubMed Central

    Schröter, Hannes; Studzinski, Beatrix; Dietz, Pavel; Ulrich, Rolf; Striegel, Heiko; Simon, Perikles

    2016-01-01

    Purpose This study assessed the prevalence of physical and cognitive doping in recreational triathletes with two different randomized response models, that is, the Cheater Detection Model (CDM) and the Unrelated Question Model (UQM). Since both models have been employed in assessing doping, the major objective of this study was to investigate whether the estimates of these two models converge. Material and Methods An anonymous questionnaire was distributed to 2,967 athletes at two triathlon events (Frankfurt and Wiesbaden, Germany). Doping behavior was assessed either with the CDM (Frankfurt sample, one Wiesbaden subsample) or the UQM (one Wiesbaden subsample). A generalized likelihood-ratio test was employed to check whether the prevalence estimates differed significantly between models. In addition, we compared the prevalence rates of the present survey with those of a previous study on a comparable sample. Results After exclusion of incomplete questionnaires and outliers, the data of 2,017 athletes entered the final data analysis. Twelve-month prevalence for physical doping ranged from 4% (Wiesbaden, CDM and UQM) to 12% (Frankfurt CDM), and for cognitive doping from 1% (Wiesbaden, CDM) to 9% (Frankfurt CDM). The generalized likelihood-ratio test indicated no differences in prevalence rates between the two methods. Furthermore, there were no significant differences in prevalences between the present (undertaken in 2014) and the previous survey (undertaken in 2011), although the estimates tended to be smaller in the present survey. Discussion The results suggest that the two models can provide converging prevalence estimates. The high rate of cheaters estimated by the CDM, however, suggests that the present results must be seen as a lower bound and that the true prevalence of doping might be considerably higher. PMID:27218830
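
    For readers unfamiliar with randomized response designs, the UQM point estimate follows from the mixing identity λ = p·π_s + (1 − p)·π_u, where λ is the observed "yes" rate, p the probability of receiving the sensitive question, and π_u the known "yes" rate of the unrelated question. A minimal sketch with illustrative numbers (not the study's design parameters):

    ```python
    import numpy as np

    def uqm_prevalence(n_yes, n, p_sensitive, pi_unrelated):
        """Unrelated Question Model point estimate and approximate standard error.

        p_sensitive  : probability a respondent is directed to the sensitive question
        pi_unrelated : known 'yes' probability of the unrelated question
        """
        lam = n_yes / n                                    # observed 'yes' proportion
        pi_hat = (lam - (1.0 - p_sensitive) * pi_unrelated) / p_sensitive
        se = np.sqrt(lam * (1.0 - lam) / n) / p_sensitive
        return pi_hat, se

    # Illustrative numbers only: 230 'yes' out of 1000, p = 2/3, pi_u = 0.5.
    print(uqm_prevalence(n_yes=230, n=1000, p_sensitive=2/3, pi_unrelated=0.5))
    ```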

  6. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO₂ flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO₂ (ffCO₂) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model for ffCO₂ emissions and synthetic observations of ffCO₂ concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  8. Assessment of Crop Damage by Protected Wild Mammalian Herbivores on the Western Boundary of Tadoba-Andhari Tiger Reserve (TATR), Central India

    PubMed Central

    Bayani, Abhijeet; Tiwade, Dilip; Dongre, Ashok; Dongre, Aravind P.; Phatak, Rasika; Watve, Milind

    2016-01-01

    Crop raiding by wild herbivores close to an area of protected wildlife is a serious problem that can potentially undermine conservation efforts. Since there is orders of magnitude difference between farmers’ perception of damage and the compensation given by the government, an objective and realistic estimate of damage was found essential. We employed four different approaches to estimate the extent of and patterns in crop damage by wild herbivores along the western boundary of Tadoba-Andhari Tiger Reserve in the state of Maharashtra, central India. These approaches highlight different aspects of the problem but converge on an estimated damage of over 50% for the fields adjacent to the forest, gradually reducing in intensity with distance. We found that the visual damage assessment method currently employed by the government for paying compensation to farmers was uncorrelated to and grossly underestimated actual damage. The findings necessitate a radical rethinking of policies to assess, mitigate as well as compensate for crop damage caused by protected wildlife species. PMID:27093293

  9. Estimation of urban runoff and water quality using remote sensing and artificial intelligence.

    PubMed

    Ha, S R; Park, S Y; Park, D H

    2003-01-01

Water quality and quantity of runoff are strongly dependent on land use and land cover (LULC). In this study, we developed an improved parameter estimation procedure for an environmental model using remote sensing (RS) and artificial intelligence (AI) techniques. Landsat TM multi-band (7 bands) and Korea Multi-Purpose Satellite (KOMPSAT) panchromatic data were selected for input data processing. We employed two kinds of AI techniques, an RBF-NN (radial-basis-function neural network) and an ANN (artificial neural network), to classify the LULC of the study area. A bootstrap resampling method was employed to generate confidence intervals and the distribution of the unit load. SWMM was used to simulate urban runoff and water quality and was applied to the study watershed. Urban flow and non-point contamination were simulated using rainfall-runoff and measured water quality data. The estimated total runoff, peak time, and pollutant generation varied considerably according to the classification accuracy and the percentile of the unit load applied. The proposed procedure can be applied efficiently to water quality and runoff simulation in rapidly changing urban areas.
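
    A percentile bootstrap of the kind used here for the unit load resamples the observed values with replacement and reads the confidence interval off the empirical distribution of the resampled means. A minimal sketch with placeholder data:

    ```python
    import numpy as np

    def bootstrap_ci(sample, n_boot=10_000, alpha=0.05, seed=1):
        """Percentile bootstrap CI for the mean unit load."""
        rng = np.random.default_rng(seed)
        idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
        boot_means = np.asarray(sample)[idx].mean(axis=1)
        return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

    # Hypothetical measured unit loads [kg/ha/day]; values are placeholders.
    loads = np.array([1.8, 2.4, 2.1, 3.0, 1.6, 2.7, 2.2, 2.9])
    print(bootstrap_ci(loads))
    ```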

  10. Breaking the bottleneck: Use of molecular tailoring approach for the estimation of binding energies at MP2/CBS limit for large water clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R., E-mail: gadre@iitk.ac.in

    2016-03-14

A pragmatic method based on the molecular tailoring approach (MTA) is developed for accurately estimating the complete basis set (CBS) limit at second-order Møller-Plesset perturbation (MP2) theory for large molecular clusters with limited computational resources. It is applied to water clusters (H₂O)ₙ (n = 7, 8, 10, 16, 17, and 25) optimized employing the aug-cc-pVDZ (aVDZ) basis set. Binding energies (BEs) of these clusters are estimated at the MP2/aug-cc-pVNZ (aVNZ) [N = T, Q, and 5 (whenever possible)] levels of theory employing the grafted MTA (GMTA) methodology and are found to lie within 0.2 kcal/mol of the corresponding full-calculation MP2 BE, wherever available. The results are extrapolated to the CBS limit using a three-point formula. The GMTA-MP2 calculations are feasible on off-the-shelf hardware and show around 50%-65% savings of computational time. The methodology has potential for application to molecular clusters containing ∼100 atoms.
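
    The abstract does not name the specific three-point formula; a widely used choice assumes the exponential form E(X) = E_CBS + A·B^X for cardinal numbers X = 3 (T), 4 (Q), 5, which admits the closed-form solution sketched below. The input energies are placeholders, not values from the paper.

    ```python
    def cbs_three_point(e3, e4, e5):
        """Three-point exponential extrapolation E(X) = E_CBS + A*B**X
        for X = 3 (T), 4 (Q), 5; closed-form solution for E_CBS."""
        return (e3 * e5 - e4**2) / (e3 + e5 - 2.0 * e4)

    # Illustrative correlation energies [hartree]; placeholders only.
    print(cbs_three_point(-76.320, -76.352, -76.362))   # -> approx. -76.3665
    ```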

  11. A Secure Trust Establishment Scheme for Wireless Sensor Networks

    PubMed Central

    Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob

    2014-01-01

    Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471
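
    The paper's aggregation uses a modified one-step M-estimator; the generic (unmodified) one-step Huber version below conveys the idea: start from the median and take a single Newton step, so hostile outlier recommendations are down-weighted rather than averaged in.

    ```python
    import numpy as np

    def one_step_huber(x, c=1.345):
        """One-step Huber M-estimator: median start, one Newton step."""
        x = np.asarray(x, dtype=float)
        mu0 = np.median(x)
        s = 1.4826 * np.median(np.abs(x - mu0))      # robust scale (MAD)
        if s == 0.0:
            return mu0
        u = (x - mu0) / s
        psi = np.clip(u, -c, c)                      # Huber psi function
        dpsi = (np.abs(u) <= c).astype(float)        # its derivative
        return mu0 + s * psi.sum() / max(dpsi.sum(), 1.0)

    # Honest recommendations near 0.8 plus two hostile outliers.
    print(one_step_huber([0.78, 0.81, 0.80, 0.79, 0.82, 0.05, 0.99]))
    ```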

  12. Capture-recapture methodology

    USGS Publications Warehouse

    Gould, William R.; Kendall, William L.

    2013-01-01

    Capture-recapture methods were initially developed to estimate human population abundance, but since that time have seen widespread use for fish and wildlife populations to estimate and model various parameters of population, metapopulation, and disease dynamics. Repeated sampling of marked animals provides information for estimating abundance and tracking the fate of individuals in the face of imperfect detection. Mark types have evolved from clipping or tagging to use of noninvasive methods such as photography of natural markings and DNA collection from feces. Survival estimation has been emphasized more recently as have transition probabilities between life history states and/or geographical locations, even where some states are unobservable or uncertain. Sophisticated software has been developed to handle highly parameterized models, including environmental and individual covariates, to conduct model selection, and to employ various estimation approaches such as maximum likelihood and Bayesian approaches. With these user-friendly tools, complex statistical models for studying population dynamics have been made available to ecologists. The future will include a continuing trend toward integrating data types, both for tagged and untagged individuals, to produce more precise and robust population models.
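
    The simplest instance of the abundance estimation described here is the two-sample Lincoln-Petersen estimator, shown below in its bias-corrected (Chapman) form; the modern models surveyed generalize this to imperfect detection, covariates, and multiple states.

    ```python
    def chapman_estimate(n1, n2, m2):
        """Bias-corrected Lincoln-Petersen (Chapman) abundance estimator:
        n1 animals marked in sample 1, n2 caught in sample 2,
        m2 of those already marked."""
        return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

    # 120 marked, 150 captured later, 30 recaptures:
    # Chapman gives ~588; the plain Lincoln-Petersen ratio n1*n2/m2 gives 600.
    print(chapman_estimate(120, 150, 30))
    ```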

  13. The trapping and distribution of charge in polarized polymethylmethacrylate under electron-beam irradiation

    NASA Astrophysics Data System (ADS)

    Song, Z. G.; Gong, H.; Ong, C. K.

    1997-06-01

A scanning electron microscope (SEM) mirror-image method (MIM) is employed to investigate the charging behaviour of polarized polymethylmethacrylate (PMMA) under electron-beam irradiation. An ellipsoid is used to model the trapped charge distribution, and a fitting method is employed to calculate the total amount of trapped charge and its distribution parameters. The experimental results reveal that the charging ability decreases with increasing applied electric field (which polarizes the PMMA sample), whereas the trapped charge distribution is elongated along the direction of the applied electric field, and this elongation increases with increasing applied electric field. The charges are believed to be trapped in localized states, with activation energy and radius estimated to be about 19.6 meV and 0022-3727/30/11/004/img6, respectively.

  14. A Substance Use Cost Calculator for US Employers With an Emphasis on Prescription Pain Medication Misuse

    PubMed Central

    Goplerud, Eric; Hodge, Sarah; Benham, Tess

    2017-01-01

    Objective: Substance use disorders are among the most common and costly health conditions affecting Americans. Despite estimates of national costs exceeding $400 billion annually, individual companies may not see how substance use impacts their bottom lines through lost productivity and absenteeism, turnover, health care expenses, disability, and workers’ compensation. Methods: Data on employed adults (18 years and older) from 3 years (2012 to 2014) of the National Survey on Drug Use and Health Public Use Data Files were analyzed. Results: The results offer employers an authoritative, free, epidemiologically grounded, and easy-to-use tool that gives specific information about how alcohol, prescription pain medication misuse, and illicit drug use is likely impacting workplaces like theirs. Conclusion: Employers have detailed reports of the cost of substance use that can be used to improve workplace policies and health benefits. PMID:29116987

  15. Modal smoothing for analysis of room reflections measured with spherical microphone and loudspeaker arrays.

    PubMed

    Morgenstern, Hai; Rafaely, Boaz

    2018-02-01

    Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses with an important objective being the estimation of the direction-of-arrival (DOA) of direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.

  16. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
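
    The exact modified formula is in the paper; the normal-approximation core of an unequal-variance two-group sample size calculation (to which Schouten's formula adds a small-sample correction) looks like the following sketch.

    ```python
    from scipy.stats import norm

    def n_per_group(delta, sd1, sd2, ratio=1.0, alpha=0.05, power=0.8):
        """Normal-approximation sample sizes for comparing two means with
        unequal variances and group-size ratio n1/n2 = ratio; round up in
        practice. (Schouten's formula adds a small-sample correction.)"""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n2 = (sd1**2 / ratio + sd2**2) * (z / delta) ** 2
        return ratio * n2, n2

    print(n_per_group(delta=0.5, sd1=1.0, sd2=1.3))   # ~85 per group
    ```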

  17. Ultrasound semi-automated measurement of fetal nuchal translucency thickness based on principal direction estimation

    NASA Astrophysics Data System (ADS)

    Yoon, Heechul; Lee, Hyuntaek; Jung, Haekyung; Lee, Mi-Young; Won, Hye-Sung

    2015-03-01

The objective of this paper is to introduce a novel method for nuchal translucency (NT) boundary detection and thickness measurement; NT is one of the most significant markers in early screening for chromosomal defects such as Down syndrome. To improve the reliability and reproducibility of NT measurements, several automated methods have been introduced. However, their performance degrades when the NT borders are tilted due to varying fetal movements. We therefore propose an NT measurement method based on principal direction estimation that provides reliable and consistent performance regardless of both fetal position and NT direction. First, the Radon transform and a cost function are used to estimate the principal direction of the NT borders. Then, along the estimated angle bin, i.e., the main direction of the NT, gradient-based features are employed to find initial NT lines, which serve as the starting points of an active contour fitting method that locates the true NT borders. Finally, the maximum thickness is measured from the distances between the upper and lower NT borders by searching along lines orthogonal to the main NT direction. To evaluate the performance, 89 in vivo fetal images were collected, and a ground-truth database was measured by clinical experts. Quantitative results using intraclass correlation coefficients and difference analysis verify that the proposed method improves the reliability and reproducibility of maximum NT thickness measurement.
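
    The paper pairs the Radon transform with a cost function to find the principal NT direction; the sketch below uses projection variance as a stand-in cost, since projections are sharpest when the projection direction aligns with the dominant structure. The toy image and the variance criterion are assumptions for illustration.

    ```python
    import numpy as np
    from skimage.transform import radon

    def principal_direction(image):
        """Dominant structure orientation: the Radon projection has the
        highest variance when the projection direction aligns with it."""
        angles = np.arange(0.0, 180.0)
        sinogram = radon(image, theta=angles, circle=False)
        return angles[np.argmax(sinogram.var(axis=0))]

    # Toy image with one bright horizontal band.
    img = np.zeros((64, 64))
    img[30:34, :] = 1.0
    print(principal_direction(img))   # angle (deg) of the sharpest projection
    ```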

  18. Exposure to Secondhand Smoke and Attitudes Toward Smoke-Free Workplaces Among Employed U.S. Adults: Findings From the National Adult Tobacco Survey

    PubMed Central

    King, Brian A.; Homa, David M.; Dube, Shanta R.; Babb, Stephen D.

    2015-01-01

    Introduction This study assessed the prevalence and correlates of secondhand smoke (SHS) exposure and attitudes toward smoke-free workplaces among employed U.S. adults. Methods Data came from the 2009–2010 National Adult Tobacco Survey, a landline and cellular telephone survey of adults aged ≥18 years in the United States and the District of Columbia. National and state estimates of past 7-day workplace SHS exposure and attitudes toward indoor and outdoor smoke-free workplaces were assessed among employed adults. National estimates were calculated by sex, age, race/ethnicity, education, annual household income, sexual orientation, U.S. region, and smoking status. Results Among employed adults who did not smoke cigarettes, 20.4% reported past 7-day SHS exposure at their workplace (state range: 12.4% [Maine] to 30.8% [Nevada]). Nationally, prevalence of exposure was higher among males, those aged 18–44 years, non-Hispanic Blacks, Hispanics, and non-Hispanic American Indians/Alaska natives compared to non-Hispanic Whites, those with less education and income, those in the western United States, and those with no smoke-free workplace policy. Among all employed adults, 83.8% and 23.2% believed smoking should never be allowed in indoor and outdoor areas of workplaces, respectively. Conclusions One-fifth of employed U.S. adult nonsmokers are exposed to SHS in the workplace, and disparities in exposure exist across states and subpopulations. Most employed adults believe indoor areas of workplaces should be smoke free, and nearly one-quarter believe outdoor areas should be smoke free. Efforts to protect employees from SHS exposure and to educate the public about the dangers of SHS and benefits of smoke-free workplaces could be beneficial. PMID:24812025

  19. Joint reconstruction of the initial pressure and speed of sound distributions from combined photoacoustic and ultrasound tomography measurements

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Anastasio, Mark A.

    2017-12-01

    The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.

  20. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  1. A hybrid method of estimating pulsating flow parameters in the space-time domain

    NASA Astrophysics Data System (ADS)

    Pałczyński, Tomasz

    2017-05-01

    This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.

  2. Counteracting estimation bias and social influence to improve the wisdom of crowds.

    PubMed

    Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D

    2018-04-01

    Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).

  3. Volume estimation of brain abnormalities in MRI data

    NASA Astrophysics Data System (ADS)

    Suprijadi, Pratama, S. H.; Haryanto, F.

    2014-02-01

Abnormalities of brain tissue are a crucial issue in the medical field. Such conditions can be recognized through segmentation of specific regions in medical images obtained from an MRI dataset. Image processing is one computational approach that is very helpful for analyzing MRI data. In this study, a combination of segmentation and image rendering was used to isolate tumor and stroke regions. Two thresholding methods were employed to segment the abnormal regions, followed by filtering to reduce non-abnormal areas. Each MRI image was labeled and then used for volume estimation of the tumor- and stroke-affected areas. The algorithms are shown to be successful in isolating tumors and strokes in MRI images, with the stated detection accuracy depending on the thresholding parameters.
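
    Once the abnormal region is segmented, the volume estimate itself is voxel counting times voxel volume. A minimal sketch (the voxel dimensions here are assumptions; in practice they come from the MRI header):

    ```python
    import numpy as np

    def lesion_volume_ml(mask, voxel_mm=(1.0, 1.0, 5.0)):
        """Volume of a segmented region: voxel count x voxel volume.
        `mask` is the boolean segmentation from thresholding + filtering."""
        voxel_mm3 = float(np.prod(voxel_mm))
        return mask.sum() * voxel_mm3 / 1000.0    # mm^3 -> millilitres

    mask = np.zeros((64, 64, 20), dtype=bool)
    mask[20:30, 22:31, 5:9] = True                # toy "tumor" blob
    print(lesion_volume_ml(mask))                 # 360 voxels * 5 mm^3 = 1.8 ml
    ```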

  4. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  5. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

Deconvolution from wave-front sensing (DWFS) is an image compensation technique for turbulence-degraded images based on the simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex-spline-based average-slope measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least-squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method yields superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.

  6. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  7. Properties of star clusters - I. Automatic distance and extinction estimates

    NASA Astrophysics Data System (ADS)

    Buckner, Anne S. M.; Froebrich, Dirk

    2013-12-01

Determining star cluster distances is essential to analyse their properties and distribution in the Galaxy. In particular, it is desirable to have a reliable, purely photometric distance estimation method for large samples of newly discovered cluster candidates, e.g. from the Two Micron All Sky Survey, the UK Infrared Deep Sky Survey Galactic Plane Survey and VVV. Here, we establish an automatic method to estimate distances and reddening from near-infrared photometry alone, without the use of isochrone fitting. We employ a decontamination procedure of JHK photometry to determine the density of stars foreground to clusters and a galactic model to estimate distances. We then calibrate the method using clusters with known properties. This allows us to establish distance estimates with better than 40 per cent accuracy. We apply our method to determine the extinction and distance values of 378 known open clusters and 397 cluster candidates from the list of Froebrich, Scholz & Raftery. We find that the sample is biased towards clusters at a distance of approximately 3 kpc, with typical distances between 2 and 6 kpc. Using the cluster distances and extinction values, we investigate how the average extinction per kiloparsec changes as a function of Galactic longitude. We find a systematic dependence that can be approximated by A_H(l) [mag kpc⁻¹] = 0.10 + 0.001 × |l − 180°|/°.

  8. Simulating the Refractive Index Structure Constant (C_n^2) in the Surface Layer at Antarctica with a Mesoscale Model

    NASA Astrophysics Data System (ADS)

    Qing, Chun; Wu, Xiaoqing; Li, Xuebin; Tian, Qiguo; Liu, Dong; Rao, Ruizhong; Zhu, Wenyue

    2018-01-01

In this paper, we introduce an approach wherein the Weather Research and Forecasting (WRF) model is coupled with the bulk aerodynamic method to estimate the surface layer refractive index structure constant (C_n^2) above Taishan Station in Antarctica. First, we use the measured meteorological parameters to estimate C_n^2 with the bulk aerodynamic method; second, we use the WRF model output parameters to estimate C_n^2 with the bulk aerodynamic method. Finally, the corresponding C_n^2 values from the micro-thermometer are compared with the C_n^2 values estimated using the WRF model coupled with the bulk aerodynamic method. We analyzed the statistical operators (bias, root mean square error (RMSE), bias-corrected RMSE (σ), and correlation coefficient (R_xy)) on a 20-day data set to assess how this approach performs. In addition, we employ contingency tables to investigate the estimation quality of this approach, which provide complementary key information with respect to the bias, RMSE, σ, and R_xy. The quantitative results are encouraging and confirm the fine performance of this approach. The main conclusion is that this approach can help optimize observing time in astronomical applications and provides complementary key information for evaluating potential astronomical sites.

  9. Analysis and interpretation of diffraction data from complex, anisotropic materials

    NASA Astrophysics Data System (ADS)

    Tutuncu, Goknur

    Most materials are elastically anisotropic and exhibit additional anisotropy beyond elastic deformation. For instance, in ferroelectric materials the main inelastic deformation mode is via domains, which are highly anisotropic crystallographic features. To quantify this anisotropy of ferroelectrics, advanced X-ray and neutron diffraction methods were employed. Extensive sets of data were collected from tetragonal BaTiO3, PZT and other ferroelectric ceramics. Data analysis was challenging due to the complex constitutive behavior of these materials. To quantify the elastic strain and texture evolution in ferroelectrics under loading, a number of data analysis techniques such as the single peak and Rietveld methods were used and their advantages and disadvantages compared. It was observed that the single peak analysis fails at low peak intensities especially after domain switching while the Rietveld method does not account for lattice strain anisotropy although it overcomes the low intensity problem via whole pattern analysis. To better account for strain anisotropy the constant stress (Reuss) approximation was employed within the Rietveld method and new formulations to estimate lattice strain were proposed. Along the way, new approaches for handling highly anisotropic lattice strain data were also developed and applied. All of the ceramics studied exhibited significant changes in their crystallographic texture after loading indicating non-180° domain switching. For a full interpretation of domain switching the spherical harmonics method was employed in Rietveld. A procedure for simultaneous refinement of multiple data sets was established for a complete texture analysis. To further interpret diffraction data, a solid mechanics model based on the self-consistent approach was used in calculating lattice strain and texture evolution during the loading of a polycrystalline ferroelectric. The model estimates both the macroscopic average response of a specimen and its hkl-dependent lattice strains for different reflections. It also tracks the number of grains (or domains) contributing to each reflection and allows for domain switching. The agreement between the model and experimental data was found to be satisfactory.

  10. Flood Scenario Simulation and Disaster Estimation of Ba-Ma Creek Watershed in Nantou County, Taiwan

    NASA Astrophysics Data System (ADS)

    Peng, S. H.; Hsu, Y. K.

    2018-04-01

The present study proposes several flood disaster scenario simulations based on a historical flood event and planning requirements in the Ba-Ma Creek watershed, located in Nantou County, Taiwan. The simulations were performed using FLO-2D, a numerical model that computes flood velocity and depth over two-dimensional terrain. The computed results were then used to estimate the possible damage incurred by the flood and can serve as references for disaster prevention. Moreover, the simulated results can be employed for flood damage estimation using the method suggested by the Water Resources Agency of Taiwan. Finally, conclusions and perspectives are presented.

  11. Making It Count: Improving Estimates of the Size of Transgender and Gender Nonconforming Populations.

    PubMed

    Deutsch, Madeline B

    2016-06-01

    An accurate estimate of the number of transgender and gender nonconforming people is essential to inform policy and funding priorities and decisions. Historical reports of population sizes of 1 in 4000 to 1 in 50,000 have been based on clinical populations and likely underestimate the size of the transgender population. More recent population-based studies have found a 10- to 100-fold increase in population size. Studies that estimate population size should be population based, employ the two-step method to allow for collection of both gender identity and sex assigned at birth, and include measures to capture the range of transgender people with nonbinary gender identities.

  12. Dynamic control of remelting processes

    DOEpatents

    Bertram, Lee A.; Williamson, Rodney L.; Melgaard, David K.; Beaman, Joseph J.; Evans, David G.

    2000-01-01

    An apparatus and method of controlling a remelting process by providing measured process variable values to a process controller; estimating process variable values using a process model of a remelting process; and outputting estimated process variable values from the process controller. Feedback and feedforward control devices receive the estimated process variable values and adjust inputs to the remelting process. Electrode weight, electrode mass, electrode gap, process current, process voltage, electrode position, electrode temperature, electrode thermal boundary layer thickness, electrode velocity, electrode acceleration, slag temperature, melting efficiency, cooling water temperature, cooling water flow rate, crucible temperature profile, slag skin temperature, and/or drip short events are employed, as are parameters representing physical constraints of electroslag remelting or vacuum arc remelting, as applicable.

  13. Estimation of true height: a study in population-specific methods among young South African adults.

    PubMed

    Lahner, Christen Renée; Kassier, Susanna Maria; Veldman, Frederick Johannes

    2017-02-01

To investigate the accuracy of arm-associated height estimation methods in the calculation of true height compared with stretch stature in a sample of young South African adults. A cross-sectional descriptive design was employed. Pietermaritzburg, Westville and Durban, KwaZulu-Natal, South Africa, 2015. Convenience sample (N = 900) aged 18-24 years, comprising an equal number of participants of each gender (150 per gender within each race group) stratified across race (Caucasian, Black African and Indian). Continuous variables investigated included: (i) stretch stature; (ii) total armspan; (iii) half-armspan; (iv) half-armspan ×2; (v) demi-span; (vi) demi-span gender-specific equation; (vii) WHO equation; and (viii) WHO-adjusted equations; participants were also categorized according to gender and race. Statistical analysis was conducted using IBM SPSS Statistics Version 21.0. Significant associations were identified between gender and height estimation measurements, with males being anatomically larger than females (P<0·001). Significant differences were documented when study participants were stratified according to race and gender (P<0·001). Anatomical similarities were noted between Indians and Black Africans, whereas Caucasians were anatomically different from the other race groups. Arm-associated height estimation methods were able to estimate true height; however, each method was specific to a given gender and race group. Height can be calculated using arm-associated measurements. Although universal equations for estimating true height exist, the use of race-, gender- and population-specific equations should be considered to enhance accuracy.

  14. Comparing least-squares and quantile regression approaches to analyzing median hospital charges.

    PubMed

    Olsen, Cody S; Clark, Amy E; Thomas, Andrea M; Cook, Lawrence J

    2012-07-01

    Emergency department (ED) and hospital charges obtained from administrative data sets are useful descriptors of injury severity and the burden to EDs and the health care system. However, charges are typically positively skewed due to costly procedures, long hospital stays, and complicated or prolonged treatment for few patients. The median is not affected by extreme observations and is useful in describing and comparing distributions of hospital charges. A least-squares analysis employing a log transformation is one approach for estimating median hospital charges, corresponding confidence intervals (CIs), and differences between groups; however, this method requires certain distributional properties. An alternate method is quantile regression, which allows estimation and inference related to the median without making distributional assumptions. The objective was to compare the log-transformation least-squares method to the quantile regression approach for estimating median hospital charges, differences in median charges between groups, and associated CIs. The authors performed simulations using repeated sampling of observed statewide ED and hospital charges and charges randomly generated from a hypothetical lognormal distribution. The median and 95% CI and the multiplicative difference between the median charges of two groups were estimated using both least-squares and quantile regression methods. Performance of the two methods was evaluated. In contrast to least squares, quantile regression produced estimates that were unbiased and had smaller mean square errors in simulations of observed ED and hospital charges. Both methods performed well in simulations of hypothetical charges that met least-squares method assumptions. When the data did not follow the assumed distribution, least-squares estimates were often biased, and the associated CIs had lower than expected coverage as sample size increased. Quantile regression analyses of hospital charges provide unbiased estimates even when lognormal and equal variance assumptions are violated. These methods may be particularly useful in describing and analyzing hospital charges from administrative data sets. © 2012 by the Society for Academic Emergency Medicine.
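
    The two approaches compared above can be sketched in a few lines: least squares on log charges back-transformed with exp, versus median (q = 0.5) regression, here with statsmodels on simulated lognormal charges where both should recover the same multiplicative difference. The simulation parameters are illustrative assumptions.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    group = np.repeat([0, 1], 500)                 # e.g., two injury categories
    # Lognormal charges; group 1 has 1.5x the median charge of group 0.
    charges = np.exp(7.0 + np.log(1.5) * group + rng.normal(0, 1.2, 1000))

    X = sm.add_constant(group)

    # Least squares on log charges: exp(coef) estimates the median ratio
    # only when the log-scale errors are symmetric with equal variance.
    ols = sm.OLS(np.log(charges), X).fit()

    # Median regression on log charges needs no such distributional assumption.
    qr = sm.QuantReg(np.log(charges), X).fit(q=0.5)

    print(np.exp(ols.params[1]), np.exp(qr.params[1]))   # both near 1.5 here
    ```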

  15. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

Bootstrapping is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, sample size estimation for comparing two parallel-design arms on continuous data by a bootstrap procedure is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculations by mathematical formulas (under the normal distribution assumption) for the identical data are also carried out. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal-distribution-based formula is far larger than that estimated by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation at the outset, and to employ the same statistical method as will be used in the subsequent statistical analysis for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial will extrapolate.
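
    The bootstrap power calculation described here amounts to: resample the pilot data at a candidate sample size, apply the same test planned for the final analysis (here a Wilcoxon-Mann-Whitney test), and report the fraction of significant replicates. A minimal sketch with placeholder pilot data:

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=2000, alpha=0.05, seed=3):
        """Power at a candidate sample size: resample pilot data and apply
        the same nonparametric test planned for the final analysis."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_boot):
            a = rng.choice(pilot_a, size=n_per_arm, replace=True)
            b = rng.choice(pilot_b, size=n_per_arm, replace=True)
            if mannwhitneyu(a, b).pvalue < alpha:
                hits += 1
        return hits / n_boot

    # Skewed pilot data (placeholders); increase n_per_arm until power >= 0.8.
    rng = np.random.default_rng(0)
    pilot_a, pilot_b = rng.exponential(1.0, 40), rng.exponential(1.6, 40)
    print(bootstrap_power(pilot_a, pilot_b, n_per_arm=60))
    ```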

  16. Problems in estimating self-supplied industrial water use by indirect methods, the California example

    USGS Publications Warehouse

    Burt, R.J.

    1983-01-01

    Consumptive fresh-water use by industry in California is estimated at about 230 million gallons per day, or about one-half of one percent of agricultural withdrawals in the State , and only about 1 percent of agricultural consumptive use. Therefore, a significant State-wide realignment of the total water resources could not be made by industrial conservation measures. Nevertheless, considerable latitude for water conservation exists in industry -- fresh water consumed by self-supplied industry amounts to about 40 percent of its withdrawals in California, and only about 10 to 15 percent nationally (not including power-plant use). Furthermore, where firms withdraw and consume less water there is more for others nearby to use. The main question in attempting to estimate self-supplied industrial water use in California by indirect methods was whether accurate estimates of industrial water use could be made from data on surrogates such as production and employment. The answer is apparently not. A fundamental problem was that different data bases produced variable coefficients of water use for similar industries. Much of the potential for error appeared to lie in the water data bases rather than the production or employment data. The apparent reasons are that water-use data are based on responses to questionnaires, which are prone to errors in reporting, and because the data may be aggregated inappropriately for this kind of correlation. Industries within an apparently similar category commonly use different amounts of water, both because of differences in the product and because of differences in production processes even where the end-product is similar. (USGS)

  17. Leveraging probabilistic peak detection to estimate baseline drift in complex chromatographic samples.

    PubMed

    Lopatka, Martin; Barcaru, Andrei; Sjerps, Marjan J; Vivó-Truyols, Gabriel

    2016-01-29

    Accurate analysis of chromatographic data often requires the removal of baseline drift. A frequently employed strategy strives to determine asymmetric weights in order to fit a baseline model by regression. Unfortunately, chromatograms characterized by a very high peak saturation pose a significant challenge to such algorithms. In addition, a low signal-to-noise ratio (i.e. s/n<40) also adversely affects accurate baseline correction by asymmetrically weighted regression. We present a baseline estimation method that leverages a probabilistic peak detection algorithm. A posterior probability of being affected by a peak is computed for each point in the chromatogram, leading to a set of weights that allow non-iterative calculation of a baseline estimate. For extremely saturated chromatograms, the peak weighted (PW) method demonstrates notable improvement compared to the other methods examined. However, in chromatograms characterized by low-noise and well-resolved peaks, the asymmetric least squares (ALS) and the more sophisticated Mixture Model (MM) approaches achieve superior results in significantly less time. We evaluate the performance of these three baseline correction methods over a range of chromatographic conditions to demonstrate the cases in which each method is most appropriate. Copyright © 2016 Elsevier B.V. All rights reserved.
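
    The core of the peak-weighted idea is that the posterior peak probabilities become regression weights, so the baseline can be fit in a single non-iterative pass. The sketch below substitutes a weighted polynomial fit for the paper's baseline model and fakes the peak posterior; both are simplifying assumptions.

    ```python
    import numpy as np

    def weighted_baseline(t, signal, p_peak, degree=3):
        """Non-iterative baseline estimate: weight each point by the posterior
        probability that it is NOT under a peak, then fit one weighted
        polynomial. `p_peak` would come from the probabilistic peak detector."""
        w = 1.0 - p_peak
        coeffs = np.polyfit(t, signal, deg=degree, w=w)
        return np.polyval(coeffs, t)

    t = np.linspace(0, 1, 500)
    drift = 2.0 + 3.0 * t - 2.0 * t**2                    # slow baseline drift
    peak = 8.0 * np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2)   # one chromatographic peak
    p_peak = np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)       # toy peak posterior
    baseline = weighted_baseline(t, drift + peak, p_peak)
    print(np.abs(baseline - drift).max())                 # small residual error
    ```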

  18. Dealing with gene expression missing data.

    PubMed

    Brás, L P; Menezes, J C

    2006-05-01

A comparative evaluation of different methods for estimating missing values in microarray data is presented: weighted K-nearest neighbours imputation (KNNimpute), regression-based methods such as local least squares imputation (LLSimpute) and partial least squares imputation (PLSimpute), and Bayesian principal component analysis (BPCA). The influence on prediction accuracy of several factors, such as the methods' parameters, the type of data relationships used in the estimation process (i.e. row-wise, column-wise or both), the missing rate and pattern, and the type of experiment [time series (TS), non-time series (NTS) or mixed (MIX) experiments], is elucidated. Improvements based on the iterative use of data (iterative LLS and PLS imputation, ILLSimpute and IPLSimpute), the need to perform initial imputations (modified PLS and Helland PLS imputation, MPLSimpute and HPLSimpute) and the type of relationships employed (KNNarray, LLSarray, HPLSarray and alternating PLS, APLSimpute) are proposed. Overall, it is shown that data set properties (type of experiment, missing rate and pattern) affect the data similarity structure, and therefore influence the methods' performance. LLSimpute and ILLSimpute are preferable for data with a stronger similarity structure (TS and MIX experiments), whereas PLS-based methods (MPLSimpute, IPLSimpute and APLSimpute) are preferable when estimating NTS missing data.
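
    Of the methods compared, KNNimpute is the simplest to sketch: each missing entry is replaced by a (distance-weighted) average over the K most similar rows. Using scikit-learn's KNNImputer as a stand-in implementation:

    ```python
    import numpy as np
    from sklearn.impute import KNNImputer

    # Toy expression matrix (genes x arrays) with missing entries.
    X = np.array([[1.0, 1.1, 0.9, 1.0],
                  [2.0, np.nan, 1.9, 2.1],
                  [0.2, 0.1, np.nan, 0.2],
                  [2.1, 2.2, 2.0, np.nan]])

    # Each missing value is filled from the K most similar rows,
    # weighted by distance -- the idea behind KNNimpute.
    imputer = KNNImputer(n_neighbors=2, weights="distance")
    print(imputer.fit_transform(X))
    ```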

  19. A theoretical perspective on the accuracy of rotational resonance (R²)-based distance measurements in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Pandey, Manoj Kumar; Ramachandran, Ramesh

    2010-03-01

    The application of solid-state NMR methodology for bio-molecular structure determination requires the measurement of constraints in the form of 13C-13C and 13C-15N distances, torsion angles and, in some cases, correlation of the anisotropic interactions. Since the availability of structurally important constraints in the solid state is limited due to lack of sufficient spectral resolution, the accuracy of the measured constraints becomes vital in studies relating the three-dimensional structure of proteins to their biological functions. Consequently, the theoretical methods employed to quantify the experimental data become important. To accentuate this aspect, we re-examine analytical two-spin models currently employed in the estimation of 13C-13C distances based on the rotational resonance (R²) phenomenon. Although the error bars for the estimated distances tend to be in the range 0.5-1.0 Å, R² experiments are routinely employed in a variety of systems ranging from simple peptides to more complex amyloidogenic proteins. In this article we address this aspect by highlighting the systematic errors introduced by analytical models employing phenomenological damping terms to describe multi-spin effects. Specifically, the spin dynamics in R² experiments is described using Floquet theory employing two different operator formalisms. The systematic errors introduced by the phenomenological damping terms and their limitations are elucidated in two analytical models and analysed by comparing the results with rigorous numerical simulations.

  20. Age and gender estimation using Region-SIFT and multi-layered SVM

    NASA Astrophysics Data System (ADS)

    Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun

    2018-04-01

    In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The suggested framework entails three processes. The first step is landmark-based face alignment. The second step is feature extraction. In this step, we introduce the region-SIFT feature extraction method based on facial landmarks. First, we define sub-regions of the face. We then extract SIFT features from each sub-region. In order to reduce the dimensionality of the features we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using multi-layered Support Vector Machines (SVM) for efficient classification. Rather than performing gender estimation and age estimation independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collect a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was carried out with the FERET database, CACD database, and DGIST_C database. The experimental results demonstrate that the proposed approach estimates age and gender very efficiently and accurately.
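
    The layered arrangement described, gender first and then a gender-specific age classifier, is straightforward to sketch. The snippet below assumes precomputed feature vectors standing in for the region-SIFT descriptors and uses the PCA-then-LDA reduction the abstract names; the labels and dimensions are synthetic placeholders, not the DGIST_C data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Stand-ins for region-SIFT descriptors: 600 faces x 512 dims,
    # with synthetic gender (0/1) and age-group (0..3) labels.
    X = rng.normal(size=(600, 512))
    gender = rng.integers(0, 2, size=600)
    age_group = rng.integers(0, 4, size=600)

    def make_classifier():
        # PCA then LDA for dimensionality reduction, then an SVM,
        # mirroring the reduction-plus-classification chain described.
        return make_pipeline(PCA(n_components=50),
                             LinearDiscriminantAnalysis(),
                             SVC(kernel="rbf", gamma="scale"))

    # Layer 1: gender classifier trained on all faces.
    gender_clf = make_classifier().fit(X, gender)

    # Layer 2: one age classifier per gender, trained on that subset.
    age_clf = {g: make_classifier().fit(X[gender == g], age_group[gender == g])
               for g in (0, 1)}

    def predict(x):
        # Age is estimated "according to gender", as in the paper.
        g = gender_clf.predict(x.reshape(1, -1))[0]
        return g, age_clf[g].predict(x.reshape(1, -1))[0]
    ```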

  1. Multiple imputation for cure rate quantile regression with censored data.

    PubMed

    Wu, Yuanshan; Yin, Guosheng

    2017-03-01

    The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.
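
    The imputation-and-average loop at the center of the approach can be sketched compactly. Below, ordinary median regression from statsmodels stands in for the paper's locally weighted censored quantile regression, and the per-subject uncured probabilities are taken as given rather than iteratively estimated, so this illustrates only the Bernoulli imputation and pooling steps.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 400

    # Toy data: covariate x, log survival time, censoring flag, and an
    # assumed per-subject probability of being uncured (susceptible),
    # standing in for the paper's iteratively estimated probability.
    x = rng.normal(size=n)
    logt = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)
    censored = rng.random(n) < 0.3
    p_uncured = np.clip(0.7 + 0.1 * x, 0.05, 0.95)

    M = 20                      # number of imputations
    X = sm.add_constant(x)
    coefs = []
    for _ in range(M):
        # Bernoulli imputation of cure status for censored subjects;
        # subjects with an observed event are uncured by definition.
        uncured = np.where(censored, rng.random(n) < p_uncured, True)
        fit = sm.QuantReg(logt[uncured], X[uncured]).fit(q=0.5)
        coefs.append(fit.params)

    # Average over imputations, as in multiple imputation.
    print("pooled median-regression coefficients:", np.mean(coefs, axis=0))
    ```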

  2. Absenteeism and Employer Costs Associated With Chronic Diseases and Health Risk Factors in the US Workforce.

    PubMed

    Asay, Garrett R Beeler; Roy, Kakoli; Lang, Jason E; Payne, Rebecca L; Howard, David H

    2016-10-06

    Employers may incur costs related to absenteeism among employees who have chronic diseases or unhealthy behaviors. We examined the association between employee absenteeism and 5 conditions: 3 risk factors (smoking, physical inactivity, and obesity) and 2 chronic diseases (hypertension and diabetes). We identified 5 chronic diseases or risk factors from 2 data sources: MarketScan Health Risk Assessment and the Medical Expenditure Panel Survey (MEPS). Absenteeism was measured as the number of workdays missed because of sickness or injury. We used zero-inflated Poisson regression to estimate excess absenteeism as the difference in the number of days missed from work by those who reported having a risk factor or chronic disease and those who did not. Covariates included demographics (eg, age, education, sex) and employment variables (eg, industry, union membership). We quantified absenteeism costs in 2011 and adjusted them to reflect growth in employment costs to 2015 dollars. Finally, we estimated absenteeism costs for a hypothetical small employer (100 employees) and a hypothetical large employer (1,000 employees). Absenteeism estimates ranged from 1 to 2 days per individual per year depending on the risk factor or chronic disease. Except for the physical inactivity and obesity estimates, disease- and risk-factor-specific estimates were similar in MEPS and MarketScan. Absenteeism increased with the number of risk factors or diseases reported. Nationally, each risk factor or disease was associated with annual absenteeism costs greater than $2 billion. Absenteeism costs ranged from $16 to $81 (small employer) and $17 to $286 (large employer) per employee per year. Absenteeism costs associated with chronic diseases and health risk factors can be substantial. Employers may incur these costs through lower productivity, and employees could incur costs through lower wages.
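
    The excess-days contrast from a zero-inflated Poisson fit can be illustrated with statsmodels' ZeroInflatedPoisson; the single binary risk factor, effect sizes, and zero-inflation share below are invented for the sketch, and a real analysis would add the demographic and employment covariates the study describes.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(7)
    n = 2000

    # Synthetic workforce: a binary risk factor raises expected days
    # missed; a structural-zero group misses no days regardless.
    risk = rng.integers(0, 2, size=n).astype(float)
    lam = np.exp(0.8 + 0.3 * risk)
    structural_zero = rng.random(n) < 0.4
    days = np.where(structural_zero, 0, rng.poisson(lam))

    X = sm.add_constant(risk)
    res = ZeroInflatedPoisson(days, X, exog_infl=np.ones((n, 1))).fit(disp=False)

    # Excess absenteeism: predicted days with vs without the risk factor.
    mu1 = res.predict(exog=[[1.0, 1.0]], exog_infl=[[1.0]])
    mu0 = res.predict(exog=[[1.0, 0.0]], exog_infl=[[1.0]])
    print("excess days per employee per year:", float(mu1 - mu0))
    ```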

  3. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis is employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of the target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537

  4. Corruption costs lives: evidence from a cross-country study.

    PubMed

    Li, Qiang; An, Lian; Xu, Jing; Baliamoune-Lutz, Mina

    2018-01-01

    This paper investigates the effect of corruption on health outcomes by using cross-country panel data covering about 150 countries for the period of 1995 to 2012. We employ ordinary least squares (OLS), fixed-effects and two-stage least squares (2SLS) estimation methods, and find that corruption significantly increases mortality rates, and reduces life expectancy and immunization rates. The results are consistent across different regions, gender, and measures of corruption. The findings suggest that reducing corruption can be an effective method to improve health outcomes.
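
    A bare-bones numerical illustration of the 2SLS step may help readers unfamiliar with it; the instrument, confounder, and coefficients below are entirely synthetic, chosen only so that naive OLS is biased while the two-stage estimate recovers the true effect.

    ```python
    import numpy as np

    def two_stage_least_squares(y, X, Z):
        """Manual 2SLS: project X onto the instruments Z (stage 1),
        then regress y on the projected regressors (stage 2).
        X and Z should both include a constant column."""
        X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
        return np.linalg.lstsq(X_hat, y, rcond=None)[0]

    # Toy cross-country setup: corruption is endogenous to mortality;
    # the instrument z shifts corruption but affects mortality only
    # through it (all names and magnitudes are illustrative).
    rng = np.random.default_rng(3)
    n = 150
    z = rng.normal(size=n)
    u = rng.normal(size=n)                 # unobserved confounder
    corruption = 0.8 * z + 0.5 * u + rng.normal(size=n)
    mortality = 1.0 + 0.6 * corruption + 0.9 * u + rng.normal(size=n)

    const = np.ones(n)
    X = np.column_stack([const, corruption])
    Z = np.column_stack([const, z])
    naive = np.linalg.lstsq(X, mortality, rcond=None)[0]
    print("naive OLS:", naive[1],
          " 2SLS:", two_stage_least_squares(mortality, X, Z)[1])
    ```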

  5. Genes with minimal phylogenetic information are problematic for coalescent analyses when gene tree estimation is biased.

    PubMed

    Xi, Zhenxiang; Liu, Liang; Davis, Charles C

    2015-11-01

    The development and application of coalescent methods are undergoing rapid changes. One little explored area that bears on the application of gene-tree-based coalescent methods to species tree estimation is gene informativeness. Here, we investigate the accuracy of these coalescent methods when genes have minimal phylogenetic information, including the implementation of the multilocus bootstrap approach. Using simulated DNA sequences, we demonstrate that genes with minimal phylogenetic information can produce unreliable gene trees (i.e., high error in gene tree estimation), which may in turn reduce the accuracy of species tree estimation using gene-tree-based coalescent methods. We demonstrate that this problem can be alleviated by sampling more genes, as is commonly done in large-scale phylogenomic analyses. This applies even when these genes are minimally informative. If gene tree estimation is biased, however, gene-tree-based coalescent analyses will produce inconsistent results, which cannot be remedied by increasing the number of genes. In this case, it is not the gene-tree-based coalescent methods that are flawed, but rather the input data (i.e., estimated gene trees). Along these lines, the commonly used program PhyML has a tendency to infer one particular bifurcating topology even when the true relationship is best represented as a polytomy. We additionally corroborate these findings by analyzing the 183-locus mammal data set assembled by McCormack et al. (2012) using ultra-conserved elements (UCEs) and flanking DNA. Lastly, we demonstrate that when employing the multilocus bootstrap approach on this 183-locus data set, there is no strong conflict between species trees estimated from concatenation and gene-tree-based coalescent analyses, as has been previously suggested by Gatesy and Springer (2014). Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Task-based data-acquisition optimization for sparse image reconstruction systems

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  7. Assessing the Prospects for Employment in an Expansion of US Aquaculture

    NASA Astrophysics Data System (ADS)

    Ngo, N.

    2006-12-01

    The United States imports 60 percent of its seafood, leading to a $7 billion seafood trade deficit. To mitigate this deficit, the National Oceanographic and Atmospheric Administration (NOAA), a branch of the U.S. Department of Commerce, has promoted the expansion of U.S. production of seafood by aquaculture. NOAA projects that the future expansion of a U.S. aquaculture industry could produce as much as $5 billion in annual sales. NOAA claims that one of the benefits of this expansion would be an increase in employment from 180,000 to 600,000 persons (100,000 indirect jobs and 500,000 direct jobs). Sources of these estimates and the assumptions upon which they are based are unclear, however. The Marine Aquaculture Task Force (MATF), an independent scientific panel, has been skeptical of NOAA's employment estimates, claiming that its sources of information are weak and based upon dubious assumptions. If NOAA has exaggerated its employment projections, then the benefits from an expansion of U.S. aquaculture production would not be as large as projected. My study examined published estimates of labor productivity from the domestic and foreign aquaculture of a variety of species, and I projected the potential increase in employment associated with a $5 billion aquaculture industry, as proposed by NOAA. Results showed that employment would range from only 40,000 to 128,000 direct jobs by 2025 as a consequence of the proposed expansion. Consequently, NOAA may have overestimated its employment projections, possibly by as much as 170 percent, implying that NOAA's employment estimate requires further research or adjustment.

  8. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    NASA Astrophysics Data System (ADS)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset, and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to a three-dimensional feature space to assign a degree of physicalness to each cluster. The proposed algorithm is applied to two case studies: one with synthetic data and one with real test data obtained from a hammer impact test. The results indicate that the algorithm successfully clusters similar modes and gives a reasonable quantification of the extent to which each cluster is physical.

  9. Unified framework to evaluate panmixia and migration direction among multiple sampling locations.

    PubMed

    Beerli, Peter; Palczewski, Michal

    2010-05-01

    For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
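
    The contrast the authors draw between the two marginal-likelihood estimators is easy to reproduce on a toy conjugate model where the truth is known in closed form. The sketch below uses a normal mean with a normal prior, so every power posterior can be sampled exactly; it illustrates the estimators' behaviour, not the MIGRATE implementation.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.special import logsumexp

    rng = np.random.default_rng(0)

    # Toy model: y_i ~ N(mu, 1) with prior mu ~ N(0, 1). Conjugacy makes
    # every power posterior an exact normal and gives the true marginal
    # likelihood in closed form for comparison.
    n = 50
    y = rng.normal(loc=0.5, scale=1.0, size=n)

    def loglik(mu_draws):
        return (-0.5 * n * np.log(2 * np.pi)
                - 0.5 * np.sum((y - mu_draws[:, None])**2, axis=1))

    def power_posterior_draws(t, size):
        prec = 1.0 + t * n          # prior precision + tempered likelihood
        return rng.normal(t * y.sum() / prec, np.sqrt(1.0 / prec), size=size)

    # Thermodynamic integration: log Z = integral_0^1 E_t[log L] dt,
    # with a temperature ladder packed near t = 0.
    temps = np.linspace(0.0, 1.0, 32)**3
    e_ll = [loglik(power_posterior_draws(t, 4000)).mean() for t in temps]
    log_z_ti = trapezoid(e_ll, temps)

    # Harmonic mean estimator from draws of the full posterior (t = 1).
    ll = loglik(power_posterior_draws(1.0, 4000))
    log_z_hm = -(logsumexp(-ll) - np.log(ll.size))

    # Exact answer: marginally, y ~ N(0, I + 11^T).
    S = np.eye(n) + np.ones((n, n))
    log_z_exact = -0.5 * (n * np.log(2 * np.pi)
                          + np.linalg.slogdet(S)[1]
                          + y @ np.linalg.solve(S, y))
    print(f"exact {log_z_exact:.2f}  TI {log_z_ti:.2f}  HM {log_z_hm:.2f}")
    ```

    Repeated runs typically show the thermodynamic-integration estimate tight around the exact value while the harmonic mean is unstable and biased, mirroring the abstract's conclusion.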

  10. Volume estimation using food specific shape templates in mobile image-based dietary assessment

    NASA Astrophysics Data System (ADS)

    Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.

    2011-03-01

    As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through segmentation and classification of the food images, our system chooses a particular template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.

  11. Online estimation of lithium-ion battery capacity using sparse Bayesian learning

    NASA Astrophysics Data System (ADS)

    Hu, Chao; Jain, Gaurav; Schmidt, Craig; Strief, Carrie; Sullivan, Melani

    2015-09-01

    Lithium-ion (Li-ion) rechargeable batteries are used as one of the major energy storage components for implantable medical devices. Reliability of Li-ion batteries used in these devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, patients and physicians. To ensure a Li-ion battery operates reliably, it is important to develop health monitoring techniques that accurately estimate the capacity of the battery throughout its life-time. This paper presents a sparse Bayesian learning method that utilizes the charge voltage and current measurements to estimate the capacity of a Li-ion battery used in an implantable medical device. Relevance Vector Machine (RVM) is employed as a probabilistic kernel regression method to learn the complex dependency of the battery capacity on the characteristic features that are extracted from the charge voltage and current measurements. Owing to the sparsity property of RVM, the proposed method generates a reduced-scale regression model that consumes only a small fraction of the CPU time required by a full-scale model, which makes online capacity estimation computationally efficient. 10 years' continuous cycling data and post-explant cycling data obtained from Li-ion prismatic cells are used to verify the performance of the proposed method.
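
    scikit-learn's ARDRegression offers a convenient, if linear, stand-in for sketching the sparse-Bayesian idea behind the RVM: irrelevant inputs have their weights pruned toward zero, leaving a small active set and a cheap model at prediction time. The charge-curve features and fade model below are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(5)

    # Stand-in features from charge curves (e.g. time-to-voltage
    # thresholds, current integrals): 8 inputs, only 2 informative.
    X = rng.normal(size=(300, 8))
    capacity = (1.0 + 0.05 * X[:, 0] - 0.02 * X[:, 1]
                + 0.005 * rng.normal(size=300))

    # Sparse Bayesian regression: ARD prunes irrelevant features,
    # much as the paper's RVM retains only a few relevance vectors.
    model = ARDRegression().fit(X[:200], capacity[:200])
    pred, std = model.predict(X[200:], return_std=True)

    print("held-out RMSE:", np.sqrt(np.mean((pred - capacity[200:])**2)))
    print("learned weights:", np.round(model.coef_, 3))  # near-zero off the 2 true inputs
    ```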

  12. The Employment of Part-Time Faculty at Community Colleges

    ERIC Educational Resources Information Center

    Christensen, Chad

    2008-01-01

    Institutions of higher education have been gradually employing more part-time faculty over the past several decades. Walsh estimates that part-time faculty employment across all types of institutions increased 79 percent from 1981 to 1999. The growth of part-time faculty at community colleges is equally astounding. It is now estimated that 67…

  13. A Unified Framework for Estimating Minimum Detectable Effects for Comparative Short Interrupted Time Series Designs

    ERIC Educational Resources Information Center

    Price, Cristofer; Unlu, Fatih

    2014-01-01

    The Comparative Short Interrupted Time Series (C-SITS) design is a frequently employed quasi-experimental method, in which the pre- and post-intervention changes observed in the outcome levels of a treatment group are compared with those of a comparison group; the difference between the former and the latter is attributed to the treatment. The…

  14. Infrared Dielectric Properties of Low-Stress Silicon Oxide

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Wollack, Edward J.; Brown, Ari D.; Miller, Kevin H.

    2016-01-01

    Silicon oxide thin films play an important role in the realization of optical coatings and high-performance electrical circuits. Estimates of the dielectric function in the far- and mid-infrared regime are derived from the observed transmittance spectrum for a commonly employed low-stress silicon oxide formulation. The experimental, modeling, and numerical methods used to extract the dielectric function are presented.

  15. Clay Caterpillar Whodunit: A Customizable Method for Studying Predator-Prey Interactions in the Field

    ERIC Educational Resources Information Center

    Curtis, Rachel; Klemens, Jeffrey A.; Agosta, Salvatore J.; Bartlow, Andrew W.; Wood, Steve; Carlson, Jason A.; Stratford, Jeffrey A.; Steele, Michael A.

    2013-01-01

    Predator-prey dynamics are an important concept in ecology, often serving as an introduction to the field of community ecology. However, these dynamics are difficult for students to observe directly. We describe a methodology that employs model caterpillars made of clay to estimate rates of predator attack on a prey species. This approach can be…

  16. Quantifying allometric model uncertainty for plot-level live tree biomass stocks with a data-driven, hierarchical framework

    Treesearch

    Brian J. Clough; Matthew B. Russell; Grant M. Domke; Christopher W. Woodall

    2016-01-01

    Accurate uncertainty assessments of plot-level live tree biomass stocks are an important precursor to estimating uncertainty in annual national greenhouse gas inventories (NGHGIs) developed from forest inventory data. However, current approaches employed within the United States’ NGHGI do not specifically incorporate methods to address error in tree-scale biomass...

  17. New estimates of extensive-air-shower energies on the basis of signals in scintillation detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anyutin, N. V.; Dedenko, L. G., E-mail: ddn@dec1.sinp.msu.ru; Roganova, T. M.

    New formulas for estimating the energy of inclined extensive air showers (EASs) on the basis of signals in detectors by means of an original method and detailed tables of signals induced in scintillation detectors by photons, electrons, positrons, and muons and calculated with the aid of the GEANT4 code package were proposed in terms of the QGSJETII-04, EPOS LHC, and GHEISHA models. The parameters appearing in the proposed formulas were calculated by employing the CORSIKA code package. It is shown that, for showers of zenith angles in the range of 20°–45°, the standard constant-intensity-cut method, which is used to interpret data from the Yakutsk EAS array, overestimates the shower energy by a factor of 1.2 to 1.5. It is proposed to employ the calculated VEM (Vertical Equivalent Muon) signal units of 10.8 and 11.4 MeV for, respectively, ground-based and underground scintillation detectors and to take into account the dependence of signals on the azimuthal angle of the detector position and fluctuations in the development of showers.

  18. Fresh Biomass Estimation in Heterogeneous Grassland Using Hyperspectral Measurements and Multivariate Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Darvishzadeh, R.; Skidmore, A. K.; Mirzaie, M.; Atzberger, C.; Schlerf, M.

    2014-12-01

    Accurate estimation of grassland biomass at peak productivity can provide crucial information regarding the functioning and productivity of rangelands. Hyperspectral remote sensing has proved valuable for the estimation of vegetation biophysical parameters such as biomass using different statistical techniques. However, in statistical analysis of hyperspectral data, multicollinearity is a common problem due to the large number of correlated hyperspectral reflectance measurements. The aim of this study was to examine the prospects for above-ground biomass estimation in a heterogeneous Mediterranean rangeland employing multivariate calibration methods. Canopy spectral measurements were made in the field using a GER 3700 spectroradiometer, along with concomitant in situ measurements of above-ground biomass for 170 sample plots. Multivariate calibrations including partial least squares regression (PLSR), principal component regression (PCR), and least-squares support vector machines (LS-SVM) were used to estimate the above-ground biomass. The prediction accuracy of the multivariate calibration methods was assessed using cross-validated R² and RMSE. The best model performance was obtained using LS-SVM and then PLSR, both calibrated with the first-derivative reflectance dataset, with R²cv = 0.88 and 0.86 and RMSEcv = 1.15 and 1.07, respectively. The weakest prediction accuracy appeared when PCR was used (R²cv = 0.31 and RMSEcv = 2.48). The obtained results highlight the importance of multivariate calibration methods for biomass estimation when hyperspectral data are used.
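
    A cross-validated PLSR calibration of the kind reported is a few lines with scikit-learn; the latent-variable spectra, band count, and component number below are synthetic stand-ins for the field data, chosen only to reproduce the collinearity problem PLSR is designed for.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(11)

    # Stand-in hyperspectral data: 170 plots x 600 highly collinear
    # bands driven by a few latent variables, as in canopy reflectance.
    latent = rng.normal(size=(170, 4))
    spectra = latent @ rng.normal(size=(4, 600)) + 0.1 * rng.normal(size=(170, 600))
    biomass = latent @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.3 * rng.normal(size=170)

    # First-derivative spectra, as used for the best models in the study.
    d_spectra = np.diff(spectra, axis=1)

    pls = PLSRegression(n_components=6)
    pred = cross_val_predict(pls, d_spectra, biomass, cv=10).ravel()

    ss_res = np.sum((biomass - pred)**2)
    ss_tot = np.sum((biomass - biomass.mean())**2)
    print("cross-validated R2:", 1 - ss_res / ss_tot,
          "RMSE:", np.sqrt(ss_res / biomass.size))
    ```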

  19. Using the ratio of the magnetic field to the analytic signal of the magnetic gradient tensor in determining the position of simple shaped magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Karimi, Kurosh; Shirzaditabar, Farzad

    2017-08-01

    The analytic signal of the magnitude of the magnetic field's components and its first derivatives has been employed for locating magnetic structures that can be considered as point dipoles or lines of dipoles. Although similar methods have been used for locating such magnetic anomalies, they cannot estimate the positions of anomalies in noisy settings with acceptable accuracy. The methods are also inexact in determining the depth of deep anomalies. In noisy cases and in places other than the poles, the maximum points of the magnitude of the magnetic vector components and Az are not located exactly above 3D bodies. Consequently, the horizontal location estimates of bodies are accompanied by errors. Here, the previous methods are altered and generalized to locate deeper models in the presence of noise, even at lower magnetic latitudes. In addition, a statistical technique is presented for working in noisy areas, and a new method that is resistant to noise through use of a 'depths mean' approach is developed. Reduction-to-the-pole transformation is also used to find the most probable actual horizontal body location. Deep models are also well estimated. The method is tested on real magnetic data over an urban gas pipeline in the vicinity of Kermanshah province, Iran. The estimated location of the pipeline agrees well with the result of the half-width method.

  20. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in the L²- and H¹-norm for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semi-discrete and fully discrete schemes. PMID:23864831

  1. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  2. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in the L²- and H¹-norm for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semi-discrete and fully discrete schemes.
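
    For readers who want to experiment with the time discretization, a linearized Crank-Nicolson step is simple to code. The sketch below is a 1D finite-difference analogue, not the paper's mixed finite element scheme: the linear fourth- and second-order terms are treated by Crank-Nicolson while the cubic term is frozen at the previous time level, so each step costs one linear solve; the grid, parameters, and boundary handling are illustrative.

    ```python
    import numpy as np

    # EFK equation: u_t + gamma*u_xxxx - u_xx + u^3 - u = 0 on (0, L),
    # homogeneous boundary values assumed outside the grid (sketch only).
    n, gamma, dt, L = 200, 0.1, 1e-3, 10.0
    h = L / (n + 1)
    x = np.linspace(h, L - h, n)

    def banded(c0, c1, c2=0.0):
        A = c0 * np.eye(n) + c1 * (np.eye(n, k=1) + np.eye(n, k=-1))
        if c2:
            A += c2 * (np.eye(n, k=2) + np.eye(n, k=-2))
        return A

    D2 = banded(-2.0, 1.0) / h**2             # u_xx stencil
    D4 = banded(6.0, -4.0, 1.0) / h**4        # u_xxxx stencil
    Lop = -gamma * D4 + D2                    # linear spatial operator

    I = np.eye(n)
    A = I - 0.5 * dt * Lop                    # implicit Crank-Nicolson half
    B = I + 0.5 * dt * Lop                    # explicit Crank-Nicolson half

    u = 0.1 * np.sin(np.pi * x / L)           # initial condition
    for _ in range(500):
        # Linearized step: the nonlinearity u^3 - u is frozen at the
        # previous time level, so only a linear system is solved.
        u = np.linalg.solve(A, B @ u - dt * (u**3 - u))
    ```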

  3. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.

  4. Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2015-05-01

    Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
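
    The sequential Monte Carlo machinery here is a particle filter over a degradation state, and the RUL-by-extrapolation step can be sketched in a few lines. The random-walk-with-drift state model, noise levels, and alert threshold below are invented; the paper's state space model for the wear index is more elaborate.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_p = 2000
    obs_sigma, proc_sigma, threshold = 0.05, 0.01, 3.0

    # Particles: degradation level and per-step wear rate (drift).
    state = np.zeros(n_p)
    drift = rng.normal(0.01, 0.005, size=n_p)

    # Synthetic moving-average degradation index observations.
    true_index = np.cumsum(np.full(200, 0.012))
    obs = true_index + rng.normal(0.0, obs_sigma, size=200)

    for z in obs:
        # Propagate, weight by the Gaussian observation likelihood, resample.
        state = state + drift + rng.normal(0.0, proc_sigma, size=n_p)
        w = np.exp(-0.5 * ((z - state) / obs_sigma)**2)
        idx = rng.choice(n_p, size=n_p, p=w / w.sum())
        state, drift = state[idx], drift[idx]

    # RUL: steps until each particle's extrapolated path hits the threshold;
    # the spread across particles gives an uncertainty band.
    rul = np.maximum(threshold - state, 0.0) / np.maximum(drift, 1e-6)
    print("RUL median and 90% band:", np.percentile(rul, [50, 5, 95]))
    ```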

  5. Injuries at Work in the US Adult Population: Contributions to the Total Injury Burden

    PubMed Central

    Smith, Gordon S.; Wellman, Helen M.; Sorock, Gary S.; Warner, Margaret; Courtney, Theodore K.; Pransky, Glenn S.; Fingerhut, Lois A.

    2005-01-01

    Objectives. We estimated the contribution of nonfatal work-related injuries to the injury burden among working-age adults (aged 18–64 years) in the United States. Methods. We used the 1997–1999 National Health Interview Survey (NHIS) to estimate injury rates and proportions of work-related vs non–work-related injuries. Results. An estimated 19.4 million medically treated injuries occurred annually to working-age adults (11.7 episodes per 100 persons; 95% confidence interval [CI]=11.3, 12.1); 29%, or 5.5 million (4.5 per 100 persons; 95% CI=4.2, 4.7), occurred at work and varied by gender, age, and race/ethnicity. Among employed persons, 38% of injuries occurred at work, and among employed men aged 55–64 years, 49% of injuries occurred at work. Conclusions. Injuries at work comprise a substantial part of the injury burden, accounting for nearly half of all injuries in some age groups. The NHIS provides an important source of population-based data with which to determine the work relatedness of injuries. Study estimates of days away from work after injury were 1.8 times higher than the Bureau of Labor Statistics (BLS) workplace-based estimates and 1.4 times as high as BLS estimates for private industry. The prominence of occupational injuries among injuries to working-age adults reinforces the need to examine workplace conditions in efforts to reduce the societal impact of injuries. PMID:15983273

  6. Assessing an ensemble Kalman filter inference of Manning's n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    NASA Astrophysics Data System (ADS)

    Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim

    2017-08-01

    Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach, due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean modeling, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
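
    The ensemble Kalman update at the heart of the comparison fits in a short function. The sketch below is a single stochastic (perturbed-observation) analysis step with multiplicative inflation applied to a toy linear observation operator; the friction-like parameters and noise levels are placeholders, not the ADCIRC/OSSE configuration.

    ```python
    import numpy as np

    def enkf_update(ens, y_obs, h, obs_cov, rng, inflation=1.05):
        """One stochastic EnKF analysis step.
        ens: (n_ens, n_param) prior samples; h: maps params -> predictions."""
        mean = ens.mean(axis=0)
        ens = mean + inflation * (ens - mean)     # multiplicative inflation

        Y = np.array([h(m) for m in ens])         # predicted observations
        A = ens - ens.mean(axis=0)
        D = Y - Y.mean(axis=0)
        k = ens.shape[0] - 1
        K = (A.T @ D / k) @ np.linalg.inv(D.T @ D / k + obs_cov)

        pert = rng.multivariate_normal(y_obs, obs_cov, size=ens.shape[0])
        return ens + (pert - Y) @ K.T             # perturbed-observation update

    rng = np.random.default_rng(4)
    true_n = np.array([0.025, 0.040])             # "Manning-like" parameters
    H = rng.normal(size=(6, 2))                   # toy linear forward model
    obs = H @ true_n + rng.normal(0.0, 0.002, size=6)

    ens = rng.normal([0.030, 0.030], 0.01, size=(64, 2))
    post = enkf_update(ens, obs, lambda m: H @ m, 0.002**2 * np.eye(6), rng)
    print("posterior mean:", post.mean(axis=0), "spread:", post.std(axis=0))
    ```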

  7. Binding-affinity predictions of HSP90 in the D3R Grand Challenge 2015 with docking, MM/GBSA, QM/MM, and free-energy simulations

    NASA Astrophysics Data System (ADS)

    Misini Ignjatović, Majda; Caldararu, Octav; Dong, Geng; Muñoz-Gutierrez, Camila; Adasme-Carreño, Francisco; Ryde, Ulf

    2016-09-01

    We have estimated the binding affinity of three sets of ligands of the heat-shock protein 90 in the D3R grand challenge blind test competition. We have employed four different methods, based on five different crystal structures: first, we docked the ligands to the proteins with induced-fit docking with the Glide software and calculated binding affinities with three energy functions. Second, the docked structures were minimised in a continuum solvent and binding affinities were calculated with the MM/GBSA method (molecular mechanics combined with generalised Born and solvent-accessible surface area solvation). Third, the docked structures were re-optimised by combined quantum mechanics and molecular mechanics (QM/MM) calculations. Then, interaction energies were calculated with quantum mechanical calculations employing 970-1160 atoms in a continuum solvent, combined with energy corrections for dispersion, zero-point energy and entropy, ligand distortion, ligand solvation, and an increase of the basis set to quadruple-zeta quality. Fourth, relative binding affinities were estimated by free-energy simulations, using the multi-state Bennett acceptance-ratio approach. Unfortunately, the results were varying and rather poor, with only one calculation giving a correlation to the experimental affinities larger than 0.7, and with no consistent difference in the quality of the predictions from the various methods. For one set of ligands, the results could be strongly improved (after experimental data were revealed) if it was recognised that one of the ligands displaced one or two water molecules. For the other two sets, the problem is probably that the ligands bind in different modes than in the crystal structures employed or that the conformation of the ligand-binding site or the whole protein changes.

  8. Binding-affinity predictions of HSP90 in the D3R Grand Challenge 2015 with docking, MM/GBSA, QM/MM, and free-energy simulations.

    PubMed

    Misini Ignjatović, Majda; Caldararu, Octav; Dong, Geng; Muñoz-Gutierrez, Camila; Adasme-Carreño, Francisco; Ryde, Ulf

    2016-09-01

    We have estimated the binding affinity of three sets of ligands of the heat-shock protein 90 in the D3R grand challenge blind test competition. We have employed four different methods, based on five different crystal structures: first, we docked the ligands to the proteins with induced-fit docking with the Glide software and calculated binding affinities with three energy functions. Second, the docked structures were minimised in a continuum solvent and binding affinities were calculated with the MM/GBSA method (molecular mechanics combined with generalised Born and solvent-accessible surface area solvation). Third, the docked structures were re-optimised by combined quantum mechanics and molecular mechanics (QM/MM) calculations. Then, interaction energies were calculated with quantum mechanical calculations employing 970-1160 atoms in a continuum solvent, combined with energy corrections for dispersion, zero-point energy and entropy, ligand distortion, ligand solvation, and an increase of the basis set to quadruple-zeta quality. Fourth, relative binding affinities were estimated by free-energy simulations, using the multi-state Bennett acceptance-ratio approach. Unfortunately, the results were varying and rather poor, with only one calculation giving a correlation to the experimental affinities larger than 0.7, and with no consistent difference in the quality of the predictions from the various methods. For one set of ligands, the results could be strongly improved (after experimental data were revealed) if it was recognised that one of the ligands displaced one or two water molecules. For the other two sets, the problem is probably that the ligands bind in different modes than in the crystal structures employed or that the conformation of the ligand-binding site or the whole protein changes.

  9. Assessing model uncertainty using hexavalent chromium and ...

    EPA Pesticide Factsheets

    Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective of this analysis is to characterize model uncertainty by evaluating the variance in estimates across several epidemiologic analyses.Methods: This analysis compared 7 publications analyzing two different chromate production sites in Ohio and Maryland. The Ohio cohort consisted of 482 workers employed from 1940-72, while the Maryland site employed 2,357 workers from 1950-74. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag, however other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability in estimates across and within model forms. A total of 7 similarly parameterized analyses were considered across model forms, and 23 analyses with alternative parameterizations were considered within model form (14 Cox; 9 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients for 7 similar analyses ranged from 2.47

  10. A new approach on seismic mortality estimations based on average population density

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict final seismic mortality from earthquakes in China. Most studies have established the association between mortality estimation and seismic intensity without considering population density. In China, however, the data are not always available, especially in the very urgent relief situation following a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze death tolls for earthquakes. The present paper employs the average population density to predict final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake case that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20 %, and opens the door for conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  11. GRACE Hydrological estimates for small basins: Evaluating processing approaches on the High Plains Aquifer, USA

    NASA Astrophysics Data System (ADS)

    Longuevergne, Laurent; Scanlon, Bridget R.; Wilson, Clark R.

    2010-11-01

    The Gravity Recovery and Climate Experiment (GRACE) satellites provide observations of water storage variation at regional scales. However, when focusing on a region of interest, limited spatial resolution and noise contamination can cause estimation bias and spatial leakage, problems that are exacerbated as the region of interest approaches the GRACE resolution limit of a few hundred km. Reliable estimates of water storage variations in small basins require compromises between competing needs for noise suppression and spatial resolution. The objective of this study was to quantitatively investigate processing methods and their impacts on bias, leakage, GRACE noise reduction, and estimated total error, allowing these trade-offs to be resolved. Among the methods tested is a recently developed concentration algorithm called spatiospectral localization, which optimizes the basin shape description, taking into account limited spatial resolution. This method is particularly suited to retrieval of basin-scale water storage variations and is effective for small basins. To increase confidence in derived methods, water storage variations were calculated for both CSR (Center for Space Research) and GRGS (Groupe de Recherche de Géodésie Spatiale) GRACE products, which employ different processing strategies. The processing techniques were tested on the intensively monitored High Plains Aquifer (450,000 km2 area), where application of the appropriate optimal processing method allowed retrieval of water storage variations over a portion of the aquifer as small as ˜200,000 km2.

  12. Accuracy of Blood Loss Measurement during Cesarean Delivery.

    PubMed

    Doctorvaladan, Sahar V; Jelks, Andrea T; Hsieh, Eric W; Thurer, Robert L; Zakowski, Mark I; Lagrew, David C

    2017-04-01

    Objective This study aims to compare the accuracy of visual, quantitative gravimetric, and colorimetric methods used to determine blood loss during cesarean delivery procedures employing a hemoglobin extraction assay as the reference standard. Study Design In 50 patients having cesarean deliveries blood loss determined by assays of hemoglobin content on surgical sponges and in suction canisters was compared with obstetricians' visual estimates, a quantitative gravimetric method, and the blood loss determined by a novel colorimetric system. Agreement between the reference assay and other measures was evaluated by the Bland-Altman method. Results Compared with the blood loss measured by the reference assay (470 ± 296 mL), the colorimetric system (572 ± 334 mL) was more accurate than either visual estimation (928 ± 261 mL) or gravimetric measurement (822 ± 489 mL). The correlation between the assay method and the colorimetric system was more predictive (standardized coefficient = 0.951, adjusted R² = 0.902) than either visual estimation (standardized coefficient = 0.700, adjusted R² = 0.479) or the gravimetric determination (standardized coefficient = 0.564, adjusted R² = 0.304). Conclusion During cesarean delivery, measuring blood loss using colorimetric image analysis is superior to visual estimation and a gravimetric method. Implementation of colorimetric analysis may enhance the ability of management protocols to improve clinical outcomes.

  13. Accuracy of Blood Loss Measurement during Cesarean Delivery

    PubMed Central

    Doctorvaladan, Sahar V.; Jelks, Andrea T.; Hsieh, Eric W.; Thurer, Robert L.; Zakowski, Mark I.; Lagrew, David C.

    2017-01-01

    Objective This study aims to compare the accuracy of visual, quantitative gravimetric, and colorimetric methods used to determine blood loss during cesarean delivery procedures employing a hemoglobin extraction assay as the reference standard. Study Design In 50 patients having cesarean deliveries blood loss determined by assays of hemoglobin content on surgical sponges and in suction canisters was compared with obstetricians' visual estimates, a quantitative gravimetric method, and the blood loss determined by a novel colorimetric system. Agreement between the reference assay and other measures was evaluated by the Bland–Altman method. Results Compared with the blood loss measured by the reference assay (470 ± 296 mL), the colorimetric system (572 ± 334 mL) was more accurate than either visual estimation (928 ± 261 mL) or gravimetric measurement (822 ± 489 mL). The correlation between the assay method and the colorimetric system was more predictive (standardized coefficient = 0.951, adjusted R² = 0.902) than either visual estimation (standardized coefficient = 0.700, adjusted R² = 0.479) or the gravimetric determination (standardized coefficient = 0.564, adjusted R² = 0.304). Conclusion During cesarean delivery, measuring blood loss using colorimetric image analysis is superior to visual estimation and a gravimetric method. Implementation of colorimetric analysis may enhance the ability of management protocols to improve clinical outcomes. PMID:28497007

  14. Models for estimating and projecting global, regional and national prevalence and disease burden of asthma: protocol for a systematic review.

    PubMed

    Bhuia, Mohammad Romel; Nwaru, Bright I; Weir, Christopher J; Sheikh, Aziz

    2017-05-17

    Models that have so far been used to estimate and project the prevalence and disease burden of asthma are in most cases inadequately described and irreproducible. We aim systematically to describe and critique the existing models in relation to their strengths, limitations and reproducibility, and to determine the appropriate models for estimating and projecting the prevalence and disease burden of asthma. We will search the following electronic databases to identify relevant literature published from 1980 to 2017: Medline, Embase, WHO Library and Information Services and Web of Science Core Collection. We will identify additional studies by searching the reference lists of all the retrieved papers and contacting experts. We will include observational studies that used models for estimating and/or projecting the prevalence and disease burden of asthma regarding human populations of any age and sex. Two independent reviewers will assess the studies for inclusion and extract data from included papers. Data items will include authors' names, publication year, study aims, data source and time period, study population, asthma outcomes, study methodology, model type, model settings, study variables, methods of model derivation, methods of parameter estimation and/or projection, model fit information, key findings and identified research gaps. A detailed critical narrative synthesis of the models will be undertaken in relation to their strengths, limitations and reproducibility. A quality assessment checklist and scoring framework will be used to determine the appropriate models for estimating and projecting the prevalence and disease burden of asthma. We will not collect any primary data for this review, and hence there is no need for formal National Health Services Research Ethics Committee approval. We will present our findings at scientific conferences and publish them in a peer-reviewed scientific journal. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. Adaptive framework to better characterize errors of apriori fluxes and observational residuals in a Bayesian setup for the urban flux inversions.

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.

    2017-12-01

    The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), which aim to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric observations of GHGs coupled to an atmospheric transport and dispersion model. The magnitude of the update is dependent upon the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices. The assumed structure and magnitude of the specified errors can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman filter is then run using these covariances along with maximum likelihood estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We also demonstrate that differences between the modeled and observed meteorology can be used to predict uncertainties associated with atmospheric transport and dispersion modeling, which can help improve the skill of an inversion at urban scales.

  16. Mixed effects modelling for glass category estimation from glass refractive indices.

    PubMed

    Lucy, David; Zadora, Grzegorz

    2011-10-10

    Five hundred and twenty glass fragments were taken from 105 glass items. Each item was either a container, a window, or glass from an automobile; each of these three classes of use is defined as a glass category. Refractive indices were measured both before and after a programme of re-annealing. Because the refractive index of each individual fragment could not be observed both before and after re-annealing, a model-based approach was used to estimate the change in refractive index for each glass category. It was found that less complex estimation methods were equivalent to the full model, and these were subsequently used. The change in refractive index was then used to calculate a measure of the evidential value of each item belonging to each glass category. The distributions of refractive index change were considered for each glass category, and it was found that, possibly due to small samples, members of the normal family would not adequately model the refractive index changes within two of the use types considered here. Two alternative approaches to modelling the change in refractive index were therefore used: one employed the more established kernel density estimates, the other a newer approach called log-concave estimation. Either method, when applied to the change in refractive index, was found to give good estimates of glass category. On all performance metrics, kernel density estimates were found to be slightly better than log-concave estimates, although the log-concave estimates possessed properties with some qualitative appeal not captured by the selected measures of performance. The results and implications of these two methods of estimating probability densities for glass refractive indices are discussed.
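
    For readers unfamiliar with the kernel density route, the sketch below shows, under made-up data with hypothetical category means, how per-category densities of the refractive index change could be turned into category probabilities. It uses SciPy's gaussian_kde and does not reproduce the paper's log-concave alternative or its evidential-value calculation.

    ```python
    # KDE-based category probabilities for a refractive index change (dRI).
    # All data values are invented for illustration.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)

    # Hypothetical training samples of dRI = RI(after annealing) - RI(before)
    # for the three use categories described in the abstract.
    training = {
        "container":  rng.normal(1e-4, 5e-5, 40),
        "window":     rng.normal(2e-3, 4e-4, 40),
        "automobile": rng.normal(3e-3, 5e-4, 40),
    }
    kdes = {cat: gaussian_kde(vals) for cat, vals in training.items()}

    def category_posteriors(dri, priors=None):
        """Evaluate each category's KDE at the observed dRI and normalise
        (equal priors unless specified) to get category probabilities."""
        priors = priors or {cat: 1.0 / len(kdes) for cat in kdes}
        likes = {cat: float(kde(dri)) * priors[cat] for cat, kde in kdes.items()}
        total = sum(likes.values())
        return {cat: v / total for cat, v in likes.items()}

    print(category_posteriors(2.1e-3))  # highest probability should be "window"
    ```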

  17. Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.

    PubMed

    Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2014-11-25

    In order to capture the short-term clock variations of GNSS satellites, clock corrections must be estimated and updated at a high rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation alone, as a great number of ambiguities must be estimated simultaneously. However, better estimates are expected by including more stations, and satellites from different GNSS systems must be processed in an integrated manner to provide a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are commonly employed, in which the ambiguities are eliminated. Because the epoch-differenced method can only derive temporal clock changes, which then have to be aligned to the absolute clocks in a rather complicated way, this paper proposes an efficient method for high-rate clock estimation using the concept of the "carrier-range", realized by means of PPP with integer ambiguity resolution. Processing procedures are developed for both post-processing and real-time processing. The experimental validation shows that the computation time can be reduced to about one sixth of that of existing methods in post-processing, and to less than 1 s for processing a single epoch of a network of about 200 stations in real-time mode once all ambiguities are fixed. This confirms that the proposed strategy will enable high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode.
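
    A minimal sketch of the carrier-range idea follows, assuming the common sign convention phase = range/wavelength + N; the observation value and the fixed ambiguity are invented for illustration and the sketch omits clock, atmospheric and bias terms.

    ```python
    # "Carrier-range": once the integer ambiguity N of a carrier-phase
    # observation has been fixed (here, by PPP ambiguity resolution), the
    # phase becomes an unambiguous pseudorange-like observable, so epoch-wise
    # clock estimation no longer needs ambiguity parameters.
    C = 299_792_458.0          # speed of light (m/s)
    F_L1 = 1_575.42e6          # GPS L1 frequency (Hz)
    LAM_L1 = C / F_L1          # L1 wavelength (~0.19 m)

    def carrier_range(phase_cycles: float, n_fixed: int) -> float:
        """Convert a carrier-phase observation (cycles) plus its fixed
        integer ambiguity into a 'carrier-range' in metres."""
        return (phase_cycles - n_fixed) * LAM_L1

    # Hypothetical observation: phase in cycles, ambiguity fixed by PPP-AR.
    rng_m = carrier_range(phase_cycles=111_354_500.78, n_fixed=5_231_044)
    # rng_m (~20,200 km here) behaves like a very precise pseudorange; a clock
    # solution can then be estimated per epoch from many such observables
    # without re-estimating N.
    ```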

  18. Identification of Off-Farm Agricultural Occupations and the Education Needed for Employment in These Occupations in Delaware.

    ERIC Educational Resources Information Center

    Barwick, Ralph P.

    The purposes of the study were to (1) identify present and emerging off-farm agricultural occupations, (2) estimate the number employed, (3) estimate the number to be employed in the future, and (4) determine competencies needed in selected occupational families. A disproportionate random sample of 267 businesses or services was drawn from a list…

  19. The Clock’N Test as a Possible Measure of Emotions: Normative Data Collected on a Non-clinical Population

    PubMed Central

    Gros, Auriane; Manera, Valeria; Daumas, Anaïs; Guillemin, Sophie; Rouaud, Olivier; Martin, Martine Lemesle; Giroud, Maurice; Béjot, Yannick

    2016-01-01

    Objective: At present, emotional experience and implicit emotion regulation (IER) abilities are mainly assessed through self-reports, which are subject to several biases. The aim of the present studies was to validate the Clock’N test, a recently developed time estimation task that employs emotional priming to implicitly assess emotional reactivity and IER. Methods: In Study 1, the Clock’N test was administered to 150 healthy participants of different age, laterality and gender, in order to ascertain whether these factors affected the test results. In phase 1, participants were asked to judge the duration of seven sounds. In phase 2, before judging the duration of the same sounds, participants were presented with short arousing video clips used as emotional priming stimuli. Time warp was calculated as the difference in time estimation between phase 2 and phase 1 and used to assess how emotions affected subjective time estimation. In Study 2, a representative sample was selected to provide normative scores to be employed in assessing emotional reactivity (Score 1) and IER (Score 2), and to calculate statistical cutoffs based on the 10th and 90th percentiles of the score distribution. Results: Converging with previous findings, the results of Study 1 suggest that the Clock’N test can be employed to assess both emotional reactivity, as indexed by an initial time underestimation, and IER, as indexed by a progressive shift to time overestimation. No effects of gender, age or laterality were found. Conclusions: These results suggest that the Clock’N test is suited to assessing emotional reactivity and IER. After data on the test’s discriminant and convergent validity have been collected, the test may be employed to assess deficits in these abilities in different clinical populations. PMID:26903825
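
    The time-warp quantity is simple to compute once the two phases are scored. The sketch below, with invented duration judgements, shows the phase 2 minus phase 1 difference and the early-to-late shift that the abstract interprets as reactivity and IER; the split into early and late trials is an assumption for illustration, not the published scoring rule.

    ```python
    # Time warp per the abstract: duration judgements with emotional priming
    # (phase 2) minus judgements without priming (phase 1), over seven sounds.
    import statistics

    phase1 = [4.8, 5.1, 4.9, 5.0, 5.2, 4.7, 5.0]  # judged durations (s), no priming
    phase2 = [4.2, 4.5, 4.6, 5.1, 5.4, 5.5, 5.8]  # judged durations (s), primed

    warps = [p2 - p1 for p1, p2 in zip(phase1, phase2)]
    mean_warp = statistics.mean(warps)

    # An initial underestimation (negative warp on early trials) would index
    # emotional reactivity; a progressive shift toward overestimation across
    # trials would index implicit emotion regulation.
    early, late = statistics.mean(warps[:3]), statistics.mean(warps[-3:])
    print(f"mean warp {mean_warp:+.2f} s; early {early:+.2f} s -> late {late:+.2f} s")
    ```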

  20. Regional frequency analysis of extreme rainfalls using partial L moments method

    NASA Astrophysics Data System (ADS)

    Zakaria, Zahrahtul Amani; Shabri, Ani

    2013-07-01

    Approaches based on regional frequency analysis using L moments and LH moments are revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results are then compared with those obtained using the methods of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO) and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. The PL moment ratio diagram and the Z test were employed to determine the best-fit distribution. Comparison of the three approaches showed that the GLO and GEV distributions were identified as suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulations used for performance evaluation show that the PL moments method outperforms the L and LH moments methods for the estimation of large return period events.
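
    As background, the sketch below shows how the first two sample L-moments (and the L-CV ratio) are estimated from a series of annual maxima via probability-weighted moments. PL moments additionally censor values below a threshold, which is not implemented here, and the rainfall values are invented rather than taken from the Selangor sites.

    ```python
    # Sample L-moment estimation from an annual-maximum series (illustrative).
    import numpy as np

    def sample_l_moments(data):
        """First two sample L-moments via probability-weighted moments b0, b1."""
        x = np.sort(np.asarray(data, dtype=float))   # ascending order statistics
        n = len(x)
        b0 = x.mean()
        b1 = sum((i / (n - 1)) * x[i] for i in range(1, n)) / n  # i is 0-based rank
        lam1 = b0               # L-location (mean)
        lam2 = 2.0 * b1 - b0    # L-scale
        return lam1, lam2, lam2 / lam1   # mean, L-scale, L-CV (tau2)

    annual_max_mm = [88, 95, 102, 110, 121, 133, 150, 178, 204, 260]  # hypothetical
    lam1, lam2, tau2 = sample_l_moments(annual_max_mm)
    print(f"lambda1={lam1:.1f} mm, lambda2={lam2:.1f} mm, L-CV={tau2:.3f}")
    ```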
