Information estimators for weighted observations.
Hino, Hideitsu; Murata, Noboru
2013-10-01
The Shannon information content is a valuable numerical characteristic of probability distributions. The problem of estimating the information content from an observed dataset is very important in the fields of statistics, information theory, and machine learning. The contribution of the present paper is to propose information estimators and to show some of their applications. When the given data are associated with weights, each datum contributes differently to the empirical average of statistics. The proposed estimators can deal with this kind of weighted data. Like other conventional methods, the proposed information estimator contains a parameter to be tuned and is computationally expensive. To overcome these problems, the proposed estimator is further modified so that it is more computationally efficient and has no tuning parameter. The proposed methods are also extended to estimate the cross-entropy, entropy, and Kullback-Leibler divergence. Simple numerical experiments show that the information estimators work properly. The estimators are then applied to two specific problems: distribution-preserving data compression, and weight optimization for ensemble regression.
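The weighted empirical average the abstract refers to can be illustrated with a minimal plug-in entropy estimator. This is a hedged sketch, not the paper's estimator: the Gaussian kernel density, the fixed bandwidth of 0.5, and the function names are assumptions for illustration only.

```python
import numpy as np

def weighted_entropy(x, w, bandwidth=0.5):
    """Weighted plug-in estimate of differential entropy (in nats).

    A Gaussian kernel density is built using the observation weights,
    then entropy is the weighted average of -log density at the samples.
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                                   # normalize weights
    diff = (x[:, None] - x[None, :]) / bandwidth
    kern = np.exp(-0.5 * diff**2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    dens = kern @ w                                   # p_hat(x_i) = sum_j w_j K(x_i - x_j)
    return -np.sum(w * np.log(dens))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
w = np.ones_like(x)          # uniform weights recover the unweighted estimator
ent = weighted_entropy(x, w)
# True differential entropy of N(0,1) is 0.5*log(2*pi*e), roughly 1.42
```

With non-uniform weights (e.g. from importance sampling or boosting), only the weight vector changes; the kernel sums stay the same.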
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
31 CFR 205.24 - How are accurate estimates maintained?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...
Micromagnetometer calibration for accurate orientation estimation.
Zhang, Zhi-Qiang; Yang, Guang-Zhong
2015-02-01
Micromagnetometers, together with inertial sensors, are widely used for attitude estimation in a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the sum of the offset and hard-iron error as the combined bias and then amalgamate the alignment difference and all the other errors into a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding the optimal ellipsoid that best fits the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and the magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and less than 3° root-mean-square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
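The first step above can be sketched in simplified form. The paper fits a general ellipsoid; the sketch below substitutes a sphere fit so that the combined-bias step reduces to a single linear least-squares solve. Function names, the synthetic readings, and the sphere simplification are assumptions, not the authors' implementation.

```python
import numpy as np

def combined_bias_sphere(readings):
    """Estimate the combined bias (hard-iron + offset) as the centre of a
    best-fit sphere, a simplified stand-in for an ellipsoid fit.

    ||x - c||^2 = r^2 rearranges to 2 x.c + (r^2 - |c|^2) = |x|^2,
    which is linear in the centre c and the scalar d = r^2 - |c|^2.
    """
    X = np.asarray(readings, dtype=float)
    A = np.hstack([2.0 * X, np.ones((len(X), 1))])
    b = np.sum(X**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, d = sol[:3], sol[3]
    radius = np.sqrt(d + centre @ centre)
    return centre, radius

# Synthetic readings: a unit field rotated over a sphere, shifted by a bias
rng = np.random.default_rng(1)
v = rng.normal(size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
bias = np.array([0.2, -0.1, 0.35])
centre, radius = combined_bias_sphere(v + bias)
```

On noiseless data the recovered centre matches the injected bias; with scale and nonorthogonality errors present, the full ellipsoid fit of the paper is needed.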
The Accuracy of Broselow Tape Weight Estimate among Pediatric Population
AlGarni, Abdullaziz; AlGamdi, Fasial; Jawish, Mona; Wani, Tariq Ahmad
2016-01-01
Objective. To determine the accuracy of the Broselow Tape (BT) versions 2007 and 2011 in estimating weight among a pediatric population. Methods. A cross-sectional study was conducted at King Fahad Medical City and six schools across Riyadh province on 1–143-month-old children. BT 2007 and 2011 estimated weights were recorded. Both tapes produce an estimated weight from the child's height, which was compared with the actual weight. Results. A total of 3537 children were recruited. The height (cm) of the subjects was 97.7 ± 24.1 and the actual weight (kg) was 16.07 ± 8.9, whereas the estimated weight determined by BT 2007 was 15.87 ± 7.56 and by BT 2011 was 16.38 ± 7.95. Across all five age groups, the correlation between actual weight and BT 2007 ranged between 0.702 and 0.788, while the correlation between actual weight and BT 2011 ranged between 0.698 and 0.788. The correlation between BT 2007 and BT 2011 across all five age groups ranged from 0.979 to 0.989. Accuracy of both tape versions was adversely affected when age was >95 months and body weight was >26 kilograms. Conclusions. Our study showed that BT 2007 and 2011 provided accurate estimation of body weight based on measured body height. However, the 2011 version provided a more precise weight estimate. PMID:27668258
Calculating weighted estimates of peak streamflow statistics
Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.
2012-01-01
According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
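The weighting procedure described here is standard inverse-variance combination. A minimal sketch; the variable names and the illustrative variances (which in practice are variances of log-transformed flow statistics) are assumptions:

```python
def weighted_estimate(x_site, var_site, x_reg, var_reg):
    """Combine independent at-site and regional-regression estimates by
    inverse-variance weighting; the combined variance is below either input."""
    w_site = 1.0 / var_site
    w_reg = 1.0 / var_reg
    x_w = (w_site * x_site + w_reg * x_reg) / (w_site + w_reg)
    var_w = 1.0 / (w_site + w_reg)
    return x_w, var_w

# At-site estimate 1200 (variance 0.04) vs regional estimate 1500 (variance 0.12)
xw, vw = weighted_estimate(1200.0, 0.04, 1500.0, 0.12)
# xw lies between the two estimates, pulled toward the lower-variance one
```

The combined estimate lands at 1275, three quarters of the way toward the more certain at-site value, and its variance (0.03) is smaller than both inputs.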
Weighted conditional least-squares estimation
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, better-known estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models and to linear models with nested error structures are considered.
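A hedged sketch of the two-stage idea on a toy autoregression whose conditional variance grows with the state, loosely mimicking a branching process with immigration. The model, names, and simulation are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def wcls_fit(x, weights=None):
    """Fit X_t = theta*X_{t-1} + b + e_t by (weighted) conditional least
    squares; weights proportional to 1/Var(e_t | X_{t-1}) give stage two."""
    prev, curr = x[:-1], x[1:]
    A = np.column_stack([prev, np.ones_like(prev)])
    w = np.ones_like(prev) if weights is None else weights
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, curr * sw[:, 0], rcond=None)
    return coef  # (theta, b)

# Toy process: conditional variance proportional to the previous state
rng = np.random.default_rng(2)
theta_true, b_true = 0.8, 2.0
x = np.empty(5000)
x[0] = 10.0
for t in range(1, len(x)):
    mean = theta_true * x[t - 1] + b_true
    x[t] = max(mean + rng.normal(scale=np.sqrt(x[t - 1])), 0.1)  # clip to stay positive

theta1, b1 = wcls_fit(x)                    # stage 1: ordinary CLS
theta2, b2 = wcls_fit(x, 1.0 / x[:-1])      # stage 2: inverse-variance reweighting
```

Stage one supplies the conditional-variance estimates (here simply proportional to the state); stage two reweights and, under the paper's conditions, improves efficiency.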
A live weight-heart girth relationship for accurate dosing of east African shorthorn zebu cattle.
Lesosky, Maia; Dumas, Sarah; Conradie, Ilana; Handel, Ian Graham; Jennings, Amy; Thumbi, Samuel; Toye, Phillip; Bronsvoort, Barend Mark de Clare
2013-01-01
The accurate estimation of livestock weights is important for many aspects of livestock management including nutrition, production and appropriate dosing of pharmaceuticals. Subtherapeutic dosing has been shown to accelerate pathogen resistance which can have subsequent widespread impacts. There are a number of published models for the prediction of live weight from morphometric measurements of cattle, but many of these models use measurements difficult to gather and include complicated age, size and gender stratification. In this paper, we use data from the Infectious Diseases of East Africa calf cohort study and additional data collected at local markets in western Kenya to develop a simple model based on heart girth circumference to predict live weight of east African shorthorn zebu (SHZ) cattle. SHZ cattle are widespread throughout eastern and southern Africa and are economically important multipurpose animals. We demonstrate model accuracy by splitting the data into training and validation subsets and comparing fitted and predicted values. The final model is weight^0.262 = 0.95 + 0.022 × girth, which has an R² value of 0.98 and 95% prediction intervals that fall within the ±20% body weight error band regarded as acceptable when dosing livestock. This model provides a highly reliable and accurate method for predicting weights of SHZ cattle using a single heart girth measurement which can be easily obtained with a tape measure in the field setting. PMID:22923040
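The published model can be applied directly by inverting the power transform. A small sketch using the fitted equation from the abstract; the function name and example girth value are illustrative:

```python
def shz_weight_kg(girth_cm):
    """Predicted live weight (kg) of east African shorthorn zebu cattle from
    heart girth (cm), using the paper's model weight^0.262 = 0.95 + 0.022*girth."""
    return (0.95 + 0.022 * girth_cm) ** (1.0 / 0.262)

w130 = shz_weight_kg(130.0)   # a mid-range heart girth; weight rises steeply with girth
```

Because the model is fitted on the 0.262 power of weight, a small girth increase translates into a large weight increase at the upper end of the range.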
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Analytical Fuselage and Wing Weight Estimation of Transport Aircraft
NASA Technical Reports Server (NTRS)
Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.
1996-01-01
A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and corresponding actual weights were determined.
Accurate Parameter Estimation for Unbalanced Three-Phase System
Chen, Yuan; So, Hing Cheung
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
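A hedged sketch of NLS fitting via a Gauss-Newton iteration (a close relative of the Newton-Raphson scheme named in the abstract), applied to a single noiseless sinusoid rather than the full unbalanced three-phase model; the function names and toy signal are assumptions:

```python
import numpy as np

def nls_sinusoid(t, y, A0, w0, phi0, iters=20):
    """Nonlinear least-squares fit of y ~ A*cos(w*t + phi) by Gauss-Newton:
    linearize the residual, solve for the step, repeat."""
    p = np.array([A0, w0, phi0], dtype=float)
    for _ in range(iters):
        A, w, phi = p
        arg = w * t + phi
        r = y - A * np.cos(arg)                     # residuals
        J = np.column_stack([-np.cos(arg),          # d r / d A
                             A * t * np.sin(arg),   # d r / d w
                             A * np.sin(arg)])      # d r / d phi
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p -= step
    return p  # (amplitude, angular frequency, phase)

t = np.linspace(0.0, 0.1, 200)
y = 2.0 * np.cos(2.0 * np.pi * 50.0 * t + 0.3)      # noiseless 50 Hz tone
A, w, phi = nls_sinusoid(t, y, A0=1.5, w0=2.0 * np.pi * 49.0, phi0=0.0)
```

Started within the main lobe of the cost surface (here 1 Hz off), the iteration converges to the true parameters; global convergence from arbitrary starts is what the paper studies.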
Yu, Jinhua; Wang, Yuanyuan; Chen, Ping
2009-01-01
Accurate estimation of fetal weight before delivery is of great benefit in limiting the potential complications associated with low-birth-weight infants. Although regression analysis has been used as a daily clinical means to estimate fetal weight on the basis of ultrasound measurements, it still lacks sufficient accuracy for low-birth-weight fetuses. The ineffectiveness is mainly due to the large inter- or intraobserver variability in measurements and the inappropriateness of the regression analysis. A novel method based on support vector regression (SVR) is proposed to improve the weight estimation accuracy for fetuses of less than 2500 g. Here, fuzzy logic is introduced into SVR (termed FSVR) to limit the contribution of inaccurate training data to the model establishment and, thus, to enhance the robustness of FSVR to noisy data. To guarantee the generalization performance of the FSVR model, the nondominated sorting genetic algorithm (NSGA) is utilized to obtain the optimal parameters for the FSVR, which is referred to as the evolutionary fuzzy support vector regression (EFSVR) model. Compared with regression formulas, a back-propagation neural network, and SVR, EFSVR achieves the lowest mean absolute percent error (6.6%) and the highest correlation coefficient (0.902) between the estimated fetal weight and the actual birth weight. The EFSVR model produces a significant improvement (1.9%-4.2%) in the accuracy of fetal weight estimation over several widely used formulas. Experiments show the potential of EFSVR in clinical prenatal care.
Validation of an Improved Pediatric Weight Estimation Strategy
Abdel-Rahman, Susan M.; Ahlers, Nichole; Holmes, Anne; Wright, Krista; Harris, Ann; Weigel, Jaylene; Hill, Talita; Baird, Kim; Michaels, Marla; Kearns, Gregory L.
2013-01-01
OBJECTIVES To validate the recently described Mercy method for weight estimation in an independent cohort of children living in the United States. METHODS Anthropometric data including weight, height, humeral length, and mid upper arm circumference were collected from 976 otherwise healthy children (2 months to 14 years old). The data were used to examine the predictive performances of the Mercy method and four other weight estimation strategies (the Advanced Pediatric Life Support [APLS] method, the Broselow tape, and the Luscombe and Owens and the Nelson methods). RESULTS The Mercy method demonstrated accuracy comparable to that observed in the original study (mean error: −0.3 kg; mean percentage error: −0.3%; root mean square error: 2.62 kg; 95% limits of agreement: 0.83–1.19). This method estimated weight within 20% of actual for 95% of children compared with 58.7% for APLS, 78% for Broselow, 54.4% for Luscombe and Owens, and 70.4% for Nelson. Furthermore, the Mercy method was the only weight estimation strategy which enabled prediction of weight in all of the children enrolled. CONCLUSIONS The Mercy method proved to be highly accurate and more robust than existing weight estimation strategies across a wider range of age and body mass index values, thereby making it superior to other existing approaches. PMID:23798905
Accurate measure by weight of liquids in industry
Muller, M.R.
1992-12-12
This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.
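The temperature-compensation idea above reduces to dispensing the volume V = m / ρ(T). A minimal sketch; the linear density model and its coefficients (roughly water-like) are illustrative assumptions, not the project's calibration:

```python
def dispense_volume_ml(target_mass_g, temp_c, rho0=0.9982, beta=2.1e-4, t0=20.0):
    """Volume (mL) to dispense so that the delivered mass equals target_mass_g.

    Density is modelled as rho(T) = rho0 * (1 - beta*(T - t0)) g/mL; warmer
    liquid is less dense, so the plunger must allow a slightly larger volume.
    """
    rho = rho0 * (1.0 - beta * (temp_c - t0))
    return target_mass_g / rho

v20 = dispense_volume_ml(1000.0, 20.0)   # volume for 1 kg at the reference temperature
v35 = dispense_volume_ml(1000.0, 35.0)   # warmer liquid -> larger dispensed volume
```

In the prototype this adjustment is realized mechanically by the displacement plunger; the arithmetic is the same.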
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
Calorie Estimation in Adults Differing in Body Weight Class and Weight Loss Status
Brown, Ruth E; Canning, Karissa L; Fung, Michael; Jiandani, Dishay; Riddell, Michael C; Macpherson, Alison K; Kuk, Jennifer L
2016-01-01
Purpose Ability to accurately estimate calories is important for weight management, yet few studies have investigated whether individuals can accurately estimate calories during exercise, or in a meal. The objective of this study was to determine if accuracy of estimation of moderate or vigorous exercise energy expenditure and calories in food is associated with body weight class or weight loss status. Methods Fifty-eight adults who were either normal weight (NW) or overweight (OW), and either attempting (WL) or not attempting weight loss (noWL), exercised on a treadmill at a moderate (60% HRmax) and a vigorous intensity (75% HRmax) for 25 minutes. Subsequently, participants estimated the number of calories they expended through exercise, and created a meal that they believed to be calorically equivalent to the exercise energy expenditure. Results The mean difference between estimated and measured calories in exercise and food did not differ within or between groups following moderate exercise. Following vigorous exercise, OW-noWL overestimated energy expenditure by 72%, and overestimated the calories in their food by 37% (P<0.05). OW-noWL also significantly overestimated exercise energy expenditure compared to all other groups (P<0.05), and significantly overestimated calories in food compared to both WL groups (P<0.05). However, among all groups there was a considerable range of over- and underestimation (−280 kcal to +702 kcal), as reflected by the large and statistically significant absolute error in calorie estimation of exercise and food. Conclusion There was a wide range of under- and overestimation of calories during exercise and in a meal. Error in calorie estimation may be greater in overweight adults who are not attempting weight loss. PMID:26469988
Sonography in Fetal Birth Weight Estimation
ERIC Educational Resources Information Center
Akinola, R. A.; Akinola, O. I.; Oyekan, O. O.
2009-01-01
The estimation of fetal birth weight is an important factor in the management of high risk pregnancies. The information and knowledge gained through this study, comparing a combination of various fetal parameters using computer assisted analysis, will help the obstetrician to screen the high risk pregnancies, monitor the growth and development,…
Generalized weighted ratio method for accurate turbidity measurement over a wide range.
Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying
2015-12-14
Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination for all test samples with a turbidity ranging from 160 NTU to 4000 NTU.
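A simplified sketch of calibrating a weighted-ratio model: fixing the denominator as the total intensity makes the calibration linear in the numerator weights after multiplying through. The model form, names, and synthetic calibration data are assumptions, cut down from the paper's more general ratio function:

```python
import numpy as np

def fit_ratio_weights(intensities, turbidity):
    """Fit numerator weights a for the simplified ratio model
    T ~ (a . I) / sum(I).  Multiplying through by sum(I) turns the
    calibration into a linear least-squares problem in a."""
    I = np.asarray(intensities, dtype=float)
    T = np.asarray(turbidity, dtype=float)
    target = T * I.sum(axis=1)
    a, *_ = np.linalg.lstsq(I, target, rcond=None)
    return a

def estimate_turbidity(a, intensity_row):
    """Turbidity estimate for one measurement: weighted ratio of intensities."""
    I = np.asarray(intensity_row, dtype=float)
    return (a @ I) / I.sum()

# Synthetic calibration set: 40 samples, intensities at 4 scattering angles,
# with turbidity values generated to be exactly consistent with the model
rng = np.random.default_rng(3)
I_cal = rng.uniform(10.0, 100.0, size=(40, 4))
a_true = np.array([3000.0, 1500.0, 400.0, 50.0])
T_cal = (I_cal @ a_true) / I_cal.sum(axis=1)
a = fit_ratio_weights(I_cal, T_cal)
```

The paper's generalized form also optimizes the denominator weights, which is what extends the accurate range; this sketch shows only the linearizable special case.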
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Accurate photometric redshift probability density estimation - method comparison and application
NASA Astrophysics Data System (ADS)
Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-10-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
Structural Weight Estimation for Launch Vehicles
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd
2002-01-01
This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist on several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power, and platform-independent programming languages such as JAVA provide new means to create deeper analysis tools which can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.
Convex weighting criteria for speaking rate estimation
Jiao, Yishan; Berisha, Visar; Tu, Ming; Liss, Julie
2015-01-01
Speaking rate estimation directly from the speech waveform is a long-standing problem in speech signal processing. In this paper, we pose the speaking rate estimation problem as that of estimating a temporal density function whose integral over a given interval yields the speaking rate within that interval. In contrast to many existing methods, we avoid the more difficult task of detecting individual phonemes within the speech signal and we avoid heuristics such as thresholding the temporal envelope to estimate the number of vowels. Rather, the proposed method aims to learn an optimal weighting function that can be directly applied to time-frequency features in a speech signal to yield a temporal density function. We propose two convex cost functions for learning the weighting functions and an adaptation strategy to customize the approach to a particular speaker using minimal training. The algorithms are evaluated on the TIMIT corpus, on a dysarthric speech corpus, and on the ICSI Switchboard spontaneous speech corpus. Results show that the proposed methods outperform three competing methods on both healthy and dysarthric speech. In addition, for spontaneous speech rate estimation, the results show a high correlation between the estimated speaking rate and ground truth values. PMID:26167516
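The rate-as-integral formulation can be sketched directly. Here the temporal density is supplied as an array rather than learned from time-frequency features as in the paper; the names and the constant toy density are assumptions:

```python
import numpy as np

def speaking_rate(density, t, t_start, t_end):
    """Average speaking rate over [t_start, t_end]: the integral of a
    temporal (syllable) density function divided by the interval length."""
    mask = (t >= t_start) & (t <= t_end)
    tt, dd = t[mask], density[mask]
    count = np.sum(0.5 * (dd[1:] + dd[:-1]) * np.diff(tt))  # trapezoid rule
    return count / (t_end - t_start)

t = np.linspace(0.0, 2.0, 2001)
density = np.full_like(t, 4.0)               # constant 4 syllables per second
rate = speaking_rate(density, t, 0.5, 1.5)   # roughly 4.0 syllables/s
```

With a learned weighting function, `density` would instead be the weighted combination of time-frequency features at each frame; the integration step is unchanged.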
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.
Accurate estimators of correlation functions in Fourier space
NASA Astrophysics Data System (ADS)
Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.
2016-08-01
Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using interpolation kernels of higher order than the standard Cloud-In-Cell algorithm results in a significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
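A one-dimensional toy version of Cloud-In-Cell assignment with interlacing (not the authors' code; the half-cell phase compensation follows the standard shift theorem) can be sketched as:

```python
import numpy as np

def cic_assign(pos, n, box):
    """Cloud-In-Cell assignment of particle positions to a 1-D periodic grid."""
    grid = np.zeros(n)
    x = pos / box * n
    i = np.floor(x).astype(int)
    f = x - i                                 # fractional cell offset
    np.add.at(grid, i % n, 1.0 - f)           # mass split between the two
    np.add.at(grid, (i + 1) % n, f)           # neighbouring grid points
    return grid

def density_fourier_interlaced(pos, n, box):
    """Average two CIC grids offset by half a cell; the phase factor re-aligns
    the shifted grid so that odd aliasing images cancel."""
    shift = 0.5 * box / n
    d1 = cic_assign(pos, n, box)
    d2 = cic_assign((pos + shift) % box, n, box)
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    return 0.5 * (np.fft.fft(d1) + np.fft.fft(d2) * np.exp(1j * k * shift))
```

The k = 0 mode of the combined field equals the total particle count, since CIC conserves mass and the phase factor is unity there.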
Accurate heart rate estimation from camera recording via MUSIC algorithm.
Fouladi, Seyyed Hamed; Balasingham, Ilangko; Ramstad, Tor Audun; Kansanen, Kimmo
2015-01-01
In this paper, we propose an algorithm to extract heart rate frequency from video camera recordings using the MUltiple SIgnal Classification (MUSIC) algorithm. This improves the accuracy of the estimated heart rate frequency in cases where performance is limited by the number of samples and the frame rate. Monitoring vital signs remotely can be exploited for both non-contact physiological and psychological diagnosis. The color variation recorded by ordinary cameras is used for heart rate monitoring. The orthogonality between signal space and noise space is used to find a more accurate heart rate frequency in comparison with traditional methods. It is shown via experimental results that the limitation of previous methods can be overcome by using subspace methods. PMID:26738015
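A minimal MUSIC frequency estimator on a single extracted colour trace might look as follows (a sketch only: window length, subspace dimension, and search band are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def music_frequency(x, fs, n_sinusoids=1, m=40, f_grid=None):
    """Estimate the dominant frequency of x (Hz) via the MUSIC pseudospectrum.

    A real sinusoid occupies two complex-exponential signal dimensions,
    so the signal subspace has dimension 2 * n_sinusoids.
    """
    p = 2 * n_sinusoids
    n = len(x)
    X = np.array([x[i:i + m] for i in range(n - m)])   # lagged data matrix
    R = X.T @ X / (n - m)                              # m x m sample covariance
    _, V = np.linalg.eigh(R)                           # eigenvalues ascending
    En = V[:, :m - p]                                  # noise subspace
    if f_grid is None:
        f_grid = np.linspace(0.6, 3.0, 961)            # plausible heart-rate band (Hz)
    spectrum = []
    for f in f_grid:
        a = np.exp(-2j * np.pi * f / fs * np.arange(m))  # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.T @ a) ** 2)
    return f_grid[int(np.argmax(spectrum))]
```

The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace, which is the orthogonality property the abstract refers to.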
Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion
Yadav, Nagesh; Bleakley, Chris
2014-01-01
Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584
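The detection step described above can be sketched as follows (sign conventions, reference values, and thresholds are illustrative assumptions; the paper's particle-filter correction is not shown):

```python
import numpy as np

def magnetic_disturbance(acc, mag, dip_ref_deg, b_ref, dip_tol=5.0, b_tol=0.1):
    """Flag a magnetometer sample as distorted.

    acc         : 3-vector accelerometer reading, assumed ~aligned with gravity at rest
    mag         : 3-vector magnetometer reading
    dip_ref_deg : reference dip angle of the undistorted field
    b_ref       : reference field magnitude
    Flags the sample if either the dip angle (relative to the gravity vector)
    or the field strength deviates beyond tolerance.
    """
    b = np.linalg.norm(mag)
    cos_ang = np.dot(acc, mag) / (np.linalg.norm(acc) * b)
    dip = 90.0 - np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return abs(dip - dip_ref_deg) > dip_tol or abs(b - b_ref) / b_ref > b_tol
```

Using both tests catches distortions that change the field direction without changing its strength, and vice versa.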
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as a Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
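The paper's hybrid simulator combines analytic expressions with Monte Carlo; the simplest analytic ingredient of such simulators, shown here only as an illustration, is the Stejskal-Tanner signal for free (Gaussian) diffusion:

```python
import numpy as np

def dwmri_signal(b_values, diffusivity, s0=1.0):
    """DW-MRI signal for free Gaussian diffusion: S(b) = S0 * exp(-b * D).

    b_values    : b-values in s/mm^2
    diffusivity : apparent diffusion coefficient in mm^2/s
    """
    return s0 * np.exp(-np.asarray(b_values, dtype=float) * diffusivity)
```

For example, at b = 1000 s/mm^2 and D = 2e-3 mm^2/s the signal is attenuated to exp(-2) of its b = 0 value; restricted geometries such as cylinders deviate from this mono-exponential form, which is what the paper's hybrid method models.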
LOCATION OF BODY FAT AMONG WOMEN WHO ACCURATELY OR INACCURATELY PERCEIVE THEIR WEIGHT STATUS.
Rote, Aubrianne E; Klos, Lori A; Swartz, Ann M
2015-10-01
This cross-sectional study investigated location of body fat, with specific focus on abdominal fat, among normal weight and overweight women who accurately or inaccurately perceived their weight status. Young, adult women (N = 120; M age = 19.5 yr., SD = 1.2) were asked to classify their weight status using the Self-Classified Weight subscale from the Multidimensional Body-Self Relations Questionnaire. Actual weight status was operationalized via dual-energy x-ray absorptiometry. Overweight women who thought they were normal weight had an average of 19 pounds more fat than normal weight women, with 1.5 pounds of this excess located abdominally. Interventions to raise awareness among overweight women unaware of their fat level are warranted. However, these interventions should balance consideration of potential detriments to body image among these women. PMID:26474442
A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors
Hwang, Beomsoo; Jeon, Doyoung
2015-01-01
In exoskeletal robots, the quantification of the user’s muscular effort is important to recognize the user’s motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using a joint torque sensor, which contains the measurements of dynamic effects of the human body such as the inertial, Coriolis, and gravitational torques as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately in both relaxed and activated muscle conditions. PMID:25860074
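The extraction step reduces to subtracting the modelled limb dynamics from the sensor reading. A one-degree-of-freedom pendulum sketch (the paper formulates the full multi-joint dynamics; parameters here are hypothetical):

```python
import numpy as np

def muscular_torque(tau_measured, q, dq, ddq, m, l, I, c=0.0, g=9.81):
    """Estimate active muscular torque from a joint-torque-sensor reading.

    tau_measured : torque sensor reading (N*m)
    q, dq, ddq   : joint angle (rad, from vertical), velocity, acceleration
    m, l, I      : limb mass, distance to centre of mass, inertia about the joint
    c            : optional viscous friction coefficient
    """
    # inertial + friction + gravitational torques of the passive limb
    tau_dyn = I * ddq + c * dq + m * g * l * np.sin(q)
    return tau_measured - tau_dyn
```

With a fully relaxed muscle the sensor reads only the limb dynamics, so the estimated muscular torque is zero.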
Weight Estimation Tool for Children Aged 6 to 59 Months in Limited-Resource Settings
2016-01-01
Importance A simple, reliable anthropometric tool for rapid estimation of weight in children would be useful in limited-resource settings where current weight estimation tools are not uniformly reliable, nearly all global under-five mortality occurs, severe acute malnutrition is a significant contributor in approximately one-third of under-five mortality, and a weight scale may not be immediately available in emergencies to first-response providers. Objective To determine the accuracy and precision of mid-upper arm circumference (MUAC) and height as weight estimation tools in children under five years of age in low-to-middle income countries. Design This was a retrospective observational study. Data were collected in 560 nutritional surveys during 1992–2006 using a modified Expanded Program of Immunization two-stage cluster sample design. Setting Locations with high prevalence of acute and chronic malnutrition. Participants A total of 453,990 children met inclusion criteria (age 6–59 months; weight ≤ 25 kg; MUAC 80–200 mm) and exclusion criteria (bilateral pitting edema; biologically implausible weight-for-height z-score (WHZ), weight-for-age z-score (WAZ), and height-for-age z-score (HAZ) values). Exposures Weight was estimated using Broselow Tape, Hong Kong formula, and database MUAC alone, height alone, and height and MUAC combined. Main Outcomes and Measures Mean percentage difference between true and estimated weight, proportion of estimates accurate to within ± 25% and ± 10% of true weight, weighted Kappa statistic, and Bland-Altman bias were reported as measures of tool accuracy. Standard deviation of mean percentage difference and Bland-Altman 95% limits of agreement were reported as measures of tool precision. Results Database height was a more accurate and precise predictor of weight compared to Broselow Tape 2007 [B], Broselow Tape 2011 [A], and MUAC. Mean percentage difference between true and estimated weight was +0.49% (SD = 10
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
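The two decision functions the study compares can be sketched on a discrete bimodal prior (the prior, observation model, and numbers below are hypothetical, not the experimental stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete bimodal prior over numerosities 1..10
values = np.arange(1, 11)
prior = np.array([1, 4, 8, 4, 1, 1, 4, 8, 4, 1], dtype=float)
prior /= prior.sum()

def likelihood(obs, sigma=1.5):
    """Noisy observation model: discretized Gaussian around the observed count."""
    return np.exp(-0.5 * ((values - obs) / sigma) ** 2)

def posterior(prior, lik):
    p = prior * lik
    return p / p.sum()

post = posterior(prior, likelihood(7.2))

# Decision function 1: take the maximum of the posterior
map_response = values[np.argmax(post)]
# Decision function 2: draw a sample from the posterior
sampled_response = rng.choice(values, p=post)
```

Participants' responses fell between these two extremes: more deterministic than posterior sampling, more variable than pure maximization.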
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Technology Transfer Automated Retrieval System (TEKTRAN)
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...
NASA Astrophysics Data System (ADS)
Rhee, Young Min
2000-10-01
A modified method to construct an accurate potential energy surface by interpolation is presented. The modification is based on the use of Cartesian coordinates in the weighting function. The translational and rotational invariance of the potential is incorporated by a proper definition of the distance between two Cartesian configurations. A numerical algorithm to find the distance is developed. It is shown that the present method is more exact in describing a planar system compared to the previous methods with weightings in internal coordinates. The applicability of the method to reactive systems is also demonstrated by performing classical trajectory simulations on the surface.
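The core of such interpolated surfaces is inverse-distance (Shepard-type) weighting of known energies. A bare sketch follows; note that the paper's contribution is measuring the distance between *Cartesian* configurations with proper translational and rotational alignment, which this toy version omits:

```python
import numpy as np

def shepard_potential(x, data_pts, data_vals, p=4, eps=1e-12):
    """Shepard interpolation: V(x) = sum_i w_i V_i / sum_i w_i,
    with inverse-power weights w_i = 1 / d(x, x_i)^p in the chosen distance."""
    x = np.asarray(x, dtype=float)
    d = np.linalg.norm(np.asarray(data_pts) - x, axis=1)
    if d.min() < eps:                       # exactly on a data point
        return float(data_vals[int(np.argmin(d))])
    w = 1.0 / d ** p
    return float(w @ data_vals / w.sum())
```

The interpolant reproduces the data points exactly and blends smoothly between them, with the weighting exponent p controlling how local the interpolation is.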
On Relevance Weight Estimation and Query Expansion.
ERIC Educational Resources Information Center
Robertson, S. E.
1986-01-01
A Bayesian argument is used to suggest modifications to the Robertson and Jones relevance weighting formula to accommodate the addition to the query of terms taken from the relevant documents identified during the search. (Author)
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 402-434-4880, by e-mailing nfo@ncwm.net, or on the Internet at http://www.nist.gov/owm. (b) All scales... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. These materials are incorporated as they exist on the date... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Scales; accurate weights,...
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 402-434-4880, by e-mailing nfo@ncwm.net, or on the Internet at http://www.nist.gov/owm. (b) All scales... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. These materials are incorporated as they exist on the date... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Scales; accurate weights,...
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 402-434-4880, by e-mailing nfo@ncwm.net, or on the Internet at http://www.nist.gov/owm. (b) All scales... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. These materials are incorporated as they exist on the date... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Scales; accurate weights,...
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 402-434-4880, by e-mailing nfo@ncwm.net, or on the Internet at http://www.nist.gov/owm. (b) All scales... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. These materials are incorporated as they exist on the date... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Scales; accurate weights,...
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 402-434-4880, by e-mailing nfo@ncwm.net, or on the Internet at http://www.nist.gov/owm. (b) All scales... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. These materials are incorporated as they exist on the date... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Scales; accurate weights,...
Does more accurate exposure prediction necessarily improve health effect estimates?
Szpiro, Adam A; Paciorek, Christopher J; Sheppard, Lianne
2011-09-01
A unique challenge in air pollution cohort studies and similar applications in environmental epidemiology is that exposure is not measured directly at subjects' locations. Instead, pollution data from monitoring stations at some distance from the study subjects are used to predict exposures, and these predicted exposures are used to estimate the health effect parameter of interest. It is usually assumed that minimizing the error in predicting the true exposure will improve health effect estimation. We show in a simulation study that this is not always the case. We interpret our results in light of recently developed statistical theory for measurement error, and we discuss implications for the design and analysis of epidemiologic research.
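The counterintuitive result can be illustrated with a toy simulation contrasting classical-type and Berkson-type exposure error (a simplified illustration of the phenomenon, not the paper's simulation design): the predictor with the *smaller* prediction error yields the *more* biased health-effect estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 200_000, 1.0

x = rng.normal(size=n)                    # true exposure at subject locations
y = beta * x + rng.normal(size=n)         # simulated health outcome

# Predictor A: classical-type error -- smaller prediction RMSE (~0.50)
w_a = x + 0.5 * rng.normal(size=n)
# Predictor B: Berkson-type error -- larger prediction RMSE (~0.60);
# a coarse "smoothed" prediction, E[x | sign(x)] for a standard normal
w_b = np.sign(x) * np.sqrt(2 / np.pi)

def ols_slope(w, y):
    return np.polyfit(w, y, 1)[0]

rmse_a = np.sqrt(np.mean((x - w_a) ** 2))
rmse_b = np.sqrt(np.mean((x - w_b) ** 2))
slope_a = ols_slope(w_a, y)               # attenuated toward beta / (1 + 0.25)
slope_b = ols_slope(w_b, y)               # approximately unbiased for beta
```

Classical error attenuates the slope by var(x)/(var(x) + var(error)), while Berkson-type smoothing leaves it unbiased, so smaller exposure RMSE does not guarantee better health-effect estimation.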
ERIC Educational Resources Information Center
Geller, Josie; Srikameswaran, Suja; Zaitsoff, Shannon L.; Cockell, Sarah J.; Poole, Gary D.
2003-01-01
Examined parents' awareness of their daughters' attitudes, beliefs, and feelings about their bodies. Sixty-six adolescent daughters completed an eating disorder scale, a body figure rating scale, and made ratings of their shape and weight. Greater discrepancies between parents' estimates of daughters' body esteem and daughters' self-reported body…
Accurate response surface approximations for weight equations based on structural optimization
NASA Astrophysics Data System (ADS)
Papila, Melih
Accurate weight prediction methods are vitally important for aircraft design optimization. Therefore, designers seek weight prediction techniques with low computational cost and high accuracy, and usually require a compromise between the two. The compromise can be achieved by combining stress analysis and response surface (RS) methodology. While stress analysis provides accurate weight information, RS techniques help to transmit effectively this information to the optimization procedure. The focus of this dissertation is structural weight equations in the form of RS approximations and their accuracy when fitted to results of structural optimizations that are based on finite element analyses. Use of RS methodology filters out the numerical noise in structural optimization results and provides a smooth weight function that can easily be used in gradient-based configuration optimization. In engineering applications RS approximations of low order polynomials are widely used, but the weight may not be modeled well by low-order polynomials, leading to bias errors. In addition, some structural optimization results may have high-amplitude errors (outliers) that may severely affect the accuracy of the weight equation. Statistical techniques associated with RS methodology are sought in order to deal with these two difficulties: (1) high-amplitude numerical noise (outliers) and (2) approximation model inadequacy. The investigation starts with reducing approximation error by identifying and repairing outliers. A potential reason for outliers in optimization results is premature convergence, and outliers of such nature may be corrected by employing different convergence settings. It is demonstrated that outlier repair can lead to accuracy improvements over the more standard approach of removing outliers. The adequacy of approximation is then studied by a modified lack-of-fit approach, and RS errors due to the approximation model are reduced by using higher order polynomials. In
Accurate feature detection and estimation using nonlinear and multiresolution analysis
NASA Astrophysics Data System (ADS)
Rudin, Leonid; Osher, Stanley
1994-11-01
A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part, on this work.
Simulation model accurately estimates total dietary iodine intake.
Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C
2009-07-01
One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, but some young children (<5%) were at risk of intakes that were too low. In the scenario of a potential future situation using lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods will contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.
Estimating ideal body weight--a new formula.
Lemmens, Harry J M; Brodsky, Jay B; Bernstein, Donald P
2005-08-01
A simple formula for estimating ideal body weight (IBW) in kilograms for both men and women is presented. The equation IBW = 22 × H², where H is equal to patient height in meters, yields weight values midway within the range of weights obtained using published IBW formulae.
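The formula is a one-liner in code; for example, a patient 1.75 m tall has an ideal body weight of 22 × 1.75² = 67.375 kg:

```python
def ideal_body_weight(height_m):
    """Lemmens formula: IBW (kg) = 22 * height^2, with height in metres."""
    return 22.0 * height_m ** 2
```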
Weight estimation techniques for composite airplanes in general aviation industry
NASA Technical Reports Server (NTRS)
Paramasivam, T.; Horn, W. J.; Ritter, J.
1986-01-01
Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as the wing, fuselage, and empennage. Regression analysis was applied to the basic equations for a database of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
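Fitting coefficients of such weight equations by regression can be sketched as follows (the power-law form, variables, and coefficient values are hypothetical stand-ins, not the paper's equations; only the twelve-airplane sample size is taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical component-weight model W = c0 * S^c1 * E^c2
# (S = wing area, E = material modulus). Linear in the logs, so ordinary
# least squares on log-transformed data recovers the coefficients.
n = 12                                     # twelve airplanes, as in the study
S = rng.uniform(10, 40, n)
E = rng.uniform(50, 200, n)
c0, c1, c2 = 2.0, 1.3, -0.4
W = c0 * S**c1 * E**c2 * np.exp(0.01 * rng.standard_normal(n))  # small noise

A = np.column_stack([np.ones(n), np.log(S), np.log(E)])
coef, *_ = np.linalg.lstsq(A, np.log(W), rcond=None)
# coef = [ln c0, c1, c2]
```

Because the material property enters with its own exponent, the same fitted equation can be evaluated for metallic or composite components by changing E.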
Image analysis for estimating the weight of live animals
NASA Astrophysics Data System (ADS)
Schofield, C. P.; Marchant, John A.
1991-02-01
Many components of animal production have been automated: for example, weighing, feeding, identification and yield recording for cattle, pigs, poultry and fish. However, some of these tasks still require a considerable degree of human input, and more effective automation could lead to better husbandry. For example, if the weight of pigs could be monitored more often without increasing labour input, then this information could be used to measure growth rates and control fat level, allowing accurate prediction of market dates and optimum carcass quality to be achieved with improved welfare at minimum cost. Some aspects of animal production have defied automation; for example, attending to the well-being of housed animals is the preserve of the expert stockman. He gathers visual data about the animals in his charge (in plainer words, goes and looks at their condition and behaviour) and processes this data to draw conclusions and take actions. Automatically collecting data on well-being implies that the animals are not disturbed from their normal environment, otherwise false conclusions will be drawn. Computer image analysis could provide the data required without the need to disturb the animals. This paper describes new work at the Institute of Engineering Research which uses image analysis to estimate the weight of pigs as a starting point for the wider range of applications which have been identified. In particular, a technique has been developed to
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John
2016-01-01
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
Zhang, Xinyue; Lourenco, Daniela; Aguilar, Ignacio; Legarra, Andres; Misztal, Ignacy
2016-01-01
former. Manhattan plots had higher resolution with 5 and 100 QTL. Using a common weight for a window of 20 SNP that sums or averages the SNP variance enhances accuracy of predicting GEBV and provides accurate estimation of marker effects. PMID:27594861
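The 20-SNP window weighting the abstract describes can be sketched as follows. This is only a minimal illustration of summing or averaging per-SNP variances over fixed windows and broadcasting one common weight back to each SNP; the function name and array layout are assumptions, not the authors' actual ssGWAS implementation.

```python
import numpy as np

def window_weights(snp_var, window=20, mode="sum"):
    """Assign one common weight to each window of `window` consecutive SNPs
    by summing (or averaging) the individual SNP variances, then broadcast
    that common weight back to every SNP in the window."""
    n = len(snp_var)
    weights = np.empty(n)
    for start in range(0, n, window):
        block = snp_var[start:start + window]
        w = block.sum() if mode == "sum" else block.mean()
        weights[start:start + window] = w
    return weights
```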
Comparative Study of Clinical and Sonographic Estimation of Foetal Weight at Term.
Bakshi, L; Begum, H A; Khan, I; Dey, S K; Bhattacharjee, M; Bakshi, M K; Dey, S; Habib, A; Barman, K K
2015-07-01
A cross sectional comparative study was conducted at Dhaka National Medical College, Dhaka from January to June 2012, to assess the accuracy of clinical and ultrasonographic estimation of foetal weight at term in our environment. Seventy five pregnant women who fulfilled the inclusion criteria had their foetal weight estimated independently using clinical and ultrasonographic methods. Accuracy was determined by percentage error, absolute percentage error, and the proportion of estimates within 10% of actual birth weight. Statistical analysis was done using the paired t-test, the Wilcoxon signed-rank test, and the chi-square test. The study sample had an actual average birth weight of 2989.60 ± 408.76 gm (range 2310-4000 gm). Overall, the clinical method overestimated birth-weight, while ultrasound underestimated it. The mean absolute percentage error of the clinical method was more than that of the sonographic method, and the number of estimates within 10% of actual birth weight for the clinical method (41.3%) was less than for the sonographic method (57.3%); the difference was not statistically significant. In the low birth-weight (<2,500 gm) group, the mean absolute percentage error of sonographic estimates was significantly smaller. Significantly more sonographic estimates (75%) were within 10% of actual birth-weight than those of the clinical method (0%). No statistically significant difference was observed in any of the measures of accuracy for the normal birth-weight range of 2,500-<4,000 gm or in the macrosomic group (≥ 4,000 gm). Clinical estimation of birth-weight is as accurate as routine ultrasonographic estimation, except in low-birth-weight babies.
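The accuracy measures used in this study (signed percentage error, absolute percentage error, and the proportion of estimates within 10% of the actual birth weight) can be computed as in this minimal sketch; the function name is an assumption.

```python
import numpy as np

def accuracy_metrics(estimated, actual):
    """Return mean percentage error, mean absolute percentage error, and the
    share of estimates falling within 10% of the actual birth weight."""
    est = np.asarray(estimated, float)
    act = np.asarray(actual, float)
    pe = 100.0 * (est - act) / act       # signed percentage error
    ape = np.abs(pe)                      # absolute percentage error
    within10 = np.mean(ape <= 10.0)       # proportion within +/-10%
    return pe.mean(), ape.mean(), within10
```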
NASA Technical Reports Server (NTRS)
Sensmeier, Mark D.; Samareh, Jamshid A.
2005-01-01
An approach is proposed for the application of rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.
Estimating the weight of generally configured dual wing systems
NASA Technical Reports Server (NTRS)
Cronin, D. L.; Somnay, R. J.
1985-01-01
Formulas available for the weight estimation of monoplane wings are not appropriate for generally configured dual wing systems. In the present paper a method is described which simultaneously generates a structural weight estimate and a fully stressed, quasi-optimal structure for a model of a dual wing system. The method is fast and inexpensive, and is ideally suited to preliminary design. To illustrate the method, a dual wing system and a conventional wing system are sized. Numerical computation is shown to be suitably fast for both cases and, for both cases, convergence to a final configuration is shown to be quite rapid. To validate the method, a conventional wing is sized and its weight as obtained by the present method is compared to its weight determined by a reputable weight estimation formula. The results are shown to be very close.
Pros, Cons, and Alternatives to Weight Based Cost Estimating
NASA Technical Reports Server (NTRS)
Joyner, Claude R.; Lauriem, Jonathan R.; Levack, Daniel H.; Zapata, Edgar
2011-01-01
Many cost estimating tools use weight as a major parameter in projecting the cost. This is often combined with modifying factors such as complexity, technical maturity of design, environment of operation, etc. to increase the fidelity of the estimate. For a set of conceptual designs, all meeting the same requirements, increased weight can be a major driver in increased cost. However, once a design is fixed, increased weight generally decreases cost, while decreased weight generally increases cost - and the relationship is not linear. Alternative approaches to estimating cost without using weight (except perhaps for materials costs) have been attempted to try to produce a tool usable throughout the design process - from concept studies through development. This paper will address the pros and cons of using weight based models for cost estimating, using liquid rocket engines as the example. It will then examine approaches that minimize the impact of weight based cost estimating. The Rocket Engine Cost Model (RECM) is an attribute based model developed internally by Pratt & Whitney Rocketdyne for NASA. RECM will be presented primarily to show a successful method to use design and programmatic parameters instead of weight to estimate both design and development costs and production costs. An operations model developed by KSC, the Launch and Landing Effects Ground Operations model (LLEGO), will also be discussed.
NASA Technical Reports Server (NTRS)
Grissom, D. S.; Schneider, W. C.
1971-01-01
The determination of a base line (minimum weight) design for the primary structure of the living quarters modules in an earth-orbiting space base was investigated. Although the design is preliminary in nature, the supporting analysis is sufficiently thorough to provide a reasonably accurate weight estimate of the major components that are considered to comprise the structural weight of the space base.
Fetal weight estimation by ultrasonic measurement of abdominal circumference.
Kearney, K; Vigneron, N; Frischman, P; Johnson, J W
1978-02-01
The purpose of this study was to compare ultrasonic measurements of fetal abdominal circumference to ultrasonic measurements of fetal biparietal diameter, as a means of estimating fetal body weight. Of 58 fetuses who had abdominal circumferences measured, 48 (82%) of the predicted weights were within 15% of the actual birth weights. Forty-four of the same 58 fetuses had satisfactory biparietal diameter measurements, but only 21 (48%) of the predicted weights were within 15% of the actual birthweights. Ultrasonic measurement of abdominal circumference appears to be a more reliable index of fetal body weight than other currently available techniques.
Accurate measurement of body weight and food intake in environmentally enriched male Wistar rats.
Beale, Kylie E L; Murphy, Kevin G; Harrison, Eleanor K; Kerton, Angela J; Ghatei, Mohammad A; Bloom, Stephen R; Smith, Kirsty L
2011-08-01
Laboratory animals are crucial in the study of energy homeostasis. In particular, rats are used to study alterations in food intake and body weight. To accurately record food intake or energy expenditure it is necessary to house rats individually, which can be stressful for social animals. Environmental enrichment may reduce stress and improve welfare in laboratory rodents. However, the effect of environmental enrichment on food intake and thus experimental outcome is unknown. We aimed to determine the effect of environmental enrichment on food intake, body weight, behavior and fecal and plasma stress hormones in male Wistar rats. Singly housed 5-7-week-old male rats were given either no environmental enrichment, chew sticks, a plastic tube of 67 mm internal diameter, or both chew sticks and a tube. No differences in body weight or food intake were seen over a 7-day period. Importantly, the refeeding response following a 24-h fast was unaffected by environmental enrichment. Rearing, a behavior often associated with stress, was significantly reduced in all enriched groups compared to controls. There was a significant increase in fecal immunoglobulin A (IgA) in animals housed with both forms of enrichment compared to controls at the termination of the study, suggesting enrichment reduces hypothalamo-pituitary-adrenal (HPA) axis activity in singly housed rats. In summary, environmental enrichment does not influence body weight and food intake in singly housed male Wistar rats and may therefore be used to refine the living conditions of animals used in the study of energy homeostasis without compromising experimental outcome.
Selecting class weights to minimize classification bias in acreage estimation
NASA Technical Reports Server (NTRS)
Belcher, W. M.; Minter, T. C.
1976-01-01
Preliminary results of experiments being performed to select optimal class weights for use with the maximum likelihood classifier in acreage estimation using remote sensor imagery are presented. These weights will be optimal in the sense that the bias will be minimized in the proportion estimate obtained from the classification results by sample counting. The procedure was tested using Landsat MSS data from an 8 by 9.6 km area of ground truth in Finney County, Kansas.
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
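As a rough illustration of the composite quantile objective this abstract builds on: the sketch below minimizes a weighted sum of check (pinball) losses over several quantile levels with a shared slope. The paper's data-driven weighting scheme and fast algorithm are not reproduced; the warm start, solver choice, and function names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def weighted_cqr(x, y, taus, weights):
    """Fit y ~ a_k + b*x by minimizing the weighted composite quantile
    objective sum_k w_k * sum_i rho_tau_k(y_i - a_k - b*x_i):
    one intercept per quantile level, one shared slope."""
    K = len(taus)

    def objective(theta):
        a, b = theta[:K], theta[K]
        return sum(w * check_loss(y - ai - b * x, tau).sum()
                   for ai, tau, w in zip(a, taus, weights))

    # Warm-start from an ordinary least-squares line, then polish with a
    # derivative-free method (the objective is piecewise linear).
    slope0, intercept0 = np.polyfit(x, y, 1)
    theta0 = np.concatenate([np.full(K, intercept0), [slope0]])
    res = minimize(objective, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-10, "xatol": 1e-10})
    return res.x[:K], res.x[K]
```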
ERIC Educational Resources Information Center
Natale, Ruby; Uhlhorn, Susan B.; Lopez-Mitnik, Gabriela; Camejo, Stephanie; Englebert, Nicole; Delamater, Alan M.; Messiah, Sarah E.
2016-01-01
Background: One in four preschool-age children in the United States are currently overweight or obese. Previous studies have shown that caregivers of this age group often have difficulty accurately recognizing their child's weight status. The purpose of this study was to examine factors associated with accurate/inaccurate perception of child body…
Weighted measurement fusion Kalman estimator for multisensor descriptor system
NASA Astrophysics Data System (ADS)
Dou, Yinfeng; Ran, Chenjian; Gao, Yuan
2016-08-01
For the multisensor linear stochastic descriptor system with correlated measurement noises, the fused measurement can be obtained based on the weighted least square (WLS) method, and the reduced-order state components are obtained by applying the singular value decomposition method. The multisensor descriptor system is thus transformed into a fused reduced-order non-descriptor system with correlated noise, and the weighted measurement fusion (WMF) Kalman estimator of this reduced-order subsystem is presented. According to the relationship between the presented non-descriptor system and the original descriptor system, the WMF Kalman estimator and its estimation error variance matrix for the original multisensor descriptor system are presented. The presented WMF Kalman estimator has global optimality and, compared with the state fusion method, avoids computing the cross-covariances of the local Kalman estimators. A simulation example with a three-sensor stochastic dynamic input-output system from economics verifies its effectiveness.
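The WLS fusion step the abstract builds on can be illustrated with the standard stacked weighted-least-squares estimate x = (H'R⁻¹H)⁻¹H'R⁻¹z. This sketch assumes noises uncorrelated across sensors (block-diagonal R) for simplicity, whereas the paper handles correlated measurement noises.

```python
import numpy as np

def wls_fuse(H_list, z_list, R_list):
    """Weighted-least-squares fusion of multisensor measurements
    z_i = H_i x + v_i with cov(v_i) = R_i (taken block-diagonal here,
    i.e. noises uncorrelated across sensors). Returns the fused
    estimate and its covariance."""
    H = np.vstack(H_list)
    z = np.concatenate(z_list)
    R = np.zeros((len(z), len(z)))
    i = 0
    for Ri in R_list:                  # build block-diagonal noise covariance
        m = Ri.shape[0]
        R[i:i + m, i:i + m] = Ri
        i += m
    W = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ W @ H)     # fused error covariance
    x = P @ H.T @ W @ z                # fused (minimum-variance) estimate
    return x, P
```

For two equally noisy scalar sensors, the fused estimate is simply the average of the two measurements with halved variance, as expected.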
Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal
2015-01-01
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
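The basic (unmodified) network scale-up estimator underlying this work is a simple ratio. The sketch below shows that classical form with optional sample weights; it is not the authors' new estimator or their recursive trimming procedure, and the function name is an assumption.

```python
def nsum_estimate(m, c, population, weights=None):
    """Basic network scale-up estimate:
    N_hidden ~= N * sum(w_i * m_i) / sum(w_i * c_i),
    where m_i = number of hidden-population members respondent i reports
    knowing, c_i = respondent i's estimated personal network size, and
    w_i = optional sampling weight."""
    if weights is None:
        weights = [1.0] * len(m)
    num = sum(w * mi for w, mi in zip(weights, m))
    den = sum(w * ci for w, ci in zip(weights, c))
    return population * num / den
```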
Advanced composites sizing guide for preliminary weight estimates
NASA Astrophysics Data System (ADS)
Burns, J. W.
During the preliminary design and proposal phases, the mass properties engineer must make rough preliminary weight estimates to improve or verify Level I and Level II estimates and to support trade studies of various types of construction, materials substitution, wing t/c, and design criteria changes. The purpose of this paper is to provide a simple, easy-to-understand preliminary sizing guide, with some numeric examples, to aid the mass properties engineer who is inexperienced with advanced composites analysis.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensors control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with low efficiency problem in traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that it is more efficient than conventional searching methods to achieve coarse frequency estimation (location peak of FFT amplitude) by applying modified zero-crossing technique. Thus, the proposed estimation algorithm requires less hardware and software sources and can achieve even higher efficiency when the experimental data increase. Experimental results with modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
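A generic two-step (coarse-then-fine) frequency estimator can be sketched as below: the coarse step locates the FFT amplitude peak and the fine step applies parabolic interpolation on the log-magnitude around it. Note that the paper's own coarse step uses a modified zero-crossing technique rather than a peak search, so this is only a stand-in illustrating the coarse/fine idea.

```python
import numpy as np

def estimate_frequency(signal, fs):
    """Two-step frequency estimate for a dominant sinusoid:
    coarse = FFT amplitude peak bin; fine = parabolic interpolation of the
    log-magnitude of the three bins around the peak."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1            # coarse bin (skip DC/Nyquist)
    a, b, c = np.log(spec[k - 1:k + 2] + 1e-30)   # three bins around the peak
    delta = 0.5 * (a - c) / (a - 2 * b + c)       # fractional-bin correction
    return (k + delta) * fs / n
```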
Development of Classification and Story Building Data for Accurate Earthquake Damage Estimation
NASA Astrophysics Data System (ADS)
Sakai, Yuki; Fukukawa, Noriko; Arai, Kensuke
We investigated a method of developing classification and story building data from a census population database in order to estimate earthquake damage more accurately, especially in urban areas, presuming that there is a correlation between the numbers of non-wooden or high-rise buildings and the population. We formulated equations for estimating the numbers of wooden houses, low-to-mid-rise (1-9 story) and high-rise (over 10 story) non-wooden buildings in a 1 km mesh from night and daytime population databases, based on the building data we investigated and collected in 20 selected meshes in the Kanto area. We could accurately estimate the numbers of the three building classes by the formulated equations, but in some special cases, such as the apartment block mesh, the estimated values are quite different from the actual values.
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Sensmeier, mark D.; Stewart, Bret A.
2006-01-01
Algorithms for rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process have been developed. Application of these algorithms should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. Recent enhancements to this approach include the porting of the algorithms to a platform-independent software language Python, and modifications to specifically consider morphing aircraft-type configurations. Two sample cases which illustrate these recent developments are presented.
Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.
2012-01-01
Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings, and are used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.
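The non-optimum factor itself is just the ratio of as-built weight to idealized reference weight, and its use in early design is a simple scaling, as in this trivial sketch (function names are assumptions):

```python
def non_optimum_factor(actual_weight, reference_weight):
    """Non-optimum factor = as-built (flight hardware) weight divided by
    the idealized reference weight predicted from sizing analysis."""
    return actual_weight / reference_weight

def apply_nof(reference_weight, nof):
    """Scale an idealized reference weight up to a realistic as-built
    weight estimate using a previously derived non-optimum factor."""
    return nof * reference_weight
```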
EDIN0613P weight estimating program. [for launch vehicles
NASA Technical Reports Server (NTRS)
Hirsch, G. N.
1976-01-01
The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. The program is actually part of an overall simulation technique called EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights and derives a large amount of overall vehicle geometry. The program is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). By utilizing the flexibility of this program while remaining cognizant of the limits imposed upon output depth and accuracy by utilization of generalized input, this program concept can be a useful tool for estimating purposes at the conceptual design stage of a launch vehicle.
Are In-Bed Electronic Weights Recorded in the Medical Record Accurate?
Gerl, Heather; Miko, Alexandra; Nelson, Mandy; Godaire, Lori
2016-01-01
This study found large discrepancies between in-bed weights recorded in the medical record and carefully obtained standing weights with a calibrated, electronic bedside scale. This discrepancy appears to be related to inadequate bed calibration before patient admission and having excessive linen, clothing, and/or equipment on the bed during weighing by caregivers. PMID:27522846
Kurugol, Sila; Freiman, Moti; Afacan, Onur; Domachevsky, Liran; Perez-Rossello, Jeannette M; Callahan, Michael J; Warfield, Simon K
2015-01-01
Non-invasive characterization of water molecule's mobility variations by quantitative analysis of diffusion-weighted MRI (DW-MRI) signal decay in the abdomen has the potential to serve as a biomarker in gastrointestinal and oncological applications. Accurate and reproducible estimation of the signal decay model parameters is challenging due to the presence of respiratory, cardiac, and peristalsis motion. Independent registration of each b-value image to the b-value=0 s/mm(2) image prior to parameter estimation might be sub-optimal because of the low SNR and contrast difference between images of varying b-value. In this work, we introduce a motion-compensated parameter estimation framework that simultaneously solves image registration and model estimation (SIR-ME) problems by utilizing the interdependence of acquired volumes along the diffusion weighting dimension. We evaluated the improvement in model parameters estimation accuracy using 16 in-vivo DW-MRI data sets of Crohn's disease patients by comparing parameter estimates obtained using the SIR-ME model to the parameter estimates obtained by fitting the signal decay model to the acquired DW-MRI images. The proposed SIR-ME model reduced the average root-mean-square error between the observed signal and the fitted model by more than 50%. Moreover, the SIR-ME model estimates discriminate between normal and abnormal bowel loops better than the standard parameter estimates.
Performance and Weight Estimates for an Advanced Open Rotor Engine
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.; Tong, Michael T.
2012-01-01
NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow for the achievement of this objective by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Following that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines and are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.
Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?
Hey, Spencer Phillips; Kimmelman, Jonathan
2016-10-01
Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach - forecast analysis - that would enable direct and effective measure of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach. PMID:27197044
Digital combining-weight estimation for broadband sources using maximum-likelihood estimates
NASA Technical Reports Server (NTRS)
Rodemich, E. R.; Vilnrotter, V. A.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system is compared with the maximum-likelihood estimate. The maximum-likelihood approach provides some improvement in performance at the cost of increased computational complexity; however, it is still simple enough to allow implementation on a PC-based combining system.
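For orientation only: the classical optimum combining weights for a single source received on several feeds with independent noise are the maximal-ratio weights w_i = conj(g_i)/sigma_i^2. The sketch below is this generic textbook form, not the article's estimation algorithm, and the names are assumptions.

```python
import numpy as np

def mrc_weights(channel_gains, noise_vars):
    """Maximal-ratio combining weights w_i = conj(g_i) / sigma_i^2,
    normalized to unit norm; maximizes output SNR when the noise on
    each feed is independent."""
    g = np.asarray(channel_gains, complex)
    s = np.asarray(noise_vars, float)
    w = np.conj(g) / s
    return w / np.linalg.norm(w)
```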
48 CFR 52.247-8 - Estimated Weights or Quantities Not Guaranteed.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Estimated Weights or... Provisions and Clauses 52.247-8 Estimated Weights or Quantities Not Guaranteed. As prescribed in 47.207-3(e... transportation-related services when weights or quantities are estimates: Estimated Weights or Quantities...
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, subjective choices, such as determining the camera relative exposure value (REV) and the threshold in the histogram, have hindered accurate computation of gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image which is corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles, and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method yielded accurate and consistent gap fractions across different hole sizes, gap fractions, and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
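The quantity being estimated, gap fraction, is simply the sky-pixel share of a canopy photograph. This sketch uses a fixed brightness threshold purely to illustrate the definition, whereas the proposed method avoids exactly this kind of manual thresholding.

```python
import numpy as np

def gap_fraction(image, threshold):
    """Gap fraction = share of pixels classified as sky (brightness above
    `threshold`) in a canopy cover photograph."""
    sky = np.asarray(image) > threshold
    return sky.mean()
```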
Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.
Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro
2016-01-12
The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
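The nearest-neighbor principle behind the estimator above can be illustrated with the classical Kozachenko-Leonenko estimator in ordinary Euclidean space; the paper's contribution, handling the six-dimensional rotation-translation manifold with its non-Euclidean metric, is not reproduced in this sketch. A minimal version, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def nn_entropy(x):
    """Kozachenko-Leonenko nearest-neighbor entropy estimate in nats.

    x : (n, d) array of samples from an unknown density.
    """
    n, d = x.shape
    # k=2 because the closest point to each sample is the sample itself
    eps = cKDTree(x).query(x, k=2)[0][:, 1]
    # log volume of the unit d-ball
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(1) + log_cd + d * np.mean(np.log(eps))

rng = np.random.default_rng(0)
h = nn_entropy(rng.random((5000, 1)))   # Uniform[0,1]: true entropy is 0 nats
```

For samples from Uniform[0,1] the true differential entropy is 0 nats, and for a standard normal it is 0.5·ln(2πe) ≈ 1.42; the estimate approaches these values as n grows. On the rotation group the Euclidean distance must be replaced by a geodesic one, which is where the paper's exact treatment comes in.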
Alwan, Heba; Viswanathan, Bharathi; Paccaud, Fred; Bovet, Pascal
2011-01-01
Background. We examined body image perception and its association with reported weight-control behavior among adolescents in the Seychelles. Methods. We conducted a school-based survey of 1432 students aged 11–17 years in the Seychelles. Perception of body image was assessed using both a closed-ended question (CEQ) and Stunkard's pictorial silhouettes (SPS). Voluntary attempts to change weight were also assessed. Results. A substantial proportion of the overweight students did not consider themselves as overweight (SPS: 24%, CEQ: 34%), and a substantial proportion of the normal-weight students considered themselves as too thin (SPS: 29%, CEQ: 15%). Logistic regression analysis showed that students with an accurate weight perception were more likely to have appropriate weight-control behavior. Conclusions. We found that substantial proportions of students had an inaccurate perception of their weight and that weight perception was associated with weight-control behavior. These findings point to forces that can drive upward trends in overweight. PMID:21603277
Variance and covariance estimates for weaning weight of Senepol cattle.
Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S
1991-10-01
Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (sigma 2A), maternal additive genetic variance (sigma 2M), covariance between direct and maternal additive genetic effects (sigma AM), permanent maternal environmental variance (sigma 2PE), and residual variance (sigma 2 epsilon) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A-1), to their expectations. Estimates were sigma 2A, 139.05 and 138.14 kg2; sigma 2M, 307.04 and 288.90 kg2; sigma AM, -117.57 and -103.76 kg2; sigma 2PE, -258.35 and -243.40 kg2; and sigma 2 epsilon, 588.18 and 577.72 kg2 with and without A-1, respectively. Heritability estimates for direct additive (h2A) were .211 and .210 with and without A-1, respectively. Heritability estimates for maternal additive (h2M) were .47 and .44 with and without A-1, respectively. Correlations between direct and maternal (IAM) effects were -.57 and -.52 with and without A-1, respectively. PMID:1778806
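The reported heritabilities and direct-maternal correlation follow from the components above if the phenotypic variance is taken as the sum of all five estimates (with the direct-maternal covariance entering once); this convention is an inference from the numbers, but it reproduces the published values exactly. A quick check with the with-A⁻¹ estimates:

```python
import math

# variance components estimated with the relationship matrix A^-1 (kg^2)
s2a, s2m, sam, s2pe, s2e = 139.05, 307.04, -117.57, -258.35, 588.18

s2p = s2a + s2m + sam + s2pe + s2e      # phenotypic variance (658.35 kg^2)
h2a = s2a / s2p                          # direct heritability
h2m = s2m / s2p                          # maternal heritability
ram = sam / math.sqrt(s2a * s2m)         # direct-maternal genetic correlation

print(round(h2a, 3), round(h2m, 2), round(ram, 2))  # → 0.211 0.47 -0.57
```

These match the abstract's h2A = .211, h2M = .47, and correlation −.57.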
Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua
2012-06-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m(-1)), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm(3) voxel volume with isotropic resolution; 13.5-mm(3) volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m(-1)), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
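The curvature computation underlying this approach is standard differential geometry: fit each coordinate to a second-order polynomial in arc length, then evaluate κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2) from the fitted derivatives. A noise-free 2-D sketch on a synthetic circular "tract" (the study itself fits 3-D DTI tracts with imaging noise):

```python
import numpy as np

def tract_curvature(s, x, y, s0):
    """Curvature at arc length s0 from second-order polynomial fits."""
    px, py = np.polyfit(s, x, 2), np.polyfit(s, y, 2)
    dx = np.polyval(np.polyder(px), s0)
    dy = np.polyval(np.polyder(py), s0)
    ddx, ddy = 2 * px[0], 2 * py[0]      # second derivative of a quadratic
    return abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# arc of a circle with radius 0.1 m, i.e. true kappa = 10 m^-1
r = 0.1
s = np.linspace(0.0, 0.02, 50)
x, y = r * np.sin(s / r), r * (1 - np.cos(s / r))
kappa = tract_curvature(s, x, y, s0=0.01)
```

On this clean arc the fitted curvature lands very close to the true 10 m⁻¹; the paper's point is that the same fit also suppresses the noise-driven curvature inflation seen in raw tracts.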
Blanc, Ann K.; Wardlaw, Tessa
2005-01-01
OBJECTIVE: To critically examine the data used to produce estimates of the proportion of infants with low birth weight in developing countries and to describe biases in these data. To assess the effect of adjustment procedures on the estimates and propose a modified estimation procedure for international reporting purposes. METHODS: Mothers' reports about their recent births in 62 nationally representative Demographic and Health Surveys (DHS) conducted between 1990 and 2000 were analysed. The proportion of infants weighed at birth, characteristics of those weighed, extent of misreporting, and mothers' subjective assessments of their children's size at birth were examined. FINDINGS: In many developing countries the majority of infants were not weighed at birth. Those who were weighed were more likely to have mothers who live in urban areas and are educated, and to be born in a medical facility with assistance from medically trained personnel. Birth weights reported by mothers are "heaped" on multiples of 500 grams. CONCLUSION: Current survey-based estimates of the prevalence of low birth weight are biased substantially downwards. Two adjustments to reported data are recommended: a weighting procedure that combines reported birth weights with mothers' assessment of the child's size at birth, and categorization of one-quarter of the infants reported to have a birth weight of exactly 2500 grams as having low birth weight. Averaged over all surveys, these procedures increased the proportion classified as having low birth weight by 25%. We also recommend that the proportion of infants not weighed at birth be routinely reported. Efforts are needed to increase the weighing of newborns and the recording of their weights. PMID:15798841
An evaluation of two indirect methods of estimating body weight in Holstein calves and heifers.
Dingwell, R T; Wallace, M M; McLaren, C J; Leslie, C F; Leslie, K E
2006-10-01
Monitoring the growth of replacement heifers is a useful management tool to assist producers in achieving a reasonable goal for age at first calving. Standard growth curves have been established, and heart girth tapes are widely available to estimate body weight (BW). Probably the easiest, and undoubtedly the most accurate, means of determining the actual BW of heifers is a calibrated electronic scale. However, if an electronic scale is not available, indirect methods of BW estimation are required. The hipometer is a new indirect tool that uses the external width between the greater trochanters of the left and right femurs to estimate BW. The purpose of this observational study was to evaluate the hipometer and the heart girth tape for estimating the BW of Holstein heifers, as compared with the actual weight recorded by an electronic scale. A total of 311 Holstein heifers in 4 research herds, ranging in age from 1 wk to immediately prior to calving (24 mo), were used in this comparison. The mean BW of all heifers was 261 +/- 124 kg. The Pearson correlations between scale and hipometer weights, and between scale and tape weights, were 0.92 and 0.94, respectively. The concordance correlations of scale weight with hipometer and tape weights were 0.98 and 0.99, respectively. The agreement among the 3 methods, as assessed by the kappa statistic, was substantial for heifers aged 3 to 15 mo. However, poor to no agreement was observed in heifers younger than 3 mo and at 15 mo of age or greater (kappa 0 to 0.18). This is of particular concern because these groups represent the ages when dairy heifers would be weaned (< 3 mo) and when breeding would normally commence (> 15 mo). We concluded that the hipometer is an easy and useful alternative method of estimating the BW of Holstein heifers, particularly in heifers aged 3 to 15 mo. PMID:16960075
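The concordance correlation reported above is Lin's coefficient, which, unlike Pearson's r, penalizes location and scale shifts away from the identity line: ρc = 2·cov(x, y) / (σx² + σy² + (μx − μy)²). A minimal sketch on hypothetical paired readings (not the study's data):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population (1/n) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# hypothetical paired body-weight readings (kg): electronic scale vs. tape
scale = np.array([62.0, 110.0, 175.0, 241.0, 305.0])
tape = np.array([65.0, 104.0, 181.0, 235.0, 310.0])
ccc = concordance_ccc(scale, tape)
```

A systematic bias (e.g. a tape that reads 10 kg heavy) lowers ρc even when Pearson's r stays near 1, which is why the study reports both.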
Code of Federal Regulations, 2013 CFR
2013-01-01
...., suite 700, Washington, DC 20408. (b) All scales used to determine the net weight of meat or poultry... by the Director of the Federal Register in accordance with 5 U.S.C. 552(a) and 1 CFR part 51. (These... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Scale requirements for...
Code of Federal Regulations, 2014 CFR
2014-01-01
...., suite 700, Washington, DC 20408. (b) All scales used to determine the net weight of meat or poultry... by the Director of the Federal Register in accordance with 5 U.S.C. 552(a) and 1 CFR part 51. (These... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Scale requirements for...
Code of Federal Regulations, 2012 CFR
2012-01-01
...., suite 700, Washington, DC 20408. (b) All scales used to determine the net weight of meat or poultry... by the Director of the Federal Register in accordance with 5 U.S.C. 552(a) and 1 CFR part 51. (These... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Scale requirements for...
Kerker, Bonnie D.; Owens, Pamela L.; Zigler, Edward; Horwitz, Sarah M.
2004-01-01
OBJECTIVES: The objectives of this literature review were to assess current challenges to estimating the prevalence of mental health disorders among individuals with mental retardation (MR) and to develop recommendations to improve such estimates for this population. METHODS: The authors identified 200 peer-reviewed articles, book chapters, government documents, or reports from national and international organizations on the mental health status of people with MR. Based on the study's inclusion criteria, 52 articles were included in the review. RESULTS: Available data reveal inconsistent estimates of the prevalence of mental health disorders among those with MR, but suggest that some mental health conditions are more common among these individuals than in the general population. Two main challenges to identifying accurate prevalence estimates were found: (1) health care providers have difficulty diagnosing mental health conditions among individuals with MR; and (2) methodological limitations of previous research inhibit confidence in study results. CONCLUSIONS: Accurate prevalence estimates are necessary to ensure the availability of appropriate treatment services. To this end, health care providers should receive more training regarding the mental health treatment of individuals with MR. Further, government officials should discuss mechanisms of collecting nationally representative data, and the research community should utilize consistent methods with representative samples when studying mental health conditions in this population. PMID:15219798
Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.
Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide
2003-03-15
Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks.
Bowden, Jack; Davey Smith, George; Haycock, Philip C; Burgess, Stephen
2016-05-01
Developments in genome-wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse-variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite-sample Type 1 error rates than the inverse-variance weighted method, and is complementary to the recently proposed MR-Egger (Mendelian randomization-Egger) regression method. In analyses of the causal effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol on coronary artery disease risk, the inverse-variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR-Egger regression methods suggest a null effect of high-density lipoprotein cholesterol that corresponds with the experimental evidence. Both median-based and MR-Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants. PMID:27061298
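The weighted median itself is the 50th weighted percentile of the per-variant ratio estimates, conventionally with inverse-variance weights and linear interpolation between order statistics. A minimal sketch on hypothetical variant-level data (the penalized-weight variant described by the authors is omitted):

```python
import numpy as np

def weighted_median(beta, w):
    """Weighted median of per-variant causal estimates beta with weights w."""
    order = np.argsort(beta)
    beta, w = beta[order], w[order]
    # cumulative weight evaluated at the midpoint of each observation's mass
    p = (np.cumsum(w) - 0.5 * w) / w.sum()
    return np.interp(0.5, p, beta)

# hypothetical ratio estimates; the value 1.90 plays an invalid instrument
beta = np.array([0.48, 0.52, 0.50, 0.55, 1.90])
w = np.array([10.0, 8.0, 12.0, 9.0, 11.0])

est = weighted_median(beta, w)      # robust to the invalid instrument
ivw = np.average(beta, weights=w)   # IVW mean is pulled toward it
```

Here the weighted median stays near 0.52 while the IVW estimate is dragged above 0.8 by the single invalid instrument, illustrating the breakdown-resistance the abstract describes.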
Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan
2015-01-01
Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993
Bayesian hemodynamic parameter estimation by bolus tracking perfusion weighted imaging.
Boutelier, Timothé; Kudo, Koshuke; Pautot, Fabrice; Sasaki, Makoto
2012-07-01
A delay-insensitive probabilistic method for estimating hemodynamic parameters, delays, theoretical residue functions, and concentration time curves by computed tomography (CT) and magnetic resonance (MR) perfusion weighted imaging is presented. Only a mild stationarity hypothesis is made beyond the standard perfusion model. New microvascular parameters with simple hemodynamic interpretation are naturally introduced. Simulations on standard digital phantoms show that the method outperforms the oscillating singular value decomposition (oSVD) method in terms of goodness-of-fit, linearity, statistical and systematic errors on all parameters, especially at low signal-to-noise ratios (SNRs). Delay is always estimated sharply with user-supplied resolution and is purely arterial, by contrast to oSVD time-to-maximum TMAX that is very noisy and biased by mean transit time (MTT), blood volume, and SNR. Residue functions and signals estimates do not suffer overfitting anymore. One CT acute stroke case confirms simulation results and highlights the ability of the method to reliably estimate MTT when SNR is low. Delays look promising for delineating the arterial occlusion territory and collateral circulation. PMID:22410325
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
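The outer loop of this approach, evaluating an image-quality fitness at candidate permittivities and locating the optimum of an interpolant, can be sketched in one dimension with an ordinary polynomial fit; the paper's actual interpolant is an adaptive sparse-grid stochastic-collocation construction over several parameters, and the fitness function below is a hypothetical stand-in:

```python
import numpy as np

# hypothetical fitness evaluations: image quality peaks near eps_r ~ 9,
# with a small non-quadratic perturbation standing in for measurement effects
eps_samples = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
fitness = -(eps_samples - 9.0) ** 2 + 0.05 * np.sin(eps_samples)

# quadratic interpolant of the fitness; its vertex is the estimated optimum
c = np.polyfit(eps_samples, fitness, 2)
eps_best = -c[1] / (2 * c[0])
```

The vertex of the fitted parabola recovers the underlying optimum from only five fitness evaluations, which is the efficiency argument for interpolating rather than densely sampling the permittivity range.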
Accurate estimation of object location in an image sequence using helicopter flight data
NASA Technical Reports Server (NTRS)
Tang, Yuan-Liang; Kasturi, Rangachar
1994-01-01
In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.
Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar
NASA Astrophysics Data System (ADS)
Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru
Orbit estimation of space debris, objects with no inherent value orbiting the Earth, is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect smaller debris, coherent integration is effective in improving the signal-to-noise ratio (SNR). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm is proposed for echo detection and orbit estimation of the faint echoes from space debris; the algorithm exploits the characteristics of the evaluation function. Experiments show that the proposed algorithm improves the SNR by 8.32 dB and enables orbital parameters to be estimated accurately enough to allow re-tracking with a single radar.
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, they need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
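The hybrid coupling described, a derivative-free particle-swarm step for global exploration followed within the same iteration by a gradient-based trust-region-reflective polish, can be sketched on a toy exponential-decay current model; the parameter names, bounds, and PSO coefficients below are hypothetical stand-ins for the Courtemanche-model ion-current fits:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(42)
t = np.linspace(0.0, 5.0, 100)
g_true, tau_true = 2.0, 1.5
data = g_true * np.exp(-t / tau_true)   # synthetic "measured" current

def residuals(p):
    g, tau = p
    return g * np.exp(-t / tau) - data

def cost(p):
    return np.sum(residuals(p) ** 2)

lo, hi = np.array([0.1, 0.1]), np.array([10.0, 10.0])
swarm = rng.uniform(lo, hi, size=(15, 2))
vel = np.zeros_like(swarm)
pbest = swarm.copy()
gbest = min(swarm, key=cost)

for _ in range(20):
    # particle-swarm update: global, derivative-free exploration
    r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
    vel = 0.6 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = np.clip(swarm + vel, lo, hi)
    better = [cost(s) < cost(p) for s, p in zip(swarm, pbest)]
    pbest[better] = swarm[better]
    # trust-region-reflective polish of the current best (the hybrid step)
    fit = least_squares(residuals, gbest, bounds=(lo, hi), method="trf")
    gbest = min([fit.x, *pbest], key=cost)
```

After a few iterations `gbest` recovers the ground-truth pair (2.0, 1.5): the swarm supplies diverse starting points while the TRF polish delivers the local precision that the swarm alone lacks.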
Estimating topological properties of weighted networks from limited information
NASA Astrophysics Data System (ADS)
Cimini, Giulio; Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego
2015-10-01
A problem typically encountered when studying complex systems is the limitedness of the information available on their topology, which hinders our understanding of their structure and of the dynamical processes taking place on them. A paramount example is provided by financial networks, whose data are privacy protected: Banks publicly disclose only their aggregate exposure towards other banks, keeping individual exposures towards each single bank secret. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here, we develop a reconstruction method, based on statistical mechanics concepts, that makes use of the empirical link density in a highly nontrivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems.
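The preliminary degree-estimation stage can be sketched with the fitness-model ansatz p_ij = z·s_i·s_j / (1 + z·s_i·s_j), calibrating z so that the model's expected link density matches the empirical one; the subsequent maximum-entropy inference combining strengths with the estimated degrees is omitted here. The strengths below are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

def estimate_degrees(s, density):
    """Expected node degrees from strengths s and the empirical link density."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    ss = np.outer(s, s)
    np.fill_diagonal(ss, 0.0)            # no self-loops

    def link_probs(z):
        return z * ss / (1.0 + z * ss)   # fitness-model connection probability

    def density_gap(z):
        return link_probs(z).sum() / (n * (n - 1)) - density

    z = brentq(density_gap, 1e-12, 1e12)       # calibrate z to the density
    return link_probs(z).sum(axis=1), z

# toy strengths (e.g. banks' aggregate exposures) and an observed density
k, z = estimate_degrees([1.0, 2.0, 3.0, 4.0], density=0.5)
```

By construction the estimated degrees sum to the number of links implied by the density, while their spread follows the heterogeneity of the strengths, which is what lets the method avoid unrealistically dense or homogeneous reconstructions.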
Estimating topological properties of weighted networks from limited information
NASA Astrophysics Data System (ADS)
Gabrielli, Andrea; Cimini, Giulio; Garlaschelli, Diego; Squartini, Angelo
A typical problem met when studying complex systems is the limited information available on their topology, which hinders our understanding of their structural and dynamical properties. A paramount example is provided by financial networks, whose data are privacy protected. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here we develop a reconstruction method, based on statistical mechanics concepts, that exploits the empirical link density in a highly non-trivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems. Acknowledgement: the "Growthcom" ICT EC project (Grant No. 611272) and the "Crisislab" Italian project.
NASA Astrophysics Data System (ADS)
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK/T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that have undergone refractive surgery such as LASIK, or eyes with keratoconus, these equations may produce significant postoperative refractive error, leading to poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation[1] or using preoperative (pre-LASIK) data to estimate the K value[2], no precise equation is available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and a post-LASIK patient agreed well with their visual outcomes after cataract surgery.
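For contrast with ray tracing, the regression-era formulas named above all build on the thin-lens paraxial vergence relation P = n/(AL − ELP) − n/(n/K − ELP), with axial length AL, mean corneal power K, estimated lens position ELP, and index n ≈ 1.336; the regression parts mainly predict ELP. A worked example with hypothetical normal-eye biometry:

```python
# thin-lens vergence estimate of IOL power for emmetropia (a paraxial
# approximation; the biometry values below are hypothetical)
n = 1.336      # refractive index of aqueous/vitreous
AL = 0.0235    # axial length (m)
K = 43.5       # mean corneal power (D)
ELP = 0.00525  # estimated effective lens position (m)

P = n / (AL - ELP) - n / (n / K - ELP)   # IOL power in diopters, ~20.7 D here
```

Because this relation treats the cornea as a single power K, it breaks down exactly where the record says regression formulas do: post-LASIK and keratoconic corneas, whose measured elevation maps the ray-tracing approach uses directly.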
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Rashid, Mamoon; Pain, Arnab
2013-01-01
Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222
NASA Astrophysics Data System (ADS)
Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.
2016-10-01
The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
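The empirical-correction-plus-residual-minimization idea described above lends itself to a short sketch: remove a constant systematic offset from a computed peak list, then pick the experimental spectrum with the smallest remaining residual. The helper below and its data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def match_spectrum(theory_peaks, experimental_library):
    """Assign a computed peak list (nm) to the experimental spectrum
    that minimizes the rms residual after an empirical constant-offset
    correction. experimental_library maps compound name -> peak list
    of the same length. Returns (best_name, residuals_by_name)."""
    theory = np.asarray(theory_peaks, dtype=float)
    residuals = {}
    for name, exp_peaks in experimental_library.items():
        exp = np.asarray(exp_peaks, dtype=float)
        offset = np.mean(theory - exp)  # empirical systematic shift (nm)
        residuals[name] = float(np.sqrt(np.mean((theory - offset - exp) ** 2)))
    best = min(residuals, key=residuals.get)
    return best, residuals
```

With a constant systematic offset (as the PBE0/CAM-B3LYP results above show), the correct spectrum leaves a near-zero residual while wrong candidates do not.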
Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti
2016-01-01
The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397
Accurate Estimation of Carotid Luminal Surface Roughness Using Ultrasonic Radio-Frequency Echo
NASA Astrophysics Data System (ADS)
Kitamura, Kosuke; Hasegawa, Hideyuki; Kanai, Hiroshi
2012-07-01
It would be useful to measure the minute surface roughness of the carotid arterial wall to detect the early stage of atherosclerosis. In conventional ultrasonography, the axial resolution of a B-mode image depends on the ultrasonic wavelength of 150 µm at 10 MHz because a B-mode image is constructed using the amplitude of the radio-frequency (RF) echo. Therefore, the surface roughness caused by atherosclerosis in an early stage cannot be measured using a conventional B-mode image obtained by ultrasonography because the roughness is 10-20 µm. We have realized accurate transcutaneous estimation of such a minute surface profile using the lateral motion of the carotid arterial wall, which is estimated by block matching of received ultrasonic signals. However, the width of the region where the surface profile is estimated depends on the magnitude of the lateral displacement of the carotid arterial wall (i.e., if the lateral displacement of the arterial wall is 1 mm, the surface profile is estimated in a region of 1 mm in width). In this study, the width was increased by combining surface profiles estimated using several ultrasonic beams. In the present study, we first measured a fine wire, whose diameter was 13 µm, using ultrasonic equipment to obtain an ultrasonic beam profile for determination of the optimal kernel size for block matching based on the correlation between RF echoes. Second, we estimated the lateral displacement and surface profile of a phantom, which had a sawtooth profile on its surface, and compared the surface profile measured by ultrasound with that measured by a laser profilometer. Finally, we estimated the lateral displacement and surface roughness of the carotid arterial wall of three healthy subjects (24-, 23-, and 23-year-old males) using the proposed method.
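The block-matching step above, estimating lateral displacement from the correlation between RF echoes, can be illustrated in one dimension: the displacement is the lag that maximizes the cross-correlation of two signal kernels. This is a generic sketch on synthetic pulses, not the authors' kernel-size-optimized implementation.

```python
import numpy as np

def lateral_shift(ref, cur):
    """Estimate the integer-sample displacement between two RF signal
    kernels as the lag maximizing their cross-correlation (a 1-D
    sketch of block matching)."""
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    corr = np.correlate(cur, ref, mode="full")
    # index len(ref)-1 of the 'full' output corresponds to zero lag
    return int(np.argmax(corr)) - (len(ref) - 1)
```

In practice the shift would be refined to sub-sample precision (e.g. by interpolating the correlation peak) before being accumulated into a surface profile.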
Lamb mode selection for accurate wall loss estimation via guided wave tomography
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-18
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimates of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.
NASA Astrophysics Data System (ADS)
Granata, Daniele; Carnevale, Vincenzo
2016-08-01
The collective behavior of a large number of degrees of freedom can be often described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variable needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset.
Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet
2016-05-01
Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. Ten-minute seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 values estimated from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can thus be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
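The correction amounts to subtracting the thermal component from HR before applying an individual HR-to-VO2 calibration. A minimal sketch, assuming a linear calibration; the slope and intercept values used below are hypothetical, not taken from the study:

```python
def estimate_vo2(hr_work, delta_hr_thermal, slope, intercept):
    """Estimate work VO2 (L/min) from heart rate after removing the
    thermal component. slope and intercept come from an individual
    submaximal step-test calibration (assumed linear here)."""
    hr_corrected = hr_work - delta_hr_thermal
    return slope * hr_corrected + intercept
```

With a thermal component of 20 bpm, a worker measured at 130 bpm is treated as carrying 110 bpm of metabolic load, which removes the overestimation that raw HR would produce.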
48 CFR 52.247-8 - Estimated Weights or Quantities Not Guaranteed.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Provisions and Clauses 52.247-8 Estimated Weights or Quantities Not Guaranteed. As prescribed in 47.207-3(e... transportation-related services when weights or quantities are estimates: Estimated Weights or Quantities Not..., as the Government does not guarantee any particular volume of traffic described in this...
MIDAS robust trend estimator for accurate GPS station velocities without step detection
NASA Astrophysics Data System (ADS)
Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien
2016-03-01
Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
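The core of the estimator, a median of one-year-separated slopes followed by outlier trimming and a recomputed median, can be sketched as follows. This simplified version omits the published algorithm's gap handling and uncertainty estimate; the trimming threshold is an illustrative choice.

```python
import numpy as np

def midas_like_trend(t, x, pair_span=1.0, tol=0.01):
    """Median trend from data pairs separated by ~1 year (so seasonal
    signals cancel), with one round of outlier trimming before the
    final median, in the spirit of MIDAS (simplified sketch).

    t: epochs in years; x: coordinate values. Returns slope in x/yr."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    slopes = []
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if abs((t[j] - t[i]) - pair_span) <= tol:
                slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    slopes = np.asarray(slopes)
    med = np.median(slopes)
    mad = np.median(np.abs(slopes - med))  # robust scale estimate
    if mad > 0:  # trim one-sided outliers, then recompute the median
        slopes = slopes[np.abs(slopes - med) <= 2.0 * 1.4826 * mad]
    return float(np.median(slopes))
```

Because each slope spans exactly one year, an annual signal contributes nothing to any pair, which is why the estimator resists seasonality without modeling it.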
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitude of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
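The calibration chain described above, rate the index velocity against ADCP mean channel velocity, compute instantaneous discharge, then low-pass filter out the tide, can be sketched as below. The linear rating form and the simple moving-average filter are simplifying assumptions; operational tidal filters are more elaborate.

```python
import numpy as np

def rate_index_velocity(v_index_cal, v_mean_adcp):
    """Fit the linear rating v_mean = a * v_index + b from concurrent
    index-velocity and ADCP mean-channel-velocity data (assumed linear)."""
    a, b = np.polyfit(v_index_cal, v_mean_adcp, 1)
    return a, b

def net_discharge(v_index, area, a, b, window):
    """Instantaneous discharge from the rating, then a moving-average
    low-pass filter (window ~ one tidal cycle, in samples) to remove
    the tidal signal and expose the net residual discharge."""
    q = area * (a * np.asarray(v_index, dtype=float) + b)
    kernel = np.ones(window) / window
    return np.convolve(q, kernel, mode="valid")
```

Because the tidal oscillation averages to zero over a full cycle, the filtered series isolates the small net residual that conventional methods could not resolve.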
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2016-10-01
modified velocity gradients reduce the residuals, the relative location uncertainties, and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination change, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method. PMID:23893759
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration. Identifying the motion blur direction and length accurately is crucial for determining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters by using the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the error of the result relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
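The dark stripes in the spectrum come from the zeros of the blur transfer function, and their spacing encodes the blur length. A 1-D sketch under an assumed noise-free uniform blur (the paper's GrabCut/Radon machinery handles direction and noise; here the direction is taken as known):

```python
import numpy as np

def blur_length_from_spectrum(row):
    """Estimate a uniform linear motion-blur length along a known
    direction from the first spectral zero: a length-L box filter has
    magnitude-spectrum zeros at frequency indices k = m*N/L, so
    L ~= N / k_first_zero (1-D illustrative sketch)."""
    n = len(row)
    mag = np.abs(np.fft.rfft(row))
    k = 1
    # walk out to the first local minimum (the first dark stripe)
    while k + 1 < len(mag) and not (mag[k] < mag[k - 1] and mag[k] <= mag[k + 1]):
        k += 1
    return n / k
```

On real imagery, noise fills in the spectral zeros, which is exactly why the paper replaces this naive minimum search with segmentation and column statistics.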
Efficient and accurate estimation of relative order tensors from λ-maps
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Rishi; Miao, Xijiang; Shealy, Paul; Valafar, Homayoun
2009-06-01
The rapid increase in the availability of RDC data from multiple alignment media in recent years has necessitated the development of more sophisticated analyses that extract the RDC data's full information content. This article presents an analysis of the distribution of RDCs from two media (2D-RDC data), using the information obtained from a λ-map. This article also introduces an efficient algorithm, which leverages these findings to extract the order tensors for each alignment medium using unassigned RDC data in the absence of any structural information. The results of applying this 2D-RDC analysis method to synthetic and experimental data are reported in this article. The relative order tensor estimates obtained from the 2D-RDC analysis are compared to order tensors obtained from the program REDCAT after using assignment and structural information. The final comparisons indicate that the relative order tensors estimated from the unassigned 2D-RDC method very closely match the results from methods that require assignment and structural information. The presented method is successful even in cases with small datasets. The results of analyzing experimental RDC data for the protein 1P7E are presented to demonstrate the potential of the presented work in accurately estimating the principal order parameters from RDC data that incompletely sample the RDC space. In addition to the new algorithm, a discussion of the uniqueness of the solutions is presented; no more than two clusters of distinct solutions have been shown to satisfy each λ-map.
Accurate estimation of the RMS emittance from single current amplifier data
Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.
2002-05-31
This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H− ion source.
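The exclusion-boundary idea can be sketched on gridded phase-space data: grow an elliptical boundary, treat the mean current density outside it as a uniform background, subtract that background, and compute the rms emittance inside; plateaus in the result mark the estimate. The boundary parameterization and all names below are illustrative assumptions, not the published implementation.

```python
import numpy as np

def scubeex_like_rms(x, xp, current, boundary_scales):
    """Simplified sketch of the SCUBEEx idea on gridded (x, x') data.
    For each boundary scale: mean density outside the ellipse is taken
    as uniform background, subtracted, and the rms emittance
    sqrt(<x^2><x'^2> - <x x'>^2) is computed from the inside data."""
    results = []
    for s in boundary_scales:
        inside = (x / (s * x.std())) ** 2 + (xp / (s * xp.std())) ** 2 <= 1.0
        bg = current[~inside].mean() if (~inside).any() else 0.0
        w = np.clip(current - bg, 0.0, None) * inside
        tot = w.sum()
        if tot <= 0:
            results.append(0.0)
            continue
        mx, mxp = (w * x).sum() / tot, (w * xp).sum() / tot
        vxx = (w * (x - mx) ** 2).sum() / tot
        vpp = (w * (xp - mxp) ** 2).sum() / tot
        vxp = (w * (x - mx) * (xp - mxp)).sum() / tot
        results.append(float(np.sqrt(max(vxx * vpp - vxp ** 2, 0.0))))
    return results
```

Once consecutive scales return nearly identical values, the plateau has been reached and the common value is the background-free emittance estimate.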
Quick and accurate estimation of the elastic constants using the minimum image method
NASA Astrophysics Data System (ADS)
Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.
2015-04-01
A method for determining the elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting through the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that the contribution of long-range interactions cannot be ignored when determining the elastic constants, because doing so leads to erroneous results. In addition, the simulations reveal that including the interactions of each particle with all of its minimum image neighbors, even for small systems, yields results very close to the values of the elastic constants in the thermodynamic limit. This enables quick and accurate estimation of the elastic constants from very small samples.
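The minimum image convention underlying MIM can be sketched as below; the cubic periodic box, the LJ pair form, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def min_image(disp, box):
    """Map a displacement vector to its minimum image in a periodic cubic box."""
    return disp - box * np.round(disp / box)

def lj_pair_energy(ri, rj, box, eps=1.0, sigma=1.0):
    """Lennard-Jones energy of the nearest periodic image of particle j seen by i."""
    r = np.linalg.norm(min_image(ri - rj, box))
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)
```

The paper's point is that each particle should interact with all of its minimum image neighbors rather than with a truncated subset, so no cutoff is applied here.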
Pitfalls in accurate estimation of overdiagnosis: implications for screening policy and compliance.
Feig, Stephen A
2013-01-01
Stories in the public media claiming that 30 to 50% of screen-detected breast cancers are overdiagnosed dissuade women from being screened, because overdiagnosed cancers would never result in death if undetected, yet do result in unnecessary treatment. However, such concerns are unwarranted because the frequency of overdiagnosis, when properly calculated, is only 0 to 5%. In the previous issue of Breast Cancer Research, Duffy and Parmar report that accurate estimation of the rate of overdiagnosis requires recognizing the effect of lead time on detection rates and the consequent need for an adequate number of years of follow-up. These indispensable elements were absent from the highly publicized studies that overestimated the frequency of overdiagnosis.
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
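The geometric core of the method, that a sphere (or spheroid) occupies two-thirds of its circumscribing cylinder, can be sketched as below. The function name and the way the unellipticity coefficient enters are assumptions for illustration, not the paper's exact formulation.

```python
import math

def biovolume(area, width, unellipticity=1.0):
    """Estimate the volume of a solid of revolution from its projected (2D) area
    and its width along the minor axis.

    For a spheroid rotated about its major axis, V = (2/3) * area * width holds
    exactly (Archimedes: the sphere fills 2/3 of the circumscribed cylinder);
    the unellipticity coefficient corrects shapes deviating from a perfect ellipse.
    """
    return (2.0 / 3.0) * unellipticity * area * width
```

For a sphere of radius r, the projected area is πr² and the width is 2r, so the formula returns (2/3)·πr²·2r = (4/3)πr³, the exact sphere volume.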
Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry
NASA Astrophysics Data System (ADS)
van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.
2016-03-01
Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images and triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a viewing-angle range of 1° down to an error of 0.30 mm for ranges larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
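The triangulation step can be sketched with standard linear (DLT) triangulation from two projection matrices; this is a generic multi-view-geometry routine, not the authors' implementation, and the matrices in the example are synthetic.

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of a 3D point from two views.

    P1, P2 are 3x4 projection matrices; u1, u2 are the (x, y) image
    coordinates of the same point in each view. Each observation contributes
    two linear constraints; the 3D point is the null vector of the stack.
    """
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution up to scale
    return X[:3] / X[3]
```

With noise-free correspondences the needle tip position is recovered exactly; with real detections the SVD gives the least-squares solution.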
Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks
NASA Astrophysics Data System (ADS)
Bouchaala, F.; Ali, M. Y.
2014-12-01
The attenuation of a seismic wave during its propagation can be divided into two main parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of the energy due to the medium's heterogeneities, whereas intrinsic attenuation is an inelastic phenomenon, mainly due to fluid-grain friction during the wave's passage. Because intrinsic attenuation is directly related to the physical characteristics of the medium, it can be used for media characterization and fluid detection, which is beneficial for the oil and gas industry. Intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; therefore, the accuracy of the intrinsic attenuation depends directly on the accuracy of the total attenuation and the scattering. The total attenuation can be estimated from the recorded waves using in-situ methods such as the spectral-ratio and frequency-shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate depends strongly on the layer thicknesses, especially in media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies offered assumptions for the choice of the layer thickness, but these showed limitations, especially for carbonate rocks. In this study, we established a relationship between the layer thicknesses and the propagation frequency through a mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship with synthetic tests and with real data from a VSP survey carried out over an onshore oilfield in the emirate of Abu Dhabi, United Arab Emirates, primarily composed of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering.
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
Can student health professionals accurately estimate alcohol content in commonly occurring drinks?
Sinclair, Julia; Searle, Emma
2016-01-01
Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident in taking an accurate alcohol history. Being able to estimate (or calculate) the alcohol content of commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks, shown a slide of each drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was allowed. Wine and premium-strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact it may have on the likelihood of undertaking screening or initiating treatment. PMID:27536344
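The arithmetic the students were asked to perform is a direct calculation; in UK terms, 1 unit of alcohol is 10 ml (8 g) of pure ethanol. A generic helper, not part of the study:

```python
def alcohol_units(volume_ml, abv_percent):
    """UK units of alcohol in a drink: volume (ml) x ABV (%) / 1000.

    One unit corresponds to 10 ml (8 g) of pure ethanol.
    """
    return volume_ml * abv_percent / 1000.0
```

For example, a 750 ml bottle of 13% ABV wine contains 9.75 units, and a 568 ml pint of 5.2% ABV premium beer contains roughly 3 units, which is why both were commonly underestimated.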
NASA Astrophysics Data System (ADS)
Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray
2016-06-01
Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula, established from Monte Carlo simulations, that links the depolarization ratio of sea water to the ratio of the diffuse attenuation Kd and the beam attenuation c (i.e., a multiple-scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data collocated with in-water optical measurements.
mBEEF: An accurate semi-local Bayesian error estimation density functional
NASA Astrophysics Data System (ADS)
Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas
2014-04-01
We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.
Greater contrast in Martian hydrological history from more accurate estimates of paleodischarge
NASA Astrophysics Data System (ADS)
Jacobsen, R. E.; Burr, D. M.
2016-09-01
Correlative width-discharge relationships from the Missouri River Basin are commonly used to estimate fluvial paleodischarge on Mars. However, hydraulic geometry provides alternative, and causal, width-discharge relationships derived from broader samples of channels, including those in reduced-gravity (submarine) environments. Comparison of these relationships implies that causal relationships from hydraulic geometry should yield more accurate and more precise discharge estimates. Our remote analysis of a Martian-terrestrial analog channel, combined with in situ discharge data, substantiates this implication. Applied to Martian features, these results imply that paleodischarges of interior channels of Noachian-Hesperian (~3.7 Ga) valley networks have been underestimated by a factor of several, whereas paleodischarges for smaller fluvial deposits of the Late Hesperian-Early Amazonian (~3.0 Ga) have been overestimated. Thus, these new paleodischarges significantly magnify the contrast between early and late Martian hydrologic activity. Width-discharge relationships from hydraulic geometry represent validated tools for quantifying fluvial input near candidate landing sites of upcoming missions.
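A hydraulic-geometry width-discharge relation has the power-law form W = a·Q^b, which is inverted to estimate paleodischarge from a measured channel width. The coefficients below are illustrative placeholders, not the calibrated values used in the paper (terrestrial bankfull relations typically have b near 0.5).

```python
def discharge_from_width(width_m, a=1.0, b=0.5):
    """Invert the hydraulic-geometry relation W = a * Q**b to estimate Q.

    width_m: channel width in meters; a, b: empirical coefficients
    (illustrative defaults, not the paper's calibration).
    """
    return (width_m / a) ** (1.0 / b)
```

Because Q grows roughly as the square of width for b ≈ 0.5, modest errors or biases in the adopted relation translate into factor-of-several differences in paleodischarge, which is the abstract's central point.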
Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)
1995-01-01
It has been claimed that either oculomotor or static-depth cues are needed to provide the signals about self-rotation required for accurate heading estimation at rotation rates of approximately 1 deg/s or more. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, and -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, and 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
Weight estimation of unconventional structures by structural optimization
NASA Technical Reports Server (NTRS)
Miura, Hirokazu; Shyu, Albert
1986-01-01
Automated techniques used in structural optimization technology are presented, with emphasis on modifications of finite element models to obtain an optimal material distribution for minimum weight while satisfying the prescribed design requirements. It is anticipated that future development of computer-aided engineering (CAE) systems will provide environments where structural analysis, design optimization, and weight evaluation modules are integrated, sharing a common database. Structural optimization capabilities obtained by integrating a finite element structural analysis program and a numerical optimization code are developed and applied to two illustrative examples: marine gear housing structural weight minimization and joined wing structures.
NASA Astrophysics Data System (ADS)
Honjo, Yasunori; Hasegawa, Hideyuki; Kanai, Hiroshi
2012-07-01
rates estimated using different kernel sizes were examined using the normalized mean-squared error of the estimated strain rate relative to the actual one obtained by the 1D phase-sensitive method. Compared with conventional kernel sizes, these results show that the proposed correlation kernel has the potential to enable more accurate measurement of the strain rate. In in vivo measurement, the regional instantaneous velocities and strain rates in the radial direction of the heart wall were analyzed in detail at an extremely high temporal resolution (frame rate of 860 Hz). In this study, transitions between contraction and relaxation could be detected by 2D tracking. These results indicate the potential of this method for high-accuracy estimation of strain rates and detailed analyses of the physiological function of the myocardium.
Accurate automatic estimation of total intracranial volume: a nuisance variable with less nuisance.
Malone, Ian B; Leung, Kelvin K; Clegg, Shona; Barnes, Josephine; Whitwell, Jennifer L; Ashburner, John; Fox, Nick C; Ridgway, Gerard R
2015-01-01
Total intracranial volume (TIV/ICV) is an important covariate for volumetric analyses of the brain and brain regions, especially in the study of neurodegenerative diseases, where it can provide a proxy of maximum pre-morbid brain volume. The gold-standard method is manual delineation of brain scans, but this requires careful work by trained operators. We evaluated Statistical Parametric Mapping 12 (SPM12) automated segmentation for TIV measurement in place of manual segmentation and also compared it with SPM8 and FreeSurfer 5.3.0. For T1-weighted MRI acquired from 288 participants in a multi-centre clinical trial in Alzheimer's disease we find a high correlation between SPM12 TIV and manual TIV (R² = 0.940, 95% Confidence Interval (0.924, 0.953)), with a small mean difference (SPM12 40.4 ± 35.4 ml lower than manual, amounting to 2.8% of the overall mean TIV in the study). The correlation with manual measurements (the key aspect when using TIV as a covariate) for SPM12 was significantly higher (p < 0.001) than for either SPM8 (R² = 0.577, CI (0.500, 0.644)) or FreeSurfer (R² = 0.801, CI (0.744, 0.843)). These results suggest that SPM12 TIV estimates are an acceptable substitute for labour-intensive manual estimates even in the challenging context of multiple centres and the presence of neurodegenerative pathology. We also briefly discuss some aspects of the statistical modelling approaches to adjust for TIV. PMID:25255942
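One common way of using TIV as a covariate, the residual method (regress regional volume on TIV and analyze the residual-adjusted volumes), can be sketched generically; this is a standard statistical recipe, not tied to SPM12 itself, and the synthetic numbers are illustrative.

```python
import numpy as np

def tiv_adjust(volumes, tiv):
    """Residual-method adjustment: remove the fitted linear effect of TIV.

    Returns volumes with the OLS-estimated TIV contribution subtracted,
    so the adjusted values are uncorrelated with TIV by construction.
    """
    t = tiv - tiv.mean()
    beta = (t @ (volumes - volumes.mean())) / (t @ t)
    return volumes - beta * t
```

Because the adjustment uses the estimated slope, any bias in the TIV estimate that is merely proportional across subjects is absorbed, which is why correlation with manual TIV matters more than absolute agreement.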
A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mack, Robert J.
1999-01-01
During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature with the shocks causing these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. Like other weight-estimation methods, weights were determined at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.
How accurately can we estimate energetic costs in a marine top predator, the king penguin?
Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J
2007-01-01
King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (fH-VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH-VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH-VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published, field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH-VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24·log(fH) + 0.0237·t - 0.0157·log(fH)·t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH-VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH-VO2 prediction equations, is explained. PMID:17363231
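Prediction equation (1) can be applied directly; a minimal sketch, assuming base-10 logarithms and treating t simply as the equation's second input with the units used by the authors (see the original paper for the definitions and units of VO2, fH and t):

```python
import math

def estimate_vo2(f_h, t):
    """Equation (1): log(VO2) = -0.279 + 1.24*log(fH) + 0.0237*t - 0.0157*log(fH)*t.

    f_h: heart rate; t: the equation's second predictor as defined in the paper.
    Returns VO2 on the original (unlogged) scale.
    """
    log_vo2 = (-0.279 + 1.24 * math.log10(f_h)
               + 0.0237 * t - 0.0157 * math.log10(f_h) * t)
    return 10.0 ** log_vo2
```

The interaction term means the slope of VO2 against heart rate itself changes with t, which is one reason a single equation can cover different locomotory and nutritional states.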
Imani, Farsad; Karimi Rouzbahani, Hamid Reza; Goudarzi, Mehrdad; Tarrahi, Mohammad Javad; Ebrahim Soltani, Alireza
2016-01-01
Background: During anesthesia, continuous body temperature monitoring is essential, especially in children. Anesthesia can increase the risk of loss of body temperature by three to four times. Hypothermia in children results in increased morbidity and mortality. Since the measurement points of the core body temperature are not easily accessible, near-core sites, such as the rectum, are used. Objectives: The purpose of this study was to measure the skin temperature over the carotid artery and compare it with the rectal temperature, in order to propose a model for accurate estimation of near-core body temperature. Patients and Methods: In total, 124 patients within the age range of 2 - 6 years, undergoing elective surgery, were selected. The temperature of the rectum and of the skin over the carotid artery was measured. The patients were then randomly divided into two groups (each including 62 subjects), namely the modeling group (MG) and the validation group (VG). First, in the modeling group, the average temperatures of the rectum and of the skin over the carotid artery were measured separately. The appropriate model was determined according to the significance of the model's coefficients. The obtained model was used to predict the rectal temperature in the second group (VG). The correlation of the predicted values with the measured rectal temperatures in the second group was investigated, and the difference in the average values of the two groups was examined for significance. Results: In the modeling group, the average rectal and carotid temperatures were 36.47 ± 0.54°C and 35.45 ± 0.62°C, respectively. The final model was: rectal temperature = 0.561 × carotid temperature + 16.583. The predicted value was calculated based on the regression model and then compared with the measured rectal value, which showed no significant difference (P = 0.361). Conclusions: The present study was the first research in which rectal temperature was compared with that
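The fitted model is a single linear equation and can be applied directly; a minimal sketch (the function and variable names are mine):

```python
def rectal_from_carotid(t_carotid_c):
    """Predict rectal temperature (°C) from skin temperature over the carotid
    artery (°C), using the study's fitted linear model."""
    return 0.561 * t_carotid_c + 16.583
```

Plugging in the modeling group's mean carotid temperature of 35.45°C returns approximately 36.47°C, matching the group's mean rectal temperature, as a fitted regression line through the means must.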
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
Development and validation of GFR-estimating equations using diabetes, transplant and weight
Stevens, Lesley A.; Schmid, Christopher H.; Zhang, Yaping L.; Coresh, Josef; Manzi, Jane; Landis, Richard; Bakoush, Omran; Contreras, Gabriel; Genuth, Saul; Klintmalm, Goran B.; Poggio, Emilio; Rossing, Peter; Rule, Andrew D.; Weir, Matthew R.; Kusek, John; Greene, Tom; Levey, Andrew S.
2010-01-01
Background. We have reported a new equation (CKD-EPI equation) that reduces bias and improves accuracy for GFR estimation compared to the MDRD study equation while using the same four basic predictor variables: creatinine, age, sex and race. Here, we describe the development and validation of this equation as well as other equations that incorporate diabetes, transplant and weight as additional predictor variables. Methods. Linear regression was used to relate log-measured GFR (mGFR) to sex, race, diabetes, transplant, weight, various transformations of creatinine and age with and without interactions. Equations were developed in a pooled database of 10 studies [2/3 (N = 5504) for development and 1/3 (N = 2750) for internal validation], and final model selection occurred in 16 additional studies [external validation (N = 3896)]. Results. The mean mGFR was 68, 67 and 68 ml/min/1.73 m² in the development, internal validation and external validation datasets, respectively. In external validation, an equation that included a linear age term and spline terms in creatinine to account for a reduction in the magnitude of the slope at low serum creatinine values exhibited the best performance (bias = 2.5, RMSE = 0.250) among models using the four basic predictor variables. Addition of terms for diabetes and transplant did not improve performance. Equations with weight showed a small improvement in the subgroup with BMI < 20 kg/m². Conclusions. The CKD-EPI equation, based on creatinine, age, sex and race, has been validated and is more accurate than the MDRD study equation. The addition of weight, diabetes and transplant does not significantly improve equation performance. PMID:19793928
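The "spline terms in creatinine" refer to a piecewise-linear basis with a knot, letting the slope differ below and above a creatinine threshold. A generic sketch of such a design matrix (the knot value, covariate list and names are illustrative, not the published CKD-EPI coefficients):

```python
import numpy as np

def creatinine_spline_design(log_cr, age, female, knot):
    """Design matrix with a linear spline in log-creatinine: one slope below
    the knot, an extra slope above it, plus linear age and sex terms."""
    return np.column_stack([
        np.ones_like(log_cr),             # intercept
        np.minimum(log_cr, knot),         # slope below the knot
        np.maximum(log_cr - knot, 0.0),   # change in slope above the knot
        age,
        female,
    ])
```

Fitting this by ordinary least squares recovers a reduced slope at low creatinine exactly as the abstract describes, without forcing a single slope over the whole range.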
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
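A minimal errors-in-variables (Deming) regression with a known error-variance ratio, the quantity the abstract flags as requiring caution, can be sketched as follows; this is the textbook estimator, not the study's exact formulation.

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Errors-in-variables slope estimate when both x and y are noisy.

    delta is the (assumed known) ratio var(y-error) / var(x-error); the
    estimate is sensitive to mis-specifying it, hence the need for caution.
    """
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - delta * sxx
    return (d + np.sqrt(d * d + 4.0 * delta * sxy * sxy)) / (2.0 * sxy)
```

Unlike ordinary least squares, which is attenuated toward zero when the regressor is measured with error, this estimator is consistent when delta is correct.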
Underdetermined DOA Estimation Using MVDR-Weighted LASSO
Salama, Amgad A.; Ahmad, M. Omair; Swamy, M. N. S.
2016-01-01
The direction of arrival (DOA) estimation problem is formulated in a compressive sensing (CS) framework, and an extended array aperture is presented to increase the number of degrees of freedom of the array. The ordinary least squares adaptable least absolute shrinkage and selection operator (OLS A-LASSO) is applied for the first time to DOA estimation. Furthermore, a new LASSO algorithm, the minimum variance distortionless response (MVDR) A-LASSO, which solves the DOA problem in the CS framework, is presented. The proposed algorithm depends neither on the singular value decomposition nor on the orthogonality of the signal and noise subspaces. Hence, DOA estimation can be done without a priori knowledge of the number of sources. The proposed algorithm can estimate up to ((M² − 2)/2 + M − 1)/2 sources using M sensors without any constraints or assumptions about the nature of the signal sources. Furthermore, the proposed algorithm exhibits performance superior to that of the classical DOA estimation methods, especially for low signal-to-noise ratios (SNR), spatially close sources and coherent scenarios. PMID:27657080
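The CS formulation can be sketched with a gridded steering dictionary and a plain (unweighted) LASSO solved by ISTA. This is a simplified stand-in for the MVDR-weighted adaptive LASSO proposed in the paper; the array geometry, grid, regularization value and solver are all illustrative assumptions.

```python
import numpy as np

def steering_matrix(m, angles_deg):
    """Steering dictionary for an m-sensor ULA with half-wavelength spacing."""
    theta = np.deg2rad(angles_deg)
    return np.exp(-1j * np.pi * np.outer(np.arange(m), np.sin(theta)))

def lasso_ista(A, y, lam=1.0, iters=4000):
    """Complex LASSO via ISTA: minimize ||y - A s||^2 / 2 + lam * ||s||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        g = s + (A.conj().T @ (y - A @ s)) / L
        mag = np.abs(g)
        s = np.maximum(1.0 - (lam / L) / np.maximum(mag, 1e-30), 0.0) * g
    return s
```

The magnitude of the recovered sparse vector serves as a spatial spectrum: its peaks indicate source directions without first estimating the number of sources.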
2012-01-01
Background Few equations have been developed in veterinary medicine, compared to human medicine, to predict body composition. The present study was done to evaluate the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA) and ultrasonography (US) in cats, proposing equations to estimate fat (FM) and lean (LM) body mass, with dual energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) in a weight loss program were used. DXA, BIO, BIA and US were performed in the obese state (T0; obese animals), after 10% of weight loss (T1) and after 20% of weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA and US. The best models were then evaluated by a simple regression analysis, and means predicted vs. determined by DXA were compared to verify the accuracy of the equations. Results The independent variables determined by BIO, BIA and US that best correlated (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance) and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, p value and r2, 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted FM and one predicted LM. Conclusions Two-variable equations are preferable because they are effective and offer an alternative method for estimating body composition in routine clinical practice. To estimate lean mass, equations combining body weight with biometric measures can be proposed; to estimate fat mass, equations combining body weight with bioimpedance analysis can be proposed. PMID:22781317
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
ERIC Educational Resources Information Center
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Information Weighted Consensus for Distributed Estimation in Vision Networks
ERIC Educational Resources Information Center
Kamal, Ahmed Tashrif
2013-01-01
Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
This study proposes a robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, an effective motion estimation method for tagged cardiac magnetic resonance (MR) images. The RACE algorithm automatically, effectively and efficiently produces an appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, by combining two key techniques: (1) the well-known mean-shift algorithm, which provides accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which further enhances the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in improving the motion estimation performance of SinMod.
Suárez, Ernesto; Pratt, Adam J; Chong, Lillian T; Zuckerman, Daniel M
2016-01-01
First-passage times (FPTs) are widely used to characterize stochastic processes such as chemical reactions, protein folding, diffusion processes or triggering a stock option. In previous work (Suarez et al., JCTC 2014;10:2658-2667), we demonstrated a non-Markovian analysis approach that, with a sufficient subset of history information, yields unbiased mean first-passage times from weighted-ensemble (WE) simulations. The estimation of the distribution of the first-passage times is, however, a more ambitious goal since it cannot be obtained by direct observation in WE trajectories. Likewise, a large number of events would be required to make a good estimation of the distribution from a regular "brute force" simulation. Here, we show how the previously developed non-Markovian analysis can generate approximate, but highly accurate, FPT distributions from WE data. The analysis can also be applied to any other unbiased trajectories, such as from standard molecular dynamics simulations. The present study employs a range of systems with independent verification of the distributions to demonstrate the success and limitations of the approach. By comparison to a standard Markov analysis, the non-Markovian approach is less sensitive to the user-defined discretization of configuration space.
Area-to-point parameter estimation with geographically weighted regression
NASA Astrophysics Data System (ADS)
Murakami, Daisuke; Tsutsumi, Morito
2015-07-01
The modifiable areal unit problem (MAUP) is a problem by which aggregated units of data influence the results of spatial data analysis. Standard geographically weighted regression (GWR), which ignores aggregation mechanisms, cannot be considered an effective countermeasure to the MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR in mitigating the MAUP through a simulation study and an empirical study. The simulation study indicates that the proposed method is robust to the MAUP when the spatial scales of aggregation are not too coarse compared with the scale of the underlying spatial variations. The empirical studies demonstrate that the method provides intuitively consistent estimates.
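The baseline that ATP GWR extends is standard GWR: at each target location, coefficients come from weighted least squares with kernel weights that decay with distance. A minimal sketch follows; the Gaussian kernel and the bandwidth value are conventional choices assumed for illustration, not details taken from this abstract.

```python
import numpy as np

def gwr_coefficients(X, y, coords, target, bandwidth):
    """Standard GWR at one target location (the baseline ATP GWR extends):
    beta(u) = (X^T W X)^(-1) X^T W y, with Gaussian kernel weights
    w_i = exp(-d_i^2 / (2 h^2)) based on distance to the target."""
    d2 = np.sum((coords - target) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    XtW = X.T * w                      # weight each observation's row
    return np.linalg.solve(XtW @ X, XtW @ y)
```

Evaluating this at many target points traces out spatially varying coefficient surfaces; the ATP variant instead fits disaggregate-level coefficients from area-aggregated observations.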
Estimation of lean body weight in older community-dwelling men
Mitchell, Sarah J; Kirkpatrick, Carl M J; Le Couteur, David G; Naganathan, Vasi; Sambrook, Philip N; Seibel, Markus J; Blyth, Fiona M; Waite, Louise M; Handelsman, David J; Cumming, Robert G; Hilmer, Sarah N
2010-01-01
AIMS Lean body weight (LBW) decreases with age while total body fat increases, altering drug pharmacokinetics. The aim of this study was to evaluate the ability of the LBW equation to predict dual-energy X-ray absorptiometry (DXA)-derived fat-free mass (FFMDXA) in older community-dwelling men, compared with that of two existing FFM equations: the Heitmann and Deurenberg equations. METHODS Data were obtained from 1655 older men enrolled in the Concord Health and Ageing in Men Project. The ability of the LBW and FFM equations to predict FFMDXA was assessed graphically using Bland–Altman plots and quantitatively for precision and bias using the method of Sheiner and Beal, in all participants and in frailty and body mass index (BMI) subgroups. RESULTS The LBW and Heitmann equations consistently overestimated FFMDXA for all frailty and BMI subgroups with a mean difference [95% confidence interval (CI)] of 5.5 kg (−0.65, 11.63 kg) and 3.34 kg (−2.84, 9.64 kg), respectively. The Deurenberg equation overestimated FFMDXA for overweight participants but underestimated FFMDXA for not-frail participants, with a mean difference (95% CI) of 1 kg (−7.23, 5.25 kg) for all participants. CONCLUSION LBW and FFM estimated using these equations give results comparable to DXA-derived FFM. The LBW and Heitmann equations provide a more consistent estimate of FFMDXA in all frailty and BMI groups despite the Deurenberg equation having the smallest mean difference. Further studies to determine whether the LBW equation is a clinically useful substitute for weight when determining drug dose in older people appear warranted. PMID:20233174
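The abstract refers to "the LBW equation" without printing it. For illustration, the sketch below uses the widely cited Janmahasatian lean body weight formulas; the coefficients are an assumption taken from the pharmacokinetics literature, not from this abstract.

```python
def lean_body_weight(weight_kg, height_m, male):
    """Sketch of the Janmahasatian lean body weight equation (kg).
    Coefficients are the commonly cited published values and are an
    assumption here; the abstract does not state the equation."""
    bmi = weight_kg / height_m ** 2
    if male:
        return 9270.0 * weight_kg / (6680.0 + 216.0 * bmi)
    return 9270.0 * weight_kg / (8780.0 + 244.0 * bmi)
```

Because the denominator grows with BMI, predicted LBW flattens out in obese subjects instead of scaling linearly with total body weight, which is why it is attractive for drug dosing.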
NASA Technical Reports Server (NTRS)
Mullen, J., Jr.
1978-01-01
The implementation of the changes to the program for Wing Aeroelastic Design and the development of a program to estimate aircraft fuselage weights are described. The equations to implement the modified planform description, the stiffened-panel skin representation, the trim loads calculation, and the flutter constraint approximation are presented. A comparison of the wing model with the actual F-5A weight and material distributions and loads is given. The equations and program techniques used for the estimation of aircraft fuselage weights are described. These equations were incorporated into a computer code. The weight predictions of this program are compared with data from the C-141.
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
Shen, Yan; Lou, Shuqin; Wang, Xin
2014-03-20
The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters.
Schneider, Iris K.; Parzuchowski, Michal; Wojciszke, Bogdan; Schwarz, Norbert; Koole, Sander L.
2015-01-01
Previous work suggests that perceived importance of an object influences estimates of its weight. Specifically, important books were estimated to be heavier than non-important books. However, the experimental set-up of these studies may have suffered from a potential confound and findings may be confined to books only. Addressing this, we investigate the effect of importance on weight estimates by examining whether the importance of information stored on a data storage device (USB-stick or portable hard drive) can alter weight estimates. Results show that people thinking a USB-stick holds important tax information (vs. expired tax information vs. no information) estimate it to be heavier (Experiment 1) compared to people who do not. Similarly, people who are told a portable hard drive holds personally relevant information (vs. irrelevant), also estimate the drive to be heavier (Experiments 2A,B). PMID:25620942
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
Reference Models for Structural Technology Assessment and Weight Estimation
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd
2005-01-01
Previously, the Exploration Concepts Branch of NASA Langley Research Center developed techniques for automating the preliminary-design level of launch vehicle airframe structural analysis in order to enhance historical regression-based mass estimating relationships. This past work was useful and greatly reduced design time; however, its application area was very narrow in terms of the variety of structural and vehicle general arrangement alternatives it could handle. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component-defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA-based Java processing procedures and associated process-control classes, coupled with the general utility of Loft and HSLoad, makes it possible to create generic program template files for analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, and wings, through full air and space vehicle general arrangements.
Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2016-06-01
The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize the hydraulic actuators, the performances of the linear/linearization based techniques for the state estimation are strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent-Riccati-Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted and comparisons with the largely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE based technique for applications characterized by not negligible nonlinearities such as dead zone and frictions.
The GFR and GFR decline cannot be accurately estimated in type 2 diabetics.
Gaspari, Flavio; Ruggenenti, Piero; Porrini, Esteban; Motterlini, Nicola; Cannata, Antonio; Carrara, Fabiola; Jiménez Sosa, Alejandro; Cella, Claudia; Ferrari, Silvia; Stucchi, Nadia; Parvanova, Aneliya; Iliev, Ilian; Trevisan, Roberto; Bossi, Antonio; Zaletel, Jelka; Remuzzi, Giuseppe
2013-07-01
There are no adequate studies that have formally tested the performance of different estimating formulas in patients with type 2 diabetes both with and without overt nephropathy. Here we evaluated the agreement between baseline GFRs, GFR changes at month 6, and long-term GFR decline measured by iohexol plasma clearance or estimated by 15 creatinine-based formulas in 600 type 2 diabetics followed for a median of 4.0 years. Ninety patients were hyperfiltering. The number of those identified by estimation formulas ranged from 0 to 24; 58 were not identified by any formula. Baseline GFR was significantly underestimated and a 6-month GFR reduction was missed in hyperfiltering patients. Long-term GFR decline was also underestimated by all formulas in the whole study group and in hyper-, normo-, and hypofiltering patients considered separately. Five formulas generated positive slopes in hyperfiltering patients. Baseline concordance correlation coefficients and total deviation indexes ranged from 32.1% to 92.6% and from 0.21 to 0.53, respectively. Concordance correlation coefficients between estimated and measured long-term GFR decline ranged from -0.21 to 0.35. The agreement between estimated and measured values was also poor within each subgroup considered separately. Thus, our study questions the use of any estimation formula to identify hyperfiltering patients and monitor renal disease progression and response to treatment in type 2 diabetics without overt nephropathy.
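The agreement measure reported above is the concordance correlation coefficient; Lin's definition, sketched below, penalizes both poor correlation and systematic offset between estimated and measured values, which is why it suits method-agreement studies better than Pearson's r.

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2).
    Uses population (1/n) moments, as in Lin's original definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * cxy / (vx + vy + (mx - my) ** 2)
```

Perfect agreement gives 1; a constant bias between the two methods pulls the coefficient below the Pearson correlation even when the series move together.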
ACCURATE ESTIMATIONS OF STELLAR AND INTERSTELLAR TRANSITION LINES OF TRIPLY IONIZED GERMANIUM
Dutta, Narendra Nath; Majumder, Sonjoy
2011-08-10
In this paper, we report on weighted oscillator strengths of E1 transitions and transition probabilities of E2 transitions among different low-lying states of triply ionized germanium using the highly correlated relativistic coupled-cluster (RCC) method. Due to the abundance of Ge IV in the solar system, planetary nebulae, white dwarf stars, etc., the study of such transitions is important from an astrophysical point of view. The weighted oscillator strengths of E1 transitions are presented in length and velocity gauge forms to check the accuracy of the calculations. We find excellent agreement between calculated and experimental excitation energies. Oscillator strengths of a few transitions, where studied in the literature via other theoretical and experimental approaches, are compared with our RCC calculations.
2014-01-01
Background This study aimed to investigate the relationship between preoperative estimated prostate weight on ultrasonography and clinical manifestations of transurethral resection (TUR) syndrome. Methods The records of patients who underwent TUR of the prostate under regional anesthesia over a 6-year period were retrospectively reviewed. TUR syndrome is usually defined as a serum sodium level of < 125 mmol/l combined with clinical cardiovascular or neurological manifestations. This study focused on the clinical manifestations only, and recorded specific central nervous system and cardiovascular abnormalities according to the checklist proposed by Hahn. Patients with and without clinical manifestations of TUR syndrome were compared to determine the factors associated with TUR syndrome. Receiver operating characteristic curve analysis was used to determine the optimal cutoff value of estimated prostate weight for the prediction of clinical manifestations of TUR syndrome. Results This study included 167 patients, of which 42 developed clinical manifestations of TUR syndrome. There were significant differences in preoperative estimated prostate weight, operation time, resected prostate weight, intravenous fluid infusion volume, blood transfusion volume, and drainage of the suprapubic irrigation fluid between patients with and without clinical manifestations of TUR syndrome. The preoperative estimated prostate weight was correlated with the resected prostate weight (Spearman’s correlation coefficient, 0.749). Receiver operator characteristic curve analysis showed that the optimal cutoff value of estimated prostate weight for the prediction of clinical manifestations of TUR syndrome was 75 g (sensitivity, 0.70; specificity, 0.69; area under the curve, 0.73). Conclusions Preoperative estimation of prostate weight by ultrasonography can predict the development of clinical manifestations of TUR syndrome. Particular care should be taken when the estimated prostate weight exceeds 75 g.
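The abstract picks an "optimal" ROC cutoff without stating the criterion; a common choice is maximizing Youden's J = sensitivity + specificity − 1, sketched below as an assumption about the kind of rule used, not as the study's actual method.

```python
def best_cutoff(values, labels):
    """Choose the cutoff maximizing Youden's J = sens + spec - 1, one
    common 'optimal ROC cutoff' rule (assumed here; the abstract does
    not state its criterion). labels: 1 = event, 0 = no event; higher
    values are taken to predict the event."""
    pos = sum(labels)
    neg = len(labels) - pos
    best, best_j = None, -1.0
    for c in sorted(set(values)):
        sens = sum(1 for v, l in zip(values, labels) if l == 1 and v >= c) / pos
        spec = sum(1 for v, l in zip(values, labels) if l == 0 and v < c) / neg
        j = sens + spec - 1.0
        if j > best_j:
            best, best_j = c, j
    return best, best_j
```

On the study's data such a rule would land near 75 g with J ≈ 0.70 + 0.69 − 1 = 0.39.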
Precision of sugarcane biomass estimates in pot studies using fresh and dry weights
Technology Transfer Automated Retrieval System (TEKTRAN)
Sugarcane (Saccharum spp.) field studies generally report fresh weight (FW) rather than dry weight (DW) due to logistical difficulties in drying large amounts of biomass. Pot studies often measure biomass of young plants with DW under the assumption that DW provides a more precise estimate of treatm...
Empirical expressions for estimating length and weight of axial-flow components of VTOL powerplants
NASA Technical Reports Server (NTRS)
Sagerser, D. A.; Lieblein, S.; Krebs, R. P.
1971-01-01
Simplified equations are presented for estimating the length and weight of major powerplant components of VTOL aircraft. The equations were developed from correlations of lift and cruise engine data. Components involved include fan, fan duct, compressor, combustor, turbine, structure, and accessories. Comparisons of actual and calculated total engine weights are included for several representative engines.
On estimating mean lifetimes by a weighted sum of lifetime measurements
NASA Astrophysics Data System (ADS)
Prosper, Harrison Bertrand
1987-10-01
Given N lifetime measurements an estimate of the mean lifetime can be obtained from a weighted sum of these measurements. We derive exact expressions for the probability density function, the moment-generating function, and the cumulative distribution function for the weighted sum. We indicate how these results might be used in the estimation of particle lifetimes. The probability distribution function of Yost for the distribution of lifetime measurements with finite measurement error is our starting point.
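The weighted sum whose exact distribution the paper derives is, in its standard form, the inverse-variance weighted mean. A minimal sketch of that estimator and its nominal standard error follows; the paper's contribution is the exact (non-Gaussian) distribution of this statistic, which the sketch does not attempt.

```python
def weighted_mean_lifetime(taus, sigmas):
    """Inverse-variance weighted combination of N lifetime measurements:
    tau_hat = sum(w_i * t_i) / sum(w_i) with w_i = 1 / sigma_i^2, and
    nominal standard error 1 / sqrt(sum(w_i))."""
    ws = [1.0 / s ** 2 for s in sigmas]
    tot = sum(ws)
    tau = sum(w * t for w, t in zip(ws, taus)) / tot
    return tau, (1.0 / tot) ** 0.5
```

Measurements with smaller quoted errors dominate the sum; with equal errors this reduces to the ordinary mean with error σ/√N.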
A method to estimate weight and dimensions of large and small gas turbine engines
NASA Technical Reports Server (NTRS)
Onat, E.; Klees, G. W.
1979-01-01
A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis are discussed.
Accurate estimate of α variation and isotope shift parameters in Na and Mg+
NASA Astrophysics Data System (ADS)
Sahoo, B. K.
2010-12-01
We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.
Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle
NASA Technical Reports Server (NTRS)
VanEepoel, John; Thienel, Julie; Sanner, Robert M.
2006-01-01
In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.
Some recommendations for an accurate estimation of Lanice conchilega density based on tube counts
NASA Astrophysics Data System (ADS)
van Hoey, Gert; Vincx, Magda; Degraer, Steven
2006-12-01
The tube building polychaete Lanice conchilega is a common and ecologically important species in intertidal and shallow subtidal sands. It builds a characteristic tube with ragged fringes and can retract rapidly into its tube to depths of more than 20 cm. Therefore, it is very difficult to sample L. conchilega individuals, especially with a Van Veen grab. Consequently, many studies have used tube counts as estimates of real densities. This study reports on some aspects to be considered when using tube counts as a density estimate of L. conchilega, based on intertidal and subtidal samples. Due to its accuracy and independence of sampling depth, the tube method is considered the prime method to estimate the density of L. conchilega. However, caution is needed when analyzing samples with fragile young individuals and samples from areas where temporary physical disturbance is likely to occur.
Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2006-01-01
Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence both techniques do not scale well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
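The core idea above, binning with a hash table so memory scales with occupied bins rather than with the full bin grid, can be sketched in a few lines. This is a minimal illustration of the concept, not the paper's C++ implementation.

```python
def hash_bin_density(points, bin_width):
    """Sparse multi-dimensional histogram density estimate: map each bin's
    integer coordinates to a count in a dict (hash table), so only occupied
    bins consume memory, however many dimensions the grid has."""
    counts = {}
    for p in points:
        key = tuple(int(c // bin_width) for c in p)   # bin coordinates
        counts[key] = counts.get(key, 0) + 1
    n = len(points)
    vol = bin_width ** len(points[0])                 # volume of one bin
    return {k: c / (n * vol) for k, c in counts.items()}
```

A dense d-dimensional array with k bins per axis needs k^d cells; the dict needs at most one entry per data point, which is what makes the approach viable in high dimensions.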
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1985-01-01
Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.
Spectral estimation from laser scanner data for accurate color rendering of objects
NASA Astrophysics Data System (ADS)
Baribeau, Rejean
2002-06-01
Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are judged by the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when the optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
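The PCA route described above models each reflectance spectrum as the training mean plus a combination of a few principal components, then solves for the combination using only the three sampled wavelengths. The sketch below shows that linear-algebra step; the basis is whatever the caller trained, and the wavelength grid is an assumption for illustration.

```python
import numpy as np

def reconstruct_spectrum(samples, sample_idx, mean_spec, basis):
    """PCA-style spectral recovery: model r = mean + B a with B the leading
    principal components of a training set; solve for a from the sampled
    wavelengths only, then expand back to the full wavelength grid."""
    B = basis[sample_idx, :]                  # basis rows at sampled wavelengths
    a, *_ = np.linalg.lstsq(B, samples - mean_spec[sample_idx], rcond=None)
    return mean_spec + basis @ a
```

With three samples and three components the sampled system is square; recovery is exact whenever the true spectrum lies in the span of the components, and approximate otherwise, which is what the CIE94 scores above quantify.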
Heo, Seo Weon; Kim, Hyungsuk
2010-05-01
Estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the estimation performance for ultrasound attenuation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase-compensation of each RF segment using normalized cross-correlation to minimize estimation errors due to phase variations, and on a weighted averaging technique to maximize the signal-to-noise ratio (SNR). Simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate them within 2.96%. The proposed method is especially effective when dealing with signals reflected from deeper depths, where the SNR level is lower, or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array on tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered.
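A minimal numpy sketch of the phase-compensation idea: align each RF segment to a reference via the cross-correlation peak, then take a weighted average of the segment power spectra. The paper's normalized cross-correlation and SNR-derived weights are simplified here to plain correlation and caller-supplied weights.

```python
import numpy as np

def phase_compensated_spectrum(segments, weights=None):
    """Weighted average power spectrum over RF segments, after shifting each
    segment so its cross-correlation with the first (reference) segment
    peaks at zero lag."""
    ref = np.asarray(segments[0], dtype=float)
    n = len(ref)
    w = np.ones(len(segments)) if weights is None else np.asarray(weights, float)
    acc = np.zeros(n // 2 + 1)
    for wi, seg in zip(w, segments):
        seg = np.asarray(seg, dtype=float)
        xc = np.correlate(seg, ref, mode="full")
        lag = int(np.argmax(xc)) - (n - 1)   # lag of maximum correlation
        aligned = np.roll(seg, -lag)         # phase-compensate the segment
        acc += wi * np.abs(np.fft.rfft(aligned)) ** 2
    return acc / w.sum()
```

Without the alignment step, averaging spectra of phase-shifted segments would smear spectral detail; with it, identical shifted segments average to the reference spectrum exactly.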
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. In order to discuss the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive data set covering a wide area is needed. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, every such database contains unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. In order to evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and applicable to chronological research alongside recent measurements. Here, we introduce the methodological framework and archaeological applications.
How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?
Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.
2010-01-01
We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
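A heavily simplified sketch of the ARGO idea: regress the current week's flu level on its own recent lags (autoregression) plus the same week's search volumes, which are available before official surveillance data. The ridge penalty and all variable names here are stand-ins; the actual ARGO model uses L1 regularization over many Google search terms with dynamic weekly retraining.

```python
import numpy as np

def argo_nowcast(flu, searches, p=3, ridge=1e-6):
    """Nowcast the latest week's flu level (assumed not yet reported) from
    that week's search volumes plus the p preceding flu levels, using ridge
    regression fit on all earlier weeks."""
    T = len(flu)
    X = np.array([np.concatenate(([1.0], flu[t - p:t], searches[t]))
                  for t in range(p, T - 1)])
    y = np.asarray(flu[p:T - 1])
    # Ridge solution: (X'X + lambda*I)^-1 X'y
    beta = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    x_now = np.concatenate(([1.0], flu[T - 1 - p:T - 1], searches[T - 1]))
    return float(x_now @ beta)
```

Refitting the regression each week, as sketched here by training only on past weeks, is what gives the model its self-correcting behavior as search habits drift.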
Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle
NASA Astrophysics Data System (ADS)
Timinis, Constantinos; Pitris, Costas
2016-03-01
The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.
Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2012-04-01
Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the sudano-sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - on yields, (ii) in early warning systems, and (iii) to assess future food security. Yet the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone, the quality of precipitation estimates is therefore a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delays in reporting time, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not direct measurements but rather estimations of precipitation. Used as input for crop models, they determine the performance of the simulated yield; hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and
Techniques for accurate estimation of net discharge in a tidal channel
Simpson, Michael R.; Bland, Roger
1999-01-01
An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.
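The index velocity rating described above reduces to a simple linear fit: concurrent ADCP discharge measurements are converted to mean channel velocity, regressed on the index velocity, and the rating is then applied to continuous index readings. The sketch below assumes a fixed channel area (in practice, area is itself a stage-dependent rating) and uses hypothetical names.

```python
def calibrate_index_velocity(index_v, measured_q, area):
    """Least-squares fit of mean channel velocity = a + b * index velocity,
    where mean velocity comes from concurrent ADCP discharge / channel area."""
    mean_v = [q / area for q in measured_q]
    n = len(index_v)
    sx, sy = sum(index_v), sum(mean_v)
    sxx = sum(x * x for x in index_v)
    sxy = sum(x * y for x, y in zip(index_v, mean_v))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def discharge(index_v, a, b, area):
    """Rated discharge (m^3/s); the sign carries tidal flow direction."""
    return area * (a + b * index_v)
```

Because tidal flow reverses, the rating is fit over both positive and negative index velocities, and net discharge over a tidal cycle is the average of the signed rated values.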
Plant DNA Barcodes Can Accurately Estimate Species Richness in Poorly Known Floras
Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew
2011-01-01
Background Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Methodology/Principal Findings Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation, observed in some angiosperm families as an inversion that obscures the monophyly of species. Conclusions/Significance We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways. PMID:22096501
Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan
2009-01-01
In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides a combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for the estimation of the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible turbo (RCPT) code rate and symbol constellation. Results show the substantial performance gain by using different symbol constellations across the scalable layers as compared to a fixed constellation.
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
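As a minimal illustration of the model class, the linear part of such an identification can be fit by least squares. The sketch below fits a plain ARX model (regression on past outputs and past inputs) and deliberately omits the Laguerre compression of the input history and the nonlinear kernel terms that are the paper's actual contribution.

```python
import numpy as np

def fit_arx(u, y, p=1, q=1):
    """Least-squares fit of y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j].
    Returns (a, b); a plain ARX stand-in for the Laguerre-expanded ARMA model."""
    m = max(p, q)
    X = np.array([np.concatenate((y[t - p:t][::-1], u[t - q:t][::-1]))
                  for t in range(m, len(y))])
    Y = np.asarray(y[m:])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef[:p], coef[p:]
```

Including the autoregressive (past-output) terms is what lets a short parameter vector capture long impulse responses, which is the same motivation the abstract gives for moving from a pure moving-average (Volterra-Wiener) description to an ARMA one.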
Development of weight and cost estimates for lifting surfaces with active controls
NASA Technical Reports Server (NTRS)
Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.
1976-01-01
Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.
Weight and volume estimates for aluminum-air batteries designed for electric vehicle applications
Cooper, J.F.
1980-01-01
The weights and volumes of reactants, electrolyte, and hardware components are estimated for a 40-kW (peak), 70-kWh aluminum-air battery designed for electric vehicle applications. Generalized equations are derived which express battery power and energy content as functions of total anode area, aluminum-anode weight, and discharge current density. Equations are also presented which express total battery weight and volume as linear combinations of the variables anode area and anode weight. The sizing and placement of battery components within the engine compartment of typical five-passenger vehicles is briefly discussed.
Development of a conceptual flight vehicle design weight estimation method library and documentation
NASA Astrophysics Data System (ADS)
Walker, Andrew S.
The state of the art in estimating the volumetric size and mass of flight vehicles is held today by an elite group of engineers in the Aerospace Conceptual Design Industry. This is not a skill readily accessible or taught in academia. To estimate flight vehicle mass properties, many aerospace engineering students are encouraged to read the latest design textbooks, learn how to use a few basic statistical equations, and plunge into the details of parametric mass properties analysis. Specifications for and a prototype of a standardized engineering "tool-box" of conceptual and preliminary design weight estimation methods were developed to manage the growing and ever-changing body of weight estimation knowledge. This also bridges the gap in Mass Properties education for aerospace engineering students. The Weight Method Library will also be used as a living document for use by future aerospace students. This "tool-box" consists of a weight estimation method bibliography containing unclassified, open-source literature for conceptual and preliminary flight vehicle design phases. Transport aircraft validation cases have been applied to each entry in the AVD Weight Method Library in order to provide a sense of context and applicability to each method. The weight methodology validation results indicate consensus and agreement of the individual methods. This generic specification of a method library will be applicable for use by other disciplines within the AVD Lab, Post-Graduate design labs, or engineering design professionals.
Endres, M I; Lobeck-Luchterhand, K M; Espejo, L A; Tucker, C B
2014-01-01
Dairy welfare assessment programs are becoming more common on US farms. Outcome-based measurements, such as locomotion, hock lesion, hygiene, and body condition scores (BCS), are included in these assessments. The objective of the current study was to investigate the proportion of cows in the pen or subsamples of pens on a farm needed to provide an accurate estimate of the previously mentioned measurements. In experiment 1, we evaluated cows in 52 high pens (50 farms) for lameness using a 1- to 5-scale locomotion scoring system (1 = normal and 5 = severely lame; 24.4 and 6% of animals were scored ≥ 3 or ≥ 4, respectively). Cows were also given a BCS using a 1- to 5-scale, where 1 = emaciated and 5 = obese; cows were rarely thin (BCS ≤ 2; 0.10% of cows) or fat (BCS ≥ 4; 0.11% of cows). Hygiene scores were assessed on a 1- to 5-scale with 1 = clean and 5 = severely dirty; 54.9% of cows had a hygiene score ≥ 3. Hock injuries were classified as 1 = no lesion, 2 = mild lesion, and 3 = severe lesion; 10.6% of cows had a score of 3. Subsets of data were created with 10 replicates of random sampling that represented 100, 90, 80, 70, 60, 50, 40, 30, 20, 15, 10, 5, and 3% of the cows measured/pen. In experiment 2, we scored the same outcome measures on all cows in lactating pens from 12 farms and evaluated using pen subsamples: high; high and fresh; high, fresh, and hospital; and high, low, and hospital. For both experiments, the association between the estimates derived from all subsamples and entire pen (experiment 1) or herd (experiment 2) prevalence was evaluated using linear regression. To be considered a good estimate, 3 criteria must be met: R(2)>0.9, slope = 1, and intercept = 0. In experiment 1, on average, recording 15% of the pen represented the percentage of clinically lame cows (score ≥ 3), whereas 30% needed to be measured to estimate severe lameness (score ≥ 4). Only 15% of the pen was needed to estimate the percentage of the herd with a hygiene
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
ERIC Educational Resources Information Center
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Univariate and Default Standard Unit Biases in Estimation of Body Weight and Caloric Content
ERIC Educational Resources Information Center
Geier, Andrew B.; Rozin, Paul
2009-01-01
College students estimated the weight of adult women from either photographs or a live presentation by a set of models and estimated the calories in 1 of 2 actual meals. The 2 meals had the same items, but 1 had larger portion sizes than the other. The results suggest: (a) Judgments are biased toward transforming the example in question to the…
Essink-Bot, Marie-Louise; Pereira, Joaquin; Packer, Claire; Schwarzinger, Michael; Burstrom, Kristina
2002-01-01
OBJECTIVE: To investigate the sources of cross-national variation in disability-adjusted life-years (DALYs) in the European Disability Weights Project. METHODS: Disability weights for 15 disease stages were derived empirically in five countries by means of a standardized procedure and the cross-national differences in visual analogue scale (VAS) scores were analysed. For each country the burden of dementia in women, used as an illustrative example, was estimated in DALYs. An analysis was performed of the relative effects of cross-national variations in demography, epidemiology and disability weights on DALY estimates. FINDINGS: Cross-national comparison of VAS scores showed almost identical ranking orders. After standardization for population size and age structure of the populations, the DALY rates per 100000 women ranged from 1050 in France to 1404 in the Netherlands. Because of uncertainties in the epidemiological data, the extent to which these differences reflected true variation between countries was difficult to estimate. The use of European rather than country-specific disability weights did not lead to a significant change in the burden of disease estimates for dementia. CONCLUSIONS: Sound epidemiological data are the first requirement for burden of disease estimation and relevant between-countries comparisons. DALY estimates for dementia were relatively insensitive to differences in disability weights between European countries. PMID:12219156
Accurate Estimation of Airborne Ultrasonic Time-of-Flight for Overlapping Echoes
Sarabia, Esther G.; Llata, Jose R.; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P.
2013-01-01
In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774
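One way to extract multiple times of flight from overlapping echoes is iterative template subtraction: find the best-matching shift of the known pulse, record it, remove that echo, and repeat. This matching-pursuit style sketch is a stand-in for the paper's model-based procedure, which additionally accounts for radiation pattern, sensor dynamics, and attenuation.

```python
import numpy as np

def echo_tofs(signal, template, n_echoes, fs):
    """Repeatedly find the template shift with maximum cross-correlation,
    record it as a time of flight, and subtract the least-squares-fitted
    echo so later (possibly overlapping) echoes can be located."""
    residual = np.asarray(signal, dtype=float).copy()
    tofs = []
    for _ in range(n_echoes):
        xc = np.correlate(residual, template, mode="valid")
        k = int(np.argmax(np.abs(xc)))
        amp = xc[k] / np.dot(template, template)      # least-squares amplitude
        residual[k:k + len(template)] -= amp * template
        tofs.append(k / fs)
    return sorted(tofs)
```

A plain threshold detector would merge two echoes whose tails overlap; subtracting the fitted first echo before searching again is what lets the second arrival be timed.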
Signal-to-noise Ratio and Combiner Weight Estimation for Symbol Stream Combining
NASA Technical Reports Server (NTRS)
Vo, Q. D.
1984-01-01
A method is presented for signal-to-noise ratio (SNR) and symbol stream combiner weight estimation. The SNR estimator employs absolute value moments as in an earlier method. The main contribution is a new algorithm, derived for the combiner weight estimator, that removes the large bias at low SNRs. The new algorithm is simulated by combining two independent symbol streams at various SNRs. As an example, when combining two symbol streams at SNRs of -1 dB and -7 dB, combiner weight estimates using 1000 samples for the -1 dB stream and 10,000 samples for the -7 dB stream achieve an output SNR of -0.039 dB, which is just 0.012 dB below the theoretical limit achievable with perfect knowledge of the SNRs.
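A sketch of the two ingredients named above: a moment-based SNR estimate and the resulting combiner weights. This uses the simple first-absolute-moment estimator (which, like the earlier method the abstract mentions, is biased at low SNR) and maximal-ratio-style weights proportional to amplitude over noise power.

```python
def snr_estimate(samples):
    """Moment-based SNR estimate for a +/-A symbol stream in additive noise:
    signal amplitude A ~ mean|x|, noise power ~ mean(x^2) - A^2."""
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    return m1 * m1 / max(m2 - m1 * m1, 1e-12)

def combiner_weights(streams):
    """Maximal-ratio-style weights, proportional to A / sigma^2 per stream."""
    raw = []
    for s in streams:
        m1 = sum(abs(x) for x in s) / len(s)
        sigma2 = max(sum(x * x for x in s) / len(s) - m1 * m1, 1e-12)
        raw.append(m1 / sigma2)
    total = sum(raw)
    return [r / total for r in raw]
```

At low SNR the absolute-moment amplitude estimate absorbs noise and overstates A, which is exactly the bias the paper's new combiner-weight algorithm is designed to remove; this sketch does not reproduce that correction.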
Xiaopeng, QI; Liang, WEI; BARKER, Laurie; LEKIACHVILI, Akaki; Xingyou, ZHANG
2015-01-01
Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly—or 30-day—basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias than cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R2 range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both ArcGIS and SAS methods are reliable for U.S. county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects. PMID:26167169
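As a simplified illustration of interpolating station temperatures to a county population centroid, here is plain inverse-distance weighting; the paper's cokriging with elevation as a covariate (in ArcGIS or SAS) is substantially more sophisticated, and treating lat/lon as planar coordinates is a further simplification.

```python
import math

def idw_temperature(stations, centroid, power=2.0):
    """Inverse-distance-weighted temperature at a county population centroid.
    `stations` is a list of ((lat, lon), temperature) pairs; planar distance
    is a crude stand-in for proper geographic distance."""
    num = den = 0.0
    for (lat, lon), temp in stations:
        d = math.hypot(lat - centroid[0], lon - centroid[1])
        if d < 1e-9:
            return temp          # centroid coincides with a station
        w = d ** -power
        num += w * temp
        den += w
    return num / den
```

Unlike kriging, IDW ignores the spatial covariance structure and the elevation covariate, which is why cokriging gave the better accuracy and lower bias reported above.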
NASA Astrophysics Data System (ADS)
Saslow, Wayne M.
2014-04-01
Three common approaches to F⃗ = ma⃗ are: (1) as an exactly true definition of force F⃗ in terms of measured inertial mass m and measured acceleration a⃗; (2) as an exactly true axiom relating measured values of a⃗, F⃗ and m; and (3) as an imperfect but accurately true physical law relating measured a⃗ to measured F⃗, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a⃗ and F⃗, where a⃗ is normally specified using distance and time as standard units, and F⃗ from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate), that balance-scale weight W is proportional to m, and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force, the newton, a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time, the second, a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.
NASA Astrophysics Data System (ADS)
Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
An Energy-Efficient Strategy for Accurate Distance Estimation in Wireless Sensor Networks
Tarrío, Paula; Bernardos, Ana M.; Casar, José R.
2012-01-01
In line with recent research efforts made to conceive energy saving protocols and algorithms and power sensitive network architectures, in this paper we propose a transmission strategy to minimize the energy consumption in a sensor network when using a localization technique based on the measurement of the strength (RSS) or the time of arrival (TOA) of the received signal. In particular, we find the transmission power and the packet transmission rate that jointly minimize the total consumed energy, while ensuring at the same time a desired accuracy in the RSS or TOA measurements. We also propose some corrections to these theoretical results to take into account the effects of shadowing and packet loss in the propagation channel. The proposed strategy is shown to be effective in realistic scenarios providing energy savings with respect to other transmission strategies, and also guaranteeing a given accuracy in the distance estimations, which will serve to guarantee a desired accuracy in the localization result. PMID:23202218
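RSS-based ranging of the kind this strategy supports typically inverts a log-distance path-loss model to recover distance from received power. A hedged sketch with illustrative constants (reference power, reference distance, and path-loss exponent are invented, not the paper's calibration):

```python
# Hedged sketch of RSS ranging via the log-distance path-loss model:
# RSS(d) = P0 - 10*n*log10(d/d0). Constants here are illustrative.
import math

def distance_from_rss(rss_dbm, p0_dbm=-40.0, d0=1.0, n=2.5):
    """Invert the path-loss model for the distance d (same units as d0)."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10 * n))

print(round(distance_from_rss(-65.0), 2))  # -> 10.0
```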
Monaco, James P; Madabhushi, Anant
2012-12-01
Many estimation tasks require Bayesian classifiers capable of adjusting their performance (e.g. sensitivity/specificity). In situations where the optimal classification decision can be identified by an exhaustive search over all possible classes, means for adjusting classifier performance, such as probability thresholding or weighting the a posteriori probabilities, are well established. Unfortunately, analogous methods compatible with Markov random fields (i.e. large collections of dependent random variables) are noticeably absent from the literature. Consequently, most Markov random field (MRF) based classification systems typically restrict their performance to a single, static operating point (i.e. a paired sensitivity/specificity). To address this deficiency, we previously introduced an extension of maximum posterior marginals (MPM) estimation that allows certain classes to be weighted more heavily than others, thus providing a means for varying classifier performance. However, this extension is not appropriate for the more popular maximum a posteriori (MAP) estimation. Thus, a strategy for varying the performance of MAP estimators is still needed. Such a strategy is essential for several reasons: (1) the MAP cost function may be more appropriate in certain classification tasks than the MPM cost function, (2) the literature provides a surfeit of MAP estimation implementations, several of which are considerably faster than the typical Markov Chain Monte Carlo methods used for MPM, and (3) MAP estimation is used far more often than MPM. Consequently, in this paper we introduce multiplicative weighted MAP (MWMAP) estimation, achieved via the incorporation of multiplicative weights into the MAP cost function, which allows certain classes to be preferred over others. This creates a natural bias for specific classes, and consequently a means for adjusting classifier performance. Similarly, we show how this multiplicative weighting strategy can be applied to the MPM
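The effect of multiplicative class weights is easiest to see without the MRF machinery: scaling a class's posterior shifts the decision threshold, trading sensitivity against specificity. A hedged, single-site illustration (the full MWMAP estimator folds such weights into an MRF cost function; class names and numbers below are invented):

```python
# Hedged, non-MRF sketch of multiplicative class weighting:
# pick the class maximizing w_c * P(c | x).

def weighted_map(posteriors, weights):
    """Return the class maximizing w_c * P(c | x)."""
    scored = {c: weights[c] * p for c, p in posteriors.items()}
    return max(scored, key=scored.get)

post = {"tumor": 0.35, "benign": 0.65}  # hypothetical posteriors at one site
print(weighted_map(post, {"tumor": 1.0, "benign": 1.0}))  # -> benign
print(weighted_map(post, {"tumor": 3.0, "benign": 1.0}))  # -> tumor (sensitivity up)
```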
[Research on maize multispectral image accurate segmentation and chlorophyll index estimation].
Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e
2015-01-01
In order to rapidly acquire maize growing information in the field, a non-destructive method of maize chlorophyll content index measurement was conducted based on multi-spectral imaging technique and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China and the crop was Zheng-dan 958 planted in an experiment field of about 1000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically to the ground with a vertical distance of 2 m and an angular field of 50°. The SPAD index of each sample was measured synchronously to provide the chlorophyll content index. Secondly, after image smoothing using an adaptive smooth filtering algorithm, the NIR maize image was selected to segment the maize leaves from the background, because a large difference appeared in the gray histogram between plant and soil background. The NIR image segmentation algorithm followed steps of preliminary and accurate segmentation: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were compared. It was revealed that the latter was the better one in corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation. Expansion and corrosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. And then, the multi-spectral image of maize canopy was accurately segmented in the R, G and B bands separately. Thirdly, the image parameters were abstracted based on the segmented visible and NIR images. The average gray
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and that thus enables its application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
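The Green-Kubo formalism invoked above relates κ to equilibrium heat-flux fluctuations; the standard relation (generic, not specific to the authors' accelerated scheme) is:

```latex
% Green-Kubo expression for the thermal conductivity tensor,
% with V the cell volume and J the heat flux:
\kappa_{\alpha\beta} \;=\; \frac{V}{k_{\mathrm{B}} T^{2}}
  \int_{0}^{\infty} \bigl\langle J_{\alpha}(t)\, J_{\beta}(0) \bigr\rangle \,\mathrm{d}t
```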
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information (higher-order time correlations than MSMs capture) that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
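The simplest history-respecting MFPT estimator counts first-passage times directly along a trajectory, with no Markov assumption. A hedged toy sketch (this is the baseline idea, not the authors' full non-Markovian method; the trajectory is invented):

```python
# Hedged sketch: direct first-passage counting A -> B on a discretized
# trajectory. Toy data; states A=0, B=1 are arbitrary labels.

def mfpt(traj, a, b):
    """Mean first-passage time A -> B, in frames, from one trajectory."""
    times, start = [], None
    for t, s in enumerate(traj):
        if s == a and start is None:
            start = t                 # entered A: start the clock
        elif s == b and start is not None:
            times.append(t - start)   # reached B: record passage time
            start = None
    return sum(times) / len(times) if times else float("nan")

traj = [0, 0, 2, 1, 0, 2, 2, 1, 1, 0]  # toy 3-state trajectory
print(mfpt(traj, 0, 1))  # -> 3.0
```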
NASA Astrophysics Data System (ADS)
Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick
2009-06-01
When using optical measurements of the sound fields inside a glass tube, near the material under test, to estimate the reflection and absorption coefficients, not only these acoustical parameters but also confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.
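A multi-sine of the kind described is just a sum of sinusoids placed on selected frequencies, usually with randomized phases to limit the crest factor. A hedged sketch (frequencies, sample rate, and duration are invented; the paper's actual signal design is not reproduced):

```python
# Hedged sketch: multi-sine test signal on chosen (e.g. resonance)
# frequencies, with random phases to keep the peak amplitude modest.
import math
import random

def multisine(freqs, fs, dur, seed=0):
    """Sampled sum of unit sinusoids at the given frequencies (Hz)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in freqs]
    n = int(fs * dur)
    return [sum(math.sin(2 * math.pi * f * t / fs + p)
                for f, p in zip(freqs, phases)) for t in range(n)]

sig = multisine([120.0, 240.0, 360.0], fs=8000, dur=0.5)
print(len(sig))  # -> 4000
```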
2014-01-01
Background Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations which change as children grow. Methods We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children ages 0–13 years old. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. Results The estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. Conclusions The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments. PMID:24885453
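The core of the costing logic above is a lookup: a weight-band dosing table maps a child's weight to a daily dose, and unit prices turn that into a monthly cost. A hedged sketch in which the bands and the price are invented placeholders, not the paper's published figures:

```python
# Hedged sketch of weight-band ARV costing. Bands, tablet counts, and
# unit price below are hypothetical, for illustration only.

DOSING_BANDS = [          # (min_kg, max_kg, tablets_per_day)
    (3, 5.9, 1),
    (6, 9.9, 1.5),
    (10, 13.9, 2),
    (14, 19.9, 2.5),
]
PRICE_PER_TABLET = 0.08   # USD, hypothetical

def monthly_cost(weight_kg, days=30):
    for lo, hi, n in DOSING_BANDS:
        if lo <= weight_kg <= hi:
            return n * PRICE_PER_TABLET * days
    raise ValueError("weight outside pediatric bands")

print(round(monthly_cost(12.0), 2))  # 2 tablets/day -> 4.8
```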
Neural network-based visual body weight estimation for drug dosage finding
NASA Astrophysics Data System (ADS)
Pfitzner, Christian; May, Stefan; Nüchter, Andreas
2016-03-01
Body weight adapted drug dosages are important for emergency treatments: Inaccuracies in body weight estimation may lead to inaccurate drug dosing. This paper describes an improved approach to estimating the body weight of emergency patients in a trauma room, based on images from an RGB-D and a thermal camera. The improvements are specific to several aspects: Fusion of RGB-D and thermal camera eases filtering and segmentation of the patient's body from the background. Robustness and accuracy is gained by an artificial neural network, which considers geometric features from the sensors as input, e.g. the patient's volume, and shape parameters. Preliminary experiments with 69 patients show an accuracy close to 90 percent, with less than 10 percent relative error and the results are compared with the physician's estimate, the patient's statement and an established anthropometric method.
Conway, T L; Cronan, T A; Peterson, K A
1989-05-01
This study examined whether percent body fat estimated from a simple circumference technique was more strongly associated with physical fitness than commonly used weight-height indices. Participants included 5,710 Navy men and 477 Navy women. Physical fitness measures included a 1.5-mile run, sit-ups test, sit-reach flexibility test, and an average fitness score. Circumference-estimated percent body fat was more strongly correlated with physical fitness than were any of the weight-height indices. Although the overall pattern of associations was similar for men and women, correlations were stronger for men. Circumference-estimated percent body fat may be more strongly associated with physical fitness because it assesses actual body fat more reliably than weight-height indices.
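One widely published variant of the Navy circumference equation for men (Hodgdon and Beckett) is sketched below. The coefficients are reproduced from common secondary sources and should be checked against the original report before use; the measurement values are invented:

```python
# Hedged sketch of a circumference-based percent body fat estimate for men
# (log10 form, inputs in inches). Coefficients as commonly published; verify
# against the original Navy report before relying on them.
import math

def navy_body_fat_men(abdomen_in, neck_in, height_in):
    """Percent body fat for men from circumferences (inches)."""
    return (86.010 * math.log10(abdomen_in - neck_in)
            - 70.041 * math.log10(height_in) + 36.76)

bf = navy_body_fat_men(abdomen_in=34.0, neck_in=15.0, height_in=70.0)
print(round(bf, 1))  # -> 17.5
```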
Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.
NASA Astrophysics Data System (ADS)
Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke
2013-04-01
temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules
Yarwood, Annie; Han, Buhm; Raychaudhuri, Soumya; Bowes, John; Lunt, Mark; Pappas, Dimitrios A; Kremer, Joel; Greenberg, Jeffrey D; Plenge, Robert; Worthington, Jane; Barton, Anne; Eyre, Steve
2015-01-01
Background There is currently great interest in the incorporation of genetic susceptibility loci into screening models to identify individuals at high risk of disease. Here, we present the first risk prediction model including all 46 known genetic loci associated with rheumatoid arthritis (RA). Methods A weighted genetic risk score (wGRS) was created using 45 RA non-human leucocyte antigen (HLA) susceptibility loci, imputed amino acids at HLA-DRB1 (11, 71 and 74), HLA-DPB1 (position 9), HLA-B (position 9), and gender. The wGRS was tested in 11,366 RA cases and 15,489 healthy controls. The risk of developing RA was estimated using logistic regression by dividing the wGRS into quintiles. The ability of the wGRS to discriminate between cases and controls was assessed by receiver operating characteristic analysis and discrimination improvement tests. Results Individuals in the highest risk group showed significantly increased odds of developing anti-cyclic citrullinated peptide-positive RA compared to the lowest risk group (OR 27.13, 95% CI 23.70 to 31.05). The wGRS was validated in an independent cohort that showed similar results (area under the curve 0.78, OR 18.00, 95% CI 13.67 to 23.71). Comparison of the full wGRS with a wGRS in which HLA amino acids were replaced by a HLA tag single-nucleotide polymorphism showed a significant loss of sensitivity and specificity. Conclusions Our study suggests that in RA, even when using all known genetic susceptibility variants, prediction performance remains modest; while this is insufficiently accurate for general population screening, it may prove of more use in targeted studies. Our study has also highlighted the importance of including HLA variation in risk prediction models. PMID:24092415
NASA Astrophysics Data System (ADS)
Tan, Zhiqiang; Xia, Junchao; Zhang, Bin W.; Levy, Ronald M.
2016-01-01
The weighted histogram analysis method (WHAM), including its binless extension, has been developed independently in several different contexts, and is widely used in chemistry, physics, and statistics for computing free energies and expectations from multiple ensembles. However, this method, while statistically efficient, is computationally costly or even infeasible when a large number, hundreds or more, of distributions are studied. We develop a locally weighted histogram analysis method (local WHAM) from the perspective of simulations of simulations (SOS), using generalized serial tempering (GST) to resample simulated data from multiple ensembles. The local WHAM equations based on one jump attempt per GST cycle can be solved by optimization algorithms orders of magnitude faster than standard implementations of global WHAM, but yield estimates of free energies similarly accurate to global WHAM estimates. Moreover, we propose an adaptive SOS procedure for solving local WHAM equations stochastically when multiple jump attempts are performed per GST cycle. Such a stochastic procedure can lead to more accurate estimates of equilibrium distributions than local WHAM with one jump attempt per cycle. The proposed methods are broadly applicable when the original data to be "WHAMMED" are obtained properly by any sampling algorithm, including serial tempering and parallel tempering (replica exchange). To illustrate the methods, we estimated absolute binding free energies and binding energy distributions using the binding energy distribution analysis method from one- and two-dimensional replica exchange molecular dynamics simulations for the beta-cyclodextrin-heptanoate host-guest system. In addition to the computational advantage of handling large datasets, our two-dimensional WHAM analysis also demonstrates that accurate results similar to those from well-converged data can be obtained from simulations for which sampling is limited and not fully equilibrated.
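The global WHAM equations that local WHAM accelerates are a pair of self-consistent relations between bin probabilities and per-ensemble normalizations. A minimal, hedged sketch with toy histograms and bias factors (two ensembles, two bins; not the authors' local or stochastic solvers):

```python
# Hedged sketch of the global WHAM self-consistent iteration over discrete
# bins. counts[k][i] = samples of ensemble k in bin i; c[k][i] = bias factor
# exp(-beta*U_k) in bin i. All numbers are toy values.

def wham(counts, c, iters=2000):
    K, M = len(counts), len(counts[0])
    N = [sum(row) for row in counts]       # samples per ensemble
    Ctot = [sum(counts[k][i] for k in range(K)) for i in range(M)]
    f = [1.0] * K                          # per-ensemble normalizations
    p = [0.0] * M
    for _ in range(iters):
        p = [Ctot[i] / sum(N[k] * f[k] * c[k][i] for k in range(K))
             for i in range(M)]
        f = [1.0 / sum(c[k][i] * p[i] for i in range(M)) for k in range(K)]
    z = sum(p)
    return [pi / z for pi in p], f

counts = [[90, 10], [20, 80]]              # histograms from 2 ensembles
c = [[1.0, 0.2], [0.3, 1.0]]               # bias factors per ensemble/bin
p, f = wham(counts, c)
print(round(sum(p), 6))  # -> 1.0
```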
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
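For uncorrelated noise across feeds, the SNR-optimal combining weights take the maximal-ratio form w_i ∝ conj(g_i)/σ_i², and the combined SNR equals the sum of per-channel SNRs. A hedged two-feed illustration (gains and noise variances are invented; the paper's estimator infers such weights from broadband noise-like signals):

```python
# Hedged sketch of maximal-ratio combining for an array feed:
# weights proportional to conjugate gain over noise variance. Toy values.

def mrc_weights(gains, noise_vars):
    return [g.conjugate() / v for g, v in zip(gains, noise_vars)]

def combined_snr(gains, noise_vars):
    # After MRC the SNR equals the sum of per-channel SNRs |g_i|^2 / v_i.
    w = mrc_weights(gains, noise_vars)
    sig = abs(sum(wi * gi for wi, gi in zip(w, gains))) ** 2
    noise = sum(abs(wi) ** 2 * v for wi, v in zip(w, noise_vars))
    return sig / noise

gains = [1.0 + 0j, 0.5 + 0.5j]   # per-feed complex signal gains (toy)
nvar = [1.0, 0.5]                # per-feed noise variances (toy)
print(round(combined_snr(gains, nvar), 3))  # -> 2.0 (= 1.0 + 1.0)
```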
NASA Technical Reports Server (NTRS)
Martinovic, Zoran N.; Cerro, Jeffrey A.
2002-01-01
This is an interim user's manual for current procedures used in the Vehicle Analysis Branch at NASA Langley Research Center, Hampton, Virginia, for launch vehicle structural subsystem weight estimation based on finite element modeling and structural analysis. The process is intended to complement traditional methods of conceptual and early preliminary structural design such as the application of empirical weight estimation or application of classical engineering design equations and criteria on one dimensional "line" models. Functions of two commercially available software codes are coupled together. Vehicle modeling and analysis are done using SDRC/I-DEAS, and structural sizing is performed with the Collier Research Corp. HyperSizer program.
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
Yamagishi, Junya; Okimoto, Noriaki; Morimoto, Gentaro; Taiji, Makoto
2014-11-01
The Poisson-Boltzmann implicit solvent (PB) is widely used to estimate the solvation free energies of biomolecules in molecular simulations. An optimized set of atomic radii (PB radii) is an important parameter for PB calculations, which determines the distribution of dielectric constants around the solute. We here present new PB radii for the AMBER protein force field to accurately reproduce the solvation free energies obtained from explicit solvent simulations. The presented PB radii were optimized using results from explicit solvent simulations of large systems. In addition, we discriminated PB radii for N- and C-terminal residues from those for nonterminal residues. The performances using our PB radii showed high accuracy for the estimation of solvation free energies at the level of the molecular fragment. The obtained PB radii are effective for the detailed analysis of the solvation effects of biomolecules.
Estimation of Disability Weights in the General Population of South Korea Using a Paired Comparison.
Ock, Minsu; Ahn, Jeonghoon; Yoon, Seok-Jun; Jo, Min-Woo
2016-01-01
We estimated the disability weights in the South Korean population by using a paired comparison-only model wherein 'full health' and 'being dead' were included as anchor points, without resorting to a cardinal method, such as person trade-off. The study was conducted via 2 types of survey: a household survey involving computer-assisted face-to-face interviews and a web-based survey (similar to that of the GBD 2010 disability weight study). With regard to the valuation methods, paired comparison, visual analogue scale (VAS), and standard gamble (SG) were used in the household survey, whereas paired comparison and population health equivalence (PHE) were used in the web-based survey. Accordingly, we described a total of 258 health states, with 'full health' and 'being dead' designated as anchor points. In the analysis, 4 models were considered: a paired comparison-only model; a hybrid model between paired comparison and PHE; a VAS model; and an SG model. A total of 2,728 and 3,188 individuals participated in the household and web-based survey, respectively. The Pearson correlation coefficients of the disability weights of health states between the GBD 2010 study and the current models were 0.802 for Model 2, 0.796 for Model 1, 0.681 for Model 3, and 0.574 for Model 4 (all P-values < 0.001). The discrimination of values according to health state severity was most suitable in Model 1. Based on these results, the paired comparison-only model was selected as the best model for estimating disability weights in South Korea, and for maintaining simplicity in the analysis. Thus, disability weights can be more easily estimated by using paired comparison alone, with 'full health' and 'being dead' included among the health states. As noted in our study, we believe that additional evidence regarding the universality of disability weight can be observed by using a simplified methodology of estimating disability weights. PMID:27606626
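Paired-comparison data of this kind can be turned into latent severity values with a comparison model; a hedged toy using a Bradley-Terry fit via the standard MM iteration (the study itself uses probit regression anchored at 'full health' and 'being dead'; the win counts below are invented):

```python
# Hedged sketch: latent values from paired-comparison counts with a
# Bradley-Terry model (MM iteration). Toy data for 3 health states.

def bradley_terry(wins, iters=500):
    """wins[i][j] = number of times state i was chosen over state j."""
    n = len(wins)
    s = [1.0] * n
    for _ in range(iters):
        for i in range(n):
            num = sum(wins[i][j] for j in range(n) if j != i)
            den = sum((wins[i][j] + wins[j][i]) / (s[i] + s[j])
                      for j in range(n) if j != i)
            s[i] = num / den
        z = sum(s)
        s = [x / z for x in s]       # normalize for identifiability
    return s

wins = [[0, 8, 9], [2, 0, 7], [1, 3, 0]]  # toy counts for 3 health states
s = bradley_terry(wins)
print(s[0] > s[1] > s[2])  # -> True
```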
Estimation of breed-specific heterosis effects for birth, weaning, and yearling weight in cattle.
Schiermiester, L N; Thallman, R M; Kuehn, L A; Kachman, S D; Spangler, M L
2015-01-01
Heterosis, assumed proportional to expected breed heterozygosity, was calculated for 6834 individuals with birth, weaning and yearling weight records from Cycle VII and advanced generations of the U.S. Meat Animal Research Center (USMARC) Germplasm Evaluation (GPE) project. Breeds represented in these data included: Angus, Hereford, Red Angus, Charolais, Gelbvieh, Simmental, Limousin and Composite MARC III. Heterosis was further estimated by proportions of British × British (B × B), British × Continental (B × C) and Continental × Continental (C × C) crosses and by breed-specific combinations. Model 1 fitted fixed covariates for heterosis within biological types while Model 2 fitted random breed-specific combinations nested within the fixed biological type covariates. Direct heritability estimates (SE) for birth, weaning, and yearling weight for Model 1 were 0.42 (0.04), 0.22 (0.03), and 0.39 (0.05), respectively. The direct heritability estimates (SE) of birth, weaning, and yearling weight for Model 2 were the same as Model 1, except yearling weight heritability was 0.38 (0.05). The B × B, B × C, and C × C heterosis estimates for birth weight were 0.47 (0.37), 0.75 (0.32), and 0.73 (0.54) kg, respectively. The B × B, B × C, and C × C heterosis estimates for weaning weight were 6.43 (1.80), 8.65 (1.54), and 5.86 (2.57) kg, respectively. Yearling weight estimates for B × B, B × C, and C × C heterosis were 17.59 (3.06), 13.88 (2.63), and 9.12 (4.34) kg, respectively. Differences did exist among estimates of breed-specific heterosis for weaning and yearling weight, although the variance component associated with breed-specific heterosis was not significant. These results illustrate that there are differences in breed-specific heterosis and exploiting these differences can lead to varying levels of heterosis among mating plans.
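Since heterosis is modeled as proportional to expected breed heterozygosity, a mating's predicted boost is the heterozygosity of the cross times the estimated heterosis effect. A hedged arithmetic sketch, reusing the B × C weaning-weight estimate (8.65 kg) from the abstract purely for illustration; the breed fractions are invented:

```python
# Hedged sketch: expected breed heterozygosity of a mating times an
# estimated heterosis effect. Breed fractions below are hypothetical.

def expected_heterozygosity(sire_fracs, dam_fracs):
    """1 - P(offspring draws the same breed of allele from both parents)."""
    breeds = set(sire_fracs) | set(dam_fracs)
    return 1.0 - sum(sire_fracs.get(b, 0.0) * dam_fracs.get(b, 0.0)
                     for b in breeds)

sire = {"Angus": 1.0}                     # purebred British sire
dam = {"Angus": 0.5, "Simmental": 0.5}    # B x C cross dam
h = expected_heterozygosity(sire, dam)    # 0.5
print(round(h * 8.65, 3))                 # predicted weaning-weight boost, kg -> 4.325
```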
Bachmann, Sebastian; Neufeld, Roman; Dzemski, Martin; Stalke, Dietmar
2016-06-13
New external calibration curves (ECCs) for the estimation of aggregation states of small molecules in solution by DOSY NMR spectroscopy are introduced and applied for a range of common NMR solvents ([D6]DMSO, C6D12, C6D6, CDCl3, and CD2Cl2). ECCs can be used to estimate molecular weights (MWs) from diffusion coefficients of previously unknown aggregates. This enables a straightforward and thorough examination of (de)aggregation phenomena in solution.
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation
ter Horst, Arjan C.; Koppen, Mathieu; Selen, Luc P. J.; Medendorp, W. Pieter
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement. PMID:26658990
Martinson, K L; Coleman, R C; Rendahl, A K; Fang, Z; McCue, M E
2014-05-01
Excessive BW has become a major health issue in the equine (Equus caballus) industry. The objectives were to determine if the addition of neck circumference and height improved existing BW estimation equations, to develop an equation for estimation of ideal BW, and to develop a method for assessing the likelihood of being overweight in adult equids. Six hundred and twenty-nine adult horses and ponies who met the following criteria were measured and weighed at 2 horse shows in September 2011 in Minnesota: age ≥ 3 yr, height ≥ 112 cm, and nonpregnant. Personnel assessed BCS on a scale of 1 to 9 and measured wither height at the third thoracic vertebra, body length from the point of shoulder to the point of the buttock, neck and girth circumference, and weight using a portable livestock scale. Individuals were grouped into breed types on the basis of existing knowledge and were confirmed with multivariate ANOVA analysis of morphometric measurements. Equations for estimated and ideal BW were developed using linear regression modeling. For estimated BW, the model was fit using all individuals and all morphometric measurements. For ideal BW, the model was fit using individuals with a BCS of 5; breed type, height, and body length were considered as these measurements are not affected by adiposity. A BW score to assess the likelihood of being overweight was developed by fitting a proportional odds logistic regression model on BCS using the difference between ideal and estimated BW, the neck to height ratio, and the girth to height ratio as predictors; this score was then standardized using the data from individuals with a BCS of 5. Breed types included Arabian, stock, and pony. Mean (± SD) BCS was 5.6 ± 0.9. BW (kg) was estimated by taking [girth (cm)^1.486 × length (cm)^0.554 × height (cm)^0.599 × neck (cm)^0.173]/3,596, 3,606, and 3,441 for Arabians, ponies, and stock horses, respectively (R^2 = 0.92; mean-squared error (MSE) = 22 kg). Ideal BW (kg) was
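The reported estimation equation can be applied directly. The sketch below uses the exponents and breed-type denominators stated in the abstract; the input measurements in the example are hypothetical, not data from the study:

```python
# Breed-type denominators reported in the abstract
DENOM = {"Arabian": 3596.0, "pony": 3606.0, "stock": 3441.0}

def estimated_bw_kg(girth_cm, length_cm, height_cm, neck_cm, breed_type):
    """Estimated body weight (kg) from morphometric measurements,
    following the abstract's equation:
    girth^1.486 * length^0.554 * height^0.599 * neck^0.173 / denominator."""
    return (girth_cm**1.486 * length_cm**0.554
            * height_cm**0.599 * neck_cm**0.173) / DENOM[breed_type]

# Hypothetical stock horse: girth 180 cm, length 165 cm,
# height 152 cm, neck circumference 90 cm
print(round(estimated_bw_kg(180, 165, 152, 90, "stock"), 1))
```

The example returns a weight in the typical adult stock-horse range, consistent with the reported MSE of 22 kg for the fitted model.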
A Computer Code for Gas Turbine Engine Weight And Disk Life Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Ghosn, Louis J.; Halliwell, Ian; Wickenheiser, Tim (Technical Monitor)
2002-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. In this paper, the major enhancements to NASA's engine-weight estimate computer code (WATE) are described. These enhancements include the incorporation of improved weight-calculation routines for the compressor and turbine disks using the finite-difference technique. Furthermore, the stress distribution for various disk geometries was incorporated into a life-prediction module to calculate disk life. A material database, consisting of the material data of most of the commonly used aerospace materials, has also been incorporated into WATE. Collectively, these enhancements provide a more realistic and systematic way to calculate the engine weight. They also provide additional insight into the design trade-off between engine life and engine weight. To demonstrate the new capabilities, the enhanced WATE code is used to perform an engine weight/life trade-off assessment on a production aircraft engine.
Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix
2015-12-01
In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
Preliminary weight and cost estimates for transport aircraft composite structural design concepts
NASA Technical Reports Server (NTRS)
1973-01-01
Preliminary weight and cost estimates have been prepared for design concepts utilized for a transonic long range transport airframe with extensive applications of advanced composite materials. The design concepts, manufacturing approach, and anticipated details of manufacturing cost reflected in the composite airframe are substantially different from those found in conventional metal structure and offer further evidence of the advantages of advanced composite materials.
Kushwaha, B P; Mandal, A; Arora, A L; Kumar, R; Kumar, S; Notter, D R
2009-08-01
Estimates of (co)variance components were obtained for weights at birth, weaning and 6, 9 and 12 months of age in Chokla sheep maintained at the Central Sheep and Wool Research Institute, Avikanagar, Rajasthan, India, over a period of 21 years (1980-2000). Records of 2030 lambs descended from 150 rams and 616 ewes were used in the study. Analyses were carried out by restricted maximum likelihood (REML) fitting an animal model and ignoring or including maternal genetic or permanent environmental effects. Six different animal models were fitted for all traits. The best model was chosen after testing the improvement of the log-likelihood values. Direct heritability estimates were inflated substantially for all traits when maternal effects were ignored. Heritability estimates for weight at birth, weaning and 6, 9 and 12 months of age were 0.20, 0.18, 0.16, 0.22 and 0.23, respectively in the best models. Additive maternal and maternal permanent environmental effects were both significant at birth, accounting for 9% and 12% of phenotypic variance, respectively, but the source of maternal effects (additive versus permanent environmental) at later ages could not be clearly identified. The estimated repeatabilities across years of ewe effects on lamb body weights were 0.26, 0.14, 0.12, 0.13, and 0.15 at birth, weaning, 6, 9 and 12 months of age, respectively. These results indicate that modest rates of genetic progress are possible for all weights. PMID:19630878
Valchev, Nikola; Zijdewind, Inge; Keysers, Christian; Gazzola, Valeria; Avenanti, Alessio; Maurits, Natasha M.
2016-01-01
Seeing others performing an action induces the observers’ motor cortex to “resonate” with the observed action. Transcranial magnetic stimulation (TMS) studies suggest that such motor resonance reflects the encoding of various motor features of the observed action, including the apparent motor effort. However, it is unclear whether such encoding requires direct observation or whether force requirements can be inferred when the moving body part is partially occluded. To address this issue, we presented participants with videos of a right hand lifting a box of three different weights and asked them to estimate its weight. During each trial we delivered one transcranial magnetic stimulation (TMS) pulse over the left primary motor cortex of the observer and recorded the motor evoked potentials (MEPs) from three muscles of the right hand (first dorsal interosseous, FDI, abductor digiti minimi, ADM, and brachioradialis, BR). Importantly, because the hand shown in the videos was hidden behind a screen, only the contractions in the actor’s BR muscle under the bare skin were observable during the entire videos, while the contractions in the actor’s FDI and ADM muscles were hidden during the grasp and actual lift. The amplitudes of the MEPs recorded from the BR (observable) and FDI (hidden) muscle increased with the weight of the box. These findings indicate that the modulation of motor excitability induced by action observation extends to the cortical representation of muscles with contractions that could not be observed. Thus, motor resonance appears to reflect force requirements of observed lifting actions even when the moving body part is occluded from view. PMID:25462196
NASA Technical Reports Server (NTRS)
Hale, P. L.
1982-01-01
The weight and major envelope dimensions of small aircraft propulsion gas turbine engines are estimated. The computerized method, called WATE-S (Weight Analysis of Turbine Engines-Small), is a derivative of the WATE-2 computer code. WATE-S determines the weight of each major component in the engine, including compressors, burners, turbines, heat exchangers, nozzles, propellers, and accessories. A preliminary design approach is used where the stress levels, maximum pressures and temperatures, material properties, geometry, stage loading, hub/tip radius ratio, and mechanical overspeed are used to determine the component weights and dimensions. The accuracy of the method is generally better than ±10 percent, as verified by analysis of four small aircraft propulsion gas turbine engines.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
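For context, the standard weighted least squares quantities that the abstract's empirical covariance reinterprets can be sketched as follows. This shows only the textbook estimate and its theoretical (formal) covariance, not the paper's empirical construction; the two-observer numbers are hypothetical:

```python
import numpy as np

def wls(H, y, W):
    """Weighted least squares: x_hat = (H^T W H)^-1 H^T W y, with the
    theoretical state error covariance P = (H^T W H)^-1 (valid when W is
    the inverse of the measurement-noise covariance)."""
    HtW = H.T @ W
    P = np.linalg.inv(HtW @ H)
    x_hat = P @ (HtW @ y)
    return x_hat, P

# Two observers measuring the same scalar state with different weights
H = np.array([[1.0], [1.0]])
W = np.diag([4.0, 1.0])     # first observation trusted 4x as much
y = np.array([2.0, 3.0])
x_hat, P = wls(H, y, W)
print(x_hat, P)             # estimate pulled toward the better observation
```

The theoretical P shown here describes only the modeled measurement noise; the paper's point is that an empirically derived covariance can also absorb error sources the model does not represent.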
NASA Technical Reports Server (NTRS)
Wahba, Grace; Deepak, A. (Editor)
1988-01-01
The problem of merging direct and remotely sensed (indirect) data with forecast data to get an estimate of the present state of the atmosphere for the purpose of numerical weather prediction is examined. To carry out this merging optimally, it is necessary to provide an estimate of the relative weights to be given to the observations and forecast. It is possible to do this dynamically from the information to be merged, if the correlation structure of the errors from the various sources is sufficiently different. Some new statistical approaches to doing this are described, and conditions quantified in which such estimates are likely to be good.
Applying fuzzy logic to estimate the parameters of the length-weight relationship.
Bitar, S D; Campos, C P; Freitas, C E C
2016-05-01
We evaluated three mathematical procedures to estimate the parameters of the relationship between weight and length for Cichla monoculus: least squares ordinary regression on log-transformed data, non-linear estimation using raw data and a mix of multivariate analysis and fuzzy logic. Our goal was to find an alternative approach that considers the uncertainties inherent to this biological model. We found that non-linear estimation generated more consistent estimates than least squares regression. Our results also indicate that it is possible to find consistent estimates of the parameters directly from the centers of mass of each cluster. However, the most important result is the intervals obtained with the fuzzy inference system. PMID:27143051
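The first of the three procedures compared (ordinary least squares on log-transformed data) fits log W = log a + b log L and back-transforms the intercept. The sketch below illustrates that step on synthetic data; the parameter values and noise model are hypothetical, not results from the study:

```python
import numpy as np

# Synthetic length-weight data following W = a * L^b with
# multiplicative (lognormal) noise
rng = np.random.default_rng(0)
a_true, b_true = 0.015, 3.0
L = rng.uniform(20, 60, 200)                            # lengths (cm)
W = a_true * L**b_true * rng.lognormal(0.0, 0.1, 200)   # weights (g)

# Least squares on log-transformed data: log W = log a + b log L
b_hat, log_a = np.polyfit(np.log(L), np.log(W), 1)
a_hat = np.exp(log_a)
print(f"log-transformed fit: a={a_hat:.4f}, b={b_hat:.3f}")
```

The back-transformed a carries a known retransformation bias under multiplicative error, which is one reason the abstract finds direct non-linear estimation on raw data more consistent, and why interval (fuzzy) estimates of a and b are informative.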
Effect of Temporal Residual Correlation on Estimation of Model Averaging Weights
NASA Astrophysics Data System (ADS)
Ye, M.; Lu, D.; Curtis, G. P.; Meyer, P. D.; Yabusaki, S.
2010-12-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are always calculated using model selection criteria such as AIC, AICc, BIC, and KIC. However, this method sometimes leads to an unrealistic situation in which one model receives overwhelmingly high averaging weight (even 100%), which cannot be justified by available data and knowledge. It is found in this study that the unrealistic situation is due partly, if not solely, to ignorance of residual correlation when estimating the negative log-likelihood function common to all the model selection criteria. In the context of maximum-likelihood or least-squares inverse modeling, the residual correlation is accounted for in the full covariance matrix; when the full covariance matrix is replaced by its diagonal counterpart, it assumes data independence and ignores the correlation. Treating the correlated residuals as independent thus distorts the distance between observations and simulations of alternative models, and may lead to incorrect estimation of model selection criteria and model averaging weights. This is illustrated for a set of surface complexation models developed to simulate uranium transport based on a series of column experiments. The residuals are correlated in time, and the time correlation is addressed using a second-order autoregressive model. The modeling results reveal the importance of considering residual correlation in the estimation of model averaging weights.
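The sensitivity described above follows directly from how averaging weights are computed from information-criterion values. A minimal sketch of the standard weight formula (illustrative IC values, not from the study):

```python
import math

def averaging_weights(criteria):
    """Model averaging weights from information-criterion values
    (AIC, AICc, BIC, or KIC): w_i is proportional to
    exp(-0.5 * (IC_i - IC_min)), normalized to sum to 1."""
    ic_min = min(criteria)
    raw = [math.exp(-0.5 * (ic - ic_min)) for ic in criteria]
    total = sum(raw)
    return [r / total for r in raw]

# Three hypothetical models: small IC differences share the weight ...
print(averaging_weights([100.0, 102.0, 104.0]))
# ... while a large gap concentrates essentially all weight on one model
print(averaging_weights([100.0, 150.0, 160.0]))
```

Because the weights are exponential in IC differences, a distorted log-likelihood (e.g., from ignoring residual correlation) that inflates the IC gap between models can easily push one model's weight toward 100%.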
NASA Astrophysics Data System (ADS)
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
NASA Astrophysics Data System (ADS)
Montes-Hugo, M.; Bouakba, H.; Arnone, R.
2014-06-01
The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical, QAA, and Garver-Siegel-Maritorena semi-empirical, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling KU and QAA models presented the smallest differences with respect to in situ determinations as measured by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., aph*(443) = aph(443)/chl = 0.056 m^2 mg^-1), the median relative bias of our chl estimates as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations increased up to 29%.
Large-scale Advanced Propfan (LAP) performance, acoustic and weight estimation, January, 1984
NASA Technical Reports Server (NTRS)
Parzych, D.; Shenkman, A.; Cohen, S.
1985-01-01
In comparison to turboprop applications, the Prop-Fan is designed to operate in a significantly higher range of aircraft flight speeds. Two concerns arise regarding operation at very high speeds: aerodynamic performance and noise generation. This data package covers both topics over a broad range of operating conditions for the eight (8) bladed SR-7L Prop-Fan. Operating conditions covered are: flight Mach number 0-0.85; blade tip speed 600-800 ft/sec; and cruise power loading 20-40 SHP/D^2. Prop-Fan weight and weight scaling estimates are also included.
An Object-oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2008-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It would also facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
NASA Astrophysics Data System (ADS)
Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.
2006-03-01
In this paper, an improved framework for estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and are used within a 4-D B-Spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with increased knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps which are local measures of non-rigid deformation. Lagrangian strains in simulated data are derived which show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.
Palmstrom, Christin R.
2015-01-01
There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. External skull measures showed no predictive accuracy for CT volumes, because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858
Estimation of Disability Weights in the General Population of South Korea Using a Paired Comparison
Ock, Minsu; Ahn, Jeonghoon; Yoon, Seok-Jun; Jo, Min-Woo
2016-01-01
We estimated the disability weights in the South Korean population by using a paired comparison-only model wherein ‘full health’ and ‘being dead’ were included as anchor points, without resorting to a cardinal method, such as person trade-off. The study was conducted via 2 types of survey: a household survey involving computer-assisted face-to-face interviews and a web-based survey (similar to that of the GBD 2010 disability weight study). With regard to the valuation methods, paired comparison, visual analogue scale (VAS), and standard gamble (SG) were used in the household survey, whereas paired comparison and population health equivalence (PHE) were used in the web-based survey. Accordingly, we described a total of 258 health states, with ‘full health’ and ‘being dead’ designated as anchor points. In the analysis, 4 models were considered: a paired comparison-only model; hybrid model between paired comparison and PHE; VAS model; and SG model. A total of 2,728 and 3,188 individuals participated in the household and web-based survey, respectively. The Pearson correlation coefficients of the disability weights of health states between the GBD 2010 study and the current models were 0.802 for Model 2, 0.796 for Model 1, 0.681 for Model 3, and 0.574 for Model 4 (all P-values<0.001). The discrimination of values according to health state severity was most suitable in Model 1. Based on these results, the paired comparison-only model was selected as the best model for estimating disability weights in South Korea, and for maintaining simplicity in the analysis. Thus, disability weights can be more easily estimated by using paired comparison alone, with ‘full health’ and ‘being dead’ included among the health states. As noted in our study, we believe that additional evidence regarding the universality of disability weights can be obtained by using a simplified methodology of estimating disability weights. PMID:27606626
Reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization
Shi, Xin; Zhao, Xiangmo; Hui, Fei; Ma, Junyan; Yang, Lan
2014-10-06
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the standpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from statistical data can be improved mainly through additional packet exchanges, which greatly consume the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of the sync errors for the different clock deviations. Extensive simulation results show that the proposed approach achieves better performance in terms of sync overhead and sync accuracy.
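In the special case of uncorrelated sync errors, a covariance-defined linear fusion weight reduces to classic inverse-variance weighting. A minimal sketch of that principle (purely illustrative; not the authors' protocol, and the uncorrelated-error assumption is mine):

```python
def fuse_clock_deviations(deviations, error_vars):
    """Linearly fuse clock-deviation estimates, weighting each one by
    the inverse of its sync-error variance (errors assumed uncorrelated)."""
    inv_vars = [1.0 / v for v in error_vars]
    total = sum(inv_vars)
    weights = [iv / total for iv in inv_vars]        # weights sum to 1
    fused = sum(w * d for w, d in zip(weights, deviations))
    fused_var = 1.0 / total                          # never worse than the best input
    return fused, fused_var, weights
```

Because the fused variance 1/Σ(1/σ_i²) is below the smallest input variance, accuracy improves without exchanging any extra sync packets.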
Weighted LS-SVM for function estimation applied to artifact removal in bio-signal processing.
Caicedo, Alexander; Van Huffel, Sabine
2010-01-01
Weighted LS-SVM is normally used for function estimation from highly corrupted data in order to decrease the impact of outliers. However, this method is limited in problem size, so long time series must be segmented into smaller groups. Border discontinuities therefore become a problem in the final estimated function. Several methods, such as committee networks or multilayer networks of LS-SVMs, are used to address this problem, but they require extra training and hence increase the computational cost. In this paper, a technique that includes an extra weight vector in the formulation of the cost function for the LS-SVM problem is proposed as an alternative solution. The method is then applied to the removal of some artifacts in biomedical signals.
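The core idea of weighted function estimation, down-weighting residuals suspected to be outliers, can be shown in its simplest linear form. The paper's actual LS-SVM formulation adds a kernel and an extra weight vector in the cost function; this sketch reproduces only the weighting principle:

```python
def weighted_linear_fit(x, y, v):
    """Fit y ~ a*x + b by minimizing sum_i v_i * (y_i - a*x_i - b)^2.
    A small v_i suppresses the influence of a suspected outlier."""
    sw = sum(v)
    mx = sum(vi * xi for vi, xi in zip(v, x)) / sw
    my = sum(vi * yi for vi, yi in zip(v, y)) / sw
    cov = sum(vi * (xi - mx) * (yi - my) for vi, xi, yi in zip(v, x, y))
    var = sum(vi * (xi - mx) ** 2 for vi, xi in zip(v, x))
    a = cov / var
    return a, my - a * mx

# one gross outlier (y = 100) nearly ignored via a tiny weight
a, b = weighted_linear_fit([0, 1, 2, 3, 4], [0, 1, 2, 3, 100],
                           [1, 1, 1, 1, 1e-9])
```

With equal weights the outlier would drag the slope far from 1; down-weighting it recovers the underlying trend, which is exactly the robustness the weighted formulation buys.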
Image haze removal using a hybrid of fuzzy inference system and weighted estimation
NASA Astrophysics Data System (ADS)
Wang, Jyun-Guo; Tai, Shen-Chuan; Lin, Cheng-Jian
2015-05-01
The attenuation of the light transmitted through air can reduce image quality when taking a photograph outdoors, especially in a hazy environment. Hazy images often lack sufficient information for image recognition systems to operate effectively. In order to solve the aforementioned problems, this study proposes a hybrid method combining fuzzy theory with weighted estimation for the removal of haze from images. A transmission map is first created based on fuzzy theory. According to the transmission map, the proposed method automatically finds the possible atmospheric lights and refines the atmospheric lights by mixing these candidates. Weighted estimation is then employed to generate a refined transmission map, which removes the halo artifact from around the sharp edges. Experimental results demonstrate the superiority of the proposed method over existing methods with regard to contrast, color depth, and the elimination of halo artifacts.
Regularized estimate of the weight vector of an adaptive antenna array
NASA Astrophysics Data System (ADS)
Ermolayev, V. T.; Flaksman, A. G.; Sorokin, I. S.
2013-02-01
We consider an adaptive antenna array (AAA) with the maximum signal-to-noise ratio (SNR) at the output. The antenna configuration is assumed to be arbitrary. A rigorous analytical solution for the optimal weight vector of the AAA is obtained if the input process is defined by the noise correlation matrix and the useful-signal vector. On the basis of this solution, the regularized estimate of the weight vector is derived by using a limited number of input noise samples, which can be either greater or smaller than the number of array elements. Computer simulation results of adaptive signal processing indicate small losses in the SNR compared with the optimal SNR value. It is shown that the computing complexity of the proposed estimate is proportional to the number of noise samples, the number of external noise sources, and the squared number of array elements.
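A common concrete form of such regularization is diagonal loading of the noise correlation matrix before inversion; whether this matches the authors' exact regularizer is not stated in the abstract, so the sketch below (a 2-element array with the matrix inverse written out by hand) is illustrative only:

```python
def loaded_weight_vector(R, s, lam):
    """w = (R + lam*I)^(-1) s for a 2-element array.
    R: 2x2 noise correlation matrix, s: useful-signal (steering) vector,
    lam: diagonal loading factor (a stand-in for the paper's regularizer)."""
    a, b = R[0][0] + lam, R[0][1]
    c, d = R[1][0], R[1][1] + lam
    det = a * d - b * c
    return [(d * s[0] - b * s[1]) / det,
            (-c * s[0] + a * s[1]) / det]

def output_snr(w, R, s):
    """SNR at the array output for weight vector w: |w^H s|^2 / (w^H R w)."""
    num = abs(sum(wi.conjugate() * si for wi, si in zip(w, s))) ** 2
    den = sum(w[i].conjugate() * R[i][j] * w[j]
              for i in range(2) for j in range(2)).real
    return num / den
```

For white noise the loading only rescales w, so the output SNR is unchanged, consistent with the abstract's claim of small SNR losses for the regularized estimate.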
Veale, David; Gledhill, Lucinda J; Christodoulou, Polyxeni; Hodsoll, John
2016-09-01
Our aim was to systematically review the prevalence of body dysmorphic disorder (BDD) in a variety of settings. Weighted prevalence estimates and 95% confidence intervals were calculated for each study. The weighted prevalence of BDD in adults in the community was estimated to be 1.9%; in adolescents 2.2%; in student populations 3.3%; in adult psychiatric inpatients 7.4%; in adolescent psychiatric inpatients 7.4%; in adult psychiatric outpatients 5.8%; in general cosmetic surgery 13.2%; in rhinoplasty surgery 20.1%; in orthognathic surgery 11.2%; in orthodontics/cosmetic dentistry settings 5.2%; in dermatology outpatients 11.3%; in cosmetic dermatology outpatients 9.2%; and in acne dermatology clinics 11.1%. Women outnumbered men in the majority of settings but not in cosmetic or dermatological settings. BDD is common in some psychiatric and cosmetic settings but is poorly identified. PMID:27498379
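The abstract does not state the pooling scheme (a meta-analysis would typically use inverse-variance weights); the sketch below shows the simplest variant, sample-size weighting with a normal-approximation confidence interval, purely to illustrate what a weighted prevalence estimate is:

```python
import math

def weighted_prevalence(cases, sample_sizes):
    """Pool per-study prevalences, weighting by sample size.
    (The review's actual scheme is not given in the abstract;
    sample-size weighting is an assumption for this sketch.)"""
    n_total = sum(sample_sizes)
    p = sum((c / n) * (n / n_total)
            for c, n in zip(cases, sample_sizes))   # equals sum(cases)/n_total
    se = math.sqrt(p * (1 - p) / n_total)           # normal approximation
    return p, (p - 1.96 * se, p + 1.96 * se)
```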
An Object-Oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2009-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.
NASA Astrophysics Data System (ADS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-02-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combinations of both methods. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles as the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The acquisition time of a full 1.0-mm grid H matrix was reduced by about 15.2 and 62.2 times with the simplified grid patterns on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would additionally shorten the acquisition time by a factor of 8.
NASA Technical Reports Server (NTRS)
Jensen, J. K.; Wright, R. L.
1981-01-01
Estimates of total spacecraft weight and packaging options were made for three conceptual designs of a microwave radiometer spacecraft. Erectable structures were found to be slightly lighter than deployable structures but could be packaged in one-tenth the volume. The tension rim concept, an unconventional design approach, was found to be the lightest and transportable to orbit in the least number of shuttle flights.
Effects of a limited class of nonlinearities on estimates of relative weights
NASA Astrophysics Data System (ADS)
Richards, Virginia M.
2002-02-01
Perturbation analyses have been applied in recent years to determine the relative contribution of individual stimulus components in detection and discrimination tasks. Responses to stimulus samples are compared to stimulus parameters to determine the details of the decision rule. Often, a linear model is assumed and it is of interest to determine the relative contribution of different stimulus elements to the decision. Here, biases in estimated relative weights are considered for the case where the decision variable is given by D = (∑_i (α_i X_i^n)^k)^m and the stimulus components, the X_i, are normally distributed, of equal variance, and mutually independent. The α_i are the "true" combination weights, and n, k, and m are positive reals. The method used to estimate relative weights is the correlation coefficient between the X_i and the observer's responses. Estimates of relative α_i do not depend on m but may depend on the mean values of the X_i and the values of n and k (a dependence on the variance, σ_i^2, holds even for linear transformations).
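In the purely linear case (n = k = m = 1) the correlation-based estimator recovers the true weight ratio, which can be checked by simulation; the hard-threshold decision rule standing in for the observer is my own simplification:

```python
import math, random

def estimated_relative_weights(alphas, trials=20000, seed=7):
    """Simulate independent unit-variance Gaussian components X_i,
    a linear decision D = sum(alpha_i * X_i) thresholded at 0,
    and estimate each weight as corr(X_i, response), normalized."""
    rng = random.Random(seed)
    n = len(alphas)
    xs = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(trials)]
    resp = [1.0 if sum(a * x for a, x in zip(alphas, row)) > 0 else 0.0
            for row in xs]
    mr = sum(resp) / trials
    sr = math.sqrt(sum((r - mr) ** 2 for r in resp) / trials)
    corrs = []
    for i in range(n):
        col = [row[i] for row in xs]
        mi = sum(col) / trials
        si = math.sqrt(sum((x - mi) ** 2 for x in col) / trials)
        c = sum((x - mi) * (r - mr)
                for x, r in zip(col, resp)) / trials / (si * sr)
        corrs.append(c)
    total = sum(abs(c) for c in corrs)
    return [c / total for c in corrs]
```

The paper's point is precisely that once n or k departs from 1, this same correlation estimator returns biased relative weights.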
NASA Astrophysics Data System (ADS)
Sollberger, S.; Perez, K.; Schubert, C. J.; Eugster, W.; Wehrli, B.; Del Sontro, T.
2013-12-01
Currently, carbon dioxide (CO2) and methane (CH4) emissions from lakes, reservoirs and rivers are readily investigated due to the global warming potential of those gases and the role these inland waters play in the carbon cycle. However, there is a lack of high spatiotemporally-resolved emission estimates, and how to accurately assess the gas transfer velocity (K) remains controversial. In anthropogenically-impacted systems where run-of-river reservoirs disrupt the flow of sediments by increasing the erosion and load accumulation patterns, the resulting production of carbonic greenhouse gases (GH-C) is likely to be enhanced. The GH-C flux is thus counteracting the terrestrial carbon sink in these environments that act as net carbon emitters. The aim of this project was to determine the GH-C emissions from a medium-sized river heavily impacted by several impoundments and channelization through a densely-populated region of Switzerland. Estimating gas emission from rivers is not trivial and recently several models have been put forth to do so; therefore a second goal of this project was to compare the river emission models available with direct measurements. Finally, we further validated the modeled fluxes by using a combined approach with water sampling, chamber measurements, and highly temporal GH-C monitoring using an equilibrator. We conducted monthly surveys along the 120 km of the lower Aare River where we sampled for dissolved CH4 ('manual' sampling) at a 5-km sampling resolution, and measured gas emissions directly with chambers over a 35 km section. We calculated fluxes (F) via the boundary layer equation (F = K × (Cw - Ceq)) that uses the water-air GH-C concentration (C) gradient (Cw - Ceq) and K, which is the most sensitive parameter. K was estimated using 11 different models found in the literature with varying dependencies on: river hydrology (n=7), wind (2), heat exchange (1), and river width (1). We found that chamber fluxes were always higher than boundary
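The boundary layer flux calculation itself is a one-liner, and evaluating it across several K models gives the spread of modeled emissions. A sketch with made-up example values (units assumed: K in m d^-1, concentrations in mmol m^-3):

```python
def boundary_layer_flux(k, c_w, c_eq):
    """F = K * (Cw - Ceq); positive flux = emission to the atmosphere.
    Assumed units: K in m/d, concentrations in mmol/m^3,
    giving F in mmol m^-2 d^-1."""
    return k * (c_w - c_eq)

def flux_spread(k_models, c_w, c_eq):
    """Range of fluxes across several gas-transfer-velocity models;
    K is the most sensitive parameter, so this spread dominates."""
    fluxes = [boundary_layer_flux(k, c_w, c_eq) for k in k_models]
    return min(fluxes), max(fluxes)
```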
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Many researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner, Holmes,…
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-01-01
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, and strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay however plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply-weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with those measured from GPS. PMID:27420066
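The core quantity, a per-block phase-elevation ratio, is an ordinary least-squares slope; the paper wraps it in robust least squares, band filtering, and multi-weighted extrapolation, none of which this sketch reproduces:

```python
def phase_elevation_ratio(phase, elev):
    """Least-squares slope of interferometric phase vs. topographic height
    within one block (the robust, band-filtered version is omitted)."""
    n = len(phase)
    mp, me = sum(phase) / n, sum(elev) / n
    num = sum((e - me) * (p - mp) for e, p in zip(elev, phase))
    den = sum((e - me) ** 2 for e in elev)
    return num / den

def correct_block(phase, elev):
    """Remove the topography-correlated tropospheric component."""
    k = phase_elevation_ratio(phase, elev)
    return [p - k * e for p, e in zip(phase, elev)]
```

Estimating this ratio per block, rather than once per interferogram, is what lets the method track spatially variable tropospheric conditions over a large study region.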
Ellis, Alan R.; Dusetzina, Stacie B.; Hansen, Richard A.; Gaynes, Bradley N.; Farley, Joel F.; Stürmer, Til
2013-01-01
Purpose The choice of propensity score (PS) implementation influences treatment effect estimates not only because different methods estimate different quantities, but also because different estimators respond in different ways to phenomena such as treatment effect heterogeneity and limited availability of potential matches. Using effectiveness data, we describe lessons learned from sensitivity analyses with matched and weighted estimates. Methods With subsample data (N=1,292) from Sequenced Treatment Alternatives to Relieve Depression, a 2001–2004 effectiveness trial of depression treatments, we implemented PS matching and weighting to estimate the treatment effect in the treated and conducted multiple sensitivity analyses. Results Matching and weighting both balanced covariates but yielded different samples and treatment effect estimates (matched RR 1.00, 95% CI:0.75–1.34; weighted RR 1.28, 95% CI:0.97–1.69). In sensitivity analyses, as increasing numbers of observations at both ends of the PS distribution were excluded from the weighted analysis, weighted estimates approached the matched estimate (weighted RR 1.04, 95% CI 0.77–1.39 after excluding all observations below the 5th percentile of the treated and above the 95th percentile of the untreated). Treatment appeared to have benefits only in the highest and lowest PS strata. Conclusions Matched and weighted estimates differed due to incomplete matching, sensitivity of weighted estimates to extreme observations, and possibly treatment effect heterogeneity. PS analysis requires identifying the population and treatment effect of interest, selecting an appropriate implementation method, and conducting and reporting sensitivity analyses. Weighted estimation especially should include sensitivity analyses relating to influential observations, such as those treated contrary to prediction. PMID:23280682
Zare, Ali; Mahmoodi, Mahmood; Mohammad, Kazem; Zeraati, Hojjat; Hosseini, Mostafa; Holakouie Naieni, Kourosh
2014-01-01
The 5-year survival rate is a good prognostic indicator for patients with gastric cancer and is usually estimated with the Kaplan-Meier method. In situations with too many censored observations, this method produces biased estimates. This study aimed to compare estimates from Kaplan-Meier and Weighted Kaplan-Meier, an alternative method for dealing with heavy censoring. Data from 330 patients with gastric cancer who had undergone surgery at the Iran Cancer Institute from 1995-1999 were analyzed. The survival time of these patients was determined after surgery, and the 5-year survival rate was evaluated with both the Kaplan-Meier and Weighted Kaplan-Meier methods. A total of 239 (72.4%) patients passed away by the end of the study and 91 (27.6%) were censored. The mean and median survival times were 24.86±23.73 and 16.33 months, respectively. The one-year, two-year, three-year, four-year, and five-year survival rates (with standard errors) based on Kaplan-Meier were 0.66 (0.0264), 0.42 (0.0284), 0.31 (0.0274), 0.26 (0.0264), and 0.21 (0.0256), respectively. The Weighted Kaplan-Meier estimates for these patients were 0.62 (0.0251), 0.35 (0.0237), 0.24 (0.0211), 0.17 (0.0172), and 0.10 (0.0125), respectively. When the censoring assumption does not hold and the study has many censored observations, Kaplan-Meier estimates are biased upward. The Weighted Kaplan-Meier method reduces this bias by assigning appropriate weights to the survival probabilities and gives a more accurate picture.
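The standard Kaplan-Meier product-limit estimator these comparisons start from can be sketched as follows; the weighted variant, which reweights to offset heavy censoring, is not reproduced here:

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (time, S(t)) at each distinct death time."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    for t, grp in groupby(data, key=lambda d: d[0]):
        grp = list(grp)
        deaths = sum(e for _, e in grp)
        if deaths:
            s *= 1.0 - deaths / at_risk   # conditional survival past t
            curve.append((t, s))
        at_risk -= len(grp)               # deaths and censorings leave the risk set
    return curve
```

Censored subjects leave the risk set without triggering a drop in S(t); when they are numerous, each later drop is computed on a small risk set, which is the source of the bias the weighted method targets.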
Goldfeld, K S
2014-03-30
Cost-effectiveness analysis is an important tool that can be applied to the evaluation of a health treatment or policy. When the observed costs and outcomes result from a nonrandomized treatment, making causal inference about the effects of the treatment requires special care. The challenges are compounded when the observation period is truncated for some of the study subjects. This paper presents a method of unbiased estimation of cost-effectiveness using observational study data that is not fully observed. The method, twice-weighted multiple interval estimation of a marginal structural model, was developed in order to analyze the cost-effectiveness of treatment protocols for residents with advanced dementia living in nursing homes when they become acutely ill. A key feature of this estimation approach is that it facilitates a sensitivity analysis that identifies the potential effects of unmeasured confounding on the conclusions concerning cost-effectiveness.
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
NASA Astrophysics Data System (ADS)
Subramanian, Swetha; Mast, T. Douglas
2015-09-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
Subramanian, Swetha; Mast, T Douglas
2015-10-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462
NASA Astrophysics Data System (ADS)
Beirle, Steffen; Hörmann, Christoph; Jöckel, Patrick; Penning de Vries, Marloes; Pozzer, Andrea; Sihler, Holger; Valks, Pieter; Wagner, Thomas
2016-04-01
The STRatospheric Estimation Algorithm from Mainz (STREAM) determines stratospheric columns of NO2 which are needed for the retrieval of tropospheric columns from satellite observations. It is based on the total column measurements over clean, remote regions as well as over clouded scenes where the tropospheric column is effectively shielded. The contribution of individual satellite measurements to the stratospheric estimate is controlled by various weighting factors. STREAM is a flexible and robust algorithm and does not require input from chemical transport models. It was developed as a verification algorithm for the upcoming satellite instrument TROPOMI, as a complement to the operational stratospheric correction based on data assimilation. STREAM was successfully applied to the UV/vis satellite instruments GOME 1/2, SCIAMACHY, and OMI. It overcomes some of the artefacts of previous algorithms, as it is capable of reproducing gradients of stratospheric NO2, e.g. related to the polar vortex, and reduces interpolation errors over continents. Based on synthetic input data, the uncertainty of STREAM was quantified as about 0.1-0.2 × 10^15 molecules cm^-2, in accordance with the typical deviations between stratospheric estimates from different algorithms compared in this study.
NASA Astrophysics Data System (ADS)
Beirle, Steffen; Hörmann, Christoph; Jöckel, Patrick; Liu, Song; Penning de Vries, Marloes; Pozzer, Andrea; Sihler, Holger; Valks, Pieter; Wagner, Thomas
2016-07-01
The STRatospheric Estimation Algorithm from Mainz (STREAM) determines stratospheric columns of NO2 which are needed for the retrieval of tropospheric columns from satellite observations. It is based on the total column measurements over clean, remote regions as well as over clouded scenes where the tropospheric column is effectively shielded. The contribution of individual satellite measurements to the stratospheric estimate is controlled by various weighting factors. STREAM is a flexible and robust algorithm and does not require input from chemical transport models. It was developed as a verification algorithm for the upcoming satellite instrument TROPOMI, as a complement to the operational stratospheric correction based on data assimilation. STREAM was successfully applied to the UV/vis satellite instruments GOME 1/2, SCIAMACHY, and OMI. It overcomes some of the artifacts of previous algorithms, as it is capable of reproducing gradients of stratospheric NO2, e.g., related to the polar vortex, and reduces interpolation errors over continents. Based on synthetic input data, the uncertainty of STREAM was quantified as about 0.1-0.2 × 10^15 molecules cm^-2, in accordance with the typical deviations between stratospheric estimates from different algorithms compared in this study.
Estimate of the weight in bovine livestock using digital image processing and neural network
NASA Astrophysics Data System (ADS)
Arias, N. A.; Molina, M. L.; Gualdron, Oscar
2004-10-01
A procedure within the context of artificial vision was designed and developed to estimate the weight of bovine livestock. The input data to the system are images obtained with a color video camera. These images are digitized and preprocessed using noise-elimination filters; different segmentation methods are then implemented and evaluated to obtain the contour of the animal in semiautomatic form. Next, characteristics that have a strong correlation with the weight of the animal are extracted, in a form independent of the orientation of the object within the image. These characteristics include the superior area, perimeter, abdomen width, haunch width, and scapula width. The weight of the animal is estimated by means of a feedforward neural network whose inputs are fed with the characteristics extracted from the image. Finally, the system is evaluated with respect to the number and type of characteristics considered and the structure of the neural network used.
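The final estimation stage can be illustrated with a minimal feedforward network mapping the five extracted shape features to a weight estimate. The architecture, feature values, and (untrained) network weights below are assumptions for illustration; the original system would be trained on measured animal weights, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class Feedforward:
    """Minimal one-hidden-layer network: 5 shape features -> weight estimate."""
    def __init__(self, n_in=5, n_hidden=8):
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, n_hidden)
        self.b2 = 0.0

    def predict(self, x):
        return self.W2 @ relu(self.W1 @ x + self.b1) + self.b2

# Hypothetical feature vector: superior area, perimeter, abdomen width,
# haunch width, scapula width (normalised units).
features = np.array([0.8, 0.6, 0.5, 0.55, 0.4])
net = Feedforward()
estimate = net.predict(features)
```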
Evaluation of Bias in Estimates of Early Childhood Obesity From Parent-Reported Heights and Weights
Weden, Margaret M.; Lau, Christopher; Brownell, Peter; Nazarov, Zafar; Fernandes, Meenakshi
2014-01-01
Objectives. We evaluated bias in estimated obesity prevalence owing to error in parental reporting. We also evaluated bias mitigation through application of the Centers for Disease Control and Prevention’s biologically implausible value (BIV) cutoffs. Methods. We simulated obesity prevalence of children aged 2 to 5 years in 2 panel surveys after counterfactually substituting parameters estimated from 1999–2008 National Health and Nutrition Examination Survey data for prevalence of extreme height and weight and for proportions obese in extreme height or weight categories. Results. Heights reported below the first and fifth height-for-age percentiles explained between one half and two thirds, respectively, of total bias in obesity prevalence. Bias was reduced by one tenth when excluding cases with height-for-age and weight-for-age BIVs and by one fifth when excluding cases with body mass index-for-age BIVs. Applying BIVs, however, resulted in incorrect exclusion of nonnegligible proportions of obese children. Conclusions. Correcting the reporting of children’s heights in the first percentile alone may reduce overestimation of early childhood obesity prevalence in surveys with parental reporting by one half to two thirds. Excluding BIVs has limited effectiveness in mitigating this bias. PMID:24832432
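The BIV-exclusion step can be sketched as a simple cutoff filter on growth-chart z-scores. The cutoff values below are illustrative placeholders, not the exact published CDC ranges:

```python
def flag_biv(z, lo, hi):
    """Flag a z-score outside the plausible range [lo, hi]."""
    return z < lo or z > hi

# Illustrative cutoffs (assumptions; the CDC publishes per-measure ranges).
CUTOFFS = {"haz": (-5.0, 3.0), "waz": (-5.0, 5.0), "bmiz": (-4.0, 5.0)}

def exclude_bivs(records):
    """Drop children with any biologically implausible z-score."""
    kept = []
    for rec in records:
        if any(flag_biv(rec[m], *CUTOFFS[m]) for m in CUTOFFS):
            continue
        kept.append(rec)
    return kept

sample = [
    {"haz": -6.2, "waz": 0.1, "bmiz": 4.0},   # implausible height-for-age
    {"haz": 0.5, "waz": 1.2, "bmiz": 2.1},    # plausible
]
clean = exclude_bivs(sample)  # keeps only the second record
```

The article's point is visible even in this toy: a hard cutoff removes gross reporting errors but also discards any genuinely extreme (e.g. obese) child whose z-score crosses the line.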
Granger causality-based synaptic weights estimation for analyzing neuronal networks.
Shao, Pei-Chiang; Huang, Jian-Jia; Shann, Wei-Chang; Yen, Chen-Tung; Tsai, Meng-Li; Yen, Chien-Chang
2015-06-01
Granger causality (GC) analysis has emerged as a powerful analytical method for estimating the causal relationship among various types of neural activity data. However, two problems remain open and require further research: (1) the GC measure is nonnegative in its original form, and therefore cannot differentiate the effects of excitation and inhibition between neurons; (2) how is the estimated causality related to the underlying synaptic weights? Based on the GC, we propose a computational algorithm under a best linear predictor assumption for analyzing neuronal networks by estimating the synaptic weights among them. Under this assumption, the GC analysis can be extended to measure both excitatory and inhibitory effects between neurons. The method was examined on three sorts of simulated networks: those with linear, almost linear, and nonlinear network structures. The method was also applied to real spike train data from the anterior cingulate cortex (ACC) and the striatum (STR). The results showed that, under quinpirole administration, there were significant excitatory effects inside the ACC, excitatory effects from the ACC to the STR, and inhibitory effects inside the STR.
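The core GC computation compares the prediction-error variance of a restricted model (the target's own past) against a full model (adding the candidate driver's past); the signed, synaptic-weight extension in the paper goes beyond this sketch. The simulated series are illustrative:

```python
import numpy as np

def granger_causality(x, y, lag=1):
    """Log-ratio of residual variances: restricted (y's past only)
    vs full (y's and x's past). Positive values suggest x -> y."""
    n = len(y)
    Y = y[lag:]
    Yp = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
    Xp = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])
    ones = np.ones((n - lag, 1))
    Ar = np.hstack([ones, Yp])               # restricted regressors
    Af = np.hstack([ones, Yp, Xp])           # full regressors
    rr = Y - Ar @ np.linalg.lstsq(Ar, Y, rcond=None)[0]
    rf = Y - Af @ np.linalg.lstsq(Af, Y, rcond=None)[0]
    return np.log(np.var(rr) / np.var(rf))

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = np.zeros(1000)
for t in range(1, 1000):                     # y is driven by x's past
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

gc_xy = granger_causality(x, y)              # substantially > 0
gc_yx = granger_causality(y, x)              # near 0
```

The sign of the fitted coefficient on the driver's past (positive vs negative) is what an excitatory/inhibitory extension would read off in addition to the variance ratio.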
Weighted Least Squares Estimates of the Magnetotelluric Transfer Functions from Nonstationary Data
Stodt, John A.
1982-11-01
Magnetotelluric field measurements can generally be viewed as sums of signal and additive random noise components. The standard unweighted least squares estimates of the impedance and tipper functions which are usually calculated from noisy data are not optimal when the measured fields are nonstationary. The nonstationary behavior of the signals and noises should be exploited by weighting the data appropriately to reduce errors in the estimates of the impedances and tippers. Insight into the effects of noise on the estimates is gained by careful development of a statistical model, within a linear system framework, which allows for nonstationary behavior of both the signal and noise components of the measured fields. The signal components are, by definition, linearly related to each other by the impedance and tipper functions. It is therefore appropriate to treat them as deterministic parameters, rather than as random variables, when analyzing the effects of noise on the calculated impedances and tippers. From this viewpoint, weighted least squares procedures are developed to reduce the errors in impedances and tippers which are calculated from nonstationary data.
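The weighting idea reduces to the closed-form weighted least squares estimator. A sketch with inverse-variance weights on synthetic nonstationary noise follows (the noise model is an illustrative assumption, not a magnetotelluric one):

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Closed-form WLS estimate: (X^T W X)^{-1} X^T W y,
    with W = diag(w) down-weighting noisy (e.g. nonstationary) records."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.5])
noise_sd = np.where(np.arange(n) < 100, 0.1, 2.0)   # nonstationary noise
y = X @ beta_true + rng.normal(size=n) * noise_sd
beta = weighted_least_squares(X, y, 1.0 / noise_sd**2)
```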
Random weighting error estimation for the inversion result of finite-fault rupture history
NASA Astrophysics Data System (ADS)
Ai, Yin-Shuang; Zheng, Tian-Yu; He, Yu-Mei
1999-07-01
Since the inversion of finite-fault rupture history admits non-unique solutions, the random weighting method is used in this paper to estimate the error of the inversion results. The resolution distributions of slip amplitude, rake, rupture time, and rise time on the finite fault were deduced quantitatively by model calculation. Applied to the inversion results for the Taiwan Strait earthquake and the Myanmar-China boundary earthquake, the random weighting method shows that the parameters related to the rupture centers of the two events have the highest resolution and the most reliable solutions, whereas the resolution of the slip amplitudes and rise times on the finite-fault boundary is low.
Fuel Consumption Reduction and Weight Estimate of an Intercooled-Recuperated Turboprop Engine
NASA Astrophysics Data System (ADS)
Andriani, Roberto; Ghezzi, Umberto; Ingenito, Antonella; Gamma, Fausto
2012-09-01
The introduction of intercooling and regeneration in a gas turbine engine can lead to performance improvements and reduced fuel consumption. Moreover, as a direct consequence of the fuel saved, pollutant emissions can also be greatly reduced. The turboprop seems to be the gas turbine engine most suitable for being equipped with an intercooler and a heat recuperator, thanks to its relatively small mass flow rate and the small fraction of propulsion power delivered by the exhaust nozzle. However, the extra weight and drag due to the heat exchangers must be carefully considered. An intercooled-recuperated turboprop engine is studied by means of a numerical thermodynamic code that, by computing the thermal cycle, simulates the engine behavior at different operating conditions. The main aero-engine performance figures, such as specific power and specific fuel consumption, are then evaluated from the cycle analysis. The fuel saved, the reduction in pollution, and the engine weight are then estimated for an example case.
Model selection in the weighted generalized estimating equations for longitudinal data with dropout.
Gosho, Masahiko
2016-05-01
We propose criteria for variable selection in the mean model and for the selection of a working correlation structure in longitudinal data with dropout missingness using weighted generalized estimating equations. The proposed criteria are based on a weighted quasi-likelihood function and a penalty term. Our simulation results show that the proposed criteria frequently select the correct model in candidate mean models. The proposed criteria also have good performance in selecting the working correlation structure for binary and normal outcomes. We illustrate our approaches using two empirical examples. In the first example, we use data from a randomized double-blind study to test the cancer-preventing effects of beta carotene. In the second example, we use longitudinal CD4 count data from a randomized double-blind study. PMID:26509243
Distributed weighted least-squares estimation with fast convergence for large-scale systems
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976
BEECH, D. J.; ROCHE, E. D.; SIBBONS, P. D.; ROSSDALE, P. D.; OUSEY, J. C.
2000-01-01
Mean glomerular volume has previously been estimated, using stereological techniques, specifically the point-sampled intercept (PSI), either from isotropic or from vertical sections. As glomeruli are approximately spherical structures, the same stereological technique was carried out on vertical and arbitrary sections to determine whether section orientation had any effect on mean glomerular volume estimation. Equine kidneys from 10 individuals were analysed using the PSI method of estimating volume-weighted mean glomerular volume (MGV); for each kidney, arbitrary and vertical sections were analysed. MGVs were not significantly different between arbitrary and vertical sections (P = 0.691) when analysing the data with the paired t test; when plotting MGV estimates from arbitrary sections against those from vertical sections the intercept was found not to be significantly different from zero (P > 0.8) and the slope of the regression line not to be significantly different from 1.0 (P > 0.4). For the estimation of MGV in equine kidneys using PSI, arbitrary sections may be used if it is not possible to use isotropic or vertical sections, but some caution must be exercised in the interpretation of results so gained. PMID:11005722
Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji
2011-12-15
Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because physical and chemical properties of a measuring object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS) which utilizes a newly defined similarity between samples is proposed to estimate active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to the conventional PLS using wavelengths selected on the basis of variable importance on the projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. PMID:22001843
Towards higher sensitivity and stability of axon diameter estimation with diffusion‐weighted MRI
Alexander, Daniel C.; Kurniawan, Nyoman D.; Reutens, David C.; Yang, Zhengyi
2016-01-01
Diffusion-weighted MRI is an important tool for in vivo and non-invasive axon morphometry. The ActiveAx technique utilises an optimised acquisition protocol to infer orientationally invariant indices of axon diameter and density by fitting a model of white matter to the acquired data. In this study, we investigated the factors that influence the sensitivity to small-diameter axons, namely the gradient strength of the acquisition protocol and the model fitting routine. Diffusion-weighted ex vivo images of the mouse brain were acquired using 16.4-T MRI with high (Gmax of 300 mT/m) and ultra-high (Gmax of 1350 mT/m) gradient strength acquisitions. The estimated axon diameter indices of the mid-sagittal corpus callosum were validated using electron microscopy. In addition, a dictionary-based fitting routine was employed and evaluated. Axon diameter indices were closer to electron microscopy measures when higher gradient strengths were employed. Despite the improvement, estimated axon diameter indices (a lower bound of ~1.8 μm) remained higher than the measurements obtained using electron microscopy (~1.2 μm). We further observed that limitations of pulsed gradient spin echo (PGSE) acquisition sequences and axonal dispersion could also influence the sensitivity with which axon diameter indices could be estimated. Our results highlight the influence of acquisition protocol, tissue model and model fitting, in addition to gradient strength, on advanced microstructural diffusion-weighted imaging techniques. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd. PMID:26748471
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
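The weighted averages computation can be sketched as follows: each taxon's optimum is the abundance-weighted mean of a per-site pollution score, and the optima are then rescaled onto the 0-10 tolerance-value range. The abundance matrix and the single composite gradient score are illustrative simplifications of the six-constituent analysis:

```python
import numpy as np

def weighted_average_tolerance(abundance, gradient):
    """Taxon optimum as the abundance-weighted average of a per-site
    pollution gradient score."""
    abundance = np.asarray(abundance, float)
    return float(abundance @ gradient / abundance.sum())

def rescale(values, lo=0.0, hi=10.0):
    """Map taxon optima onto the 0-10 tolerance-value scale."""
    v = np.asarray(values, float)
    return lo + (hi - lo) * (v - v.min()) / (v.max() - v.min())

# Hypothetical data: rows = taxa, columns = sites; gradient is a composite
# pollution score per site (e.g. derived from the six chemical constituents).
abund = np.array([[10, 5, 0, 0],      # taxon found at clean sites
                  [0, 2, 8, 12],      # taxon found at polluted sites
                  [3, 3, 3, 3]])      # indifferent taxon
gradient = np.array([0.1, 0.3, 0.7, 0.9])
optima = [weighted_average_tolerance(a, gradient) for a in abund]
tvs = rescale(optima)   # sensitive taxa near 0, tolerant near 10
```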
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
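Information-criterion-based averaging weights follow the standard Akaike-weight form; the sketch below shows how a large spread in criterion values (as happens when model errors are ignored) drives essentially all weight onto one model. The criterion values are illustrative:

```python
import numpy as np

def model_averaging_weights(criteria):
    """Akaike-style weights: w_k proportional to exp(-delta_k / 2),
    with delta_k = IC_k - min(IC)."""
    ic = np.asarray(criteria, float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Illustrative values: a small spread in the criterion shares the weight...
w_close = model_averaging_weights([100.0, 101.0, 102.5])
# ...while a large spread (the pathology described above, when model errors
# are ignored in the likelihood) concentrates nearly all weight on one model.
w_far = model_averaging_weights([100.0, 130.0, 155.0])
```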
Gosselin, Richard A
2015-01-01
Abstract Objective To calculate the effect of using two different sets of disability weights for estimates of disability-adjusted life-years (DALYs) averted by interventions delivered in one hospital in India. Methods DALYs averted by surgical and non-surgical interventions were estimated for 3445 patients who were admitted to a 106-bed private hospital in a semi-urban area of northern India in 2012–2013. Disability weights were taken from global burden of disease (GBD) studies. We used the GBD 1990 disability weights and then repeated all of our calculations using the corresponding GBD 2010 weights. DALYs averted were estimated for surgical and non-surgical interventions using disability weight, risk of death and/or disability, and effectiveness of treatment. Findings The disability weights assigned in the GBD 1990 study to the sequelae of conditions such as cataract, cancer and injuries were substantially different to those assigned in the GBD 2010 study. These differences in weights led to large differences in estimates of DALYs averted. For all surgical interventions delivered to this patient cohort, 11 517 DALYs were averted if we used the GBD 1990 weights and 9401 DALYs were averted if we used the GBD 2010 disability weights. For non-surgical interventions 5168 DALYs were averted using the GBD 1990 disability weights and 5537 DALYs were averted using the GBD 2010 disability weights. Conclusion Estimates of the effectiveness of hospital interventions depend upon the disability weighting used. Researchers and resource allocators need to be very cautious when comparing results from studies that have used different sets of disability weights. PMID:26170505
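The sensitivity of DALYs averted to the disability-weight vintage can be illustrated with a deliberately simplified calculation; the formula below omits discounting, age weighting, and the separate handling of death used in the actual study, and all numbers are hypothetical:

```python
def dalys_averted(disability_weight, duration_years, risk_without,
                  risk_with, effectiveness):
    """Simplified sketch: DALYs averted as the reduction in expected
    disability burden, scaled by treatment effectiveness."""
    burden_without = disability_weight * duration_years * risk_without
    burden_with = disability_weight * duration_years * risk_with
    return effectiveness * (burden_without - burden_with)

# The same hypothetical case evaluated under two disability-weight vintages
# (weight values illustrative, not taken from the GBD tables).
gbd_1990 = dalys_averted(0.43, 20, 0.9, 0.1, 0.95)
gbd_2010 = dalys_averted(0.33, 20, 0.9, 0.1, 0.95)
```

Because the disability weight enters multiplicatively, any revision of the weight propagates proportionally into the averted-DALY estimate, which is the comparability problem the article highlights.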
Miyamoto, Kayoko; Nishimuta, Mamoru; Hamaoka, Takafumi; Kodama, Naoko; Yoshitake, Yutaka
2012-01-01
To determine the energy intake (EI) required to maintain body weight (equilibrium energy intake: EEI), we investigated the relationship between calculated energy intake and body weight changes in female subjects participating in 14 human balance studies (n=149) conducted at the National Institute of Health and Nutrition (Tokyo). In four and a half studies (n=43), sweat was collected from the arm to estimate the loss of minerals through sweating during exercise on a bicycle ergometer; these subjects were classified in the exercise group (Ex G). In nine and a half studies (n=106), subjects did not exercise and were classified in the sedentary group (Sed G). The relationship between dietary energy intake (EI) and body weight (BW) changes (ΔBW) was analyzed, with EI normalized by four variables: body weight (BW), lean body mass (LBM), standard body weight (SBW), and body surface area (BSA). The equilibrium energy intake (EEI) and 95% confidence interval (CI) for EEI in Ex G were 34.3 and 32.8-35.9 kcal/kg BW/d, 32.0 and 30.8-33.1 kcal/kg SBW/d, 46.3 and 44.2-48.5 kcal/kg LBM/d, and 1,200 and 1,170-1,240 kcal/m^2 BSA/d, respectively. The EEI and 95% CI for EEI in Sed G were 34.5 and 33.9-35.1 kcal/kg BW/d, 31.4 and 30.9-32.0 kcal/kg SBW/d, 44.9 and 44.1-45.8 kcal/kg LBM/d, and 1,200 and 1,180-1,210 kcal/m^2 BSA/d, respectively. The EEIs obtained in this study are 3 to 5% higher than the estimated energy requirement (EER) for Japanese. In five out of six analyses, the EER in a population (female, 18-29 y, physical activity level: 1.50) was under the 95% CI of the EEI obtained in this study.
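The equilibrium intake is the intake at which the expected body-weight change is zero. A hedged sketch of estimating it by regressing ΔBW on EI follows (synthetic data, not the study's):

```python
import numpy as np

def equilibrium_energy_intake(ei, dbw):
    """Fit dbw = a + b*ei and solve for the intake at which the
    expected body-weight change is zero: EEI = -a / b."""
    b, a = np.polyfit(ei, dbw, 1)   # slope first, then intercept
    return -a / b

rng = np.random.default_rng(3)
ei = rng.uniform(25, 45, size=100)              # kcal/kg BW/day
true_eei = 34.3                                 # value assumed for the demo
dbw = 0.02 * (ei - true_eei) + rng.normal(0, 0.05, size=100)
eei_hat = equilibrium_energy_intake(ei, dbw)    # recovers roughly 34.3
```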
Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi
2015-01-01
Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula- and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm^2). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060–0.671) resulted in better accuracy than mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359
Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan
2014-01-01
We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029
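The two estimators can be contrasted in a deliberately simplified 1-D, forward-only toy (no direction changes, so this is not a full RTE simulation); both converge to the same transmitted weight, illustrating their shared unbiasedness:

```python
import numpy as np

def transmitted_weight(mu_a, mu_s, L, n_photons, mode, rng):
    """1-D forward-only toy: photons cross a slab of thickness L.
    'discrete' multiplies the weight by the albedo at each collision;
    'continuous' attenuates the weight along the full path length."""
    mu_t = mu_a + mu_s
    total = 0.0
    for _ in range(n_photons):
        z, w = 0.0, 1.0
        if mode == "discrete":
            while True:
                step = rng.exponential(1.0 / mu_t)
                if z + step >= L:
                    break
                z += step
                w *= mu_s / mu_t          # survive the interaction point
        else:  # continuous
            w = np.exp(-mu_a * L)         # attenuation between interactions
        total += w
    return total / n_photons

rng = np.random.default_rng(4)
mu_a, mu_s, L = 0.5, 2.0, 2.0
exact = np.exp(-mu_a * L)                 # analytic transmitted weight
t_disc = transmitted_weight(mu_a, mu_s, L, 20000, "discrete", rng)
t_cont = transmitted_weight(mu_a, mu_s, L, 20000, "continuous", rng)
```

In this toy the continuous estimator has zero variance while the discrete one fluctuates around the same mean, echoing the article's point that the two differ in variance characteristics, not in bias.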
Feature weight estimation for gene selection: a local hyperlinear learning approach
2014-01-01
Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust against degradation from noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive classification experiments have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
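The RELIEF weight update that LHR builds on can be sketched as follows (basic RELIEF with a single nearest hit and nearest miss; LHR's local-hyperlinear approximation is not reproduced here, and the synthetic data are illustrative):

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=None):
    """Basic RELIEF: reward features that separate a sample from its
    nearest miss and agree with its nearest hit."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)      # per-feature normalisation
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)  # L1 distances to sample i
        dists[i] = np.inf                     # exclude the sample itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(~same, dists, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

rng = np.random.default_rng(5)
n = 200
y = rng.integers(0, 2, size=n)
informative = y + 0.1 * rng.normal(size=n)    # tracks the class label
noise = rng.normal(size=n)                    # pure noise
X = np.column_stack([informative, noise])
w = relief_weights(X, y, rng=rng)             # w[0] should dominate w[1]
```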
NASA Astrophysics Data System (ADS)
Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.
2015-12-01
Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies only occupy a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7 or MODIS), the accurate determination of the hotspot's size and temperature is problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary most remotely sensed volcanic hotspots fall below. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
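The Dual-Band inversion itself can be sketched as a two-band Planck un-mixing: for each candidate hot temperature, the hot-pixel fraction follows from one band, and the second band selects the temperature. The band wavelengths and scene values below are illustrative assumptions:

```python
import numpy as np

C1 = 1.191e8    # W um^4 m^-2 sr^-1 (first radiation constant, radiance form)
C2 = 1.4388e4   # um K (second radiation constant)

def planck(lam_um, T):
    """Blackbody spectral radiance at wavelength lam_um (micrometres)."""
    return C1 / (lam_um**5 * (np.exp(C2 / (lam_um * T)) - 1.0))

def dual_band(r_mir, r_tir, lam_mir, lam_tir, t_bg):
    """Grid-search the hot-spot temperature; at each candidate, the hot
    fraction p follows from the MIR band and is checked in the TIR band."""
    best = (np.inf, None, None)
    for t_hot in np.arange(400.0, 1500.0, 1.0):
        denom = planck(lam_mir, t_hot) - planck(lam_mir, t_bg)
        p = (r_mir - planck(lam_mir, t_bg)) / denom
        resid = abs(r_tir - (p * planck(lam_tir, t_hot)
                             + (1 - p) * planck(lam_tir, t_bg)))
        if resid < best[0]:
            best = (resid, t_hot, p)
    return best[1], best[2]

# Synthetic mixed pixel: 2 % of the pixel at 873 K on a 300 K background.
lam_mir, lam_tir, t_bg = 3.9, 11.0, 300.0
p_true, t_true = 0.02, 873.0
r_mir = p_true * planck(lam_mir, t_true) + (1 - p_true) * planck(lam_mir, t_bg)
r_tir = p_true * planck(lam_tir, t_true) + (1 - p_true) * planck(lam_tir, t_bg)
t_est, p_est = dual_band(r_mir, r_tir, lam_mir, lam_tir, t_bg)
```

With noise-free synthetic radiances the inversion is exact; the experiment described above probes how quickly it degrades as p shrinks toward the noise floor.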
ERIC Educational Resources Information Center
Feldt, Leonard S.
2004-01-01
In some settings, the validity of a battery composite or a test score is enhanced by weighting some parts or items more heavily than others in the total score. This article describes methods of estimating the total score reliability coefficient when differential weights are used with items or parts.
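One standard form for the reliability of a differentially weighted composite expresses the composite's error variance in terms of the parts' variances and reliabilities; whether this matches the article's exact estimator is an assumption, and the battery values are hypothetical:

```python
import numpy as np

def weighted_composite_reliability(weights, cov, part_rel):
    """Reliability of Y = sum_i w_i X_i from part reliabilities:
    rho_Y = 1 - sum_i w_i^2 var_i (1 - rho_i) / var(Y)."""
    w = np.asarray(weights, float)
    cov = np.asarray(cov, float)
    var_y = w @ cov @ w                               # composite variance
    err = np.sum(w**2 * np.diag(cov) * (1.0 - np.asarray(part_rel)))
    return 1.0 - err / var_y

# Illustrative three-part battery (covariances and reliabilities hypothetical).
cov = np.array([[4.0, 1.2, 0.8],
                [1.2, 2.25, 0.9],
                [0.8, 0.9, 1.0]])
part_rel = [0.85, 0.80, 0.75]
equal = weighted_composite_reliability([1, 1, 1], cov, part_rel)
tilted = weighted_composite_reliability([2, 1, 0.5], cov, part_rel)
```

Comparing `equal` and `tilted` shows how shifting weight toward a less reliable or more variable part changes the composite reliability, which is the estimation problem the article addresses.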
Real-time combining of residual carrier array signals using ML weight estimates
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.; Rodemich, Eugene R.; Dolinar, Samuel J., Jr.
1992-01-01
A real-time digital signal combining system for use with array feeds is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed residual carrier samples in each channel using a 'sliding-window' implementation of a maximum-likelihood (ML) parameter estimator. It is shown that with averaging times of about 0.1 s, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the array feed, even in the presence of severe wind gusts and similar disturbances.
Hernández, Moisés; Guerrero, Ginés D.; Cecilia, José M.; García, José M.; Inuggi, Alberto; Jbabdi, Saad; Behrens, Timothy E. J.; Sotiropoulos, Stamatios N.
2013-01-01
With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can simultaneously execute thousands of lightweight processes. In this study, we propose and implement a parallel GPU-based design of a popular method that is used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity non-invasively and in vivo. We parallelise the Bayesian inference framework for the ball & stick model, as it is implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we utilise the Compute Unified Device Architecture (CUDA) programming model. We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude when comparing a single GPU with the respective sequential single-core CPU version. We also illustrate similar speed-up factors (up to 120x) when comparing a multi-GPU with a multi-CPU implementation. PMID:23658616
Weissman-Miller, Deborah
2013-01-01
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment. PMID:24190595
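The "ratio of Weibull terms times a prior outcome" idea can be sketched as a step-forward recursion. This is only a minimal illustration of the mechanism: the use of the Weibull survival function, the shape/scale values, and the time units are assumptions, not the parameters the paper estimates at the change point.

```python
import math

# Sketch of a ratio-based step-forward estimator in the spirit of SPRE:
# each future outcome equals the prior outcome scaled by a ratio of
# two-parameter Weibull terms at successive times (illustrative assumptions).
def weibull_sf(t, shape, scale):
    """Two-parameter Weibull survival function."""
    return math.exp(-((t / scale) ** shape))

def step_forward(y0, t0, n_steps, shape=1.2, scale=30.0):
    ys, y, t = [y0], y0, t0
    for _ in range(n_steps):
        y *= weibull_sf(t + 1, shape, scale) / weibull_sf(t, shape, scale)
        t += 1
        ys.append(y)
    return ys

trajectory = step_forward(y0=100.0, t0=4, n_steps=8)   # e.g. weight over 8 steps
```

Because the ratios telescope, the final value depends only on the endpoints, so a short-term fit at the change point pins down the whole long-term trajectory.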
ERIC Educational Resources Information Center
Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.
1997-01-01
This review finds that formula-based procedures can be used in place of empirical validation for estimating population validity or in place of empirical cross-validation for estimating population cross-validity. Discusses conditions under which the equal weights procedure is a viable alternative. (SLD)
He, Bin; Yao, D; Lian, Jie; Wu, D
2002-04-01
We have developed a method for estimating the three-dimensional distribution of equivalent current sources inside the brain from scalp potentials. A Laplacian weighted minimum norm algorithm was used in the present study to estimate the inverse solutions. A three-concentric-sphere inhomogeneous head model was used to represent the head volume conductor. A closed-form solution of the electrical potential over the scalp and inside the brain due to a point current source was developed for the three-concentric-sphere inhomogeneous head model. Computer simulation studies were conducted to validate the proposed equivalent current source imaging. In computer simulations, we used our method to reconstruct source configurations of either multiple dipoles or point current sources/sinks, and compared the results with equivalent dipole source imaging. Human experimental studies were also conducted, and the equivalent current source imaging was performed on visual evoked potential data. These results highlight the advantages of the equivalent current source imaging and suggest that it may become an alternative approach to imaging spatially distributed current sources and sinks in the brain and other organ systems.
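The Laplacian weighted minimum norm idea reduces to a regularized least-squares problem. The sketch below is a toy version under stated assumptions: the lead-field matrix is random rather than derived from the three-concentric-sphere head model, and a 1-D discrete Laplacian stands in for the spatial weighting operator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Laplacian weighted minimum norm inverse: recover source strengths x
# from sensor data y = L x + noise by minimizing
#   ||y - L x||^2 + lam * ||W x||^2,  with W a discrete Laplacian.
m, n = 16, 40                                   # sensors, candidate sources
L = rng.standard_normal((m, n))                 # assumed-known forward model
x_true = np.zeros(n)
x_true[[5, 22]] = [1.0, -0.8]                   # two active point sources
y = L @ x_true + 0.01 * rng.standard_normal(m)

W = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1-D Laplacian operator
lam = 1e-3                                               # regularization weight
x_hat = np.linalg.solve(L.T @ L + lam * (W.T @ W), L.T @ y)
```

The Laplacian penalty selects, among the infinitely many solutions of the underdetermined system, a spatially smooth current distribution that still reproduces the measured potentials.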
Ahlberg, C M; Kuehn, L A; Thallman, R M; Kachman, S D; Snelling, W M; Spangler, M L
2016-05-01
Birth weight (BWT) and calving difficulty (CD) were recorded on 4,579 first-parity females from the Germplasm Evaluation Program at the U.S. Meat Animal Research Center (USMARC). Both traits were analyzed using a bivariate animal model with direct and maternal effects. Calving difficulty was transformed from the USMARC scores to corresponding z-scores from the standard normal distribution based on the incidence rate of the USMARC scores. Breed fraction covariates were included to estimate breed differences. Heritability estimates (SE) for BWT direct, CD direct, BWT maternal, and CD maternal were 0.34 (0.10), 0.29 (0.10), 0.15 (0.08), and 0.13 (0.08), respectively. Calving difficulty direct breed effects, expressed as deviations from Angus, ranged from -0.13 to 0.77, and maternal breed effects ranged from -0.27 to 0.36. Hereford-, Angus-, Gelbvieh-, and Brangus-sired calves would be the least likely to require assistance at birth, whereas Chiangus-, Charolais-, and Limousin-sired calves would be the most likely to require assistance at birth. Maternal breed effects for CD were least for Simmental and Charolais and greatest for Red Angus and Chiangus. Results showed that the diverse biological types of cattle have different effects on both BWT and CD. Furthermore, the results provide a mechanism whereby beef cattle producers can compare EBV for CD direct and maternal arising from disjoined and breed-specific genetic evaluations. PMID:27285683
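Mapping ordinal calving-difficulty scores onto the standard normal scale from their incidence rates, as in threshold models, can be illustrated in a few lines. The incidence rates below are made up for the example; the paper used the USMARC rates.

```python
from statistics import NormalDist

# Hypothetical incidence rates for ordered calving-difficulty scores 1..4.
incidence = {1: 0.80, 2: 0.12, 3: 0.06, 4: 0.02}   # score -> proportion

nd = NormalDist()
z_for_score, cum = {}, 0.0
for score, p in sorted(incidence.items()):
    # z-value at the midpoint of the score's cumulative-probability band
    z_for_score[score] = nd.inv_cdf(cum + p / 2)
    cum += p
```

Each ordinal category is replaced by the normal quantile at the midpoint of its cumulative band, producing z-scores that respect the scores' ordering and their observed frequencies.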
Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng
2015-01-01
Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, the gyroscope's accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, including one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer, and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors, compared to the reference path, are less than 10° and 5° for the system fixed on the pedestrian's waist and on the quadrotor UAV, respectively. PMID:25961384
Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris
2012-01-01
Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652-964) and 704 birds in 2011 (95% CI = 579-837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of the models used to estimate density and population size is expected to improve as data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037-3,965) and 2,461 birds in 2011 (95% CI = 1,682-3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, to produce relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers were similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving the precision of the resulting population size and trend estimates. The method is also better suited for the steep and uneven terrain of Nihoa.
NASA Astrophysics Data System (ADS)
Cook, Tessa S.; Chadalavada, Seetharam C.; Boonn, William W.
2013-03-01
One of the biggest challenges in dose monitoring is customization of CT dose estimates to the patient. Patient size remains a highly significant variable. One metric that has previously been used for patient size is patient weight, though this is often criticized as inaccurate. In this work, we compare patients' weight to their effective diameters obtained from a CT scan of the chest or the abdomen. CT exams of the chest (N=163) and abdomen/pelvis (N=168) performed on adult patients in July 2012 were randomly selected for analysis. The effective diameter of the patient for each exam was determined at the central slice of the scan region using eXposure™ (Radimetrics, Inc., Toronto, Canada). In some cases, the same patient had both a chest and an abdominopelvic CT, so effective diameters from both regions were analyzed. In this small sample, there appears to be a linear relationship between patient weight and effective diameter when measured in the mid-chest and mid-abdomen of adult patients. However, for each weight, patient effective diameter can vary by 5 cm from the regression line in both the chest and the abdomen. A 5-cm difference corresponds to a difference of approximately 0.2 (chest) and 0.3 (abdomen/pelvis) in the correction factors recommended by the AAPM for size-specific dose estimation. These preliminary data suggest that weight-based CT protocoling may in fact be appropriate for some adults. However, more work is needed to identify those patients in whom weight-based protocoling is not appropriate.
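The weight-versus-effective-diameter analysis amounts to fitting a line and inspecting the residual scatter. The sketch below uses synthetic data with invented coefficients, standing in for the study's measurements, to show how the regression line and the spread about it would be computed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: an approximately linear relation between patient
# weight (kg) and CT effective diameter (cm), with a few cm of scatter.
# Slope, intercept and noise level are invented for illustration.
weight = rng.uniform(50, 120, 150)
eff_diam = 10.0 + 0.25 * weight + rng.normal(0.0, 2.0, 150)

slope, intercept = np.polyfit(weight, eff_diam, 1)     # least-squares line
residuals = eff_diam - (slope * weight + intercept)    # scatter about the fit
```

The standard deviation of `residuals` quantifies how far individual patients deviate from the weight-based prediction, which is exactly the few-cm spread the abstract translates into AAPM correction-factor differences.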
Mello, Beatriz; Schrago, Carlos G
2014-01-01
Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333
How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates
ERIC Educational Resources Information Center
Otterbach, Steffen; Sousa-Poza, Alfonso
2010-01-01
This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…
Haasl, Ryan J; Payseur, Bret A
2010-12-01
Theoretical work focused on microsatellite variation has produced a number of important results, including the expected distribution of repeat sizes and the expected squared difference in repeat size between two randomly selected samples. However, closed-form expressions for the sampling distribution and frequency spectrum of microsatellite variation have not been identified. Here, we use coalescent simulations of the stepwise mutation model to develop gamma and exponential approximations of the microsatellite allele frequency spectrum, a distribution central to the description of microsatellite variation across the genome. For both approximations, the parameter of biological relevance is the number of alleles at a locus, which we express as a function of θ, the population-scaled mutation rate, based on simulated data. Discovered relationships between θ, the number of alleles, and the frequency spectrum support the development of three new estimators of microsatellite θ. The three estimators exhibit roughly similar mean squared errors (MSEs) and all are biased. However, across a broad range of sample sizes and θ values, the MSEs of these estimators are frequently lower than all other estimators tested. The new estimators are also reasonably robust to mutation that includes step sizes greater than one. Finally, our approximation to the microsatellite allele frequency spectrum provides a null distribution of microsatellite variation. In this context, a preliminary analysis of the effects of demographic change on the frequency spectrum is performed. We suggest that simulations of the microsatellite frequency spectrum under evolutionary scenarios of interest may guide investigators to the use of relevant and sometimes novel summary statistics.
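For concreteness, here is a classical moment estimator of θ under the stepwise mutation model, derived from the known relation E[homozygosity] = 1/√(1 + 2θ). It is NOT one of the three new frequency-spectrum estimators the abstract proposes; it is only a simple baseline for the same quantity, computed from hypothetical allele counts.

```python
import numpy as np

def theta_smm(allele_counts):
    """Moment estimator of theta under the stepwise mutation model,
    inverting E[homozygosity] F = 1 / sqrt(1 + 2*theta)."""
    counts = np.asarray(allele_counts, dtype=float)
    p = counts / counts.sum()
    F = np.sum(p ** 2)                  # sample homozygosity
    return (1.0 / F ** 2 - 1.0) / 2.0

# Hypothetical allele counts at one microsatellite locus (100 sampled genes).
est = theta_smm([12, 30, 41, 10, 7])
```

A monomorphic locus (F = 1) gives θ̂ = 0, and θ̂ grows as the allele frequencies become more even, matching the intuition that higher mutation rates spread variation over more repeat sizes.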
The Neighborhood Food Environment and Adult Weight Status: Estimates From Longitudinal Data
2011-01-01
Objectives. I used longitudinal data to consider the relationship between the neighborhood food environment and adult weight status. Methods. I combined individual-level data on adults from the 1998 through 2004 survey years of the National Longitudinal Survey of Youth 1979 with zip code–level data on the neighborhood food environment. I estimated ordinary least squares models of obesity, body mass index (BMI), and change in BMI. Results. For residents of urban areas, the neighborhood density of small grocery stores was positively and significantly related to obesity and BMI. For individuals who moved from a rural area to an urban area over a 2-year period, changes in neighborhood supermarket density, small grocery store density, and full-service restaurant density were significantly related to the change in BMI over that period. Conclusions. Residents of urban neighborhoods with a higher concentration of small grocery stores may be more likely to patronize these stores and consume more calories because small grocery stores tend to offer more unhealthy food options than healthy food options. Moving to an urban area may expose movers to a wider variety of food options that may influence calorie consumption. PMID:21088263
Ohta, Hiroyuki; Sakuma, Masae; Suzuki, Akitsu; Morimoto, Yuuka; Ishikawa, Makoto; Umeda, Minako; Arai, Hidekazu
2016-01-01
Fibroblast growth factor 23 (FGF23) is a molecule involved in regulating phosphorus homeostasis. Although some studies indicated an association between serum FGF23 levels and sex, the association has not been fully investigated. The purpose of this study was to evaluate whether sex could influence FGF23 responsiveness to dietary phosphorus intake in healthy individuals. Thirty-two healthy subjects between 21 and 28 years were recruited for this study. Subjects performed 24-hour urine collection, and blood samples were collected. We estimated phosphorus intake (UC-P) from the urine collection (UC) and evaluated any association between UC-P and serum FGF23 levels. Subsequently, we compared serum FGF23 levels between males and females. A positive correlation was observed between UC-P and serum FGF23 levels. Serum FGF23 levels were significantly higher in males than in females. Serum FGF23 levels/UC-P were significantly higher in females than in males. There was no significant difference in serum FGF23 levels/UC-P/BW between the male and female groups. Our results indicate that there is no sex difference in FGF23 responsiveness to phosphorus intake per body weight.
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2011-01-01
The purposes of this study were to generate correction equations for self-reported height and weight quartiles and to test the accuracy of the body mass index (BMI) classification based on corrected self-reported height and weight among 739 male and 434 female college students. The BMIqc (from height and weight quartile-specific, corrected…
NASA Technical Reports Server (NTRS)
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
Quantitative shape analysis with weighted covariance estimates for increased statistical efficiency
2013-01-01
Background The introduction and statistical formalisation of landmark-based methods for analysing biological shape has made a major impact on comparative morphometric analyses. However, a satisfactory solution for including information from 2D/3D shapes represented by ‘semi-landmarks’ alongside well-defined landmarks into the analyses is still missing. Also, current approaches lack an integrated statistical treatment of measurement error. Results We propose a procedure based upon the description of landmarks with measurement covariance, which extends statistical linear modelling processes to semi-landmarks for further analysis. Our formulation is based upon a self-consistent approach to the construction of likelihood-based parameter estimation and includes corrections for parameter bias induced by the degrees of freedom within the linear model. The method has been implemented and tested on measurements from 2D fly wing, 2D mouse mandible and 3D mouse skull data. We use these data to explore possible advantages and disadvantages over the use of standard Procrustes/PCA analysis via a combination of Monte-Carlo studies and quantitative statistical tests. In the process we show how appropriate weighting provides not only greater stability but also more efficient use of the available landmark data. The set of new landmarks generated in our procedure (‘ghost points’) can then be used in any further downstream statistical analysis. Conclusions Our approach provides a consistent way of including different forms of landmarks into an analysis and reduces instabilities due to poorly defined points. Our results suggest that the method has the potential to be utilised for the analysis of 2D/3D data, and in particular, for the inclusion of information from surfaces represented by multiple landmark points. PMID:23548043
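The core idea of weighting landmarks by their measurement covariance can be reduced to a single 2-D point measured three times, each with its own covariance: the generalized least squares (GLS) estimate weights every measurement by its inverse covariance. All numbers below are illustrative, not taken from the paper's datasets.

```python
import numpy as np

# Covariance-weighted (GLS) estimate of one 2-D landmark from three
# measurements with different, known measurement covariances.
true_pt = np.array([2.0, -1.0])
covs = [np.diag([0.01, 25.0]),     # precise in x, poor in y
        np.diag([25.0, 0.01]),     # precise in y, poor in x
        np.diag([1.0, 1.0])]       # moderate in both
pts = [np.array([2.0, 5.0]),
       np.array([9.0, -1.0]),
       np.array([2.5, -0.5])]

precision = sum(np.linalg.inv(C) for C in covs)
weighted_sum = sum(np.linalg.inv(C) @ p for C, p in zip(covs, pts))
gls_est = np.linalg.solve(precision, weighted_sum)
naive_est = np.mean(pts, axis=0)     # unweighted average, for comparison
```

The GLS estimate trusts each coordinate only where that measurement is precise, so it lands near the true point even though every individual measurement is badly off in one direction; the unweighted mean does not.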
Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E
2014-03-01
Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data around concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r (2) = 0.90-0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for
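A limited sampling strategy of the kind reviewed above can be sketched as a multiple linear regression from three timed concentrations to the full-profile AUC. All concentrations and coefficients below are synthetic and do not reproduce any published MPA model; they only show how the predictive equation, its bias, and its fit would be computed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training set: concentrations at pre-dose, 1 h and 3 h post-dose,
# and a "true" AUC0-12 from full profiling (invented values, mg*h/L scale).
n = 60
C0 = rng.uniform(1.0, 4.0, n)       # pre-dose (trough)
C1 = rng.uniform(5.0, 25.0, n)      # 1 h post-dose
C3 = rng.uniform(3.0, 12.0, n)      # 3 h post-dose
auc12 = 8.0 + 2.5 * C0 + 0.9 * C1 + 1.8 * C3 + rng.normal(0.0, 2.0, n)

# Multiple linear regression: AUC12 ~ b0 + b1*C0 + b2*C1 + b3*C3
X = np.column_stack([np.ones(n), C0, C1, C3])
coef, *_ = np.linalg.lstsq(X, auc12, rcond=None)
pred = X @ coef
bias = np.mean((pred - auc12) / auc12) * 100.0                      # mean % error
r2 = 1.0 - np.sum((auc12 - pred) ** 2) / np.sum((auc12 - auc12.mean()) ** 2)
```

The mean percentage error and imprecision of `pred` against full-profile AUC values are the same performance measures (bias, imprecision) the reviewed studies report.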
Matsuura, Akihiro; Irimajiri, Mami; Matsuzaki, Kunihiro; Hiraguri, Yuko; Nakanowatari, Toshihiko; Yamazaki, Atusi; Hodate, Koichi
2013-01-01
The aim of this study was to establish a method for estimating loading capacity for Japanese native horses by gait analysis using an accelerometer. Six mares of Japanese native horses were used. The acceleration of each horse was recorded during walking and trotting along a straight course at a sampling frequency of 200 Hz. Each horse performed 12 tests: one test with a loaded weight of 80 kg (First 80 kg) followed by 10 tests with random loaded weights between 85 kg and 130 kg and a final test with a loaded weight of 80 kg again. The time series of acceleration was subjected to fast Fourier transformation, and the autocorrelation coefficient was calculated. The first two peaks of the autocorrelation were defined as symmetry and regularity of the gait. At trot, symmetries in the 100, 110, and 125 kg tests were significantly lower than that in First 80 kg (P < 0.05, by analysis of covariance and Sidak's test). These results imply that the maximum permissible load weight is less than 100 kg, which is 29% of the body weight of Japanese native horses. Our method is a widely applicable and welfare-friendly method for estimating maximum permissible load weights of horses. PMID:23302086
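The symmetry and regularity measures can be illustrated on a synthetic acceleration trace: compute the normalized autocorrelation and read off the peaks at one step and one stride of lag. The 1.5 Hz stride rate and the harmonic mix below are invented for the example; they are not the horses' measured gait.

```python
import numpy as np

fs = 200                                    # sampling frequency (Hz), as in the study
t = np.arange(0, 10, 1 / fs)
stride_hz = 1.5
# Synthetic gait acceleration: strong step harmonic (2x stride) plus stride component.
acc = np.sin(2 * np.pi * 2 * stride_hz * t) + 0.8 * np.sin(2 * np.pi * stride_hz * t)

def autocorr(x):
    """Biased autocorrelation, normalized so lag 0 equals 1."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    return r / r[0]

r = autocorr(acc)
symmetry = r[int(fs / (2 * stride_hz))]     # first peak: one step (~0.33 s)
regularity = r[int(fs / stride_hz)]         # second peak: one stride (~0.67 s)
```

A perfectly symmetric gait gives a high step-lag peak; here the deliberate imbalance between the two harmonics lowers `symmetry` while `regularity` stays near 1, mimicking how added load degrades the first autocorrelation peak.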
An exploratory investigation of weight estimation techniques for hypersonic flight vehicles
NASA Technical Reports Server (NTRS)
Cook, E. L.
1981-01-01
The three basic methods of weight prediction (fixed-fraction, statistical correlation, and point stress analysis) and some of the computer programs that have been developed to implement them are discussed. A modified version of the WAATS (Weights Analysis of Advanced Transportation Systems) program is presented, along with input data forms and an example problem.
Lee, J W; Choi, S B; Jung, Y H; Keown, J F; Van Vleck, L D
2000-06-01
Data collected by the National Livestock Research Institute of the Rural Development Administration of Korea were used to estimate genetic parameters for yearling (YWT, n = 5,848), 18-mo (W18, n = 4,585), and slaughter (SWT, n = 2,279) weights for Korean Native cattle. Nine animal models were used to obtain REML estimates of genetic parameters: DP-2 included genetic, uncorrelated dam, and residual random effects; DQ-2 included genetic, sire x region x year-season interaction, and residual random effects; DPQ-2 was based on DQ-2 but included both interaction and dam effects; DMP-2 was based on DP-2 but with dam effect partitioned to include maternal genetic and permanent environmental effects; and DMPQ-2 was based on DMP-2 but also included sire interaction effects. Those five models included two fixed factors: region x year-season and age of dam x sex effects. Models DP-3, DQ-3, DPQ-3, and DMPQ-3 were based on DP-2, DQ-2, DPQ-2, and DMPQ-2 but included as a third fixed factor whether or not identification of the sire was known. Estimates of heritability with DMPQ-3 for YWT, with DPQ-3 for W18 and SWT when analyzed with single-trait analyses were .14, .11, and .17, respectively, and were nearly the same with bivariate analyses. Estimate of maternal heritability for YWT from single-trait analysis was .04, with estimates for other traits near zero. For bivariate analyses, the estimate for YWT was .01. With single trait analysis, estimate of the direct-maternal genetic correlation for YWT was negative (-.81). Estimates of direct genetic correlations between YWT and W18, YWT and SWT, and W18 and SWT were .99, 1.00, and .97, respectively. Estimates of environmental correlations varied from .60 to .81; the largest was between W18 and SWT. Including a fixed factor for whether sire identification was missing or not missing reduced the estimate of heritability for slaughter weight. The results suggest that the sire x region x year-season interaction is important for yearling
Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo
2016-01-01
Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255
Shockley, Keith R.
2016-01-01
High-throughput in vitro screening experiments can be used to generate concentration-response data for large chemical libraries. It is often desirable to estimate the concentration needed to achieve a particular effect, or potency, for each chemical tested in an assay. Potency estimates can be used to directly compare chemical profiles and prioritize compounds for confirmation studies, or employed as input data for prediction modeling and association mapping. The concentration for half-maximal activity derived from the Hill equation model (i.e., AC50) is the most common potency measure applied in pharmacological research and toxicity testing. However, the AC50 parameter is subject to large uncertainty for many concentration-response relationships. In this study we introduce a new measure of potency based on a weighted Shannon entropy measure termed the weighted entropy score (WES). Our potency estimator (Point of Departure, PODWES) is defined as the concentration producing the maximum rate of change in weighted entropy along a concentration-response profile. This approach provides a new tool for potency estimation that does not depend on the assumption of monotonicity or any other pre-specified concentration-response relationship. PODWES estimates potency with greater precision and less bias compared to the conventional AC50 assessed across a range of simulated conditions. PMID:27302286
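The flavor of an entropy-based potency readout can be sketched as follows. This is NOT the published WES formula; it only illustrates the idea of locating a point of departure where an entropy profile along the concentration series changes fastest. The Hill curve (AC50 = 0.5) and the concentration grid are synthetic.

```python
import numpy as np

# Schematic entropy-based point-of-departure: Shannon entropy of each 3-point
# window of responses along the concentration axis; POD = concentration at
# the steepest change of that entropy profile. Illustrative only.
conc = np.logspace(-3, 2, 12)                     # hypothetical concentrations
resp = 100.0 / (1.0 + (0.5 / conc) ** 1.2)        # Hill-shaped response, AC50 = 0.5

def shannon(w):
    p = np.asarray(w, dtype=float) / np.sum(w)
    return -np.sum(p * np.log(p))

H = np.array([shannon(resp[i - 1:i + 2]) for i in range(1, len(resp) - 1)])
pod = conc[np.argmax(np.abs(np.diff(H))) + 2]     # steepest entropy change
```

Note that no monotone curve fit is required: the entropy profile is computed directly from the responses, which is the property the abstract highlights for WES.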
Using machine vision to estimate bird weight in the poultry industry
NASA Astrophysics Data System (ADS)
Lotufo, Roberto A.; Taube-Netto, Miguel; Conejo, Eduardo; Hoyos, Francisco J. d.
1999-03-01
This work describes a real-time continuous broiler weighing system based on machine vision, used for size-sort planning in a processing plant. We demonstrate that this technology can be used successfully as an alternative to classical electromechanical carcass weighing systems. A digitized silhouette image of the carcass, hung by its feet, is divided into six regions: the two legs, the two wings, the neck and the breast. Once the parts are separated, their individual areas are measured and used in a polynomial equation that predicts the overall bird weight. A sample of 1400 birds was collected, labeled and weighed on a precision scale, on different days and at different hours. We found an error standard deviation of 78 grams for broilers weighing from 750 to 2100 grams. The morphological image processing algorithms proved robust in extracting the individual parts of the carcass.
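The regression step described above, predicting carcass weight from silhouette part areas, can be illustrated with synthetic data. The region areas, coefficients and noise level below are invented; the paper's actual polynomial and its 78 g error figure come from real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: areas (arbitrary units) of six silhouette regions per
# bird (two legs, two wings, neck, breast) and a true scale weight (g).
n = 200
areas = rng.uniform(20.0, 120.0, size=(n, 6))
true_coef = np.array([2.0, 2.1, 1.5, 1.4, 0.8, 6.0])  # hypothetical
weight = areas @ true_coef + 300.0 + rng.normal(0.0, 40.0, n)

# Fit weight ~ intercept + linear term in each region area by least
# squares, then measure the residual standard deviation.
X = np.column_stack([np.ones(n), areas])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
pred = X @ coef
resid_sd = np.std(weight - pred)
```

With enough labeled birds, the residual standard deviation of such a fit plays the role of the 78 g figure reported in the abstract.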
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let Sn[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of convergence of Sn[f] in weighted Lp space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-x^2/2) is the Hermite weight function, then we obtain sufficient conditions for the corresponding weighted inequalities to hold for k = 0, 1, 2, ..., r.
Her, Namguk; Amy, Gary; Foss, David; Chow, Jaeweon
2002-08-01
High performance size exclusion chromatography (HPSEC) with ultraviolet absorbance (UVA) detection has been widely utilized to estimate the molecular weight (MW) and MW distribution of natural organic matter (NOM). However, the estimation of MW with UVA detection is inherently inaccurate because UVA at 254 nm detects only limited components (mostly pi-bonded molecules) of NOM, and the molar absorptivity of these different NOM constituents is not equal. In comparison, a SEC chromatogram obtained with a DOC detector showed significant differences from the corresponding UVA chromatogram, resulting in different MW values as well as different estimates of polydispersity. The MWs of Suwannee River humic acid (SRHA), Suwannee River fulvic acid (SRFA), and various mixtures thereof were estimated with HPSEC coupled with UVA and DOC detectors. The results show that UVA is not an adequate detector for quantitative MW estimation but rather can be used only for limited qualitative analysis. The NOM in several natural waters (Irvine Ranch, California groundwater, and Barr Lake, Colorado surface water) was also characterized to demonstrate the different MWs obtained with the two detectors. The SEC-DOC chromatograms revealed NOM constituent peaks that went undetected by UVA. Utilizing online DOC detection, a better representation of NOM MWs was suggested, with NOM displaying higher weight-averaged MW (Mw) and lower number-averaged MW (Mn) as well as higher polydispersity. A method for estimating the MWs and polydispersities of NOM fractional components is presented. PMID:12188370
Kamphuis, Claudia; Burke, Jennie K; Taukiri, Sarah; Petch, Susan-Fay; Turner, Sally-Anne
2016-08-01
Dairy cows grazing pasture and milked using automated milking systems (AMS) have lower milking frequencies than indoor-fed cows milked using AMS. Therefore, milk recording intervals used for herd testing indoor-fed cows may not be suitable for cows on pasture-based farms. We hypothesised that accurate standardised 24 h estimates could be determined for AMS herds with milk recording intervals shorter than the Gold Standard (48 h), but that the optimum milk recording interval would depend on the herd average for milking frequency. The Gold Standard protocol was applied on five commercial dairy farms with AMS between December 2011 and February 2013. From 12 milk recording test periods, involving 2211 cow-test days and 8049 cow milkings, standardised 24 h estimates for milk volume and milk composition were calculated for the Gold Standard protocol and compared with those collected during nine alternative sampling scenarios, including six shorter sampling periods and three in which a fixed number of milk samples per cow were collected. The results indicate that a 48 h milk recording protocol is unnecessarily long for collecting accurate estimates during milk recording on pasture-based AMS farms. Collection of only two milk samples per cow was optimal in terms of high concordance correlation coefficients for milk volume and components and a low proportion of missed cow-test days. Further research is required to determine the effects of diurnal variation in milk composition on standardised 24 h estimates for milk volume and components before a protocol based on a fixed number of samples could be considered. Based on the results of this study, New Zealand has adopted a split protocol for herd testing based on the average milking frequency for the herd (NZ Herd Test Standard 8100:2015). PMID:27600967
NASA Astrophysics Data System (ADS)
Williamson, Nathan H.; Nydén, Magnus; Röding, Magnus
2016-06-01
We present comprehensive derivations for the statistical models and methods for the use of pulsed gradient spin echo (PGSE) NMR to characterize the molecular weight distribution of polymers via the well-known scaling law relating diffusion coefficients and molecular weights. We cover the lognormal and gamma distribution models and linear combinations of these distributions. Although the focus is on methodology, we illustrate the use experimentally with three polystyrene samples, comparing the NMR results to gel permeation chromatography (GPC) measurements, test the accuracy and noise-sensitivity on simulated data, and provide code for implementation.
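A small numeric sketch of the scaling-law machinery described above: if molecular weight M is lognormally distributed and D = K·M^(-alpha), then ln D is an affine function of ln M, so D is lognormal too, and the PGSE echo attenuation is the ensemble average of exp(-bD). The constants K, alpha and the b-values below are placeholders, not calibrated values from the paper.

```python
import numpy as np

# Scaling law D = K * M**(-alpha): since ln D = ln K - alpha * ln M,
# a lognormal MW distribution maps to a lognormal D distribution.
K, alpha = 3.0e-9, 0.55          # hypothetical calibration constants
mu, sigma = np.log(1.0e5), 0.4   # lognormal MW distribution (g/mol)

rng = np.random.default_rng(1)
M = rng.lognormal(mu, sigma, 100_000)
D = K * M ** (-alpha)            # diffusion coefficients (m^2/s)

# PGSE echo attenuation for a distribution of diffusion coefficients:
# E(b) = < exp(-b D) >, averaged over the polymer ensemble.
b = np.linspace(0.0, 2.0e11, 5)  # illustrative b-values (s/m^2)
E = np.array([np.mean(np.exp(-bi * D)) for bi in b])
```

Fitting the lognormal (or gamma) parameters to measured E(b) curves is then a standard nonlinear least-squares problem, which is the inverse step the paper develops in detail.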
Chan, Yi-Hsin; Tsai, Wei-Chung; Shen, Changyu; Han, Seongwook; Chen, Lan S.; Lin, Shien-Fong; Chen, Peng-Sheng
2015-01-01
Background We recently reported that subcutaneous nerve activity (SCNA) can be used to estimate sympathetic tone. Objectives To test the hypothesis that left thoracic SCNA is more accurate than heart rate variability (HRV) in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction (MI). Methods We used an implanted radiotransmitter to study left stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA), and thoracic SCNA in 9 dogs at baseline and up to 8 weeks after MI. HRV was determined by time-domain, frequency-domain and non-linear analyses. Results The correlation coefficients between integrated SGNA and SCNA averaged 0.74 (95% confidence interval (CI), 0.41–1.06) at baseline and 0.82 (95% CI, 0.63–1.01) after MI (P<.05 for both). The absolute values of these correlation coefficients were significantly larger than those between SGNA and HRV measures from time-domain, frequency-domain and non-linear analyses, both at baseline (P<.05 for all) and after MI (P<.05 for all). There was a clear increment of SGNA and SCNA at 2, 4, 6 and 8 weeks after MI, while HRV parameters showed no significant changes. Significant circadian variations were noted in SCNA, SGNA and all HRV parameters, both at baseline and after MI. Atrial tachycardia (AT) episodes were invariably preceded by increases in SCNA and SGNA, which rose progressively from 120 s to 30 s before AT onset. No such changes in HRV parameters were observed before AT onset. Conclusion SCNA is more accurate than HRV in estimating cardiac sympathetic tone in ambulatory dogs with MI. PMID:25778433
Estimation of breed-specific heterosis effects for birth, weaning, and yearling weight in cattle
Technology Transfer Automated Retrieval System (TEKTRAN)
Heterosis, assumed proportional to expected breed heterozygosity, was calculated for 6,834 individuals with birth, weaning and yearling weight records from Cycle VII and advanced generations of the U.S. Meat Animal Research Center (USMARC) Germplasm Evaluation (GPE) project. Breeds represented in t...
ERIC Educational Resources Information Center
Qing, Siyu
2014-01-01
The National Science Foundation (NSF) Survey of Doctorate Recipients (SDR) collects information on a sample of individuals in the United States with PhD degrees. A significant portion of the sampled individuals appear in multiple survey years and can be linked across time. Survey weights in each year are created and adjusted for oversampling and…
Parent-Reported Height and Weight as Sources of Bias in Survey Estimates of Childhood Obesity
Weden, Margaret M.; Brownell, Peter B.; Rendall, Michael S.; Lau, Christopher; Fernandes, Meenakshi; Nazarov, Zafar
2013-01-01
Parental reporting of height and weight was evaluated for US children aged 2–13 years. The prevalence of obesity (defined as a body mass index value (calculated as weight (kg)/height (m)2) in the 95th percentile or higher) and its height and weight components were compared in child supplements of 2 nationally representative surveys: the 1996–2008 Children of the National Longitudinal Survey of Youth 1979 Cohort (NLSY79-Child) and the 1997 Child Development Supplement of the Panel Study of Income Dynamics (PSID-CDS). Sociodemographic differences in parent reporting error were analyzed. Error was largest for children aged 2–5 years. Underreporting of height, not overreporting of weight, generated a strong upward bias in obesity prevalence at those ages. Frequencies of parent-reported heights below the Centers for Disease Control and Prevention's (Atlanta, Georgia) first percentile were implausibly high at 16.5% (95% confidence interval (CI): 14.3, 19.0) in the NLSY79-Child and 20.6% (95% CI: 16.0, 26.3) in the PSID-CDS. They were highest among low-income children at 33.2% (95% CI: 22.4, 46.1) in the PSID-CDS and 26.2% (95% CI: 20.2, 33.2) in the NLSY79-Child. Bias in the reporting of obesity decreased with children's age and reversed direction at ages 12–13 years. Underreporting of weight increased with age, and underreporting of height decreased with age. We recommend caution to researchers who use parent-reported heights, especially for very young children, and offer practical solutions for survey data collection and research on child obesity. PMID:23785115
Sepehrband, Farshid; Clark, Kristi A; Ullmann, Jeremy F P; Kurniawan, Nyoman D; Leanage, Gayeshika; Reutens, David C; Yang, Zhengyi
2015-09-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intracellular and intraneurite density values obtained from diffusion MRI. The NODDI (neurite orientation distribution and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different subregions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42 ± 6%, 36 ± 4%, and 43 ± 5%; electron microscopy: 41 ± 10%, 36 ± 8%, and 44 ± 12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers.
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.
2015-01-01
A structural concept called pultruded rod stitched efficient unitized structure (PRSEUS) was developed by the Boeing Company to address the complex structural design aspects associated with a pressurized hybrid wing body (HWB) aircraft configuration. While PRSEUS was an enabling technology for the pressurized HWB structure, limited investigation of PRSEUS for other aircraft structures, such as circular fuselages and wings, has been done. Therefore, a study was undertaken to investigate the potential weight savings afforded by using the PRSEUS concept for a commercial transport wing. The study applied PRSEUS to the Advanced Subsonic Technology (AST) Program composite semi-span test article, which was sized using three load cases. The initial PRSEUS design was developed by matching cross-sectional stiffnesses for each stringer/skin combination within the wing covers, then the design was modified to ensure that the PRSEUS design satisfied the design criteria. It was found that the PRSEUS wing design exhibited weight savings over the blade-stiffened composite AST Program wing of nearly 9%, and a weight savings of 49% and 29% for the lower and upper covers, respectively, compared to an equivalent metallic wing.
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date, despite many attempts, there is no widely accepted and readily available non-invasive technique for measuring blood glucose. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is presented to show that such a scheme is capable of accurately extracting the glucose concentration from a complex biological medium.
NASA Astrophysics Data System (ADS)
Kaur, Jasmeet; Nandy, D. K.; Arora, Bindiya; Sahoo, B. K.
2015-01-01
Accurate knowledge of interaction potentials among alkali-metal atoms and alkaline-earth ions is very useful in studies of cold-atom physics. Here we carry out systematic theoretical studies of the long-range interactions of the Li, Na, K, and Rb alkali-metal atoms with the Ca+, Ba+, Sr+, and Ra+ alkaline-earth ions, largely motivated by their importance in a number of applications. These interactions are expressed as a power series in the inverse of the internuclear separation R. Both the dispersion and induction components of these interactions are determined accurately from the algebraic coefficients corresponding to each power combination in the series. Ultimately, these coefficients are expressed in terms of the electric multipole polarizabilities of the above-mentioned systems, which are calculated using matrix elements obtained from a relativistic coupled-cluster method, with core contributions to these quantities from the random-phase approximation. We also compare our estimated polarizabilities with other available theoretical and experimental results to verify the accuracy of our calculations. In addition, we evaluate the lifetimes of the first two low-lying states of the ions using the above matrix elements. Graphical representations of the dispersion coefficients versus R are given for the interactions of all the alkaline-earth ions with Rb.
NASA Astrophysics Data System (ADS)
de Meersman, K.; van der Baan, M.; Kendall, J.-M.; Jones, R. H.
2003-04-01
We present a weighted multi-station complex polarisation analysis to determine P-wave and S-wave polarisation properties of three-component seismic array data. Complex polarisation analysis of particle motion on seismic data was first introduced by Vidale (1986). In its original form, the method is an interpretation of the eigenvalue decomposition of a 3-by-3 complex data-covariance matrix. We have extended the definition of the data-covariance matrix C to C = X^H W^(-1) X, where C is now a 3n-by-3n symmetric complex covariance matrix, with n the number of included three-component (3C) stations. X is the data matrix, whose columns are the analytic signals of the northern, eastern and vertical components of the successive 3C stations. X^H is the conjugate transpose of X, and W is a diagonal weighting matrix containing the pre-arrival noise levels of all components of all stations. The signals used in the data matrix are corrected for arrival-time differences. The eigenvectors and eigenvalues of C then describe the polarisation properties within the selected analysis window for all included stations. The main advantages of this approach are a better separation of signal and noise in the covariance matrix and the measurement of signal polarisation properties that are not influenced by the presence of polarised white noise. The technique was incorporated in an automated routine to measure the P-wave and S-wave polarisation properties of a microseismic dataset. The data were recorded in the Valhall oilfield in 1998 with a six-level 3C vertical linear array with geophones at 20 m intervals between depths of 2100 m and 2200 m. In total, 303 microseismic events were analysed and the results compared with manual interpretations. This comparison showed the advantage and high accuracy of the method.
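The weighted covariance construction can be sketched for two 3C stations as follows. Whitening each channel by its pre-arrival noise level is one concrete reading of the diagonal weighting W; the wavelet, polarisation vector and noise levels below are invented for illustration.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (zero out negative frequencies)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
wavelet = np.sin(2 * np.pi * 8 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)

# Two 3C stations see the same linearly polarised arrival; station 2 is
# noisier. Columns: N1, E1, Z1, N2, E2, Z2.
pol = np.array([0.6, 0.7, 0.39])
pol /= np.linalg.norm(pol)
noise_sd = np.array([0.05, 0.05, 0.05, 0.2, 0.2, 0.2])
X = np.column_stack([a * wavelet for a in np.concatenate([pol, pol])])
X += rng.normal(0.0, noise_sd, size=X.shape)
Xa = np.column_stack([analytic(X[:, j]) for j in range(6)])

# Noise weighting: whiten each channel by its pre-arrival noise level,
# then eigen-decompose the 6x6 Hermitian covariance matrix; the dominant
# eigenvector holds the joint polarisation across stations.
Xw = Xa / noise_sd
C = Xw.conj().T @ Xw
vals, vecs = np.linalg.eigh(C)
v = vecs[:, -1]                                # dominant eigenvector
est = np.abs(v[:3]) / np.linalg.norm(v[:3])    # station-1 direction cosines
```

Because the noisy channels of station 2 are down-weighted, the recovered direction cosines track the true polarisation far better than an unweighted single-station analysis would at the same noise level.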
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baseline and LiDAR observations. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada (2000) on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and, in turn, on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
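A minimal simulation of a multiplicative error model, with one weighted LS adjustment and one estimator for the variance of unit weight, can be sketched as follows. The straight-line model, the 1/y^2 weighting and the noise level are illustrative choices, not the paper's full set of three adjustments and five estimators.

```python
import numpy as np

rng = np.random.default_rng(3)

# Multiplicative error model: y_i = f_i * (1 + e_i), so the noise
# standard deviation is proportional to the (unknown) true value f_i.
t = np.linspace(1.0, 10.0, 50)
beta_true = np.array([2.0, 1.5])
f = beta_true[0] + beta_true[1] * t
sigma0 = 0.02                     # relative error, the unit-weight sd
y = f * (1.0 + rng.normal(0.0, sigma0, t.size))

# Weighted LS with weights 1/y^2 (approximating 1/f^2), one common
# adjustment choice for models of this type.
A = np.column_stack([np.ones_like(t), t])
w = 1.0 / y ** 2
sw = np.sqrt(w)[:, None]
beta_hat, *_ = np.linalg.lstsq(A * sw, y * np.sqrt(w), rcond=None)

# Estimate of the variance of unit weight from the weighted residual
# sum of squares; it should recover sigma0**2.
r = y - A @ beta_hat
s0_sq = np.sum(w * r ** 2) / (t.size - 2)
```

Treating the same data as if the errors were additive would instead weight all points equally and over-trust the large (noisier) measurements, which is the DEM-construction pitfall the abstract highlights.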
García-Santos, Glenda; Scheiben, Dominik; Binder, Claudia R
2011-03-01
Investigations of occupational and environmental risk caused by the use of agrochemicals have received considerable interest over the last decades. And yet, in developing countries, the lack of staff and analytical equipment, as well as the costs of chemical analyses, make it difficult, if not impossible, to monitor pesticide contamination and residues in humans, air, water, and soils. A new and simple method is presented here for estimating pesticide deposition on humans and soil after application. The estimate is derived from the water mass balance measured in a given number of highly absorbent papers under low-evaporation conditions and an unsaturated atmosphere. The method is presented as a suitable, rapid, low-cost screening tool, complementary to toxicological tests, for assessing occupational and environmental exposure caused by knapsack sprayers where analytical instruments are lacking. This new method, called the "weight method", was tested by measuring drift deposition on the neighbouring field and on the clothes of the applicator after spraying water with a knapsack sprayer in one of the largest potato-producing areas of Colombia. The results were confirmed by experimental data using a tracer and the same setup used for the weight method. The weight method was able to explain 86% of the airborne drift and deposition variance.
Cui, Dong; Pu, Weiting; Liu, Jing; Bian, Zhijie; Li, Qiuli; Wang, Lei; Gu, Guanghua
2016-10-01
Synchronization is an important mechanism for understanding information processing in normal or abnormal brains. In this paper, we propose a new method called normalized weighted-permutation mutual information (NWPMI) for two-variable signal synchronization analysis, and combine NWPMI with the S-estimator measure to generate a new method, named S-estimator based normalized weighted-permutation mutual information (SNWPMI), for analyzing multi-channel electroencephalographic (EEG) synchronization strength. The performance of the NWPMI, including the effects of time delay, embedding dimension, coupling coefficients, signal-to-noise ratios (SNRs) and data length, is evaluated using a coupled Henon map model. The results show that the NWPMI is superior in describing synchronization compared with the normalized permutation mutual information (NPMI). Furthermore, the proposed SNWPMI method is applied to analyze scalp EEG data from 26 amnestic mild cognitive impairment (aMCI) subjects and 20 age-matched controls with normal cognitive function, all of whom suffer from type 2 diabetes mellitus (T2DM). The proposed NWPMI and SNWPMI methods are suggested as effective indices for estimating synchronization strength. PMID:27451314
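The building block of NWPMI, permutation mutual information between two signals, can be computed in a few lines. The amplitude weighting and the exact normalization used by the paper's NWPMI are omitted here; this sketch normalizes plain permutation mutual information by log2(m!).

```python
import math
import numpy as np
from collections import Counter

def ordinal_patterns(x, m=3):
    # Map a signal to its sequence of ordinal (permutation) patterns.
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def normalized_perm_mi(x, y, m=3):
    """Permutation mutual information between two signals, normalized by
    log2(m!). The paper's NWPMI additionally weights patterns by local
    amplitude information; that weighting is not reproduced here."""
    px, py = ordinal_patterns(x, m), ordinal_patterns(y, m)
    cx, cy = Counter(px), Counter(py)
    cxy = Counter(zip(px, py))
    n = len(px)
    mi = 0.0
    for (a, b), c in cxy.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) )
        mi += (c / n) * math.log2(c * n / (cx[a] * cy[b]))
    return mi / math.log2(math.factorial(m))
```

A signal paired with itself scores near 1, while two independent noise signals score near 0, which is the behaviour a synchronization index needs.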
Guo, Penghong; Rivera, Daniel E.; Downs, Danielle S.; Savage, Jennifer S.
2016-01-01
Excessive gestational weight gain (i.e., weight gain during pregnancy) is a significant public health concern, and has been the recent focus of novel, control systems-based interventions. This paper develops a control-oriented dynamical systems model based on a first-principles energy balance model from the literature, which is evaluated against participant data from a study targeted to obese and overweight pregnant women. The results indicate significant under-reporting of energy intake among the participant population. A series of approaches based on system identification and state estimation are developed in the paper to better understand and characterize the extent of under-reporting; these range from back-calculating energy intake from a closed-form of the energy balance model, to a constrained semi-physical identification approach that estimates the extent of systematic under-reporting in the presence of noise and possibly missing data. Additionally, we describe an adaptive algorithm based on Kalman filtering to estimate energy intake in real-time. The approaches are illustrated with data from both simulated and actual intervention participants. PMID:27570366
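An adaptive intake estimator in the spirit described can be sketched with a standard Kalman filter on a toy linear energy-balance model. The state transition, the 7700 kcal/kg constant, the expenditure coefficient and all noise variances below are illustrative assumptions, not the paper's validated first-principles model.

```python
import numpy as np

# Toy linear energy balance: W_{k+1} = W_k + dt*(EI_k - k_EE*W_k)/rho,
# with energy intake EI (kcal/day) treated as a random-walk state that
# must be inferred from noisy weight measurements alone.
dt, rho, k_EE = 7.0, 7700.0, 30.0     # days, kcal/kg, kcal/day per kg
F = np.array([[1.0 - dt * k_EE / rho, dt / rho],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])            # only weight is measured
Q = np.diag([1e-4, 50.0 ** 2])        # process noise: weight, intake
R = 0.25                              # weight measurement variance (kg^2)

def kalman_intake(z, x0, P0):
    """Standard Kalman filter; returns the intake estimate per step."""
    x, P = x0.astype(float), P0.astype(float)
    out = []
    for zk in z:
        x = F @ x                      # predict
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T)[0, 0] + R    # innovation variance
        K = (P @ H.T)[:, 0] / S        # Kalman gain
        x = x + K * (zk - x[0])        # update with weight innovation
        P = (np.eye(2) - np.outer(K, H[0])) @ P
        out.append(x[1])
    return np.array(out)
```

Run against weight measurements, the filter pulls the intake state toward whatever value explains the observed weight trajectory, which is how systematic under-reporting shows up as a gap between reported and estimated intake.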
van Donkelaar, Aaron; Martin, Randall V; Spurr, Robert J D; Burnett, Richard T
2015-09-01
We used a geographically weighted regression (GWR) statistical model to represent bias of fine particulate matter concentrations (PM2.5) derived from a 1 km optimal estimate (OE) aerosol optical depth (AOD) satellite retrieval that used AOD-to-PM2.5 relationships from a chemical transport model (CTM) for 2004-2008 over North America. This hybrid approach combined the geophysical understanding and global applicability intrinsic to the CTM relationships with the knowledge provided by observational constraints. Adjusting the OE PM2.5 estimates according to the GWR-predicted bias yielded significant improvement compared with unadjusted long-term mean values (R^2 = 0.82 versus R^2 = 0.62), even when a large fraction (70%) of sites was withheld for cross-validation (R^2 = 0.78), and developed seasonal skill (R^2 = 0.62-0.89). The effect of individual GWR predictors on OE PM2.5 estimates additionally provided insight into the sources of uncertainty for global satellite-derived PM2.5 estimates. These predictor-driven effects imply that local variability in surface elevation and urban emissions are important sources of uncertainty in geophysical calculations of the AOD-to-PM2.5 relationship used in satellite-derived PM2.5 estimates over North America, and potentially worldwide. PMID:26261937
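The GWR building block, a locally weighted least-squares fit with a spatial kernel, can be sketched as follows. The Gaussian kernel, the bandwidth and the synthetic bias surface are illustrative choices, not the paper's configuration.

```python
import numpy as np

def gwr_coef(coords, x, y, s0, h):
    # Local weighted least squares at location s0 with a Gaussian
    # distance kernel of bandwidth h: nearby sites dominate the fit.
    w = np.exp(-np.sum((coords - s0) ** 2, axis=1) / (2.0 * h ** 2))
    A = np.column_stack([np.ones(len(y)), x])
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return beta

# Synthetic example: the target (think of a PM2.5 bias surface) drifts
# with longitude, plus a locally constant effect of one predictor.
rng = np.random.default_rng(4)
coords = rng.uniform(0.0, 10.0, size=(2000, 2))   # (lon, lat)
elev = rng.uniform(0.0, 1.0, 2000)                # a local predictor
y = 0.5 * coords[:, 0] + 2.0 * elev + rng.normal(0.0, 0.1, 2000)

beta_west = gwr_coef(coords, elev, y, np.array([1.0, 5.0]), 1.5)
beta_east = gwr_coef(coords, elev, y, np.array([9.0, 5.0]), 1.5)
```

The intercept varies across space while the predictor coefficient stays roughly constant, mirroring how GWR lets the satellite-bias relationship drift geographically.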
Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.
Tvedebrink, Torben; Morling, Niels
2015-12-01
The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios, where the alternative hypothesis is assumed not to encompass close relatives. However, this approach implies that important factors present in real human populations are discarded, and it may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original, as several authors have previously published results accounting for close familial relationships. However, we revisit the discussion to increase the awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. Accounting for a monozygotic twin in the weight of evidence implies that the likelihood ratio is truncated at a maximal value depending on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, then data relevant to Danish society suggest that the threshold of likelihood ratios should be approximately between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept
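One reading of the truncation rule, capping the reported LR at the reciprocal of the probability that an unrecognised monozygotic (MZ) twin is the true donor, can be sketched as follows. The prevalence and recognition figures below are placeholders, not the Danish values used in the paper.

```python
def truncated_lr(lr, p_mz_twin, p_unrecognised):
    """Cap a DNA likelihood ratio at the value implied by the risk of an
    unrecognised MZ twin, who would share the full DNA profile.
    p_mz_twin: probability the suspect has an MZ co-twin (placeholder);
    p_unrecognised: probability such a twin goes unrecognised."""
    risk = p_mz_twin * p_unrecognised
    return min(lr, 1.0 / risk) if risk > 0 else lr
```

With a twin prevalence of 0.003 and a 0.2% chance of non-recognition, the cap lands near 167,000, i.e., inside the 150,000 to 2,000,000 band the abstract reports for Danish data; a huge profile-based LR of 10^9 would be truncated to that cap, while a modest LR of 1,000 passes through unchanged.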
Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.
Tvedebrink, Torben; Morling, Niels
2015-12-01
The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios, where the alternative hypothesis is assumed not to encompass close relatives. However, this approach implies that important factors present in real human populations are discarded, and it may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original, as several authors have previously published results accounting for close familial relationships. However, we revisit the discussion to increase awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. Accounting for a monozygotic twin in the weight of evidence implies that the likelihood ratio is truncated at a maximal value depending on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, data relevant to Danish society suggest that the threshold of likelihood ratios should be approximately between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept
Thorup, V M; Edwards, D; Friggens, N C
2012-04-01
Precise energy balance estimates for individual cows are of great importance to monitor health, reproduction, and feed management. Energy balance is usually calculated as energy input minus output (EB(inout)), requiring measurements of feed intake and energy output sources (milk, maintenance, activity, growth, and pregnancy). Except for milk yield, direct measurements of the other sources are difficult to obtain in practice, and estimates contain considerable error sources, limiting on-farm use. Alternatively, energy balance can be estimated from body reserve changes (EB(body)) using body weight (BW) and body condition score (BCS). Automated weighing systems exist and new technology performing semi-automated body condition scoring has emerged, so frequent automated BW and BCS measurements are feasible. We present a method to derive individual EB(body) estimates from frequently measured BW and BCS and evaluate the performance of the estimated EB(body) against the traditional EB(inout) method. From 76 Danish Holstein and Jersey cows, parity 1 or 2+, on a glycerol-rich or a whole grain-rich total mixed ration, BW was measured automatically at each milking. The BW was corrected for the weight of milk produced and for gutfill. Changes in BW and BCS were used to calculate changes in body protein, body lipid, and EB(body) during the first 150 d in milk. The EB(body) was compared with the traditional EB(inout) by isolating the term within EB(inout) associated with most uncertainty; that is, feed energy content (FEC); FEC=(EB(body)+EMilk+EMaintenance+Eactivity)/dry matter intake, where the energy requirements are for milk produced (EMilk), maintenance (EMaintenance), and activity (EActivity). Estimated FEC agreed well with FEC values derived from tables (the mean estimate was 0.21 MJ of effective energy/kg of dry matter or 2.2% higher than the mean table value). Further, the FEC profile did not suggest systematic bias in EB(body) with stage of lactation. The EB
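The FEC term isolated in the abstract above is a simple ratio of energy terms over dry matter intake. A minimal sketch of that back-calculation, with hypothetical per-day values rather than data from the study:

```python
def feed_energy_content(eb_body, e_milk, e_maintenance, e_activity, dmi):
    """FEC = (EB_body + E_milk + E_maintenance + E_activity) / dry matter intake.
    Energy terms in MJ of effective energy per day; dmi in kg of dry matter per day."""
    return (eb_body + e_milk + e_maintenance + e_activity) / dmi

# Hypothetical early-lactation cow-day: body reserves are being mobilised,
# so the body-reserve-based energy balance is negative.
fec = feed_energy_content(eb_body=-10.0, e_milk=95.0,
                          e_maintenance=40.0, e_activity=5.0, dmi=20.0)  # 6.5 MJ/kg DM
```

Comparing such per-cow FEC estimates against tabulated feed energy values is how the abstract evaluates EB(body) against EB(inout) without needing direct feed-intake energy measurements.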
Deshpande, Amol A.; Madhavan, P.; Deshpande, Girish R.; Chandel, Ravi Kumar; Yarbagi, Kaviraj M.; Joshi, Alok R.; Moses Babu, J.; Murali Krishna, R.; Rao, I. M.
2016-01-01
Fondaparinux sodium is a synthetic low-molecular-weight heparin (LMWH). This medication is an anticoagulant or a blood thinner, prescribed for the treatment of pulmonary embolism and prevention and treatment of deep vein thrombosis. Its determination in the presence of related impurities was studied and validated by a novel ion-pair HPLC method. The separation of the drug and its degradation products was achieved with the polymer-based PLRPs column (250 mm × 4.6 mm; 5 μm) in gradient elution mode. The mixture of 100 mM n-hexylamine and 100 mM acetic acid in water was used as buffer solution. Mobile phase A and mobile phase B were prepared by mixing the buffer and acetonitrile in the ratio of 90:10 (v/v) and 20:80 (v/v), respectively. Mobile phases were delivered in isocratic mode (2% B for 0–5 min) followed by gradient mode (2–85% B in 5–60 min). An Evaporative Light Scattering Detector (ELSD) was connected to the LC system to detect the responses of chromatographic separation. Further, the drug was subjected to stress studies for acidic, basic, oxidative, photolytic, and thermal degradations as per ICH guidelines and the drug was found to be labile in acid, base hydrolysis, and oxidation, while stable in neutral, thermal, and photolytic degradation conditions. The method provided linear responses over the concentration range of the LOQ to 0.30% for each impurity with respect to the analyte concentration of 12.5 mg/mL, and regression analysis showed a correlation coefficient value (r2) of more than 0.99 for all the impurities. The LOD and LOQ were found to be 1.4 µg/mL and 4.1 µg/mL, respectively, for fondaparinux. The developed ion-pair method was validated as per ICH guidelines with respect to accuracy, selectivity, precision, linearity, and robustness. PMID:27110496
Estimates of genetic parameters for visual scores and daily weight gain in Brangus animals.
Queiroz, S A; Oliveira, J A; Costa, G Z; Fries, L A
2011-05-01
(Co)variance components were estimated for visual scores of conformation (CY), early finishing (PY) and muscling (MY) at 550 days of age (yearling), average daily gain from weaning to yearling (GWY), conformation (CW), early finishing (PW) and muscling (MW) scores at weaning, and average daily gain from birth to weaning (GBW) in animals forming the Brazilian Brangus breed born between 1986 and 2002 from the livestock files of GenSys Consultants Associados S/C Ltda. The data set contained 53 683; 45 136; 52 937; 56 471; 24 531; 21 166; 24 006 and 25 419 records for CW, PW, MW, GBW, CY, PY, MY and GWY, respectively. Data were analyzed by the restricted maximum likelihood method using single- and two-trait animal models. Direct heritability estimates obtained by single-trait analysis were 0.12, 0.14, 0.13 and 0.14 for CY, PY and MY scores and GWY, respectively. A positive association was observed between the same visual scores at weaning and yearling, with correlations ranging from 0.64 to 0.94. Estimated correlations between GBW and weaning and yearling scores ranged from 0.60 to 0.77. The genetic correlation between GBW and GWY was low (0.10), whereas correlations of 0.55, 0.37 and 0.47 were observed between GWY and CY, PY and MY, respectively. Moreover, GWY showed a weak correlation with CW (0.10), PW (-0.08) and MW (-0.03) scores. These results indicate that selection of the traits that was studied would result in a small response. In addition, selection based on average daily gain may have an indirect effect on visual scores as the correlations between GWY and visual scores were generally strong. PMID:22440022
Yoon, Hyun Joong; Chung, Seong Youb
2013-12-01
This paper addresses the emotion recognition problem from electroencephalogram signals, in which emotions are represented on the valence and arousal dimensions. Fast Fourier transform analysis is used to extract features and the feature selection based on Pearson correlation coefficient is applied. This paper proposes a probabilistic classifier based on Bayes' theorem and a supervised learning using a perceptron convergence algorithm. To verify the proposed methodology, we use an open database. An emotion is defined as two-level class and three-level class in both valence and arousal dimensions. For the two-level class case, the average accuracy of the valence and arousal estimation is 70.9% and 70.1%, respectively. For the three-level class case, the average accuracy is 55.4% and 55.2%, respectively. PMID:24290940
Hook, E B; Regal, R R
1997-06-15
In log-linear capture-recapture approaches to population size, the method of model selection may have a major effect upon the estimate. In addition, the estimate may also be very sensitive if certain cells are null or very sparse, even with the use of multiple sources. The authors evaluated 1) various approaches to the issue of model uncertainty and 2) a small sample correction for three or more sources recently proposed by Hook and Regal. The authors compared the estimates derived using 1) three different information criteria that included Akaike's Information Criterion (AIC) and two alternative formulations of the Bayesian Information Criterion (BIC), one proposed by Draper ("two pi") and one by Schwarz ("not two pi"); 2) two related methods of weighting estimates associated with models; 3) the independent model; and 4) the saturated model, with the known totals in 20 different populations studied by five separate groups of investigators. For each method, we also compared the estimate derived with or without the proposed small sample correction. At least in these data sets, the use of AIC appeared on balance to be preferable. The BIC formulation suggested by Draper appeared slightly preferable to that suggested by Schwarz. Adjustment for model uncertainty appears to improve results slightly. The proposed small sample correction appeared to diminish relative log bias but only when sparse cells were present. Otherwise, its use tended to increase relative log bias. Use of the saturated model (with or without the small sample correction) appears to be optimal if the associated interval is not uselessly large, and if one can plausibly exclude an all-source interaction. All other approaches led to an estimate that was too low by about one standard deviation.
Dirtu, Alin C; Geens, Tinne; Dirinck, Eveline; Malarvannan, Govindan; Neels, Hugo; Van Gaal, Luc; Jorens, Philippe G; Covaci, Adrian
2013-09-01
Human exposure to chemicals commonly encountered in our environment, like phthalates, is routinely assessed through urinary measurement of their metabolites. Particular attention is given to specific population groups, such as obese individuals, for whom the dietary intake of environmental chemicals is higher. To evaluate the exposure to phthalates, nine phthalate metabolites (PMs) were analyzed in urine collected from obese individuals and a control population. Obese individuals lost weight through either bariatric surgery or a conservative weight loss program with dietary and lifestyle counseling. Urine samples were also collected from the obese individuals after 3, 6 and 12 months of weight loss. Individual daily intakes of the corresponding phthalate diesters were estimated based on the urinary PM concentrations. A high variability was recorded for the levels of each PM in both obese and control urine samples, showing exposure to high levels of PMs in specific subgroups. The most important PM as a percentage contribution to the total PM levels was mono-ethyl phthalate, followed by the metabolites of di-butyl phthalate and di-2-ethyl-hexyl phthalate (DEHP). No differences in the PM levels and profiles between obese individuals entering the program and controls were observed. Although paralleled by a significant decrease in their weight, an increase in the urinary PM levels after 3 to 6 months of weight loss was seen. Constant figures for the estimated phthalate daily intakes were observed over the studied period, suggesting that besides food consumption, other human exposure sources to phthalates (e.g. air, dust) might also be important. The weight loss treatment method followed by obese individuals influenced the correlations between PM levels, suggesting a change of the intake sources with time. Except for a few gender differences recorded between the urinary DEHP metabolite correlations, no other differences were observed for the urinary PM levels as a function of age, body
Zhang, Lifan; Zhou, Xiang; Michal, Jennifer J.; Ding, Bo; Li, Rui; Jiang, Zhihua
2014-01-01
Birth weight is an economically important trait in pig production because it directly impacts piglet growth and survival rate. In the present study, we performed a genome wide survey of candidate genes and pathways associated with individual birth weight (IBW) using the Illumina PorcineSNP60 BeadChip on 24 high (HEBV) and 24 low estimated breeding value (LEBV) animals. These animals were selected from a reference population of 522 individuals produced by three sires and six dam lines, which were crossbreds with multiple breeds. After quality-control, 43,257 SNPs (single nucleotide polymorphisms), including 42,243 autosomal SNPs and 1,014 SNPs on chromosome X, were used in the data analysis. A total of 27 differentially selected regions (DSRs), including 1 on Sus scrofa chromosome 1 (SSC1), 1 on SSC4, 2 on SSC5, 4 on SSC6, 2 on SSC7, 5 on SSC8, 3 on SSC9, 1 on SSC14, 3 on SSC18, and 5 on SSCX, were identified to show the genome wide separations between the HEBV and LEBV groups for IBW in piglets. A DSR with the most number of significant SNPs (including 7 top 0.1% and 31 top 5% SNPs) was located on SSC6, while another DSR with the largest genetic differences in FST was found on SSC18. These regions harbor known functionally important genes involved in growth and development, such as TNFRSF9 (tumor necrosis factor receptor superfamily member 9), CA6 (carbonic anhydrase VI) and MDFIC (MyoD family inhibitor domain containing). A DSR rich in imprinting genes appeared on SSC9, which included PEG10 (paternally expressed 10), SGCE (sarcoglycan, epsilon), PPP1R9A (protein phosphatase 1, regulatory subunit 9A) and ASB4 (ankyrin repeat and SOCS box containing 4). More importantly, our present study provided evidence to support six quantitative trait loci (QTL) regions for pig birth weight, six QTL regions for average birth weight (ABW) and three QTL regions for litter birth weight (LBW) reported previously by other groups. Furthermore, gene ontology analysis with 183 genes
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2016-01-01
Purpose: Estimating obesity prevalence using self-reported height and weight is an economical and effective method and is often used in national surveys. However, self-reporting of height and weight can involve misreporting of those variables, which has been found to be associated with the size of the individual. This study investigated the biases in…
Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.
2014-01-01
This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437
Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing
B. Olinger
2005-07-01
Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
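The density determination described above rests on the standard Archimedes relation between the weights measured in air and in water. A minimal sketch under that standard buoyancy formula, with hypothetical weighings (this is not the authors' procedure; the default water and air densities correspond roughly to 25 °C):

```python
def hydrostatic_density(w_air, w_water, rho_water=0.99704, rho_air=0.00118):
    """Sample density (g/cm^3) from weighings in air and fully submerged in
    water, with an air-buoyancy correction:
        rho = w_air * (rho_water - rho_air) / (w_air - w_water) + rho_air
    w_air and w_water are the balance readings (g) in air and in water."""
    return w_air * (rho_water - rho_air) / (w_air - w_water) + rho_air

# Hypothetical weighings of a pressed pellet (grams)
rho = hydrostatic_density(w_air=10.0, w_water=4.5)  # ~1.8118 g/cm^3
```

In practice, as the abstract notes, the precision of such a result also depends on knowing the water temperature (hence its density), thermal expansion of the sample, and allowing time for thermal equilibrium before the in-water reading.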
Friedrich, Jan O; Beyene, Joseph; Adhikari, Neill KJ
2009-01-01
statistically significant. Conclusion We have shown that alternative reasonable methodological approaches to the rosiglitazone meta-analysis can yield increased or decreased risks that are either statistically significant or not significant at the p = 0.05 level for both myocardial infarction and cardiovascular death. Completion of ongoing trials may help to generate more accurate estimates of rosiglitazone's effect on cardiovascular outcomes. However, given that almost all point estimates suggest harm rather than benefit and the availability of alternative agents, the use of rosiglitazone may greatly decline prior to more definitive safety data being generated. PMID:19134216
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1990-01-01
A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
Yang, Lu; Mester, Zoltán; Sturgeon, Ralph E; Meija, Juris
2012-03-01
The much anticipated overhaul of the International System of Units (SI) will result in new definitions of base units in terms of fundamental constants. However, redefinition of the kilogram in terms of the Planck constant (h) cannot proceed without consistency between the Avogadro and Planck constants, which are both related through the Rydberg constant. In this work, an independent assessment of the atomic weight of silicon in a highly enriched (28)Si crystal supplied by the International Avogadro Coordination (IAC) was performed. This recent analytical approach, based on dissolution with NaOH and its isotopic characterization by multicollector inductively coupled plasma mass spectrometry, is critically evaluated. The resultant atomic weight A(r)(Si) = 27.976 968 39(24)(k=1) differs significantly from the most recent value of A(r)(Si) = 27.976 970 27(23)(k=1). Using the results generated herein for A(r)(Si) along with other IAC measurement results for mass, volume, and the lattice spacing, the estimate of the Avogadro constant becomes N(A) = 6.022 140 40(19) × 10(23) mol(-1).
Austin, Peter C; Stuart, Elizabeth A
2015-12-10
The propensity score is defined as a subject's probability of treatment selection, conditional on observed baseline covariates. Weighting subjects by the inverse probability of treatment received creates a synthetic sample in which treatment assignment is independent of measured baseline covariates. Inverse probability of treatment weighting (IPTW) using the propensity score allows one to obtain unbiased estimates of average treatment effects. However, these estimates are only valid if there are no residual systematic differences in observed baseline characteristics between treated and control subjects in the sample weighted by the estimated inverse probability of treatment. We report on a systematic literature review, in which we found that the use of IPTW has increased rapidly in recent years, but that in the most recent year, a majority of studies did not formally examine whether weighting balanced measured covariates between treatment groups. We then proceed to describe a suite of quantitative and qualitative methods that allow one to assess whether measured baseline covariates are balanced between treatment groups in the weighted sample. The quantitative methods use the weighted standardized difference to compare means, prevalences, higher-order moments, and interactions. The qualitative methods employ graphical methods to compare the distribution of continuous baseline covariates between treated and control subjects in the weighted sample. Finally, we illustrate the application of these methods in an empirical case study. We propose a formal set of balance diagnostics that contribute towards an evolving concept of 'best practice' when using IPTW to estimate causal treatment effects using observational data. PMID:26238958
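The quantitative balance diagnostic described above can be sketched as follows. This is a minimal illustration of a weighted standardized difference under IPTW weights, not the authors' implementation; the weighted-variance convention used here is one common choice, and the data are hypothetical.

```python
import numpy as np

def weighted_standardized_difference(x, treat, w):
    """Weighted standardized difference of covariate x between treated
    (treat == 1) and control (treat == 0) groups under IPTW weights w.
    Values near zero indicate the weighting has balanced the covariate."""
    t, c = treat == 1, treat == 0

    def wmean(v, wt):
        return np.average(v, weights=wt)

    def wvar(v, wt):
        m = wmean(v, wt)
        return np.average((v - m) ** 2, weights=wt)

    m1, m0 = wmean(x[t], w[t]), wmean(x[c], w[c])
    pooled_sd = np.sqrt((wvar(x[t], w[t]) + wvar(x[c], w[c])) / 2.0)
    return (m1 - m0) / pooled_sd

# Hypothetical data: with unit weights this reduces to the ordinary
# standardized difference of the raw sample.
x = np.array([1.0, 2.0, 3.0, 4.0])
treat = np.array([1, 1, 0, 0])
d = weighted_standardized_difference(x, treat, np.ones(4))  # -4.0
```

Computing this diagnostic for every measured baseline covariate in the weighted sample, before estimating treatment effects, is the kind of formal check the review found most published IPTW studies omit.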
NASA Astrophysics Data System (ADS)
Du, Lin; Shi, Shuo; Gong, Wei; Yang, Jian; Sun, Jia; Mao, Feiyue
2016-06-01
Hyperspectral LiDAR (HSL) is a novel tool in the field of active remote sensing, which has been widely used in many domains because of its advantageous ability to acquire spectral information. Especially in the precise monitoring of nitrogen in green plants, the HSL plays an indispensable role. The existing HSL system used for nitrogen status monitoring has a multi-channel detector, which can improve the spectral resolution and receiving range, but may result in data redundancy, difficulty in system integration, and high cost as well. Thus, it is necessary and urgent to pick out the nitrogen-sensitive feature wavelengths among the spectral range. The present study, aiming at solving this problem, assigns a feature weighting to each centre wavelength of the HSL system by using matrix coefficient analysis and a divergence threshold. The feature weighting is a criterion to amend the centre wavelength of the detector to accommodate different purposes, especially the estimation of leaf nitrogen content (LNC) in rice. In this way, the wavelengths highly correlated to the LNC can be ranked in descending order and used to estimate rice LNC sequentially. In this paper, an HSL system which works based on a wide spectrum emission and a 32-channel detector is used to collect the reflectance spectra of rice leaves. These spectra collected by HSL cover a range of 538 nm - 910 nm with a resolution of 12 nm. These 32 wavelengths are strongly absorbed by chlorophyll in green plants within this range. The relationship between the rice LNC and reflectance-based spectra is modeled using partial least squares (PLS) and support vector machines (SVMs) based on calibration and validation datasets, respectively. The results indicate that I) the wavelength selection method of HSL based on feature weighting is effective in choosing the nitrogen-sensitive wavelengths, and can also be co-adapted with the hardware of the HSL system friendly. II) The chosen wavelengths have a high correlation with rice LNC which can be
2012-01-01
Background Although the measurement site at L4–L5 for visceral adipose tissue (VAT) has been commonly accepted, some researchers suggest that additional upper sites (i.e., L1–L2 and L2–L3) are useful for estimating VAT volume. Therefore, determining the optimum measurement site remains challenging and has become important in determining VAT volume. We investigated the influence of a single-slice measurement site on the prediction of VAT volume and of changes in VAT volume in obese Japanese men. Methods Twenty-four men, aged 30–65 years with a mean BMI of 30 kg/m2, were included in a 12-week weight loss program. We obtained continuous T1-weighted abdominal magnetic resonance images from T9 to S1 with a 1.5-T system to measure the VAT area. These VAT areas were then summed to determine VAT volume before and after the program. Results Single-slice images at 3–11 cm above L4–L5 had significant and high correlations with VAT volume at baseline (r = 0.94–0.97). The single-slice image with the highest correlation coefficient with respect to VAT volume was located 5 cm above L4–L5 (r = 0.97). The highest correlation coefficient between individual changes in VAT area and changes in VAT volume was located 6 cm above L4–L5 (r = 0.90). Conclusions Individual measurement sites have different abilities to estimate VAT volume and changes in VAT volume in obese Japanese men. The best zone, located 5–6 cm above L4–L5, may be a better predictor of VAT volume than the L4–L5 image in terms of both baseline values and changes with weight loss. PMID:22698384
Shuaib, Muhammad; Becker, Stan; Rahman, Md. Mokhlesur; Peters, David H.
2011-01-01
Due to an urgent need for information on the coverage of health services for women and children after the fall of the Taliban regime in Afghanistan, a multiple indicator cluster survey (MICS) was conducted in 2003 using the outdated 1979 census as the sampling frame. When 2004 pre-census data became available, population-sampling weights were generated based on the survey-sampling scheme. Using these weights, population estimates for seven maternal and child healthcare-coverage indicators were generated and compared with the unweighted MICS 2003 estimates. The use of sample weights provided unbiased estimates of population parameters. Comparison of weighted and unweighted estimates showed some wide differences for individual provincial estimates and confidence intervals. However, the mean, median and absolute mean of the differences between weighted and unweighted estimates and their confidence intervals were close to zero for all indicators at the national level. Ranking of the five highest and the five lowest provinces on weighted and unweighted estimates also yielded similar results. The general consistency of results suggests that outdated sampling frames can be appropriate for use in similar situations to obtain initial estimates from household surveys to guide policy and programming directions. However, the power to detect change from these estimates is lower than originally planned, requiring a greater tolerance for error when the data are used as a baseline for evaluation. The generalizability of using outdated sampling frames in similar settings is qualified by specific characteristics of the MICS 2003: a low replacement rate of clusters and zero probability of inclusion of clusters created after the 1979 census. PMID:21957678
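The weighted versus unweighted comparison above boils down to whether a design-weighted estimator shifts the coverage estimate. A minimal sketch with hypothetical indicator data and weights (not the MICS data):

```python
def weighted_proportion(y, w):
    """Design-weighted estimate of a coverage proportion:
    sum(w_i * y_i) / sum(w_i)."""
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Hypothetical binary coverage indicator (1 = covered) with
# population-sampling weights derived from the sampling scheme.
y = [1, 0, 1, 1, 0]
w = [2.0, 1.0, 0.5, 1.5, 1.0]
unweighted = sum(y) / len(y)          # 0.6
weighted = weighted_proportion(y, w)  # 4.0 / 6.0 ~= 0.667
```

When the weights are roughly uniform, as at the national level in the study, the two estimates nearly coincide; large weight variation at the provincial level is what produced the wide individual differences.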
Tortajada, Salvador; Fuster-Garcia, Elies; Vicente, Javier; Wesseling, Pieter; Howe, Franklyn A; Julià-Sapé, Margarida; Candiota, Ana-Paula; Monleón, Daniel; Moreno-Torres, Angel; Pujol, Jesús; Griffiths, John R; Wright, Alan; Peet, Andrew C; Martínez-Bisbal, M Carmen; Celda, Bernardo; Arús, Carles; Robles, Montserrat; García-Gómez, Juan Miguel
2011-08-01
In the last decade, machine learning (ML) techniques have been used to develop classifiers for automatic brain tumour diagnosis. However, the development of these ML models relies on a unique training set, and learning stops once this set has been processed. Training these classifiers requires a representative amount of data, but the gathering, preprocessing, and validation of samples is expensive and time-consuming. Therefore, for a classical, non-incremental approach to ML, it is necessary to wait long enough to collect all the required data. In contrast, an incremental learning approach may allow us to build an initial classifier with a smaller number of samples and update it incrementally when new data are collected. In this study, an incremental learning algorithm for Gaussian Discriminant Analysis (iGDA) based on the Graybill and Deal weighted combination of estimators is introduced. Each time a new set of data becomes available, a new estimation is carried out and combined with a previous estimation. iGDA does not require access to the previously used data and is able to include new classes that were not in the original analysis, thus allowing the customization of the models to the distribution of data at a particular clinical center. Five benchmark databases were used to evaluate the behaviour of the iGDA algorithm in terms of stability-plasticity, class inclusion and order effect. Finally, the iGDA algorithm has been applied to automatic brain tumour classification with magnetic resonance spectroscopy and compared with two state-of-the-art incremental algorithms. The empirical results show the ability of the algorithm to learn in an incremental fashion, improving the performance of the models when new information is available, and converging over time. Furthermore, the algorithm shows a negligible instance and concept order effect, avoiding the bias that such effects could introduce. PMID
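The Graybill-Deal combination that iGDA builds on is a precision-weighted average of independent estimates of the same quantity. A minimal sketch of the scalar case with hypothetical numbers (the paper applies the idea to Gaussian discriminant parameters, not raw scalars):

```python
def graybill_deal(means, variances):
    """Graybill-Deal combined estimator: weight each independent estimate
    by the inverse of its (estimated) variance, then take the weighted mean."""
    weights = [1.0 / v for v in variances]
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

# Two batches estimating the same quantity; the lower-variance (more
# precise) estimate dominates the combination.
combined = graybill_deal([2.0, 4.0], [1.0, 4.0])  # (2.0 + 1.0) / 1.25 = 2.4
```

Because only each batch's summary statistics (mean and variance) enter the combination, an incremental learner using this rule never needs to revisit previously processed data, which is the property the abstract highlights.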
Miles, Donna; Perrin, Eliana M.; Coyne-Beasley, Tamera; Ford, Carol
2013-01-01
Objective We compared parental reports of children’s height and weight when the values were estimated vs. parent-measured to determine how these reports influence the estimated prevalence of childhood obesity. Methods In the 2007 and 2008 North Carolina Child Health Assessment and Monitoring Program surveys, parents reported height and weight for children aged 3–17 years. When parents reported the values were not measured (by doctor, school, or home), they were asked to measure their child and were later called back. We categorized body mass index status using standard CDC definitions, and we used Chi-square tests and the Stuart-Maxwell test of marginal homogeneity to examine reporting differences. Results About 80% (n=509) of the 638 parents who reported an unmeasured height and/or weight participated in a callback and provided updated measures. Children originally classified as obese were subsequently classified as obese (67%), overweight (13%), and healthy weight (19%). An estimated 28% of younger children (<10 years of age) vs. 6% of older children (aged ≥10 years) were reclassified on callback. Having parents who guessed the height and weight of their children and then reported updated values did not significantly change the overall population estimates of obesity. Conclusion Our findings demonstrate that using parent-reported height and weight values may be sufficient to provide reasonable estimates of obesity prevalence. Systematically asking the source of height and weight information may help improve how it is applied to research of the prevalence of childhood obesity when gold-standard measurements are not available. PMID:23277659
Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Burns, Douglas A.; Button, Daniel T.; Riva-Murray, Karen
2013-01-01
To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered.
Bradley, Paul M; Journey, Celeste A; Brigham, Mark E; Burns, Douglas A; Button, Daniel T; Riva-Murray, Karen
2013-01-01
To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered. PMID:22982552
NASA Astrophysics Data System (ADS)
Rebello, N. Sanjay
2012-02-01
Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated and actual performance on five exams in a second-semester calculus-based physics class. After the completion of each exam, students had about 72 hours to estimate both their individual score and the class mean score. They earned extra credit worth 1% of the exam points for estimating their own score within 2% of the actual score, and another 1% for estimating the class mean within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.
Chittawatanarat, Kaweesak; Pruenglampoo, Sakda; Trakulhoon, Vibul; Ungpinitpong, Winai; Patumanond, Jayanton
2012-01-01
Background Many medical procedures routinely use body weight as a parameter for calculation. However, measured weight is not always available, and the commonly used visual estimation has high error rates. The aim of this study was therefore to develop a predictive equation for body weight using body circumferences. Methods A prospective study was performed in healthy volunteers. Body weight, height, and eight circumferential parameters (neck, arm, chest, waist, umbilical level, hip, thigh, and calf) were recorded. Linear regression equations were developed in a modeling sample divided by sex and age (younger, <60 years; older, ≥60 years). The original regression equations were reduced to simple equations by adjusting coefficients and intercepts, and these equations were tested in an independent validation sample. Results A total of 2000 volunteers were included in this study and randomly separated into two groups (1000 each in the modeling and validation groups). Equations using height and one covariate circumference were developed. After the covariate selection process, the chest, waist, umbilical-level, and hip circumferences were retained for single-covariate equations (Sco). To reduce differences due to body somatotype, combination covariates were created by summing the chest circumference and one torso circumference (waist, umbilical level, or hip) and used to develop combination-covariate equations (Cco). The Cco equations had significantly higher 10% threshold error tolerance than the Sco equations (mean percentage error tolerance of Cco versus Sco [95% confidence interval; 95% CI]: 76.9 [74.2–79.6] versus 70.3 [68.4–72.3]; P < 0.01). Although the simple covariate equations showed more errors than the original covariate equations, error tolerance was comparable between the types of equations (original versus simple: 74.5 [71.9–77.1] versus 71.7 [69.2
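The single-covariate equation form used in this study, weight predicted from height plus one circumference, is an ordinary least-squares fit. The sketch below fits synthetic data with known coefficients; it illustrates the equation form and the 10% error-tolerance metric, not the study's published coefficients.

```python
import numpy as np

def fit_weight_equation(height_cm, circumference_cm, weight_kg):
    """Least-squares fit of weight = a + b*height + c*circumference."""
    X = np.column_stack([np.ones(len(weight_kg)), height_cm, circumference_cm])
    coef, *_ = np.linalg.lstsq(X, weight_kg, rcond=None)
    return coef  # [intercept, height coefficient, circumference coefficient]

def error_tolerance_10pct(coef, height_cm, circumference_cm, weight_kg):
    """Share of predictions within 10% of measured weight
    (the '10% threshold error tolerance' reported in the study)."""
    X = np.column_stack([np.ones(len(weight_kg)), height_cm, circumference_cm])
    predicted = X @ coef
    return float(np.mean(np.abs(predicted - weight_kg) <= 0.10 * weight_kg))

# Demo fit on synthetic data with known ground-truth coefficients
rng = np.random.default_rng(2)
h = rng.uniform(150, 190, 400)          # heights, cm
c = rng.uniform(120, 220, 400)          # e.g. a chest + waist combination covariate, cm
w = -100 + 0.5 * h + 1.0 * c + rng.normal(0.0, 2.0, 400)
coef = fit_weight_equation(h, c, w)
```

A combination covariate simply replaces `circumference_cm` with the sum of two torso circumferences; the fitting code is unchanged.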
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially
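The contrast drawn above between the one-parameter exponential and the two-parameter Weibull age distribution can be made concrete by parameterizing both by the median age, with the Weibull's extra shape parameter controlling the slope. A small sketch (parameter names are ours, not the study's):

```python
import math

def exponential_cdf(t, t_median):
    """One-parameter exponential age distribution, fixed by its median age."""
    lam = math.log(2) / t_median
    return 1.0 - math.exp(-lam * t)

def weibull_cdf(t, t_median, shape):
    """Two-parameter Weibull: the scale is chosen so the median stays at
    t_median, while the extra shape parameter controls the slope."""
    scale = t_median / (math.log(2) ** (1.0 / shape))
    return 1.0 - math.exp(-((t / scale) ** shape))
```

With shape = 1 the Weibull reduces exactly to the exponential; shape < 1 fattens the late-arrival tail, the kind of deviation that exchange with immobile zones in fractured rock would produce.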
Woo, Kyong-Je; Kim, Eun-Ji; Lee, Kyeong-Tae; Mun, Goo-Hyun
2016-09-01
Background Preoperative estimation of abdominal flap volume is valuable for breast reconstruction, especially in lean patients. The purpose of this study was to develop a formula to estimate the weight of the deep inferior epigastric artery perforator (DIEP) flap using unidimensional parameters. Methods We retrospectively collected data on 100 consecutive patients who underwent breast reconstruction using the DIEP flap. Multiple linear regression analysis was used to develop a formula to estimate the weight of the flap. Predictor variables included body mass index, height of the flap, width of the flap, and flap thickness on computed tomography angiographic images at three paraumbilical sites: 5 cm right, left, and inferior from the umbilicus. We then prospectively tested the accuracy of the developed formula in 38 consecutive patients who underwent breast reconstruction with free DIEP flaps. Results A calculation formula and a smartphone application, DIEP-W, were developed from the retrospective analysis (R² = 92.7%, p < 0.001). In the prospective study, the average estimated weight was 96.3% of the actual weight, giving the formula a mean absolute percentage error of 7.7% (average difference of 45 g). The flap size in the prospective group was significantly smaller (p < 0.001), and donor-site complications were fewer (p = 0.002), than in the retrospective group. Conclusion Surgeons can easily calculate DIEP flap weight for varying flap dimensions in real time using this formula during preoperative planning and intraoperative design. Estimating the flap weight facilitates economical use of the flap, which can lead to reduced donor-site complications.
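The accuracy metric reported for the prospective validation, mean absolute percentage error, is straightforward to compute. A sketch (the study's actual regression formula is not reproduced here):

```python
def mean_absolute_percentage_error(estimated_g, actual_g):
    """MAPE between estimated and actual flap weights (grams):
    mean of |estimate - actual| / actual, expressed as a percentage."""
    errors = [abs(e - a) / a for e, a in zip(estimated_g, actual_g)]
    return 100.0 * sum(errors) / len(errors)
```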
Technology Transfer Automated Retrieval System (TEKTRAN)
Phosphorus sorption data for soil of the Pembroke classification are recorded at high replication — 10 experiments at each of 7 initial concentrations — for characterizing the data error structure through variance function estimation. The results permit the assignment of reliable weights for the su...
Jensen, Bente R; Hovgaard-Hansen, Line; Cappelen, Katrine L
2016-08-01
Running on a lower-body positive-pressure (LBPP) treadmill allows effects of weight support on leg muscle activation to be assessed systematically, and has the potential to facilitate rehabilitation and prevent overloading. The aim was to study the effect of running with weight support on leg muscle activation and to estimate relative knee and ankle joint forces. Runners performed 6-min running sessions at 2.22 m/s and 3.33 m/s, at 100%, 80%, 60%, 40%, and 20% body weight (BW). Surface electromyography, ground reaction force, and running characteristics were measured. Relative knee and ankle joint forces were estimated. Leg muscles responded differently to unweighting during running, reflecting different relative contribution to propulsion and antigravity forces. At 20% BW, knee extensor EMGpeak decreased to 22% at 2.22 m/s and 28% at 3.33 m/s of 100% BW values. Plantar flexors decreased to 52% and 58% at 20% BW, while activity of biceps femoris muscle remained unchanged. Unweighting with LBPP reduced estimated joint force significantly although less than proportional to the degree of weight support (ankle). It was concluded that leg muscle activation adapted to the new biomechanical environment, and the effect of unweighting on estimated knee force was more pronounced than on ankle force.
ERIC Educational Resources Information Center
Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua
2012-01-01
For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned with larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…
Coplen, T.B.; Peiser, H.S.
1998-01-01
International commissions and national committees for atomic weights (mean relative atomic masses) have recommended regularly updated, best values for these atomic weights as applicable to terrestrial sources of the chemical elements. Presented here is a historically complete listing starting with the values in F. W. Clarke's 1882 recalculation, followed by the recommended values in the annual reports of the American Chemical Society's Atomic Weights Commission. From 1903, an International Commission published such reports and its values (scaled to an atomic weight of 16 for oxygen) are here used in preference to those of national committees of Britain, Germany, Spain, Switzerland, and the U.S.A. We have, however, made scaling adjustments from Ar(16O) to Ar(12C) where not negligible. From 1920, this International Commission constituted itself under the International Union of Pure and Applied Chemistry (IUPAC). Since then, IUPAC has published reports (mostly biennially) listing the recommended atomic weights, which are reproduced here. Since 1979, these values have been called the "standard atomic weights" and, since 1969, all values have been published, with their estimated uncertainties. Few of the earlier values were published with uncertainties. Nevertheless, we assessed such uncertainties on the basis of our understanding of the likely contemporary judgement of the values' reliability. While neglecting remaining uncertainties of 1997 values, we derive "differences" and a retrospective index of reliability of atomic-weight values in relation to assessments of uncertainties at the time of their publication. A striking improvement in reliability appears to have been achieved since the commissions have imposed upon themselves the rule of recording estimated uncertainties from all recognized sources of error.
García-Pastor, Andrés; Díaz-Otero, Fernando; Funes-Molina, Carmen; Benito-Conde, Beatriz; Grandes-Velasco, Sandra; Sobrino-García, Pilar; Vázquez-Alén, Pilar; Fernández-Bullido, Yolanda; Villanueva-Osorio, Jose Antonio; Gil-Núñez, Antonio
2015-10-01
A dose of 0.9 mg/kg of intravenous tissue plasminogen activator (t-PA) has proven to be beneficial in the treatment of acute ischemic stroke (AIS). Dosing of t-PA based on estimated patient weight (PW) increases the likelihood of errors. Our objectives were to evaluate the accuracy of estimated PW and assess the effectiveness and safety of the actual applied dose (AAD) of t-PA. We performed a prospective single-center study of AIS patients treated with t-PA from May 2010 to December 2011. Dose was calculated according to estimated PW. Patients were weighed during the 24 h following treatment with t-PA. Estimation errors and AAD were calculated. Actual PW was measured in 97 of the 108 included patients. PW estimation errors were recorded in 22.7 % and were more frequent when weight was estimated by stroke unit staff (44 %). Only 11 % of patients misreported their own weight. Mean AAD was significantly higher in patients who had intracerebral hemorrhage (ICH) after t-PA than in patients who did not (0.96 vs. 0.92 mg/kg; p = 0.02). Multivariate analysis showed an increased risk of ICH for each 10 % increase in t-PA dose above the optimal dose of 0.90 mg/kg (OR 3.10; 95 % CI 1.14-8.39; p = 0.026). No effects of t-PA misdosing were observed on symptomatic ICH, functional outcome or mortality. Estimated PW is frequently inaccurate and leads to t-PA dosing errors. Increasing doses of t-PA above 0.90 mg/kg may increase the risk of ICH. Standardized weighing methods before t-PA is administered should be considered.
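The dosing arithmetic behind these findings is simple to express. The sketch below assumes the standard alteplase regimen of 0.9 mg/kg with a 90 mg cap (the cap is standard practice, though not stated in this abstract); the function names are ours.

```python
def tpa_dose_mg(weight_kg, dose_per_kg=0.9, cap_mg=90.0):
    """Alteplase dose for acute ischemic stroke: 0.9 mg/kg, capped at 90 mg."""
    return min(dose_per_kg * weight_kg, cap_mg)

def applied_dose_per_kg(estimated_weight_kg, actual_weight_kg):
    """Actual applied dose (mg/kg) when the bolus was sized from an
    estimated weight but the patient's true weight differs."""
    return tpa_dose_mg(estimated_weight_kg) / actual_weight_kg
```

For example, dosing an 80 kg patient whose weight was estimated at 88 kg yields 0.99 mg/kg, a 10% excess over the optimal 0.90 mg/kg, which is the kind of overdose the study links to increased ICH risk.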
Al-Obaidly, Sawsan; Parrish, Jacqueline; Murphy, Kellie E; Glanc, Phyllis; Maxwell, Cynthia
2015-08-01
Objectives: This study aimed to determine whether increased maternal pre-pregnancy BMI reduces the accuracy of ultrasound estimation of fetal weight and of inter-twin weight discordance in twin pregnancies, compared with women with a normal BMI. Methods: We conducted a retrospective cohort study of women with a known pre-pregnancy (or early-pregnancy) BMI who delivered a viable twin pregnancy after 28 weeks of gestation between 2008 and 2011 and who underwent an ultrasound examination for estimation of fetal weight within the two weeks before delivery. The sonographic estimated fetal weight (EFW) was compared with the actual weight of each twin, and inter-twin weight discordance (defined as a weight difference between twins of more than 25%) was stratified by maternal BMI. We also sought to determine whether EFW and inter-twin weight discordance were affected when delivery occurred 8 to 14 days after the ultrasound, compared with delivery within seven days of the ultrasound. Results: We identified a total of 300 twin pregnancies with a known maternal pre-pregnancy BMI: 179 women were underweight or of normal weight (BMI < 25 kg/m²), 67 were overweight (BMI = 25 to 29.9 kg/m²), and 54 were obese (BMI ≥ 30 kg/m²). In all BMI groups, the accuracy of ultrasound performed 8 to 14 days before delivery was compared with that of ultrasound performed within seven days of
Padula, Amy M.; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B.
2012-01-01
Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000–2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants. PMID:23045474
Maeder, M T; Muenzer, T; Rickli, H; Brunner-La Rocca, H P; Myers, J; Ammann, P
2008-08-01
Maximal exercise capacity expressed as metabolic equivalents (METs) is rarely directly measured (measured METs; mMETs) but estimated from maximal workload (estimated METs; eMETs). We assessed the accuracy of predicting mMETs by eMETs in asymptomatic subjects. Thirty-four healthy volunteers without cardiovascular risk factors (controls) and 90 patients with at least one risk factor underwent cardiopulmonary exercise testing using individualized treadmill ramp protocols. The equation of the American College of Sports Medicine (ACSM) was employed to calculate eMETs. Despite a close correlation between eMETs and mMETs (patients: r = 0.82, controls: r = 0.88; p < 0.001 for both), eMETs were higher than mMETs in both patients [11.7 (8.9 - 13.4) vs. 8.2 (7.0 - 10.6) METs; p < 0.001] and controls [17.0 (16.2 - 18.2) vs. 15.6 (14.2 - 17.0) METs; p < 0.001]. The absolute [2.5 (1.6 - 3.7) vs. 1.3 (0.9 - 2.1) METs; p < 0.001] and the relative [28 (19 - 47) vs. 9 (6 - 14) %; p < 0.001] difference between eMETs and mMETs was higher in patients. In patients, ratio limits of agreement of 1.33 (×/÷ 1.40) between eMETs and mMETs were obtained, whereas the ratio limits of agreement were 1.09 (×/÷ 1.13) in controls. The ACSM equation is associated with a significant overestimation of mMETs even in young and fit subjects, and the overestimation is markedly more pronounced in older and less fit subjects with cardiovascular risk factors.
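Ratio limits of agreement of the kind reported here come from a Bland-Altman analysis on the log scale: the antilog of the mean log ratio gives the central ratio, and the antilog of 1.96 standard deviations gives the times-or-divided-by factor. A sketch under that standard construction:

```python
import math

def ratio_limits_of_agreement(estimated, measured):
    """Bland-Altman on the log scale: returns the geometric mean ratio of
    estimated to measured values and the factor f such that the 95%
    limits of agreement are ratio times-or-divided-by f."""
    logs = [math.log(e / m) for e, m in zip(estimated, measured)]
    n = len(logs)
    mean_log = sum(logs) / n
    sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1))
    return math.exp(mean_log), math.exp(1.96 * sd_log)
```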
NASA Technical Reports Server (NTRS)
MacConochie, Ian O.; White, Nancy H.; Mills, Janelle C.
2004-01-01
A program entitled Weights, Areas, and Mass Properties (WAMI) is built around an array of menus containing constants that can be used in various mass-estimating relationships, for the ultimate purpose of obtaining the mass properties of Earth-to-Orbit Transports. Current Shuttle mass property data were relied upon heavily for the baseline equation constants, from which other options were derived.
Reported maternal education is an important predictor of pregnancy outcomes. Like income, it is believed to allow women to locate in more favorable conditions than less educated or affluent peers. We examine the effect of reported educational attainment on term birth weight (birt...
ERIC Educational Resources Information Center
Lee, Sunghee; Satter, Delight E.; Ponce, Ninez A.
2009-01-01
Racial classification is a paramount concern in data collection and analysis for American Indians and Alaska Natives (AI/ANs) and has far-reaching implications in health research. We examine how different racial classifications affect survey weights and consequently change health-related indicators for the AI/AN population in California. Using a…
ERIC Educational Resources Information Center
Lien, Diana S.; Evans, William
2005-01-01
Substantial increases in cigarette taxes result in a decrease in smoking by pregnant women, with a consequent improvement in infant birth weight. These conclusions are based on data from four states that opted to raise cigarette taxes by a large margin.
Technology Transfer Automated Retrieval System (TEKTRAN)
Birth weight (BWT) and calving difficulty (CD) were recorded on 4,579 first parity females from the Germplasm Evaluation (GPE) program at the U.S. Meat Animal Research Center (USMARC). Both traits were analyzed using a bivariate animal model with direct and maternal effects. Calving difficulty was...
Ríos-Utrera, A; Cundiff, L V; Gregory, K E; Koch, R M; Dikeman, M E; Koohmaraie, M; Van Vleck, L D
2006-01-01
The influence of different levels of adjusted fat thickness (AFT) and HCW slaughter end points (covariates) on estimates of breed and retained heterosis effects was studied for 14 carcass traits from serially slaughtered purebred and composite steers from the US Meat Animal Research Center (MARC). Contrasts among breed solutions were estimated at 0.7, 1.1, and 1.5 cm of AFT, and at 295.1, 340.5, and 385.9 kg of HCW. For constant slaughter age, contrasts were adjusted to the overall mean (432.5 d). Breed effects for Red Poll, Hereford, Limousin, Braunvieh, Pinzgauer, Gelbvieh, Simmental, Charolais, MARC I, MARC II, and MARC III were estimated as deviations from Angus. In addition, purebreds were pooled into 3 groups based on lean-to-fat ratio, and then differences were estimated among groups. Retention of combined individual and maternal heterosis was estimated for each composite. Mean retained heterosis for the 3 composites also was estimated. Breed rankings and expression of heterosis varied within and among end points. For example, Charolais had greater (P < 0.05) dressing percentages than Angus at the 2 largest levels of AFT and smaller (P < 0.01) percentages at the 2 largest levels of HCW, whereas the 2 breeds did not differ (P > or = 0.05) at a constant age. The MARC III composite produced 9.7 kg more (P < 0.01) fat than Angus at AFT of 0.7 cm, but 7.9 kg less (P < 0.05) at AFT of 1.5 cm. For MARC III, the estimate of retained heterosis for HCW was significant (P < 0.05) at the lowest level of AFT, but at the intermediate and greatest levels estimates were nil. The pattern was the same for MARC I and MARC III for LM area. Adjustment for age resulted in near zero estimates of retained heterosis for AFT, and similarly, adjustment for HCW resulted in nil estimates of retained heterosis for LM area. For actual retail product as a percentage of HCW, the estimate of retained heterosis for MARC III was negative (-1.27%; P < 0.05) at 0.7 cm but was significantly
Lear, J.L.; Feyerabend, A.; Gregory, C.
1989-08-01
Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques.
NASA Astrophysics Data System (ADS)
Zouch, Wassim; Slima, Mohamed Ben; Feki, Imed; Derambure, Philippe; Taleb-Ahmed, Abdelmalik; Hamida, Ahmed Ben
2010-12-01
A new nonparametric method, based on the smooth weighted-minimum-norm (WMN) focal underdetermined-system solver (FOCUSS), for electrical cerebral activity localization using electroencephalography measurements is proposed. This method iteratively adjusts the spatial sources by reducing the size of the lead-field and the weighting matrix. Thus, an enhancement of source localization is obtained, as well as a reduction of the computational complexity. The performance of the proposed method, in terms of localization errors, robustness, and computation time, is compared with the WMN-FOCUSS and nonshrinking smooth WMN-FOCUSS methods as well as with standard generalized inverse methods (unweighted minimum norm, WMN, and FOCUSS). Simulation results for single-source localization confirm the effectiveness and robustness of the proposed method with respect to the reconstruction accuracy of a simulated single dipole.
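The FOCUSS core that the proposed method builds on is a re-weighted minimum-norm iteration. The sketch below shows only that basic loop, without the paper's smoothing, lead-field reduction, or weighting-matrix shrinking; the demo problem and names are ours.

```python
import numpy as np

def focuss(L, b, n_iter=20):
    """Basic FOCUSS: re-weighted minimum-norm solution of the
    underdetermined system L x = b; energy concentrates on few sources."""
    m, n = L.shape
    x = np.ones(n)                          # uniform starting weights
    for _ in range(n_iter):
        W = np.diag(x)                      # weights from the previous estimate
        x = W @ np.linalg.pinv(L @ W) @ b   # minimum-norm solve in weighted space
    return x

# Demo: recover one active source out of 20 candidates from 8 sensors
rng = np.random.default_rng(1)
L = rng.normal(size=(8, 20))
x_true = np.zeros(20)
x_true[5] = 2.0
x_hat = focuss(L, L @ x_true)
```

Each iterate reproduces the measurements while the diagonal re-weighting progressively suppresses weak entries, which is what yields a focal (sparse) source estimate.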
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
traditionally used to estimate spinal cord NTCP may not apply to the dosimetry of SRS. Further research with additional NTCP models is needed.
Validity of Mothers' Reports of Children's Weight in Japan.
Nosaka, Nobuyuki; Fujiwara, Takeo; Knaup, Emily; Okada, Ayumi; Tsukahara, Hirokazu
2016-08-01
Estimation methods for pediatric weight have not been evaluated for Japanese children. This study aimed to assess the accuracy of mothers' reports of their children's weight in Japan. We also evaluated potential alternatives for estimating weight, including the Broselow tape (BT), Advanced Pediatric Life Support (APLS), and Park's formulae. We prospectively collected cross-sectional data on a convenience sample of 237 children aged less than 10 years who presented to a general pediatric outpatient clinic with their mothers. Each weight estimation method was evaluated using Bland-Altman plots and by calculating the proportion within 10% and 20% of the measured weight. Mothers' reports of weight were the most accurate method, with 94.9% within 10% of the measured weight, the lowest mean difference (0.27 kg), and the narrowest 95% limits of agreement (−1.4 to 1.9 kg). The BT was the most reliable alternative, followed by the APLS and Park's formulae. Mothers' reports of their children's weight are more accurate than other weight estimation methods. When no report of a child's weight by the mother is available, BT is the best alternative. When an age-based formula is the only option, the APLS formula is preferred. PMID:27549669
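The APLS age-based rule and the within-10% accuracy criterion used in this study are easy to state in code. A sketch (weight = 2 × (age + 4) is the widely published APLS formula; Park's formula is omitted here because its exact form is not given in this abstract):

```python
def apls_weight_kg(age_years):
    """APLS age-based estimate: weight (kg) = 2 x (age in years + 4)."""
    return 2 * (age_years + 4)

def within_pct(estimated_kg, measured_kg, pct=10):
    """Accuracy criterion used in the study: estimate falls within
    pct% of the measured weight."""
    return abs(estimated_kg - measured_kg) <= pct / 100 * measured_kg
```

For example, `apls_weight_kg(5)` gives 18 kg, which counts as accurate against a measured weight of 19 kg but not against 25 kg.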
NASA Technical Reports Server (NTRS)
Herman, Jay R.
2010-01-01
Multiple scattering radiative transfer results are used to calculate action spectrum weighted irradiances and fractional irradiance changes in terms of a power law in ozone Ω, U(Ω/200)^(−RAF), where the new radiation amplification factor (RAF) is just a function of solar zenith angle. Including Rayleigh scattering caused small differences in the estimated 30-year changes in action spectrum-weighted irradiances compared to estimates that neglect multiple scattering. The radiative transfer results are applied to several action spectra and to an instrument response function corresponding to the Solar Light 501 meter. The effect of changing ozone on two plant damage action spectra is shown for plants with high sensitivity to UVB (280-315 nm) and those with lower sensitivity, showing that the probability of plant damage for the latter has increased since 1979, especially at middle to high latitudes in the Southern Hemisphere. Similarly, there has been an increase in rates of erythemal skin damage and pre-vitamin D3 production corresponding to measured ozone decreases. An example conversion function is derived to obtain erythemal irradiances and the UV index from measurements with the Solar Light 501 instrument response function. An analytic expression is given to convert changes in erythemal irradiances to changes in CIE vitamin-D action spectrum weighted irradiances.
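The power law above implies that the relative change in action-spectrum-weighted irradiance between two ozone amounts is (Ω₂/Ω₁)^(−RAF). A minimal sketch (the RAF value of 1.2 below is an illustrative erythemal-like number, not a value from the paper, where the RAF depends on solar zenith angle):

```python
def irradiance_ratio(omega_new, omega_ref, raf):
    """Relative action-spectrum-weighted irradiance change implied by the
    power law U*(omega/200)**(-RAF): ratio = (omega_new/omega_ref)**(-raf)."""
    return (omega_new / omega_ref) ** (-raf)

# Illustrative: a 5% ozone decrease with an assumed RAF of 1.2
change = irradiance_ratio(0.95 * 300.0, 300.0, 1.2) - 1.0  # fractional increase
```

For this assumed RAF, a 5% ozone decrease raises the weighted irradiance by roughly 6%, which is the kind of sensitivity the RAF formalism is designed to express.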
Gilbertson, Lynn; Lutfi, Robert A.; Lee, Jungmee
2015-01-01
Gilbertson and Lutfi [(2014). Hear. Res. 317, 9–14] report that older adults perform similarly to younger adults on a masked vowel discrimination task when the fundamental frequencies (F0) of the target and masker vowels differ, but that the older adults perform more poorly when the F0 is the same. This paper presents an alternative analysis of those data to support the conclusion that the poorer performance of older adults is due to an increase in the decision weight on the masker, reflecting poorer selective attention in noise among older adults. PMID:26093447
49 CFR 375.405 - How must I provide a non-binding estimate?
Code of Federal Regulations, 2013 CFR
2013-10-01
... provide reasonably accurate non-binding estimates based upon both the estimated weight or volume of the... a shipper with an estimate based on volume that will later be converted to a weight-based rate, you must provide the shipper an explanation in writing of the formula used to calculate the conversion...
49 CFR 375.405 - How must I provide a non-binding estimate?
Code of Federal Regulations, 2011 CFR
2011-10-01
... provide reasonably accurate non-binding estimates based upon both the estimated weight or volume of the... a shipper with an estimate based on volume that will later be converted to a weight-based rate, you must provide the shipper an explanation in writing of the formula used to calculate the conversion...
49 CFR 375.405 - How must I provide a non-binding estimate?
Code of Federal Regulations, 2010 CFR
2010-10-01
... provide reasonably accurate non-binding estimates based upon both the estimated weight or volume of the... a shipper with an estimate based on volume that will later be converted to a weight-based rate, you must provide the shipper an explanation in writing of the formula used to calculate the conversion...
NASA Astrophysics Data System (ADS)
Wang, Gaili; Liu, Liping; Ding, Yuanyuan
2012-05-01
The errors in radar quantitative precipitation estimation consist not only of systematic biases and random noise but also of spatially nonuniform biases in radar rainfall at individual rain-gauge stations. In this study, a real-time adjustment to the radar reflectivity-rainfall rate (Z-R) relationship scheme and a gauge-corrected, radar-based estimation scheme with inverse distance weighting interpolation were developed. Based on the characteristics of the two schemes, a two-step correction technique for radar quantitative precipitation estimation is proposed. To minimize the errors between radar quantitative precipitation estimates and rain gauge observations, the real-time adjustment to the Z-R relationship scheme is used to remove systematic bias in the time domain. The gauge-corrected, radar-based estimation scheme is then used to eliminate nonuniform errors in space. Based on radar data and rain gauge observations near the Huaihe River, the two-step correction technique was evaluated using two heavy-precipitation events. The results show that the proposed scheme not only mitigated the underestimation of rainfall but also reduced the root-mean-square error and the mean relative error of radar-rain gauge pairs.
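The spatial step of such a gauge-corrected estimate can be sketched as inverse-distance-weighting interpolation of the gauge-radar bias onto the radar grid (a generic sketch, not the paper's implementation; the distance exponent `power=2` and whether the bias is additive or multiplicative are assumptions):

```python
import numpy as np

def idw_bias_field(gauge_xy, gauge_bias, grid_xy, power=2.0):
    """Interpolate gauge-radar bias values onto grid points by inverse
    distance weighting: w_i = 1/d_i**power, normalized to sum to 1."""
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)          # avoid division by zero at gauge sites
    w = 1.0 / d ** power
    return (w * gauge_bias).sum(axis=1) / w.sum(axis=1)

# Two hypothetical gauges with multiplicative biases 0.8 and 1.2
gauges = np.array([[0.0, 0.0], [1.0, 0.0]])
bias = np.array([0.8, 1.2])
grid = np.array([[0.0, 0.0], [0.5, 0.0]])
est = idw_bias_field(gauges, bias, grid)
```

Because the weights are a convex combination, the interpolated field stays within the range of the gauge biases: a grid point at a gauge reproduces that gauge's bias, and the midpoint gets the average.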
Zhang, Tianhao; Liu, Gang; Zhu, Zhongmin; Gong, Wei; Ji, Yuxi; Huang, Yusi
2016-01-01
The real-time estimation of ambient particulate matter with diameter no greater than 2.5 μm (PM2.5) is currently quite limited in China. A semi-physical geographically weighted regression (GWR) model was adopted to estimate PM2.5 mass concentrations at the national scale using the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth product fused by the Dark Target (DT) and Deep Blue (DB) algorithms, combined with meteorological parameters. The fitting results could explain over 80% of the variability in the corresponding PM2.5 mass concentrations; the model tends to overestimate when measurements are low and to underestimate when measurements are high. Based on World Health Organization standards, the results indicate that most regions in China suffered severe PM2.5 pollution during winter. Seasonal average PM2.5 mass concentrations predicted by the model indicate that residential regions, namely the Jing-Jin-Ji Region and Central China, faced a serious challenge from fine particles. Moreover, estimation deviation, caused primarily by the spatially uneven distribution of monitoring sites and by elevation changes within relatively small regions, is discussed. In summary, real-time PM2.5 was estimated effectively by the satellite-based semi-physical GWR model, and the results could provide reasonable references for assessing health impacts and offer guidance on air quality management in China. PMID:27706054
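The core of a GWR model is a separate weighted least-squares fit at each location, with observations down-weighted by distance. A minimal single-location sketch (generic GWR, not the paper's semi-physical model; the Gaussian kernel and bandwidth are illustrative choices):

```python
import numpy as np

def gwr_predict(coords, X, y, target, bandwidth):
    """Local coefficients at `target` via weighted least squares with a
    Gaussian distance kernel: w_i = exp(-0.5*(d_i/bandwidth)**2)."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(X)), X])     # intercept + predictor(s)
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta                                    # [intercept, slope, ...]

# Synthetic check: with a globally linear relationship y = 2 + 3x,
# the local fit should recover the same coefficients at any target.
rng = np.random.default_rng(1)
coords = rng.random((50, 2))
X = rng.random(50)
y = 2.0 + 3.0 * X
beta = gwr_predict(coords, X, y, np.array([0.5, 0.5]), bandwidth=0.3)
```

In a real PM2.5 application, the predictors would be AOD and meteorological variables, and the coefficients would vary smoothly across the monitoring network.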
Datta, Surjya Narayan; Kaur, Vaneet Inder; Dhawan, Asha; Jassal, Geeta
2013-01-01
A comparative study was conducted to assess the efficacy of different feeding regimes on the growth of Channa punctata. Six iso-proteinous diets were prepared using different agro-industrial by-products. Maximum weight gain was recorded with the diet containing 66.75% rice bran, 11.50% mustard cake, 23.0% groundnut cake, 5% molasses, 1.5% vitamin-mineral mixture, and 0.5% salt, with a specific growth rate of 0.408. The values of the exponent 'b' for the experimental fish ranged from 2.7675 to 4.3922. The condition factor 'K' of all experimental fish was above 1.0 (1.094-1.235), indicating robustness or well-being of the experimental fish.
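The exponent 'b' and condition factor 'K' above can be computed from length-weight data; a sketch assuming the standard fisheries conventions, the length-weight relation W = aL^b fitted on log-transformed data and Fulton's condition factor K = 100W/L³ (W in g, L in cm), which the abstract does not spell out:

```python
import numpy as np

def length_weight_exponent(lengths_cm, weights_g):
    """Fit W = a * L**b by linear regression on log-transformed data;
    returns (a, b). b near 3 indicates isometric growth."""
    b, log_a = np.polyfit(np.log(lengths_cm), np.log(weights_g), 1)
    return np.exp(log_a), b

def fulton_condition_factor(weight_g, length_cm):
    """Fulton's condition factor K = 100 * W / L**3."""
    return 100.0 * weight_g / length_cm ** 3

# Synthetic isometric data: W = 0.01 * L**3 should recover a=0.01, b=3
L = np.array([10.0, 12.0, 15.0, 20.0])
W = 0.01 * L ** 3.0
a, b = length_weight_exponent(L, W)
```

K > 1 for all fish, as reported, would indicate heavier-than-expected fish for their length under this convention.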
Mertens, Bart J A; Jacobs, Wilco C H; Brand, Ronald; Peul, Wilco C
2014-01-01
We consider a re-analysis of the wait-and-see (control) arm of a recent clinical trial on sciatica. While the original randomised trial was designed to evaluate the public policy effect of a conservative wait-and-see approach versus early surgery, we investigate the impact of surgery at the individual patient level in a re-analysis of the wait-and-see group data. Both marginal structural model re-weighted estimates and propensity score adjusted analyses are presented. Results indicate that patients with a high propensity to receive surgery may benefit at 2 years from delayed disc surgery.
Guan, Yihong; Zhu, Qinfang; Huang, Delai; Zhao, Shuyi; Jan Lo, Li; Peng, Jinrong
2015-01-01
The molecular weight (MW) of a protein can be predicted from its amino acid (AA) composition. However, in many cases a non-chemically modified protein shows an SDS PAGE-displayed MW larger than its predicted size. Some reports have linked this to a high content of acidic AA in the protein; however, the exact relationship between acidic AA composition and the SDS PAGE-displayed MW has not been established. Zebrafish nucleolar protein Def is composed of 753 AA and shows an SDS PAGE-displayed MW approximately 13 kDa larger than its predicted MW. The first 188 AA of Def constitute a glutamate-rich region containing ~35.6% acidic AA. In this report, we analyzed the relationship between the SDS PAGE-displayed MW of thirteen peptides derived from Def and the AA composition of each peptide. We found that the difference between the predicted and SDS PAGE-displayed MW shows a linear correlation with the percentage of acidic AA that fits the equation y = 276.5x - 31.33 (x represents the percentage of acidic AA, 11.4% ≤ x ≤ 51.1%; y represents the average ΔMW per AA). We demonstrated that this equation could be applied to predict the SDS PAGE-displayed MW of thirteen different natural acidic proteins. PMID:26311515
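Applying the empirical relation is straightforward; a sketch, assuming x is the acidic-AA fraction expressed as a decimal (the abstract's 11.4%-51.1% range) and y is the average MW shift per residue in daltons:

```python
def sds_page_displayed_mw(predicted_mw_da, n_residues, acidic_fraction):
    """Estimate the SDS PAGE-displayed MW from the empirical relation
    y = 276.5*x - 31.33, where x is the fraction of acidic residues
    (valid for 0.114 <= x <= 0.511) and y is the assumed per-residue
    MW shift in Da; the displayed MW is predicted MW + y * n_residues."""
    if not 0.114 <= acidic_fraction <= 0.511:
        raise ValueError("acidic fraction outside the fitted range")
    delta_per_aa = 276.5 * acidic_fraction - 31.33
    return predicted_mw_da + delta_per_aa * n_residues
```

As a sanity check on the units: Def's reported ~13 kDa shift over 753 residues is ~17.3 Da per residue, which the equation reproduces at an acidic fraction of ~0.176, inside the fitted range.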
Inertial Estimator Learning Automata
NASA Astrophysics Data System (ADS)
Zhang, Junqi; Ni, Lina; Xie, Chen; Gao, Shangce; Tang, Zheng
This paper presents an inertial estimator learning automata scheme by which both the short-term and long-term perspectives of the environment can be incorporated in the stochastic estimator: the long-term information is crystallized in the running reward-probability estimates, and the short-term information is used by considering whether the most recent response was a reward or a penalty. Thus, when the short-term perspective is considered, the stochastic estimator becomes pertinent in the context of estimator algorithms. The proposed automata employ an inertial weight estimator as the short-term perspective to achieve rapid and accurate convergence when operating in stationary random environments. Under the proposed inertial estimator scheme, the estimates of the reward probabilities of actions are affected by the last response from the environment. In this way, actions that have recently received a positive response from the environment have the opportunity to be estimated as "optimal", to increase their choice probability, and consequently to be selected. The estimates become more reliable, and the automaton rapidly and accurately converges to the optimal action. The asymptotic behavior of the proposed scheme is analyzed, and it is proved to be ε-optimal in every stationary random environment. Extensive simulation results indicate that the proposed algorithm converges faster than the traditional stochastic-estimator-based SERI scheme and the deterministic-estimator-based DGPA and DPRI schemes when operating in stationary random environments.
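The abstract does not specify the inertial update itself; as context, the estimator-automata family it extends can be sketched with a generic pursuit-style automaton that maintains running reward-probability estimates and moves the action-probability vector toward the current best estimate (a sketch only; `rate`, `init_samples`, and the initialization are illustrative choices, not the paper's algorithm):

```python
import numpy as np

def pursuit_la(reward_probs, steps=2000, rate=0.05, init_samples=30, seed=0):
    """Generic pursuit (estimator-based) learning automaton: keep running
    reward estimates per action and shift the choice-probability vector
    toward the action with the highest current estimate."""
    rng = np.random.default_rng(seed)
    k = len(reward_probs)
    counts = np.full(k, init_samples, dtype=float)
    rewards = np.array([rng.binomial(init_samples, q) for q in reward_probs], float)
    p = np.full(k, 1.0 / k)
    for _ in range(steps):
        a = rng.choice(k, p=p)
        r = rng.random() < reward_probs[a]      # stationary random environment
        counts[a] += 1
        rewards[a] += r
        est = rewards / counts                  # running reward-probability estimates
        p = (1 - rate) * p + rate * np.eye(k)[np.argmax(est)]  # pursue the best
    return p

p = pursuit_la([0.3, 0.7])
```

The paper's contribution is to bias these estimates with the most recent response (the short-term, inertial term) so that convergence is faster than with the long-term estimates alone.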
Rühm, W; Walsh, L
2007-01-01
Currently, most analyses of the A-bomb survivors' solid tumour and leukaemia data are based on a constant neutron relative biological effectiveness (RBE) value of 10 that is applied to all survivors, independent of their distance to the hypocentre at the time of bombing. The results of these analyses are then used as a major basis for current risk estimates suggested by the International Commission on Radiological Protection (ICRP) for use in international safety guidelines. It is shown here that (i) a constant value of 10 is not consistent with weighting factors recommended by the ICRP for neutrons and (ii) it does not account for the hardening of the neutron spectra in Hiroshima and Nagasaki, which takes place with increasing distance from the hypocentres. The purpose of this paper is to present new RBE values for the neutrons, calculated as a function of distance from the hypocentres for both cities that are consistent with the ICRP60 neutron weighting factor. If based on neutron spectra from the DS86 dosimetry system, these calculations suggest values of about 31 at 1000 m and 23 at 2000 m ground range in Hiroshima, while the corresponding values for Nagasaki are 24 and 22. If the neutron weighting factor that is consistent with ICRP92 is used, the corresponding values are about 23 and 21 for Hiroshima and 21 and 20 for Nagasaki, respectively. It is concluded that the current risk estimates will be subject to some changes in view of the changed RBE values. This conclusion does not change significantly if the new doses from the Dosimetry System DS02 are used.
Chioccioli, Maurizio; Hankamer, Ben; Ross, Ian L.
2014-01-01
Dry weight biomass is an important parameter in algaculture. Direct measurement requires weighing milligram quantities of dried biomass, which is problematic for small-volume systems containing few cells, such as laboratory studies and high-throughput assays in microwell plates. In these cases indirect methods must be used, introducing measurement artefacts that vary in severity with the cell type and conditions employed. Here, we utilise flow cytometry pulse width data for the estimation of cell density and biomass, using Chlorella vulgaris and Chlamydomonas reinhardtii as model algae, and compare it to optical density methods. Measurement of cell concentration by flow cytometry was shown to be more sensitive than optical density at 750 nm (OD750) for monitoring culture growth. However, neither cell concentration nor optical density correlates well with biomass when growth conditions vary. Compared to growth in TAP (tris-acetate-phosphate) medium, C. vulgaris cells grown in TAP + glucose displayed a slower cell division rate and a 2-fold increase in dry biomass accumulation, accompanied by increased cellular volume. Laser scattering characteristics during flow cytometry were used to estimate cell diameters, and an empirical but nonlinear relationship was shown between flow cytometric pulse width and dry weight biomass per cell. This relationship could be linearised by the use of hypertonic conditions (1 M NaCl) to dehydrate the cells, as shown by density gradient centrifugation. Flow cytometry for biomass estimation is easy to perform, sensitive, and offers more comprehensive information than optical density measurements. In addition, periodic flow cytometry measurements can be used to calibrate OD750 measurements for both convenience and accuracy. This approach is particularly useful for small samples and where cellular characteristics, especially cell size, are expected to vary during growth. PMID
Rodríguez-Almeida, F A; Van Vleck, L D; Gregory, K E
1997-05-01
Direct and maternal breed effects on birth and 200-d weights were estimated for nine parental breeds (Hereford [H], Angus [A], Braunvieh [B], Limousin [L], Charolais [C], Simmental [S], Gelbvieh [G], Red Poll [R], and Pinzgauer [P]) that contributed to three composite populations (MARC I = 1/4B, 1/4C, 1/4L, 1/8H, 1/8A; MARC II = 1/4G, 1/4S, 1/4H, 1/4A; and MARC III = 1/4R, 1/4P, 1/4H, 1/4A). Records from each population, the composite plus pure breeds and crosses used to create each composite, were analyzed separately. The animal model included fixed effects of contemporary group (birth year-sex-dam age), proportions of individual and maternal heterosis and breed inheritance as covariates, and random effects of additive direct genetic (a) and additive maternal genetic (m) with covariance (a,m), permanent environment, and residual. Sampling correlations among estimates of genetic fixed effects were large, especially between direct and maternal heterosis and between direct and maternal breed genetic effects for the same breed, which were close to -1. This resulted in some large estimates with opposite sign and large standard errors for direct and maternal breed genetic effects. Data from a diallel experiment with H, A, B, and R breeds, from grading up and from a top cross experiment were required to separate breed effects satisfactorily into direct and maternal genetic effects. Results indicate that estimation of direct and maternal breed effects needed to predict hybrid EPD for multibreed populations from field data may not be possible. Information from designed crossbreeding experiments will need to be incorporated in some way.
Reichenheim, M E; Best, N G
2000-01-01
Victora et al. (1998) proposed the use of low weight-for-age prevalence to estimate the prevalence of height-for-age deficit in Brazilian children. This procedure was justified by the need to simplify methods used in the context of community health programs. From the same perspective, the present article broadens this proposal by using a Bayesian approach (based on Markov Chain Monte Carlo (MCMC) methods) to deal with the imprecision resulting from Victora et al.'s model. In order to avoid invalid estimated prevalence values which can occur with the original linear model, truncation or a logit transformation of the prevalences are suggested. The Bayesian approach is illustrated using a community study as an example. Imprecision arising from methodological complexities in the community study design, such as multi-stage sampling and clustering, is easily handled within the Bayesian framework by introducing a hierarchical or multilevel model structure. Since growth deficit was also evaluated in the community study, the article may also serve to validate the procedure proposed by Victora et al.
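The logit transformation mentioned above guarantees valid prevalence predictions; a minimal sketch of the idea (the coefficients `alpha` and `beta` below are hypothetical placeholders, not the fitted values from either study):

```python
import math

def logit(p):
    """Log-odds transform, mapping (0, 1) onto the real line."""
    return math.log(p / (1.0 - p))

def inv_logit(z):
    """Inverse logit, mapping any real z back into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_stunting_prevalence(underweight_prev, alpha=0.5, beta=1.1):
    """Predict height-for-age deficit prevalence from weight-for-age
    prevalence via a linear model on the logit scale. Unlike the raw
    linear model, the prediction can never leave (0, 1)."""
    return inv_logit(alpha + beta * logit(underweight_prev))
```

With a raw linear model, extreme inputs can produce prevalences below 0% or above 100%, which is exactly the failure mode the truncation or logit transformation is meant to remove.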
NASA Astrophysics Data System (ADS)
Lemieux, Louis
2001-07-01
A new fully automatic algorithm for the segmentation of the brain and cerebro-spinal fluid (CSF) from T1-weighted volume MRI scans of the head was developed specifically in the context of serial intra-cranial volumetry. The method is an extension of a previously published brain extraction algorithm. The brain mask is used as a basis for CSF segmentation based on morphological operations, automatic histogram analysis, and thresholding. Brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Grey matter (GM), white matter (WM), and CSF volumes are calculated based on a model of intensity probability distribution that includes partial volume effects. Accuracy was assessed using a digital phantom scan. Reproducibility was assessed by segmenting pairs of scans from 20 normal subjects scanned 8 months apart and 11 patients with epilepsy scanned 3.5 years apart. Segmentation accuracy as measured by overlap was 98% for the brain and 96% for the intra-cranial tissues. The volume errors were: total brain (TBV): −1.0%; intra-cranial (ICV): +0.1%; CSF: +4.8%. For repeated scans, matching resulted in improved reproducibility. In the controls, the coefficient of reliability (CR) was 1.5% for the TBV and 1.0% for the ICV. In the patients, the CR for the ICV was 1.2%.
You, Wei; Zang, Zengliang; Zhang, Lifeng; Li, Yi; Wang, Weiqi
2016-05-01
Taking advantage of their continuous spatial coverage, satellite-derived aerosol optical depth (AOD) products have been widely used to assess the spatial and temporal characteristics of fine particulate matter (PM2.5) on the ground and its effects on human health. However, national-scale ground-level PM2.5 estimation is still very limited because of the lack of ground PM2.5 measurements to calibrate the model in China. In this study, a national-scale geographically weighted regression (GWR) model was developed to estimate ground-level PM2.5 concentration based on satellite AODs, newly released nationwide hourly PM2.5 concentrations, and meteorological parameters. The results showed good agreement between satellite-retrieved and ground-observed PM2.5 concentrations at 943 stations in China. The overall cross-validation (CV) R² is 0.76 and the root-mean-squared prediction error (RMSE) is 22.26 μg/m³ for MODIS-derived AOD. The MISR-derived AOD also exhibits comparable performance, with a CV R² of 0.81 and an RMSE of 27.46 μg/m³. Annual PM2.5 concentrations retrieved by either MODIS or MISR AOD indicated that most of the residential community areas exceeded the new annual Chinese PM2.5 National Standard level 2. These results suggest that this approach is useful for estimating large-scale ground-level PM2.5 distributions, especially for regions without PM monitoring sites. PMID:26780051
Kriging without negative weights
Szidarovszky, F.; Baafi, E.Y.; Kim, Y.C.
1987-08-01
Under a constant drift, the linear kriging estimator is considered as a weighted average of n available sample values. Kriging weights are determined such that the estimator is unbiased and optimal. To meet these requirements, negative kriging weights are sometimes found. Use of negative weights can produce negative block grades, which makes no practical sense. In some applications, all kriging weights may be required to be nonnegative. In this paper, a derivation of a set of nonlinear equations with the nonnegative constraint is presented. A numerical algorithm also is developed for the solution of the new set of kriging equations.
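The two requirements, unbiasedness (weights summing to one) and optimality, are what admit negative weights in the first place. A minimal ordinary-kriging sketch (assumed exponential variogram and a hypothetical 1-D sample layout) shows the standard linear system; the clip-and-renormalize step at the end is only a naive fix for comparison, whereas the paper instead derives a set of nonlinear kriging equations with an explicit nonnegativity constraint:

```python
import numpy as np

def ordinary_kriging_weights(xs, x0, gamma):
    """Solve the ordinary kriging system for 1-D sample locations xs and
    estimation point x0; gamma is a variogram function of distance. The
    bordered row/column enforces the unbiasedness constraint sum(w) = 1."""
    n = len(xs)
    A = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            A[i, j] = gamma(abs(xs[i] - xs[j]))
    A[:n, n] = 1.0                                # Lagrange multiplier column
    A[n, :n] = 1.0                                # unbiasedness row
    b = np.append([gamma(abs(x - x0)) for x in xs], 1.0)
    return np.linalg.solve(A, b)[:n]              # weights; may be negative

gamma = lambda h: 1.0 - np.exp(-h / 2.0)          # assumed exponential variogram
w = ordinary_kriging_weights([0.0, 1.0, 1.1, 6.0], 2.0, gamma)
w_nn = np.clip(w, 0.0, None)
w_nn /= w_nn.sum()                                # naive nonnegative renormalization
```

The naive clipping preserves the sum-to-one property but sacrifices optimality, which is why a properly constrained solution like the one derived in the paper is preferable.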
Donato, Mary M.
2006-01-01
Streamflow and trace-metal concentration data collected at 10 locations in the Spokane River basin of northern Idaho and eastern Washington during 1999-2004 were used as input for the U.S. Geological Survey software, LOADEST, to estimate annual loads and mean flow-weighted concentrations of total and dissolved cadmium, lead, and zinc. Cadmium composed less than 1 percent of the total metal load at all stations; lead constituted from 6 to 42 percent of the total load at stations upstream from Coeur d'Alene Lake and from 2 to 4 percent at stations downstream of the lake. Zinc composed more than 90 percent of the total metal load at 6 of the 10 stations examined in this study. Trace-metal loads were lowest at the station on Pine Creek below Amy Gulch, where the mean annual total cadmium load for 1999-2004 was 39 kilograms per year (kg/yr), the mean estimated total lead load was about 1,700 kg/yr, and the mean annual total zinc load was 14,000 kg/yr. The trace-metal loads at stations on North Fork Coeur d'Alene River at Enaville, Ninemile Creek, and Canyon Creek also were relatively low. Trace-metal loads were highest at the station at Coeur d'Alene River near Harrison. The mean annual total cadmium load was 3,400 kg/yr, the mean total lead load was 240,000 kg/yr, and the mean total zinc load was 510,000 kg/yr for 1999-2004. Trace-metal loads at the station at South Fork Coeur d'Alene River near Pinehurst and the three stations on the Spokane River downstream of Coeur d'Alene Lake also were relatively high. Differences in metal loads, particularly lead, between stations upstream and downstream of Coeur d'Alene Lake likely are due to trapping and retention of metals in lakebed sediments. LOADEST software was used to estimate loads for water years 1999-2001 for many of the same sites discussed in this report. Overall, results from this study and those from a previous study are in good agreement. Observed differences between the two studies are attributable to streamflow
Weighting Regressions by Propensity Scores
ERIC Educational Resources Information Center
Freedman, David A.; Berk, Richard A.
2008-01-01
Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…
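A small synthetic sketch (all numbers invented) illustrates the tradeoff the abstract describes: inverse-propensity weighting removes confounding bias from the naive comparison, at the cost of large weights (up to 1/0.2 = 5 here) that inflate random error:

```python
import numpy as np

# Simulated observational data; the true treatment effect is set to 2.0.
rng = np.random.default_rng(0)
n = 20000
x = rng.integers(0, 2, n)                    # binary confounder
p_true = np.where(x == 1, 0.8, 0.2)          # treatment assignment depends on x
t = rng.random(n) < p_true                   # treatment indicator
y = 2.0 * t + 1.0 * x + rng.normal(0, 0.5, n)

# Propensity scores estimated within strata of x (simple group means)
p_hat = np.where(x == 1, t[x == 1].mean(), t[x == 0].mean())

naive = y[t].mean() - y[~t].mean()           # biased upward by the confounder
ipw = np.mean(t * y / p_hat) - np.mean((1 - t) * y / (1 - p_hat))
```

Here the naive difference in means is inflated by roughly 0.6 because treated units are more likely to have x = 1, while the weighted estimate recovers the true effect of 2.0, but with a larger variance driven by the extreme weights.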
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
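As a baseline for the construction the paper improves on, a standard monotone cubic Hermite interpolant can be sketched with a harmonic-mean slope limiter (a simple Fritsch-Carlson-style limiter; this is the second-order-at-extrema baseline, not the paper's median-based higher-order algorithm):

```python
import numpy as np

def monotone_cubic(x, y, xq):
    """Monotone piecewise-cubic Hermite interpolation with harmonic-mean
    limited slopes; flattening at sign changes enforces monotonicity."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = np.diff(y) / np.diff(x)                       # secant slopes
    m = np.empty_like(y)
    m[0], m[-1] = d[0], d[-1]                         # one-sided end slopes
    for i in range(1, len(x) - 1):
        if d[i - 1] * d[i] <= 0:
            m[i] = 0.0                                # local extremum: flatten
        else:
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
    k = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[k + 1] - x[k]
    t = (xq - x[k]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1                     # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[k] + h10 * h * m[k] + h01 * y[k + 1] + h11 * h * m[k + 1]

# Monotone data with an abrupt flat section; the interpolant must not overshoot
xk = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
yk = np.array([0.0, 1.0, 4.0, 4.2, 10.0])
vals = monotone_cubic(xk, yk, np.linspace(0.0, 4.0, 401))
```

The harmonic-mean limiter keeps each slope below twice the adjacent secants, which is sufficient for monotonicity; the accuracy loss near strict extrema comes precisely from the flattening step, which the paper's relaxed geometric constraint avoids.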
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
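The notion of order of accuracy can be checked empirically: if the error behaves as C·h^p, halving h should reduce it by about 2^p. A sketch using generic textbook central-difference stencils (not the paper's aeroacoustic schemes):

```python
import numpy as np

def central_diff(f, x, h, order):
    """First-derivative central differences: standard second- and
    fourth-order stencils (generic formulas for illustration)."""
    if order == 2:
        return (f(x + h) - f(x - h)) / (2 * h)
    if order == 4:
        return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)
    raise ValueError(order)

# Measure the observed convergence rate p from errors at h and h/2
f, x0, exact = np.sin, 0.7, np.cos(0.7)
rates = {}
for order in (2, 4):
    e1 = abs(central_diff(f, x0, 1e-2, order) - exact)
    e2 = abs(central_diff(f, x0, 5e-3, order) - exact)
    rates[order] = np.log2(e1 / e2)           # observed order of accuracy
```

The same refinement test is how one would verify that a claimed eleventh-order scheme actually delivers eleventh-order convergence before trusting it over O(10^6) periods of propagation.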
Hwang, Jun Hyun; Ryu, Dong Hee; Park, Soon-Woo
2015-08-01
We investigated the interaction effect between body weight perception and chronic disease comorbidities on body weight control behavior in overweight/obese Korean adults. We analyzed data from 9,138 overweight/obese adults ≥20 yr of age from a nationally representative cross-sectional survey. Multiple logistic regression using an interaction model was performed to estimate the effect of chronic disease comorbidities on weight control behavior regarding weight perception. Adjusted odds ratios for weight control behavior tended to increase significantly with an increasing number of comorbidities in men regardless of weight perception (P<0.05 for trend), suggesting no interaction. Unlike women who perceived their weight accurately, women who under-perceived their weight did not show significant improvements in weight control behavior even with an increasing number of comorbidities. Thus, a significant interaction between weight perception and comorbidities was found only in women (P=0.031 for interaction). The effect of the relationship between accurate weight perception and chronic disease comorbidities on weight control behavior varied by sex. Improving awareness of body image is particularly necessary for overweight and obese women to prevent complications. PMID:26240477
The value of body weight measurement to assess dehydration in children.
Pruvost, Isabelle; Dubos, François; Chazard, Emmanuel; Hue, Valérie; Duhamel, Alain; Martinot, Alain
2013-01-01
Dehydration secondary to gastroenteritis is one of the most common reasons for office visits and hospital admissions. The indicator most commonly used to estimate dehydration status is acute weight loss. Post-illness weight gain is considered the gold standard to determine the true level of dehydration and is widely used to estimate weight loss in research. To determine the value of post-illness weight gain as a gold standard for acute dehydration, we conducted a prospective cohort study in which 293 children, aged 1 month to 2 years, with acute diarrhea were followed for 7 days during a 3-year period. The main outcome measures were an accurate pre-illness weight (if available within 8 days before the diarrhea), post-illness weight, and theoretical weight (predicted from the child's individual growth chart). Post-illness weight was measured for 231 (79%) and both theoretical and post-illness weights were obtained for 111 (39%). Only 62 (21%) had an accurate pre-illness weight. The correlation between post-illness and theoretical weight was excellent (0.978), but bootstrapped linear regression analysis showed that post-illness weight underestimated theoretical weight by 0.48 kg (95% CI: 0.06-0.79, p<0.02). The mean difference in the fluid deficit calculated was 4.0% of body weight (95% CI: 3.2-4.7, p<0.0001). Theoretical weight overestimated accurate pre-illness weight by 0.21 kg (95% CI: 0.08-0.34, p = 0.002). Post-illness weight underestimated pre-illness weight by 0.19 kg (95% CI: 0.03-0.36, p = 0.02). The prevalence of 5% dehydration according to post-illness weight (21%) was significantly lower than the prevalence estimated by either theoretical weight (60%) or clinical assessment (66%, p<0.0001). These data suggest that post-illness weight is of little value as a gold standard to determine the true level of dehydration. The performance of dehydration signs or scales determined by using post-illness weight as a gold standard has to be reconsidered.
Sran, Meena M; Khan, Karim M; Keiver, Kathy; Chew, Jason B; McKay, Heather A; Oxland, Thomas R
2005-12-01
Biomechanical studies of the thoracic spine often scan cadaveric segments by dual energy X-ray absorptiometry (DXA) to obtain measures of bone mass. Only one study has reported the accuracy of lateral scans of thoracic vertebral bodies. The accuracy of DXA scans of thoracic spine segments and of anterior-posterior (AP) thoracic scans has not been investigated. We have examined the accuracy of AP and lateral thoracic DXA scans by comparison with ash weight, the gold standard for measuring bone mineral content (BMC). We have also compared three methods of estimating volumetric bone mineral density (vBMD) with a novel standard: ash weight (g) divided by bone volume (cm³) as measured by computed tomography (CT). Twelve T5-T8 spine segments were scanned with DXA (AP and lateral) and CT. The T6 vertebrae were excised, the posterior elements removed, and the vertebral bodies ashed in a muffle furnace. We proposed a new method of estimating vBMD and compared it with two previously published methods. BMC values from lateral DXA scans displayed the strongest correlation with ash weight (r=0.99) and were on average 12.8% higher (p<0.001). As expected, BMC (AP or lateral) was more strongly correlated with ash weight than areal bone mineral density (aBMD; AP: r=0.54, or lateral: r=0.71) or estimated vBMD. Estimates of vBMD from any of the three methods were strongly and similarly correlated with volumetric BMD calculated by dividing ash weight by CT-derived volume. These data suggest that readily available DXA scanning is an appropriate surrogate measure for thoracic spine bone mineral and that the lateral scan might be the scan method of choice. PMID:15616862
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods for performing FCI calculations are considered in detail. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, several problems have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
ERIC Educational Resources Information Center
Nutter, June
1995-01-01
Secondary level physical education teachers can have their students use math concepts while working out on the weight-room equipment. The article explains how students can reinforce math skills while weightlifting by estimating their strength, estimating their power, or calculating other formulas. (SM)
A Simple Model Predicting Individual Weight Change in Humans
Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.
2010-01-01
Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates of final weight with data from two recent underfeeding studies and one overfeeding study. The mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating the reliability of individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319
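The core of such a one-dimensional energy-balance model can be sketched as a single ordinary differential equation integrated forward in time. Everything below (the energy density RHO, the linear expenditure term K + GAMMA*W, and all numeric values) is an illustrative assumption, not the model fitted in the study:

```python
# Illustrative one-dimensional energy-balance weight model (forward Euler).
# RHO, K, and GAMMA are hypothetical placeholders, not estimated values.

RHO = 7700.0     # kcal per kg of body-weight change (assumed energy density)
K = 800.0        # kcal/day baseline expenditure (assumed)
GAMMA = 22.0     # kcal/day of expenditure per kg of body weight (assumed)

def simulate_weight(w0_kg, intake_kcal_per_day, days, dt=1.0):
    """Integrate dW/dt = (intake - expenditure(W)) / RHO with forward Euler."""
    w = w0_kg
    for _ in range(int(days / dt)):
        expenditure = K + GAMMA * w
        w += dt * (intake_kcal_per_day - expenditure) / RHO
    return w

# A sustained 500 kcal/day deficit relative to the steady-state intake:
w0 = 80.0
steady_intake = K + GAMMA * w0            # intake that holds 80 kg steady
w_final = simulate_weight(w0, steady_intake - 500.0, days=180)
```

Because expenditure falls as weight falls, the trajectory approaches a new equilibrium weight rather than declining linearly, which is the qualitative behavior such models are designed to capture.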
... obese. Achieving a healthy weight can help you control your cholesterol, blood pressure and blood sugar. It ... use more calories than you eat. A weight-control strategy might include Choosing low-fat, low-calorie ...
... sign of a medical problem. Causes for sudden weight loss can include Thyroid problems Cancer Infectious diseases Digestive diseases Certain medicines Sudden weight gain can be due to medicines, thyroid problems, ...
Calabrò, P S; Moraci, N; Suraci, P
2012-03-15
This paper presents the results of laboratory column tests aimed at defining the optimum weight ratio of zero-valent iron (ZVI)/pumice granular mixtures to be used in permeable reactive barriers (PRBs) for the removal of nickel from contaminated groundwater. The tests were carried out by feeding the columns with aqueous solutions of nickel nitrate at concentrations of 5 and 50 mg/l, using three ZVI/pumice granular mixtures at various weight ratios (10/90, 30/70 and 50/50), for a total of six column tests; two additional tests were carried out using ZVI alone. The most successful compromise between reactivity (higher ZVI content) and long-term hydraulic performance (higher pumice content) appears to be the ZVI/pumice granular mixture with a 30/70 weight ratio.
Accurate masses for dispersion-supported galaxies
NASA Astrophysics Data System (ADS)
Wolf, Joe; Martinez, Gregory D.; Bullock, James S.; Kaplinghat, Manoj; Geha, Marla; Muñoz, Ricardo R.; Simon, Joshua D.; Avedo, Frank F.
2010-08-01
We derive an accurate mass estimator for dispersion-supported stellar systems and demonstrate its validity by analysing resolved line-of-sight velocity data for globular clusters, dwarf galaxies and elliptical galaxies. Specifically, by manipulating the spherical Jeans equation we show that the mass enclosed within the 3D deprojected half-light radius r_1/2 can be determined with only mild assumptions about the spatial variation of the stellar velocity dispersion anisotropy, as long as the projected velocity dispersion profile is fairly flat near the half-light radius, as is typically observed. We find M_1/2 = 3 G^-1 ⟨σ²_los⟩ r_1/2 ≈ 4 G^-1 ⟨σ²_los⟩ R_e, where ⟨σ²_los⟩ is the luminosity-weighted square of the line-of-sight velocity dispersion and R_e is the 2D projected half-light radius. While deceptively familiar in form, this formula is not the virial theorem, which cannot be used to determine accurate masses unless the radial profile of the total mass is known a priori. We utilize this finding to show that all of the Milky Way dwarf spheroidal galaxies (MW dSphs) are consistent with having formed within a halo of mass approximately 3 × 10^9 M_sun, assuming a Λ cold dark matter cosmology. The faintest MW dSphs seem to have formed in dark matter haloes that are at least as massive as those of the brightest MW dSphs, despite the almost five orders of magnitude spread in luminosity between them. We expand our analysis to the full range of observed dispersion-supported stellar systems and examine their dynamical I-band mass-to-light ratios Υ^I_1/2. The Υ^I_1/2 versus M_1/2 relation for dispersion-supported galaxies follows a U shape, with a broad minimum near Υ^I_1/2 ≈ 3 that spans dwarf elliptical galaxies to normal ellipticals, a steep rise to Υ^I_1/2 ≈ 3200 for ultra-faint dSphs and a more shallow rise to Υ^I_1/2 ≈ 800 for galaxy cluster spheroids.
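The half-light mass estimator quoted above is simple enough to evaluate directly. The sketch below uses G in astronomer-friendly units and illustrative dwarf-spheroidal-like numbers (the input values are assumptions, not a fit to any real galaxy):

```python
# Sketch of the half-light mass estimator from the abstract:
#   M_1/2 = 3 G^-1 <sigma_los^2> r_1/2  ~=  4 G^-1 <sigma_los^2> R_e
# Input values are illustrative only.

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def m_half_from_3d(sigma_los_kms, r_half_kpc):
    """Mass within the 3D deprojected half-light radius r_1/2."""
    return 3.0 * sigma_los_kms**2 * r_half_kpc / G

def m_half_from_projected(sigma_los_kms, r_e_kpc):
    """Same mass via the 2D projected half-light radius R_e (r_1/2 ~ 4/3 R_e)."""
    return 4.0 * sigma_los_kms**2 * r_e_kpc / G

# Assumed dwarf-spheroidal-like numbers: sigma_los = 10 km/s, R_e = 0.3 kpc
m = m_half_from_projected(10.0, 0.3)   # of order 10^7 Msun
```

Note that the two forms agree exactly when r_1/2 = (4/3) R_e, which is the relation implied by the two coefficients in the formula.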
Accurate Optical Reference Catalogs
NASA Astrophysics Data System (ADS)
Zacharias, N.
2006-08-01
Current and near-future all-sky astrometric catalogs on the ICRF are reviewed, with emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalog will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the approximately 60 million UCAC stars will be derived by combining UCAC astrometry with available early-epoch data, including as-yet-unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD), which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, and NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near-infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude at the 5 mas level.
ERIC Educational Resources Information Center
Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.
1999-01-01
Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)
Modeling operating weight and axle weight distributions for highway vehicles
Greene, D.L.; Liang, J.C.
1988-07-01
The estimation of highway cost responsibility requires detailed information on vehicle operating weights and axle weights by type of vehicle. Typically, 10-20 vehicle types must be cross-classified by 10-20 registered weight classes and again by 20 or more operating weight categories, resulting in 100-400 relative frequencies to be determined for each vehicle type. For each of these, gross operating weight must be distributed to each axle or axle unit. Given the rarity of many of the heaviest vehicle types, direct estimation of these frequencies and axle weights from traffic classification count statistics and truck weight data may exceed the reliability of even the largest (e.g., 250,000-record) data sources. An alternative is to estimate statistical models of operating weight distributions as functions of registered weight, and models of axle weight shares as functions of operating weight. This paper describes the estimation of such functions using the multinomial logit model (a log-linear model) and the implementation of the modeling framework as a PC-based FORTRAN program. Areas for further research include the addition of highway class and region as explanatory variables in operating weight distribution models, and the development of theory for including registration costs and costs of operating overweight in the modeling framework. 14 refs., 45 figs., 5 tabs.
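The multinomial logit form described above predicts the share of vehicles falling into each operating-weight class as a function of registered weight. A minimal sketch follows; the utility specification and all coefficients are hypothetical placeholders, not the paper's estimated model:

```python
import math

# Hypothetical multinomial logit: P(class k | registered weight w)
#   = exp(a_k + b_k * w) / sum_j exp(a_j + b_j * w)
# Coefficients are illustrative assumptions only.

def operating_weight_shares(registered_weight, alphas, betas):
    """Return the probability of each operating-weight class."""
    utilities = [a + b * registered_weight for a, b in zip(alphas, betas)]
    m = max(utilities)                    # shift by max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Three operating-weight classes (light, medium, heavy); the positive beta on
# the heavy class shifts probability upward as registered weight grows.
shares = operating_weight_shares(registered_weight=36.0,
                                 alphas=[2.0, 1.0, -2.0],
                                 betas=[-0.10, 0.00, 0.08])
```

In a real application the coefficients would be fitted by maximum likelihood to the truck weight data, and the axle-weight-share models would take the same logit form with operating weight as the explanatory variable.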
Accurate inference of shoot biomass from high-throughput images of cereal plants
2011-01-01
With the establishment of advanced technology facilities for high-throughput plant phenotyping, the problem of estimating the biomass of individual plants from their two-dimensional images is becoming increasingly important. The approach predominantly cited in the literature is to estimate the biomass of a plant as a linear function of the projected shoot area in the images. However, the estimation error from this model, which is solely a function of projected shoot area, is large, prohibiting accurate estimation of plant biomass, particularly for salt-stressed plants. In this paper, we propose a method based on plant specific weight for improving the accuracy of the linear model and reducing the estimation bias (the difference between actual shoot dry weight and the shoot dry weight estimated with a predictive model). In the proposed method, we modeled shoot dry weight as a function of plant area and plant age. The data used for developing our model and comparing the results with the linear model were collected from a completely randomized block design experiment. A total of 320 plants from two bread wheat varieties were grown in a supported hydroponics system in a greenhouse. The plants were exposed to two levels of hydroponic salt treatment (NaCl at 0 and 100 mM) for 6 weeks. Five harvests were carried out; each time 64 randomly selected plants were imaged and then harvested to measure the shoot fresh weight and shoot dry weight. The results of statistical analysis showed that with our proposed method most of the observed variance can be explained, and only a small difference between actual and estimated shoot dry weight was obtained. The low estimation bias indicates that our proposed method can be used to estimate the biomass of individual plants regardless of variety or salt treatment. We validated this model on an independent set of barley data. The
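The modelling idea above, predicting shoot dry weight from projected area and age jointly rather than from area alone, can be sketched as follows. The functional form and coefficients are hypothetical placeholders for illustration, not the fitted model from the paper:

```python
# Hypothetical area-only model vs. area-and-age model for shoot dry weight.
# Coefficients are illustrative assumptions, not fitted values.

def dry_weight_area_only(area_cm2, b=0.004):
    """Classic linear model: dry weight proportional to projected area."""
    return b * area_cm2

def dry_weight_area_age(area_cm2, age_days, b0=0.002, b1=0.00008):
    """Specific weight (mass per unit projected area) grows with plant age,
    so the slope of the area relationship depends on age."""
    return (b0 + b1 * age_days) * area_cm2

# Two plants with the same projected area but different ages:
w_young = dry_weight_area_age(500.0, age_days=14)
w_old = dry_weight_area_age(500.0, age_days=42)
```

The point of the age term is that older plants pack more mass into the same projected area, which an area-only linear model cannot represent and which drives its bias.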
Fujita, Masahiro; Yajima, Tomonari; Iijima, Kazuaki; Sato, Kiyoshi
2012-05-01
The uncertainty in pesticide residue levels (UPRL) associated with sampling size was estimated using individual acetamiprid and cypermethrin residue data from preharvested apple, broccoli, cabbage, grape, and sweet pepper samples. The relative standard deviation from the mean of each sampling size (n = 2^x, where x = 1-6) of randomly selected samples was defined as the UPRL for each sampling size. The estimated UPRLs, calculated on the basis of the regulatory sampling size recommended by the OECD Guidelines on Crop Field Trials (weights from 1 to 5 kg and commodity unit numbers from 12 to 24), ranged from 2.1% for cypermethrin in sweet peppers to 14.6% for cypermethrin in cabbage samples. The percentages of commodity exceeding the maximum residue limits (MRLs) specified by the Japanese Food Sanitation Law may be predicted from the equation derived from this study, which was based on samples of various size ranges with mean residue levels below the MRL. The estimated UPRLs confirm that sufficient sampling weight and numbers are required for analysis and/or re-examination of subsamples to provide accurate values of pesticide residue levels for the enforcement of MRLs. The equation derived from the present study would aid the estimation of more accurate residue levels even from small sampling sizes. PMID:22475588
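The uncertainty measure defined above, the relative standard deviation of subsample means at sampling sizes n = 2^x, can be sketched with synthetic data. The residue values and distribution below are illustrative assumptions, not the study's measurements:

```python
import random
import statistics

# Sketch of the UPRL: the relative standard deviation (as a percent of the
# mean) of subsample means, for random subsamples of size n = 2**x.
# Residue values below are synthetic, for illustration only.

def uprl_percent(residues, n, trials=2000, seed=0):
    """RSD (%) of the mean residue over random subsamples of size n."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.sample(residues, n)) for _ in range(trials)]
    return 100.0 * statistics.stdev(means) / statistics.fmean(means)

# Synthetic individual residue levels (mg/kg), right-skewed as residue data
# often are:
rng = random.Random(42)
residues = [rng.lognormvariate(0.0, 0.5) for _ in range(200)]

for x in (1, 3, 6):                       # n = 2, 8, 64
    print(2**x, round(uprl_percent(residues, 2**x), 1))
```

As expected from the standard error of a mean, the computed uncertainty shrinks roughly as 1/sqrt(n), mirroring the paper's finding that larger sampling sizes give more reliable residue levels.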
Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping
2015-06-26
In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate the natural polysaccharides. Then, the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on universal dn/dc for the quantification of polysaccharides and their fractions is simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
ERIC Educational Resources Information Center
Clarke, Doug
1990-01-01
The author, using a weight machine in an airport lounge, varies the machine's input parameters of height and gender to generate data sets of ideal weight. These data are later used at in-service workshops and in both primary and secondary classrooms to explore patterns and make predictions. (JJK)
Automatically determining lag phase in grapes to assist yield estimation practices
Technology Transfer Automated Retrieval System (TEKTRAN)
Estimating grapevine yield is an important though often difficult task. Accurate yield projections ensure that enough physical infrastructure is available to process fruit that cannot be stored for later handling. Crop estimation is based on recording cluster numbers and cluster weights of represent...
Random Weighted Sobolev Inequalities and Application to Quantum Ergodicity
NASA Astrophysics Data System (ADS)
Robert, Didier; Thomann, Laurent
2015-05-01
This paper is a continuation of Poiret et al. (Ann Henri Poincaré 16:651-689, 2015), where we studied a randomisation method based on the Laplacian with harmonic potential. Here we extend our previous results to the case of any polynomial and confining potential V on . We construct measures, under concentration type assumptions, on the support of which we prove optimal weighted Sobolev estimates on . This construction relies on accurate estimates on the spectral function in a non-compact configuration space. Then we prove random quantum ergodicity results without specific assumption on the classical dynamics. Finally, we prove that almost all bases of Hermite functions are quantum uniquely ergodic.
NASA Astrophysics Data System (ADS)
Ghelawi, M. A.; Moore, J. S.; Bisby, R. H.; Dodd, N. J. F.
2001-01-01
Food spoilage is caused by infestation by insects, contamination by bacteria and fungi, and deterioration by enzymes. In the third world, it has been estimated that 25% of agricultural products are lost before they reach the market. One way to decrease such losses is treatment with ionising radiation, and maximum permitted doses have been established for treatment of a wide variety of foods. For dates this dose is 2.0 kGy. Detection of irradiated foods is now essential, and here we have used ESR to detect and estimate the dose received by a single date. The ESR spectrum of an unirradiated date stone contains a single line at g=2.0045 (signal A). Irradiation up to 2.0 kGy induces radical formation with g=1.9895, g=2.0159 (signal C), and g=1.9984 (signal B) at high field. The lines with g=1.9895 and 2.0159 are readily detected and stable at room temperature for at least 27 months for samples irradiated up to this dose. The yield of the radicals responsible for these lines increases linearly up to a dose of 5.0 kGy, as evidenced by the linear increase in their intensity. In blind trials of 21 unirradiated and irradiated dates we were able to identify an irradiated sample with 100% accuracy and to estimate the dose to which the sample was irradiated to within ~0.5 kGy.
Paternal contribution to birth weight
Magnus, P; Gjessing, H; Skrondal, A; Skjarven, R
2001-01-01
STUDY OBJECTIVE: Understanding causes of variation in birth weight has been limited by the lack of sufficient sets of data that include paternal birth weight. The objective was to estimate risks of low birth weight dependent on parental birth weights and to estimate father-mother-offspring correlations for birth weight, to explain the variability in birth weight in terms of the effects of genes and environmental factors. DESIGN: A family design, using trios of father, mother and firstborn child. SETTING: The complete birth population in Norway, 1967-98. PARTICIPANTS: 67,795 families. MAIN RESULTS: The birth weight correlations were 0.226 for mother-child and 0.126 for father-child. The spousal correlation was low, 0.020. The relative risk of low birth weight in the firstborn child was 8.2 if both parents had low birth weight themselves, with both parents being above 4 kg as the reference. The estimate of heritability is about 0.25 for birth weight, under the assumption that cultural transmission on the paternal side has no effect on offspring prenatal growth. CONCLUSIONS: Paternal birth weight is a significant and independent predictor of low birth weight in offspring. The estimate of the heritability of birth weight in this study is lower than previously estimated from data within one generation in the Norwegian population. Keywords: birth weight; genes; paternal effects. PMID:11707480
NASA Astrophysics Data System (ADS)
tang, ling; tian, yudong; lin, xin
2014-05-01
Precipitation retrievals from space-borne passive microwave (PMW) radiometers are the major source in modern satellite-based global rainfall datasets. The error characteristics of these individual retrievals directly affect the merged end products and applications, but have not been systematically studied. In this paper, we undertake a critical investigation of the seasonal and sensor-type-dependent skill and errors of PMW radiometers over the continental United States (CONUS). A high-resolution ground-radar-based dataset, NOAA's National Severe Storms Laboratory (NSSL) Q2 radar-derived precipitation estimates, is used as the ground reference. The high spatial and temporal resolution of the reference data allows near-instantaneous collocation (within 5 minutes) and relatively precise comparison with the satellite overpasses. We compare precipitation retrievals from twelve satellites, including six imagers (one TMI, one AMSR-E, one SSM/I and three SSMIS) and six sounders (three AMSU-B and three MHS), against the Q2 radar precipitation. Results show that precipitation retrievals from PMW radiometers exhibit fairly systematic biases depending on season and precipitation intensity, with overestimates in summer at moderate to high precipitation rates and underestimates in winter at low and moderate precipitation rates. The same pattern appears in satellite-based multi-sensor precipitation products, indicating that uncertainties transfer from single-sensor inputs to multi-sensor precipitation estimates. Meanwhile, retrievals from the microwave imagers perform notably better than those from the microwave sounders. The sounders have higher biases than the imagers, by about a factor of two at low rain rates and a factor of two to three at moderate to high rain rates. The sounders also have a narrower dynamic range and higher random errors, which are detailed in the paper.
... with the placenta and substance abuse by the mother. Some low birth weight babies may be more at risk for certain health problems. Some may become sick in the first days of life or develop infections. Others may suffer ...
NASA Technical Reports Server (NTRS)
Howard, W. H.; Young, D. R.
1972-01-01
Device applies compressive force to bone to minimize loss of bone calcium during weightlessness or bedrest. Force is applied through weights, or hydraulic, pneumatic or electrically actuated devices. Device is lightweight and easy to maintain and operate.
Universality: Accurate Checks in Dyson's Hierarchical Model
NASA Astrophysics Data System (ADS)
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near β_c for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
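Estimating a leading exponent by linear fitting amounts to an ordinary least squares regression of log(χ) on log(β_c - β), whose slope is -γ. The sketch below uses synthetic data with a known exponent, not the hierarchical-model calculation itself:

```python
import math

# Fit gamma in chi ~ A * (beta_c - beta)^(-gamma) by least squares on
# log(chi) vs log(beta_c - beta). Data are synthetic with gamma = 1.3.

def fit_power_law(ts, chis):
    """Ordinary least squares slope and intercept of log(chi) on log(t)."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(c) for c in chis]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return slope, ybar - slope * xbar

gamma_true = 1.3
ts = [10 ** (-k / 4) for k in range(4, 17)]     # reduced couplings beta_c - beta
chis = [2.0 * t ** (-gamma_true) for t in ts]   # synthetic susceptibility
slope, _ = fit_power_law(ts, chis)
gamma_est = -slope                              # recovers the input exponent
```

On real data, subleading corrections of the form (1 + B t^Δ) make the log-log plot curve slightly, which is why the talk also estimates the subleading exponent Δ.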
Accurate bulk density determination of irregularly shaped translucent and opaque aerogels
NASA Astrophysics Data System (ADS)
Petkov, M. P.; Jones, S. M.
2016-05-01
We present a volumetric method for accurate determination of the bulk density of aerogels, calculated from the extrapolated weight of the dry pure solid and volume estimates based on Archimedes' principle of volume displacement, using packed 100 μm monodispersed glass spheres as a "quasi-fluid" medium. Hard-particle packing theory is invoked to demonstrate the reproducibility of the apparent density of the quasi-fluid. Accuracy rivaling that of the refractive index method is demonstrated for both translucent and opaque aerogels with different absorptive properties, as well as for aerogels with regular and irregular shapes.
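The displacement arithmetic behind such a measurement is straightforward: the sample volume is the cell volume minus the volume occupied by the packed spheres, whose apparent (bulk) density must be calibrated first. All numbers below are illustrative assumptions, not values from the paper:

```python
# Sketch of a quasi-fluid displacement density measurement.
# Numbers and the calibrated sphere apparent density are assumptions.

def bulk_density(dry_mass_g, cell_volume_cm3,
                 sphere_mass_g, sphere_apparent_density_g_cm3):
    """Sample volume = cell volume minus the volume filled by the packed
    spheres; bulk density = dry mass / sample volume."""
    sphere_volume = sphere_mass_g / sphere_apparent_density_g_cm3
    sample_volume = cell_volume_cm3 - sphere_volume
    return dry_mass_g / sample_volume

# Assumed calibration: packed 100-um glass spheres show an apparent
# (bulk, not solid-glass) density of ~1.5 g/cm^3 in this cell.
rho = bulk_density(dry_mass_g=0.25,
                   cell_volume_cm3=20.0,
                   sphere_mass_g=27.0,
                   sphere_apparent_density_g_cm3=1.5)
```

The reproducibility of the sphere packing (the hard-particle packing argument in the abstract) is what makes the calibrated apparent density a stable constant across measurements.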
Wong, Angelita Pui-Yee; Pipitone, Jon; Park, Min Tae M; Dickie, Erin W; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Chakravarty, M Mallar; Pausova, Zdenka; Paus, Tomáš
2014-07-01
The pituitary gland is a key structure in the hypothalamic-pituitary-gonadal (HPG) axis; it plays an important role in sexual maturation during puberty. Despite its small size, its volume can be quantified using magnetic resonance imaging (MRI). Here, we study a cohort of 962 typically developing adolescents from the Saguenay Youth Study and estimate pituitary volumes using a newly developed multi-atlas segmentation method known as the MAGeT Brain algorithm. We found that age and puberty stage (controlled for age) each predict adjusted pituitary volumes (controlled for total brain volume) in both males and females. Controlling for the effects of age and puberty stage, total testosterone and estradiol levels also predict adjusted pituitary volumes in males and pre-menarche females, respectively. These findings demonstrate that the pituitary gland grows during adolescence and that its volume relates to circulating plasma levels of sex steroids in both males and females.
Hsu, Ya-Wen; Liou, Tsan-Hon; Liou, Yiing Mei; Chen, Hsin-Jen; Chien, Li-Yin
2016-01-01
Children and adolescents often attempt to lose weight, which may be associated with misperceptions of body weight. Previous studies have emphasized correlations between eating disorders and an overestimated perception of body weight, but few studies have focused on an underestimated perception of body weight. The objective of this study was to explore the relationship between misperceptions of body weight and weight-related risk factors, such as eating disorders, inactivity, and unhealthy behaviors, among overweight children who underestimated their body weight. We conducted a cross-sectional, descriptive study between December 1, 2006 and February 15, 2007. A total of 29,313 children and adolescents studying in grades 4-12 were enrolled in this nationwide, cross-sectional survey and asked to complete questionnaires. A multivariate logistic regression using maximum likelihood estimates was used. The prevalence of body weight misperception was 43.2% (26.4% overestimation and 16.8% underestimation). Factors associated with the underestimated perception of weight among overweight children were parental obesity, dietary control for weight loss, breakfast consumption, self-induced vomiting as a weight control strategy, fried food consumption, engaging in vigorous physical activities, and sleeping for >8 hours per day (odds ratios=0.86, 0.42, 0.88, 1.37, 1.13, 1.11, and 1.17, respectively). In conclusion, the early establishment of an accurate perception of body weight may mitigate unhealthy behaviors. PMID:26965769
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.
Cheung, Yin Bun; Chan, Jerry Kok Yen; Tint, Mya Thway; Godfrey, Keith M.; Gluckman, Peter D.; Kwek, Kenneth; Saw, Seang Mei; Chong, Yap-Seng; Lee, Yung Seng; Yap, Fabian; Lek, Ngee
2016-01-01
Objective Inaccurate parental perception of their child’s weight status is commonly reported in Western countries. It is unclear whether similar misperception exists in Asian populations. This study aimed to evaluate the ability of Singaporean mothers to accurately describe their three-year-old child’s weight status verbally and visually. Methods At three years post-delivery, weight and height of the children were measured. Body mass index (BMI) was calculated and converted into actual weight status using International Obesity Task Force criteria. The mothers were blinded to their child’s measurements and asked to verbally and visually describe what they perceived was their child’s actual weight status. Agreement between actual and described weight status was assessed using Cohen’s Kappa statistic (κ). Results Of 1237 recruited participants, 66.4% (n = 821) with complete data on mothers’ verbal and visual perceptions and children’s anthropometric measurements were analysed. Nearly thirty percent of the mothers were unable to describe their child’s weight status accurately. In verbal description, 17.9% under-estimated and 11.8% over-estimated their child’s weight status. In visual description, 10.4% under-estimated and 19.6% over-estimated their child’s weight status. Many mothers of underweight children over-estimated (verbal 51.6%; visual 88.8%), and many mothers of overweight and obese children under-estimated (verbal 82.6%; visual 73.9%), their child’s weight status. In contrast, significantly fewer mothers of normal-weight children were inaccurate (verbal 16.8%; visual 8.8%). Birth order (p<0.001), maternal (p = 0.004) and child’s weight status (p<0.001) were associated with consistently inaccurate verbal and visual descriptions. Conclusions Singaporean mothers, especially those of underweight and overweight children, may not be able to perceive their young child’s weight status accurately. To facilitate prevention of childhood
NASA Technical Reports Server (NTRS)
Chatterji, Gano
2011-01-01
Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.
Stefanick, M L
1993-01-01
Because diet and exercise habits are difficult to assess and to quantify in free-living populations, it continues to be difficult to evaluate the success of diet and/or exercise prescriptions for weight loss accurately, and we continue to be plagued with questions regarding the effectiveness vs. efficacy of exercise as a means to control body weight. It would seem that the wide range of health benefits derived from regular exercise would justify emphasizing increased activity for inactive people, particularly for obese, sedentary individuals, whether or not ideal body weight or significant weight loss is achieved.
Determining the Statistical Significance of Relative Weights
ERIC Educational Resources Information Center
Tonidandel, Scott; LeBreton, James M.; Johnson, Jeff W.
2009-01-01
Relative weight analysis is a procedure for estimating the relative importance of correlated predictors in a regression equation. Because the sampling distribution of relative weights is unknown, researchers using relative weight analysis are unable to make judgments regarding the statistical significance of the relative weights. J. W. Johnson…
Accurate estimation of the elastic properties of porous fibers
Thissell, W.R.; Zurek, A.K.; Addessio, F.
1997-05-01
A procedure is described to calculate polycrystalline anisotropic fiber elastic properties with cylindrical symmetry and porosity. It uses a preferred orientation model (Tome ellipsoidal self-consistent model) for the determination of anisotropic elastic properties for the case of highly oriented carbon fibers. The model predictions, corrected for porosity, are compared to back-calculated fiber elastic properties of an IM6/3501-6 unidirectional composite whose elastic properties have been determined via resonant ultrasound spectroscopy. The Halpin-Tsai equations used to back-calculate fiber elastic properties are found to be inappropriate for anisotropic composite constituents. Modifications are proposed to the Halpin-Tsai equations to expand their applicability to anisotropic reinforcement materials.
Weight Advice Associated With Male Firefighter Weight Perception and Behavior
Brown, Austin L.; Poston, Walker S.C.; Jahnke, Sara A.; Haddock, C. Keith; Luo, Sheng; Delclos, George L.; Day, R. Sue
2016-01-01
Introduction The high prevalence of overweight and obesity threatens the health and safety of the fire service. Healthcare professionals may play an important role in helping firefighters achieve a healthy weight by providing weight loss counseling to at-risk firefighters. This study characterizes the impact of healthcare professional weight loss advice on firefighter weight perceptions and weight loss behaviors among overweight and obese male firefighters. Methods A national sample of 763 overweight and obese male firefighters who recalled visiting a healthcare provider in the past 12 months reported information regarding healthcare visits, weight perceptions, current weight loss behaviors, and other covariates in 2011–2012. Analyzed in 2013, four unique multilevel logistic regression models estimated the association between healthcare professional weight loss advice and the outcomes of firefighter-reported weight perceptions, intentions to lose weight, reduced caloric intake, and increased physical activity. Results Healthcare professional weight loss advice was significantly associated with self-perception as overweight (OR=4.78, 95% CI=2.16, 10.57) and attempted weight loss (OR=2.06, 95% CI=1.25, 3.38), but not significantly associated with reduced caloric intake (OR=1.26, 95% CI=0.82, 1.95) and increased physical activity (OR=1.51, 95% CI=0.89, 2.61), after adjusting for confounders. Conclusions Healthcare professional weight loss advice appears to increase the accuracy of firefighter weight perceptions, promote weight loss attempts, and may encourage dieting and physical activity behaviors among overweight firefighters. Healthcare providers should acknowledge their ability to influence the health behaviors of overweight and obese patients and make efforts to increase the quality and frequency of weight loss recommendations for all firefighters. PMID:26141913
NASA Technical Reports Server (NTRS)
1995-01-01
The Attitude Adjuster is a system for weight repositioning corresponding to a SCUBA diver's changing positions. Compact tubes on the diver's air tank permit controlled movement of lead balls within the Adjuster, automatically repositioning when the diver changes position. Manufactured by Think Tank Technologies, the system is light and small, reducing drag and energy requirements and contributing to lower air consumption. The Mid-Continent Technology Transfer Center helped the company with both technical and business information and arranged for the testing at Marshall Space Flight Center's Weightlessness Environmental Training Facility for astronauts.
Weighted EMPCA: Weighted Expectation Maximization Principal Component Analysis
NASA Astrophysics Data System (ADS)
Bailey, Stephen
2016-09-01
Weighted EMPCA performs principal component analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data such that the resulting eigenvectors, when compared to classic PCA, are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data are simply limiting cases of weight = 0. The underlying algorithm is a noise weighted expectation maximization (EM) PCA, which has additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution.
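The inverse-variance weighting idea, with weight 0 as the limiting case for missing data, can be sketched in a one-shot form. This is a hypothetical illustration of weighted PCA, not Bailey's iterative EM algorithm:

```python
import numpy as np

def weighted_first_component(X, W):
    """First principal component of X (n_samples x n_features) under
    per-element weights W (e.g. inverse variance; 0 marks missing data)."""
    # Weighted mean, ignoring missing entries
    mu = (W * X).sum(axis=0) / np.maximum(W.sum(axis=0), 1e-12)
    Xc = np.where(W > 0, X - mu, 0.0)
    # Pairwise-weighted covariance: noisy or missing entries contribute less
    num = (W * Xc).T @ (W * Xc)
    den = np.maximum(W.T @ W, 1e-12)
    C = num / den
    vals, vecs = np.linalg.eigh(C)
    return vecs[:, -1]  # eigenvector of the largest eigenvalue
```

With all weights equal this reduces to classic PCA; down-weighting a noisy feature keeps the leading component from being pulled by its heteroskedastic measurement noise.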
Ensemble estimators for multivariate entropy estimation
Sricharan, Kumar; Wei, Dennis; Hero, Alfred O.
2015-01-01
The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow of order O(T^(-γ/d)), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, intrinsic dimension estimators and other examples. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension invariant rate of O(T^(-1)). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample. PMID:25897177
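The mechanics of such a bias-cancelling affine combination can be shown with a two-estimator toy case in the spirit of Richardson extrapolation; the paper's actual weights come from a convex program over a larger ensemble, and the bias constants below are hypothetical:

```python
def affine_weights(h1, h2):
    """Weights summing to 1 that cancel a bias term proportional to h."""
    w1 = h2 / (h2 - h1)
    return w1, 1.0 - w1

def combine(e1, e2, w1, w2):
    """Weighted affine combination of two estimates."""
    return w1 * e1 + w2 * e2

# Two hypothetical estimators with bias c*h around a true value of 5.0
true_val, c = 5.0, 2.0
h1, h2 = 0.1, 0.2
e1, e2 = true_val + c * h1, true_val + c * h2
w1, w2 = affine_weights(h1, h2)
combined = combine(e1, e2, w1, w2)  # the O(h) bias cancels exactly here
```

Note that the weights need only sum to 1, not be non-negative (here w2 = -1); allowing negative weights is what lets the combination cancel the leading bias terms and achieve the faster rate.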
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis when more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing methods (ABC). Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
Link Prediction in Weighted Networks: A Weighted Mutual Information Model
Zhu, Boyao; Xia, Yongxiang
2016-01-01
The link-prediction problem is an open issue in data mining and knowledge discovery, which attracts researchers from disparate scientific communities. A wealth of methods have been proposed to deal with this problem. Among these approaches, most are applied in unweighted networks, with only a few taking the weights of links into consideration. In this paper, we present a weighted model for undirected and weighted networks based on the mutual information of local network structures, where link weights are applied to further enhance the distinguishable extent of candidate links. Empirical experiments are conducted on four weighted networks, and results show that the proposed method can provide more accurate predictions than not only traditional unweighted indices but also typical weighted indices. Furthermore, some in-depth discussions on the effects of weak ties in link prediction as well as the potential to predict link weights are also given. This work may shed light on the design of algorithms for link prediction in weighted networks. PMID:26849659
Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Can, Ismail Ozgur; Aksoy, Sema; Kazimoglu, Cemal
2016-03-01
Radiation exposure during forensic age estimation is associated with ethical implications. It is important to prevent repetitive radiation exposure when conducting advanced ultrasonography (USG) and magnetic resonance imaging (MRI). The purpose of this study was to investigate the utility of 3.0-T MRI in determining the degree of ossification of the distal femoral and proximal tibial epiphyses in a sample of the Turkish population. We retrospectively evaluated coronal T2-weighted and turbo spin-echo sequences taken upon MRI of 503 patients (305 males, 198 females; age 10-30 years) using a five-stage method. Intra- and interobserver variations were very low. (Intraobserver reliability was κ=0.919 for the distal femoral epiphysis and κ=0.961 for the proximal tibial epiphysis, and interobserver reliability was κ=0.836 for the distal femoral epiphysis and κ=0.885 for the proximal tibial epiphysis.) Spearman's rank correlation analysis indicated a significant positive relationship between age and the extent of ossification of the distal femoral and proximal tibial epiphyses (p<0.001). Comparison of male and female data revealed significant between-gender differences in the ages at first attainment of stages 2, 3, and 4 ossifications of the distal femoral epiphysis and stage 1 and 4 ossifications of the proximal tibial epiphysis (p<0.05). The earliest ages at which ossification of stages 3, 4, and 5 was evident in the distal femoral epiphysis were 14, 17, and 22 years in males and 13, 16, and 21 years in females, respectively. Proximal tibial epiphysis of stages 3, 4, and 5 ossification was first noted at ages 14, 17, and 18 years in males and 13, 15, and 16 years in females, respectively. MRI of the distal femoral and proximal tibial epiphyses is an alternative, noninvasive, and reliable technique to estimate age.
Accurate documentation and wound measurement.
Hampton, Sylvie
This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.
Accurate calculation of diffraction-limited encircled and ensquared energy.
Andersen, Torben B
2015-09-01
Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a large square or rectangular detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
Azpurua, Humberto; Funai, Edmund F; Coraluzzi, Luisa M; Doherty, Leo F; Sasson, Isaac E; Kliman, Merwin; Kliman, Harvey J
2010-02-01
An abnormally decreased placental weight has been linked to increased perinatal complications, including intrauterine fetal demise (IUFD) and fetal growth restriction (IUGR). Despite its promise, determining placental weight prenatally using three-dimensional systems is time-consuming and requires expensive technology and expertise. We propose a novel method using two-dimensional sonography that provides an immediate estimation of placental volume. Placental volume was calculated in 29 third-trimester pregnancies using linear measurements of placental width, height, and thickness to calculate the convex-concave shell volume within 24 hours of birth. Data were analyzed to calculate Spearman's rho (r_s) and significance. There was a significant correlation between estimated placental volume (EPV) and actual placental weight (r_s = 0.80, P < 0.001). Subgroup analysis of preterm gestations (N = 14) revealed an even more significant correlation of EPV to actual placental weight (r_s = 0.89, P < 0.001). Placental weight can be accurately predicted by two-dimensional ultrasound with volumetric calculations. This method is simple, rapid, and accurate, making it practical for routine prenatal care, as well as for high-risk cases with decreased fetal movement and IUGR. Routine EPV surveillance may decrease the rates of perinatal complications and unexpected IUFD. PMID:19653142
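The analysis reported above hinges on Spearman's rank correlation between estimated volume and actual weight. A minimal tie-free implementation is sketched below; the EPV and weight values are made-up illustrations, not the study's measurements:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no tied values)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Hypothetical EPV estimates (mL) and actual placental weights (g)
epv = [310.0, 450.0, 280.0, 520.0, 390.0]
weight = [340.0, 470.0, 300.0, 560.0, 410.0]
rho = spearman_rho(epv, weight)  # rankings agree perfectly, so rho = 1.0
```

Because Spearman's rho depends only on ranks, it captures the monotone association between volume and weight without assuming the relationship is linear.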
Fast and Accurate Construction of Confidence Intervals for Heritability.
Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran
2016-06-01
Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
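The general shape of a bootstrap confidence interval is a percentile interval over resampled estimates; ALBI's estimator-distribution machinery is considerably more involved, so the following is only a generic sketch with synthetic data:

```python
import numpy as np

def percentile_ci(boot_estimates, alpha=0.05):
    """Percentile bootstrap CI from an array of resampled estimates."""
    lo, hi = np.quantile(boot_estimates, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

rng = np.random.default_rng(1)
data = rng.normal(loc=0.4, scale=1.0, size=500)   # synthetic observations
boots = np.array([rng.choice(data, size=data.size, replace=True).mean()
                  for _ in range(2000)])          # resampled estimates
ci = percentile_ci(boots)  # should bracket the sample mean
```

The inflation and deflation problems described in the abstract arise precisely where this naive recipe fails, near the boundary of the parameter space (heritability close to 0 or 1), where the estimator's distribution is far from symmetric.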
Accuracy of hands v. household measures as portion size estimation aids.
Gibson, Alice A; Hsu, Michelle S H; Rangan, Anna M; Seimon, Radhika V; Lee, Crystal M Y; Das, Arpita; Finch, Charles H; Sainsbury, Amanda
2016-01-01
Accurate estimation of food portion size is critical in dietary studies. Hands are potentially useful as portion size estimation aids; however, their accuracy has not been tested. The aim of the present study was to test the accuracy of a novel portion size estimation method using the width of the fingers as a 'ruler' to measure the dimensions of foods ('finger width method'), as well as fists and thumb or finger tips. These hand measures were also compared with household measures (cups and spoons). A total of sixty-seven participants (70 % female; age 32·7 (sd 13·7) years; BMI 23·2 (sd 3·5) kg/m(2)) attended a 1·5 h session in which they estimated the portion sizes of forty-two pre-weighed foods and liquids. Hand measurements were used in conjunction with geometric formulas to convert estimations to volumes. Volumes determined with hand and household methods were converted to estimated weights using density factors. Estimated weights were compared with true weights, and the percentage difference from the true weight was used to compare accuracy between the hand and household methods. Of geometrically shaped foods and liquids estimated with the finger width method, 80 % were within ±25 % of the true weight of the food, and 13 % were within ±10 %, in contrast to 29 % of those estimated with the household method being within ±25 % of the true weight of the food, and 8 % being within ±10 %. For foods that closely resemble a geometric shape, the finger width method provides a novel and acceptably accurate method of estimating portion size. PMID:27547392
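The finger-width pipeline, dimensions in finger-widths, a geometric volume formula, then a density factor, can be sketched for a cylindrical food. The calibration constant and density below are assumptions for illustration; in the study each participant's own finger width was used:

```python
import math

FINGER_WIDTH_CM = 1.8  # assumed calibration; studies measure each person's fingers

def cylinder_weight_g(height_fw, diameter_fw, density_g_per_cm3):
    """Estimate food weight from dimensions measured in finger-widths,
    assuming the food approximates a cylinder."""
    h = height_fw * FINGER_WIDTH_CM
    d = diameter_fw * FINGER_WIDTH_CM
    volume_cm3 = math.pi * (d / 2.0) ** 2 * h
    return volume_cm3 * density_g_per_cm3

def percent_difference(estimated_g, true_g):
    """Accuracy metric used to compare methods: % difference from true weight."""
    return 100.0 * (estimated_g - true_g) / true_g
```

For example, a food 2 finger-widths tall and 2 wide with a density of 1.0 g/cm³ comes out near 37 g; comparing that against the weighed value with `percent_difference` mirrors the ±10 % and ±25 % accuracy bands reported above.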
Multi sensor transducer and weight factor
NASA Technical Reports Server (NTRS)
Immer, Christopher D. (Inventor); Lane, John (Inventor); Eckhoff, Anthony J. (Inventor); Perotti, Jose M. (Inventor)
2004-01-01
A multi-sensor transducer and processing method allow in situ monitoring of the sensor accuracy and transducer `health`. In one embodiment, the transducer has multiple sensors to provide corresponding output signals in response to a stimulus, such as pressure. A processor applies individual weight factors to each of the output signals and provides a single transducer output that reduces the contribution from inaccurate sensors. The weight factors can be updated and stored. The processor can use the weight factors to provide a `health` of the transducer based upon the number of accurate versus inaccurate sensors in the transducer.
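The weighted-fusion idea can be sketched as follows. The abstract does not give the patent's weight-update scheme, so the functions and threshold below are hypothetical, a minimal illustration of down-weighting inaccurate sensors:

```python
def fused_output(readings, weights):
    """Combine redundant sensor readings into one transducer output.
    Low weights suppress sensors judged inaccurate; weight 0 excludes
    a failed sensor entirely."""
    total = sum(weights)
    return sum(r * w for r, w in zip(readings, weights)) / total

def transducer_health(weights, threshold=0.5):
    """Crude 'health' score: fraction of sensors still trusted,
    i.e. with weight at or above a hypothetical trust threshold."""
    good = sum(1 for w in weights if w >= threshold)
    return good / len(weights)

# Three redundant pressure sensors; the third has drifted and is excluded
output = fused_output([10.0, 10.0, 40.0], [0.5, 0.5, 0.0])  # -> 10.0
```

With the drifted sensor zero-weighted, the fused output tracks the two agreeing sensors rather than being pulled toward the outlier.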
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
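The core calculation, a weight factor relating the second derivative of the magnetic waveform to the measured acceleration, can be sketched with a least-squares fit; the synthetic waveforms below are illustrative, not the gauge's actual calibration:

```python
import numpy as np

def depth_from_magnetic(magnetic, accel, dt):
    """Estimate displacement as k * magnetic, where the weight factor k is
    fit so that the second derivative of k * magnetic matches the measured
    acceleration in a least-squares sense."""
    d2m = np.gradient(np.gradient(magnetic, dt), dt)
    k = float(np.dot(d2m, accel) / np.dot(d2m, d2m))
    return k, k * np.asarray(magnetic)

# Synthetic check: true displacement 3*sin(t), magnetic signal sin(t)
t = np.linspace(0.0, 2.0 * np.pi, 4001)
dt = t[1] - t[0]
magnetic = np.sin(t)
accel = -3.0 * np.sin(t)  # analytic second derivative of 3*sin(t)
k, depth = depth_from_magnetic(magnetic, accel, dt)
```

Because the magnetic waveform is proportional to the compression force (and hence, for a spring, to displacement), a single scale factor suffices to turn it into a depth waveform once the accelerometer fixes the scale.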
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
2012-01-01
Background Self-reported anthropometric data are commonly used to estimate prevalence of obesity in population and community-based studies. We aim to: 1) Determine whether survey participants are able and willing to self-report height and weight; 2) Assess the accuracy of self-reported compared to measured anthropometric data in a community-based sample of young people. Methods Participants (16–29 years) of a behaviour survey, recruited at a Melbourne music festival (January 2011), were asked to self-report height and weight; researchers independently weighed and measured a sub-sample. Body Mass Index was calculated and overweight/obesity classified as ≥25 kg/m². Differences between measured and self-reported values were assessed using paired t-test/Wilcoxon signed ranks test. Accurate reports of height and weight were defined as <2 cm and <2 kg difference between self-report and measured values, respectively. Agreement between classification of overweight/obesity by self-report and measured values was assessed using McNemar’s test. Results Of 1405 survey participants, 82% of males and 72% of females self-reported their height and weight. Among 67 participants who were also independently measured, self-reported height and weight were significantly less than measured height (p=0.01) and weight (p<0.01) among females, but no differences were detected among males. Overall, 52% accurately self-reported height, 30% under-reported, and 18% over-reported; 34% accurately self-reported weight, 52% under-reported and 13% over-reported. More females (70%) than males (35%) under-reported weight (p=0.01). Prevalence of overweight/obesity was 33% based on self-report data and 39% based on measured data (p=0.16). Conclusions Self-reported measurements may underestimate weight but accurately identified overweight/obesity in the majority of this sample of young people. PMID:23170838
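The study's classification rules (BMI ≥ 25 kg/m² for overweight/obesity; self-report accurate within 2 cm and 2 kg) are simple enough to state directly in code; the example values are hypothetical:

```python
def bmi(weight_kg, height_cm):
    """Body Mass Index in kg/m^2."""
    h_m = height_cm / 100.0
    return weight_kg / (h_m * h_m)

def overweight_or_obese(weight_kg, height_cm):
    """Study cutoff: BMI >= 25 kg/m^2."""
    return bmi(weight_kg, height_cm) >= 25.0

def accurate_self_report(sr_height_cm, sr_weight_kg, m_height_cm, m_weight_kg):
    """Accurate if self-report is within 2 cm and 2 kg of measured values."""
    return (abs(sr_height_cm - m_height_cm) < 2.0
            and abs(sr_weight_kg - m_weight_kg) < 2.0)

# Hypothetical participant: slightly over-reported height, under-reported weight
ok = accurate_self_report(178.0, 70.0, 179.5, 71.5)  # both within tolerance
```

Note that a report can be "accurate" by these tolerances yet still flip the overweight classification when the true BMI sits near the 25 kg/m² cutoff, which is why the study checks classification agreement separately.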
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Bei, Li; Dong, Liang; Xiuhua, Ma
2016-07-01
In order to reconstruct the spectral reflectance accurately, a new method of spectral reflectance reconstruction based on the weighted measurement matrix is proposed in this paper. By optimizing the measurement matrix between spectral reflectance and the response of a camera, the method can improve the reconstruction accuracy. The new method is a combination of three kinds of common reflectance reconstruction methods, which are the pseudo inverse method, the Wiener estimation method and the principal component analysis method. The new measurement matrix can be achieved after weighting the measurement matrices of these three methods to reconstruct the spectral reflectance. Moreover, the weights of the three methods can be obtained by minimizing the color difference. Results show that the CIE1976 color difference and RMSE values of the weighted reconstructed spectra are lower than those of the three common reconstruction methods. The spectral matching accuracy GFC of the method is higher than 0.99 and its reconstruction accuracy is high.
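The weighting step can be sketched as a search over convex combinations of the three matrices. Everything below is a synthetic stand-in: the matrices are random placeholders for the pseudo-inverse, Wiener, and PCA matrices, and the objective is RMSE rather than the paper's CIE color difference:

```python
import numpy as np

rng = np.random.default_rng(2)
M_true = rng.normal(size=(8, 3))             # stand-in "ideal" reconstruction matrix
mats = [M_true,                              # placeholder: pseudo-inverse matrix
        M_true + 0.2 * rng.normal(size=(8, 3)),   # placeholder: Wiener matrix
        M_true + 0.2 * rng.normal(size=(8, 3))]   # placeholder: PCA matrix
responses = rng.normal(size=(3, 20))         # hypothetical camera responses
reflectances = M_true @ responses            # targets for the search

def best_weights(mats, responses, reflectances):
    """Grid search over weights summing to 1, minimizing reconstruction
    RMSE (a stand-in for the paper's color-difference objective)."""
    best, best_err = None, np.inf
    grid = np.linspace(0.0, 1.0, 11)
    for w0 in grid:
        for w1 in grid:
            w2 = 1.0 - w0 - w1
            if w2 < -1e-9:
                continue  # only weights summing to 1 with w2 >= 0
            M = w0 * mats[0] + w1 * mats[1] + w2 * mats[2]
            err = float(np.sqrt(np.mean((M @ responses - reflectances) ** 2)))
            if err < best_err:
                best, best_err = (w0, w1, w2), err
    return best, best_err

best, best_err = best_weights(mats, responses, reflectances)
```

In this toy setup the first matrix reproduces the targets exactly, so the search should put all weight on it; with real data the optimum typically mixes all three methods, which is the point of the weighted matrix.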
SPLASH: Accurate OH maser positions
NASA Astrophysics Data System (ADS)
Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney
2013-10-01
The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Weighting climate model projections using observational constraints.
Gillett, Nathan P
2015-11-13
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5.
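The weighting approach described in the abstract above can be illustrated with a toy version: weight each model's projected warming by the likelihood of its transient climate response (TCR) under an observationally constrained estimate, then read off a weighted 5-95% range. All numbers, the Gaussian constraint, and the variable names are my own illustrative assumptions, not the paper's data.

```python
# Toy illustration of observationally weighted projections (synthetic data).
import numpy as np

model_tcr = np.array([1.3, 1.6, 1.8, 2.0, 2.4])      # K, hypothetical per-model TCR
model_warming = np.array([1.1, 1.5, 1.8, 2.1, 2.6])  # K, hypothetical 2081-2100 warming
obs_tcr, obs_sigma = 1.5, 0.3                        # hypothetical observational constraint

# Gaussian likelihood of each model's TCR under the observational estimate.
w = np.exp(-0.5 * ((model_tcr - obs_tcr) / obs_sigma) ** 2)
w /= w.sum()

# Weighted 5-95% range of projected warming.
order = np.argsort(model_warming)
cdf = np.cumsum(w[order])
lo = model_warming[order][np.searchsorted(cdf, 0.05)]
hi = model_warming[order][np.searchsorted(cdf, 0.95)]
print(lo, hi)
```

Because the weights depend on each model's TCR rather than on a fitted scaling, no linear relationship between past and future change is assumed, which is the point the abstract makes.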
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
The Improved Price Estimation Guidelines program, IPEG4, provides a comparatively simple yet relatively accurate estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of the price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
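The abstract names the cost inputs but not the IPEG4 formulas, so the following is only the simplest possible reading of the idea: unit price as total annual cost divided by annual production volume. The function name and figures are illustrative.

```python
# Minimal sketch of an IPEG-style unit price (the actual IPEG4 formulas are
# not given in the abstract): total annual costs divided by annual volume.
def unit_price(equipment, space, labor, materials, utilities, volume):
    return (equipment + space + labor + materials + utilities) / volume

print(unit_price(50000, 12000, 80000, 30000, 8000, 100000))  # 1.8
```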
Weight-ing: the experience of waiting on weight loss.
Glenn, Nicole M
2013-03-01
Perhaps we want to be perfect, strive for health, beauty, and the admiring gaze of others. Maybe we desire the body of our youth, the "healthy" body, the body that has just the right fit. Regardless of the motivation, we might find ourselves striving, wanting, and waiting on weight loss. What is it to wait on weight loss? I explore the meaning of this experience-as-lived using van Manen's guide to phenomenological reflection and writing. Weight has become an increasing focus of contemporary culture, demonstrated, for example, by a growing weight-loss industry and global obesity "epidemic." Weight has become synonymous with health status, and weight loss with "healthier." I examine the weight wait through experiences of the common and uncommon, considering relations to time, body, space, and the other with the aim of evoking a felt, embodied, emotive understanding of the meaning of waiting on weight loss. I also discuss the implications of the findings.
Schroeder, Jonathan P.; Van Riper, David C.
2014-01-01
Areal interpolation transforms data for a variable of interest from a set of source zones to estimate the same variable's distribution over a set of target zones. One common practice has been to guide interpolation by using ancillary control zones that are related to the variable of interest's spatial distribution. This guidance typically involves using source zone data to estimate the density of the variable of interest within each control zone. This article introduces a novel approach to density estimation, the geographically weighted expectation-maximization (GWEM) algorithm, which combines features of two previously used techniques, the expectation-maximization (EM) algorithm and geographically weighted regression. The EM algorithm provides a framework for incorporating proper constraints on data distributions, and using geographical weighting allows estimated control-zone density ratios to vary spatially. We assess the accuracy of GWEM by applying it with land-use/land-cover ancillary data to population counts from a nationwide sample of 1980 United States census tract pairs. We find that GWEM generally is more accurate in this setting than several previously studied methods. Because target-density weighting (TDW)—using 1970 tract densities to guide interpolation—outperforms GWEM in many cases, we also consider two GWEM-TDW hybrid approaches, and find them to improve estimates substantially. PMID:24653524
Ultrasonic fetal weight prediction: role of head circumference and femur length.
Weiner, C P; Sabbagha, R E; Vaisrub, N; Socol, M L
1985-06-01
The accurate sonographic estimate of fetal weight is helpful in those instances when the fetal weight estimate might alter clinical management. Most sonographic weight predicting formulas have been based predominantly on measurements from the term fetus and then applied to the preterm fetus. Yet, the morphology of the preterm and term fetus differs considerably. The authors have examined the predictive accuracy of three published sonographic formulas in 69 preterm fetuses scanned within 48 hours of delivery. The mean birth weight was 1396 g. Thirty-nine of the infants were less than 1500 g. Sixty-two percent were products of pregnancies complicated by premature rupture of membranes. The results were compared with new equations derived from combinations of head and abdominal circumferences, biparietal diameter, and femur length obtained from the first 33 fetuses and then tested on the remaining 36. Whereas each formula correlated highly with birth weight, the selected new formula was more accurate than the published formulas by each criterion examined. In contrast to the latter, the mean error (actual minus predicted weight) of most new equations did not significantly differ from zero when tested prospectively. In addition, it appeared that the accuracy of two new formulas not incorporating femur length could be further enhanced in the group of fetuses whose femur length differed from the mean by at least 2 standard deviations by multiplying the predicted weight by the ratio of actual to mean femur length. The authors conclude that the use of head circumference and femur length coupled with a population restricted to the preterm fetus enhances the accuracy of sonographic weight predictions. PMID:3889747
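The femur-length adjustment described at the end of the abstract reduces to a simple conditional scaling, sketched below. The numbers are illustrative, not from the paper.

```python
# Sketch of the femur-length correction: when femur length deviates from the
# mean by at least 2 SD, scale the predicted weight by the ratio of actual to
# mean femur length; otherwise leave the prediction unchanged.
def adjusted_weight(predicted_g, femur_mm, mean_femur_mm, sd_femur_mm):
    if abs(femur_mm - mean_femur_mm) >= 2 * sd_femur_mm:
        return predicted_g * femur_mm / mean_femur_mm
    return predicted_g

print(adjusted_weight(1400.0, 54.0, 60.0, 3.0))  # 1260.0 (2 SD short femur)
print(adjusted_weight(1400.0, 58.0, 60.0, 3.0))  # 1400.0 (within 2 SD)
```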
Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you cannot lose weight ... obesity. There are different types of weight loss surgery. They often limit the amount of food you ...
Accurate adiabatic correction in the hydrogen molecule
NASA Astrophysics Data System (ADS)
Pachucki, Krzysztof; Komasa, Jacek
2014-12-01
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.
Drake, Keith M.; Longacre, Meghan R.; Dalton, Madeline A.; Langeloh, Gail; Peterson, Karen E.; Titus, Linda J.; Beach, Michael L.
2013-01-01
Background Despite validation studies demonstrating substantial bias, epidemiologic studies typically use self-reported height and weight as primary measures of body mass index due to feasibility and resource limitations. Purpose To demonstrate a method for calculating accurate and precise estimates that use body mass index when objectively measuring height and weight in a full sample is not feasible. Methods As part of a longitudinal study of adolescent health, 1,840 adolescents (aged 12–18) self-reported their height and weight during telephone surveys. Height and weight were measured for 407 of these adolescents. Sex-specific, age-adjusted obesity status was calculated from self-reported and from measured height and weight. Prevalence and predictors of obesity were estimated using 1) self-reported data, 2) measured data, and 3) multiple imputation (of measured data). Results Among adolescents with self-reported and measured data, the obesity prevalence was lower when using self-report compared to actual measurements (p < 0.001). The obesity prevalence from multiple imputation (20%) was much closer to estimates based solely on measured data (20%) compared to estimates based solely on self-reported data (12%), indicating improved accuracy. In multivariate models, estimates of predictors of obesity were more accurate and approximately as precise (similar confidence intervals) as estimates based solely on self-reported data. Conclusions The two-method measurement design offers researchers a technique to reduce the bias typically inherent in self-reported height and weight without needing to collect measurements on the full sample. This technique enhances the ability to detect real, statistically significant differences, while minimizing the need for additional resources. PMID:23684216
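The two-method design above can be caricatured in a few lines: learn a correction from the subsample with both measured and self-reported values, then impute measured values for everyone else. The paper uses proper multiple imputation; this single regression imputation on synthetic data only illustrates the idea, and every number below is made up.

```python
# Toy two-method imputation: fit measured ~ self-reported on the subsample,
# impute the rest, then estimate obesity prevalence from imputed values.
import random

random.seed(1)
# (self_reported_bmi, measured_bmi) pairs: measured tends to run higher.
sub = [(sr, sr + 1.5 + random.gauss(0, 0.5)) for sr in [19, 21, 23, 25, 27, 29]]

# Ordinary least squares fit: measured = a + b * self_report.
n = len(sub)
mx = sum(s for s, _ in sub) / n
my = sum(m for _, m in sub) / n
b = sum((s - mx) * (m - my) for s, m in sub) / sum((s - mx) ** 2 for s, _ in sub)
a = my - b * mx

self_only = [20.0, 24.0, 24.5, 26.0, 28.0]      # participants with self-report only
imputed = [a + b * s for s in self_only]
prevalence = sum(m >= 30 for m in imputed) / len(imputed)  # obesity cutoff 30 kg/m^2
print(round(prevalence, 2))
```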
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed (the pseudo-Thellier protocol) which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Precise attention filters for Weber contrast derived from centroid estimations.
Drew, Stefanie A; Chubb, Charles F; Sperling, George
2010-01-01
How well can observers selectively attend only to dots that are lighter or darker than the background when all dot intensities are present? Observers estimated centroids of briefly flashed, sparse clouds of 8 or 16 dots, ranging in intensity from dark black to bright white on a gray background. Attention instructions were to equally weight: (i) dots brighter than the background, assigning zero weight to others; (ii) dots darker than the background, assigning zero weight to others; (iii) all dots. For each observer, a quantitative estimate of the operational attention filter (the weight exerted in the centroid estimates as a function of dot intensity) was derived for each attention instruction in each dot condition. Attended dots typically have 4× the weights of unattended dots. Whereas observers performed remarkably well in estimating centroids and achieving the three required attention filters, they achieved higher accuracy when equally weighting all dots than when selectively attending to dots of only one contrast polarity. Although their attention filters are similar, individual observers use significantly different parameters in their centroid computations. The complete model of performance enables perceptual measurements of observers' attention filters for shades of gray that are as accurate as physical measurements of color filters.
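The centroid-with-attention-filter model above is just a weighted mean of dot positions, with each dot's weight set by a filter over its intensity. The sketch below uses the 4× attended/unattended ratio quoted in the abstract; the dot positions and intensities are illustrative.

```python
# Weighted centroid under an "attend bright dots" filter: attended dots
# (intensity above the background level 0.5) get 4x the weight of others.
import numpy as np

dots = np.array([[1.0, 2.0], [3.0, 0.0], [-1.0, 1.0], [2.0, -2.0]])
intensity = np.array([0.8, 0.6, 0.3, 0.2])            # >0.5 means brighter than background

weights = np.where(intensity > 0.5, 4.0, 1.0)         # operational attention filter
centroid = (weights[:, None] * dots).sum(axis=0) / weights.sum()
print(centroid)  # [1.7, 0.7] for these weights
```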
Rapid Weight Loss in Sports with Weight Classes.
Khodaee, Morteza; Olewinski, Lucianne; Shadgan, Babak; Kiningham, Robert R
2015-01-01
Weight-sensitive sports are popular among elite and nonelite athletes. Rapid weight loss (RWL) practice has been an essential part of many of these sports for many decades. Due to the limited epidemiological studies on the prevalence of RWL, its true prevalence is unknown. It is estimated that more than half of athletes in weight-class sports have practiced RWL during the competitive periods. As RWL can have significant physical, physiological, and psychological negative effects on athletes, its practice has been discouraged for many years. It seems that appropriate rule changes have had the biggest impact on the practice of RWL in sports like wrestling. An individualized and well-planned gradual and safe weight loss program under the supervision of a team of coaching staff, athletic trainers, sports nutritionists, and sports physicians is recommended. PMID:26561763
Combining Study Outcome Measures Using Dominance Adjusted Weights
ERIC Educational Resources Information Center
Makambi, Kepher H.; Lu, Wenxin
2013-01-01
Weighting of studies in meta-analysis is usually implemented by using the estimated inverse variances of treatment effect estimates. However, there is a possibility of one study dominating other studies in the estimation process by taking on a weight that is above some upper limit. We implement an estimator of the heterogeneity variance that takes…
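The dominance problem described above (one very precise study absorbing almost all of the inverse-variance weight) can be illustrated with a capped weighting scheme. The abstract is truncated before it specifies the authors' estimator, so the cap-and-redistribute rule below is only one plausible reading, and the cap value is an illustrative choice.

```python
# Inverse-variance weights, then a capped variant: truncate any normalized
# weight above `cap` and redistribute the excess over the remaining studies.
import numpy as np

def cap_weights(variances, cap):
    w = 1.0 / np.asarray(variances, float)
    w /= w.sum()                                  # standard inverse-variance weights
    capped = np.zeros(len(w), bool)
    while ((w > cap + 1e-12) & ~capped).any():
        capped |= w > cap + 1e-12
        w[capped] = cap                           # pin dominant studies at the cap
        free = ~capped
        w[free] *= (1.0 - cap * capped.sum()) / w[free].sum()
    return w

variances = [0.02, 0.5, 0.6, 0.8]                 # one very precise, dominant study
print(cap_weights(variances, cap=1.0).round(3))   # plain inverse-variance weights
print(cap_weights(variances, cap=0.4).round(3))   # dominance limited to 0.4
```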
Jones, Megan; Grilo, Carlos M.; Masheb, Robin M.; White, Marney A.
2009-01-01
Objective This study examined psychological and behavioral correlates of weight status perception in 173 class II obese adult community volunteers. Method Participants completed the EDE-Q, TFEQ, Beck Depression Inventory, and Rosenberg Self-Esteem Scale online. Key items assessed dieting frequency, weight history, and perceived current weight status (normal weight, overweight, or obese). Actual weight status was determined using NIDDK/CDC classification schemes. Results Among class II obese participants, 50.9% incorrectly classified their weight as overweight versus obese, while 49.1% accurately perceived their weight status as obese. Inaccurate participants reported significantly less binge eating and less eating disorder psychopathology. Despite similar BMI, inaccurate participants reported less distress regarding overeating and loss of control over eating. Discussion Our findings suggest that obesity status under-estimation is associated with less eating disorder psychopathology. Under-estimation of obesity status may exacerbate risk for negative health consequences due to a failure to recognize and respond to excess weight. PMID:19718673
Accurate body composition measures from whole-body silhouettes
Xie, Bowen; Avila, Jesus I.; Ng, Bennett K.; Fan, Bo; Loo, Victoria; Gilsanz, Vicente; Hangartner, Thomas; Kalkwarf, Heidi J.; Lappe, Joan; Oberfield, Sharon; Winer, Karen; Zemel, Babette; Shepherd, John A.
2015-01-01
Purpose: Obesity and its consequences, such as diabetes, are global health issues that burden about 171 × 10^6 adult individuals worldwide. Fat mass index (FMI, kg/m2), fat-free mass index (FFMI, kg/m2), and percent fat mass may be useful to evaluate under- and overnutrition and muscle development in a clinical or research environment. This proof-of-concept study tested whether frontal whole-body silhouettes could be used to accurately measure body composition parameters using active shape modeling (ASM) techniques. Methods: Binary shape images (silhouettes) were generated from the skin outline of dual-energy x-ray absorptiometry (DXA) whole-body scans of 200 healthy children of ages from 6 to 16 yr. The silhouette shape variation from the average was described using an ASM, which computed principal components for unique modes of shape. Predictive models were derived from the modes for FMI, FFMI, and percent fat using stepwise linear regression. The models were compared to simple models using demographics alone [age, sex, height, weight, and body mass index z-scores (BMIZ)]. Results: The authors found that 95% of the shape variation of the sampled population could be explained using 26 modes. In most cases, the body composition variables could be predicted similarly between demographics-only and shape-only models. However, the combination of shape with demographics improved all estimates of boys and girls compared to the demographics-only model. The best prediction models for FMI, FFMI, and percent fat agreed with the actual measures with R^2 adj. (the coefficient of determination adjusted for the number of parameters used in the model equation) values of 0.86, 0.95, and 0.75 for boys and 0.90, 0.89, and 0.69 for girls, respectively. Conclusions: Whole-body silhouettes in children may be useful to derive estimates of body composition including FMI, FFMI, and percent fat. These results support the feasibility of measuring body composition variables from simple
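The pipeline in the abstract above (shape modes via principal components, then linear regression from mode scores to a body-composition target) can be sketched as follows. The shapes and the FMI target here are random stand-ins, and plain least squares replaces the paper's stepwise regression.

```python
# Rough active-shape-model sketch: PCA on centered shape vectors, keep the
# modes covering 95% of variance, regress a target (FMI) on the mode scores.
import numpy as np

rng = np.random.default_rng(0)
shapes = rng.standard_normal((200, 60))          # 200 subjects, flattened outlines
fmi = shapes[:, 0] * 0.5 + rng.standard_normal(200) * 0.1  # synthetic target

X = shapes - shapes.mean(axis=0)                 # center before PCA
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var = s ** 2 / (s ** 2).sum()
k = int(np.searchsorted(np.cumsum(var), 0.95)) + 1   # modes covering 95% variance
scores = X @ Vt[:k].T                            # per-subject mode scores

A = np.column_stack([np.ones(len(fmi)), scores]) # linear model: FMI ~ mode scores
coef, *_ = np.linalg.lstsq(A, fmi, rcond=None)
pred = A @ coef
r2 = 1 - ((fmi - pred) ** 2).sum() / ((fmi - fmi.mean()) ** 2).sum()
print(k, round(r2, 2))
```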
Weight status and the perception of body image in men
Gardner, Rick M
2014-01-01
Understanding the role of body size in relation to the accuracy of body image perception in men is an important topic because of the implications for avoiding and treating obesity, and it may serve as a potential diagnostic criterion for eating disorders. The early research on this topic produced mixed findings. About one-half of the early studies showed that obese men overestimated their body size, with the remaining half providing accurate estimates. Later, improvements in research technology and methodology provided a clearer indication of the role of weight status in body image perception. Research in our laboratory has also produced diverse findings, including that obese subjects sometimes overestimate their body size. However, when examining our findings across several studies, obese subjects had about the same level of accuracy in estimating their body size as normal-weight subjects. Studies in our laboratory also permitted the separation of sensory and nonsensory factors in body image perception. In all but one instance, no differences were found overall between the ability of obese and normal-weight subjects to detect overall changes in body size. Importantly, however, obese subjects are better at detecting changes in their body size when the image is distorted to be too thin as compared to too wide. Both obese and normal-weight men require about a 3%–7% change in the width of their body size in order to detect the change reliably. Correlations between a range of body mass index values and body size estimation accuracy indicated no relationship between these variables. Numerous studies in other laboratories asked men to place their body size into discrete categories, ranging from thin to obese. Researchers found that overweight and obese men underestimate their weight status, and that men are less accurate in their categorizations than are women. Cultural influences have been found to be important, with body size underestimations occurring in cultures
Informed Test Component Weighting.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
2001-01-01
Identifies and evaluates alternative methods for weighting tests. Presents formulas for composite reliability and validity as a function of component weights and suggests a rational process that identifies and considers trade-offs in determining weights. Discusses drawbacks to implicit weighting and explicit weighting and the difficulty of…
Fast Quaternion Attitude Estimation from Two Vector Measurements
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
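Since the abstract names TRIAD explicitly, a standard textbook sketch of that algorithm is given below: build an orthonormal triad from each vector pair and compose the attitude matrix. This is the classic deterministic solution, not the paper's new optimal quaternion estimator; the test rotation is an illustrative choice.

```python
# Standard TRIAD attitude determination from two vector pairs.
import numpy as np

def triad(b1, b2, r1, r2):
    """Attitude matrix A with b = A r, from body-frame (b) and
    reference-frame (r) vector pairs; b1/r1 is the trusted pair."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, b2) @ frame(r1, r2).T

# Sanity check with a known 90-degree rotation about the z axis.
A_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.3, 0.95])
A_est = triad(A_true @ r1, A_true @ r2, r1, r2)
print(np.allclose(A_est, A_true))  # True
```

With noise-free measurements TRIAD recovers the rotation exactly; with noisy measurements it favors the first (trusted) pair, which is why the optimal weighted estimators discussed in the abstract can do better.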