BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. The six tests were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test,” the modified “Physiologically Based Extraction Test,” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%).
NASA Astrophysics Data System (ADS)
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function, we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
Accurate pose estimation for forensic identification
NASA Astrophysics Data System (ADS)
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Astrophysics Data System (ADS)
Wheeler, K.; Knuth, K.; Castle, P.
2005-12-01
and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, whose need is increased by the quick evolution of space programs and by the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance at the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory, and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, a consolidation can be acquired through a specific analysis activity involving several techniques and implying additional effort and time. This empirical approach thus yields approximate values (i.e., not necessarily accurate or consistent), inducing some inaccuracy in the results and, consequently, difficulties in ranking the performance of multiple options, as well as longer processing times. This is a classical harsh fact of preliminary design system studies, insufficiently discussed to date. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast, and easy-to-use weight or mass fraction prediction method. Additionally, the latter should allow for a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator
Ultrasound versus Clinical Examination to Estimate Fetal Weight at Term
Lanowski, Jan-Simon; Lanowski, Gabriele; Schippert, Cordula; Drinkut, Kristina; Hillemanns, Peter; Staboulidou, Ismini
2017-01-01
Introduction At term, fetal weight estimation is an important factor for decisions about the delivery mode and the timing of labor induction. This study aimed to compare the accuracy of abdominal palpation with that of ultrasound performed by different examiners to estimate fetal weight. The study investigated whether differences in the examinersʼ training affected fetal weight estimates. The accuracy of the weight estimates made for fetuses with extreme birth weights was also evaluated. Finally, the accuracies of Johnsonʼs method and of Insler and Bernsteinʼs formula for estimating fetal weight were compared with those of the other two methods. Methods This prospective study included singleton pregnancies between 37 weeks of gestation and 12 days post-term planned for vaginal delivery or cesarean section. Ultrasound and abdominal palpation using Leopoldʼs maneuvers were performed by examiners with different levels of professional experience. Fetal weight was additionally estimated using Insler and Bernsteinʼs formula and Johnsonʼs method. Statistical analysis calculated the accuracy of fetal weight estimates for the different examiners and the four methods. Results A total of 204 women were included in the analysis. Trained ultrasound examiners were the most accurate at estimating fetal weight compared with all other examiners. The comparison of all four methods showed that fetal weight was assessed most accurately with ultrasound. No learning curve could be established. BMI and advanced gestational age affected the accuracy of the estimated weight. The analysis showed that a greater deviation between estimated weight and actual weight occurred with all four methods for fetuses at either end of the extremes of fetal weight, i.e., with very low or very high birth weights. Conclusion Fetal weight should be estimated using ultrasound. Good ultrasound training is essential. PMID:28392581
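For reference, commonly cited textbook forms of the two clinical formulas named in the abstract can be sketched as below. These are illustrative versions, not necessarily the exact variants used in the study; the station offset n in Johnson's method depends on fetal descent, and both formulas take lengths in centimeters and return grams.

```python
# Hedged sketch of two clinical fetal-weight formulas (textbook forms;
# the study above may use variants of either).

def johnson_weight(fundal_height_cm, station_above_spines=True):
    """Johnson's method: (SFH - n) * 155 g, with n = 12 when the
    presenting part is above the ischial spines, 11 at or below."""
    n = 12 if station_above_spines else 11
    return (fundal_height_cm - n) * 155.0

def insler_bernstein_weight(fundal_height_cm, abdominal_girth_cm):
    """Insler and Bernstein: fundal height times abdominal girth, in grams."""
    return fundal_height_cm * abdominal_girth_cm

# Hypothetical term measurements: SFH 34 cm, abdominal girth 95 cm.
est_johnson = johnson_weight(34, station_above_spines=True)
est_insler = insler_bernstein_weight(34, 95)
```

Both are palpation-based surrogates; the abstract's conclusion is that ultrasound outperforms them.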
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.
Preparing Rapid, Accurate Construction Cost Estimates with a Personal Computer.
ERIC Educational Resources Information Center
Gerstel, Sanford M.
1986-01-01
An inexpensive and rapid method for preparing accurate cost estimates of construction projects in a university setting, using a personal computer, purchased software, and one estimator, is described. The case against defined estimates, the rapid estimating system, and adjusting standard unit costs are discussed. (MLW)
Calculating weighted estimates of peak streamflow statistics
Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.
2012-01-01
According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
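Under the independence assumption the abstract describes, combining two estimates by their variances is the classic inverse-variance weighting. A minimal sketch with illustrative numbers (not values from the publication):

```python
def weighted_estimate(x_site, var_site, x_reg, var_reg):
    """Combine an at-site estimate with a regional regression estimate.

    Assuming the two estimates are independent, weighting each by the
    inverse of its variance gives the minimum-variance combination
    (a standard result; the guideline procedure may differ in detail).
    """
    w_site = 1.0 / var_site
    w_reg = 1.0 / var_reg
    x_w = (w_site * x_site + w_reg * x_reg) / (w_site + w_reg)
    var_w = 1.0 / (w_site + w_reg)  # variance of the weighted estimate
    return x_w, var_w

# Hypothetical 1-percent AEP flow statistics (log units) and variances:
xw, vw = weighted_estimate(3.20, 0.04, 3.05, 0.08)
```

The weighted estimate always has smaller variance than either input, which is the point of the procedure.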
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
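The two benchmark estimators named in the abstract, plus a simplified convex-combination stand-in for the adaptive family, can be sketched as follows. The data are hypothetical, and the single tuning parameter t is a simplification of the paper's random-effects estimators.

```python
import numpy as np

# Hypothetical stratified sample: known population weights W_h,
# sample stratum means ybar_h, and their estimated variances v_h.
W = np.array([0.5, 0.3, 0.15, 0.05])      # known sampling-frame weights
ybar = np.array([4.1, 3.8, 4.4, 6.0])     # sample stratum means
v = np.array([0.02, 0.03, 0.05, 0.40])    # Var(ybar_h) estimates

# Benchmark estimators from the abstract:
theta_design = np.sum(W * ybar)                    # heterogeneous case
theta_prec = np.sum(ybar / v) / np.sum(1.0 / v)    # homogeneity case

def adaptive(t):
    """Shrink stratum means toward the precision-weighted mean by t in
    [0, 1]; t = 1 recovers the design-weighted sum, t = 0 the
    precision-weighted mean (a sketch of the adaptive idea only)."""
    shrunk = t * ybar + (1 - t) * theta_prec
    return np.sum(W * shrunk)
```

In the paper, the tuning parameter is chosen to minimize a design-based estimate of MSE rather than fixed in advance.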
Weighted conditional least-squares estimation
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.
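A minimal two-stage sketch of the idea for a branching process with immigration, assuming Poisson offspring and Poisson immigration so the conditional variance is m·X(t−1) + λ; the paper's general theory covers far more than this toy case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a subcritical branching process: conditional mean of X_t is
# m*X_{t-1} + lam, and with Poisson offspring/immigration the
# conditional variance is also m*X_{t-1} + lam.
m_true, lam_true, T = 0.6, 2.0, 5000
X = np.empty(T + 1)
X[0] = 5
for t in range(T):
    X[t + 1] = rng.poisson(m_true * X[t]) + rng.poisson(lam_true)

x_prev, x_curr = X[:-1], X[1:]
A = np.column_stack([x_prev, np.ones_like(x_prev)])

# Stage 1: ordinary conditional least squares (unweighted).
cls_est, *_ = np.linalg.lstsq(A, x_curr, rcond=None)

# Stage 2: reweight by inverse estimated conditional variance,
# plugging in the stage-1 estimates (a sketch, not the full theory).
w = 1.0 / np.maximum(cls_est[0] * x_prev + cls_est[1], 1e-6)
sw = np.sqrt(w)
wcls_est, *_ = np.linalg.lstsq(A * sw[:, None], x_curr * sw, rcond=None)
```

The weighted fit is the "estimated generalized least-squares" flavor mentioned at the end of the abstract.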
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Scale requirements for accurate... PROCEDURES AND REQUIREMENTS FOR ACCURATE WEIGHTS § 442.3 Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection. (a) All scales used to determine the net weight of meat...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Scale requirements for accurate... PROCEDURES AND REQUIREMENTS FOR ACCURATE WEIGHTS § 442.3 Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection. (a) All scales used to determine the net weight of meat...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Scale requirements for accurate... PROCEDURES AND REQUIREMENTS FOR ACCURATE WEIGHTS § 442.3 Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection. (a) All scales used to determine the net weight of meat...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Scale requirements for accurate... PROCEDURES AND REQUIREMENTS FOR ACCURATE WEIGHTS § 442.3 Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection. (a) All scales used to determine the net weight of meat...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Scale requirements for accurate... PROCEDURES AND REQUIREMENTS FOR ACCURATE WEIGHTS § 442.3 Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection. (a) All scales used to determine the net weight of meat...
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Analytical Fuselage and Wing Weight Estimation of Transport Aircraft
NASA Technical Reports Server (NTRS)
Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.
1996-01-01
A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and corresponding actual weights were determined.
Accurate genome relative abundance estimation based on shotgun metagenomic reads.
Xia, Li C; Cram, Jacob A; Chen, Ting; Fuhrman, Jed A; Sun, Fengzhu
2011-01-01
Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants and often yield biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute Genome Relative Abundance of microbial communities using the Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based) even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
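The core mixture-model idea can be sketched with a toy EM loop. The assignment probabilities and genome sizes below are made up for illustration; GRAMMy's actual model handles read distributions and richer assignment inputs.

```python
import numpy as np

# p[i, j] = probability that read i came from genome j (in practice
# derived from mapping or alignment scores; made up here), and L[j]
# is a hypothetical genome length.
p = np.array([[0.9, 0.1],
              [0.6, 0.4],
              [0.2, 0.8],
              [0.1, 0.9],
              [0.5, 0.5]])
L = np.array([2.0e6, 4.0e6])

a = np.full(p.shape[1], 1.0 / p.shape[1])  # read-level mixing proportions
for _ in range(200):                        # EM iterations
    r = a * p                               # E-step: responsibilities
    r /= r.sum(axis=1, keepdims=True)
    a = r.mean(axis=0)                      # M-step: update proportions

# Correct read-level proportions for genome size: longer genomes emit
# more reads, so divide by length before renormalizing.
abundance = (a / L) / (a / L).sum()
```

The genome-size correction is exactly the "genome size biases" term the abstract says the framework models explicitly.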
Accurate absolute GPS positioning through satellite clock error estimation
NASA Astrophysics Data System (ADS)
Han, S.-C.; Kwon, J. H.; Jekeli, C.
2001-05-01
An algorithm for very accurate absolute positioning through Global Positioning System (GPS) satellite clock estimation has been developed. Using International GPS Service (IGS) precise orbits and measurements, GPS clock errors were estimated at 30-s intervals. Compared to values determined by the Jet Propulsion Laboratory, the agreement was at the level of about 0.1 ns (3 cm). The clock error estimates were then applied to an absolute positioning algorithm in both static and kinematic modes. For the static case, an IGS station was selected and the coordinates were estimated every 30 s. The estimated absolute position coordinates and the known values had a mean difference of up to 18 cm with standard deviation less than 2 cm. For the kinematic case, data obtained every second from a GPS buoy were tested and the result from the absolute positioning was compared to a differential GPS (DGPS) solution. The mean differences between the coordinates estimated by the two methods are less than 40 cm and the standard deviations are less than 25 cm. It was verified that this poorer standard deviation on 1-s position results is due to the clock error interpolation from 30-s estimates with Selective Availability (SA). After SA was turned off, higher-rate clock error estimates (such as 1 s) could be obtained by a simple interpolation with negligible corruption. Therefore, the proposed absolute positioning technique can be used to within a few centimeters' precision at any rate by estimating 30-s satellite clock errors and interpolating them.
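The closing claim, that with SA off, higher-rate clock values can be recovered from 30-s estimates by simple interpolation, amounts to the following (illustrative clock values, not IGS data):

```python
import numpy as np

# With Selective Availability off, satellite clock error varies smoothly,
# so 30-s estimates can be densified to 1-s epochs by linear interpolation.
t30 = np.arange(0, 121, 30)                   # estimation epochs [s]
clk30 = np.array([5.0, 5.3, 5.1, 5.6, 5.4])   # clock error [ns], made up

t1 = np.arange(0, 121, 1)                     # desired 1-s epochs
clk1 = np.interp(t1, t30, clk30)              # interpolated clock errors
```

With SA on, the abstract notes, the same interpolation corrupts the 1-s solution, because the dithered clock is not smooth between 30-s samples.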
Accurate measure by weight of liquids in industry
Muller, M.R.
1992-12-12
This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
Calorie Estimation in Adults Differing in Body Weight Class and Weight Loss Status
Brown, Ruth E; Canning, Karissa L; Fung, Michael; Jiandani, Dishay; Riddell, Michael C; Macpherson, Alison K; Kuk, Jennifer L
2016-01-01
Purpose Ability to accurately estimate calories is important for weight management, yet few studies have investigated whether individuals can accurately estimate calories during exercise, or in a meal. The objective of this study was to determine if accuracy of estimation of moderate or vigorous exercise energy expenditure and calories in food is associated with body weight class or weight loss status. Methods Fifty-eight adults who were either normal weight (NW) or overweight (OW), and either attempting (WL) or not attempting weight loss (noWL), exercised on a treadmill at a moderate (60% HRmax) and a vigorous intensity (75% HRmax) for 25 minutes. Subsequently, participants estimated the number of calories they expended through exercise, and created a meal that they believed to be calorically equivalent to the exercise energy expenditure. Results The mean difference between estimated and measured calories in exercise and food did not differ within or between groups following moderate exercise. Following vigorous exercise, OW-noWL overestimated energy expenditure by 72%, and overestimated the calories in their food by 37% (P<0.05). OW-noWL also significantly overestimated exercise energy expenditure compared to all other groups (P<0.05), and significantly overestimated calories in food compared to both WL groups (P<0.05). However, among all groups there was a considerable range of over- and underestimation (−280 kcal to +702 kcal), as reflected by the large and statistically significant absolute error in calorie estimation of exercise and food. Conclusion There was a wide range of under- and overestimation of calories during exercise and in a meal. Error in calorie estimation may be greater in overweight adults who are not attempting weight loss. PMID:26469988
Sonography in Fetal Birth Weight Estimation
ERIC Educational Resources Information Center
Akinola, R. A.; Akinola, O. I.; Oyekan, O. O.
2009-01-01
The estimation of fetal birth weight is an important factor in the management of high risk pregnancies. The information and knowledge gained through this study, comparing a combination of various fetal parameters using computer assisted analysis, will help the obstetrician to screen the high risk pregnancies, monitor the growth and development,…
Fast and accurate estimation for astrophysical problems in large databases
NASA Astrophysics Data System (ADS)
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parametrization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems.
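A diffusion-map-style sketch of the connectivity-preserving parametrization described above: build a Gaussian affinity over the data, normalize it into a random walk, and embed with the leading nontrivial eigenvectors. The thesis' exact construction may differ; this is the standard recipe for a Markov-random-walk embedding, on random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))                 # hypothetical data set

# Pairwise squared distances and a Gaussian affinity matrix.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)                           # simple bandwidth heuristic
W = np.exp(-d2 / eps)

# Symmetric normalization shares eigenvalues with the random-walk
# matrix D^{-1} W, and its top eigenvalue is exactly 1.
deg = W.sum(axis=1)
S = W / np.sqrt(deg[:, None] * deg[None, :])
evals, evecs = np.linalg.eigh(S)              # eigenvalues ascending

# Recover random-walk eigenvectors, drop the trivial constant
# direction, and keep the next two as a 2-D parametrization.
phi = evecs / np.sqrt(deg[:, None])
embedding = phi[:, [-2, -3]] * evals[[-2, -3]]
```

The retained coordinates preserve random-walk connectivity, which is what makes them usable as prototypes, regression eigenfunctions, or classifier features as the abstract describes.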
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Scales; accurate weights, repairs... AGRICULTURE REGULATIONS UNDER THE PACKERS AND STOCKYARDS ACT Services § 201.71 Scales; accurate weights, repairs, adjustments or replacements after inspection. (a) All scales used by stockyard owners,...
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Scales; accurate weights, repairs... AGRICULTURE REGULATIONS UNDER THE PACKERS AND STOCKYARDS ACT Services § 201.71 Scales; accurate weights, repairs, adjustments or replacements after inspection. (a) All scales used by stockyard owners,...
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Scales; accurate weights, repairs... AGRICULTURE REGULATIONS UNDER THE PACKERS AND STOCKYARDS ACT Services § 201.71 Scales; accurate weights, repairs, adjustments or replacements after inspection. (a) All scales used by stockyard owners,...
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Scales; accurate weights, repairs... AGRICULTURE REGULATIONS UNDER THE PACKERS AND STOCKYARDS ACT Services § 201.71 Scales; accurate weights, repairs, adjustments or replacements after inspection. (a) All scales used by stockyard owners,...
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Scales; accurate weights, repairs... AGRICULTURE REGULATIONS UNDER THE PACKERS AND STOCKYARDS ACT Services § 201.71 Scales; accurate weights, repairs, adjustments or replacements after inspection. (a) All scales used by stockyard owners,...
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Convex weighting criteria for speaking rate estimation
Jiao, Yishan; Berisha, Visar; Tu, Ming; Liss, Julie
2015-01-01
Speaking rate estimation directly from the speech waveform is a long-standing problem in speech signal processing. In this paper, we pose the speaking rate estimation problem as that of estimating a temporal density function whose integral over a given interval yields the speaking rate within that interval. In contrast to many existing methods, we avoid the more difficult task of detecting individual phonemes within the speech signal and we avoid heuristics such as thresholding the temporal envelope to estimate the number of vowels. Rather, the proposed method aims to learn an optimal weighting function that can be directly applied to time-frequency features in a speech signal to yield a temporal density function. We propose two convex cost functions for learning the weighting functions and an adaptation strategy to customize the approach to a particular speaker using minimal training. The algorithms are evaluated on the TIMIT corpus, on a dysarthric speech corpus, and on the ICSI Switchboard spontaneous speech corpus. Results show that the proposed methods outperform three competing methods on both healthy and dysarthric speech. In addition, for spontaneous speech rate estimation, the results show a high correlation between the estimated speaking rate and ground truth values. PMID:26167516
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.
Towards accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2016-12-13
Reliable estimates of animal density are fundamental to our understanding of ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation biology since wildlife authorities rely on these figures to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging species such as carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores. African lions (Panthera leo) provide an excellent example: although abundance indices have been shown to produce poor inferences, they continue to be used to estimate lion density and inform management and policy. In this study we adapt a Bayesian spatially explicit capture-recapture model to estimate lion density in the Maasai Mara National Reserve (MMNR) and surrounding conservancies in Kenya. We utilize sightings data from a three-month survey period to produce statistically rigorous spatial density estimates. Overall posterior mean lion density was estimated to be 16.85 (posterior standard deviation = 1.30) lions over one year of age per 100 km² with a sex ratio of 2.2♀:1♂. We argue that such methods should be developed, improved and favored over less reliable methods such as track and call-up surveys. We caution against trend analyses based on surveys of differing reliability and call for a unified framework to assess lion numbers across their range so that better-informed management and policy decisions can be made. This article is protected by copyright. All rights reserved.
Accurate estimators of correlation functions in Fourier space
NASA Astrophysics Data System (ADS)
Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.
2016-08-01
Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher order interpolation kernels than the standard Cloud-In-Cell algorithm results in significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
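The two ingredients the abstract highlights, Cloud-In-Cell mass assignment and half-cell interlacing, can be sketched in 1D. This is a simplified illustration rather than the authors' code: normalization and window deconvolution are omitted, and the function names are mine.

```python
import numpy as np

def cic_assign(positions, ngrid, boxsize):
    """1D Cloud-In-Cell: each particle's unit mass is split linearly
    between its two nearest grid points (periodic box)."""
    delta = np.zeros(ngrid)
    x = np.asarray(positions) / boxsize * ngrid   # positions in grid units
    i = np.floor(x).astype(int)
    frac = x - i
    np.add.at(delta, i % ngrid, 1.0 - frac)       # unbuffered scatter-add
    np.add.at(delta, (i + 1) % ngrid, frac)
    return delta

def interlaced_field(positions, ngrid, boxsize):
    """Interlacing: average the Fourier transform of the grid with that of
    a copy shifted by half a cell, phase-correcting the shifted copy so
    the fundamental images align; odd aliasing images then cancel."""
    d1 = cic_assign(positions, ngrid, boxsize)
    shift = boxsize / (2 * ngrid)
    d2 = cic_assign((np.asarray(positions) + shift) % boxsize, ngrid, boxsize)
    k = np.fft.fftfreq(ngrid)                     # cycles per grid cell
    phase = np.exp(1j * np.pi * k)                # undo the half-cell shift
    return 0.5 * (np.fft.fft(d1) + np.fft.fft(d2) * phase)
```

The paper's power spectrum estimator would then square the (window-corrected) modulus of these interlaced Fourier amplitudes.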
How utilities can achieve more accurate decommissioning cost estimates
Knight, R.
1999-07-01
The number of commercial nuclear power plants that are undergoing decommissioning coupled with the economic pressure of deregulation has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis places even greater concern as to the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in the negotiations for transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities, such as Cube and Sphere (CUSP) imaging, provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
Estimation of bone permeability using accurate microstructural measurements.
Beno, Thoma; Yoon, Young-June; Cowin, Stephen C; Fritton, Susannah P
2006-01-01
While interstitial fluid flow is necessary for the viability of osteocytes, it is also believed to play a role in bone's mechanosensory system by shearing bone cell membranes or causing cytoskeleton deformation and thus activating biochemical responses that lead to the process of bone adaptation. However, the fluid flow properties that regulate bone's adaptive response are poorly understood. In this paper, we present an analytical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity in bone. First, we estimate the total number of canaliculi emanating from each osteocyte lacuna based on published measurements from parallel-fibered shaft bones of several species (chick, rabbit, bovine, horse, dog, and human). Next, we determine the local three-dimensional permeability of the lacunar-canalicular porosity for these species using recent microstructural measurements and adapting a previously developed model. Results demonstrated that the number of canaliculi per osteocyte lacuna ranged from 41 for human to 115 for horse. Permeability coefficients were found to be different in three local principal directions, indicating local orthotropic symmetry of bone permeability in parallel-fibered cortical bone for all species examined. For the range of parameters investigated, the local lacunar-canalicular permeability varied more than three orders of magnitude, with the osteocyte lacunar shape and size along with the 3-D canalicular distribution determining the degree of anisotropy of the local permeability. This two-step theoretical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity will be useful for accurate quantification of interstitial fluid movement in bone.
Hwang, Beomsoo; Jeon, Doyoung
2015-04-09
In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, which measures the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately under both relaxed and activated muscle conditions.
Weight Estimation Tool for Children Aged 6 to 59 Months in Limited-Resource Settings
2016-01-01
Importance A simple, reliable anthropometric tool for rapid estimation of weight in children would be useful in limited-resource settings where current weight estimation tools are not uniformly reliable, nearly all global under-five mortality occurs, severe acute malnutrition is a significant contributor in approximately one-third of under-five mortality, and a weight scale may not be immediately available in emergencies to first-response providers. Objective To determine the accuracy and precision of mid-upper arm circumference (MUAC) and height as weight estimation tools in children under five years of age in low-to-middle income countries. Design This was a retrospective observational study. Data were collected in 560 nutritional surveys during 1992–2006 using a modified Expanded Program of Immunization two-stage cluster sample design. Setting Locations with high prevalence of acute and chronic malnutrition. Participants A total of 453,990 children met inclusion criteria (age 6–59 months; weight ≤ 25 kg; MUAC 80–200 mm) and exclusion criteria (bilateral pitting edema; biologically implausible weight-for-height z-score (WHZ), weight-for-age z-score (WAZ), and height-for-age z-score (HAZ) values). Exposures Weight was estimated using Broselow Tape, Hong Kong formula, and database MUAC alone, height alone, and height and MUAC combined. Main Outcomes and Measures Mean percentage difference between true and estimated weight, proportion of estimates accurate to within ± 25% and ± 10% of true weight, weighted Kappa statistic, and Bland-Altman bias were reported as measures of tool accuracy. Standard deviation of mean percentage difference and Bland-Altman 95% limits of agreement were reported as measures of tool precision. Results Database height was a more accurate and precise predictor of weight compared to Broselow Tape 2007 [B], Broselow Tape 2011 [A], and MUAC. Mean percentage difference between true and estimated weight was +0.49% (SD = 10
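The accuracy and precision measures this record relies on (mean percentage difference between true and estimated weight, mean absolute percentage error, and the proportion of estimates within a tolerance of true weight) are straightforward to compute; a minimal sketch, with function and variable names of my own choosing:

```python
def accuracy_metrics(true_w, est_w, tol=10.0):
    """Return (mean percentage error, mean absolute percentage error,
    proportion of estimates within +/- tol percent of true weight)."""
    pe = [(e - t) / t * 100.0 for t, e in zip(true_w, est_w)]
    mpe = sum(pe) / len(pe)                       # signed bias, in percent
    mape = sum(abs(p) for p in pe) / len(pe)      # magnitude of error
    within = sum(abs(p) <= tol for p in pe) / len(pe)
    return mpe, mape, within
```

The signed mean percentage error captures systematic over- or under-estimation, while the standard deviation of the same per-estimate errors would give the precision measure the study reports.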
Fast and Accurate Learning When Making Discrete Numerical Estimates
Sanborn, Adam N.; Beierholm, Ulrik R.
2016-01-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
On Relevance Weight Estimation and Query Expansion.
ERIC Educational Resources Information Center
Robertson, S. E.
1986-01-01
A Bayesian argument is used to suggest modifications to the Robertson and Jones relevance weighting formula to accommodate the addition to the query of terms taken from the relevant documents identified during the search. (Author)
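The Robertson and Jones relevance weight referred to above is usually written with a 0.5 correction added to each cell of the term-relevance contingency table; a sketch of that standard form (the variable names are mine):

```python
import math

def rsj_weight(N, R, n, r):
    """Robertson-Sparck Jones relevance weight with the standard
    0.5 smoothing.  N: total documents; R: known relevant documents;
    n: documents containing the term; r: relevant documents containing it."""
    return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                    ((R - r + 0.5) * (n - r + 0.5)))
```

With no relevance information (R = r = 0) the weight reduces to an IDF-like quantity, which is why relevance feedback, adding terms from identified relevant documents as the abstract describes, fits naturally into this weighting scheme.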
How accurate are physical property estimation programs for organosilicon compounds?
Boethling, Robert; Meylan, William
2013-11-01
Organosilicon compounds are important in chemistry and commerce, and nearly 10% of new chemical substances for which premanufacture notifications are processed by the US Environmental Protection Agency (USEPA) contain silicon (Si). Yet, remarkably few measured values are submitted for key physical properties, and the accuracy of estimation programs such as the Estimation Programs Interface (EPI) Suite and the SPARC Performs Automated Reasoning in Chemistry (SPARC) system is largely unknown. To address this issue, the authors developed an extensive database of measured property values for organic compounds containing Si and evaluated the performance of no-cost estimation programs for several properties of importance in environmental assessment. These included melting point (mp), boiling point (bp), vapor pressure (vp), water solubility, n-octanol/water partition coefficient (log KOW ), and Henry's law constant. For bp and the larger of 2 vp datasets, SPARC, MPBPWIN, and the USEPA's Toxicity Estimation Software Tool (TEST) had similar accuracy. For log KOW and water solubility, the authors tested 11 and 6 no-cost estimators, respectively. The best performers were Molinspiration and WSKOWWIN, respectively. The TEST's consensus mp method outperformed that of MPBPWIN by a considerable margin. Generally, the best programs estimated the listed properties of diverse organosilicon compounds with accuracy sufficient for chemical screening. The results also highlight areas where improvement is most needed.
Gaining Efficiency via Weighted Estimators for Multivariate Failure Time Data*
Fan, Jianqing; Zhou, Yong; Cai, Jianwen; Chen, Min
2009-06-01
Multivariate failure time data arise frequently in survival analysis. A commonly used technique is the working independence estimator for marginal hazard models. Two natural questions are how to improve the efficiency of the working independence estimator and how to identify the situations under which such an estimator has high statistical efficiency. In this paper, three weighted estimators are proposed based on three different optimal criteria in terms of the asymptotic covariance of weighted estimators. Simplified closed-form solutions are found, which always outperform the working independence estimator. We also prove that the working independence estimator has high statistical efficiency, when asymptotic covariance of derivatives of partial log-likelihood functions is nearly exchangeable or diagonal. Simulations are conducted to compare the performance of the weighted estimator and working independence estimator. A data set from Busselton population health surveys is analyzed using the proposed estimators.
[Weighted estimation methods for multistage sampling survey data].
Hou, Xiao-Yan; Wei, Yong-Yue; Chen, Feng
2009-06-01
Multistage sampling techniques are widely applied in cross-sectional epidemiological studies, yet methods based on an independence assumption are still used to analyze such complex survey data. This paper introduces the application of weighted estimation methods for complex survey data. A brief overview of the basic theory is given, and a practical analysis then illustrates the weighted estimation algorithm on data from a stratified two-stage cluster sampling design. For multistage sampling survey data, weighted estimation can be used to obtain unbiased point estimates and more reasonable variance estimates, and thus to make proper statistical inference by correcting for the clustering, stratification, and unequal-probability effects.
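The weighted point estimate, and a first-stage ("ultimate cluster") variance approximation for it, can be sketched as follows. This is a textbook simplification for a single stratum with PSUs treated as sampled with replacement, not the paper's exact algorithm:

```python
from collections import defaultdict

def weighted_mean(y, w):
    """Hajek-type weighted estimate, with w_i the inverse inclusion
    probability (sampling weight) of unit i."""
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

def ultimate_cluster_se(y, w, psu):
    """Standard error of the weighted mean: collapse weighted residuals
    to PSU totals and use their between-cluster variability."""
    ybar = weighted_mean(y, w)
    totals = defaultdict(float)
    for yi, wi, p in zip(y, w, psu):
        totals[p] += wi * (yi - ybar)
    z = list(totals.values())
    m = len(z)
    zbar = sum(z) / m
    var_total = m / (m - 1) * sum((zi - zbar) ** 2 for zi in z)
    return var_total ** 0.5 / sum(w)
```

Ignoring the PSU structure (i.e., assuming independence, as the paper cautions against) typically understates this standard error when outcomes are correlated within clusters; a stratified design would sum the cluster-total variance within each stratum.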
Accurate feature detection and estimation using nonlinear and multiresolution analysis
NASA Astrophysics Data System (ADS)
Rudin, Leonid; Osher, Stanley
1994-11-01
A program for feature detection and estimation using nonlinear and multiscale analysis was completed. State-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author), and robust results were obtained in the presence of noise. Successful applications to numerous images of interest to DOD were made. In addition, a new market in the criminal justice field was developed, based in part on this work.
Accurate response surface approximations for weight equations based on structural optimization
NASA Astrophysics Data System (ADS)
Papila, Melih
Accurate weight prediction methods are vitally important for aircraft design optimization. Therefore, designers seek weight prediction techniques with low computational cost and high accuracy, and usually require a compromise between the two. The compromise can be achieved by combining stress analysis and response surface (RS) methodology. While stress analysis provides accurate weight information, RS techniques help to transmit effectively this information to the optimization procedure. The focus of this dissertation is structural weight equations in the form of RS approximations and their accuracy when fitted to results of structural optimizations that are based on finite element analyses. Use of RS methodology filters out the numerical noise in structural optimization results and provides a smooth weight function that can easily be used in gradient-based configuration optimization. In engineering applications RS approximations of low order polynomials are widely used, but the weight may not be modeled well by low-order polynomials, leading to bias errors. In addition, some structural optimization results may have high-amplitude errors (outliers) that may severely affect the accuracy of the weight equation. Statistical techniques associated with RS methodology are sought in order to deal with these two difficulties: (1) high-amplitude numerical noise (outliers) and (2) approximation model inadequacy. The investigation starts with reducing approximation error by identifying and repairing outliers. A potential reason for outliers in optimization results is premature convergence, and outliers of such nature may be corrected by employing different convergence settings. It is demonstrated that outlier repair can lead to accuracy improvements over the more standard approach of removing outliers. The adequacy of approximation is then studied by a modified lack-of-fit approach, and RS errors due to the approximation model are reduced by using higher order polynomials. In
Accurate tempo estimation based on harmonic + noise decomposition
NASA Astrophysics Data System (ADS)
Alonso, Miguel; Richard, Gael; David, Bertrand
2006-12-01
We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as tempo. Our proposal is validated using a manually annotated test-base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
Fast and Accurate Estimates of Divergence Times from Big Data.
Mello, Beatriz; Tao, Qiqing; Tamura, Koichiro; Kumar, Sudhir
2017-01-01
Ongoing advances in sequencing technology have led to an explosive expansion in the molecular data available for building increasingly larger and more comprehensive timetrees. However, Bayesian relaxed-clock approaches frequently used to infer these timetrees impose a large computational burden and discourage critical assessment of the robustness of inferred times to model assumptions, influence of calibrations, and selection of optimal data subsets. We analyzed eight large, recently published, empirical datasets to compare time estimates produced by RelTime (a non-Bayesian method) with those reported by using Bayesian approaches. We find that RelTime estimates are very similar to Bayesian approaches, yet RelTime requires orders of magnitude less computational time. This means that the use of RelTime will enable greater rigor in molecular dating, because faster computational speeds encourage more extensive testing of the robustness of inferred timetrees to prior assumptions (models and calibrations) and data subsets. Thus, RelTime provides a reliable and computationally thrifty approach for dating the tree of life using large-scale molecular datasets.
Weight estimation techniques for composite airplanes in general aviation industry
NASA Technical Reports Server (NTRS)
Paramasivam, T.; Horn, W. J.; Ritter, J.
1986-01-01
Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John
2016-01-01
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
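The two selection criteria used above, regression slope and coefficient of determination, come from an ordinary least-squares fit of in vivo relative bioavailability on in vitro bioaccessibility. A minimal sketch with hypothetical numbers (the study's actual data are not reproduced here):

```python
def ols(x, y):
    """Least-squares fit of y = a + b*x; returns (intercept, slope, r_squared)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical in vitro bioaccessibility (%) vs. in vivo relative
# bioavailability (%) for five soils -- illustrative values only:
bioaccessibility = [20.0, 35.0, 45.0, 60.0, 70.0]
bioavailability = [18.0, 33.0, 40.0, 55.0, 63.0]
intercept, slope, r2 = ols(bioaccessibility, bioavailability)
```

In the paper's sense, an in vitro test "performs very well" when this slope is positive and close to 1 and the coefficient of determination is high.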
Bioaccessibility tests accurately estimate bioavailability of lead to quail.
Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S
2016-09-01
Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of
Zhang, Xinyue; Lourenco, Daniela; Aguilar, Ignacio; Legarra, Andres; Misztal, Ignacy
2016-01-01
former. Manhattan plots had higher resolution with 5 and 100 QTL. Using a common weight for a window of 20 SNP that sums or averages the SNP variance enhances accuracy of predicting GEBV and provides accurate estimation of marker effects.
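The windowing scheme in this record, one common weight per 20-SNP window taken as either the sum or the average of the per-SNP variances, can be sketched generically (`snp_var` is my placeholder name for the estimated variance of each SNP effect, not the authors' notation):

```python
def window_weights(snp_var, window=20, mode="average"):
    """Assign a common weight to every SNP in each non-overlapping window:
    the sum or the average of the per-SNP variances within that window."""
    out = []
    for start in range(0, len(snp_var), window):
        block = snp_var[start:start + window]
        w = sum(block) if mode == "sum" else sum(block) / len(block)
        out.extend([w] * len(block))   # same weight for all SNPs in window
    return out
```

Smoothing per-SNP weights over windows in this way is what stabilizes the weights and, per the abstract, improves both GEBV prediction accuracy and marker-effect estimation.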
Comparative Study of Clinical and Sonographic Estimation of Foetal Weight at Term.
Bakshi, L; Begum, H A; Khan, I; Dey, S K; Bhattacharjee, M; Bakshi, M K; Dey, S; Habib, A; Barman, K K
2015-07-01
A cross-sectional comparative study was conducted at Dhaka National Medical College, Dhaka, from January to June 2012 to assess the accuracy of clinical and ultrasonographic estimation of foetal weight at term in our environment. Seventy-five pregnant women who fulfilled the inclusion criteria had their foetal weight estimated independently using clinical and ultrasonographic methods. Accuracy was determined by percentage error, absolute percentage error, and the proportion of estimates within ±10% of actual birth weight. Statistical analysis was done using the paired t-test, the Wilcoxon signed-rank test, and the chi-square test. The study sample had a mean actual birth weight of 2989.60 ± 408.76 gm (range 2310-4000 gm). Overall, the clinical method overestimated birth weight, while ultrasound underestimated it. The mean absolute percentage error of the clinical method was greater than that of the sonographic method, and the proportion of estimates within 10% of actual birth weight was lower for the clinical method (41.3%) than for the sonographic method (57.3%); the difference was not statistically significant. In the low-birth-weight (<2,500 gm) group, the mean absolute percentage error of sonographic estimates was significantly smaller. Significantly more sonographic estimates (75%) were within 10% of actual birth weight than clinical estimates (0%). No statistically significant difference was observed in any measure of accuracy for the normal birth-weight range of 2,500-<4,000 gm or in the macrosomic group (≥4,000 gm). Clinical estimation of birth weight is as accurate as routine ultrasonographic estimation, except in low-birth-weight babies.
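The three accuracy measures used in studies of this kind are easy to state in code. A minimal sketch with illustrative weights, not the study's data:

```python
def percentage_errors(estimated, actual):
    """Signed percentage error of each estimate."""
    return [100.0 * (e - a) / a for e, a in zip(estimated, actual)]

def mean_abs_percentage_error(estimated, actual):
    pe = percentage_errors(estimated, actual)
    return sum(abs(p) for p in pe) / len(pe)

def within_10_percent(estimated, actual):
    """Proportion of estimates within ±10% of actual birth weight."""
    pe = percentage_errors(estimated, actual)
    return sum(1 for p in pe if abs(p) <= 10.0) / len(pe)

actual = [2500.0, 3000.0, 3200.0, 3600.0]    # actual birth weights (gm)
clinical = [2800.0, 3150.0, 3100.0, 3900.0]  # clinical estimates (gm)
sono = [2400.0, 2950.0, 3150.0, 3500.0]      # sonographic estimates (gm)

print(mean_abs_percentage_error(clinical, actual))
print(within_10_percent(sono, actual))
```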
NASA Technical Reports Server (NTRS)
Sensmeier, Mark D.; Samareh, Jamshid A.
2005-01-01
An approach is proposed for the application of rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.
Estimating the weight of generally configured dual wing systems
NASA Technical Reports Server (NTRS)
Cronin, D. L.; Somnay, R. J.
1985-01-01
Formulas available for the weight estimation of monoplane wings are not appropriate for generally configured dual wing systems. This paper describes a method that simultaneously generates a structural weight estimate and a fully stressed, quasi-optimal structure for a model of a dual wing system. The method is fast and inexpensive, making it ideally suited to preliminary design. To illustrate the method, a dual wing system and a conventional wing system are sized. Numerical computation is shown to be suitably fast for both cases, and convergence to a final configuration is quite rapid in both. To illustrate the validity of the method, a conventional wing is sized and its weight obtained by the present method is compared with its weight determined by a reputable weight estimation formula. The results are shown to be very close.
Pros, Cons, and Alternatives to Weight Based Cost Estimating
NASA Technical Reports Server (NTRS)
Joyner, Claude R.; Lauriem, Jonathan R.; Levack, Daniel H.; Zapata, Edgar
2011-01-01
Many cost estimating tools use weight as a major parameter in projecting the cost. This is often combined with modifying factors such as complexity, technical maturity of design, environment of operation, etc. to increase the fidelity of the estimate. For a set of conceptual designs, all meeting the same requirements, increased weight can be a major driver in increased cost. However, once a design is fixed, increased weight generally decreases cost, while decreased weight generally increases cost - and the relationship is not linear. Alternative approaches to estimating cost without using weight (except perhaps for materials costs) have been attempted to try to produce a tool usable throughout the design process - from concept studies through development. This paper will address the pros and cons of using weight-based models for cost estimating, using liquid rocket engines as the example. It will then examine approaches that minimize the impact of weight-based cost estimating. The Rocket Engine Cost Model (RECM) is an attribute-based model developed internally by Pratt & Whitney Rocketdyne for NASA. RECM will be presented primarily to show a successful method to use design and programmatic parameters instead of weight to estimate both design and development costs and production costs. An operations model developed by KSC, the Launch and Landing Effects Ground Operations model (LLEGO), will also be discussed.
NASA Technical Reports Server (NTRS)
Grissom, D. S.; Schneider, W. C.
1971-01-01
The determination of a baseline (minimum-weight) design for the primary structure of the living quarters modules in an earth-orbiting space base was investigated. Although the design is preliminary in nature, the supporting analysis is sufficiently thorough to provide a reasonably accurate weight estimate of the major components that are considered to comprise the structural weight of the space base.
Classification image weights and internal noise level estimation
NASA Technical Reports Server (NTRS)
Ahumada, Albert J Jr
2002-01-01
For the linear discrimination of two stimuli in white Gaussian noise in the presence of internal noise, a method is described for estimating linear classification weights from the sum of noise images segregated by stimulus and response. The recommended method for combining the two response images for the same stimulus is to difference the average images. Weights are derived for combining images over stimuli and observers. Methods for estimating the level of internal noise are described with emphasis on the case of repeated presentations of the same noise sample. Simple tests for particular hypotheses about the weights are shown based on observer agreement with a noiseless version of the hypothesis.
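A toy numeric sketch of the recommended combination rule: for a given stimulus, average the noise images separately by the observer's response, then difference the two averages to estimate the classification-image weights. The 2x2 "images" and values below are invented for illustration.

```python
def mean_image(images):
    """Pixelwise mean of a list of equal-sized 2-D images."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]

def diff_image(a, b):
    """Pixelwise difference a - b."""
    return [[a[r][c] - b[r][c] for c in range(len(a[0]))]
            for r in range(len(a))]

# Noise images on trials with the same stimulus, split by response:
resp_A = [[[0.2, -0.1], [0.4, 0.0]], [[0.0, 0.1], [0.2, -0.2]]]
resp_B = [[[-0.3, 0.2], [-0.1, 0.1]], [[-0.1, 0.0], [-0.3, 0.3]]]

# Classification-image weight estimate for this stimulus:
weights = diff_image(mean_image(resp_A), mean_image(resp_B))
print(weights)
```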
An evaluation of study design for estimating a time-of-day noise weighting
NASA Technical Reports Server (NTRS)
Fields, J. M.
1986-01-01
The relative importance of daytime and nighttime noise of the same noise level is represented by a time-of-day weight in noise annoyance models. The high correlations between daytime and nighttime noise were regarded as a major reason that previous social surveys of noise annoyance could not accurately estimate the value of the time-of-day weight. Study designs which would reduce the correlation between daytime and nighttime noise are described. It is concluded that designs based on short term variations in nighttime noise levels would not be able to provide valid measures of response to nighttime noise. The accuracy of the estimate of the time-of-day weight is predicted for designs which are based on long term variations in nighttime noise levels. For these designs it is predicted that it is not possible to form satisfactorily precise estimates of the time-of-day weighting.
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward-facing step, all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
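The weighted interpolation at the heart of the method can be sketched in a few lines: each grid node averages the properties of surrounding simulated molecules, weighted by the inverse of the linear distance to the node. Positions and speeds below are illustrative, and the small epsilon guarding against a molecule sitting exactly on the node is my addition.

```python
def node_average(node, positions, values, eps=1e-12):
    """Inverse-distance weighted average of per-molecule values at a node."""
    weights = []
    for p in positions:
        d = sum((a - b) ** 2 for a, b in zip(node, p)) ** 0.5
        weights.append(1.0 / (d + eps))   # weight = inverse linear distance
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

node = (0.0, 0.0)
positions = [(1.0, 0.0), (0.0, 2.0), (-3.0, 0.0)]  # molecule locations
speeds = [300.0, 320.0, 280.0]                     # per-molecule speeds (m/s)

avg = node_average(node, positions, speeds)
print(avg)
```

The nearest molecule dominates, so the node value lands closest to 300 m/s.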
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators inherit the robustness of quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, and Cauchy distributions. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
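A sketch of the objective underlying weighted composite quantile regression: the check (pinball) loss rho_tau summed over several quantile levels with per-level weights. In the paper the weights come from a data-driven scheme and the loss includes quantile-specific intercepts; here the weights are fixed and the residuals are taken as given, purely for illustration.

```python
def check_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0}), the quantile check loss."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def weighted_cqr_objective(residuals, taus, weights):
    """Weighted sum over quantile levels of the total check loss."""
    return sum(w * sum(check_loss(r, tau) for r in residuals)
               for tau, w in zip(taus, weights))

residuals = [0.5, -0.2, 1.1, -0.7]   # r_i = y_i - f(x_i; theta)
taus = [0.25, 0.5, 0.75]             # composite quantile levels
weights = [0.2, 0.6, 0.2]            # per-level weights, summing to 1

obj = weighted_cqr_objective(residuals, taus, weights)
print(obj)
```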
ERIC Educational Resources Information Center
Natale, Ruby; Uhlhorn, Susan B.; Lopez-Mitnik, Gabriela; Camejo, Stephanie; Englebert, Nicole; Delamater, Alan M.; Messiah, Sarah E.
2016-01-01
Background: One in four preschool-age children in the United States are currently overweight or obese. Previous studies have shown that caregivers of this age group often have difficulty accurately recognizing their child's weight status. The purpose of this study was to examine factors associated with accurate/inaccurate perception of child body…
[Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].
Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling
2013-12-01
Distortion product otoacoustic emission (DPOAE) signal can be used for diagnosis of hearing loss, so it has important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient way to record DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function and local weighting matrices to obtain a smaller estimation variance. First, ordinary least squares estimates of the DPOAE parameters were obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated based on the least squares principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of clearer DPOAE fine structure.
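A minimal numeric sketch of the two-stage idea: an ordinary least-squares estimate, then a re-estimate using weights set from the first-stage residuals (a crude stand-in for the paper's grouped local weighting matrices). The model and data are illustrative, not DPOAE signals.

```python
def ls_slope(x, y, w=None):
    """(Weighted) least squares slope for the no-intercept model y = a*x."""
    w = w or [1.0] * len(x)
    return (sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
            / sum(wi * xi * xi for wi, xi in zip(w, x)))

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.0, 12.0]   # last point is an outlier for slope ~2

a0 = ls_slope(x, y)                        # stage 1: ordinary LS
resid = [yi - a0 * xi for xi, yi in zip(x, y)]
w = [1.0 / (r * r + 1e-6) for r in resid]  # downweight large residuals
a1 = ls_slope(x, y, w)                     # stage 2: weighted re-estimate

print(a0, a1)
```

The weighted re-estimate pulls the slope back toward the value implied by the three well-behaved points.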
Expected probability weighted moment estimator for censored flood data
NASA Astrophysics Data System (ADS)
Jeon, Jong-June; Kim, Young-Oh; Kim, Yongdai
2011-08-01
Two well-known methods for estimating statistical distributions in hydrology are the method of moments (MOM) and the method of probability weighted moments (PWM). This paper is concerned with the case where a part of the sample is censored. One situation where this might occur is when systematic data (e.g. from gauges) are combined with historical data, since the latter are often only reported if they exceed a high threshold. For this problem, three previously derived estimators are the "B17B" estimator, which is a direct modification of MOM to allow for partial censoring; the "partial PWM estimator", which similarly modifies PWM; and the "expected moments algorithm" estimator, which improves on B17B by replacing a sample adjustment of the censored-data moments with a population adjustment. The present paper proposes a similar modification to the PWM estimator, resulting in the "expected probability weighted moments (EPWM)" estimator. Simulation comparisons of these four estimators and also the maximum likelihood estimator show that the EPWM method is at least competitive with the other four and in many cases the best of the five estimators.
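The uncensored sample probability weighted moments that the EPWM estimator generalizes can be computed directly from the ordered sample. A sketch with illustrative peak-flow values:

```python
def pwm_b(sample, r):
    """Unbiased sample estimator b_r of the probability weighted moment beta_r."""
    x = sorted(sample)
    n = len(x)
    total = 0.0
    for i, xi in enumerate(x, start=1):   # i = 1-based rank
        w = 1.0
        for k in range(1, r + 1):
            w *= (i - k) / (n - k)        # (i-1)...(i-r) / ((n-1)...(n-r))
        total += w * xi
    return total / n

flows = [120.0, 95.0, 210.0, 160.0, 300.0, 85.0]  # annual peak flows
b0 = pwm_b(flows, 0)   # equals the sample mean
b1 = pwm_b(flows, 1)
print(b0, b1)
```

Distribution parameters (e.g. for the GEV) are then solved from a few of these b_r, which is what a censored-data variant like EPWM must adjust.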
Weight and cost estimating relationships for heavy lift airships
NASA Technical Reports Server (NTRS)
Gray, D. W.
1979-01-01
Weight and cost estimating relationships, including additional parameters that influence the cost and performance of heavy-lift airships (HLA), are discussed. Inputs to a closed-loop computer program, consisting of useful load, forward speed, lift module positive or negative thrust, and rotors and propellers, are examined. Particular detail is given to the HLA cost and weight program (HLACW), which computes component weights, vehicle size, buoyancy lift, rotor and propeller thrust, and engine horsepower. This program solves the problem of interrelating the different aerostat, rotor, engine, and propeller sizes. Six sets of 'default parameters' are left for the operator to change during each computer run, enabling slight data manipulation without altering the program.
Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal
2015-01-01
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
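The classic scale-up estimator that the paper revises has a very simple form. A sketch with invented survey numbers; the population size is a stand-in, not a figure from the Nebraska survey:

```python
def nsum_estimate(hidden_alters, network_sizes, population_size):
    """Classic NSUM: N_hidden ~ N * (sum of hidden-group alters known)
    / (sum of respondents' personal network sizes)."""
    return population_size * sum(hidden_alters) / sum(network_sizes)

hidden_alters = [2, 0, 1, 3, 0, 1]               # "how many X do you know?"
network_sizes = [300, 250, 400, 500, 200, 350]   # estimated personal degrees
N = 1_900_000                                    # illustrative total population

estimate = nsum_estimate(hidden_alters, network_sizes, N)
print(estimate)
```

The paper's contribution is precisely that this ratio-of-sums form lets one scaling variable dominate, motivating a new estimator plus sample weights and predictor trimming.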
Browning, Sharon R; Browning, Brian L
2015-09-03
Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package.
LSimpute: accurate estimation of missing values in microarray data with least squares methods.
Bø, Trond Hellem; Dysvik, Bjarte; Jonassen, Inge
2004-02-20
Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. As many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated in order to analyze the available data. Alternatively, genes and arrays can be removed until no missing values remain. However, for genes or arrays with only a small number of missing values, it is desirable to impute those values. For the subsequent analysis to be as informative as possible, it is essential that the estimates for the missing gene expression values are accurate. A small amount of badly estimated missing values in the data might be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Thus, accurate methods for missing value estimation are needed. We present novel methods for estimation of missing values in microarray data sets that are based on the least squares principle, and that utilize correlations between both genes and arrays. For this set of methods, we use the common reference name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling as missing). From these tests, we conclude that our LSimpute methods produce estimates that are consistently more accurate than those obtained using KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM). We refer to our EM implementations as EMimpute, and the estimate errors using the EMimpute methods are compared with those produced by our novel methods. The results indicate that on average, the estimates from our best performing LSimpute method are at least as
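A stripped-down sketch of least-squares imputation in the spirit of LSimpute: predict the missing entry of a target gene by regressing it on its most correlated complete gene. A real implementation combines several correlated genes (and in some variants, arrays); the toy expression matrix below is illustrative.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def impute(target, candidates, missing_col):
    """Fill target[missing_col] by least squares on the best-correlated row."""
    obs = [j for j in range(len(target)) if j != missing_col]
    t_obs = [target[j] for j in obs]
    best = max(candidates,
               key=lambda g: abs(pearson([g[j] for j in obs], t_obs)))
    g_obs = [best[j] for j in obs]
    mg, mt = sum(g_obs) / len(obs), sum(t_obs) / len(obs)
    slope = (sum((a - mg) * (b - mt) for a, b in zip(g_obs, t_obs))
             / sum((a - mg) ** 2 for a in g_obs))
    return mt + slope * (best[missing_col] - mg)

target = [1.0, 2.0, 3.0, None]   # expression profile; last value missing
genes = [[1.1, 2.1, 2.9, 4.2],   # strongly correlated complete gene
         [5.0, 5.1, 4.9, 5.0]]   # flat, uninformative gene

filled = impute(target, genes, 3)
print(filled)
```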
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that it is more efficient than conventional searching methods to achieve coarse frequency estimation (locating the peak of the FFT amplitude spectrum) by applying a modified zero-crossing technique. Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
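A sketch of the coarse-plus-fine pattern described here: a coarse estimate from the DFT magnitude peak, refined by timing zero crossings with linear interpolation. The signal parameters are illustrative, and this is a generic zero-crossing refinement, not the paper's exact modified technique.

```python
import math

fs, n, f_true = 1000.0, 256, 123.4            # sample rate, length, true freq
t = [i / fs for i in range(n)]
x = [math.sin(2 * math.pi * f_true * ti + 0.3) for ti in t]

def dft_mag(sig, k):
    """Magnitude of DFT bin k (direct evaluation; an FFT in practice)."""
    m = len(sig)
    re = sum(s * math.cos(2 * math.pi * k * i / m) for i, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * k * i / m) for i, s in enumerate(sig))
    return math.hypot(re, im)

# Coarse step: frequency of the strongest DFT bin.
k_peak = max(range(1, n // 2), key=lambda k: dft_mag(x, k))
f_coarse = k_peak * fs / n

# Fine step: linearly interpolate zero-crossing times, average the periods.
crossings = []
for i in range(n - 1):
    if (x[i] < 0) != (x[i + 1] < 0):
        frac = x[i] / (x[i] - x[i + 1])   # fraction of a sample past t[i]
        crossings.append(t[i] + frac / fs)

f_fine = (len(crossings) - 1) / (2 * (crossings[-1] - crossings[0]))
print(f_coarse, f_fine)
```

The coarse estimate is only good to one bin width (fs/n), while the zero-crossing refinement recovers the frequency to a small fraction of a hertz.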
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Sensmeier, mark D.; Stewart, Bret A.
2006-01-01
Algorithms for rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process have been developed. Application of these algorithms should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. Recent enhancements to this approach include the porting of the algorithms to a platform-independent software language Python, and modifications to specifically consider morphing aircraft-type configurations. Two sample cases which illustrate these recent developments are presented.
Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.
2013-01-01
To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights. PMID:25187183
Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.
ERIC Educational Resources Information Center
Algina, James; Moulder, Bradley C.; Moser, Barry K.
2002-01-01
Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (ρ²); (2) the population increase in ρ²; and (3) the…
Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.
2012-01-01
Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings, and are used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.
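The non-optimum factor itself is just the ratio of as-built weight to idealized reference weight, applied as a multiplier when sizing new structures. The gore weights below are invented for illustration, not values from the Super-Lightweight Tank database.

```python
def non_optimum_factor(actual_weight, reference_weight):
    """Ratio of as-built (flight hardware) weight to idealized reference weight."""
    return actual_weight / reference_weight

# Illustrative gore: 41.8 lb flight hardware weight vs. a 22.0 lb
# proof-pressure-sized reference weight:
nof = non_optimum_factor(41.8, 22.0)

# Predicting a realistic weight for a new gore with a 30.0 lb
# idealized reference weight:
predicted = nof * 30.0
print(nof, predicted)
```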
EDIN0613P weight estimating program. [for launch vehicles
NASA Technical Reports Server (NTRS)
Hirsch, G. N.
1976-01-01
The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. The program is actually part of an overall simulation technique called the EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights, and derives a large amount of overall vehicle geometry. The program is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). By exploiting the flexibility of this program while remaining aware of the limits that generalized input imposes on output depth and accuracy, this program concept can be a useful estimating tool at the conceptual design stage of a launch vehicle.
Kurugol, Sila; Freiman, Moti; Afacan, Onur; Domachevsky, Liran; Perez-Rossello, Jeannette M; Callahan, Michael J; Warfield, Simon K
2015-01-01
Non-invasive characterization of water molecules' mobility variations by quantitative analysis of diffusion-weighted MRI (DW-MRI) signal decay in the abdomen has the potential to serve as a biomarker in gastrointestinal and oncological applications. Accurate and reproducible estimation of the signal decay model parameters is challenging due to the presence of respiratory, cardiac, and peristaltic motion. Independent registration of each b-value image to the b-value = 0 s/mm² image prior to parameter estimation might be sub-optimal because of the low SNR and contrast difference between images of varying b-value. In this work, we introduce a motion-compensated parameter estimation framework that simultaneously solves image registration and model estimation (SIR-ME) problems by utilizing the interdependence of acquired volumes along the diffusion weighting dimension. We evaluated the improvement in model parameter estimation accuracy using 16 in-vivo DW-MRI data sets of Crohn's disease patients by comparing parameter estimates obtained using the SIR-ME model to the parameter estimates obtained by fitting the signal decay model to the acquired DW-MRI images. The proposed SIR-ME model reduced the average root-mean-square error between the observed signal and the fitted model by more than 50%. Moreover, the SIR-ME model estimates discriminate between normal and abnormal bowel loops better than the standard parameter estimates.
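For context, the simplest DW-MRI signal decay model is the mono-exponential S(b) = S0 * exp(-b * ADC), which can be fitted by log-linear least squares. The SIR-ME framework fits its decay model jointly with registration; the sketch below shows only this baseline fit, on noiseless synthetic data.

```python
import math

def fit_adc(b_values, signals):
    """Log-linear least squares fit of S(b) = S0*exp(-b*ADC); returns (S0, ADC)."""
    y = [math.log(s) for s in signals]
    n = len(b_values)
    mb, my = sum(b_values) / n, sum(y) / n
    slope = (sum((b - mb) * (v - my) for b, v in zip(b_values, y))
             / sum((b - mb) ** 2 for b in b_values))
    return math.exp(my - slope * mb), -slope

b = [0.0, 200.0, 400.0, 800.0]   # b-values (s/mm^2)
adc_true, s0_true = 1.5e-3, 1000.0
signal = [s0_true * math.exp(-bi * adc_true) for bi in b]

s0, adc = fit_adc(b, signal)
print(s0, adc)
```

On noiseless data the fit recovers S0 and ADC exactly; motion and noise in real acquisitions are what make the joint SIR-ME formulation worthwhile.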
Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?
Hey, Spencer Phillips; Kimmelman, Jonathan
2016-10-01
Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach - forecast analysis - that would enable direct and effective measurement of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach.
Askalany, Ahmed A; Saha, Bidyut B
2017-03-15
Accurate estimation of the isosteric heat of adsorption is mandatory for good modeling of adsorption processes. In this paper, a thermodynamic formalism based on the adsorbed-phase volume, treated as a function of adsorption pressure and temperature, is proposed for precise estimation of the isosteric heat of adsorption. The isosteric heat of adsorption estimated using the new correlation has been compared with measured values for several carefully selected adsorbent-refrigerant pairs from the open literature. Results showed that the proposed isosteric heat of adsorption correlation fits the experimentally measured values better than the Clausius-Clapeyron equation.
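The Clausius-Clapeyron baseline against which the proposed correlation is compared can be sketched directly: at constant uptake, the isosteric heat follows from the slope of ln(P) versus 1/T along an isoster. The two (T, P) points below are illustrative, not measurements from the paper.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def isosteric_heat(T1, P1, T2, P2):
    """Clausius-Clapeyron estimate from two isoster points at fixed uptake:
    q_st = R * ln(P2/P1) / (1/T1 - 1/T2)."""
    return R * math.log(P2 / P1) / (1.0 / T1 - 1.0 / T2)

# Two (T, P) points on the same isoster (temperatures in K, pressures in Pa):
q = isosteric_heat(298.0, 1.20e3, 318.0, 3.10e3)
print(q / 1000.0, "kJ/mol")
```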
Performance and Weight Estimates for an Advanced Open Rotor Engine
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.; Tong, Michael T.
2012-01-01
NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow for the achievement of this objective by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Following that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines and are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.
Shu, Chi-Wang
2013-01-13
In this article, we give a brief overview on high-order accurate shock capturing schemes with the aim of applications in compressible turbulence simulations. The emphasis is on the basic methodology and recent algorithm developments for two classes of high-order methods: the weighted essentially non-oscillatory and discontinuous Galerkin methods.
Digital combining-weight estimation for broadband sources using maximum-likelihood estimates
NASA Technical Reports Server (NTRS)
Rodemich, E. R.; Vilnrotter, V. A.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system is described and compared with the maximum-likelihood estimate. The maximum-likelihood estimate provides some improvement in performance at the cost of increased computational complexity; however, the algorithm is simple enough to allow implementation on a PC-based combining system.
Are In-Bed Electronic Weights Recorded in the Medical Record Accurate?
Gerl, Heather; Miko, Alexandra; Nelson, Mandy; Godaire, Lori
2016-01-01
This study found large discrepancies between in-bed weights recorded in the medical record and carefully obtained standing weights with a calibrated, electronic bedside scale. This discrepancy appears to be related to inadequate bed calibration before patient admission and having excessive linen, clothing, and/or equipment on the bed during weighing by caregivers.
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered the computation of accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The method computes gap fraction from a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the novel method yielded accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
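At its core, gap fraction is the sky-pixel proportion of a classified canopy image. A minimal sketch follows; the threshold value and toy image are illustrative, and the paper's scattering correction and sky-image reconstruction are omitted.

```python
import numpy as np

def gap_fraction(image, threshold):
    """Classify pixels brighter than `threshold` as sky and return the
    gap fraction (sky pixels / total pixels) of a canopy photograph."""
    sky = np.asarray(image) > threshold
    return sky.mean()

# Toy 4x4 "image": 6 bright sky pixels out of 16
img = np.array([[250, 250, 10, 10],
                [250, 250, 10, 10],
                [250, 250, 10, 10],
                [10,  10,  10, 10]])
print(gap_fraction(img, 128))  # 0.375
```

The subjectivity the abstract describes lives entirely in `threshold` (and in the exposure setting that produced the pixel values), which is why an objective classification of the raw image matters.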
Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.
Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro
2016-01-12
The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
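A representative nearest-neighbor entropy estimator of the kind referred to here is the Kozachenko-Leonenko estimator. The sketch below is the generic d-dimensional estimator, not the paper's exact rotational-translational solution; it treats the sample as points in plain Euclidean space.

```python
import numpy as np
from math import lgamma, log, pi

def kl_entropy(samples):
    """Kozachenko-Leonenko nearest-neighbor estimate of differential
    entropy (nats) from an (N, d) array of samples."""
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    # pairwise distances; mask the zero self-distances on the diagonal
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    eps = dist.min(axis=1)  # nearest-neighbor distance of each point
    log_vd = (d / 2.0) * log(pi) - lgamma(d / 2.0 + 1.0)  # log unit-ball volume
    euler_gamma = 0.5772156649015329
    return d * np.mean(np.log(eps)) + log_vd + euler_gamma + log(n - 1)

rng = np.random.default_rng(0)
x = rng.uniform(size=(2000, 2))  # uniform on the unit square: true H = 0
print(kl_entropy(x))  # close to 0
```

The difficulty the abstract points to is that treating rotations as Euclidean coordinates is wrong: distances on SO(3) and correlations between rotation and translation must be handled explicitly, which is what the paper's exact method addresses.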
[Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER Statement].
Stevens, Gretchen A; Alkema, Leontine; Black, Robert E; Boerma, J Ties; Collins, Gary S; Ezzati, Majid; Grove, John T; Hogan, Daniel R; Hogan, Margaret C; Horton, Richard; Lawn, Joy E; Marušic, Ana; Mathers, Colin D; Murray, Christopher J L; Rudan, Igor; Salomon, Joshua A; Simpson, Paul J; Vos, Theo; Welch, Vivian
2017-01-01
Measurements of health indicators are rarely available for every population and period of interest, and available data may not be comparable. The Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER) define best reporting practices for studies that calculate health estimates for multiple populations (in time or space) using multiple information sources. Health estimates that fall within the scope of GATHER include all quantitative population-level estimates (including global, regional, national, or subnational estimates) of health indicators, including indicators of health status, incidence and prevalence of diseases, injuries, and disability and functioning; and indicators of health determinants, including health behaviours and health exposures. GATHER comprises a checklist of 18 items that are essential for best reporting practice. A more detailed explanation and elaboration document, describing the interpretation and rationale of each reporting item along with examples of good reporting, is available on the GATHER website (http://gather-statement.org).
Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua
2012-06-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m⁻¹), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm³ voxel volume with isotropic resolution; 13.5-mm³ volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ = 15.3 m⁻¹), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
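The polynomial-fitting step can be sketched as follows: fit a second-order polynomial to tract coordinates and evaluate the plane-curve formula κ = |y''| / (1 + y'²)^(3/2). This is a simplified 2D stand-in for the 3D tract fitting described above; the function name and test geometry are illustrative.

```python
import numpy as np

def tract_curvature(x, y):
    """Fit a 2nd-order polynomial y(x) to fiber-tract points and return
    curvature kappa = |y''| / (1 + y'^2)^(3/2) at the tract midpoint."""
    a, b, _ = np.polyfit(x, y, 2)  # coefficients of a*x^2 + b*x + c
    x0 = np.mean(x)
    yp = 2 * a * x0 + b            # first derivative at the midpoint
    ypp = 2 * a                    # second derivative (constant for a quadratic)
    return abs(ypp) / (1 + yp ** 2) ** 1.5

# Points sampled from a circle of radius 10 (true curvature 0.1 m^-1)
R = 10.0
theta = np.linspace(-0.2, 0.2, 20)
x, y = R * np.sin(theta), R * (1 - np.cos(theta))
print(tract_curvature(x, y))  # ~0.1
```

Fitting first and differentiating the smooth fit, rather than differencing noisy tract points directly, is what suppresses the noise-driven overestimation of κ described in the abstract.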
Bounded Self-Weights Estimation Method for Non-Local Means Image Denoising Using Minimax Estimators.
Nguyen, Minh Phuong; Chun, Se Young
2017-04-01
A non-local means (NLM) filter is a weighted average of a large number of non-local pixels with various image intensity values. NLM filters achieve powerful denoising performance and excellent detail preservation by averaging many noisy pixels with appropriately chosen weights. The NLM weight between two different pixels is determined by the similarity between the two patches surrounding these pixels and a smoothing parameter. Another important factor that influences denoising performance is the self-weight value for the center pixel itself. The recently introduced local James-Stein type center pixel weight estimation method (LJS) outperforms other existing methods in determining the contribution of the center pixel in the NLM filter. However, the LJS method may yield excessively large self-weight estimates since no upper bound is assumed, and it uses a relatively large local area for estimating the self-weights, which may introduce a strong bias. In this paper, we investigate these issues in the LJS method and then propose novel local self-weight estimation methods using direct bounds (LMM-DB) and reparametrization (LMM-RP), based on Baranchik's minimax estimator. Both the LMM-DB and LMM-RP methods were evaluated using a wide range of natural images and a clinical MRI image with various levels of additive Gaussian noise. Our proposed parameter selection methods yielded an improved bias-variance trade-off, a higher peak signal-to-noise ratio (PSNR), and fewer visual artifacts compared with the classical NLM and LJS methods. Our proposed methods also provide a heuristic way to select a suitable global smoothing parameter that yields PSNR values close to the optimal ones.
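A generic NLM weight computation with an explicit upper bound on the self-weight illustrates the issue being addressed. The fixed cap here is a stand-in for the paper's minimax-derived bounds, not the LMM-DB/LMM-RP estimators themselves, and all names and values are illustrative.

```python
import numpy as np

def nlm_weights(patches, idx, h, self_weight_cap=1.0):
    """Non-local means weights for pixel `idx` from an (N, p) array of
    image patches. The raw self-weight exp(0) = 1 is the largest possible
    value, so it is bounded above by `self_weight_cap` before normalizing."""
    d2 = np.sum((patches - patches[idx]) ** 2, axis=1)  # patch distances
    w = np.exp(-d2 / h ** 2)                            # similarity weights
    w[idx] = min(self_weight_cap, w.max())              # bound the center weight
    return w / w.sum()                                  # normalize to sum to 1

rng = np.random.default_rng(1)
patches = rng.normal(size=(50, 9))                      # fifty 3x3 patches
w = nlm_weights(patches, 0, h=3.0, self_weight_cap=0.5)
print(w.sum())  # 1.0
```

Without the cap, the center pixel's weight of 1 can dominate the average and the filter degenerates toward the identity, which is exactly the over-large self-weight problem the paper attacks.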
Blanc, Ann K.; Wardlaw, Tessa
2005-01-01
OBJECTIVE: To critically examine the data used to produce estimates of the proportion of infants with low birth weight in developing countries and to describe biases in these data. To assess the effect of adjustment procedures on the estimates and propose a modified estimation procedure for international reporting purposes. METHODS: Mothers' reports about their recent births in 62 nationally representative Demographic and Health Surveys (DHS) conducted between 1990 and 2000 were analysed. The proportion of infants weighed at birth, characteristics of those weighed, extent of misreporting, and mothers' subjective assessments of their children's size at birth were examined. FINDINGS: In many developing countries the majority of infants were not weighed at birth. Those who were weighed were more likely to have mothers who live in urban areas and are educated, and to be born in a medical facility with assistance from medically trained personnel. Birth weights reported by mothers are "heaped" on multiples of 500 grams. CONCLUSION: Current survey-based estimates of the prevalence of low birth weight are biased substantially downwards. Two adjustments to reported data are recommended: a weighting procedure that combines reported birth weights with mothers' assessment of the child's size at birth, and categorization of one-quarter of the infants reported to have a birth weight of exactly 2500 grams as having low birth weight. Averaged over all surveys, these procedures increased the proportion classified as having low birth weight by 25%. We also recommend that the proportion of infants not weighed at birth be routinely reported. Efforts are needed to increase the weighing of newborns and the recording of their weights.
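The heaping adjustment recommended in the conclusion can be sketched directly. This implements only the reclassification of one quarter of weights reported at exactly 2500 g; the weighting by mothers' size assessments is omitted, and the sample data are illustrative.

```python
def adjusted_lbw_proportion(weights_g):
    """Adjusted low-birth-weight proportion: infants under 2500 g, plus
    one quarter of those reported at exactly 2500 g (to correct for
    heaping of mother-reported weights on round numbers)."""
    n = len(weights_g)
    under = sum(1 for w in weights_g if w < 2500)
    at_2500 = sum(1 for w in weights_g if w == 2500)
    return (under + 0.25 * at_2500) / n

# 100 births: 10 under 2500 g, 20 heaped exactly at 2500 g
sample = [2400] * 10 + [2500] * 20 + [3100] * 70
print(adjusted_lbw_proportion(sample))  # 0.15
```

The unadjusted proportion for this sample would be 0.10; reclassifying a quarter of the heaped 2500 g reports raises it to 0.15, mirroring the upward correction the paper reports.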
Harris, H E; Ellison, G T; Holliday, M; Nickson, C
1998-04-01
The accuracy of antenatal weight data recorded in obstetric notes was investigated in the 45 hospital and community antenatal clinics within a South Thames Region NHS Trust. In order to assess the reliability and validity of all 60 clinic scales triplicate measurements of body weight for low- and high-weight subjects were recorded on each clinical scale and on a calibrated standard scale. The quality of weighing practice during antenatal care was investigated by means of semi-structured interviews conducted with all 33 midwives who currently provide antenatal care within the Trust. Beam balances had the highest reliability and validity, whereas scales with spring mechanisms were the least accurate. Only 40% of the clinics surveyed had access to beam balances, yet most of the maternal weight measurements recorded during antenatal care are likely to be out by no more than 1-1.5% of body weight. Weighing practice was generally inconsistent, and serial measurements of maternal body weight collected during pregnancy are probably too imprecise to provide a sensitive screen for conditions associated with unusual weight gain and too inaccurate to assess compliance with guidelines for weight gain.
Aristizabal, John F; Rothman, Jessica M; García-Fería, Luis M; Serio-Silva, Juan Carlos
2017-04-01
Two methods are commonly used to describe the feeding behavior of wild primates, one based on the proportion of time animals spend feeding on specific plant parts ("time-based" estimates) and one based on estimates of the actual amounts of different plant materials ingested ("weight-based" estimates). However, studies based on feeding time may not be accurate for making quantitative assessments of animals' nutrient and energy intake. We analyzed the diet of two groups of Alouatta pigra living in forest fragments using two different methods (time- and dry-weight-based estimates) to explore how these alternative approaches impact estimates of (a) the contribution of each food type to the diet and (b) the macronutrient composition of the diet, including available protein (AP), non-protein energy (NPE), and total energy (TE) intake. We conducted behavioral observations (N = 658 hr and N = 46 full-day focal follows) from August 2012 to March 2013. For each feeding bout, we estimated both time spent feeding and actual fresh- and dry-weight consumption by counting the number of food items ingested during the bout. Using time-based estimates, A. pigra showed a predominantly leaf-based diet. In contrast, weight-based estimates described a combined fruit- and leaf-based diet. There were no differences between methods when estimating AP intake; however, we found significant differences when estimating NPE and TE intake. Time-based estimates provide important information such as the foraging effort spent on food items, trees, or patches, while weight-based estimates may provide more accurate information concerning nutrient and energy intake. We suggest that quantitative estimates of nutrient intake in a primate's diet be based on observations of wet and/or dry weight actually ingested rather than extrapolated from time spent feeding.
Robust and accurate fundamental frequency estimation based on dominant harmonic components.
Nakatani, Tomohiro; Irino, Toshio
2004-12-01
This paper presents a new method for robust and accurate fundamental frequency (F0) estimation in the presence of background noise and spectral distortion. Degree of dominance and dominance spectrum are defined based on instantaneous frequencies. The degree of dominance allows one to evaluate the magnitude of individual harmonic components of the speech signals relative to background noise while reducing the influence of spectral distortion. The fundamental frequency is more accurately estimated from reliable harmonic components which are easy to select given the dominance spectra. Experiments are performed using white and babble background noise with and without spectral distortion as produced by a SRAEN filter. The results show that the present method is better than previously reported methods in terms of both gross and fine F0 errors.
Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude
2009-12-01
Master's thesis by Jack A. Tappe, Naval Postgraduate School, December 2009. Thesis co-advisors: Jae Jun Kim and Brij N. Agrawal; Knox T. Millsaps, Chairman, Department of Mechanical and Astronautical Engineering.
Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong
2016-01-01
This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions.
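The role of the switchable decay coefficient can be illustrated with a minimal scalar Kalman filter in which the disturbance acceleration is modeled as a first-order Gauss-Markov process; `decay` is the quantity that would be switched per acceleration mode. All noise values and the measurement model are illustrative, not the paper's full ARS filter.

```python
import numpy as np

def estimate_disturbance(meas, decay, q=0.01, r=0.25):
    """Scalar Kalman filter for a disturbance acceleration modeled as a
    first-order Gauss-Markov process x[k+1] = decay * x[k] + w.  In the
    full ARS filter, `decay` is switched per acceleration mode."""
    x, p, est = 0.0, 1.0, []
    for z in meas:
        x, p = decay * x, decay ** 2 * p + q   # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p    # measurement update
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(2)
truth = 1.5  # sustained disturbance acceleration (m/s^2)
meas = truth + rng.normal(scale=0.5, size=200)
est = estimate_disturbance(meas, decay=0.99)
print(est[-1])  # near 1.5
```

A decay near 1 suits the sustained-acceleration mode (the disturbance persists), while a small decay suits vibration, where the disturbance should be pulled quickly back toward zero.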
Bowden, Jack; Davey Smith, George; Haycock, Philip C.
2016-01-01
Developments in genome‐wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse‐variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite‐sample Type 1 error rates than the inverse‐variance weighted method, and is complementary to the recently proposed MR‐Egger (Mendelian randomization‐Egger) regression method. In analyses of the causal effects of low‐density lipoprotein cholesterol and high‐density lipoprotein cholesterol on coronary artery disease risk, the inverse‐variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR‐Egger regression methods suggest a null effect of high‐density lipoprotein cholesterol that corresponds with the experimental evidence. Both median‐based and MR‐Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants.
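The key computation is a weighted median of variant-specific causal estimates, defined by interpolating where the cumulative normalized weight crosses 50%. A sketch follows; the example estimates and weights are illustrative, and this omits the bootstrap standard errors used in practice.

```python
import numpy as np

def weighted_median(x, w):
    """Weighted median: the value at which the midpoint cumulative
    normalized weight crosses 50%, with linear interpolation."""
    order = np.argsort(x)
    x, w = np.asarray(x, float)[order], np.asarray(w, float)[order]
    cum = (np.cumsum(w) - 0.5 * w) / np.sum(w)  # midpoint cumulative weights
    return np.interp(0.5, cum, x)

# Five variant-specific causal estimates; the two down-weighted outliers
# (candidate invalid instruments) barely move the weighted median
est = np.array([0.48, 0.50, 0.52, 1.40, -0.90])
w = np.array([1.0, 1.0, 1.0, 0.2, 0.2])
print(weighted_median(est, w))  # 0.50
```

Because the estimate depends only on the 50% crossing point, up to half of the total weight can come from invalid instruments without dragging the estimate away from the valid variants, which is the consistency property claimed above.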
Accurate computation of weights in classical Gauss-Christoffel quadrature rules
Yakimiw, E.
1996-12-01
For many classical Gauss-Christoffel quadrature rules there is no method that guarantees a uniform level of accuracy for the Gaussian quadrature weights at all quadrature nodes unless the nodes are known exactly. More disturbingly, some algebraic expressions for these weights exhibit excessive sensitivity to even the smallest perturbations in node location, and this sensitivity rapidly increases with the order of the quadrature rule. Very high order quadratures are now in common use with the advent of more powerful computers, so loss of accuracy in the weights has become a problem that must be addressed. A simple but efficient and general method is developed for improving the accuracy of the computed quadrature weights even when the nodes carry significant error. In addition, a highly efficient root-finding iterative technique with superlinear convergence rate for computing the nodes is developed; it uses solely the quadrature polynomials and their first derivatives. A comparison of this method with the eigenvalue method of Golub and Welsch, implemented in most standard software libraries, is made: the proposed method outperforms the latter in both accuracy and efficiency. The Legendre, Lobatto, Radau, Hermite, and Laguerre quadrature rules are examined.
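A Newton iteration that uses only the quadrature polynomial and its first derivative, the same ingredients as the root-finding scheme described above, can be sketched for the Legendre case. This is the textbook recurrence-based construction, not necessarily the paper's exact algorithm.

```python
import numpy as np

def gauss_legendre(n, tol=1e-15):
    """Nodes and weights of the n-point Gauss-Legendre rule, via Newton
    iteration on P_n(x) with P_n and P_n' from the three-term recurrence."""
    # Chebyshev-like initial guesses for the roots of P_n
    x = np.cos(np.pi * (np.arange(1, n + 1) - 0.25) / (n + 0.5))
    for _ in range(100):
        p0, p1 = np.ones_like(x), x          # P_0 and P_1
        for k in range(2, n + 1):            # recurrence up to P_n
            p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
        dp = n * (x * p1 - p0) / (x ** 2 - 1)  # P_n'(x)
        dx = p1 / dp
        x -= dx                              # Newton step
        if np.max(np.abs(dx)) < tol:
            break
    w = 2.0 / ((1.0 - x ** 2) * dp ** 2)     # classical weight formula
    return x, w

x, w = gauss_legendre(5)
print(w.sum())             # 2.0 (integral of 1 over [-1, 1])
print((w * x ** 2).sum())  # 2/3 (integral of x^2)
```

The weight formula w_i = 2 / ((1 - x_i²) P_n'(x_i)²) is exactly the kind of algebraic expression whose sensitivity to node error the abstract warns about: any error in x_i enters squared through P_n'.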
Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan
2015-08-11
Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.
Accurate estimation of object location in an image sequence using helicopter flight data
NASA Technical Reports Server (NTRS)
Tang, Yuan-Liang; Kasturi, Rangachar
1994-01-01
In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images
Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
Accurate estimates of age at maturity from the growth trajectories of fishes and other ectotherms.
Honsey, Andrew E; Staples, David F; Venturelli, Paul A
2017-01-01
Age at maturity (AAM) is a key life history trait that provides insight into ecology, evolution, and population dynamics. However, maturity data can be costly to collect or may not be available. Life history theory suggests that growth is biphasic for many organisms, with a change-point in growth occurring at maturity. If so, then it should be possible to use a biphasic growth model to estimate AAM from growth data. To test this prediction, we used the Lester biphasic growth model in a likelihood profiling framework to estimate AAM from length at age data. We fit our model to simulated growth trajectories to determine minimum data requirements (in terms of sample size, precision in length at age, and the cost to somatic growth of maturity) for accurate AAM estimates. We then applied our method to a large walleye Sander vitreus data set and show that our AAM estimates are in close agreement with conventional estimates when our model fits well. Finally, we highlight the potential of our method by applying it to length at age data for a variety of ectotherms. Our method shows promise as a tool for estimating AAM and other life history traits from contemporary and historical samples.
Estimating topological properties of weighted networks from limited information.
Cimini, Giulio; Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego
2015-10-01
A problem typically encountered when studying complex systems is the limitedness of the information available on their topology, which hinders our understanding of their structure and of the dynamical processes taking place on them. A paramount example is provided by financial networks, whose data are privacy protected: Banks publicly disclose only their aggregate exposure towards other banks, keeping individual exposures towards each single bank secret. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here, we develop a reconstruction method, based on statistical mechanics concepts, that makes use of the empirical link density in a highly nontrivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems.
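The preliminary degree-estimation step can be sketched with the fitness ansatz used in this reconstruction literature: link probabilities p_ij = z s_i s_j / (1 + z s_i s_j), with the single parameter z calibrated so the expected link count matches the observed density. The subsequent maximum-entropy inference is omitted, and the example strengths are illustrative.

```python
import numpy as np

def expected_degrees(strengths, density):
    """Estimate node degrees from node strengths and overall link density
    by calibrating z in p_ij = z*s_i*s_j / (1 + z*s_i*s_j)."""
    s = np.asarray(strengths, float)
    n = len(s)
    target = density * n * (n - 1) / 2.0            # expected number of links
    ss = np.outer(s, s)[np.triu_indices(n, 1)]      # s_i*s_j over pairs i < j

    def links(z):
        return np.sum(z * ss / (1.0 + z * ss))      # monotone increasing in z

    lo, hi = 1e-12, 1e12
    for _ in range(200):                            # geometric bisection on z
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if links(mid) < target else (lo, mid)
    z = np.sqrt(lo * hi)
    p = z * np.outer(s, s) / (1.0 + z * np.outer(s, s))
    np.fill_diagonal(p, 0.0)                        # no self-loops
    return p.sum(axis=1)                            # expected degree per node

s = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
k = expected_degrees(s, density=0.6)
print(k.sum() / (len(s) * (len(s) - 1)))  # ~0.6, matching the target density
```

The saturating form of p_ij is what prevents the unrealistically dense reconstructions mentioned above: high-strength pairs approach probability 1 rather than exceeding it, while low-strength pairs stay sparse.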
Dikaios, Nikolaos; Punwani, Shonit; Hamy, Valentin; Purpura, Pierpaolo; Rice, Scott; Forster, Martin; Mendes, Ruheena; Taylor, Stuart; Atkinson, David
2014-01-01
Purpose: Multiexponential decay parameters are estimated from diffusion-weighted imaging data, which generally have an inherently low signal-to-noise ratio and non-normal noise distributions, especially at high b-values. Conventional nonlinear regression algorithms assume normally distributed noise, introducing bias into the calculated decay parameters and potentially affecting their ability to classify tumors. This study aims to accurately estimate the noise of averaged diffusion-weighted imaging, to correct the noise-induced bias, and to assess the effect upon cancer classification. Methods: A new adaptation of the median-absolute-deviation technique in the wavelet domain, using a closed-form approximation of convolved probability distribution functions, is proposed to estimate the noise. Nonlinear regression algorithms that account for the underlying noise (maximum probability) fit the biexponential/stretched-exponential decay models to the diffusion-weighted signal. A logistic regression model was built from the decay parameters to discriminate benign from metastatic neck lymph nodes in 40 patients. Results: The adapted median-absolute-deviation method accurately predicted the noise of simulated (R2 = 0.96) and neck diffusion-weighted imaging (averaged once or four times). Maximum probability recovers the true apparent diffusion coefficient of the simulated data better than nonlinear regression (by up to 40%), whereas no apparent differences were found for the other decay parameters. Conclusions: Perfusion-related parameters were best at cancer classification. Noise-corrected decay parameters did not significantly improve classification for the clinical data set, though simulations show benefit for lower signal-to-noise ratio acquisitions. PMID:23913479
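The core median-absolute-deviation idea can be illustrated with a minimal single-level Haar version. The paper's adaptation for averaged, non-normally distributed data is more involved; this sketch assumes additive Gaussian noise and an even-sized image, and the constant 0.6745 is the MAD of a unit normal.

```python
import numpy as np

def mad_sigma(image):
    """Estimate the noise standard deviation from the finest-scale Haar
    diagonal detail coefficients via the median absolute deviation.
    Smooth image structure largely cancels in the detail band, so the
    coefficients are dominated by noise."""
    img = np.asarray(image, float)
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]  # even dims
    # one-level orthonormal Haar HH band: (a - b - c + d) / 2 per 2x2 block
    d = (img[0::2, 0::2] - img[0::2, 1::2]
         - img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
    return np.median(np.abs(d)) / 0.6745
```

For pure Gaussian noise the HH coefficients have exactly the noise variance, so the MAD recovers sigma; a slowly varying signal adds little to the detail band.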
NASA Astrophysics Data System (ADS)
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of ocular surgery, traditional formulas for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that have undergone refractive surgery such as LASIK, or eyes diagnosed with keratoconus, these formulas may cause significant postoperative refractive error, which may lead to poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L formula [1], or using preoperative data (data before LASIK) to estimate the K value [2], no precise formula is available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and another patient after LASIK agreed well with their visual outcomes after cataract surgery.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
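The Monte-Carlo-plus-MAP pattern described above can be sketched generically. This is an illustrative sketch only: the forward model `simulate` is a user-supplied assumption (not the JM-OCT model from the paper), the prior is flat, and the likelihood is approximated by counting simulated samples near the observed value.

```python
import numpy as np

def map_estimate(measured, snr, simulate, true_grid,
                 n_mc=5000, half_width=0.05, seed=0):
    """Monte-Carlo MAP sketch: for each candidate true value, simulate
    n_mc noisy measurements at the given SNR, score the candidate by the
    fraction of samples falling near the observed measurement (an
    empirical likelihood), and return the best-scoring candidate
    (the MAP estimate under a flat prior)."""
    rng = np.random.default_rng(seed)
    scores = []
    for true in true_grid:
        samples = simulate(true, snr, rng, n_mc)
        scores.append(np.mean(np.abs(samples - measured) < half_width))
    return true_grid[int(np.argmax(scores))]
```

In the paper the per-candidate PDFs are pre-computed once before measurement, which is what makes the estimator practical at imaging rates.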
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2008-11-30
Prediction of the microbial growth rate as a response to changing temperatures is an important aspect of the control of food safety and food spoilage. Accurate model predictions of microbial evolution call for correct model structures and reliable parameter values with good statistical quality. Given the widely accepted validity of the Cardinal Temperature Model with Inflection (CTMI) [Rosso, L., Lobry, J. R., Bajard, S. and Flandrois, J. P., 1995. Convenient model to describe the combined effects of temperature and pH on microbial growth, Applied and Environmental Microbiology, 61: 610-616], this paper focuses on the accurate estimation of its four parameters (T(min), T(opt), T(max) and μ(opt)) by applying the technique of optimal experiment design for parameter estimation (OED/PE). This secondary model describes the influence of temperature on the microbial specific growth rate from the minimum to the maximum temperature for growth. Dynamic temperature profiles are optimized within two temperature regions ([15 °C, 43 °C] and [15 °C, 45 °C]), focusing on the minimization of the parameter estimation (co)variance (D-optimal design). The optimal temperature profiles are implemented in a computer-controlled bioreactor, and the CTMI parameters are identified from the resulting experimental data. Approximately equal CTMI parameter values were derived irrespective of the temperature region, except for T(max). The latter could only be estimated accurately from the optimal experiments within [15 °C, 45 °C]. This observation underlines the importance of selecting the upper temperature constraint for OED/PE as close as possible to the true T(max). Cardinal temperature estimates resulting from designs within [15 °C, 45 °C] correspond with values found in literature, are characterized by small uncertainty, and yield good results during validation. As compared to estimates from non-optimized dynamic
Rashid, Mamoon; Pain, Arnab
2013-01-01
Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222
Function estimation by feedforward sigmoidal networks with bounded weights
Rao, N.S.V.; Protopoescu, V.; Qiao, H.
1996-05-01
The authors address the problem of PAC (probably approximately correct) learning of functions f : [0, 1]^d → [−K, K], based on an iid (independently and identically distributed) sample generated according to an unknown distribution, by using feedforward sigmoidal networks. They use two basic properties of neural networks with bounded weights, namely: (a) they form a Euclidean class, and (b) for hidden units of the form tanh(γz) they are Lipschitz functions. Either property yields sample sizes for PAC function learning under any Lipschitz cost function. The sample size based on the first property is tighter compared to the known bounds based on VC-dimension. The second estimate yields a sample size that can be conveniently adjusted by a single parameter, γ, related to the hidden nodes.
The estimation of tumor cell percentage for molecular testing by pathologists is not accurate.
Smits, Alexander J J; Kummer, J Alain; de Bruin, Peter C; Bol, Mijke; van den Tweel, Jan G; Seldenrijk, Kees A; Willems, Stefan M; Offerhaus, G Johan A; de Weger, Roel A; van Diest, Paul J; Vink, Aryan
2014-02-01
Molecular pathology is becoming more and more important in present day pathology. A major challenge for any molecular test is its ability to reliably detect mutations in samples consisting of mixtures of tumor cells and normal cells, especially when the tumor content is low. The minimum percentage of tumor cells required to detect genetic abnormalities is a major variable. Information on tumor cell percentage is essential for a correct interpretation of the result. In daily practice, the percentage of tumor cells is estimated by pathologists on hematoxylin and eosin (H&E)-stained slides, the reliability of which has been questioned. This study aimed to determine the reliability of estimated tumor cell percentages in tissue samples by pathologists. On 47 H&E-stained slides of lung tumors a tumor area was marked. The percentage of tumor cells within this area was estimated independently by nine pathologists, using categories of 0-5%, 6-10%, 11-20%, 21-30%, and so on, until 91-100%. As gold standard, the percentage of tumor cells was counted manually. On average, the range between the lowest and the highest estimate per sample was 6.3 categories. In 33% of estimates, the deviation from the gold standard was at least three categories. The mean absolute deviation was 2.0 categories (range between observers 1.5-3.1 categories). There was a significant difference between the observers (P<0.001). If 20% of tumor cells were considered the lower limit to detect a mutation, samples with an insufficient tumor cell percentage (<20%) would have been estimated to contain enough tumor cells in 27/72 (38%) observations, possibly causing false negative results. In conclusion, estimates of tumor cell percentages on H&E-stained slides are not accurate, which could result in misinterpretation of test results. Reliability could possibly be improved by using a training set with feedback.
Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti
2016-01-07
The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems.
NASA Astrophysics Data System (ADS)
Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.
2016-10-01
The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
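The automated matching step described at the end of the abstract, which minimizes residuals between a theoretical spectrum and the library of experimental spectra, can be sketched as follows. The spectrum names, shift grid, and circular `np.roll` alignment (harmless when the spectra decay to zero at the grid edges) are illustrative assumptions.

```python
import numpy as np

def match_spectrum(theory, experiments, shifts=np.arange(-15, 16)):
    """Sketch of automated spectral assignment: slide the theoretical
    spectrum over a range of wavelength offsets (in grid points) and
    assign it to the experimental spectrum giving the smallest
    sum-of-squares residual over all tried offsets."""
    best = None
    for name, exp in experiments.items():
        for sh in shifts:
            resid = np.sum((np.roll(theory, sh) - exp) ** 2)
            if best is None or resid < best[0]:
                best = (resid, name, sh)
    return best[1]  # name of the best-matching experimental spectrum
```

The allowed shift range plays the role of the systematic offsets reported above (~16 nm for PBE0, ~15 nm for CAM-B3LYP).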
Lamb mode selection for accurate wall loss estimation via guided wave tomography
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-18
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1–2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimates of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 in the presence of liquid loading was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.
Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates
NASA Astrophysics Data System (ADS)
Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo
2017-03-01
The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ either for geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.
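The simple-model side of such analyses can be illustrated with a toy calculation. In the strong-repulsion limit, a half-filled Hubbard model reduces to a spin-1/2 Heisenberg model with effective exchange J = 4t²/U; for an open three-site chain (a deliberately simplified stand-in for the tri-radical geometries, not the model actually parameterized in the paper), the quartet-doublet splitting follows from exact diagonalization and equals 3J/2.

```python
import numpy as np

def heisenberg_gap(J):
    """Quartet-doublet gap of an open 3-site spin-1/2 Heisenberg chain,
    H = J (S1.S2 + S2.S3), built with Kronecker products and solved by
    exact diagonalization. For antiferromagnetic J > 0 the spectrum is
    {-J, 0, J/2}: doublet ground state, quartet highest."""
    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]]) / 2
    I = np.eye(2)

    def op(mat, site):  # embed a single-site operator into the 3-spin space
        ops = [I, I, I]
        ops[site] = mat
        return np.kron(np.kron(ops[0], ops[1]), ops[2])

    H = sum(J * (op(s, i) @ op(s, i + 1))
            for s in (sx, sy, sz) for i in (0, 1))
    E = np.linalg.eigvalsh(H)
    return float(E[-1] - E[0])  # quartet minus lowest doublet = 3J/2
```

For example, t = 1 and U = 10 give J = 0.4 and a gap of 0.6 in the same units, showing how delocalization (t) and repulsion (U) jointly set the magnetic splitting.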
NASA Astrophysics Data System (ADS)
Bengulescu, Marc; Blanc, Philippe; Boilley, Alexandre; Wald, Lucien
2017-02-01
This study investigates the characteristic time-scales of variability found in long-term time-series of daily means of estimates of surface solar irradiance (SSI). The study is performed at various levels to better understand the causes of variability in the SSI. First, the variability of the solar irradiance at the top of the atmosphere is scrutinized. Then, estimates of the SSI in cloud-free conditions as provided by the McClear model are dealt with, in order to reveal the influence of the clear atmosphere (aerosols, water vapour, etc.). Lastly, the role of clouds on variability is inferred by the analysis of in-situ measurements. A description of how the atmosphere affects SSI variability is thus obtained on a time-scale basis. The analysis is also performed with estimates of the SSI provided by the satellite-derived HelioClim-3 database and by two numerical weather re-analyses: ERA-Interim and MERRA2. It is found that HelioClim-3 estimates render an accurate picture of the variability found in ground measurements, not only globally, but also with respect to individual characteristic time-scales. On the contrary, the variability found in re-analyses correlates poorly with all scales of ground measurements variability.
Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet
2016-05-01
Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 values estimated from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can thus be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
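The correction itself is a one-liner once the thermal component is known. In this sketch the calibration slope and intercept are illustrative placeholders for an individual HR-VO2 line from the submaximal step test, not values from the study.

```python
def vo2_from_hr(hr_work, delta_hrt, slope, intercept):
    """Estimate work VO2 (L/min) from measured heart rate after removing
    the thermal component ΔHRT (bpm), using an individual linear HR-VO2
    calibration: VO2 = slope * (HR - ΔHRT) + intercept."""
    return slope * (hr_work - delta_hrt) + intercept
```

For example, a worker at 130 bpm with a 20 bpm thermal component and a calibration of 0.02 L/min per bpm (intercept -1.0) would be credited with the VO2 of an effective 110 bpm, avoiding the ~30% overestimation reported above.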
Granata, Daniele; Carnevale, Vincenzo
2016-01-01
The collective behavior of a large number of degrees of freedom can be often described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
MIDAS robust trend estimator for accurate GPS station velocities without step detection.
Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C; Gazeaux, Julien
2016-03-01
Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi )/(tj-ti ) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
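The core MIDAS recipe (one-year pairs, median slope, one outlier-trimming pass, recomputed median) can be sketched as follows. This is a simplified illustration: the published algorithm's gap handling, its exact trimming rule, and its uncertainty estimate are not reproduced, and the pairing tolerance is an assumption.

```python
import numpy as np

def midas_trend(t, x, tol=0.01):
    """Sketch of a MIDAS-style trend (units of x per year, t in years):
    Theil-Sen median slope restricted to data pairs separated by ~1 year
    (which cancels seasonality and limits step-spanning pairs), then one
    pass of outlier removal and a recomputed median."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    slopes = []
    for i in range(len(t)):
        # pair each epoch with the sample closest to one year later
        j = int(np.argmin(np.abs(t - (t[i] + 1.0))))
        if abs(t[j] - t[i] - 1.0) < tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    slopes = np.array(slopes)
    med = np.median(slopes)
    # trim slopes beyond 2 scaled MADs (step-spanning pairs are one-sided
    # outliers that would otherwise bias the median), then recompute
    sig = 1.4826 * np.median(np.abs(slopes - med))
    kept = slopes[np.abs(slopes - med) < 2 * sig]
    return float(np.median(kept if kept.size else slopes))
```

A daily series with an annual cycle and an undetected step part-way through still yields the underlying rate, which is the point of the design.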
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. The methods used to calibrate (rate) the index velocity to the channel velocity measured using the acoustic Doppler current profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, the discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
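The three-step pipeline described above (rate the index velocity, convert to discharge, low-pass out the tides) can be sketched as follows. This is a simplified illustration: it assumes the calibration samples are time-aligned with the index record, uses a linear rating, and uses a single ~25 h boxcar where practice often uses a Godin-style cascaded filter.

```python
import numpy as np

def net_discharge(v_index, v_adcp, area, dt_hours, window_hours=25.0):
    """Sketch of the index-velocity method: (1) rate the index velocity
    against ADCP mean channel velocity with a linear least-squares fit,
    (2) form instantaneous discharge Q = v_mean * A, (3) low-pass with a
    ~25 h moving average to strip tidal oscillations, leaving the net
    (residual) discharge."""
    v_index = np.asarray(v_index, float)
    a, b = np.polyfit(v_index, np.asarray(v_adcp, float), 1)  # rating
    q = (a * v_index + b) * area                # instantaneous discharge
    n = max(1, int(round(window_hours / dt_hours)))
    kernel = np.ones(n) / n
    return np.convolve(q, kernel, mode="valid")  # net discharge series
```

With a 12.42 h M2-like tide, a 25 h window attenuates the tidal signal by roughly two orders of magnitude, which is why tide-suppressing filters are sized near that length.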
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
NASA Astrophysics Data System (ADS)
Wang, Q. J.
1990-12-01
Unbiased estimators of probability weighted moments (PWM) and partial probability weighted moments (PPWM) from systematic and historical flood information are derived. Applications are made to estimating parameters and quantiles of the generalized extreme value (GEV) distribution. The effect of lower bound censoring, which might be deliberately introduced in practice, is also considered.
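For the systematic-record part of the problem, the standard unbiased PWM estimators and the classic PWM-to-GEV mapping can be sketched as follows (Hosking's sign convention for the shape k; the rational approximation for k from the L-skewness is the usual one). The historical-information and censoring extensions derived in the paper are not reproduced here.

```python
import math
import numpy as np

def pwm_unbiased(x, r):
    """Unbiased estimator of the probability weighted moment
    b_r = E[X F(X)^r]: for the ascending order statistics x_(1..n),
    b_r = (1/n) * sum_j [C(j-1, r) / C(n-1, r)] * x_(j)."""
    xs = np.sort(np.asarray(x, float))
    n = len(xs)
    w = np.array([math.comb(j, r) / math.comb(n - 1, r) for j in range(n)])
    return float(np.mean(w * xs))

def gev_from_pwm(x):
    """GEV parameters (location xi, scale alpha, shape k) from the first
    three unbiased PWMs via L-moments: l1 = b0, l2 = 2b1 - b0,
    tau3 = (6b2 - 6b1 + b0) / l2, then the standard approximation
    k ~= 7.8590c + 2.9554c^2 with c = 2/(3 + tau3) - ln2/ln3."""
    b0, b1, b2 = (pwm_unbiased(x, r) for r in range(3))
    l1, l2 = b0, 2 * b1 - b0
    t3 = (6 * b2 - 6 * b1 + b0) / l2
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c
    alpha = l2 * k / (math.gamma(1 + k) * (1 - 2 ** (-k)))
    xi = l1 + alpha * (math.gamma(1 + k) - 1) / k
    return xi, alpha, k
```

Quantiles then follow from the GEV inverse CDF, x_p = xi + alpha * (1 - (-ln p)^k) / k.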
Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A
2016-05-01
The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model which is typically used for psychometric function estimation to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness of fit), which can detect overdispersion but provide no method to do correct inference for overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available.
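A minimal sketch of the beta-binomial building block, assuming the common mean/concentration parameterization (psignifit 4's internal parameterization may differ):

```python
from math import lgamma, exp

def log_beta(a, b):
    """log of the Beta function via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, p, nu):
    """Beta-binomial: k successes in n trials with mean p and
    concentration nu (alpha = p*nu, beta = (1-p)*nu). Larger nu
    approaches the plain binomial; smaller nu adds overdispersion."""
    a, b = p * nu, (1.0 - p) * nu
    log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + log_beta(k + a, n - k + b) - log_beta(a, b))
    return exp(log_pmf)

# Probabilities over all outcomes sum to one, and the mean is n*p.
total = sum(betabinom_pmf(k, 10, 0.7, 5.0) for k in range(11))
```

Replacing the binomial likelihood of each block of trials with this pmf is what lets credible intervals widen appropriately for overdispersed observers.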
Accurate estimation of the RMS emittance from single current amplifier data
Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.
2002-05-31
This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H⁻ ion source.
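The core SCUBEEx idea (treat the average density outside an exclusion boundary as uniform background, subtract it, and evaluate the rms emittance inside) can be sketched on synthetic data; the circular boundary and unit-sigma Gaussian beam below are simplifying assumptions, not the paper's elliptical machinery:

```python
import math

def rms_emittance(points, weights):
    """Intensity-weighted rms emittance sqrt(<x^2><x'^2> - <xx'>^2)."""
    w = sum(weights)
    mx = sum(wi * x for (x, xp), wi in zip(points, weights)) / w
    mxp = sum(wi * xp for (x, xp), wi in zip(points, weights)) / w
    sxx = sum(wi * (x - mx) ** 2 for (x, xp), wi in zip(points, weights)) / w
    spp = sum(wi * (xp - mxp) ** 2 for (x, xp), wi in zip(points, weights)) / w
    sxp = sum(wi * (x - mx) * (xp - mxp) for (x, xp), wi in zip(points, weights)) / w
    return math.sqrt(max(sxx * spp - sxp * sxp, 0.0))

def scubeex_estimate(points, density, radius):
    """Exclude data outside a circle of the given radius, take the outside
    average as uniform background, subtract it, and evaluate the rms
    emittance of the remaining density inside the boundary."""
    inside = [(p, d) for p, d in zip(points, density) if p[0] ** 2 + p[1] ** 2 <= radius ** 2]
    outside = [d for p, d in zip(points, density) if p[0] ** 2 + p[1] ** 2 > radius ** 2]
    bg = sum(outside) / len(outside)
    pts = [p for p, _ in inside]
    wts = [max(d - bg, 0.0) for _, d in inside]
    return bg, rms_emittance(pts, wts)

# Synthetic beam: unit Gaussian in the (x, x') plane plus uniform background 0.05.
grid = [(-4 + 0.1 * i, -4 + 0.1 * j) for i in range(81) for j in range(81)]
dens = [math.exp(-(x * x + xp * xp) / 2) + 0.05 for x, xp in grid]
bg, eps = scubeex_estimate(grid, dens, 3.5)
```

Sweeping `radius` and watching `bg` and `eps` flatten out reproduces the plateau behaviour the abstract describes.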
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the various body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing the RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on the RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively employ the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration. Accurately identifying the motion blur direction and length is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters by using the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters are difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
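The length-from-spectrum idea is easy to demonstrate in one dimension: a uniform motion blur of length L zeroes the spectrum at multiples of N/L, so the first near-zero reveals L. This sketch omits the 2D Radon/GrabCut direction step described above:

```python
import cmath

def dft_mag(x):
    """Magnitude of the DFT of a real sequence (O(N^2); fine for a demo)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
            for k in range(n)]

def estimate_blur_length(signal):
    """Locate the first near-zero of the spectrum and invert k0 = N / L."""
    mag = dft_mag(signal)
    n = len(signal)
    peak = max(mag)
    for k in range(1, n // 2):
        if mag[k] < 1e-6 * peak:
            return round(n / k)
    return None

# A uniform 8-sample motion-blur kernel on an N = 64 grid: its spectrum
# vanishes at k = 8, 16, ..., so the estimator recovers L = 8.
n, L = 64, 8
kernel = [1.0 / L] * L + [0.0] * (n - L)
est = estimate_blur_length(kernel)
```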
Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera
NASA Astrophysics Data System (ADS)
Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi
In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. On the other hand, a camera cannot obtain the rotation continuously when feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by using a reliability judgment of the camera rotations and devising the state value of the Extended Kalman Filter, the proposed method shows good performance even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.
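A minimal one-dimensional sketch of the fusion idea: a scalar Kalman filter (rather than the paper's full EKF) integrates a biased gyro in the predict step and corrects with intermittent absolute camera angles in the update step. The bias, rates and noise values are illustrative assumptions:

```python
def fuse(gyro_rates, cam_angles, dt=0.01, q=1e-4, r=1e-2):
    """1-D Kalman filter: dead-reckon on the gyro, correct with the
    camera. cam_angles[i] is None when no features were extracted."""
    angle, p = 0.0, 1.0
    out = []
    for w, z in zip(gyro_rates, cam_angles):
        angle += w * dt          # predict: integrate the gyro rate
        p += q
        if z is not None:        # update: absolute angle from the camera
            k = p / (p + r)
            angle += k * (z - angle)
            p *= (1.0 - k)
        out.append(angle)
    return out

# True motion: constant 1 rad/s. The gyro carries a 0.2 rad/s bias; the
# camera sees the true angle but only every 10th frame.
steps, dt = 1000, 0.01
truth = [(i + 1) * dt for i in range(steps)]
gyro = [1.0 + 0.2] * steps
cam = [truth[i] if i % 10 == 0 else None for i in range(steps)]
fused = fuse(gyro, cam, dt)
drift_only = [sum(gyro[: i + 1]) * dt for i in range(steps)]
```

Pure gyro integration ends about 2 rad off after 10 s, while the fused estimate stays within a small fraction of that, which is exactly the complementary behaviour the abstract exploits.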
Houairi, Kamel; Cassaing, Frédéric
2009-12-01
Two-wavelength interferometry combines measurements at two wavelengths λ1 and λ2 in order to increase the unambiguous range (UR) for the measurement of an optical path difference. With the usual algorithm, the UR is equal to the synthetic wavelength Λ = λ1λ2/|λ1 − λ2|, and the accuracy is a fraction of Λ. We propose here a new analytical algorithm based on arithmetic properties, allowing estimation of the absolute fringe order of interference in a noniterative way. This algorithm has attractive properties compared with the usual algorithm: it is at least as accurate as the most accurate measurement at one wavelength, whereas the UR is extended to several times the synthetic wavelength. The analysis presented shows how the actual UR depends on the wavelengths and different sources of error. The simulations presented are confirmed by experimental results, showing that the new algorithm has enabled us to reach a UR of 17.3 μm, much larger than the synthetic wavelength, which is only Λ = 2.2 μm. Applications to metrology and fringe tracking are discussed.
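The usual algorithm's synthetic-wavelength relation can be illustrated directly; the wavelength pair below is hypothetical, not the paper's, and the arithmetic fringe-order extension is not reproduced:

```python
import math

def synthetic_wavelength(l1, l2):
    """Lambda = l1*l2 / |l1 - l2|, the unambiguous range of the usual
    two-wavelength algorithm."""
    return l1 * l2 / abs(l1 - l2)

def opd_estimate(phi1, phi2, l1, l2):
    """Usual algorithm: the synthetic phase difference, taken modulo
    2*pi, scales the synthetic wavelength to give the OPD."""
    lam = synthetic_wavelength(l1, l2)
    synth_phase = (phi1 - phi2) % (2 * math.pi)
    return lam * synth_phase / (2 * math.pi)

# Hypothetical wavelength pair in microns.
l1, l2 = 1.5, 1.6
lam = synthetic_wavelength(l1, l2)      # 24 um unambiguous range
opd_true = 5.0
phi1 = (2 * math.pi * opd_true / l1) % (2 * math.pi)
phi2 = (2 * math.pi * opd_true / l2) % (2 * math.pi)
opd = opd_estimate(phi1, phi2, l1, l2)
```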
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
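The paper's exact formula, including its 'unellipticity' coefficient, is not reproduced in this excerpt. As an illustration, a common spheroid approximation from 2D length and width is shown below; it also exhibits the Archimedes sphere/cylinder ratio the abstract invokes:

```python
import math

def spheroid_biovolume(length, width):
    """Prolate-spheroid approximation from a 2D silhouette:
    V = (pi/6) * L * W^2. For L == W this is a sphere of diameter L."""
    return math.pi / 6.0 * length * width ** 2

# Archimedes: a sphere fills exactly 2/3 of its circumscribing cylinder.
d = 2.0
v_sphere = spheroid_biovolume(d, d)
v_cylinder = math.pi * (d / 2) ** 2 * d
v_cell = spheroid_biovolume(10.0, 4.0)   # an elongated cell
```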
Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry
NASA Astrophysics Data System (ADS)
van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.
2016-03-01
Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
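The triangulation step can be illustrated under a parallel-beam simplification (real C-arm projections are cone-beam; the geometry below is a toy assumption): two projections of a 2D point at known gantry angles give a 2x2 linear system for its position, and hence its depth.

```python
import math

def project(point, theta):
    """Orthographic projection of (x, z) onto a detector rotated by theta."""
    x, z = point
    return x * math.cos(theta) + z * math.sin(theta)

def triangulate(u1, t1, u2, t2):
    """Solve the 2x2 linear system u_i = x*cos(t_i) + z*sin(t_i) for (x, z)."""
    a, b = math.cos(t1), math.sin(t1)
    c, d = math.cos(t2), math.sin(t2)
    det = a * d - b * c          # sin(t2 - t1); nonzero for distinct angles
    return ((u1 * d - u2 * b) / det, (a * u2 - c * u1) / det)

needle_tip = (12.0, 35.0)                    # hypothetical (lateral, depth), mm
t1, t2 = math.radians(-5), math.radians(5)   # a 10-degree total arc
u1, u2 = project(needle_tip, t1), project(needle_tip, t2)
est = triangulate(u1, t1, u2, t2)
```

With noise-free detector coordinates the recovery is exact; the abstract's error-versus-angle trend comes from how the small determinant at narrow arcs amplifies detector noise.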
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
The potential of more accurate InSAR covariance matrix estimation for land cover mapping
NASA Astrophysics Data System (ADS)
Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin
2017-04-01
Synthetic aperture radar (SAR) and interferometric SAR (InSAR) provide both structural and electromagnetic information for the ground surface and therefore have been widely used for land cover classification. However, relatively few studies have investigated SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of the structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem and further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using the HH-polarization TerraSAR-X dataset and compared with each other using the random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from the enhancement of the quality of the observations, in addition to classifier selection and multi-source data integration reported in previous studies.
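One entry of the InSAR covariance/coherence matrix, the windowed sample coherence between two co-registered complex images, can be sketched as follows. This is the standard boxcar estimator; the paper's updated estimation method is not reproduced:

```python
import cmath
import math

def coherence(s1, s2):
    """Sample coherence magnitude over a window of co-registered complex
    SAR pixels: |sum s1 * conj(s2)| / sqrt(sum |s1|^2 * sum |s2|^2)."""
    num = sum(a * b.conjugate() for a, b in zip(s1, s2))
    den = math.sqrt(sum(abs(a) ** 2 for a in s1) * sum(abs(b) ** 2 for b in s2))
    return abs(num) / den

# A 5x5 window flattened to 25 samples (synthetic phases for the demo).
window1 = [cmath.exp(1j * 0.3 * k) for k in range(25)]
same = coherence(window1, window1)                              # identical signals
decorrelated = coherence(window1, [cmath.exp(1j * 1.7 * k * k) for k in range(25)])
```

Identical windows give coherence 1, while phase-scrambled windows drop well below it; it is this contrast, estimated per pixel, that feeds the random forest classifier as a texture-sensitive feature.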
Can student health professionals accurately estimate alcohol content in commonly occurring drinks?
Sinclair, Julia; Searle, Emma
2016-01-01
Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was allowed. Wine and premium-strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood of undertaking screening or initiating treatment. PMID:27536344
Greater contrast in Martian hydrological history from more accurate estimates of paleodischarge
NASA Astrophysics Data System (ADS)
Jacobsen, R. E.; Burr, D. M.
2016-09-01
Correlative width-discharge relationships from the Missouri River Basin are commonly used to estimate fluvial paleodischarge on Mars. However, hydraulic geometry provides alternative, and causal, width-discharge relationships derived from broader samples of channels, including those in reduced-gravity (submarine) environments. Comparison of these relationships implies that causal relationships from hydraulic geometry should yield more accurate and more precise discharge estimates. Our remote analysis of a Martian-terrestrial analog channel, combined with in situ discharge data, substantiates this implication. Applied to Martian features, these results imply that paleodischarges of interior channels of Noachian-Hesperian (~3.7 Ga) valley networks have been underestimated by a factor of several, whereas paleodischarges for smaller fluvial deposits of the Late Hesperian-Early Amazonian (~3.0 Ga) have been overestimated. Thus, these new paleodischarges significantly magnify the contrast between early and late Martian hydrologic activity. Width-discharge relationships from hydraulic geometry represent validated tools for quantifying fluvial input near candidate landing sites of upcoming missions.
NASA Astrophysics Data System (ADS)
Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray
2016-06-01
Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of diffuse attenuation Kd and beam attenuation c (i.e., a multiple-scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.
German, A J; Holden, S L; Bissot, T; Morris, P J; Biourge, V
2009-10-01
Prior to starting a weight loss programme, target weight (TW) is often estimated, using starting body condition score (BCS). The current study assessed how well such estimates perform in clinical practice. Information on body weight, BCS and body composition was assessed before and after weight loss in 28 obese, client-owned dogs. Median decrease in starting weight per BCS unit was 10% (5-15%), with no significant difference between dogs losing moderate (1-2 BCS points) or marked (3-4 BCS points) amounts of weight (P=0.627). Mean decrease in body fat per BCS unit change was 5% (3-9%). A model based on a change of 10% of starting weight per unit of BCS above ideal (5/9) most closely estimated actual TW, but marked variability was seen. Therefore, although such calculations may provide a guide to final TW in obese dogs, they can either over- or under-estimate the appropriate end point of weight loss.
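The model favoured by the study, a loss of 10% of starting weight per BCS unit above the ideal of 5/9, translates directly into code (bearing in mind the marked individual variability the authors report):

```python
def target_weight(start_weight_kg, bcs, ideal_bcs=5, loss_per_unit=0.10):
    """Estimate target weight: lose 10% of starting weight for each
    body condition score unit above the ideal (5 on the 9-point scale)."""
    units_over = max(bcs - ideal_bcs, 0)
    return start_weight_kg * (1.0 - loss_per_unit * units_over)

# An obese dog weighing 40 kg at BCS 8/9 gets a 30% loss target.
tw = target_weight(40.0, 8)
```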
Discrete state model and accurate estimation of loop entropy of RNA secondary structures.
Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie
2008-03-28
Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurement. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study on the effect of asymmetric size of loops suggests that the loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html.
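The Jacobson-Stockmayer extrapolation mentioned above can be sketched as follows; the coefficient 1.75 and reference length 9 follow the common nearest-neighbor convention and are assumptions here, as is the reference penalty value:

```python
import math

R = 1.987e-3  # gas constant, kcal / (mol K)

def js_loop_penalty(n, g_ref, n_ref=9, coeff=1.75, temp=310.15):
    """Jacobson-Stockmayer extrapolation of the loop free-energy penalty:
    dG(n) = dG(n_ref) + coeff * R * T * ln(n / n_ref)."""
    return g_ref + coeff * R * temp * math.log(n / n_ref)

g9 = 5.0                            # hypothetical reference penalty, kcal/mol
g9_check = js_loop_penalty(9, g9)   # n == n_ref returns the reference value
g50 = js_loop_penalty(50, g9)       # penalty grows logarithmically with n
```

It is deviations from exactly this logarithmic growth, for bulge, internal and multibranch loops, that the paper's sampled entropies quantify.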
NASA Astrophysics Data System (ADS)
Bui, The Anh; D'Ancona, Piero; Duong, Xuan Thinh; Li, Ji; Ly, Fu Ken
2017-02-01
Let L_a be a Schrödinger operator with inverse square potential a|x|^(-2) on R^d, d ≥ 3. The main aim of this paper is to prove weighted estimates for fractional powers of L_a. The proof is based on weighted Hardy inequalities and weighted inequalities for square functions associated to L_a. As an application, we obtain smoothing estimates regarding the propagator e^(itL_a).
Accurate optical flow field estimation using mechanical properties of soft tissues
NASA Astrophysics Data System (ADS)
Mehrabian, Hatef; Karimi, Hirad; Samani, Abbas
2009-02-01
A novel optical flow based technique is presented in this paper to measure the nodal displacements of soft tissue undergoing large deformations. In hyperelasticity imaging, soft tissues may be compressed extensively [1] and the deformation may exceed the number of pixels ordinary optical flow approaches can detect. Furthermore, in most biomedical applications there is a large amount of image information that represents the geometry of the tissue and the number of tissue types present in the organ of interest. Such information is often ignored in applications such as image registration. In this work we incorporate the information pertaining to soft tissue mechanical behavior (a Neo-Hookean hyperelastic model is used here), in addition to the tissue geometry before compression, into a hierarchical Horn-Schunck optical flow method to overcome this weakness in detecting large deformations. Applying the proposed method to a phantom at several compression levels showed that it yields reasonably accurate displacement fields. Estimated displacement results of this phantom study obtained for displacement fields of 85 pixels/frame and 127 pixels/frame are reported and discussed in this paper.
How accurately can we estimate energetic costs in a marine top predator, the king penguin?
Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J
2007-01-01
King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate versus rate of oxygen consumption (fH-VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH-VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH-VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH-VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24 log(fH) + 0.0237 t - 0.0157 log(fH) t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH-VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH-VO2 prediction equations, is explained.
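Prediction equation (1) can be applied directly. Base-10 logarithms are assumed below, and the meaning and units of the term t are not given in this excerpt:

```python
import math

def predict_vo2(f_h, t):
    """Equation (1) from the abstract (base-10 logs assumed):
    log(VO2) = -0.279 + 1.24*log(fH) + 0.0237*t - 0.0157*log(fH)*t."""
    log_vo2 = (-0.279 + 1.24 * math.log10(f_h)
               + 0.0237 * t - 0.0157 * math.log10(f_h) * t)
    return 10.0 ** log_vo2

# Illustrative inputs: heart rate 100 beats/min with the t term zeroed.
v = predict_vo2(100.0, 0.0)
```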
A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mack, Robert J.
1999-01-01
During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature with the shocks causing these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. Like other weight-estimation methods, weights were determined at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.
Hanford, K J; Van Vleck, L D; Snowder, G D
2003-03-01
Genetic parameters from both single-trait and bivariate analyses for prolificacy, weight, and wool traits were estimated using REML with animal models for Targhee sheep from data collected from 1950 to 1998 at the U.S. Sheep Experiment Station, Dubois, ID. Breeding values from both single-trait and seven-trait analyses calculated with the parameters estimated from the single-trait and bivariate analyses were compared across years of birth with respect to genetic trends. The numbers of observations were 38,625 for litter size at birth and litter size at weaning, 33,994 for birth weight, 32,715 for weaning weight, 36,807 for fleece weight and fleece grade, and 3,341 for staple length. Direct heritability estimates from single-trait analyses were 0.10 for litter size at birth, 0.07 for litter size at weaning, 0.25 for birth weight, 0.22 for weaning weight, 0.54 for fleece weight, 0.41 for fleece grade, and 0.65 for staple length. Estimate of direct genetic correlation between litter size at birth and weaning was 0.77 and between birth and weaning weights was 0.52. The estimate of genetic correlation between fleece weight and staple length was positive (0.54), but was negative between fleece weight and fleece grade (-0.47) and between staple length and fleece grade (-0.69). Estimates of genetic correlations were near zero between birth weight and litter size traits and small and positive between weaning weight and litter size traits. Fleece weight was slightly and negatively correlated with both litter size traits. Fleece grade was slightly and positively correlated with both litter size traits. Estimates of correlations between staple length and litter size at birth (-0.14) and litter size at weaning (0.05) were small. Estimates of correlations between weight traits and fleece weight were positive and low to moderate. Estimates of correlations between weight traits and fleece grade were negative and small, whereas estimates between weight traits and staple length were
Steplewski, Z; Buczyńska, G; Rogoszewski, M; Kuban, T; Steplewska-Mazur, K; Jaskólecki, H; Kasperczyk, J
1998-02-01
Low birth weight remains an important health problem in many countries. Low birth weight increases mortality, injures the central nervous system, and interferes with somatic, intellectual and emotional development. Low birth weight occurs frequently in Poland, in 7-9% of live births. There are many risk factors, among them behavioural and environmental ones. In Poland, attention has been paid to chemical and physical environmental factors, while behavioural factors (stress) have been disregarded. In the present paper we examined the relationship between stress during pregnancy (as assessed by the pregnant woman), child birth weight, and the frequency of low birth weight. The research was carried out with a questionnaire using a case-control design. It involved 450 mothers of newborn children (cases: premature delivery or birth weight below 2500 g) and 450 mothers of newborn children (controls: physiological delivery). Mothers were asked about their attitude toward the pregnancy, and professional and personal stress during pregnancy was assessed. The results were analysed by computing the risk ratio (RR) and correlation coefficients. The research showed no relation between acceptance of pregnancy, stress, and the frequency of low birth weight or the average birth weight, and did not demonstrate an unfavourable influence of stress reactions caused by professional and personal stressors on intrauterine foetal development.
Hanford, K J; Van Vleck, L D; Snowder, G D
2002-12-01
Genetic parameters from both single-trait and bivariate analyses for prolificacy, weight and wool traits were estimated using REML with animal models for Columbia sheep from data collected from 1950 to 1998 at the U.S. Sheep Experiment Station (USSES), Dubois, ID. Breeding values from both single-trait and seven-trait analyses calculated using the parameters estimated from the single-trait and bivariate analyses were compared with respect to genetic trends. Numbers of observations were 31,401 for litter size at birth and litter size at weaning, 24,741 for birth weight, 23,903 for weaning weight, 29,572 for fleece weight and fleece grade, and 2,449 for staple length. Direct heritability estimates from single-trait analyses were 0.09 for litter size at birth, 0.06 for litter size at weaning, 0.27 for birth weight, 0.16 for weaning weight, 0.53 for fleece weight, 0.41 for fleece grade, and 0.55 for staple length. Estimate of direct genetic correlation between litter size at birth and weaning was 0.84 and between birth and weaning weights was 0.56. Estimate of genetic correlation between fleece weight and staple length was positive (0.55) but negative between fleece weight and fleece grade (-0.47) and between staple length and fleece grade (-0.70). Estimates of genetic correlations were positive but small between birth weight and litter size traits and moderate and positive between weaning weight and litter size traits. Fleece weight was lowly and negatively correlated with both litter size traits. Fleece grade was lowly and positively correlated with both litter size traits, while staple length was lowly and negatively correlated with the litter size traits. Estimates of correlations between weight traits and fleece weight were positive and low to moderate. Estimates of correlations between weight traits and fleece grade were negative and small. Estimates of correlations between staple length and birth weight (0.05) and weaning weight (-0.04) were small. Estimated
A Weighted Least Squares Approach To Robustify Least Squares Estimates.
ERIC Educational Resources Information Center
Lin, Chowhong; Davenport, Ernest C., Jr.
This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…
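The abstract's subsample-then-reweight procedure is not fully specified here, but the general mechanics of robustifying least squares through weights can be sketched with a standard iteratively reweighted least squares loop using Huber weights. The `huber_weight` helper, the tuning constant 1.345, and the MAD scale estimate are conventional choices, not details from the study:

```python
import numpy as np

def huber_weight(r, c=1.345):
    """Huber weight: 1 for small standardized residuals, c/|r| for large."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def robust_wls(X, y, n_iter=30):
    """Iteratively reweighted least squares with Huber weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        w = huber_weight(r / s)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Line y = 1 + 2x with one gross outlier at the last point
x = np.arange(10.0)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
y[9] += 50.0
beta = robust_wls(X, y)   # should land near the clean coefficients [1, 2]
```

Each pass downweights points with large standardized residuals, so the contaminated point loses influence while the clean points dominate the fit.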
Majaj, Najib J.; Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of the two error variances is not precisely known.
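The errors-in-variables estimator highlighted above requires the ratio of the two error variances as an input, which is exactly where the abstract urges caution. A minimal Deming-regression sketch in which that ratio (`delta`, here the y-error variance over the x-error variance) must be supplied by the analyst:

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Deming (errors-in-variables) regression slope.
    delta = var(error in y) / var(error in x); the estimate is
    sensitive to this ratio, per the caution in the abstract."""
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    d = syy - delta * sxx
    return (d + np.sqrt(d * d + 4.0 * delta * sxy * sxy)) / (2.0 * sxy)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x
slope = deming_slope(x, y)   # noise-free data: slope is exactly 2
```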
Krieger, Christine C; Gershengorn, Marvin C
2014-02-01
Excess production of hyaluronan (hyaluronic acid [HA]) in the retro-orbital space is a major component of Graves' ophthalmopathy, and regulation of HA production by orbital cells is a major research area. In most previous studies, HA was measured by ELISAs that used HA-binding proteins for detection and rooster comb HA as standards. We show that the binding efficiency of the HA-binding protein in the ELISA is a function of HA polymer size. Using gel electrophoresis, we show that HA secreted from orbital cells is primarily composed of polymers of molecular weight greater than 500,000. We modified a commercially available ELISA by using HA of 1 million molecular weight as the standard to accurately measure HA of this size. We demonstrated that IL-1β-stimulated HA secretion is at least 2-fold greater than previously reported, and that activation of the TSH receptor by M22, an activating antibody from a patient with Graves' disease, led to a more than 3-fold increase in HA production in both fibroblasts/preadipocytes and adipocytes. These effects were not consistently detected with the commercial ELISA using rooster comb HA as the standard, and they suggest that fibroblasts/preadipocytes may play a more prominent role in HA remodeling in Graves' ophthalmopathy than previously appreciated.
Underdetermined DOA Estimation Using MVDR-Weighted LASSO
Salama, Amgad A.; Ahmad, M. Omair; Swamy, M. N. S.
2016-01-01
The direction of arrival (DOA) estimation problem is formulated in a compressive sensing (CS) framework, and an extended array aperture is presented to increase the number of degrees of freedom of the array. The ordinary least squares adaptable least absolute shrinkage and selection operator (OLS A-LASSO) is applied for the first time to DOA estimation. Furthermore, a new LASSO algorithm, the minimum variance distortionless response (MVDR) A-LASSO, which solves the DOA problem in the CS framework, is presented. The proposed algorithm depends neither on the singular value decomposition nor on the orthogonality of the signal and noise subspaces. Hence, DOA estimation can be done without a priori knowledge of the number of sources. The proposed algorithm can estimate up to ((M^2 − 2)/2 + M − 1)/2 sources using M sensors without any constraints or assumptions about the nature of the signal sources. Furthermore, the proposed algorithm exhibits performance superior to that of the classical DOA estimation methods, especially for low signal-to-noise ratios (SNR), spatially close sources and coherent scenarios. PMID:27657080
Lee, Christina D.; Chae, Junghoon; Schap, TusaRebecca E.; Kerr, Deborah A.; Delp, Edward J.; Ebert, David S.; Boushey, Carol J.
2012-01-01
Background Diet is a critical element of diabetes self-management. An emerging area of research is the use of images for dietary records using mobile telephones with embedded cameras. These tools are being designed to reduce user burden and to improve accuracy of portion-size estimation through automation. The objectives of this study were to (1) assess the error of automatically determined portion weights compared to known portion weights of foods and (2) compare the error of the automated method with that of human estimates. Methods Adolescents (n = 15) captured images of their eating occasions over a 24 h period. All foods and beverages served were weighed. Adolescents self-reported portion sizes for one meal. Image analysis was used to estimate portion weights. Data analysis compared known weights, automated weights, and self-reported portions. Results For the 19 foods, the mean ratio of automated weight estimate to known weight ranged from 0.89 to 4.61, and 9 foods were within 0.80 to 1.20. The largest error was for lettuce and the most accurate was strawberry jam. The children were fairly accurate with portion estimates for two foods (sausage links, toast) using one type of estimation aid and two foods (sausage links, scrambled eggs) using another aid. The automated method was fairly accurate for two foods (sausage links, jam); however, the 95% confidence intervals for the automated estimates were consistently narrower than those of the human estimates. Conclusions The ability of humans to estimate portion sizes of foods remains a problem and a perceived burden. Errors in automated portion-size estimation can be systematically addressed while minimizing the burden on people. Future applications that take over the burden of these processes may translate to better diabetes self-management. PMID:22538157
2012-01-01
Background Few equations have been developed in veterinary medicine, compared to human medicine, to predict body composition. The present study evaluated the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA) and ultrasonography (US) in cats, proposing equations to estimate fat mass (FM) and lean mass (LM), with dual-energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) in a weight loss program were used. DXA, BIO, BIA and US were performed in the obese state (T0; obese animals), after 10% weight loss (T1) and after 20% weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA and US. The best models were then evaluated by simple regression analysis, and the means predicted by the equations were compared with those determined by DXA to verify the accuracy of the equations. Results The independent variables determined by BIO, BIA and US that best correlated (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance) and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, p-value and r2, 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted the FM, and one the LM, of cats. Conclusions The two-variable equations are preferable because they are effective and offer an alternative method for estimating body composition in the clinical routine. To estimate lean mass, equations using body weight together with biometric measures can be proposed; to estimate fat mass, equations using body weight together with bioimpedance analysis can be proposed. PMID:22781317
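A two-variable prediction equation of the kind selected by the stepwise procedure (fat mass from body weight plus bioimpedance resistance) can be sketched with ordinary least squares. All numbers below are purely illustrative, not data from the study:

```python
import numpy as np

# Hypothetical cat data -- illustrative values only, not from the study.
bw = np.array([5.1, 5.8, 6.4, 7.0, 7.6, 8.2])                 # body weight, kg
r_ohm = np.array([140.0, 150.0, 155.0, 162.0, 170.0, 178.0])  # resistance, ohm
fm_dxa = np.array([1.3, 1.7, 2.0, 2.4, 2.8, 3.2])             # DXA fat mass, kg

# Two-variable equation FM = a + b*BW + c*R, fit by least squares
X = np.column_stack([np.ones_like(bw), bw, r_ohm])
coef, *_ = np.linalg.lstsq(X, fm_dxa, rcond=None)
fm_pred = X @ coef          # predicted fat mass for each cat
```

Comparing `fm_pred` against the DXA values mirrors the accuracy check the authors describe (predicted vs. determined means).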
Information Weighted Consensus for Distributed Estimation in Vision Networks
ERIC Educational Resources Information Center
Kamal, Ahmed Tashrif
2013-01-01
Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…
Are early first trimester weights valid proxies for preconception weight?
Technology Transfer Automated Retrieval System (TEKTRAN)
An accurate estimate of preconception weight is necessary for providing a gestational weight gain range based on the Institute of Medicine’s guidelines; however, an accurate and proximal preconception weight is not available for most women. We examined the validity of first trimester weights for est...
Improved covariance matrix estimators for weighted analysis of microarray data.
Astrand, Magnus; Mostad, Petter; Rudemo, Mats
2007-12-01
Empirical Bayes models have been shown to be powerful tools for identifying differentially expressed genes from gene expression microarray data. An example is the WAME model, where a global covariance matrix accounts for array-to-array correlations as well as differing variances between arrays. However, the existing method for estimating the covariance matrix is very computationally intensive and the estimator is biased when data contains many regulated genes. In this paper, two new methods for estimating the covariance matrix are proposed. The first method is a direct application of the EM algorithm for fitting the multivariate t-distribution of the WAME model. In the second method, a prior distribution for the log fold-change is added to the WAME model, and a discrete approximation is used for this prior. Both methods are evaluated using simulated and real data. The first method shows equal performance compared to the existing method in terms of bias and variability, but is superior in terms of computer time. For large data sets (>15 arrays), the second method also shows superior computer run time. Moreover, for simulated data with regulated genes the second method greatly reduces the bias. With the proposed methods it is possible to apply the WAME model to large data sets with reasonable computer run times. The second method shows a small bias for simulated data, but appears to have a larger bias for real data with many regulated genes.
NASA Astrophysics Data System (ADS)
Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui
2017-03-01
The uncertainties in the values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of the error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples a stochastic atmosphere and a slowly varying ocean, this study examines the sensitivity of the state-parameter covariance to the accuracy of estimated model states in different components of the coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere, with its chaotic nature, is the major source of inaccuracy in the estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in air-sea interaction processes. The impact of the chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline for using real observations to optimize model parameters in a coupled general circulation model to improve climate analysis and predictions.
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
NASA Technical Reports Server (NTRS)
Mullen, J., Jr.
1978-01-01
The implementation of the changes to the program for Wing Aeroelastic Design and the development of a program to estimate aircraft fuselage weights are described. The equations to implement the modified planform description, the stiffened panel skin representation, the trim loads calculation, and the flutter constraint approximation are presented. A comparison of the wing model with the actual F-5A weight material distributions and loads is given. The equations and program techniques used for the estimation of aircraft fuselage weights are described. These equations were incorporated as a computer code. The weight predictions of this program are compared with data from the C-141.
Comparison of Errors of 35 Weight Estimation Formulae in a Standard Collective
Hoopmann, M.; Kagan, K. O.; Sauter, A.; Abele, H.; Wagner, P.
2016-01-01
Issue: The estimation of foetal weight is an integral part of prenatal care and obstetric routine. In spite of its known susceptibility to errors in cases of underweight or overweight babies, important obstetric decisions depend on it. In the present contribution we have examined the accuracy and error distribution of 35 weight estimation formulae within the normal weight range of 2500–4000 g. The aim of the study was to identify the weight estimation formulae with the best possible correspondence to the requirements of clinical routine. Materials and Methods: 35 clinically established weight estimation formulae were analysed in 3416 foetuses with weights between 2500 and 4000 g. For this we determined and compared the mean percentage error (MPE), the mean absolute percentage error (MAPE), and the proportions of estimates within the error ranges of 5, 10, 20 and 30 %. In addition, separate regression lines were calculated for the relationship between estimated and actual birth weights for the weight range 2500–4000 g. The formulae were thus examined for possible inhomogeneities. Results: The lowest MPE were achieved with the Hadlock III and V formulae (0.8 %, SD 9.2 % and −0.8 %, SD 10.0 %, respectively). The lowest absolute error (6.6 %) as well as the most favourable frequency distribution of cases below 5 % and 10 % error (43.9 % and 77.5 %) were seen for the Halaska formula. In graphic representations of the regression lines, 16 formulae revealed a weight overestimation in the lower weight range and an underestimation in the upper range. 14 formulae gave underestimations and merely 5 gave overestimations over the entire tested weight range. Conclusion: The majority of the tested formulae gave underestimations of the actual birth weight over the entire weight range or at least in the upper weight range. This result supports the current strategy of a two-stage weight estimation in which a formula is first chosen after a pre-estimation of
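The error measures compared above (MPE, MAPE, and the proportion of estimates within given error bands) can be computed directly. A small sketch with invented estimated and actual birth weights, not the study's data:

```python
import numpy as np

def weight_estimation_errors(estimated, actual, ranges=(5, 10, 20, 30)):
    """Mean percentage error (MPE), mean absolute percentage error (MAPE),
    and the proportion of estimates within each % error band."""
    pe = 100.0 * (estimated - actual) / actual       # signed percentage error
    within = {r: float(np.mean(np.abs(pe) <= r)) for r in ranges}
    return pe.mean(), np.abs(pe).mean(), within

# Invented example weights in grams (four foetuses):
est = np.array([3100.0, 2600.0, 3900.0, 3300.0])
act = np.array([3000.0, 2500.0, 4000.0, 3500.0])
mpe, mape, within = weight_estimation_errors(est, act)
```

A near-zero MPE with a large MAPE is exactly the pattern the abstract describes: over- and underestimations cancel in the signed mean while the absolute errors remain substantial.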
Children Can Accurately Monitor and Control Their Number-Line Estimation Performance
ERIC Educational Resources Information Center
Wall, Jenna L.; Thompson, Clarissa A.; Dunlosky, John; Merriman, William E.
2016-01-01
Accurate monitoring and control are essential for effective self-regulated learning. These metacognitive abilities may be particularly important for developing math skills, such as when children are deciding whether a math task is difficult or whether they made a mistake on a particular item. The present experiments investigate children's ability…
Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.
Technology Transfer Automated Retrieval System (TEKTRAN)
Aims: To simplify the determination of the nuclear condition of the pathogenic Rhizoctonia, which currently needs to be performed either using two fluorescent dyes, thus is more costly and time-consuming, or using only one fluorescent dye, and thus less accurate. Methods and Results: A red primary ...
Schneider, Iris K; Parzuchowski, Michal; Wojciszke, Bogdan; Schwarz, Norbert; Koole, Sander L
2014-01-01
Previous work suggests that perceived importance of an object influences estimates of its weight. Specifically, important books were estimated to be heavier than non-important books. However, the experimental set-up of these studies may have suffered from a potential confound and findings may be confined to books only. Addressing this, we investigate the effect of importance on weight estimates by examining whether the importance of information stored on a data storage device (USB-stick or portable hard drive) can alter weight estimates. Results show that people thinking a USB-stick holds important tax information (vs. expired tax information vs. no information) estimate it to be heavier (Experiment 1) compared to people who do not. Similarly, people who are told a portable hard drive holds personally relevant information (vs. irrelevant), also estimate the drive to be heavier (Experiments 2A,B).
Zhu, Fangqiang; Hummer, Gerhard
2012-02-05
The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this article, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimally allocating of the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here, we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations.
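A minimal discrete-bin version of the traditional fixed-point (direct) WHAM iteration, the scheme the paper replaces with likelihood maximization, can be sketched as follows. As a toy self-check, the histograms are built from expected rather than sampled counts, so the known density should be recovered; the bin count, window count, and bias potentials are all invented for illustration:

```python
import numpy as np

def wham(counts, bias, n_iter=5000):
    """Traditional fixed-point (direct) WHAM iteration.
    counts[k, i]: histogram counts of window k in bin i.
    bias[k, i]:   biasing Boltzmann factor exp(-beta * U_k(x_i))."""
    N = counts.sum(axis=1)              # samples per window
    z = np.ones(counts.shape[0])        # window normalization constants
    total = counts.sum(axis=0)          # pooled histogram over windows
    for _ in range(n_iter):
        denom = (N / z) @ bias          # shape: (nbins,)
        p = total / denom               # unnormalized unbiased density
        z = bias @ p                    # self-consistent update of z_k
    return p / p.sum()

# Toy check: expected (noise-free) counts from a known density
rng = np.random.default_rng(0)
p_true = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
bias = np.exp(-rng.normal(size=(3, 5)))         # arbitrary bias factors
z_true = bias @ p_true
counts = 1000.0 * bias * p_true / z_true[:, None]
p_est = wham(counts, bias)                      # should recover p_true
```

The slow direct iteration shown here is precisely what motivates the superlinear optimization strategy described in the abstract.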
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2016-06-01
The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear/linearization-based techniques for state estimation is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent Riccati Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted, and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zones and friction.
Reference Models for Structural Technology Assessment and Weight Estimation
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd
2005-01-01
Previously the Exploration Concepts Branch of NASA Langley Research Center developed techniques for automating the preliminary-design level of launch vehicle airframe structural analysis for purposes of enhancing historical regression-based mass estimating relationships. This past work was useful and greatly reduced design time; however, its application area was very narrow in terms of being able to handle a large variety of structural and vehicle general arrangement alternatives. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component-defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA-based JAVA processing procedures and associated process control classes, coupled with the general utility of Loft and HSLoad, makes it possible to create generic program template files for analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, and wings, through full air and space vehicle general arrangements.
Accurate liability estimation improves power in ascertained case-control studies.
Weissbrod, Omer; Lippert, Christoph; Geiger, Dan; Heckerman, David
2015-04-01
Linear mixed models (LMMs) have emerged as the method of choice for confounded genome-wide association studies. However, the performance of LMMs in nonrandomly ascertained case-control studies deteriorates with increasing sample size. We propose a framework called LEAP (liability estimator as a phenotype; https://github.com/omerwe/LEAP) that tests for association with estimated latent values corresponding to severity of phenotype, and we demonstrate that this can lead to a substantial power increase.
Robust and Accurate Vision-Based Pose Estimation Algorithm Based on Four Coplanar Feature Points
Zhang, Zimiao; Zhang, Shihai; Li, Qiu
2016-01-01
Vision-based pose estimation is an important application of machine vision. Currently, analytical and iterative methods are used to solve for the object pose. The analytical solutions generally take less computation time; however, they are extremely susceptible to noise. The iterative solutions minimize the distance error between feature points based on 2D image pixel coordinates, but the non-linear optimization needs a good initial estimate of the true solution, otherwise it is more time-consuming than an analytical solution. Moreover, the image processing error grows rapidly as the measurement range increases, which leads to pose estimation errors. All the reasons mentioned above cause accuracy to decrease. To solve this problem, a novel pose estimation method based on four coplanar points is proposed. Firstly, the coordinates of the feature points are determined according to the linear constraints formed by the four points. The initial coordinates of the feature points acquired through the linear method are then optimized through an iterative method. Finally, the coordinate system of the object motion is established and a method is introduced to solve for the object pose. Although the growing image processing error causes pose estimation errors as the measurement range increases, these errors can be decreased through the established coordinate system. The proposed method is compared with two other existing methods through experiments. Experimental results demonstrate that the proposed method works efficiently and stably. PMID:27999338
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent, while this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time-consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations, while guaranteeing the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed times all demonstrate the validity of the proposed DDM.
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
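The abstract concerns weighted Cox models for survival outcomes; as a simpler hedged sketch of the same mechanics, the following computes IPTW weights from a logistic propensity model and a bootstrap standard error for a continuous-outcome treatment effect. All data are simulated (true effect 2), the Newton-Raphson logistic fit and Hajek-style weighted means are generic choices, and refitting the propensity model inside each bootstrap resample mirrors the variance-estimation point of the study:

```python
import numpy as np

def logit_fit(X, t, n_iter=25):
    """Logistic regression by Newton-Raphson (the propensity model)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = (X * (p * (1.0 - p))[:, None]).T @ X + 1e-8 * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (t - p))
    return beta

def iptw_ate(X, t, y):
    """IPTW estimate of the average treatment effect (ATE)."""
    ps = 1.0 / (1.0 + np.exp(-X @ logit_fit(X, t)))
    w = t / ps + (1.0 - t) / (1.0 - ps)          # inverse-probability weights
    treated = np.sum(w * t * y) / np.sum(w * t)
    control = np.sum(w * (1.0 - t) * y) / np.sum(w * (1.0 - t))
    return treated - control

def bootstrap_se(X, t, y, n_boot=200, seed=1):
    """Bootstrap SE: the propensity model is re-fit in every resample."""
    rng = np.random.default_rng(seed)
    n = len(y)
    ests = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        ests.append(iptw_ate(X[idx], t[idx], y[idx]))
    return float(np.std(ests, ddof=1))

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
t = (rng.random(n) < 1.0 / (1.0 + np.exp(-x))).astype(float)  # confounded
y = 2.0 * t + x + rng.normal(size=n)                          # true ATE = 2
ate = iptw_ate(X, t, y)
se = bootstrap_se(X, t, y)
```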
Kim, Dohyun; Lee, Jongshill; Park, Hoon Ki; Jang, Dong Pyo; Song, Soohwa; Cho, Baek Hwan; Jung, Yoo-Suk; Park, Rae-Woong; Joo, Nam-Seok; Kim, In Young
2016-08-24
The purpose of the study is to analyse how the resting metabolic rate (RMR) standard affects estimation of the metabolic equivalent of task (MET) using an accelerometer. In order to investigate the effect on estimation according to the intensity of activity, comparisons were conducted between 3.5 ml O2 · kg(-1) · min(-1) and individually measured resting VO2 as the standard for 1 MET. MET was estimated by linear regression equations derived through five-fold cross-validation using the two types of MET values and accelerations; the accuracy of estimation was analysed through cross-validation, Bland-Altman plots, and a one-way ANOVA test. There were no significant differences in RMS error after cross-validation. However, in modified Bland-Altman plots, the individual RMR-based estimations differed by as much as 0.5 METs from those based on an RMR of 3.5 ml O2 · kg(-1) · min(-1). Finally, the results of the ANOVA test indicated that the individual RMR-based estimations had fewer significant differences between the reference and estimated values at each intensity of activity. In conclusion, the RMR standard is a factor that affects accurate estimation of METs by acceleration; therefore, the RMR requires individual specification when it is used for estimation of METs using an accelerometer.
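The dependence of a MET value on the chosen 1-MET standard is simple arithmetic: the same activity VO2 yields different MET values under the conventional 3.5 ml O2·kg(-1)·min(-1) standard and an individually measured RMR. The numbers below are illustrative only:

```python
def mets(vo2_ml_kg_min, rmr_ml_kg_min=3.5):
    """MET value of an activity: activity VO2 divided by the
    resting VO2 taken as the 1-MET standard."""
    return vo2_ml_kg_min / rmr_ml_kg_min

activity_vo2 = 14.0                                     # ml O2/kg/min, illustrative
met_standard = mets(activity_vo2)                       # conventional 3.5 standard
met_individual = mets(activity_vo2, rmr_ml_kg_min=2.8)  # hypothetical measured RMR
```

For a person whose true RMR is below 3.5, the conventional standard understates the MET value of every activity, which is the kind of systematic offset the Bland-Altman comparison in the abstract detects.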
ACCURATE ESTIMATIONS OF STELLAR AND INTERSTELLAR TRANSITION LINES OF TRIPLY IONIZED GERMANIUM
Dutta, Narendra Nath; Majumder, Sonjoy
2011-08-10
In this paper, we report weighted oscillator strengths of E1 transitions and transition probabilities of E2 transitions among different low-lying states of triply ionized germanium, using the highly correlated relativistic coupled-cluster (RCC) method. Due to the abundance of Ge IV in the solar system, planetary nebulae, white dwarf stars, etc., the study of such transitions is important from an astrophysical point of view. The weighted oscillator strengths of E1 transitions are presented in both length and velocity gauge forms to check the accuracy of the calculations. We find excellent agreement between calculated and experimental excitation energies. Oscillator strengths of a few transitions, wherever studied in the literature via other theoretical and experimental approaches, are compared with our RCC calculations.
NASA Astrophysics Data System (ADS)
Ran, Qiwen; Yang, Zhonghua; Ma, Jing; Tan, Liying; Liao, Huixi; Liu, Qingfeng
2013-02-01
In this paper, a weighted adaptive threshold estimating method is proposed to deal with long and deep channel fades in satellite-to-ground optical communications. Within the channel correlation interval, where adjacent signal samples are sufficiently correlated, the correlations in the signal change rates are described by weighted equations in the form of a Toeplitz matrix. As vital inputs to the proposed adaptive threshold estimator, the optimal values of the change rates can be obtained by solving the weighted equation systems. The effect of channel fades and aberrant samples can be mitigated by the joint use of weighted equation systems and Kalman estimation. Based on channel information data from star observation trails, simulations were made, and the numerical results show that the proposed method has better anti-fade performance than the D-value adaptive threshold estimating method in both weak and strong turbulence conditions.
Goudar, Chetan T
2011-10-01
We have identified an error in the published integral form of the modified Michaelis-Menten equation that accounts for endogenous substrate production. The correct solution is presented, and the error in both the substrate concentration, S, and the kinetic parameters Vm, Km, and R resulting from the incorrect solution was characterized. The incorrect integral form resulted in substrate concentration errors as high as 50%, leading to 7-50% error in kinetic parameter estimates. To better reflect experimental scenarios, noise-containing substrate depletion data were analyzed with both the incorrect and correct integral equations. While both equations resulted in identical fits to substrate depletion data, the final estimates of Vm, Km, and R were different, and the Km and R estimates from the incorrect integral equation deviated substantially from the actual values. Another observation was that at R = 0, the incorrect integral equation reduced to the correct form of the Michaelis-Menten equation. We believe this combination of excellent fits to experimental data, albeit with incorrect kinetic parameter estimates, and the reduction to the Michaelis-Menten equation at R = 0 is primarily responsible for the error going unnoticed. However, the resulting error in kinetic parameter estimates will lead to incorrect biological interpretation, and we urge the use of the correct integral form presented in this study.
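A minimal sketch of the modified model in differential form, dS/dt = R - Vm·S/(Km + S), integrated numerically; the parameter values are illustrative and the forward-Euler scheme below is not the paper's analytical (integral) solution:

```python
# Sketch: numerical integration of the modified Michaelis-Menten model
# with endogenous substrate production R (illustrative parameters):
#   dS/dt = R - Vm * S / (Km + S)

def integrate_substrate(s0, vm, km, r, t_end, dt=0.001):
    """Forward-Euler integration of substrate concentration S(t)."""
    s, t = s0, 0.0
    while t < t_end:
        s += dt * (r - vm * s / (km + s))
        t += dt
    return s

# With R = 0 the model reduces to ordinary Michaelis-Menten depletion,
# so the substrate falls further than when endogenous production is present.
s_no_production = integrate_substrate(s0=10.0, vm=2.0, km=1.0, r=0.0, t_end=5.0)
s_with_production = integrate_substrate(s0=10.0, vm=2.0, km=1.0, r=0.5, t_end=5.0)
print(s_no_production, s_with_production)
```

This also illustrates why fits can look identical while parameters differ: many (Vm, Km, R) combinations produce visually similar depletion curves over a finite window.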
On estimating mean lifetimes by a weighted sum of lifetime measurements
NASA Astrophysics Data System (ADS)
Prosper, Harrison Bertrand
1987-10-01
Given N lifetime measurements, an estimate of the mean lifetime can be obtained from a weighted sum of these measurements. Our starting point is the probability distribution function of Yost for the distribution of lifetime measurements with finite measurement error. We derive exact expressions for the probability density function, the moment-generating function, and the cumulative distribution function of the weighted sum, and indicate how these results might be used in the estimation of particle lifetimes.
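A minimal numerical illustration of the weighted-sum estimator; with equal weights it reduces to the sample mean, which is unbiased for exponentially distributed lifetimes. The simulation setup is hypothetical and does not include the finite-measurement-error model treated in the paper:

```python
import random

# Sketch: estimating a mean lifetime by a weighted sum of N lifetime
# measurements, tau_hat = sum(w_i * t_i) / sum(w_i).

def weighted_lifetime(measurements, weights):
    return sum(w * t for w, t in zip(weights, measurements)) / sum(weights)

random.seed(42)
true_tau = 2.0
# Exponentially distributed lifetimes with mean true_tau
lifetimes = [random.expovariate(1.0 / true_tau) for _ in range(100_000)]
tau_hat = weighted_lifetime(lifetimes, [1.0] * len(lifetimes))
print(tau_hat)
```

The paper's contribution is the exact sampling distribution of such a weighted sum, which is what a confidence statement on `tau_hat` would require.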
A method to estimate weight and dimensions of large and small gas turbine engines
NASA Technical Reports Server (NTRS)
Onat, E.; Klees, G. W.
1979-01-01
A computerized method was developed to estimate the weight and envelope dimensions of large and small gas turbine engines to within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data-base engines. Rotating components were estimated by a preliminary design procedure that is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the selected method and the various methods of analysis are discussed.
Empirical expressions for estimating length and weight of axial-flow components of VTOL powerplants
NASA Technical Reports Server (NTRS)
Sagerser, D. A.; Lieblein, S.; Krebs, R. P.
1971-01-01
Simplified equations are presented for estimating the length and weight of major powerplant components of VTOL aircraft. The equations were developed from correlations of lift and cruise engine data. Components involved include fan, fan duct, compressor, combustor, turbine, structure, and accessories. Comparisons of actual and calculated total engine weights are included for several representative engines.
Precision of sugarcane biomass estimates in pot studies using fresh and dry weights
Technology Transfer Automated Retrieval System (TEKTRAN)
Sugarcane (Saccharum spp.) field studies generally report fresh weight (FW) rather than dry weight (DW) due to logistical difficulties in drying large amounts of biomass. Pot studies often measure biomass of young plants with DW under the assumption that DW provides a more precise estimate of treatm...
Alpha's standard error (ASE): an accurate and precise confidence interval estimate.
Duhachek, Adam; Iacobucci, Dawn
2004-10-01
This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.
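For context, the point estimate of coefficient alpha itself can be computed as below; this sketch covers only the standard alpha formula, not the paper's standard-error derivation, and the data are a tiny contrived example:

```python
import statistics

# Sketch: Cronbach's coefficient alpha from item scores,
#   alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))

def cronbach_alpha(items):
    """items: list of per-item score lists, all of the same length n."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    sum_item_var = sum(statistics.pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - sum_item_var / statistics.pvariance(totals))

# Three perfectly correlated items -> alpha = 1
scores = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
alpha = cronbach_alpha(scores)
print(alpha)
```

Reporting alpha together with a standard error or confidence interval, as the authors recommend, requires the ASE formula developed in the paper on top of this point estimate.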
Precision Pointing Control and Accurate Target Estimation of a Non-Cooperative Vehicle
NASA Technical Reports Server (NTRS)
VanEepoel, John; Thienel, Julie; Sanner, Robert M.
2006-01-01
In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.
Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2006-01-01
Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
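The hash-table binning idea can be sketched as follows, assuming a simple dictionary keyed by bin coordinates (the names `build_bash_table` and `density` are illustrative, not from the paper; the original implementation is in C++):

```python
from collections import defaultdict

# Sketch of binning with a hash table keyed by bin coordinates, in the
# spirit of the BASH-table idea: only occupied bins consume memory, so
# storage follows the data rather than the full d-dimensional grid.

def build_bash_table(points, bin_width):
    table = defaultdict(int)
    for p in points:
        key = tuple(int(x // bin_width) for x in p)  # d-dim bin index
        table[key] += 1
    return table

def density(table, point, bin_width, n):
    """Histogram density estimate: bin count / (n * bin volume)."""
    key = tuple(int(x // bin_width) for x in point)
    return table[key] / (n * bin_width ** len(point))

pts = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.12, 0.22)]
table = build_bash_table(pts, bin_width=0.5)
d00 = density(table, (0.1, 0.2), bin_width=0.5, n=len(pts))
print(len(table), d00)  # 2 occupied bins out of a notional 2x2 grid
```

Here only 2 of the possible bins are stored; in hundreds of dimensions the full array would be astronomically large while the occupied-bin count stays bounded by the number of data points.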
Spectral estimation from laser scanner data for accurate color rendering of objects
NASA Astrophysics Data System (ADS)
Baribeau, Rejean
2002-06-01
Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are judged by the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when the optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1985-01-01
Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least-squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.
Accurate and unbiased estimation of power-law exponents from single-emitter blinking data.
Hoogenboom, Jacob P; den Otter, Wouter K; Offerhaus, Herman L
2006-11-28
Single-emitter blinking with a power-law distribution for the on and off times has been observed in a variety of systems including semiconductor nanocrystals, conjugated polymers, fluorescent proteins, and organic fluorophores. The origin of this behavior is still under debate. Reliable estimation of power exponents from experimental data is crucial in validating the various models under consideration. We derive a maximum likelihood estimator for power-law distributed data and analyze its accuracy as a function of data set size and power exponent, both analytically and numerically. Results are compared to least-squares fitting of the double-logarithmically transformed probability density. We demonstrate that least-squares fitting introduces a severe bias in the estimation result and that the maximum likelihood procedure is superior in retrieving the correct exponent and reducing the statistical error. For a data set as small as 50 data points, the error margins of the maximum likelihood estimator are already below 7%, making it possible to quantify blinking behavior when data set size is limited, e.g., due to photobleaching.
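A sketch of the standard continuous-power-law maximum-likelihood estimator, alpha_hat = 1 + n / Σ ln(x_i / x_min), applied to synthetic inverse-transform samples; this is the textbook estimator for continuous power-law data, not necessarily the exact derivation given in the paper:

```python
import math
import random

# Sketch: MLE for the exponent of p(x) ~ x^(-alpha), x >= x_min:
#   alpha_hat = 1 + n / sum(ln(x_i / x_min))

def powerlaw_mle(samples, x_min):
    n = len(samples)
    return 1.0 + n / sum(math.log(x / x_min) for x in samples)

random.seed(1)
alpha_true, x_min = 2.5, 1.0
# Inverse-transform sampling: x = x_min * (1 - u) ** (-1 / (alpha - 1))
data = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(20_000)]
alpha_hat = powerlaw_mle(data, x_min)
print(alpha_hat)
```

Unlike a least-squares fit to the log-log histogram, this estimator uses every sample directly and avoids the binning and transformation biases the paper documents.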
How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?
Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.
2010-01-01
We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774
Accurate estimation of influenza epidemics using Google search data via ARGO
Yang, Shihao; Santillana, Mauricio; Kou, S. C.
2015-01-01
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search–based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people’s online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980
Do hand-held calorimeters provide reliable and accurate estimates of resting metabolic rate?
Van Loan, Marta D
2007-12-01
This paper provides an overview of a new technique for indirect calorimetry and the assessment of resting metabolic rate. Information from the research literature includes findings on the reliability and validity of a new hand-held indirect calorimeter as well as its use in clinical and field settings. Research findings to date are mixed. The MedGem instrument has provided more consistent results when compared with the Douglas bag method of measuring metabolic rate. The BodyGem instrument has been shown to be less accurate when compared with standard metabolic carts. Furthermore, when the BodyGem has been used with clinical patients or with undernourished individuals, the results have not been acceptable. Overall, there is not a large enough body of evidence to definitively support the use of these hand-held devices for assessment of metabolic rate in a wide variety of clinical or research environments.
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle
NASA Astrophysics Data System (ADS)
Timinis, Constantinos; Pitris, Costas
2016-03-01
The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained, and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.
Multiple candidates and multiple constraints based accurate depth estimation for multi-view stereo
NASA Astrophysics Data System (ADS)
Zhang, Chao; Zhou, Fugen; Xue, Bindang
2017-02-01
In this paper, we propose a depth estimation method for multi-view image sequences. To enhance the accuracy of dense matching and reduce the mismatches produced by inaccurate feature description, we select multiple matching points to build candidate matching sets. We then compute an optimal depth from a candidate matching set that satisfies multiple constraints (epipolar, similarity, and depth-consistency constraints). To further increase the accuracy of depth estimation, the depth-consistency constraint of neighboring pixels is used to filter out inaccurate matches. On this basis, to obtain a more complete depth map, depth diffusion is performed using the neighboring pixels' depth-consistency constraint. Through experiments on the benchmark datasets for multiple-view stereo, we demonstrate the superiority of the proposed method over state-of-the-art methods in terms of accuracy.
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1990-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
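One way to realize a beta-free bound is to scan a grid of shape parameters and take the worst case. The sketch below uses the textbook zero-failure (success-run) Weibull bound R_L(t) = (1 - C)^((t/T)^β / n); that formula is an assumption for illustration and not necessarily the report's exact expression:

```python
# Sketch: a reliability lower bound that does not require estimating the
# Weibull shape parameter beta. For n units tested to time T with zero
# failures at confidence C, the success-run bound at mission time t is
#   R_L(t) = (1 - C) ** ((t / T) ** beta / n)
# Scanning beta over a plausible range and taking the minimum gives a
# bound valid without assuming a particular beta.

def reliability_lower_bound(n, test_time, t, confidence, beta):
    return (1.0 - confidence) ** ((t / test_time) ** beta / n)

n_units, test_time, mission_time, conf = 20, 1000.0, 500.0, 0.90
betas = [0.5 + 0.1 * i for i in range(31)]  # beta in [0.5, 3.5]
bounds = [reliability_lower_bound(n_units, test_time, mission_time, conf, b)
          for b in betas]
global_lower_bound = min(bounds)
print(global_lower_bound)
```

For a mission time shorter than the test time, the bound worsens as beta decreases, so the grid minimum here occurs at the smallest beta considered; the paper's contribution is proving that such a global minimum exists and is unique under general conditions.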
Accurate dynamic power estimation for CMOS combinational logic circuits with real gate delay model.
Fadl, Omnia S; Abu-Elyazeed, Mohamed F; Abdelhalim, Mohamed B; Amer, Hassanein H; Madian, Ahmed H
2016-01-01
Dynamic power estimation is essential in designing VLSI circuits; many parameters are involved, but the only circuit parameter related to circuit operation is the nodes' toggle rate. This paper discusses a deterministic and fast method to estimate the dynamic power consumption of CMOS combinational logic circuits using gate-level descriptions, based on the Logic Pictures concept, to obtain the circuit nodes' toggle rates. The delay model for the logic gates is the real-delay model. To validate the results, the method is applied to several circuits and compared against exhaustive, as well as Monte Carlo, simulations. The proposed technique was shown to save up to 96% of processing time compared to exhaustive simulation.
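For context, once per-node toggle rates are known, dynamic power follows from the standard CMOS switching-power relation P = Σᵢ αᵢ Cᵢ Vdd² f; the node values below are illustrative, and the paper's Logic Pictures method is only the means of obtaining the toggle rates αᵢ:

```python
# Sketch: dynamic power from node toggle rates via the standard CMOS
# relation P = sum_i(alpha_i * C_i) * Vdd^2 * f. Values are illustrative.

def dynamic_power(toggle_rates, node_caps, vdd, freq):
    """toggle_rates and node_caps are per-node lists (same order)."""
    return sum(a * c for a, c in zip(toggle_rates, node_caps)) * vdd ** 2 * freq

alphas = [0.25, 0.10, 0.40]      # switching activity per node
caps = [10e-15, 15e-15, 8e-15]   # node capacitances in farads
p = dynamic_power(alphas, caps, vdd=1.2, freq=100e6)
print(p)  # watts
```

The accuracy of any such estimate is therefore dominated by how accurately the per-node toggle rates are computed, which is the problem the paper addresses.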
Rogers, Kevin J; Finn, Anthony
2017-02-01
Acoustic atmospheric tomography calculates temperature and wind velocity fields in a slice or volume of atmosphere based on travel time estimates between strategically located sources and receivers. The technique discussed in this paper uses the natural acoustic signature of an unmanned aerial vehicle as it overflies an array of microphones on the ground. The sound emitted by the aircraft is recorded on-board and by the ground microphones. The group velocities of the intersecting sound rays are then derived by comparing these measurements. Tomographic inversion is used to estimate the temperature and wind fields from the group velocity measurements. This paper describes a technique for deriving travel time (and hence group velocity) with an accuracy of 0.1% using these assets. This is shown to be sufficient to obtain highly plausible tomographic inversion results that correlate well with independent SODAR measurements.
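Travel-time estimation between the on-board and ground recordings can be sketched as a cross-correlation lag search; the signals here are synthetic pulses, and real processing would also need resampling, Doppler compensation, and sub-sample interpolation to reach 0.1% accuracy:

```python
# Sketch: estimating acoustic travel time by cross-correlating the
# on-board recording with a ground-microphone recording. The lag that
# maximizes the correlation is the travel-time estimate in samples.

def best_lag(reference, delayed, max_lag):
    def corr(lag):
        return sum(reference[i] * delayed[i + lag]
                   for i in range(len(reference) - max_lag))
    return max(range(max_lag + 1), key=corr)

src = [1.0 if 50 <= i < 60 else 0.0 for i in range(200)]  # a simple pulse
shifted = [0.0] * 25 + src[:-25]   # ground copy delayed by 25 samples
lag = best_lag(src, shifted, max_lag=60)
print(lag)  # 25 samples
```

Dividing the propagation distance by the recovered travel time gives the group velocity of the intersecting ray, the quantity fed to the tomographic inversion.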
Techniques for accurate estimation of net discharge in a tidal channel
Simpson, Michael R.; Bland, Roger
1999-01-01
An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1991-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
Bowen, L; Zyambo, M; Snell, D; Kinnear, J; Bould, M D
2017-04-01
Limited resources and access to healthcare in sub-Saharan Africa are associated with high rates of malnourished children, although many countries globally are demonstrating increasing childhood obesity. This study evaluated how well current age- or height-based formulae estimate the weight of children undergoing surgery in Zambia. All children under 14 years of age presenting for elective surgery at the University Teaching Hospital, Lusaka, had both height and weight measured. Their actual weight was compared against estimated weight from various formulae. The Broselow tape outperformed all the age-based formulae, demonstrating the lowest median percentage error of -5.8%, with 46.0% of estimates falling within 10% of the actual measured weight (p < 0.001). Of the 1111 children who were eligible for World Health Organization growth standard appraisal, 88 (8%) met the weight criteria for severe acute malnutrition. Our results are consistent with other studies in finding that the Broselow tape is the best estimator of weight in a lower middle-income country, followed by the original Advanced Paediatric Life Support formula if the Broselow tape is unavailable.
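A sketch of the age-based estimate and the percentage-error metric, assuming the original APLS formula weight (kg) = 2 × (age + 4); the example child is hypothetical:

```python
# Sketch: original APLS age-based weight estimate and the percentage
# error used to compare estimated against measured weight.

def apls_weight(age_years):
    return 2.0 * (age_years + 4.0)

def percentage_error(estimated, actual):
    return 100.0 * (estimated - actual) / actual

est = apls_weight(5)                        # 18 kg for a 5-year-old
err = percentage_error(est, actual=20.0)    # child actually weighs 20 kg
print(est, err)  # 18.0, -10.0 (estimate is 10% low)
```

The study's headline figures (median percentage error, fraction of estimates within 10% of measured weight) are aggregates of exactly this per-child calculation across the cohort.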
Development of a conceptual flight vehicle design weight estimation method library and documentation
NASA Astrophysics Data System (ADS)
Walker, Andrew S.
The state of the art in estimating the volumetric size and mass of flight vehicles is held today by an elite group of engineers in the aerospace conceptual design industry. This is not a skill readily accessible or taught in academia. To estimate flight vehicle mass properties, many aerospace engineering students are encouraged to read the latest design textbooks, learn how to use a few basic statistical equations, and plunge into the details of parametric mass properties analysis. Specifications for, and a prototype of, a standardized engineering "tool-box" of conceptual and preliminary design weight estimation methods were developed to manage the growing and ever-changing body of weight estimation knowledge and to bridge the gap in mass properties education for aerospace engineering students. The Weight Method Library will also serve as a living document for future aerospace students. This "tool-box" consists of a weight estimation method bibliography containing unclassified, open-source literature for conceptual and preliminary flight vehicle design phases. Transport aircraft validation cases have been applied to each entry in the AVD Weight Method Library in order to provide a sense of context and applicability for each method. The weight methodology validation results indicate consensus and agreement among the individual methods. This generic specification of a method library will be applicable for use by other disciplines within the AVD Lab, post-graduate design labs, or engineering design professionals.
Univariate and Default Standard Unit Biases in Estimation of Body Weight and Caloric Content
ERIC Educational Resources Information Center
Geier, Andrew B.; Rozin, Paul
2009-01-01
College students estimated the weight of adult women from either photographs or a live presentation by a set of models and estimated the calories in 1 of 2 actual meals. The 2 meals had the same items, but 1 had larger portion sizes than the other. The results suggest: (a) Judgments are biased toward transforming the example in question to the…
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
ERIC Educational Resources Information Center
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Development of weight and cost estimates for lifting surfaces with active controls
NASA Technical Reports Server (NTRS)
Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.
1976-01-01
Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.
Cruz, Paulina; Johnson, Bruce D.; Karpinski, Susan C.; Limoges, Katherine A.; Warren, Beth A.; Olsen, Kerry D.; Somers, Virend K.; Jensen, Michael D.; Clark, Matthew M.; Lopez-Jimenez, Francisco
2014-01-01
The accuracy of weight loss in estimating successful changes in body composition (BC), namely fat mass loss, is not known and was addressed in our study. To assess the correlation between change in body weight and change in fat mass (FM), fat % and fat-free mass (FFM), 465 participants (41% male; 41 ± 13 years), who met the criteria for weight change at a wellness center, underwent air-displacement plethysmography. Body weight and BC were measured at the same time. We categorized the change in body weight, FM and FFM as an increase if there was >1 kg gain, a decrease if there was >1 kg loss, and no change if the difference was ≤1 kg. We estimated the diagnostic performance of weight change to identify improvement in BC. After a median time of 132 days, of the 255 people who lost >1 kg of weight, 216 (84.7%) had lost >1 kg of FM, but 69 (27.1%) had lost >1 kg of FFM. Of the 143 people with no weight change, 42 (29.4%) had actually lost >1 kg of FM. Of the 67 who gained >1 kg of weight at follow-up, in 23 (34.3%) this was due to an increase in FFM but not in FM. Weight change had an NPV of 73%. Mean weight change was 2.4 kg. Our results indicate that favorable improvements in BC may go undetected in almost 1/3 of people whose weight remains the same and in 1/3 of people who gain weight after attending a wellness center. These results underscore the potential role of BC measurements in people attempting lifestyle changes. PMID:21566566
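The ±1 kg categorization and the negative-predictive-value figure reported above can be reproduced with a small sketch; the data in the usage test are made up, and `npv` treats "no >1 kg weight loss" as the negative test result and "no >1 kg FM loss" as the true negative:

```python
def category(delta_kg, thr=1.0):
    """>1 kg gain -> 'increase', >1 kg loss -> 'decrease', else 'no change'."""
    if delta_kg > thr:
        return "increase"
    if delta_kg < -thr:
        return "decrease"
    return "no change"

def npv(pairs):
    """NPV of 'no weight loss' for ruling out fat-mass loss: among subjects
    without >1 kg weight loss, the fraction without >1 kg FM loss.
    pairs: list of (weight change, FM change) in kg."""
    neg = [(dw, df) for dw, df in pairs if category(dw) != "decrease"]
    tn = sum(1 for _, df in neg if category(df) != "decrease")
    return tn / len(neg)
```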
Essink-Bot, Marie-Louise; Pereira, Joaquin; Packer, Claire; Schwarzinger, Michael; Burstrom, Kristina
2002-01-01
OBJECTIVE: To investigate the sources of cross-national variation in disability-adjusted life-years (DALYs) in the European Disability Weights Project. METHODS: Disability weights for 15 disease stages were derived empirically in five countries by means of a standardized procedure, and the cross-national differences in visual analogue scale (VAS) scores were analysed. For each country the burden of dementia in women, used as an illustrative example, was estimated in DALYs. An analysis was performed of the relative effects of cross-national variations in demography, epidemiology and disability weights on DALY estimates. FINDINGS: Cross-national comparison of VAS scores showed almost identical ranking orders. After standardization for population size and age structure of the populations, the DALY rates per 100 000 women ranged from 1050 in France to 1404 in the Netherlands. Because of uncertainties in the epidemiological data, the extent to which these differences reflected true variation between countries was difficult to estimate. The use of European rather than country-specific disability weights did not lead to a significant change in the burden of disease estimates for dementia. CONCLUSIONS: Sound epidemiological data are the first requirement for burden of disease estimation and relevant between-country comparisons. DALY estimates for dementia were relatively insensitive to differences in disability weights between European countries. PMID:12219156
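As background for how disability weights enter the DALY figures discussed above, a deliberately simplified computation (undiscounted, no age weighting) looks like this; all numbers are hypothetical:

```python
def dalys(cases, disability_weight, duration_years, deaths=0, life_exp_lost=0.0):
    """Simplified DALY = YLD + YLL.
    YLD = cases * disability weight * average duration of disability;
    YLL = deaths * standard life expectancy lost at age of death.
    Omits the discounting and age weighting used in full GBD-style analyses."""
    yld = cases * disability_weight * duration_years
    yll = deaths * life_exp_lost
    return yld + yll
```

This makes the paper's sensitivity question concrete: DALY totals change linearly with the disability weight, so small cross-national differences in empirically derived weights translate into proportionally small differences in the YLD component.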
NASA Technical Reports Server (NTRS)
Macconochie, Ian O.
1988-01-01
The objective of this study was to make estimates of the weight savings that might be realized on all the subsystems on an advanced rocket-powered shuttle (designated Shuttle 2) by using advanced technologies having a projected maturity date of 1992. The current Shuttle with external tank was used as a baseline from which to make the estimates of weight savings on each subsystem. The subsystems with the greatest potential for weight reduction are the body shell and the thermal protection system. For the body shell, a reduction of 5.2 percent in the weight of the vehicle at main engine cutoff is projected through the application of new technologies, and an additional configuration-based reduction of 5 percent is projected through simplification of body shape. A reduction of 5 percent is projected for the thermal protection system through experience with the current Space Shuttle and the potential for reducing thermal protection system thicknesses in selected areas. Main propellant tanks are expected to increase slightly in weight. The main propulsion system is also projected to increase in weight because of the requirement to operate engines at derated power levels in order to accommodate one-engine-out capability. The projections for weight reductions through improvements in the remaining subsystems are relatively small. By summing all the technology factors, a projected reduction of 16 percent in the vehicle weight at main engine cutoff is obtained. By summarizing the configurational factors, a potential reduction of 12 percent in vehicle weight is obtained.
Xiaopeng, Q I; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang
Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v 9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of models was assessed by comparing adjusted R(2), mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias as compared to cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R(2) range=0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both methods from ArcGIS and SAS are reliable for U.S. county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.
Xiaopeng, QI; Liang, WEI; BARKER, Laurie; LEKIACHVILI, Akaki; Xingyou, ZHANG
2015-01-01
Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v 9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias as compared to cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R2 range=0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both methods from ArcGIS and SAS are reliable for U.S. county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects. PMID:26167169
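The county-level quantities compared above reduce to two simple computations: a population-weighted mean of predicted temperatures and an error metric such as RMSE. A minimal language-neutral sketch (not ArcGIS or SAS code):

```python
import math

def pop_weighted_mean(temps, pops):
    """Population-weighted average temperature across sub-county units
    (e.g. predictions at population centroids weighted by their populations)."""
    total = sum(pops)
    return sum(t * p for t, p in zip(temps, pops)) / total

def rmse(pred, obs):
    """Root mean squared error between predicted and observed station values,
    one of the performance metrics used to compare the two packages."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))
```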
A Simple and Accurate Equation for Peak Capacity Estimation in Two Dimensional Liquid Chromatography
Li, Xiaoping; Stoll, Dwight R.; Carr, Peter W.
2009-01-01
Two dimensional liquid chromatography (2DLC) is a very powerful way to greatly increase the resolving power and overall peak capacity of liquid chromatography. The traditional “product rule” for peak capacity usually overestimates the true resolving power due to neglect of the often quite severe under-sampling effect and thus provides poor guidance for optimizing the separation and biases comparisons to optimized one dimensional gradient liquid chromatography. Here we derive a simple yet accurate equation for the effective two dimensional peak capacity that incorporates a correction for under-sampling of the first dimension. The results show that not only is the speed of the second dimension separation important for reducing the overall analysis time, but it plays a vital role in determining the overall peak capacity when the first dimension is under-sampled. A surprising subsidiary finding is that for relatively short 2DLC separations (much less than a couple of hours), the first dimension peak capacity is far less important than is commonly believed and need not be highly optimized, for example through use of long columns or very small particles. PMID:19053226
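A hedged sketch of the kind of under-sampling-corrected effective peak capacity the abstract describes, using the beta correction reported in related work from the same group; treat the 3.35 coefficient and the 4-sigma width convention as assumptions here rather than the paper's exact equation:

```python
import math

def effective_peak_capacity(n1, n2, ts, w1):
    """Effective 2D peak capacity corrected for first-dimension under-sampling.
    n1, n2: first- and second-dimension peak capacities.
    ts: second-dimension sampling (cycle) time.
    w1: first-dimension peak width (4-sigma convention assumed).
    beta = sqrt(1 + 3.35*(ts/w1)^2) is the average broadening factor."""
    beta = math.sqrt(1.0 + 3.35 * (ts / w1) ** 2)
    return n1 * n2 / beta
```

The sketch shows the paper's qualitative point: as the sampling time grows relative to the first-dimension peak width, beta grows and the effective capacity falls well below the naive product n1 * n2.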
Mandal, A; Neser, F W C; Roy, R; Rout, P K; Notter, D R
2009-02-01
Variance components and genetic parameters for greasy fleece weights of Muzaffarnagari sheep maintained at the Central Institute for Research on Goats, Makhdoom, Mathura, India, over a period of 29 years (1976 to 2004) were estimated by restricted maximum likelihood (REML), fitting six animal models including various combinations of maternal effects. Data on body weights at 6 (W6) and 12 months (W12) of age were also included in the study. Records of 2807 lambs descended from 160 rams and 1202 ewes were used for the study. Direct heritability estimates for fleece weight at 6 (FW6) and 12 months of age (FW12), and total fleece weights up to 1 year of age (TFW) were 0.14, 0.16 and 0.25, respectively. Maternal genetic and permanent environmental effects did not significantly influence any of the traits under study. Genetic correlations among fleece weights and body weights were obtained from multivariate analyses. Direct genetic correlations of FW6 with W6 and W12 were relatively large, ranging from 0.61 to 0.67, but only moderate genetic correlations existed between FW12 and W6 (0.39) and between FW12 and W12 (0.49). The genetic correlation between FW6 and FW12 was very high (0.95), but the corresponding phenotypic correlation was much lower (0.28). Heritability estimates for all traits were at least 0.15, indicating that there is potential for their improvement by selection. The moderate to high positive genetic correlations between fleece weights and body weights at 6 and 12 months of age suggest that some of the genetic factors that influence animal growth also influence wool growth. Thus selection to improve the body weights or fleece weights at 6 months of age will also result in genetic improvement of fleece weights at subsequent stages of growth.
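The parameters reported above follow standard quantitative-genetics definitions; as a reminder of how heritability and genetic correlations are computed from estimated (co)variance components:

```python
def heritability(var_additive, var_phenotypic):
    """Narrow-sense heritability: h^2 = sigma^2_A / sigma^2_P."""
    return var_additive / var_phenotypic

def genetic_correlation(cov_a12, var_a1, var_a2):
    """r_g = cov_A(1,2) / sqrt(sigma^2_A1 * sigma^2_A2), from a
    multivariate (multi-trait) analysis of the additive genetic effects."""
    return cov_a12 / (var_a1 * var_a2) ** 0.5
```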
Accurate Estimation of Expression Levels of Homologous Genes in RNA-seq Experiments
NASA Astrophysics Data System (ADS)
Paşaniuc, Bogdan; Zaitlen, Noah; Halperin, Eran
Next generation high throughput sequencing (NGS) is poised to replace array-based technologies as the experiment of choice for measuring RNA expression levels. Several groups have demonstrated the power of this new approach (RNA-seq), making significant and novel contributions and simultaneously proposing methodologies for the analysis of RNA-seq data. In a typical experiment, millions of short sequences (reads) are sampled from RNA extracts and mapped back to a reference genome. The number of reads mapping to each gene is used as a proxy for its corresponding RNA concentration. A significant challenge in analyzing RNA expression of homologous genes is the large fraction of the reads that map to multiple locations in the reference genome. Currently, these reads are either dropped from the analysis, or a naïve algorithm is used to estimate their underlying distribution. In this work, we present a rigorous alternative for handling the reads generated in an RNA-seq experiment within a probabilistic model for RNA-seq data; we develop maximum likelihood based methods for estimating the model parameters. In contrast to previous methods, our model takes into account the fact that the DNA of the sequenced individual is not a perfect copy of the reference sequence. We show with both simulated and real RNA-seq data that our new method improves the accuracy and power of RNA-seq experiments.
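The paper's probabilistic model is richer (it accounts for mismatches against the reference), but the core idea of allocating multi-mapped reads probabilistically rather than dropping them can be sketched with a generic EM iteration; gene IDs and data below are hypothetical:

```python
def em_multiread(read_maps, n_iter=50):
    """EM allocation of multi-mapped reads to genes.
    read_maps: one list of candidate gene IDs per read.
    Returns estimated relative abundances theta (summing to 1)."""
    genes = {g for m in read_maps for g in m}
    theta = {g: 1.0 / len(genes) for g in genes}
    for _ in range(n_iter):
        counts = {g: 0.0 for g in genes}
        for m in read_maps:
            # E-step: split each read among its candidate genes in
            # proportion to the current abundance estimates.
            z = sum(theta[g] for g in m)
            for g in m:
                counts[g] += theta[g] / z
        # M-step: renormalize fractional counts into abundances.
        total = sum(counts.values())
        theta = {g: c / total for g, c in counts.items()}
    return theta
```

Uniquely mapping reads anchor the abundances, and the ambiguous reads are then shared out consistently with them instead of being discarded.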
Accurate estimation of expression levels of homologous genes in RNA-seq experiments.
Paşaniuc, Bogdan; Zaitlen, Noah; Halperin, Eran
2011-03-01
Next generation high-throughput sequencing (NGS) is poised to replace array-based technologies as the experiment of choice for measuring RNA expression levels. Several groups have demonstrated the power of this new approach (RNA-seq), making significant and novel contributions and simultaneously proposing methodologies for the analysis of RNA-seq data. In a typical experiment, millions of short sequences (reads) are sampled from RNA extracts and mapped back to a reference genome. The number of reads mapping to each gene is used as a proxy for its corresponding RNA concentration. A significant challenge in analyzing RNA expression of homologous genes is the large fraction of the reads that map to multiple locations in the reference genome. Currently, these reads are either dropped from the analysis, or a naive algorithm is used to estimate their underlying distribution. In this work, we present a rigorous alternative for handling the reads generated in an RNA-seq experiment within a probabilistic model for RNA-seq data; we develop maximum likelihood-based methods for estimating the model parameters. In contrast to previous methods, our model takes into account the fact that the DNA of the sequenced individual is not a perfect copy of the reference sequence. We show with both simulated and real RNA-seq data that our new method improves the accuracy and power of RNA-seq experiments.
NASA Astrophysics Data System (ADS)
Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67 ± 34 μm and 108 μm, and angular misfits of 0.15 ± 0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
Monaco, James P; Madabhushi, Anant
2012-12-01
Many estimation tasks require Bayesian classifiers capable of adjusting their performance (e.g. sensitivity/specificity). In situations where the optimal classification decision can be identified by an exhaustive search over all possible classes, means for adjusting classifier performance, such as probability thresholding or weighting the a posteriori probabilities, are well established. Unfortunately, analogous methods compatible with Markov random fields (i.e. large collections of dependent random variables) are noticeably absent from the literature. Consequently, most Markov random field (MRF) based classification systems typically restrict their performance to a single, static operating point (i.e. a paired sensitivity/specificity). To address this deficiency, we previously introduced an extension of maximum posterior marginals (MPM) estimation that allows certain classes to be weighted more heavily than others, thus providing a means for varying classifier performance. However, this extension is not appropriate for the more popular maximum a posteriori (MAP) estimation. Thus, a strategy for varying the performance of MAP estimators is still needed. Such a strategy is essential for several reasons: (1) the MAP cost function may be more appropriate in certain classification tasks than the MPM cost function, (2) the literature provides a surfeit of MAP estimation implementations, several of which are considerably faster than the typical Markov Chain Monte Carlo methods used for MPM, and (3) MAP estimation is used far more often than MPM. Consequently, in this paper we introduce multiplicative weighted MAP (MWMAP) estimation, achieved via the incorporation of multiplicative weights into the MAP cost function, which allows certain classes to be preferred over others. This creates a natural bias for specific classes, and consequently a means for adjusting classifier performance. Similarly, we show how this multiplicative weighting strategy can be applied to the MPM
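For a single observation with an exhaustively searchable class set, the multiplicative-weighting idea reduces to scaling each posterior by a class weight before the argmax; a minimal sketch that omits the paper's MRF machinery:

```python
def weighted_map(posteriors, class_weights):
    """MWMAP-style decision for one observation: multiply each class
    posterior by a multiplicative class weight, then take the argmax.
    Raising a class's weight biases decisions toward that class, moving
    the classifier along its sensitivity/specificity trade-off."""
    scored = {c: p * class_weights.get(c, 1.0) for c, p in posteriors.items()}
    return max(scored, key=scored.get)
```

Sweeping one class's weight over a range of values traces out an operating curve, rather than the single static operating point the abstract criticizes.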
NASA Astrophysics Data System (ADS)
Saslow, Wayne M.
2014-04-01
Three common approaches to F = ma are: (1) as an exactly true definition of the force vector F in terms of measured inertial mass m and measured acceleration vector a; (2) as an exactly true axiom relating measured values of a, F and m; and (3) as an imperfect but accurately true physical law relating measured a to measured F, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a and F, where a is normally specified using distance and time as standard units, and F from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate), namely that balance-scale weight W is proportional to m, and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force (the newton) a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time (the second) a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.
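The two laws in play, F = ma with m as a matter-dependent constant and W proportional to m, can be captured in a trivial numeric sketch of the derived-unit bookkeeping (illustrative only):

```python
def force_newtons(mass_kg, accel_m_s2):
    """With mass, distance and time as the standards, the newton is the
    derived unit: 1 N = 1 kg*m/s^2."""
    return mass_kg * accel_m_s2

def mass_from_force(force_n, accel_m_s2):
    """Inverting F = ma treats m as the experimentally determined constant,
    as in approach (3) with force as a standard unit."""
    return force_n / accel_m_s2

def mass_ratio_from_balance(weight_1, weight_2):
    """The second law, W proportional to m, means a balance scale yields
    relative masses from relative weights without knowing g or any
    absolute force."""
    return weight_1 / weight_2
```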
Zurlo de Mirotti, S M; Lesa, A M; Barrón de Carbonetti, M; Roitter, H; Villagra de Lacuara, S
1995-01-01
Our aim was to confirm in our environment what has been observed and described by other authors about the importance of achieving a "critical body weight" and an adequate "fat percentage" (calculated from total body water) for the initiation and development of pubertal events. This study included 92 healthy, well-nourished girls of upper-middle-class background from a high school of the National University of Cordoba. The longitudinal method of control was used every 6 months and at the precise moment of menarche. Out of 20 anthropometric variables observed, height, weight, TBW as a percentage of body weight, lean body and fat weight, fat percentage and skinfolds were expressed as percentiles for each girl at menarche. A regression between fat percentage and skinfolds was performed. Percentiles 5 to 95 of fat percentage in relation to body water percentage were estimated. At menarche the averages for the different variables were: height 155.6 cm ± 0.469; weight 45.8 kg ± 0.5; TBW 25.216 L ± 0.318; lean body weight 35.02 kg (S.D. 2.98); fat weight 10.86 kg (S.D. 3.17). The sum of skinfolds was correlated with fat percentage, and an equation was obtained for estimating that percentage: %F = 12.16 + (0.313 × skinfold sum). The minimum fat percentage for the onset of menstrual cycles is 17.3%, which corresponds to percentile 10. However, 5% of the girls start to menstruate with 15.5% fat, and none of them is below that value. These findings suggest that a minimum "critical body weight" as well as a minimum "fat percentage" is necessary for the onset and maintenance of menstrual cycles among our girls, similar to what has been reported by Dr. Frisch.
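The regression reported in the abstract is directly computable; a small sketch (the inversion helper is an addition of ours, for finding the skinfold sum corresponding to a target fat percentage):

```python
def fat_percentage(skinfold_sum_mm):
    """Study's regression: %F = 12.16 + 0.313 * (sum of skinfolds)."""
    return 12.16 + 0.313 * skinfold_sum_mm

def skinfold_sum_for(target_fat_pct):
    """Invert the regression: skinfold sum yielding a target fat percentage
    (e.g. the 17.3% threshold reported for onset of menstrual cycles)."""
    return (target_fat_pct - 12.16) / 0.313
```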
[Research on maize multispectral image accurate segmentation and chlorophyll index estimation].
Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e
2015-01-01
In order to rapidly acquire maize growing information in the field, a non-destructive method of maize chlorophyll content index measurement was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China, and the crop was Zheng-dan 958 planted in an experimental field of about 1000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically to the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to give the chlorophyll content index. Secondly, after image smoothing using an adaptive smoothing filter, the NIR maize image was selected to segment the maize leaves from the background, because the gray histogram showed a large difference between plant and soil background. The NIR image segmentation algorithm proceeded in preliminary and accurate segmentation steps: (1) The results of the OTSU image segmentation method and the variable-threshold algorithm were compared, and the latter proved better for corn plant and weed segmentation. As a result, the variable-threshold algorithm based on local statistics was selected for the preliminary image segmentation, and expansion and corrosion (dilation and erosion) were used to optimize the segmented image. (2) The region-labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. The multi-spectral image of the maize canopy was then accurately segmented in the R, G and B bands separately. Thirdly, image parameters were abstracted based on the segmented visible and NIR images. The average gray
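A generic "variable threshold based on local statistics" can be sketched as mean-plus-k-standard-deviations within a sliding window; the window size and k below are assumptions, not the paper's parameters:

```python
def local_threshold(img, win=3, k=0.5):
    """Binarize a grayscale image (list of rows) with a per-pixel threshold
    computed from local statistics: thr = local mean + k * local std.
    A simple stand-in for the paper's variable-threshold NIR segmentation."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    r = win // 2
    for i in range(h):
        for j in range(w):
            vals = [img[x][y]
                    for x in range(max(0, i - r), min(h, i + r + 1))
                    for y in range(max(0, j - r), min(w, j + r + 1))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            out[i][j] = 1 if img[i][j] > mean + k * var ** 0.5 else 0
    return out
```

Unlike a single global (e.g. OTSU) threshold, the per-window threshold adapts to local brightness, which is why it coped better with the mixed plant/soil/weed background.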
Golmakani, Nahid; Khaleghinezhad, Khosheh; Dadgar, Selmeh; Hashempor, Majid; Baharian, Nosrat
2015-01-01
Introduction: In developing countries, hemorrhage accounts for 30% of maternal deaths. Postpartum hemorrhage has been defined as blood loss of around 500 ml or more after completion of the third stage of labor. Most cases of postpartum hemorrhage occur during the first hour after birth, and the most common reason for bleeding in the early hours after childbirth is uterine atony. Bleeding during delivery is usually assessed by the midwife's visual estimate, which has a high error rate; however, studies have shown that the use of a standard can improve the estimation. The aim of this research was to compare the estimation of postpartum hemorrhage using the weighting method and the National Guideline for postpartum hemorrhage estimation. Materials and Methods: This descriptive study was conducted on 112 females in the Omolbanin Maternity Department of Mashhad over a six-month period, from November 2012 to May 2013. Accessible (convenience) sampling was used. The data collection tools were case selection, observation and interview forms. For postpartum hemorrhage estimation, after the third stage of labor was complete, the quantity of bleeding was estimated in the first and second hours after delivery by the midwife in charge, using the National Guideline for vaginal delivery provided by the Maternal Health Office. After the visual estimation using the National Guideline, the sheets under the parturient in the first and second hours after delivery were exchanged and weighed. The data were analyzed using descriptive statistics and the t-test. Results: According to the results, a significant difference was found between the estimated blood loss based on the weighting method and that using the National Guideline (weighting method 62.68 ± 16.858 cc vs. National Guideline 45.31 ± 13.484 cc in the first hour after delivery, P = 0.000; weighting method 41.26 ± 10.518 vs. National Guideline 30.24 ± 8.439 in the second hour after delivery, P = 0.000). Conclusions
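The weighting (gravimetric) method converts the weight gained by the sheets into a blood volume; the 1.06 g/mL blood density below is a commonly assumed value, not one stated in the study:

```python
def gravimetric_blood_loss_ml(soiled_g, dry_g, density_g_per_ml=1.06):
    """Estimated blood loss by weighing: (soiled - dry sheet weight) divided
    by blood density. Density of ~1.06 g/mL is an assumed typical value."""
    if soiled_g < dry_g:
        raise ValueError("soiled weight must be >= dry weight")
    return (soiled_g - dry_g) / density_g_per_ml
```

Because weighing captures absorbed blood that a visual check misses, estimates like the study's 62.68 cc (gravimetric) vs. 45.31 cc (visual) in the first hour are the expected direction of disagreement.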
The challenges of accurately estimating time of long bone injury in children.
Pickett, Tracy A
2015-07-01
The ability to determine the time an injury occurred can be of crucial significance in forensic medicine and holds special relevance to the investigation of child abuse. However, dating paediatric long bone injury, including fractures, is nuanced by complexities specific to the paediatric population. These challenges include the ability to identify bone injury in a growing or only partially-calcified skeleton, different injury patterns seen within the spectrum of the paediatric population, the effects of bone growth on healing as a separate entity from injury, differential healing rates seen at different ages, and the relative scarcity of information regarding healing rates in children, especially the very young. The challenges posed by these factors are compounded by a lack of consistency in defining and categorizing healing parameters. This paper sets out the primary limitations of existing knowledge regarding estimating timing of paediatric bone injury. Consideration and understanding of the multitude of factors affecting bone injury and healing in children will assist those providing opinion in the medical-legal forum.
NASA Astrophysics Data System (ADS)
Guerdoux, Simon; Fourment, Lionel
2007-05-01
An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the kind of coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low computational complexity, due to the maximum-likelihood procedure that is implemented to obtain the best fit instead of a least-squares method and Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
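The paper's method is an iterative maximum-likelihood fit of a subpixel Gaussian to the pixel potentials; a far simpler intensity-weighted centroid conveys the subpixel idea (an illustrative stand-in, not the paper's estimator):

```python
def subpixel_centroid(pixels):
    """Intensity-weighted subpixel centroid of an object image.
    pixels: dict mapping (x, y) pixel coordinates to background-subtracted
    intensity. Returns fractional (x, y) coordinates; a real pipeline
    would instead ML-fit a 2D Gaussian to these pixel values."""
    total = sum(pixels.values())
    x = sum(px * v for (px, _), v in pixels.items()) / total
    y = sum(py * v for (_, py), v in pixels.items()) / total
    return x, y
```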
Neural network-based visual body weight estimation for drug dosage finding
NASA Astrophysics Data System (ADS)
Pfitzner, Christian; May, Stefan; Nüchter, Andreas
2016-03-01
Body-weight-adapted drug dosages are important for emergency treatments: inaccuracies in body weight estimation may lead to inaccurate drug dosing. This paper describes an improved approach to estimating the body weight of emergency patients in a trauma room, based on images from an RGB-D and a thermal camera. The improvements concern several aspects: fusion of the RGB-D and thermal cameras eases filtering and segmentation of the patient's body from the background, and robustness and accuracy are gained by an artificial neural network, which takes geometric features from the sensors as input, e.g. the patient's volume and shape parameters. Preliminary experiments with 69 patients show an accuracy close to 90 percent, with less than 10 percent relative error, and the results are compared with the physician's estimate, the patient's statement and an established anthropometric method.
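The regression step can be sketched as a tiny feed-forward network over the extracted geometric features; all weights below are hypothetical placeholders for the paper's trained network:

```python
import math

def mlp_weight_estimate(features, W1, b1, W2, b2):
    """Minimal feed-forward net (one hidden tanh layer) mapping geometric
    features (e.g. segmented body volume, shape parameters) to an estimated
    body weight in kg. W1/b1/W2/b2 are placeholder parameters; the paper
    trains them on patient data."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + bi)
              for row, bi in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2
```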
Saarela, Olli; Liu, Zhihui Amy
2016-10-15
Marginal structural Cox models are used for quantifying marginal treatment effects on outcome event hazard function. Such models are estimated using inverse probability of treatment and censoring (IPTC) weighting, which properly accounts for the impact of time-dependent confounders, avoiding conditioning on factors on the causal pathway. To estimate the IPTC weights, the treatment assignment mechanism is conventionally modeled in discrete time. While this is natural in situations where treatment information is recorded at scheduled follow-up visits, in other contexts, the events specifying the treatment history can be modeled in continuous time using the tools of event history analysis. This is particularly the case for treatment procedures, such as surgeries. In this paper, we propose a novel approach for flexible parametric estimation of continuous-time IPTC weights and illustrate it in assessing the relationship between metastasectomy and mortality in metastatic renal cell carcinoma patients. Copyright © 2016 John Wiley & Sons, Ltd.
Tan, Zhiqiang; Xia, Junchao; Zhang, Bin W.; Levy, Ronald M.
2016-01-01
The weighted histogram analysis method (WHAM) including its binless extension has been developed independently in several different contexts, and widely used in chemistry, physics, and statistics, for computing free energies and expectations from multiple ensembles. However, this method, while statistically efficient, is computationally costly or even infeasible when a large number, hundreds or more, of distributions are studied. We develop a locally weighted histogram analysis method (local WHAM) from the perspective of simulations of simulations (SOS), using generalized serial tempering (GST) to resample simulated data from multiple ensembles. The local WHAM equations based on one jump attempt per GST cycle can be solved by optimization algorithms orders of magnitude faster than standard implementations of global WHAM, but yield similarly accurate estimates of free energies to global WHAM estimates. Moreover, we propose an adaptive SOS procedure for solving local WHAM equations stochastically when multiple jump attempts are performed per GST cycle. Such a stochastic procedure can lead to more accurate estimates of equilibrium distributions than local WHAM with one jump attempt per cycle. The proposed methods are broadly applicable when the original data to be “WHAMMED” are obtained properly by any sampling algorithm including serial tempering and parallel tempering (replica exchange). To illustrate the methods, we estimated absolute binding free energies and binding energy distributions using the binding energy distribution analysis method from one and two dimensional replica exchange molecular dynamics simulations for the beta-cyclodextrin-heptanoate host-guest system. In addition to the computational advantage of handling large datasets, our two dimensional WHAM analysis also demonstrates that accurate results similar to those from well-converged data can be obtained from simulations for which sampling is limited and not fully equilibrated. PMID:26801020
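As a rough illustration of the global WHAM self-consistency that local WHAM accelerates, the fixed-point iteration over reduced free energies can be sketched as follows. This is a minimal numpy sketch under simplified assumptions (reduced bias energies precomputed at every pooled sample; function and variable names are invented for illustration), not the authors' local WHAM or SOS procedure:

```python
import numpy as np

def wham_free_energies(bias, n_samples, n_iter=500):
    """Global WHAM fixed-point iteration, binless form.

    bias[k, n]   : reduced bias energy of ensemble k at pooled sample n
    n_samples[k] : number of samples contributed by ensemble k
    Returns reduced free energies f_k, with f_0 pinned to 0.
    """
    K, N = bias.shape
    logN = np.log(np.asarray(n_samples, dtype=float))
    f = np.zeros(K)
    for _ in range(n_iter):
        # mixture-distribution denominator for each pooled sample
        log_denom = np.logaddexp.reduce(logN[:, None] + f[:, None] - bias,
                                        axis=0)
        # self-consistency: exp(-f_k) = sum_n exp(-u_k(x_n)) / denom_n
        f_new = -np.logaddexp.reduce(-bias - log_denom[None, :], axis=1)
        f = f_new - f_new[0]
    return f

# Two ensembles whose bias energies differ by a constant shift c:
# the converged free-energy difference should recover c.
c = 1.0
bias = np.vstack([np.zeros(10), np.full(10, c)])
f = wham_free_energies(bias, [5, 5])
print(f)  # → approximately [0.0, 1.0]
```

Solving these equations becomes expensive as the number of ensembles grows, which is the regime the local WHAM and stochastic SOS methods target.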
Pinkerton, Steven D; Galletly, Carol L; McAuliffe, Timothy L; DiFranceisco, Wayne; Raymond, H Fisher; Chesson, Harrell W
2010-02-01
The sexual behaviors of HIV/sexually transmitted infection (STI) prevention intervention participants can be assessed on a partner-by-partner basis, in aggregate (i.e., total numbers of sex acts, collapsed across partners), or using a combination of these two methods (e.g., assessing five partners in detail and any remaining partners in aggregate). There is a natural trade-off between the level of sexual behavior detail and the precision of HIV/STI acquisition risk estimates. The results of this study indicate that relatively simple aggregate data collection techniques suffice to adequately estimate HIV risk. For highly infectious STIs, in contrast, accurate STI risk assessment requires more intensive partner-by-partner methods.
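The trade-off between partner-by-partner and aggregate assessment can be made concrete with a standard Bernoulli-process risk model. The sketch below is illustrative only: the function names and all parameter values are invented, not taken from the study:

```python
def partner_by_partner_risk(partners, alpha):
    """Acquisition risk computed partner by partner under a Bernoulli-
    process model: each partner is infected with probability `prev`,
    and each act with an infected partner transmits with probability
    `alpha`. `partners` is a list of (prev, n_acts) tuples."""
    p_escape = 1.0
    for prev, n_acts in partners:
        p_escape *= 1.0 - prev * (1.0 - (1.0 - alpha) ** n_acts)
    return 1.0 - p_escape

def aggregate_risk(total_acts, mean_prev, alpha):
    """Same model with all acts collapsed onto one 'average' partner."""
    return mean_prev * (1.0 - (1.0 - alpha) ** total_acts)

# Three partners vs. the aggregate summary of the same behavior:
partners = [(0.10, 20), (0.10, 5), (0.10, 5)]
detailed = partner_by_partner_risk(partners, alpha=0.001)
collapsed = aggregate_risk(30, 0.10, alpha=0.001)
print(detailed, collapsed)  # close, but not identical
```

For low per-act infectivity the two estimates nearly coincide, which mirrors the finding that aggregate data suffice for HIV; for highly infectious STIs the per-partner structure matters more.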
Reid, H L; Onwuameze, I C
1984-03-01
Clot-weight and radial immunodiffusion methods for estimating fibrinogen concentration were compared using plasma from 58 pregnant women and diabetic patients. The two methods gave a correlation coefficient r = 0.53 (p less than 0.005). There was no significant variation between the mean fibrinogen concentrations as determined by the two methods. The coefficients of variation for the clot-weight and immunodiffusion methods were 1.54% and 2.9%, respectively. It is concluded that the clot-weight method is more readily applicable than the radial immunodiffusion method to fibrinogen measurements, especially when rapid results are required.
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
NASA Technical Reports Server (NTRS)
Martinovic, Zoran N.; Cerro, Jeffrey A.
2002-01-01
This is an interim user's manual for current procedures used in the Vehicle Analysis Branch at NASA Langley Research Center, Hampton, Virginia, for launch vehicle structural subsystem weight estimation based on finite element modeling and structural analysis. The process is intended to complement traditional methods of conceptual and early preliminary structural design such as the application of empirical weight estimation or application of classical engineering design equations and criteria on one dimensional "line" models. Functions of two commercially available software codes are coupled together. Vehicle modeling and analysis are done using SDRC/I-DEAS, and structural sizing is performed with the Collier Research Corp. HyperSizer program.
Grosse, Scott D; Chaugule, Shraddha S; Hay, Joel W
2015-01-01
Estimates of preference-weighted health outcomes or health state utilities are needed to assess improvements in health in terms of quality-adjusted life-years. Gains in quality-adjusted life-years are used to assess the cost-effectiveness of prophylactic use of clotting factor compared with on-demand treatment among people with hemophilia, a congenital bleeding disorder. Published estimates of health utilities for people with hemophilia vary, contributing to uncertainty in the estimates of cost-effectiveness of prophylaxis. Challenges in estimating utility weights for the purpose of evaluating hemophilia treatment include selection bias in observational data, difficulty in adjusting for predictors of health-related quality of life and lack of preference-based data comparing adults with lifetime or primary prophylaxis versus no prophylaxis living within the same country and healthcare system. PMID:25585817
Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure
2016-01-01
Survival is a fundamental demographic component and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of survival function parameters while remaining robust to the data aspects that usually render regression methods numerically unstable. PMID:27936048
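A minimal baseline for model II (oblique least squares) fitting is the standardized major axis, which allows for error in both variables. This is not the authors' iterative two-step non-linear method, just the textbook model II starting point it improves upon (the function name is illustrative):

```python
import numpy as np

def sma_fit(x, y):
    """Standardized major axis (model II) line fit: the slope is the
    ratio of standard deviations, signed by the correlation, so the
    fit treats x and y symmetrically, unlike ordinary vertical least
    squares, which regresses y on x only."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

slope, intercept = sma_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # → 2.0 1.0 (exact for noiseless data)
```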
Estimation of breed-specific heterosis effects for birth, weaning, and yearling weight in cattle.
Schiermiester, L N; Thallman, R M; Kuehn, L A; Kachman, S D; Spangler, M L
2015-01-01
Heterosis, assumed proportional to expected breed heterozygosity, was calculated for 6834 individuals with birth, weaning, and yearling weight records from Cycle VII and advanced generations of the U.S. Meat Animal Research Center (USMARC) Germplasm Evaluation (GPE) project. Breeds represented in these data included Angus, Hereford, Red Angus, Charolais, Gelbvieh, Simmental, Limousin and Composite MARC III. Heterosis was further estimated by proportions of British × British (B × B), British × Continental (B × C) and Continental × Continental (C × C) crosses and by breed-specific combinations. Model 1 fitted fixed covariates for heterosis within biological types, while Model 2 fitted random breed-specific combinations nested within the fixed biological type covariates. Direct heritability estimates (SE) for birth, weaning, and yearling weight for Model 1 were 0.42 (0.04), 0.22 (0.03), and 0.39 (0.05), respectively. The direct heritability estimates (SE) of birth, weaning, and yearling weight for Model 2 were the same as Model 1, except yearling weight heritability was 0.38 (0.05). The B × B, B × C, and C × C heterosis estimates for birth weight were 0.47 (0.37), 0.75 (0.32), and 0.73 (0.54) kg, respectively. The B × B, B × C, and C × C heterosis estimates for weaning weight were 6.43 (1.80), 8.65 (1.54), and 5.86 (2.57) kg, respectively. Yearling weight estimates for B × B, B × C, and C × C heterosis were 17.59 (3.06), 13.88 (2.63), and 9.12 (4.34) kg, respectively. Differences did exist among estimates of breed-specific heterosis for weaning and yearling weight, although the variance component associated with breed-specific heterosis was not significant. These results illustrate that there are differences in breed-specific heterosis and exploiting these differences can lead to varying levels of heterosis among mating plans.
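The "expected breed heterozygosity" that heterosis is assumed proportional to has a simple closed form: one minus the sum, over breeds, of the product of the sire's and dam's fractions of that breed. A small sketch (the function name is invented for illustration):

```python
def expected_heterozygosity(sire_fracs, dam_fracs):
    """Expected breed heterozygosity of a mating: the probability that
    a random sire allele and a random dam allele trace to different
    breeds, i.e. 1 - sum_b sire_frac[b] * dam_frac[b]."""
    breeds = set(sire_fracs) | set(dam_fracs)
    return 1.0 - sum(sire_fracs.get(b, 0.0) * dam_fracs.get(b, 0.0)
                     for b in breeds)

print(expected_heterozygosity({"Angus": 1.0}, {"Angus": 1.0}))     # → 0.0
print(expected_heterozygosity({"Angus": 1.0}, {"Hereford": 1.0}))  # → 1.0
print(expected_heterozygosity({"Angus": 1.0},
                              {"Angus": 0.5, "Hereford": 0.5}))    # → 0.5
```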
Walters, William A.; Lennon, Niall J.; Bochicchio, James; Krohn, Andrew; Pennanen, Taina
2016-01-01
While high-throughput sequencing methods are revolutionizing fungal ecology, recovering accurate estimates of species richness and abundance has proven elusive. We sought to design internal transcribed spacer (ITS) primers and an Illumina protocol that would maximize coverage of the kingdom Fungi while minimizing nontarget eukaryotes. We inspected alignments of the 5.8S and large subunit (LSU) ribosomal genes and evaluated potential primers using PrimerProspector. We tested the resulting primers using tiered-abundance mock communities and five previously characterized soil samples. We recovered operational taxonomic units (OTUs) belonging to all 8 members in both mock communities, despite DNA abundances spanning 3 orders of magnitude. The expected and observed read counts were strongly correlated (r = 0.94 to 0.97). However, several taxa were consistently over- or underrepresented, likely due to variation in rRNA gene copy numbers. The Illumina data resulted in clustering of soil samples identical to that obtained with Sanger sequence clone library data using different primers. Furthermore, the two methods produced distance matrices with a Mantel correlation of 0.92. Nonfungal sequences comprised less than 0.5% of the soil data set, with most attributable to vascular plants. Our results suggest that high-throughput methods can produce fairly accurate estimates of fungal abundances in complex communities. Further improvements might be achieved through corrections for rRNA copy number and utilization of standardized mock communities. IMPORTANCE: Fungi play numerous important roles in the environment. Improvements in sequencing methods are providing revolutionary insights into fungal biodiversity, yet accurate estimates of the number of fungal species (i.e., richness) and their relative abundances in an environmental sample (e.g., soil, roots, water, etc.) remain difficult to obtain. We present improved methods for high-throughput Illumina sequencing of the
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation
ter Horst, Arjan C.; Koppen, Mathieu; Selen, Luc P. J.; Medendorp, W. Pieter
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement. PMID:26658990
Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix
2015-12-01
In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.
A Computer Code for Gas Turbine Engine Weight And Disk Life Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Ghosn, Louis J.; Halliwell, Ian; Wickenheiser, Tim (Technical Monitor)
2002-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. In this paper, the major enhancements to NASA's engine-weight estimate computer code (WATE) are described. These enhancements include the incorporation of improved weight-calculation routines for the compressor and turbine disks using the finite-difference technique. Furthermore, the stress distribution for various disk geometries was also incorporated, for a life-prediction module to calculate disk life. A material database, consisting of the material data of most of the commonly-used aerospace materials, has also been incorporated into WATE. Collectively, these enhancements provide a more realistic and systematic way to calculate the engine weight. They also provide additional insight into the design trade-off between engine life and engine weight. To demonstrate the new capabilities, the enhanced WATE code is used to perform an engine weight/life trade-off assessment on a production aircraft engine.
Kushwaha, B P; Mandal, A; Arora, A L; Kumar, R; Kumar, S; Notter, D R
2009-08-01
Estimates of (co)variance components were obtained for weights at birth, weaning and 6, 9 and 12 months of age in Chokla sheep maintained at the Central Sheep and Wool Research Institute, Avikanagar, Rajasthan, India, over a period of 21 years (1980-2000). Records of 2030 lambs descended from 150 rams and 616 ewes were used in the study. Analyses were carried out by restricted maximum likelihood (REML) fitting an animal model and ignoring or including maternal genetic or permanent environmental effects. Six different animal models were fitted for all traits. The best model was chosen after testing the improvement of the log-likelihood values. Direct heritability estimates were inflated substantially for all traits when maternal effects were ignored. Heritability estimates for weight at birth, weaning and 6, 9 and 12 months of age were 0.20, 0.18, 0.16, 0.22 and 0.23, respectively in the best models. Additive maternal and maternal permanent environmental effects were both significant at birth, accounting for 9% and 12% of phenotypic variance, respectively, but the source of maternal effects (additive versus permanent environmental) at later ages could not be clearly identified. The estimated repeatabilities across years of ewe effects on lamb body weights were 0.26, 0.14, 0.12, 0.13, and 0.15 at birth, weaning, 6, 9 and 12 months of age, respectively. These results indicate that modest rates of genetic progress are possible for all weights.
Preliminary weight and cost estimates for transport aircraft composite structural design concepts
NASA Technical Reports Server (NTRS)
1973-01-01
Preliminary weight and cost estimates have been prepared for design concepts utilized for a transonic long range transport airframe with extensive applications of advanced composite materials. The design concepts, manufacturing approach, and anticipated details of manufacturing cost reflected in the composite airframe are substantially different from those found in conventional metal structure and offer further evidence of the advantages of advanced composite materials.
Valchev, Nikola; Zijdewind, Inge; Keysers, Christian; Gazzola, Valeria; Avenanti, Alessio; Maurits, Natasha M.
2016-01-01
Seeing others performing an action induces the observers’ motor cortex to “resonate” with the observed action. Transcranial magnetic stimulation (TMS) studies suggest that such motor resonance reflects the encoding of various motor features of the observed action, including the apparent motor effort. However, it is unclear whether such encoding requires direct observation or whether force requirements can be inferred when the moving body part is partially occluded. To address this issue, we presented participants with videos of a right hand lifting a box of three different weights and asked them to estimate its weight. During each trial we delivered one TMS pulse over the left primary motor cortex of the observer and recorded the motor evoked potentials (MEPs) from three muscles of the right hand (first dorsal interosseous, FDI, abductor digiti minimi, ADM, and brachioradialis, BR). Importantly, because the hand shown in the videos was hidden behind a screen, only the contractions in the actor’s BR muscle under the bare skin were observable during the entire videos, while the contractions in the actor’s FDI and ADM muscles were hidden during the grasp and actual lift. The amplitudes of the MEPs recorded from the BR (observable) and FDI (hidden) muscle increased with the weight of the box. These findings indicate that the modulation of motor excitability induced by action observation extends to the cortical representation of muscles whose contractions could not be observed. Thus, motor resonance appears to reflect force requirements of observed lifting actions even when the moving body part is occluded from view. PMID:25462196
Estimation of the global average temperature with optimally weighted point gauges
NASA Technical Reports Server (NTRS)
Hardin, James W.; Upson, Robert B.
1993-01-01
This paper considers the minimum mean squared error (MSE) incurred in estimating an idealized Earth's global average temperature with a finite network of point gauges located over the globe. We follow the spectral MSE formalism given by North et al. (1992) and derive the optimal weights for N gauges in the problem of estimating the Earth's global average temperature. Our results suggest that for commonly used configurations the variance of the estimate due to sampling error can be reduced by as much as 50%.
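The constrained minimum-MSE weighting can be sketched directly: minimize the estimator's mean squared error subject to the weights summing to one, via a Lagrange multiplier. This is a generic sketch of that optimization, not North et al.'s spectral formalism, and the toy covariance values are invented:

```python
import numpy as np

def optimal_gauge_weights(C, c):
    """Minimum-MSE weights for estimating a global average from N point
    gauges, subject to sum(w) == 1.

    C : (N, N) covariance matrix of the gauge readings
    c : (N,)   covariance of each gauge with the true global average
    """
    ones = np.ones(len(c))
    Cinv_c = np.linalg.solve(C, c)
    Cinv_1 = np.linalg.solve(C, ones)
    # Lagrange multiplier enforcing the sum-to-one constraint
    lam = (1.0 - ones @ Cinv_c) / (ones @ Cinv_1)
    return Cinv_c + lam * Cinv_1

# Toy example: gauges 1 and 2 are nearby (correlated), gauge 3 isolated.
C = np.array([[1.0, 0.6, 0.1],
              [0.6, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
c = np.array([0.5, 0.5, 0.5])
w = optimal_gauge_weights(C, c)
print(w, w.sum())  # weights sum to 1; the isolated gauge gets the most weight
```

Down-weighting redundant (correlated) gauges is the mechanism by which an optimally weighted network reduces sampling-error variance relative to uniform weighting.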
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
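One way to read the idea is that the weighted least squares gain can be applied to the actual post-fit residuals, so the resulting covariance reflects whatever errors are really present rather than only the assumed measurement noise. A hedged numpy sketch of that reading (not the paper's exact formulation; names and the toy setup are invented):

```python
import numpy as np

def wls_empirical_covariance(H, W, y):
    """Weighted least squares estimate plus an empirical state error
    covariance built from the observed post-fit residuals."""
    gain = np.linalg.solve(H.T @ W @ H, H.T @ W)  # (H'WH)^-1 H'W
    x_hat = gain @ y
    resid = y - H @ x_hat                         # post-fit residuals
    # Residual outer product mapped through the gain: the result
    # carries the effect of all error sources present in the data.
    P_emp = gain @ np.outer(resid, resid) @ gain.T
    return x_hat, P_emp

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 2))      # 20 scalar observations, 2 states
W = np.eye(20)                        # equal measurement weights
x_true = np.array([1.0, -2.0])
y = H @ x_true + 0.1 * rng.standard_normal(20)
x_hat, P_emp = wls_empirical_covariance(H, W, y)
print(x_hat)         # close to [1.0, -2.0]
print(P_emp.shape)   # (2, 2), symmetric by construction
```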
NASA Technical Reports Server (NTRS)
Wahba, Grace; Deepak, A. (Editor)
1988-01-01
The problem of merging direct and remotely sensed (indirect) data with forecast data to get an estimate of the present state of the atmosphere for the purpose of numerical weather prediction is examined. To carry out this merging optimally, it is necessary to provide an estimate of the relative weights to be given to the observations and forecast. It is possible to do this dynamically from the information to be merged, if the correlation structure of the errors from the various sources is sufficiently different. Some new statistical approaches to doing this are described, and conditions quantified in which such estimates are likely to be good.
NASA Technical Reports Server (NTRS)
Hale, P. L.
1982-01-01
The weight and major envelope dimensions of small aircraft propulsion gas turbine engines are estimated. The computerized method, called WATE-S (Weight Analysis of Turbine Engines-Small), is a derivative of the WATE-2 computer code. WATE-S determines the weight of each major component in the engine, including compressors, burners, turbines, heat exchangers, nozzles, propellers, and accessories. A preliminary design approach is used where the stress levels, maximum pressures and temperatures, material properties, geometry, stage loading, hub/tip radius ratio, and mechanical overspeed are used to determine the component weights and dimensions. The accuracy of the method is generally better than ±10 percent, as verified by analysis of four small aircraft propulsion gas turbine engines.
The use of sampling weights in Bayesian hierarchical models for small area estimation
Chen, Cici; Wakefield, Jon; Lumley, Thomas
2015-01-01
Hierarchical modeling has been used extensively for small area estimation. However, design weights that are required to reflect complex surveys are rarely considered in these models. We develop computationally efficient, Bayesian spatial smoothing models that acknowledge the design weights. Computation is carried out using the integrated nested Laplace approximation, which is fast. A simulation study is presented that considers the effects of non-response and non-random selection of individuals. We examine the impact of ignoring the design weights and the benefits of spatial smoothing. The results show that, when compared with standard approaches, mean squared error can be greatly reduced with the proposed models. Bias reduction occurs through the inclusion of the design weights, with variance reduction being achieved through hierarchical smoothing. We analyze data from the Washington State 2006 Behavioral Risk Factor Surveillance System. The models are easily and quickly fitted within the R environment, using existing packages. PMID:25457595
Lake, Douglas E; Moorman, J Randall
2011-01-01
Entropy estimation is useful but difficult in short time series. For example, automated detection of atrial fibrillation (AF) in very short heart beat interval time series would be useful in patients with cardiac implantable electronic devices that record only from the ventricle. Such devices require efficient algorithms, and the clinical situation demands accuracy. Toward these ends, we optimized the sample entropy measure, which reports the probability that short templates will match with others within the series. We developed general methods for the rational selection of the template length m and the matching tolerance r. The major innovation was to allow r to vary so that sufficient matches are found for confident entropy estimation, with conversion of the final probability to a density by dividing by the matching region volume, (2r)^m. The optimized sample entropy estimate and the mean heart beat interval each contributed to accurate detection of AF in as few as 12 heartbeats. The final algorithm, called the coefficient of sample entropy (COSEn), was developed using the canonical MIT-BIH database and validated in a new and much larger set of consecutive Holter monitor recordings from the University of Virginia. In patients over the age of 40 yr, COSEn has high degrees of accuracy in distinguishing AF from normal sinus rhythm in 12-beat calculations performed hourly. The most common errors are atrial or ventricular ectopy, which increase entropy despite sinus rhythm, and atrial flutter, which can have low or high entropy states depending on dynamics of atrioventricular conduction.
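The template-matching probability at the core of the measure can be sketched as a plain sample entropy computation; COSEn itself additionally varies r and converts the match probability to a density, which is not reproduced in this illustrative function:

```python
import math

# Plain sample entropy, the template-matching core of COSEn. COSEn further
# allows r to vary and divides by the matching region volume; that extension
# is described in the abstract but omitted here.

def sample_entropy(x, m=2, r=0.2):
    """-ln(A/B): B counts template pairs of length m within tolerance r
    (Chebyshev distance, self-matches excluded), A counts pairs that still
    match when extended to length m + 1."""
    n = len(x)
    a = b = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                b += 1
                if abs(x[i + m] - x[j + m]) <= r:
                    a += 1
    if a == 0 or b == 0:
        return float('inf')  # too few matches for a confident estimate
    return -math.log(a / b)

perfectly_regular = sample_entropy([5.0] * 12)  # every template pair matches
```

A perfectly regular series gives entropy 0, while a series with no matching templates at the chosen r exhausts the match count, which is exactly the regime where COSEn grows r instead of giving up.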
Estimation of relative economic weights of hanwoo carcass traits based on carcass market price.
Choy, Yun Ho; Park, Byoung Ho; Choi, Tae Jung; Choi, Jae Gwan; Cho, Kwang Hyun; Lee, Seung Soo; Choi, You Lim; Koh, Kyung Chul; Kim, Hyo Sun
2012-12-01
The objective of this study was to estimate economic weights of Hanwoo carcass traits that can be used to build economic selection indexes for selection of seedstocks. Data from carcass measures for determining beef yield and quality grades were collected and provided by the Korean Institute for Animal Products Quality Evaluation (KAPE). Out of 1,556,971 records, 476,430 records collected from 13 abattoirs from 2008 to 2010 after deletion of outlying observations were used to estimate relative economic weights of bid price per kg carcass weight on cold carcass weight (CW), eye muscle area (EMA), backfat thickness (BF) and marbling score (MS) and the phenotypic relationships among component traits. Price of carcass tended to increase linearly as yield grades or quality grades, marginally or in combination, increased. Partial regression coefficients for MS, EMA, BF, and for CW in original scales were +948.5 won/score, +27.3 won/cm², -95.2 won/mm and +7.3 won/kg when all three sex categories were taken into account. Among the four grade-determining traits, the relative economic weight of MS was the greatest. Variations in partial regression coefficients by sex categories were great, but the trends in relative weights for each carcass measure were similar. Relative economic weights of the four traits in integer values, when standardized measures were fit into a covariance model, were +4:+1:-1:+1 for MS:EMA:BF:CW. Further research is required to account for the cost of production per unit carcass weight or per unit production under different economic situations.
Applying fuzzy logic to estimate the parameters of the length-weight relationship.
Bitar, S D; Campos, C P; Freitas, C E C
2016-05-03
We evaluated three mathematical procedures to estimate the parameters of the relationship between weight and length for Cichla monoculus: least squares ordinary regression on log-transformed data, non-linear estimation using raw data and a mix of multivariate analysis and fuzzy logic. Our goal was to find an alternative approach that considers the uncertainties inherent to this biological model. We found that non-linear estimation generated more consistent estimates than least squares regression. Our results also indicate that it is possible to find consistent estimates of the parameters directly from the centers of mass of each cluster. However, the most important result is the intervals obtained with the fuzzy inference system.
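The least-squares-on-log-transformed-data procedure, one of the three the abstract compares, can be sketched as follows. The data here are synthetic, generated from an assumed power law, not measurements of Cichla monoculus:

```python
import math

# Ordinary least squares on log-transformed data for the length-weight
# relationship W = a * L^b: log W = log a + b * log L. The sample data are
# an illustrative exact power law, not the study's field data.

def fit_length_weight(lengths, weights):
    xs = [math.log(L) for L in lengths]
    ys = [math.log(W) for W in weights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))     # OLS slope = exponent b
    a = math.exp(my - b * mx)                  # back-transformed intercept
    return a, b

lengths = [10.0, 20.0, 30.0, 40.0, 50.0]
weights = [0.01 * L ** 3 for L in lengths]     # generated from W = 0.01 * L^3
a, b = fit_length_weight(lengths, weights)
```

On noisy real data the back-transformation of the intercept introduces bias, which is one reason the abstract finds non-linear estimation on raw data more consistent.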
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
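A centralized stand-in for the distributed iteration can be sketched as a Richardson-type update with a scaling parameter controlling the convergence rate. In the paper each sub-system updates only its own parameters via neighborhood communication; here the update is done globally for clarity, and the matrix, measurements, mu, and unit noise weights are illustrative assumptions:

```python
# Iterative solution of a (weighted) least-squares problem via the update
# x <- x + mu * H^T (z - H x), which converges to the normal-equation
# solution H^T H x = H^T z for small enough mu. Unit measurement-noise
# weights are assumed; H, z and mu are illustrative.

def iterative_wls(H, z, mu=0.4, iters=200):
    p = len(H[0])
    x = [0.0] * p
    for _ in range(iters):
        residual = [zi - sum(h * xj for h, xj in zip(row, x))
                    for row, zi in zip(H, z)]
        for j in range(p):
            x[j] += mu * sum(H[i][j] * residual[i] for i in range(len(H)))
    return x

# Two sub-system parameters observed through three linear measurements
x_est = iterative_wls([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 2.0, 3.1])
```

The role of mu mirrors the paper's scaling parameter: too large and the iteration diverges, too small and convergence is slow, which is what the preconditioning step is designed to improve.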
NASA Astrophysics Data System (ADS)
Zhang, Zhisen; Wu, Tao; Wang, Qi; Pan, Haihua; Tang, Ruikang
2014-01-01
The interactions between proteins/peptides and materials are crucial to research and development in many biomedical engineering fields. The energetics of such interactions are key in the evaluation of new proteins/peptides and materials. Much research has recently focused on the quality of free energy profiles by Jarzynski's equality, a widely used equation in biosystems. In the present work, considerable discrepancies were observed between the results obtained by Jarzynski's equality and those derived by umbrella sampling in biomaterial-water model systems. Detailed analyses confirm that such discrepancies turn up only when the target molecule moves in the high-density water layer on a material surface. Then a hybrid scheme was adopted based on this observation. The agreement between the results of the hybrid scheme and umbrella sampling confirms the former observation, which indicates an approach to a fast and accurate estimation of adsorption free energy for large biomaterial interfacial systems.
2011-01-01
Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
NASA Astrophysics Data System (ADS)
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact on reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
Large-scale Advanced Propfan (LAP) performance, acoustic and weight estimation, January, 1984
NASA Technical Reports Server (NTRS)
Parzych, D.; Shenkman, A.; Cohen, S.
1985-01-01
In comparison to turbo-prop applications, the Prop-Fan is designed to operate in a significantly higher range of aircraft flight speeds. Two concerns arise regarding operation at very high speeds: aerodynamic performance and noise generation. This data package covers both topics over a broad range of operating conditions for the eight (8) bladed SR-7L Prop-Fan. Operating conditions covered are: Flight Mach Number 0-0.85; blade tip speed 600-800 ft/sec; and cruise power loading 20-40 SHP/D². Prop-Fan weight and weight scaling estimates are also included.
An Object-oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2008-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
Palmstrom, Christin R.
2015-01-01
There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858
NASA Astrophysics Data System (ADS)
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
Niknafs, Shahram; Nejati-Javaremi, Ardeshir; Mehrabani-Yeganeh, Hassan; Fatemi, Seyed Abolghasem
2012-10-01
Native chicken breeding station of Mazandaran was established in 1988 with two main objectives: genetic improvement through selection programs and dissemination of indigenous Mazandarani birds. (Co)variance components and genetic parameters for economically important traits were estimated using (bi)univariate animal models with the ASREML procedure in Mazandarani native chicken. The data were from 18 generations of selection (1988-2009). Heritability estimates for body weight at different ages [at hatch (bw1), 8 (bw8), 12 (bw12) weeks of age and sex maturation (wsm)] ranged from 0.24 ± 0.00 to 0.47 ± 0.01. Heritability for reproductive traits including age at sex maturation (asm); egg number (en); weight of first egg (ew1); average egg weight at 28 (ew28), 30 (ew30), and 32 (ew32) weeks of age; their averages (av); average egg weight for the first 12 weeks of production (ew12); egg mass (em); and egg intensity (eint) varied from 0.16 ± 0.01 to 0.43 ± 0.01. Generally, the magnitudes of heritability for the investigated traits were moderate. However, egg production traits showed smaller heritability compared with growth traits. Genetic correlations among egg weight at different ages were mostly higher than 0.8. On the one hand, body weight at different ages showed positive and relatively moderate genetic correlations with egg weight traits (ew1, ew28, ew30, ew32, ew12, and av), varying from 0.30 ± 0.03 to 0.59 ± 0.02. On the other hand, low negative genetic correlations were obtained between body weight traits (bw1, bw8, bw12, and wsm) and egg number (en). Also, there is a low negative genetic correlation (-0.24 ± 0.04 to -0.29 ± 0.05) between egg number and egg weight. Therefore, during simultaneous selection for both growth and egg production traits, a probable reduction in egg production due to a small reduction in egg number may be compensated by increases in egg weight.
Estimation of Disability Weights in the General Population of South Korea Using a Paired Comparison
Ock, Minsu; Ahn, Jeonghoon; Yoon, Seok-Jun; Jo, Min-Woo
2016-01-01
We estimated the disability weights in the South Korean population by using a paired comparison-only model wherein ‘full health’ and ‘being dead’ were included as anchor points, without resorting to a cardinal method, such as person trade-off. The study was conducted via 2 types of survey: a household survey involving computer-assisted face-to-face interviews and a web-based survey (similar to that of the GBD 2010 disability weight study). With regard to the valuation methods, paired comparison, visual analogue scale (VAS), and standard gamble (SG) were used in the household survey, whereas paired comparison and population health equivalence (PHE) were used in the web-based survey. Accordingly, we described a total of 258 health states, with ‘full health’ and ‘being dead’ designated as anchor points. In the analysis, 4 models were considered: a paired comparison-only model; hybrid model between paired comparison and PHE; VAS model; and SG model. A total of 2,728 and 3,188 individuals participated in the household and web-based survey, respectively. The Pearson correlation coefficients of the disability weights of health states between the GBD 2010 study and the current models were 0.802 for Model 2, 0.796 for Model 1, 0.681 for Model 3, and 0.574 for Model 4 (all P-values<0.001). The discrimination of values according to health state severity was most suitable in Model 1. Based on these results, the paired comparison-only model was selected as the best model for estimating disability weights in South Korea, and for maintaining simplicity in the analysis. Thus, disability weights can be more easily estimated by using paired comparison alone, with ‘full health’ and ‘being dead’ as one of the health states. As noted in our study, we believe that additional evidence regarding the universality of disability weight can be observed by using a simplified methodology of estimating disability weights. PMID:27606626
Reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization
Shi, Xin; Zhao, Xiangmo; Hui, Fei; Ma, Junyan; Yang, Lan
2014-10-06
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years and many protocols are put forward based on the point of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from the statistical data can be improved mainly by sufficient packets exchange, which will consume the limited power resources greatly. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamp. And the fusion weight is defined by the covariance of sync errors for different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
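The linear weighted fusion step, with fusion weights defined by the variances of the sync errors for the different clock-deviation estimates, can be sketched as inverse-variance weighting. Independent errors are assumed, and the offsets and variances are illustrative, not the paper's simulation data:

```python
# Linear weighted fusion of multiple clock-deviation estimates. Weights are
# proportional to the inverse error variances, so more reliable estimates
# contribute more; errors are assumed independent. Values are illustrative.

def fuse_clock_offsets(offsets, error_variances):
    inv = [1.0 / v for v in error_variances]
    total = sum(inv)
    weights = [w / total for w in inv]   # normalized inverse-variance weights
    fused = sum(w * o for w, o in zip(weights, offsets))
    fused_var = 1.0 / total              # variance of the fused estimate
    return fused, fused_var

fused, fused_var = fuse_clock_offsets([1.0, 3.0], [1.0, 3.0])
```

The fused variance is never larger than the smallest input variance, which is how the scheme improves accuracy without exchanging additional sync packets.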
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, weighting factors of the wMNE were determined by the cost values previously calculated by a simplified MUSIC scanning which contains the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of the current distributions from noisy data.
NASA Astrophysics Data System (ADS)
Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan
2015-10-01
Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.
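The interframe-averaging idea can be sketched as a weighted average of per-subcarrier channel estimates across adjacent frames. The equal-weight default and the toy channel gains below are illustrative assumptions, not the paper's weighting rule or measured channels:

```python
# Weighted interframe averaging (WIFA) sketch: channel estimation results of
# adjacent OFDM frames are averaged per subcarrier to suppress estimation
# noise. Equal weights reduce to plain interframe averaging.

def wifa(frame_estimates, weights=None):
    n_frames = len(frame_estimates)
    if weights is None:
        weights = [1.0 / n_frames] * n_frames   # plain averaging by default
    n_subcarriers = len(frame_estimates[0])
    return [sum(w * frame[k] for w, frame in zip(weights, frame_estimates))
            for k in range(n_subcarriers)]

# Two adjacent frames, two subcarriers (complex channel gains)
averaged = wifa([[1 + 1j, 2.0], [3 + 1j, 4.0]])
```

Averaging N independent noisy estimates of a slowly varying channel reduces the noise variance roughly by a factor of N, which is where the low-complexity accuracy gain comes from.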
An Object-Oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2009-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.
NASA Technical Reports Server (NTRS)
Jensen, J. K.; Wright, R. L.
1981-01-01
Estimates of total spacecraft weight and packaging options were made for three conceptual designs of a microwave radiometer spacecraft. Erectable structures were found to be slightly lighter than deployable structures but could be packaged in one-tenth the volume. The tension rim concept, an unconventional design approach, was found to be the lightest and transportable to orbit in the least number of shuttle flights.
A priori Estimates and Existence for Elliptic Systems via Bootstrap in Weighted Lebesgue Spaces
NASA Astrophysics Data System (ADS)
Quittner, P.; Souplet, Ph.
2004-10-01
We present a new general method to obtain regularity and a priori estimates for solutions of semilinear elliptic systems in bounded domains. This method is based on a bootstrap procedure, used alternatively on each component, in the scale of weighted Lebesgue spaces L^p_δ(Ω) = L^p(Ω; δ(x) dx), where δ(x) is the distance to the boundary. Using this method, we significantly improve the known existence results for various classes of elliptic systems.
Weighted L²-estimates for dissipative wave equations with variable coefficients
NASA Astrophysics Data System (ADS)
Todorova, Grozdena; Yordanov, Borislav
We establish weighted L²-estimates for the wave equation with variable damping u_tt − Δu + a(x)u_t = 0 in ℝ^n, where a(x) ⩾ a₀(1+|x|)^(−α) with a₀ > 0 and α ∈ [0,1). In particular, we show that the energy of solutions decays at a polynomial rate in t if a(x) ~ a₀|x|^(−α) for large |x|. We derive these results by strengthening significantly the multiplier method. This approach can be adapted to other hyperbolic equations with damping.
Estimates of genetic parameters of body weight in descendants of X-irradiated rat spermatogonia.
Gianola, D; Chapman, A B; Rutledge, J J
1977-08-01
Effects of nine generations of 450r per generation of ancestral spermatogonial X irradiation of inbred rats on genetic parameters of body weight at 3, 6, and 10 weeks of age and of weight gains between these periods were studied. Covariances among relatives were estimated by mixed model and regression techniques in randomly selected lines with (R) and without (C) radiation history. Analyses of the data were based on five linear genetic models combining additive direct, additive indirect (maternal), dominance and environmental effects. Parameters in these models were estimated by generalized least-squares. A model including direct and indirect genetic effects fit more closely to the data in both R and C lines. Overdominance of induced mutations did not seem to be present. Ancestral irradiation increased maternal additive genetic variances of body weights and gains but not direct genetic variances. Theoretically, due to a negative direct-maternal genetic correlation, within full-sib family selection would be ineffective in increasing body weight at six weeks in both R and C lines. However, progress from mass selection would be expected to be faster in the R lines.
NASA Astrophysics Data System (ADS)
Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.
2017-04-01
In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the 'exact' adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈100× more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.
Effects of a limited class of nonlinearities on estimates of relative weights
NASA Astrophysics Data System (ADS)
Richards, Virginia M.
2002-02-01
Perturbation analyses have been applied in recent years to determine the relative contribution of individual stimulus components in detection and discrimination tasks. Responses to stimulus samples are compared to stimulus parameters to determine the details of the decision rule. Often, a linear model is assumed and it is of interest to determine the relative contribution of different stimulus elements to the decision. Here, biases in estimated relative weights are considered for the case where the decision variable is given by D = (∑_i (α_i X_i^n)^k)^m and the stimulus components, the X_i, are normally distributed, of equal variance, and mutually independent. The α_i are the "true" combination weights, and n, k, and m are positive reals. The method used to estimate relative weights is the correlation coefficient between the X_i and the observer's responses. Estimates of relative α_i do not depend on m but may depend on the mean values of the X_i and the values of n and k (a dependence on the variance, σ_i^2, holds even for linear transformations).
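A Monte Carlo sketch of the weight-estimation procedure, here with n = k = 1 and an odd outer power m (the case where, as the abstract states, estimates of relative weights do not depend on m). The sample size, seed, and threshold-at-zero binary response rule are illustrative assumptions:

```python
import random

# Simulate the decision variable D = (sum_i alpha_i * X_i)^m (i.e. n = k = 1),
# threshold D into binary responses, and estimate each component's weight as
# the correlation between X_i and the responses, as in the abstract's method.

def estimated_weights(alpha, m=3, n_trials=4000, seed=1):
    rng = random.Random(seed)
    xs = [[rng.gauss(0.0, 1.0) for _ in alpha] for _ in range(n_trials)]
    d = [sum(a * x for a, x in zip(alpha, row)) ** m for row in xs]
    resp = [1.0 if di > 0 else 0.0 for di in d]

    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((p - mu) * (q - mv) for p, q in zip(u, v))
        su = sum((p - mu) ** 2 for p in u) ** 0.5
        sv = sum((q - mv) ** 2 for q in v) ** 0.5
        return cov / (su * sv)

    return [corr([row[i] for row in xs], resp) for i in range(len(alpha))]

w = estimated_weights([2.0, 1.0])   # true relative weights 2:1
```

Because an odd m preserves the sign of the linear sum, the estimated weights recover the ordering of the true α_i; repeating the experiment with even n or k is where the biases studied in the abstract appear.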
Broiler weight estimation based on machine vision and artificial neural network.
Amraei, S; Abdanan Mehdizadeh, S; Salari, S
2017-03-09
1. Machine vision and artificial neural network (ANN) procedures were used to estimate live body weight of broiler chickens in 30 1-d-old broiler chickens reared for 42 d. 2. Imaging was performed two times daily. To localise chickens within the pen, an ellipse fitting algorithm was used and the chickens' head and tail removed using the Chan-Vese method. 3. The correlations between the body weight and 6 physical extracted features indicated that there were strong correlations between body weight and the 5 features including area, perimeter, convex area, major and minor axis length. 5. According to statistical analysis there was no significant difference between morning and afternoon data over 42 d. 6. In an attempt to improve the accuracy of live weight approximation different ANN techniques, including Bayesian regulation, Levenberg-Marquardt, Scaled conjugate gradient and gradient descent were used. Bayesian regulation with R² value of 0.98 was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.
NASA Astrophysics Data System (ADS)
Sollberger, S.; Perez, K.; Schubert, C. J.; Eugster, W.; Wehrli, B.; Del Sontro, T.
2013-12-01
Currently, carbon dioxide (CO2) and methane (CH4) emissions from lakes, reservoirs and rivers are readily investigated due to the global warming potential of those gases and the role these inland waters play in the carbon cycle. However, high spatiotemporally-resolved emission estimates are lacking, and how to accurately assess the gas transfer velocity (K) remains controversial. In anthropogenically-impacted systems where run-of-river reservoirs disrupt the flow of sediments by altering erosion and load accumulation patterns, the resulting production of carbonic greenhouse gases (GH-C) is likely to be enhanced. The GH-C flux thus counteracts the terrestrial carbon sink in these environments, which act as net carbon emitters. The aim of this project was to determine the GH-C emissions from a medium-sized river heavily impacted by several impoundments and channelization as it flows through a densely-populated region of Switzerland. Estimating gas emission from rivers is not trivial, and several models have recently been put forth to do so; a second goal of this project was therefore to compare the available river emission models with direct measurements. Finally, we further validated the modeled fluxes using a combined approach of water sampling, chamber measurements, and high-temporal-resolution GH-C monitoring with an equilibrator. We conducted monthly surveys along the 120 km of the lower Aare River, sampling for dissolved CH4 ('manual' sampling) at a 5-km resolution and measuring gas emissions directly with chambers over a 35 km section. We calculated fluxes (F) via the boundary layer equation (F = K × (Cw − Ceq)), which uses the water-air GH-C concentration gradient (Cw − Ceq) and K, the most sensitive parameter. K was estimated using 11 different models from the literature with varying dependencies on river hydrology (n = 7), wind (2), heat exchange (1), and river width (1). We found that chamber fluxes were always higher than boundary
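The boundary layer flux calculation in the abstract is a one-line formula; a minimal sketch, with made-up values for K and the concentrations (the units below are hypothetical, chosen only for illustration):

```python
def boundary_layer_flux(k, c_water, c_equilibrium):
    """F = K * (Cw - Ceq): gas flux across the water-air interface,
    proportional to the transfer velocity K and the concentration
    gradient between the water and atmospheric equilibrium."""
    return k * (c_water - c_equilibrium)

# Illustrative numbers only: K in m/d, concentrations in mmol/m^3.
flux = boundary_layer_flux(k=2.0, c_water=1.5, c_equilibrium=0.3)
```

Since K is the most sensitive parameter, the 11 models mentioned in the abstract would each supply a different `k` for the same measured gradient.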
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Many researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner,…
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-07-12
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements, and few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust, multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply-weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS.
Fouad, Mohamed M; Dansereau, Richard M; Whitehead, Anthony D
2012-03-01
We present an image registration model for image sets with arbitrarily shaped local illumination variations between images. Any nongeometric variation tends to degrade geometric registration precision and impact subsequent processing. Traditional image registration approaches do not typically account for changes and movement of light sources, which result in interimage illumination differences of arbitrary shape. In addition, these approaches typically use a least-squares estimator that is sensitive to outliers, and interimage illumination variations are often large enough to act as outliers. In this paper, we propose an image registration approach that compensates for arbitrarily shaped interimage illumination variations, each processed using a robust M-estimator tuned to that region. Each M-estimator for each illumination region has a distinct cost function by which small and large interimage residuals are unevenly penalized. Since the segmentation of the interimage illumination variations may not be perfect, a segmentation confidence weighting is also imposed to reduce the negative effect of mis-segmentation around illumination region boundaries. The proposed approach is cast in an iterative coarse-to-fine framework, which allows a convergence rate similar to competing intensity-based image registration approaches. The overall approach is presented in a general framework, but experimental results use the bisquare M-estimator with region segmentation confidence weighting. A nearly tenfold improvement in subpixel registration precision is seen with the proposed technique when convergence is attained, as compared with competing techniques on both simulated and real data sets with interimage illumination variations.
Zare, Ali; Mahmoodi, Mahmood; Mohammad, Kazem; Zeraati, Hojjat; Hosseini, Mostafa; Holakouie Naieni, Kourosh
2014-01-01
The 5-year survival rate is a good prognostic indicator for patients with gastric cancer and is usually estimated with the Kaplan-Meier method. In situations with many censored observations, this method produces biased estimates. This study aimed to compare estimates from Kaplan-Meier and Weighted Kaplan-Meier, an alternative method for dealing with heavy censoring. Data from 330 patients with gastric cancer who had undergone surgery at the Iran Cancer Institute from 1995-1999 were analyzed. The survival time of these patients was determined after surgery, and the 5-year survival rate was evaluated with both the Kaplan-Meier and Weighted Kaplan-Meier methods. A total of 239 (72.4%) patients had died by the end of the study and 91 (27.6%) were censored. The mean and median survival times were 24.86 ± 23.73 and 16.33 months, respectively. The one-, two-, three-, four-, and five-year survival rates (with standard errors) estimated by Kaplan-Meier were 0.66 (0.0264), 0.42 (0.0284), 0.31 (0.0274), 0.26 (0.0264) and 0.21 (0.0256), respectively; the Weighted Kaplan-Meier estimates were 0.62 (0.0251), 0.35 (0.0237), 0.24 (0.0211), 0.17 (0.0172), and 0.10 (0.0125), respectively. When the censoring assumption does not hold and the study has many censored observations, Kaplan-Meier estimates are biased upward from their real values, whereas Weighted Kaplan-Meier reduces the bias of the survival probabilities by assigning appropriate weights and gives a more accurate picture.
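The unweighted baseline the study compares against, the standard product-limit (Kaplan-Meier) estimator, can be sketched in a few lines. The toy data below are invented; the Weighted Kaplan-Meier variant discussed in the abstract would additionally reweight contributions by an estimate of the censoring distribution.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimator.
    times:  follow-up times; events: 1 = death observed, 0 = censored.
    Returns (time, survival probability) pairs at each death time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = censored = 0
        while i < len(pairs) and pairs[i][0] == t:
            if pairs[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk   # multiply in this interval's factor
            curve.append((t, s))
        n_at_risk -= deaths + censored
    return curve

# Invented toy follow-up data: 3 deaths, 2 censorings.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
```

Each censored subject silently shrinks the risk set without contributing a death, which is exactly the mechanism that biases the curve upward under heavy censoring.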
NASA Astrophysics Data System (ADS)
Katata, Genki; Kajino, Mizuo; Hiraki, Takatoshi; Aikawa, Masahide; Kobayashi, Tomiki; Nagai, Haruyasu
2011-10-01
To apply a meteorological model to investigate fog occurrence, acidification and deposition in mountain forests, the meteorological model WRF was modified to calculate fog deposition accurately using a simple linear function for fog deposition onto vegetation, derived from numerical experiments with the detailed multilayer atmosphere-vegetation-soil model (SOLVEG). The modified version of WRF that includes fog deposition (fog-WRF) was tested in a mountain forest on Mt. Rokko in Japan. fog-WRF provided a distinctly better prediction of the liquid water content of fog (LWC) than the original version of WRF. It also successfully simulated throughfall observations due to fog deposition inside the forest during the summer season, excluding the effect of forest edges. Using the linear relationship between fog deposition and altitude given by the fog-WRF calculations, together with throughfall observations at a given altitude, the vertical distribution of fog deposition can be roughly estimated in mountain forests. A meteorological model that includes fog deposition will be useful in mapping fog deposition in mountain cloud forests.
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
Weighted Least Squares Estimates of the Magnetotelluric Transfer Functions from Nonstationary Data
Stodt, John A.
1982-11-01
Magnetotelluric field measurements can generally be viewed as sums of signal and additive random noise components. The standard unweighted least squares estimates of the impedance and tipper functions which are usually calculated from noisy data are not optimal when the measured fields are nonstationary. The nonstationary behavior of the signals and noises should be exploited by weighting the data appropriately to reduce errors in the estimates of the impedances and tippers. Insight into the effects of noise on the estimates is gained by careful development of a statistical model, within a linear system framework, which allows for nonstationary behavior of both the signal and noise components of the measured fields. The signal components are, by definition, linearly related to each other by the impedance and tipper functions. It is therefore appropriate to treat them as deterministic parameters, rather than as random variables, when analyzing the effects of noise on the calculated impedances and tippers. From this viewpoint, weighted least squares procedures are developed to reduce the errors in impedances and tippers which are calculated from nonstationary data.
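For a single-input channel, e = Z·h + noise, the weighted least-squares estimate described above reduces to a ratio of weighted sums. The sketch below is a deliberately simplified illustration: a real magnetotelluric analysis estimates complex-valued impedances from multichannel spectra, and the data and weights here are invented.

```python
def weighted_least_squares(h, e, w):
    """Weighted LS estimate of Z in e = Z*h + noise:
    Z = sum(w*h*e) / sum(w*h*h).
    Nonstationary noisy segments get small weights, so they
    contribute little to the estimate."""
    num = sum(wi * hi * ei for wi, hi, ei in zip(w, h, e))
    den = sum(wi * hi * hi for wi, hi in zip(w, h))
    return num / den

# Third sample is wildly noisy; weighting it zero recovers Z = 2.
z = weighted_least_squares(h=[1.0, 2.0, 3.0],
                           e=[2.0, 4.0, 100.0],
                           w=[1.0, 1.0, 0.0])
```

With uniform weights the outlying third sample would pull the estimate far from 2, which is the failure mode of the standard unweighted estimator that the paper addresses.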
Granger causality-based synaptic weights estimation for analyzing neuronal networks.
Shao, Pei-Chiang; Huang, Jian-Jia; Shann, Wei-Chang; Yen, Chen-Tung; Tsai, Meng-Li; Yen, Chien-Chang
2015-06-01
Granger causality (GC) analysis has emerged as a powerful analytical method for estimating causal relationships among various types of neural activity data. However, two problems remain unclear and require further research: (1) the GC measure is nonnegative in its original form and so cannot differentiate excitatory from inhibitory effects between neurons; (2) how is the estimated causality related to the underlying synaptic weights? Based on GC, we propose a computational algorithm, under a best-linear-predictor assumption, for analyzing neuronal networks by estimating the synaptic weights among them. Under this assumption, the GC analysis can be extended to measure both excitatory and inhibitory effects between neurons. The method was examined on three sorts of simulated networks: those with linear, almost linear, and nonlinear network structures. It was also applied to real spike train data from the anterior cingulate cortex (ACC) and the striatum (STR). The results showed, under quinpirole administration, significant excitatory effects inside the ACC, excitatory effects from the ACC to the STR, and inhibitory effects inside the STR.
Distributed weighted least-squares estimation with fast convergence for large-scale systems
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm that asymptotically computes the global optimal estimate; its convergence rate is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm that computes the global optimal estimate in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods. PMID:25641976
Calculation of Ping tolerance values by a weighted-averages approach
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
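The weighted-averages step can be sketched directly: a taxon's optimum along a stressor gradient is its abundance-weighted mean of the stressor values at the sites where it occurs, then rescaled onto the 0-10 tolerance scale. The numbers below are invented; the paper uses six chemical constituents rather than one.

```python
def weighted_average_tolerance(abundances, stressor_values):
    """Taxon optimum = abundance-weighted mean of the stressor gradient.
    abundances[i] is the taxon's count at site i; stressor_values[i]
    is the chemical measurement at that site."""
    total = sum(abundances)
    return sum(a * x for a, x in zip(abundances, stressor_values)) / total

def rescale_0_10(value, lo, hi):
    """Map a raw optimum onto the 0 (sensitive) to 10 (tolerant) scale."""
    return 10.0 * (value - lo) / (hi - lo)

# Hypothetical taxon: abundant at the more polluted site.
optimum = weighted_average_tolerance([1, 3], [2.0, 6.0])
tolerance = rescale_0_10(optimum, lo=0.0, hi=10.0)
```

A taxon concentrated at low-stressor sites ends up near 0 (sensitive), one concentrated at high-stressor sites near 10 (tolerant), matching the scale described above.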
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging to assess groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. This study found that the problem was caused by using the covariance matrix, C_E, of measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. The problem can be resolved by instead using the covariance matrix, C_Ek, of total errors (model errors plus measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of the model selection criteria and the model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek
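The information-criterion averaging weights mentioned above are conventionally computed as w_i ∝ exp(-Δ_i/2), where Δ_i is a model's criterion value minus the minimum. A sketch with invented AIC values illustrates the pathology the abstract describes: a modest criterion gap already pushes almost all weight onto the best model.

```python
import math

def akaike_weights(criteria):
    """Model-averaging weights from information-criterion values:
    w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
    where delta_i = IC_i - min(IC)."""
    best = min(criteria)
    raw = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]

w_close = akaike_weights([100.0, 102.0])  # comparable models share weight
w_far = akaike_weights([100.0, 130.0])    # one model dominates
```

With a gap of 30 criterion units the best model's weight is essentially 1, the "close to 100%" situation that motivates replacing C_E with C_Ek in the likelihood evaluation.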
On the least-square estimation of parameters for statistical diffusion weighted imaging model.
Yuan, Jing; Zhang, Qinwei
2013-01-01
A statistical model for diffusion-weighted imaging (DWI) has been proposed for better tissue characterization by introducing a distribution function for apparent diffusion coefficients (ADC) to account for the restrictions and hindrances to water diffusion in biological tissues. This paper studies the precision and uncertainty in the estimation of the parameters of the statistical DWI model with Gaussian distribution, i.e. the position of the distribution maximum (Dm) and the distribution width (σ), by non-linear least-squares (NLLS) fitting. Numerical simulation shows that precise parameter estimation, particularly for σ, imposes critical requirements on the signal-to-noise ratio (SNR) of the DWI signal when NLLS fitting is used; such an extremely high SNR may be difficult to achieve with normal clinical DWI scan settings. For Dm and σ parameter mapping of the in vivo human brain, multiple local minima are found and result in large uncertainties in the estimation of the distribution width σ. The estimation error in NLLS fitting originates primarily from the insensitivity of the DWI signal intensity to the distribution width σ, given the functional form of the Gaussian-type statistical DWI model.
An improved method for Q-factor estimates based on the frequency-weighted-exponential function
NASA Astrophysics Data System (ADS)
Li, Chuanhui; Liu, Xuewei
2016-11-01
The frequency-weighted-exponential (FWE) function was designed to fit asymmetric amplitude spectra with two parameters: a symmetry index and a bandwidth factor. It has been applied to Q-factor estimation by fitting the amplitude spectra of the source and attenuated wavelets, an approach called the FWE method. The accuracy of the Q-factor estimates from the FWE method depends on the similarity between the modeled FWE functions and the amplitude spectra of the source and attenuated wavelets. However, these spectra are poorly fitted when the FWE function is modeled by measuring the symmetry index and bandwidth factor directly from their definitions. Hence we improve the FWE method by fitting two FWE functions to the amplitude spectra of the source and attenuated wavelets with the least-squares method, obtaining the optimal symmetry index and bandwidth factor. The improved FWE method enhances the accuracy of the Q-factor estimates while maintaining the good applicability and tolerance to random noise of the original FWE method.
The brain weights body-based cues higher than vision when estimating walked distances.
Campos, Jennifer L; Byrne, Patrick; Sun, Hong-Jin
2010-05-01
Optic flow is the stream of retinal information generated when an observer's body, head or eyes move relative to the environment, and it plays a defining role in many influential theories of active perception. Traditionally, studies of optic flow have used artificially generated flow in the absence of the body-based cues typically coincident with self-motion (e.g. proprioception, efference copy, and vestibular signals). While optic flow alone can be used to judge the direction, speed and magnitude of self-motion, little is known about the precise extent to which it is used during natural locomotor behaviours such as walking. In this study, walked distances were estimated in a flat, open, outdoor environment devoid of distinct proximal visual landmarks, using two novel complementary techniques to dissociate the contributions of optic flow from body-based cues. First, lenses were used to magnify or minify the visual environment. Second, two walked distances were presented in succession and were either the same or different in magnitude; vision was either present or absent in each. A computational model was developed based on the results of both experiments. Highly convergent cue-weighting values were observed, indicating that the brain consistently weighted body-based cues about twice as heavily as optic flow, with the combination of the two cues being additive. The current experiments are among the first to isolate and quantify the contribution of optic flow during natural human locomotor behaviour.
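The additive cue-combination rule implied by the results above can be sketched as a weighted mean in which body-based cues carry roughly twice the weight of vision. The specific distance values below are invented; the 2:1 weighting ratio is the abstract's finding, but the exact formula is an illustrative assumption.

```python
def combine_cues(body_estimate, visual_estimate, w_body=2.0, w_visual=1.0):
    """Additive cue combination: a weighted mean of the two single-cue
    distance estimates, with body-based cues weighted ~2x optic flow."""
    total = w_body + w_visual
    return (w_body * body_estimate + w_visual * visual_estimate) / total

# Hypothetical trial: body cues say 12 m walked, minified vision says 9 m.
combined = combine_cues(body_estimate=12.0, visual_estimate=9.0)
```

The combined estimate lands two-thirds of the way toward the body-based value, which is how magnifying or minifying lenses can shift perceived distance without dominating it.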
Lin, Fa-Hsuan; Witzel, Thomas; Ahlfors, Seppo P; Stufflebeam, Steven M; Belliveau, John W; Hämäläinen, Matti S
2006-05-15
Cerebral currents responsible for the extra-cranially recorded magnetoencephalography (MEG) data can be estimated by applying a suitable source model. A popular choice is the distributed minimum-norm estimate (MNE) which minimizes the l2-norm of the estimated current. Under the l2-norm constraint, the current estimate is related to the measurements by a linear inverse operator. However, the MNE has a bias towards superficial sources, which can be reduced by applying depth weighting. We studied the effect of depth weighting in MNE using a shift metric. We assessed the localization performance of the depth-weighted MNE as well as depth-weighted noise-normalized MNE solutions under different cortical orientation constraints, source space densities, and signal-to-noise ratios (SNRs) in multiple subjects. We found that MNE with depth weighting parameter between 0.6 and 0.8 showed improved localization accuracy, reducing the mean displacement error from 12 mm to 7 mm. The noise-normalized MNE was insensitive to depth weighting. A similar investigation of EEG data indicated that depth weighting parameter between 2.0 and 5.0 resulted in an improved localization accuracy. The application of depth weighting to auditory and somatosensory experimental data illustrated the beneficial effect of depth weighting on the accuracy of spatiotemporal mapping of neuronal sources.
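Depth weighting in MNE-type solvers is commonly implemented by scaling each source's prior inversely with the norm of its lead-field (gain) column, so that deep sources with weak gains are not systematically suppressed. The sketch below uses one common parameterization, w_j = ||g_j||^(-2p); whether this matches the exact parameter convention behind the 0.6-0.8 values in the abstract is an assumption, and the lead-field columns are invented.

```python
def depth_weights(leadfield_cols, p=0.7):
    """Depth-weighting factors w_j = ||g_j||^(-2p) for each source j,
    where g_j is that source's lead-field column.  Superficial sources
    (large ||g_j||) get small weights, countering the MNE depth bias."""
    return [sum(v * v for v in g) ** (-p) for g in leadfield_cols]

# Two hypothetical sources: a strong (superficial) and a weak (deep) one.
w = depth_weights([[3.0, 4.0], [1.0, 0.0]], p=0.5)
```

With p = 0.5 the source whose gain norm is 5 receives weight 0.2 versus 1.0 for the unit-gain source, illustrating how the bias toward superficial sources is reduced.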
Size and shape of soil humic acids estimated by viscosity and molecular weight.
Kawahigashi, Masayuki; Sumida, Hiroaki; Yamamoto, Kazuhiko
2005-04-15
Ultrafiltration fractions of three soil humic acids were characterized by viscometry and high-performance size-exclusion chromatography (HPSEC) in order to estimate their shapes and hydrodynamic sizes. Intrinsic viscosities under given solute/solvent/temperature conditions were obtained by extrapolating the concentration dependence of reduced viscosities to zero concentration. Molecular mass (weight-average molecular weight, M_w, and number-average molecular weight, M_n) and hydrodynamic radius (R_H) were determined by HPSEC using pullulan as the calibrant. Values of M_w and M_n ranged from 15 to 118 × 10^3 and from 9 to 50 × 10^3 g mol^-1, respectively. Polydispersity, as indicated by M_w/M_n, increased with increasing filter size from 1.5 to 2.4. The hydrodynamic radii (R_H) ranged between 2.2 and 6.4 nm. For each humic acid, M_w and [η] were related. Mark-Houwink coefficients calculated on the basis of the M_w-[η] relationships suggested restricted flexible chains for two of the humic acids and a branched structure for the third. Those structures probably behave as hydrated sphere colloids in a good solvent. Hydrodynamic radii of the fractions calculated from [η] using Einstein's equation, which is applicable to hydrated sphere colloids, ranged from 2.2 to 7.1 nm. These dimensions fit the size of nanospaces on and between clay minerals and of micropores in soil particle aggregates. On the other hand, the good agreement of the R_H values obtained from Einstein's equation with those directly determined by HPSEC suggests that pullulan is a suitable calibrant for estimating the molecular mass and size of humic acids by HPSEC.
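The Einstein hard-sphere relation used above, [η] = 2.5·N_A·V_h/M with V_h = (4/3)π·R_H³, can be solved for the hydrodynamic radius. A sketch with invented but plausible inputs ([η] = 5 mL/g and M_w = 5 × 10⁴ g/mol, within the ranges reported in the abstract):

```python
import math

AVOGADRO = 6.022e23  # mol^-1

def hydrodynamic_radius_nm(intrinsic_viscosity_ml_g, molar_mass_g_mol):
    """Einstein relation for hydrated spheres:
    [eta] = 2.5 * N_A * (4/3)*pi*R^3 / M,
    solved for R = (3*[eta]*M / (10*pi*N_A))^(1/3).
    [eta] in mL/g and M in g/mol give R in cm; returned in nm."""
    r_cm = (3.0 * intrinsic_viscosity_ml_g * molar_mass_g_mol
            / (10.0 * math.pi * AVOGADRO)) ** (1.0 / 3.0)
    return r_cm * 1e7  # cm -> nm

r = hydrodynamic_radius_nm(intrinsic_viscosity_ml_g=5.0,
                           molar_mass_g_mol=5.0e4)
```

These example inputs give a radius of a few nanometres, consistent with the 2.2-7.1 nm range the paper reports for the humic acid fractions.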
NASA Astrophysics Data System (ADS)
Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.
2015-12-01
Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to a volcano's hazard assessment. Because these thermal anomalies occupy only a small fraction (< 1%) of a typically resolved target pixel (e.g. from Landsat 7 or MODIS), the accurate determination of the hotspot's size and temperature is, however, problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (the Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment, in which the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data, ranging from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was within 20% up to a target distance of 25 m. This means that a reliable estimation of hotspot size is only possible if the hotspot is larger than about 3% of the pixel area, a resolution boundary below which most remotely sensed volcanic hotspots fall. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi
2015-01-01
Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula- and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurement. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm²). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurement were also investigated. Our results showed that, compared with the true tumour volume, segmentation of DWI (P = 0.060-0.671) was more accurate than segmentation of mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation of DWI (intraclass correlation coefficient, ICC = 0.9999) was more reliable than manual segmentation (ICC = 0.9996-0.9998). Tumour size and shape had no effect on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method for the assessment of anti-tumour treatments. PMID:26489359
Hu, Jing; Hu, Jie; Wang, Yuanmei
2003-03-01
In magnetoencephalography (MEG) inverse research, depending on whether a point source model or a distributed source model is used, neuromagnetic source reconstruction methods are classified as parametric current dipole localization or nonparametric source imaging (current density reconstruction). MEG source imaging can be formulated as an inherently ill-posed and highly underdetermined linear inverse problem. Various approaches have been proposed to yield a robust and plausible neural current distribution image; among these, weighted minimum-norm estimation with Tikhonov regularization is a popular technique. The authors present an overall theoretical framework and, following a discussion of its development, describe several regularized minimum-norm algorithms in detail, including depth normalization, low-resolution electromagnetic tomography (LORETA), the focal underdetermined system solver (FOCUSS), and selective minimum-norm (SMN). In addition, some other imaging methods are explained as well, e.g., the maximum entropy method (MEM), methods incorporating other brain functional information such as fMRI data, and the maximum a posteriori (MAP) method using a Markov random field model. Viewed from the generalized standpoint of regularized minimum-norm estimation, all these algorithms aim to resolve the tradeoff between fidelity to the measured data and constraint assumptions about the neural source configuration, such as anatomical and physiological information. In conclusion, almost all source imaging approaches can be made consistent with regularized minimum-norm estimation to some extent.
ERIC Educational Resources Information Center
Feldt, Leonard S.
2004-01-01
In some settings, the validity of a battery composite or a test score is enhanced by weighting some parts or items more heavily than others in the total score. This article describes methods of estimating the total score reliability coefficient when differential weights are used with items or parts.
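The classical reliability formula for a weighted composite (assuming uncorrelated errors across parts) is short enough to sketch; the two-part example numbers are invented for illustration:

```python
import numpy as np

def composite_reliability(weights, part_cov, part_rel):
    """Reliability of a weighted composite C = sum_i w_i X_i.

    Assumes uncorrelated errors across parts, so the composite error
    variance is sum_i w_i^2 * var_i * (1 - rel_i); part_cov is the
    covariance matrix of the observed part scores.
    """
    w = np.asarray(weights, dtype=float)
    var = np.diag(part_cov)
    err_var = np.sum(w**2 * var * (1.0 - np.asarray(part_rel)))
    comp_var = w @ part_cov @ w
    return 1.0 - err_var / comp_var

# Two parts with unit variance, reliability .8, correlation .5, equal weights.
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
rho = composite_reliability([1, 1], cov, [0.8, 0.8])
```

With equal weights this reduces to the usual composite-reliability result; differential weights enter through the `w**2` term in the error variance.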
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function, based on a model of human response over time, is used to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in the initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that the initial treatment response of a small group or a single subject is reflected in the long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error of estimated to test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
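One plausible reading of the SPRE stepping rule is sketched below. All parameter values are invented for illustration; in the paper the shape and scale would be estimated at the change point of the short-term OLS fit, not chosen by hand:

```python
import math

def weibull_pdf(t, shape, scale):
    """Two-parameter Weibull density, defined for t > 0."""
    z = t / scale
    return (shape / scale) * z**(shape - 1) * math.exp(-(z**shape))

def spre_step(y_prev, t_prev, t_next, shape, scale):
    """One SPRE-style step: next outcome = prior outcome times the
    ratio of Weibull densities at consecutive time points.  This is
    a hypothetical rendering of 'a ratio of two-parameter Weibull
    distributions times a prior outcome value'."""
    return y_prev * weibull_pdf(t_next, shape, scale) / weibull_pdf(t_prev, shape, scale)

# Step a 90 kg baseline weight forward one week (illustrative numbers).
y1 = spre_step(90.0, t_prev=4.0, t_next=5.0, shape=1.2, scale=30.0)
```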
ERIC Educational Resources Information Center
Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.
1997-01-01
This review finds that formula-based procedures can be used in place of empirical validation for estimating population validity or in place of empirical cross-validation for estimating population cross-validity. Discusses conditions under which the equal weights procedure is a viable alternative. (SLD)
Ahlberg, C M; Kuehn, L A; Thallman, R M; Kachman, S D; Snelling, W M; Spangler, M L
2016-05-01
Birth weight (BWT) and calving difficulty (CD) were recorded on 4,579 first-parity females from the Germplasm Evaluation Program at the U.S. Meat Animal Research Center (USMARC). Both traits were analyzed using a bivariate animal model with direct and maternal effects. Calving difficulty was transformed from the USMARC scores to corresponding z-scores from the standard normal distribution based on the incidence rate of the USMARC scores. Breed fraction covariates were included to estimate breed differences. Heritability estimates (SE) for BWT direct, CD direct, BWT maternal, and CD maternal were 0.34 (0.10), 0.29 (0.10), 0.15 (0.08), and 0.13 (0.08), respectively. Calving difficulty direct breed effects, expressed as deviations from Angus, ranged from -0.13 to 0.77, and maternal breed effects, expressed as deviations from Angus, ranged from -0.27 to 0.36. Hereford-, Angus-, Gelbvieh-, and Brangus-sired calves would be the least likely to require assistance at birth, whereas Chiangus-, Charolais-, and Limousin-sired calves would be the most likely to require assistance at birth. Maternal breed effects for CD were least for Simmental and Charolais and greatest for Red Angus and Chiangus. Results showed that the diverse biological types of cattle have different effects on both BWT and CD. Furthermore, the results provide a mechanism whereby beef cattle producers can compare EBV for CD direct and maternal arising from disjoined and breed-specific genetic evaluations.
Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng
2015-01-01
Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. The heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation. However, the accuracy of gyroscope is unreliable with time. In this paper, a wearable multi-sensor system has been designed to obtain the high-accuracy indoor heading estimation, according to a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system including one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor minimizes the size and cost. The wearable multi-sensor system was fixed on waist of pedestrian and the quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less 10° and 5° to multi-sensor system fixed on waist of pedestrian and the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
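A quaternion UKF is too long to sketch here, but the accelerometer/magnetometer baseline such a filter refines, tilt-compensated heading, fits in a few lines. The NED axis convention (x forward, y right, z down) and the sample vectors are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def tilt_compensated_heading(accel, mag):
    """Heading (radians, 0..2*pi) from one accelerometer and one
    magnetometer sample.  Assumed frame: x forward, y right, z down."""
    ax, ay, az = accel / np.linalg.norm(accel)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, mz = mag
    # Rotate the magnetic vector into the horizontal plane.
    xh = (mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch)
          + mz * np.cos(roll) * np.sin(pitch))
    yh = my * np.cos(roll) - mz * np.sin(roll)
    return np.arctan2(-yh, xh) % (2 * np.pi)

# Level device pointing magnetic north: heading should be ~0.
h = tilt_compensated_heading(np.array([0.0, 0.0, 1.0]),
                             np.array([0.3, 0.0, 0.5]))
```

This static estimate is exactly what indoor magnetic disturbances corrupt, which is why the paper fuses it with gyroscope data in a UKF.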
Ades, A E; Cliffe, S
2002-01-01
Decision models are usually populated 1 parameter at a time, with 1 item of information informing each parameter. Often, however, data may not be available on the parameters themselves but on several functions of parameters, and there may be more items of information than there are parameters to be estimated. The authors show how in these circumstances all the model parameters can be estimated simultaneously using Bayesian Markov chain Monte Carlo methods. Consistency of the information and/or the adequacy of the model can also be assessed within this framework. Statistical evidence synthesis using all available data should result in more precise estimates of parameters and functions of parameters, and is compatible with the emphasis currently placed on systematic use of evidence. To illustrate this, WinBUGS software is used to estimate a simple 9-parameter model of the epidemiology of HIV in women attending prenatal clinics, using information on 12 functions of parameters, and to thereby compute the expected net benefit of 2 alternative prenatal testing strategies, universal testing and targeted testing of high-risk groups. The authors demonstrate improved precision of estimates, and lower estimates of the expected value of perfect information, resulting from the use of all available data.
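The simultaneous-estimation idea can be sketched with a toy random-walk Metropolis sampler: two parameters, one of which is informed only through a function (here a product) of both. All data values are invented for illustration, and WinBUGS is not required:

```python
import numpy as np

rng = np.random.default_rng(1)

def logit(x):
    return np.log(x / (1 - x))

def expit(z):
    return 1 / (1 + np.exp(-z))

# Synthetic evidence: 50/1000 positives inform the prevalence p directly,
# and 30/1000 positives inform the *function* p*d, so both parameters
# must be estimated jointly (flat priors on the probability scale omitted
# for brevity; this is a sketch, not the paper's 9-parameter model).
def log_post(z):
    p, d = expit(z)
    return (50 * np.log(p) + 950 * np.log(1 - p)
            + 30 * np.log(p * d) + 970 * np.log(1 - p * d))

z = np.array([logit(0.1), logit(0.5)])          # start values on logit scale
lp = log_post(z)
samples = []
for i in range(20000):
    z_new = z + 0.15 * rng.standard_normal(2)   # random-walk proposal
    lp_new = log_post(z_new)
    if np.log(rng.random()) < lp_new - lp:      # Metropolis accept/reject
        z, lp = z_new, lp_new
    if i >= 5000:                               # discard burn-in
        samples.append(expit(z))

post_mean_p, post_mean_d = np.mean(samples, axis=0)
```

Because both likelihood terms involve p, the second dataset sharpens the prevalence estimate as well — the precision gain from synthesizing all available evidence that the abstract emphasizes.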
Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris
2012-01-01
Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652–964) and 704 birds in 2011 (95% CI = 579–837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of the models used to estimate density and population size is expected to improve as data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037–3,965) and 2,461 birds in 2011 (95% CI = 1,682–3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, to produce relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers were similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving the precision of the resulting population size and trend estimates. The method is also better suited to the steep and uneven terrain of Nihoa.
Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H
2012-01-01
MR imaging of hepatic iron overload can be achieved by estimating T2* values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least squares algorithm based on the T2* Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) technique for volumetric estimation of hepatic T2* in the setting of iron overload. The weighted least squares T2* IDEAL technique improves T2* estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T2* assessment with nonlinear exponential fitting, and (ii) a 3D T2* IDEAL technique, with and without a weighted least squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T2* IDEAL estimation. In cases of severe iron overload, T2* IDEAL without weighted least squares reconstruction resulted in a relative overestimation of T2* compared with weighted least squares.
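The core idea of a weighted least squares decay fit, downweighting late, noise-dominated echoes, can be sketched as a log-linear fit with signal-squared weights. This is one standard construction, not the IDEAL implementation itself, and the synthetic echo times and amplitudes are illustrative:

```python
import numpy as np

def decay_wls(te, signal):
    """Monoexponential fit S = S0 * exp(-TE/T) via weighted least
    squares on log(S).  Weighting each echo by S^2 is the standard
    correction for the log transform, and it automatically suppresses
    late, noise-dominated echoes -- the behaviour the abstract credits
    to the weighted least squares reconstruction.
    """
    y = np.log(signal)
    w = signal**2
    A = np.vstack([np.ones_like(te), -te]).T    # columns: [1, -TE]
    W = np.diag(w)
    # Solve the weighted normal equations (A' W A) x = A' W y.
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    s0, rate = np.exp(coef[0]), coef[1]
    return s0, 1.0 / rate                       # amplitude, decay constant

# Noise-free synthetic decay: S0 = 1000, decay constant 5 ms, echoes 1..8 ms.
te = np.arange(1.0, 9.0)
sig = 1000.0 * np.exp(-te / 5.0)
s0, tconst = decay_wls(te, sig)
```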
Dorval, Alan D
2008-08-15
The maximal information that the spike train of any neuron can pass on to subsequent neurons can be quantified as the neuronal firing pattern entropy. Difficulties associated with estimating entropy from small datasets have proven an obstacle to the widespread reporting of firing pattern entropies and more generally, the use of information theory within the neuroscience community. In the most accessible class of entropy estimation techniques, spike trains are partitioned linearly in time and entropy is estimated from the probability distribution of firing patterns within a partition. Ample previous work has focused on various techniques to minimize the finite dataset bias and standard deviation of entropy estimates from under-sampled probability distributions on spike timing events partitioned linearly in time. In this manuscript we present evidence that all distribution-based techniques would benefit from inter-spike intervals being partitioned in logarithmic time. We show that with logarithmic partitioning, firing rate changes become independent of firing pattern entropy. We delineate the entire entropy estimation process with two example neuronal models, demonstrating the robust improvements in bias and standard deviation that the logarithmic time method yields over two widely used linearly partitioned time approaches.
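A minimal plug-in entropy estimate with logarithmic versus linear partitioning of inter-spike intervals can be sketched as follows; the bin count and the log-normal test distribution are illustrative choices, not the paper's neuronal models:

```python
import numpy as np

def isi_entropy(isis, n_bins=32, log_bins=True):
    """Plug-in entropy (bits) of an inter-spike-interval distribution,
    with intervals partitioned either logarithmically or linearly."""
    isis = np.asarray(isis)
    lo, hi = isis.min(), isis.max()
    if log_bins:
        edges = np.logspace(np.log10(lo), np.log10(hi), n_bins + 1)
    else:
        edges = np.linspace(lo, hi, n_bins + 1)
    counts, _ = np.histogram(isis, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# Heavy-tailed (log-normal) ISIs: logarithmic bins spread the mass across
# partitions far more evenly than linear bins, which pile most intervals
# into the first few bins.
rng = np.random.default_rng(2)
isis = rng.lognormal(mean=-2.0, sigma=1.0, size=5000)
h_log = isi_entropy(isis, log_bins=True)
h_lin = isi_entropy(isis, log_bins=False)
```

Note this naive plug-in estimator still carries the finite-dataset bias the abstract discusses; the sketch only illustrates the partitioning choice.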
NASA Astrophysics Data System (ADS)
Cook, Tessa S.; Chadalavada, Seetharam C.; Boonn, William W.
2013-03-01
One of the biggest challenges in dose monitoring is customization of CT dose estimates to the patient. Patient size remains a highly significant variable. One metric that has previously been used for patient size is patient weight, though this is often criticized as inaccurate. In this work, we compare patients' weight to their effective diameters obtained from a CT scan of the chest or the abdomen. CT exams of the chest (N=163) and abdomen/pelvis (N=168) performed on adult patients in July 2012 were randomly selected for analysis. The effective diameter of the patient for each exam was determined using the central slice of the scan region for each exam using eXposure™ (Radimetrics, Inc., Toronto, Canada). In some cases, the same patient had both a chest and abdominopelvic CT, so effective diameters from both regions were analyzed. In this small sample size, there appears to be a linear relationship between patient weight and effective diameter when measured in the mid-chest and mid-abdomen of adult patients. However, for each weight, patient effective diameter can vary by 5 cm from the regression line in both the chest and the abdomen. A 5-cm difference corresponds to a difference of approximately 0.2 in the chest and 0.3 in the abdomen/pelvis for the correction factors recommended for size-specific dose estimation by the AAPM. This preliminary data suggests that weight-based CT protocoling may in fact be appropriate for some adults. However, more work is needed to identify those patients in whom weight-based protocoling is not appropriate.
How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates
ERIC Educational Resources Information Center
Otterbach, Steffen; Sousa-Poza, Alfonso
2010-01-01
This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data is not large, our results show that significant…
A simple method for accurate liver volume estimation by use of curve-fitting: a pilot study.
Aoyama, Masahito; Nakayama, Yoshiharu; Awai, Kazuo; Inomata, Yukihiro; Yamashita, Yasuyuki
2013-01-01
In this paper, we describe the effectiveness of our curve-fitting method by comparing liver volumes estimated with our new technique to volumes obtained with the standard manual contour-tracing method. Hepatic parenchymal-phase images of 13 patients were obtained with multi-detector CT scanners after intravenous bolus administration of 120–150 mL of contrast material (300 mgI/mL). The liver contours of all sections were traced manually by an abdominal radiologist, and the liver volume was computed by summing the volumes inside the contours. The interval between the first and last sections was then divided into 100 equal parts, and each volume was re-sampled by use of linear interpolation. We generated 13 model profile curves by averaging 12 cases, leaving one case out each time, and we estimated the profile curve for each patient by fitting the volume values at 4 points using a scale and translation transform. Finally, we determined the liver volume by integrating the sampling points of the profile curve. We used Bland-Altman analysis to evaluate the agreement between the volumes estimated with our curve-fitting method and the volumes measured by the manual contour-tracing method. The correlation between the volume measured by manual tracing and that estimated with our curve-fitting method was relatively high (r = 0.98; slope 0.97; p < 0.001). The mean difference between manual tracing and our method was -22.9 cm³ (SD of the difference, 46.2 cm³). Our volume-estimating technique, which requires the tracing of only 4 images, exhibited a relatively high linear correlation with the manual tracing technique.
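One plausible reading of the 4-point fitting step is an amplitude scale solved in closed form plus a positional translation found by grid search; the bump-shaped model profile and the sample positions below are invented for illustration:

```python
import numpy as np

def fit_profile(model, xs, vols, shifts=range(-10, 11)):
    """Fit observed per-section volumes at a few normalized positions
    to a model profile curve with an amplitude scale and a positional
    shift (a hypothetical rendering of the 'scale and translation
    transform'), then sum the fitted curve for the total volume."""
    best = None
    for t in shifts:
        idx = np.clip(np.asarray(xs) + t, 0, len(model) - 1)
        p = model[idx]
        s = np.dot(vols, p) / np.dot(p, p)      # closed-form amplitude scale
        sse = np.sum((vols - s * p) ** 2)
        if best is None or sse < best[0]:
            best = (sse, s, t)
    _, s, t = best
    shifted = np.roll(model, -t)
    return s * shifted.sum(), s, t              # total volume, scale, shift

# Synthetic profile: a smooth bump over 101 normalized sections.
x = np.linspace(0, 1, 101)
model = np.sin(np.pi * x) ** 2
true_vol = 3.0 * model.sum()
est, s, t = fit_profile(model, xs=[20, 40, 60, 80],
                        vols=3.0 * model[[20, 40, 60, 80]])
```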
Haasl, Ryan J; Payseur, Bret A
2010-12-01
Theoretical work focused on microsatellite variation has produced a number of important results, including the expected distribution of repeat sizes and the expected squared difference in repeat size between two randomly selected samples. However, closed-form expressions for the sampling distribution and frequency spectrum of microsatellite variation have not been identified. Here, we use coalescent simulations of the stepwise mutation model to develop gamma and exponential approximations of the microsatellite allele frequency spectrum, a distribution central to the description of microsatellite variation across the genome. For both approximations, the parameter of biological relevance is the number of alleles at a locus, which we express as a function of θ, the population-scaled mutation rate, based on simulated data. Discovered relationships between θ, the number of alleles, and the frequency spectrum support the development of three new estimators of microsatellite θ. The three estimators exhibit roughly similar mean squared errors (MSEs) and all are biased. However, across a broad range of sample sizes and θ values, the MSEs of these estimators are frequently lower than all other estimators tested. The new estimators are also reasonably robust to mutation that includes step sizes greater than one. Finally, our approximation to the microsatellite allele frequency spectrum provides a null distribution of microsatellite variation. In this context, a preliminary analysis of the effects of demographic change on the frequency spectrum is performed. We suggest that simulations of the microsatellite frequency spectrum under evolutionary scenarios of interest may guide investigators to the use of relevant and sometimes novel summary statistics.
Ohta, Hiroyuki; Sakuma, Masae; Suzuki, Akitsu; Morimoto, Yuuka; Ishikawa, Makoto; Umeda, Minako; Arai, Hidekazu
2016-01-01
Fibroblast growth factor 23 (FGF23) is a molecule involved in regulating phosphorus homeostasis. Although some studies have indicated an association between serum FGF23 levels and sex, the association has not been fully investigated. The purpose of this study was to evaluate whether sex influences FGF23 responsiveness to dietary phosphorus intake in healthy individuals. Thirty-two healthy subjects between 21 and 28 years of age were recruited for this study. Subjects performed 24-hour urine collection (UC), and blood samples were collected. We estimated phosphorus intake (UC-P) from the urine collection and evaluated the association between UC-P and serum FGF23 levels. Subsequently, we compared serum FGF23 levels between males and females. A positive correlation was observed between UC-P and serum FGF23 levels. Serum FGF23 levels were significantly higher in males than in females. Serum FGF23 levels/UC-P were significantly higher in females than in males. There was no significant difference in serum FGF23 levels/UC-P/BW between the male and female groups. Our results indicate that there was no sex difference in FGF23 responsiveness to phosphorus intake per body weight.
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2011-01-01
The purposes of this study were to generate correction equations for self-reported height and weight quartiles and to test the accuracy of the body mass index (BMI) classification based on corrected self-reported height and weight among 739 male and 434 female college students. The BMIqc (from height and weight quartile-specific, corrected…
2011-01-01
Background: The purpose of this study is to explore how a patient's height and weight can be used to predict the effective dose to a reference phantom of similar height and weight from a chest abdomen pelvis computed tomography scan when machine-based parameters are unknown. Since machine-based scanning parameters can be misplaced or lost, a predictive model will enable the medical professional to quantify a patient's cumulative radiation dose. Methods: One hundred mathematical phantoms of varying heights and weights were defined within an x-ray Monte Carlo based software code in order to calculate organ absorbed doses and effective doses from a chest abdomen pelvis scan. Regression analysis was used to develop an effective dose predictive model. The regression model was experimentally verified using anthropomorphic phantoms and validated against a real patient population. Results: Estimates of the effective doses as calculated by the predictive model were within 10% of the estimates of the effective doses using experimentally measured absorbed doses within the anthropomorphic phantoms. Comparisons of the patient population effective doses show that the predictive model is within 33% of current methods of estimating effective dose using machine-based parameters. Conclusions: A patient's height and weight can be used to estimate the effective dose from a chest abdomen pelvis computed tomography scan. The presented predictive model can be used interchangeably with current effective dose estimating techniques that rely on computed tomography machine-based techniques. PMID:22004072
NASA Astrophysics Data System (ADS)
Blockley, Simon P. E.; Bronk Ramsey, C.; Pyle, D. M.
2008-10-01
The role of tephrochronology, as a dating and stratigraphic tool, in precise palaeoclimate and environmental reconstruction has expanded significantly in recent years. The power of tephrochronology rests on the fact that a tephra layer can stratigraphically link records at the resolution of as little as a few years, and that the most precise age for a particular tephra can be imported into any site where it is found. In order to maximise the potential of tephras for this purpose, it is necessary to have the most precise and robustly tested age estimate possible available for key tephras. Given the varying number and quality of dates associated with different tephras, it is important to be able to build age models to test competing tephra dates. Recent advances in Bayesian age modelling of dates in sequence have radically extended our ability to build such stratigraphic age models. As an example of the potential here, we use Bayesian methods, now widely applied, to examine the dating of some key Late Quaternary tephras from Italy. These are the Agnano Monte Spina Tephra (AMST), the Neapolitan Yellow Tuff (NYT) and the Agnano Pomici Principali (APP), all of which have multiple estimates of their true age. Further, we use Bayesian approaches to generate a revised mixed radiocarbon/varve chronology for the important Lateglacial section of the Lago Grande di Monticchio record, as a further illustration of what can be achieved by a Bayesian approach. For all three tephras we were able to produce viable model ages, validate the proposed 40Ar/39Ar age ranges, and provide relatively high precision age models. The results of the Bayesian integration of dating and stratigraphic information suggest that the current best 95% confidence calendar age estimates are 4690-4300 cal BP for the AMST, 14320-13900 cal BP for the NYT, and 12380-12140 cal BP for the APP.
Hernández-Vicente, Adrián; Pérez-Isaac, Raúl; Santín-Medeiros, Fernanda; Cristi-Montero, Carlos; Casajús, Jose Antonio; Garatachea, Nuria
2017-01-01
Background: The SenseWear Armband (SWA) is a monitor that can be used to estimate energy expenditure (EE); however, it has not been validated in healthy adults. The objective of this paper was to study the validity of the SWA for quantifying EE levels. Methods: Twenty-three healthy adults (age 40–55 years, mean 48 ± 3.42 years) performed different types of standardized physical activity (PA) for 10 minutes (rest, walking at 3 and 5 km·h-1, running at 7 and 9 km·h-1, and sitting/standing at a rate of 30 cycle·min-1). Participants wore the SWA on their right arm, and their EE was measured by indirect calorimetry (IC), the gold standard. Results: There were significant differences between the SWA and IC, except in the group that ran at 9 km·h-1 (>9 METs). Bland-Altman analysis showed a bias of 1.56 METs (±1.83 METs) and 95% limits of agreement (LOA) of −2.03 to 5.16 METs. There were indications of heteroscedasticity (R² = 0.03; P < 0.05). Analysis of the receiver operating characteristic (ROC) curves showed that the SWA does not seem to be sensitive enough to estimate the level of EE at the highest intensities. Conclusions: The SWA is not as precise in estimating EE as IC, but it could be a useful tool for determining levels of EE at low intensities. PMID:28361062
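The Bland-Altman computation used in such validity studies is short enough to sketch; the paired MET values below are invented, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods, e.g. a wearable monitor versus the criterion."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired MET estimates (hypothetical numbers).
swa = np.array([3.1, 4.8, 7.2, 9.5, 2.0])
ic = np.array([2.5, 4.0, 6.1, 9.4, 1.2])
bias, (loa_lo, loa_hi) = bland_altman(swa, ic)
```

A positive bias with wide limits of agreement, as reported above, means the device systematically overestimates EE and that individual-level disagreement can be large even when the average error looks modest.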
Blache, Yoann; Bobbert, Maarten; Argaud, Sebastien; Pairot de Fontenay, Benoit; Monteil, Karine M
2013-08-01
In experiments investigating vertical squat jumping, the HAT segment is typically defined as a line drawn from the hip to some point proximally on the upper body (eg, the neck, the acromion), and the hip joint as the angle between this line and the upper legs (θUL-HAT). In reality, the hip joint is the angle between the pelvis and the upper legs (θUL-pelvis). This study aimed to estimate to what extent hip joint definition affects hip joint work in maximal squat jumping. Moreover, the initial pelvic tilt was manipulated to maximize the difference in hip joint work as a function of hip joint definition. Twenty-two male athletes performed maximum effort squat jumps in three different initial pelvic tilt conditions: backward (pelvisB), neutral (pelvisN), and forward (pelvisF). Hip joint work was calculated by integrating the hip net joint torque with respect to θUL-HAT (WUL-HAT) or with respect to θUL-pelvis (WUL-pelvis). θUL-HAT was greater than θUL-pelvis in all conditions. WUL-HAT overestimated WUL-pelvis by 33%, 39%, and 49% in conditions pelvisF, pelvisN, and pelvisB, respectively. It was concluded that θUL-pelvis should be measured when the mechanical output of hip extensor muscles is estimated.
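The work computation common to both joint definitions, integrating net joint torque over the joint angle, can be sketched as:

```python
import numpy as np

def joint_work(torque, angle):
    """Net joint work as the trapezoidal integral of net joint torque
    over joint angle.  A larger angular excursion for the same torque
    trace yields more work, which is why the HAT-based angle inflates
    the hip work estimate."""
    torque, angle = np.asarray(torque), np.asarray(angle)
    return np.sum(0.5 * (torque[1:] + torque[:-1]) * np.diff(angle))

# A constant 100 N·m torque over 0.5 rad of extension (illustrative).
angles = np.linspace(0.0, 0.5, 51)
w = joint_work(np.full(51, 100.0), angles)
```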
Sahin, A; Ulutas, Z; Yilmaz Adkinson, A; Adkinson, R W
2012-06-01
A study was conducted to assess the influence of genetic and environmental factors on Brown Swiss calf birth weight, and to estimate variance components, genetic parameters, and breeding values. Data were collected on 1,761 Brown Swiss calves born from 1990 to 2005 on the Konuklar State Farm in Turkey. Mean birth weight for all calves was 39.3 ± 0.09 kg. Least squares mean birth weights for male and female Brown Swiss calves were 40.3 ± 0.02 and 39.0 ± 0.02 kg, respectively. Variance components, genetic parameters, and breeding values for birth weight in Brown Swiss calves were estimated by restricted maximum likelihood (REML)-best linear unbiased prediction (BLUP) procedures using an MTDFREML (multiple trait derivative free restricted maximum likelihood) program employing an animal model. Direct heritability (h(d)(2)), maternal heritability (h(m)(2)), total heritability (h(T)(2)), r(am), and c(am) estimates were 0.12, 0.09, 0.23, -0.58, and -0.06, respectively. The estimated maternal permanent environmental variance expressed as a proportion of the phenotypic variance (c(2)) was 0.05. Breeding values were estimated for the trait and used to evaluate genetic trends across the period investigated. The linear regression of the genetic trend was not different from zero. No genetic trend for birth weight was expected, since there had been no direct selection pressure on the trait; the absence of a trend also confirms that there was no change due to selection pressure on correlated traits. Genetic and environmental parameter estimates were similar to literature values, indicating that effective selection methods used in more developed improvement programs would be effective in Turkey as well.
Quantitative shape analysis with weighted covariance estimates for increased statistical efficiency
2013-01-01
Background: The introduction and statistical formalisation of landmark-based methods for analysing biological shape has made a major impact on comparative morphometric analyses. However, a satisfactory solution for including information from 2D/3D shapes represented by 'semi-landmarks' alongside well-defined landmarks in the analyses is still missing. Nor has a statistical treatment of measurement error been integrated into current approaches. Results: We propose a procedure based upon the description of landmarks with measurement covariance, which extends statistical linear modelling processes to semi-landmarks for further analysis. Our formulation is based upon a self-consistent approach to the construction of likelihood-based parameter estimation and includes corrections for parameter bias induced by the degrees of freedom within the linear model. The method has been implemented and tested on measurements from 2D fly wing, 2D mouse mandible and 3D mouse skull data. We use these data to explore possible advantages and disadvantages over the use of standard Procrustes/PCA analysis via a combination of Monte-Carlo studies and quantitative statistical tests. In the process we show how appropriate weighting provides not only greater stability but also more efficient use of the available landmark data. The set of new landmarks generated in our procedure ('ghost points') can then be used in any further downstream statistical analysis. Conclusions: Our approach provides a consistent way of including different forms of landmarks in an analysis and reduces instabilities due to poorly defined points. Our results suggest that the method has the potential to be utilised for the analysis of 2D/3D data and, in particular, for the inclusion of information from surfaces represented by multiple landmark points. PMID:23548043
How accurate is the estimation of anthropogenic carbon in the ocean? An evaluation of the ΔC* method
NASA Astrophysics Data System (ADS)
Matsumoto, Katsumi; Gruber, Nicolas
2005-09-01
The ΔC* method of Gruber et al. (1996) is widely used to estimate the distribution of anthropogenic carbon in the ocean; however, as yet, no thorough assessment of its accuracy has been made. Here we provide a critical re-assessment of the method and determine its accuracy by applying it to synthetic data from a global ocean biogeochemistry model, for which we know the "true" anthropogenic CO2 distribution. Our results indicate that the ΔC* method tends to overestimate anthropogenic carbon in relatively young waters but underestimate it in older waters. Main sources of these biases are (1) the time evolution of the air-sea CO2 disequilibrium, which is not properly accounted for in the ΔC* method, (2) a pCFC ventilation age bias that arises from mixing, and (3) errors in identifying the different end-member water types. We largely support the findings of Hall et al. (2004), who have also identified the first two bias sources. An extrapolation of the errors that we quantified on a number of representative isopycnals to the global ocean suggests a positive bias of about 7% in the ΔC*-derived global anthropogenic CO2 inventory. The magnitude of this bias is within the previously estimated 20% uncertainty of the method, but regional biases can be larger. Finally, we propose two improvements to the ΔC* method in order to account for the evolution of air-sea CO2 disequilibrium and the ventilation age mixing bias.
NASA Astrophysics Data System (ADS)
Perez-Quezada, Jorge F.; Brito, Carla E.; Cabezas, Julián; Galleguillos, Mauricio; Fuentes, Juan P.; Bown, Horacio E.; Franck, Nicolás
2016-12-01
Making accurate estimations of daily and annual Rs fluxes is key for understanding the carbon cycle and projecting effects of climate change. In this study we used high-frequency sampling (24 measurements per day) of Rs in a temperate rainforest during 1 year, with the objective of determining when and how often measurements should be made to obtain accurate estimations of daily and annual Rs. We randomly selected data to simulate samplings of 1, 2, 4 or 6 measurements per day (distributed either over the whole day or only during daytime), combined with 4, 6, 12, 26 or 52 measurements per year. Based on the comparison of partial-data series with the full-data series, we evaluated the performance of different partial sampling strategies in terms of bias, precision and accuracy. For annual Rs estimation, we compared the performance of interpolation vs. non-linear modelling based on soil temperature. The results show that, under our study conditions, sampling twice a day was enough to accurately estimate daily Rs (RMSE < 10% of average daily flux), even if both measurements were made during daytime. The largest reduction in RMSE for the estimation of annual Rs was achieved when increasing from four to six measurements per year, but reductions remained relevant when further increasing the frequency of sampling. We found that increasing the number of field campaigns was more effective than increasing the number of measurements per day, provided a minimum of two measurements per day was used. Including night-time measurements significantly reduced the bias and was relevant in reducing the number of field campaigns when a lower level of acceptable error (RMSE < 5%) was established. Using non-linear modelling instead of linear interpolation did improve the estimation of annual Rs, but not as much as expected. In conclusion, given that most studies of Rs use manual sampling techniques and apply only one measurement per day, we…
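A minimal numerical sketch of the subsampling experiment described above. The series is synthetic (a seasonal plus diurnal cycle with noise; the amplitudes and the 08:00-18:00 daytime window are assumptions, not the study's measurements), but the procedure follows the abstract's design: draw n random measurements per day and score the resulting daily means by RMSE against the full 24-sample daily means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly soil-respiration series for one year (illustrative only):
# seasonal cycle + diurnal cycle + noise, in umol CO2 m-2 s-1.
hours = np.arange(365 * 24)
rs = (2.0
      + 1.0 * np.sin(2 * np.pi * hours / (365 * 24))   # seasonal component
      + 0.5 * np.sin(2 * np.pi * (hours % 24) / 24)    # diurnal component
      + rng.normal(0.0, 0.2, hours.size)).reshape(365, 24)

true_daily = rs.mean(axis=1)  # "full-data" daily means (24 samples/day)

def rmse_for_sampling(n_per_day, daytime_only=False, n_trials=200):
    """Average RMSE of daily means estimated from n_per_day random samples."""
    cols = np.arange(8, 18) if daytime_only else np.arange(24)
    errs = []
    for _ in range(n_trials):
        pick = rng.choice(cols, size=n_per_day, replace=False)
        est = rs[:, pick].mean(axis=1)
        errs.append(np.sqrt(np.mean((est - true_daily) ** 2)))
    return float(np.mean(errs))

for n in (1, 2, 4, 6):
    print(n, round(rmse_for_sampling(n), 3))
```

As in the study, increasing n per day shrinks the RMSE of the daily estimate; the same scaffolding extends to varying the number of campaigns per year.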
An exploratory investigation of weight estimation techniques for hypersonic flight vehicles
NASA Technical Reports Server (NTRS)
Cook, E. L.
1981-01-01
The three basic methods of weight prediction (fixed-fraction, statistical correlation, and point stress analysis) and some of the computer programs that have been developed to implement them are discussed. A modified version of the WAATS (Weights Analysis of Advanced Transportation Systems) program is presented, along with input data forms and an example problem.
Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo
2016-01-01
Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where strong noise often contaminates the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model that includes baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can either be provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding, obtained from systematic investigation of the data (noise level, spiking rate and so on), that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255
Shockley, Keith R
2016-06-15
High-throughput in vitro screening experiments can be used to generate concentration-response data for large chemical libraries. It is often desirable to estimate the concentration needed to achieve a particular effect, or potency, for each chemical tested in an assay. Potency estimates can be used to directly compare chemical profiles and prioritize compounds for confirmation studies, or employed as input data for prediction modeling and association mapping. The concentration for half-maximal activity derived from the Hill equation model (i.e., AC50) is the most common potency measure applied in pharmacological research and toxicity testing. However, the AC50 parameter is subject to large uncertainty for many concentration-response relationships. In this study we introduce a new measure of potency based on a weighted Shannon entropy measure termed the weighted entropy score (WES). Our potency estimator (Point of Departure, PODWES) is defined as the concentration producing the maximum rate of change in weighted entropy along a concentration-response profile. This approach provides a new tool for potency estimation that does not depend on the assumption of monotonicity or any other pre-specified concentration-response relationship. PODWES estimates potency with greater precision and less bias compared to the conventional AC50 assessed across a range of simulated conditions.
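The PODWES idea can be illustrated with a toy implementation. The weighted-entropy form below (weights proportional to response magnitude) is a simplified stand-in, not the authors' exact WES formula; the POD is then taken as the concentration at which the cumulative weighted entropy changes fastest, which, as the abstract notes, requires no monotonicity assumption.

```python
import numpy as np

def weighted_entropy(y, eps=1e-12):
    """Weighted Shannon entropy of a response vector (simplified form)."""
    y = np.maximum(np.asarray(y, dtype=float), 0.0)
    p = y / (y.sum() + eps)          # normalise responses to a distribution
    w = y / (y.max() + eps)          # weights emphasising larger responses
    return float(-np.sum(w * p * np.log(p + eps)))

def pod_wes(conc, resp):
    """Concentration where the cumulative weighted entropy changes fastest."""
    h = [weighted_entropy(resp[:k]) for k in range(2, len(resp) + 1)]
    dh = np.abs(np.diff(h))          # change as each new concentration is added
    return conc[int(np.argmax(dh)) + 2]

# Toy monotone Hill curve (AC50 = 1 uM); PODWES itself needs no such shape.
conc = np.logspace(-3, 2, 8)                 # uM
resp = 100.0 * conc / (conc + 1.0)           # percent activity
print(f"PODWES = {pod_wes(conc, resp):.3g} uM")
```

Because the estimator only looks for the steepest change in an entropy profile, it also returns a potency for non-monotone or partial curves where an AC50 fit would be unstable.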
Saha, K; Shahida, S M; Chowdhury, N I; Mostafa, G; Saha, S K; Jahan, S
2014-10-01
Low birth weight (LBW) babies are predisposed to long-term renal disease, adult hypertension and related cardiovascular disease. This could be due to reduced nephron number in early life. From different studies, it is becoming increasingly clear that nephron number, indirectly reflected in renal volume, may be related to normal or retarded foetal growth. This prospective study was undertaken in the department of Obstetrics and Gynaecology in Bangabandhu Sheikh Mujib Medical University, Dhaka, Bangladesh. One hundred pregnant women were included in this study and divided into two groups (IUGR and normally growing foetuses). Forty-one foetuses weighed less than 2.5 kg and fifty-nine foetuses weighed 2.5 kg or more. Kidney dimensions and estimated foetal weight were measured by ultrasonography (USG) by the same ultrasonologist. There was no significant difference between the two groups regarding age, height, weight, and parity. The subjects with intrauterine growth retardation (IUGR) had smaller head circumferences, abdominal circumferences, biparietal diameters, femur lengths, estimated foetal weights and lower amniotic fluid indices than the subjects without IUGR. All biometric data showed significant differences except head circumference (HC). IUGR foetuses had significantly lower kidney volumes than normally growing foetuses.
Hessian estimates in weighted Lebesgue spaces for fully nonlinear elliptic equations
NASA Astrophysics Data System (ADS)
Byun, Sun-Sig; Lee, Mikyoung; Palagachev, Dian K.
2016-03-01
We prove global regularity in weighted Lebesgue spaces for the viscosity solutions to the Dirichlet problem for fully nonlinear elliptic equations. As a consequence, regularity in Morrey spaces of the Hessian is derived as well.
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let Sn[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of convergence of Sn[f] in weighted Lp space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^{-x^2/2} is the Hermite weight function, then we obtain sufficient conditions for the corresponding boundedness inequalities to hold for k = 0, 1, 2, ..., r.
Using machine vision to estimate bird weight in the poultry industry
NASA Astrophysics Data System (ADS)
Lotufo, Roberto A.; Taube-Netto, Miguel; Conejo, Eduardo; Hoyos, Francisco J. d.
1999-03-01
This work describes a real-time continuous broiler weighing system based on machine vision, used for size-sort planning in a processing plant. We demonstrate that this technology can be used successfully as an alternative to classical electromechanical carcass weighing systems. A digitized silhouette image of the carcass, hung by its feet, is divided into six regions: the legs, the wings, the neck and the breast. Once the parts are separated, their individual areas are measured and used in a polynomial equation that predicts the overall bird weight. A sample of 1400 birds was collected, labeled and weighed on a precision scale, on different days and at different hours. We found an error standard deviation of 78 grams for broilers weighing from 750 to 2100 grams. The morphological image processing algorithms proved robust in extracting the individual parts of the carcass.
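The area-to-weight prediction step can be sketched as an ordinary least-squares fit of a first-order polynomial in the six region areas. All data below are simulated; the coefficients, area ranges and the 78 g residual scatter are invented for illustration, not taken from the plant's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for the measured data: six silhouette-region areas (cm^2)
# per bird and a scale weight (g); coefficients and ranges are invented.
n = 1400
areas = rng.uniform(20.0, 120.0, size=(n, 6))
true_coef = np.array([3.0, 2.0, 1.5, 1.5, 4.0, 5.0])
weight = 300.0 + areas @ true_coef + rng.normal(0.0, 78.0, n)

# First-order polynomial fit: weight = b0 + sum_i b_i * area_i.
X = np.column_stack([np.ones(n), areas])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)

resid_sd = float(np.std(weight - X @ coef))
print(f"residual std: {resid_sd:.1f} g")
```

With real silhouettes, higher-order terms or interactions between regions could be added to the design matrix in exactly the same way.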
Vargas, Alfredo; Krivokapic, Itana; Hauser, Andreas; Lawson Daku, Latévi Max
2013-03-21
We report a detailed DFT study of the energetic and structural properties of the spin-crossover Co(II) complex [Co(tpy)2]2+ (tpy = 2,2':6',2''-terpyridine) in the low-spin (LS) and the high-spin (HS) states, using several generalized gradient approximation and hybrid functionals. In either spin state, the results obtained with the functionals are consistent with one another and in good agreement with available experimental data. Although the different functionals correctly predict the LS state as the electronic ground state of [Co(tpy)2]2+, they give estimates of the HS-LS zero-point energy difference which strongly depend on the functional used. This dependency on the functional was also reported for the DFT estimates of the zero-point energy difference in the HS complex [Co(bpy)3]2+ (bpy = 2,2'-bipyridine) [A. Vargas, A. Hauser and L. M. Lawson Daku, J. Chem. Theory Comput., 2009, 5, 97]. The comparison of the two sets of estimates showed that all functionals correctly predict an increase of the zero-point energy difference upon the bpy → tpy ligand substitution, and that this increase depends only weakly on the functional. From these results and basic thermodynamic considerations, we establish that, despite their limitations, current DFT methods can be applied to the accurate determination of the spin-state energetics of complexes of a transition metal ion, or of these complexes in different environments, provided that the spin-state energetics is accurately known in one case. Thus, making use of the availability of a highly accurate ab initio estimate of the HS-LS energy difference in the complex [Co(NCH)6]2+ [L. M. Lawson Daku, F. Aquilante, T. W. Robinson and A. Hauser, J. Chem. Theory Comput., 2012, 8, 4216], we obtain best estimates for [Co(tpy)2]2+ and [Co(bpy)3]2+, in good agreement with the known magnetic behaviour of the two complexes.
McMillan, K; Bostani, M; McNitt-Gray, M; McCollough, C
2015-06-15
Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not
Gowane, G R; Chopra, A; Prince, L L L; Paswan, C; Arora, A L
2010-03-01
(Co)variance components and genetic parameters of weight at birth (BWT), weaning (3WT), 6, 9 and 12 months of age (6WT, 9WT and 12WT, respectively) and first greasy fleece weight (GFW) of Bharat Merino sheep, maintained at Central Sheep and Wool Research Institute, Avikanagar, Rajasthan, India, were estimated by restricted maximum likelihood, fitting six animal models with various combinations of direct and maternal effects. Data were collected over a period of 10 years (1998 to 2007). A log-likelihood ratio test was used to select the most appropriate univariate model for each trait, which was subsequently used in bivariate analysis. Heritability estimates for BWT, 3WT, 6WT, 9WT and 12WT and first GFW were 0.05 ± 0.03, 0.04 ± 0.02, 0.00, 0.03 ± 0.03, 0.09 ± 0.05 and 0.05 ± 0.03, respectively. There was no evidence for the maternal genetic effect on the traits under study. Maternal permanent environmental effect contributed 19% for BWT and 6% to 11% from 3WT to 9WT and 11% for first GFW. Maternal permanent environmental effect on the post-3WT was a carryover effect of maternal influences during pre-weaning age. A low rate of genetic progress seems possible in the flock through selection. Direct genetic correlations between body weight traits were positive and ranged from 0.36 between BWT and 6WT to 0.94 between 3WT and 6WT and between 6WT and 12WT. Genetic correlations of 3WT with 6WT, 9WT and 12WT were high and positive (0.94, 0.93 and 0.93, respectively), suggesting that genetic gain in post-3WT will be maintained if selection age is reduced to 3 months. The genetic correlations of GFW with live weights were 0.01, 0.16, 0.18, 0.40 and 0.32 for BWT, 3WT, 6WT, 9WT and 12WT, respectively. Correlations of permanent environmental effects of the dam across different traits were high and positive for all the traits (0.45 to 0.98).
Chan, Yi-Hsin; Tsai, Wei-Chung; Shen, Changyu; Han, Seongwook; Chen, Lan S.; Lin, Shien-Fong; Chen, Peng-Sheng
2015-01-01
Background We recently reported that subcutaneous nerve activity (SCNA) can be used to estimate sympathetic tone. Objectives To test the hypothesis that left thoracic SCNA is more accurate than heart rate variability (HRV) in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction (MI). Methods We used an implanted radiotransmitter to study left stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA), and thoracic SCNA in 9 dogs at baseline and up to 8 weeks after MI. HRV was determined by time-domain, frequency-domain and non-linear analyses. Results The correlation coefficients between integrated SGNA and SCNA averaged 0.74 (95% confidence interval (CI), 0.41–1.06) at baseline and 0.82 (95% CI, 0.63–1.01) after MI (P<.05 for both). The absolute values of these correlation coefficients were significantly larger than those between SGNA and HRV based on time-domain, frequency-domain and non-linear analyses, respectively, at baseline (P<.05 for all) and after MI (P<.05 for all). There was a clear increment of SGNA and SCNA at 2, 4, 6 and 8 weeks after MI, while HRV parameters showed no significant changes. Significant circadian variations were noted in SCNA, SGNA and all HRV parameters both at baseline and after MI. Atrial tachycardia (AT) episodes were invariably preceded by increases in SCNA and SGNA, which rose progressively over the 120, 90, 60 and 30 s preceding AT onset. No such changes in HRV parameters were observed before AT onset. Conclusion SCNA is more accurate than HRV in estimating cardiac sympathetic tone in ambulatory dogs with MI. PMID:25778433
Baldi, F; Albuquerque, L G; Alencar, M M
2010-08-01
The objective of this work was to estimate covariance functions for direct and maternal genetic effects, animal and maternal permanent environmental effects, and subsequently, to derive relevant genetic parameters for growth traits in Canchim cattle. Data comprised 49,011 weight records on 2435 females from birth to adult age. The model of analysis included fixed effects of contemporary groups (year and month of birth and at weighing) and age of dam as a quadratic covariate. Mean trends were taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were allowed to vary and were modelled by a step function with 1, 4 or 11 classes based on animal's age. The model fitting four classes of residual variances was the best. A total of 12 random regression models from second to seventh order were used to model direct and maternal genetic effects, animal and maternal permanent environmental effects. The model with direct and maternal genetic effects, animal and maternal permanent environmental effects fitted by quadric, cubic, quintic and linear Legendre polynomials, respectively, was the most adequate to describe the covariance structure of the data. Estimates of direct and maternal heritability obtained by multi-trait (seven traits) and random regression models were very similar. Selection for higher weight at any age, especially after weaning, will produce an increase in mature cow weight. The possibility to modify the growth curve in Canchim cattle to obtain animals with rapid growth at early ages and moderate to low mature cow weight is limited.
ERIC Educational Resources Information Center
Qing, Siyu
2014-01-01
The National Science Foundation (NSF) Survey of Doctorate Recipients (SDR) collects information on a sample of individuals in the United States with PhD degrees. A significant portion of the sampled individuals appear in multiple survey years and can be linked across time. Survey weights in each year are created and adjusted for oversampling and…
Estimating overnight weight loss of corralled yearling steers in semiarid rangeland
Technology Transfer Automated Retrieval System (TEKTRAN)
Free-ranging livestock grazing native vegetation on rangelands are frequently gathered and confined overnight in a corral (sensu drylot) prior to weighing to determine periodic weight gains for grazing studies. Quantification of this overnight percent shrink across the grazing season could provide t...
Estimation of breed-specific heterosis effects for birth, weaning, and yearling weight in cattle
Technology Transfer Automated Retrieval System (TEKTRAN)
Heterosis, assumed proportional to expected breed heterozygosity, was calculated for 6,834 individuals with birth, weaning and yearling weight records from Cycle VII and advanced generations of the U.S. Meat Animal Research Center (USMARC) Germplasm Evaluation (GPE) project. Breeds represented in t...
NASA Astrophysics Data System (ADS)
Ganguli, Anurag; Saha, Bhaskar; Raghavan, Ajay; Kiesel, Peter; Arakaki, Kyle; Schuh, Andreas; Schwartz, Julian; Hegyi, Alex; Sommer, Lars Wilko; Lochbaum, Alexander; Sahu, Saroj; Alamgir, Mohamed
2017-02-01
A key challenge hindering the mass adoption of Lithium-ion and other next-generation chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been the management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS), which rely on monitoring external parameters such as voltage and current to ensure safe battery operation at the required performance, usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell-state monitoring, as they could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring: fiber-optic (FO) sensors. High-performance large-format pouch cells with embedded FO sensors were fabricated. This second part of the paper focuses on the internal signals obtained from these FO sensors. The details of the method used to isolate intercalation strain and temperature signals are discussed. Data collected under various xEV operating conditions are presented. An algorithm employing dynamic time warping and Kalman filtering was used to estimate state-of-charge with high accuracy from these internal FO signals. Their utility for high-accuracy, predictive state-of-health estimation is also explored.
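As a rough illustration of the Kalman-filtering part of such an estimator (the dynamic-time-warping step and the actual strain-to-SOC mapping are omitted; the noisy SOC "sensor" below is an assumption standing in for the FO signal), a scalar filter can fuse coulomb counting with the sensor reading:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated discharge: true SOC falls linearly; the "sensor" provides a noisy
# SOC reading (a stand-in for a calibrated FO-strain-derived measurement).
dt, capacity = 1.0, 3600.0            # s, ampere-seconds (1 Ah cell)
current = np.full(600, 1.0)           # constant 1 A discharge
soc_true = 1.0 - np.cumsum(current * dt) / capacity
meas = soc_true + rng.normal(0.0, 0.05, soc_true.size)

# Scalar Kalman filter: predict with coulomb counting, correct with the sensor.
q, r = 1e-6, 0.05 ** 2                # process / measurement noise variances
soc, p = 1.0, 1.0
est = []
for i, z in enumerate(meas):
    soc -= current[i] * dt / capacity # predict (coulomb counting)
    p += q
    k = p / (p + r)                   # Kalman gain
    soc += k * (z - soc)              # correct with the measured SOC
    p *= (1.0 - k)
    est.append(soc)

err = float(np.sqrt(np.mean((np.array(est) - soc_true) ** 2)))
print(f"SOC RMSE: {err:.4f}")
```

The filtered estimate tracks SOC far more tightly than either the raw sensor or an uncorrected coulomb counter with current-sensor drift would.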
NASA Astrophysics Data System (ADS)
Forkert, Nils Daniel; Kaesemann, Philipp; Fiehler, Jens; Thomalla, Götz
2012-03-01
Acute stroke is a major cause of death and disability among adults in the western hemisphere. Time-resolved perfusion-weighted (PWI) and diffusion-weighted (DWI) MR datasets are typically used for the estimation of tissue-at-risk, which is an important variable in acute stroke therapy decision-making. Although several parameters that can be estimated from PWI concentration curves have been proposed for tissue-at-risk definition in the past, the time-to-peak (TTP) and time-to-max (Tmax) parameters are used most frequently in recent trials. Unfortunately, there is no clear consensus on which method should be used for estimating Tmax or TTP maps. Consequently, tissue-at-risk estimates and the treatment decisions that follow might vary considerably with the method used. In this work, 5 PWI datasets of acute stroke patients were used to calculate TTP or Tmax maps using 10 different estimation techniques. The resulting maps were segmented using a typical threshold of +4 s and the corresponding PWI lesions were calculated. The first results suggest that the TTP or Tmax method used has a major impact on the resulting tissue-at-risk volume; numerically, the calculated volumes differed by up to a factor of 3. In general, the deconvolution-based Tmax techniques yield rather smaller estimates of the ischemic penumbra than direct TTP-based techniques. In conclusion, the comparison of different methods for TTP or Tmax estimation revealed high variation in the resulting tissue-at-risk volume, which might lead to different therapy decisions. A consensus on how TTP and Tmax maps should be calculated therefore seems necessary.
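A direct TTP map is simple to compute from the concentration-time curves; the deconvolution-based Tmax variants additionally require an arterial input function and are omitted here. The data below are synthetic, and thresholding at +4 s against the patch median is only a stand-in for thresholding against normally perfused tissue:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic PWI data: concentration-time curves for a 4x4 voxel patch,
# 40 time points at 1.5 s spacing (all values are illustrative).
t = np.arange(40) * 1.5                        # seconds
peak_times = rng.uniform(10.0, 40.0, (4, 4))   # true bolus-peak times
curves = np.exp(-((t[None, None, :] - peak_times[..., None]) / 6.0) ** 2)
curves += rng.normal(0.0, 0.02, curves.shape)  # acquisition noise

# Direct TTP: time of the maximum of each voxel's concentration curve.
ttp = t[np.argmax(curves, axis=-1)]

# Lesion mask from a +4 s threshold relative to the patch median TTP
# (standing in for the median of normally perfused tissue).
lesion = ttp > np.median(ttp) + 4.0
print(ttp.round(1))
```

Different smoothing, fitting or deconvolution choices shift these per-voxel peak times, which is exactly how the lesion volumes in the comparison above come to differ by up to a factor of 3.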
Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi
2015-01-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.
2015-01-01
A structural concept called pultruded rod stitched efficient unitized structure (PRSEUS) was developed by the Boeing Company to address the complex structural design aspects associated with a pressurized hybrid wing body (HWB) aircraft configuration. While PRSEUS was an enabling technology for the pressurized HWB structure, limited investigation of PRSEUS for other aircraft structures, such as circular fuselages and wings, has been done. Therefore, a study was undertaken to investigate the potential weight savings afforded by using the PRSEUS concept for a commercial transport wing. The study applied PRSEUS to the Advanced Subsonic Technology (AST) Program composite semi-span test article, which was sized using three load cases. The initial PRSEUS design was developed by matching cross-sectional stiffnesses for each stringer/skin combination within the wing covers, then the design was modified to ensure that the PRSEUS design satisfied the design criteria. It was found that the PRSEUS wing design exhibited weight savings over the blade-stiffened composite AST Program wing of nearly 9%, and a weight savings of 49% and 29% for the lower and upper covers, respectively, compared to an equivalent metallic wing.
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that enables accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
Dimassi, Kaouther; Ajroudi, Meryam; Saidi, Olfa; Salem, Safa; Robbana, Monia; Triki, Amel; Gara, Mohammed Faouzi
2016-01-01
Ultrasound is a valuable tool commonly used in the delivery room, with multiple applications. The objective of this study was to investigate whether systematic fetal weight estimation (FWE) by ultrasound in the delivery room increases the risk of cesarean delivery. Monocentric cohort study. All parturients with singleton pregnancies who gave birth at full term (≥ 39 weeks) were enrolled. We excluded all patients with a contraindication to vaginal birth as well as those in whom FWE by ultrasound on the day of delivery was deemed necessary for the obstetric decision. Parturients were divided into two groups: G1, parturients who systematically underwent FWE, and G2, parturients who never underwent FWE. We compared cesarean delivery rates with adjustment for potentially confounding factors by logistic regression. 838 parturients were enrolled. Prematurity, FWE and weight at birth were risk factors for cesarean delivery. After adjustment for confounding factors, FWE by ultrasound systematically performed in G1 proved to be an independent risk factor for cesarean delivery, with OR = 3.8 (95% CI 2.67-5.48). This risk increased significantly with estimated fetal weight (EFW): OR = 2.27 (95% CI 1.15-4.47; p = 0.018) for 3500 g < EFW < 4000 g and OR = 10.64 (95% CI 4.28-26.41; p < 0.001) for EFW > 4000 g. FWE by ultrasound systematically performed in the delivery room represents an independent and potentially modifiable risk factor for cesarean delivery.
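The adjustment step can be sketched with a small logistic-regression example on simulated data. The prevalences, coefficients and the single confounder below are invented; only the true FWE odds ratio is set to the reported 3.8 so the fit has a known target.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated delivery data (illustrative, not the study's records): exposure is
# systematic ultrasound FWE; the confounder is birth weight.
n = 838
fwe = rng.random(n) < 0.5
bw = rng.normal(3400.0, 450.0, n)                  # birth weight, g
logit = -2.0 + np.log(3.8) * fwe + 0.001 * (bw - 3400.0)
cesarean = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Logistic regression by Newton-Raphson; exponentiating the exposure
# coefficient gives the adjusted odds ratio.
X = np.column_stack([np.ones(n), fwe.astype(float), (bw - 3400.0) / 1000.0])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    w = p * (1.0 - p)                              # IRLS weights
    beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (cesarean - p))

print("adjusted OR for FWE:", round(float(np.exp(beta[1])), 2))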
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
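The flavour of a multiplicative error model can be shown with a one-step weighted least-squares fit, a simplified stand-in for the three LS adjustments analysed in the paper, together with an estimate of the (square root of the) variance of unit weight from the weighted residuals:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated multiplicative-error measurements of a linear signal:
# y_i = s_i * (1 + e_i), e_i ~ N(0, sigma0^2), so std(y_i) is proportional
# to the true value s_i (as for LiDAR, GPS or VLBI baselines).
n, sigma0 = 500, 0.02
t = np.linspace(1.0, 10.0, n)
s = 5.0 + 2.0 * t                      # true signal
y = s * (1.0 + rng.normal(0.0, sigma0, n))

A = np.column_stack([np.ones(n), t])

# One-step weighted LS: weights from an initial ordinary-LS fit
# (a simplified stand-in for the paper's estimators).
beta0, *_ = np.linalg.lstsq(A, y, rcond=None)
w = 1.0 / (A @ beta0) ** 2             # weight ~ 1 / predicted-value^2
beta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))

# Standard deviation of unit weight from the weighted residuals.
v = y - A @ beta
sigma0_hat = float(np.sqrt((w * v ** 2).sum() / (n - 2)))
print(beta.round(3), round(sigma0_hat, 4))
```

Treating such data as additive-error (unit weights) would instead let the large-valued measurements dominate the fit, which is the DEM-construction pitfall the abstract points to.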
Bedside Estimation of Patient Height for Calculating Ideal Body Weight in the Emergency Department
2009-01-01
…measured tibial length. All methods were more accurate than using the conventional 70 kg male/60 kg female IBW standard. © 2010 Published by Elsevier Inc...formula using measured tibial length, and compare all to the conventional 70 kg male/60 kg female standard IBW. Methods: Prospective, observational...formula for men: tibial length-IBW (kg) = 25.83 + 1.11 × tibial length; for women: tibial length-IBW = 7.90 + 1.20 × tibial length; R² = 0.89, p < .001. Inter
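The reported regression coefficients can be applied directly. Note the arithmetic form (intercept plus slope times length) is assumed here, since the operators were lost in extraction:

```python
def ibw_from_tibial_length(tibial_cm: float, male: bool) -> float:
    """Ideal body weight (kg) from tibial length (cm), using the
    regression coefficients reported in the abstract.  The form
    intercept + slope * length is assumed (operators were garbled
    in the source text)."""
    if male:
        return 25.83 + 1.11 * tibial_cm
    return 7.90 + 1.20 * tibial_cm

# Example: a 40 cm tibia
print(ibw_from_tibial_length(40.0, male=True))    # 25.83 + 44.4
print(ibw_from_tibial_length(40.0, male=False))   # 7.90 + 48.0
```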
Haapea, Marianne; Veijola, Juha; Tanskanen, Päivikki; Jääskeläinen, Erika; Isohanni, Matti; Miettunen, Jouko
2011-12-30
Low participation is a potential source of bias in population-based studies. This article presents use of inverse probability weighting (IPW) in adjusting for non-participation in estimation of brain volumes among subjects with schizophrenia. Altogether 101 schizophrenia subjects and 187 non-psychotic comparison subjects belonging to the Northern Finland 1966 Birth Cohort were invited to participate in a field study during 1999-2001. Volumes of grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) were compared between the 54 participating schizophrenia subjects and 100 comparison subjects. IPW by illness-related auxiliary variables did not affect the estimated GM and WM mean volumes, but increased the estimated CSF mean volume in schizophrenia subjects. When adjusted for intracranial volume and family history of psychosis, IPW led to smaller estimated GM and WM mean volumes. Especially IPW by a disability pension and a higher amount of hospitalisation due to psychosis had effect on estimated mean brain volumes. The IPW method can be used to improve estimates affected by non-participation by reflecting the true differences in the target population.
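The core of inverse probability weighting can be shown in a few lines: participants are weighted by the inverse of their participation probability, restoring the composition of the full target population. A self-contained simulation (illustrative numbers, not the cohort's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Population: a covariate (e.g. illness severity) shifts the outcome.
severity = rng.binomial(1, 0.3, n)                # 30% severe
outcome = 10.0 + 2.0 * severity + rng.standard_normal(n)

# Severe subjects participate less often (non-participation bias).
p_participate = np.where(severity == 1, 0.2, 0.8)
participates = rng.random(n) < p_participate

naive = outcome[participates].mean()              # biased toward mild cases

# IPW: weight each participant by the inverse of their (modelled)
# participation probability.
w = 1.0 / p_participate[participates]
ipw = np.average(outcome[participates], weights=w)

true_mean = outcome.mean()
print(naive, ipw, true_mean)   # IPW estimate recovers the population mean
```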
García-Santos, Glenda; Scheiben, Dominik; Binder, Claudia R
2011-03-01
Investigations of occupational and environmental risk caused by the use of agrochemicals have received considerable interest over the last decades. And yet, in developing countries, the lack of staff and analytical equipment, as well as the costs of chemical analyses, makes it difficult, if not impossible, to monitor pesticide contamination and residues in humans, air, water, and soils. A new and simple method is presented here for estimation of pesticide deposition on humans and soil after application. The estimate is derived on the basis of a water mass balance measured in a given number of highly absorbent papers under low evaporative conditions and an unsaturated atmosphere. The method is presented as a suitable, rapid, low-cost screening tool, complementary to toxicological tests, to assess occupational and environmental exposure caused by knapsack sprayers where there is a lack of analytical instruments. This new method, called the "weight method", was tested to obtain drift deposition on the neighbouring field and the clothes of the applicator after spraying water with a knapsack sprayer in one of the largest areas of potato production in Colombia. The results were confirmed by experimental data using a tracer and the same setup used for the weight method. The weight method was able to explain 86% of the airborne drift and deposition variance.
Harrow, Lisa; Espie, Colin
2010-03-01
The 'quarter-hour rule' (QHR) instructs the person with insomnia to get out of bed after 15 min of wakefulness and return to bed only when sleep feels imminent. Recent research has identified that sleep can be significantly improved using this simple intervention (Malaffo and Espie, Sleep, 27(s), 2004, 280; Sleep, 29(s), 2006, 257), but successful implementation depends on estimating time without clock monitoring, and the insomnia literature indicates poor time perception is a maintaining factor in primary insomnia (Harvey, Behav. Res. Ther., 40, 2002, 869). This study expands upon previous research with the aim of identifying whether people with insomnia can accurately perceive a 15-min interval during the sleep-onset period and therefore successfully implement the QHR. A mixed-models ANOVA design was applied with the between-participants factor of group (insomnia versus good sleepers) and the within-participants factor of context (night versus day). Results indicated no differences between groups and contexts on time estimation tasks. This was despite an increase in arousal in the night context for both groups, and tentative support for the impact of arousal in inducing underestimations of time. These results provide promising support for the successful application of the QHR in people with insomnia. The results are discussed in terms of whether the design employed successfully accessed the processes involved in distorting time perception in insomnia. Suggestions for future research are provided and limitations of the current study discussed.
NASA Astrophysics Data System (ADS)
Craig, Norman C.; Demaison, Jean; Groner, Peter; Rudolph, Heinz Dieter; Vogt, Natalja
2015-06-01
An accurate equilibrium structure of trans-hexatriene has been determined by the mixed estimation method with rotational constants from 8 deuterium and carbon isotopologues and high-level quantum chemical calculations. In the mixed estimation method bond parameters are fit concurrently to moments of inertia of various isotopologues and to theoretical bond parameters, each data set carrying appropriate uncertainties. The accuracy of this structure is 0.001 Å and 0.1°. Structures of similar accuracy have been computed for the cis,cis, trans,trans, and cis,trans isomers of octatetraene at the CCSD(T) level with a basis set of wCVQZ(ae) quality adjusted in accord with the experience gained with trans-hexatriene. The structures are compared with butadiene and with cis-hexatriene to show how increasing the length of the chain in polyenes leads to increased blurring of the difference between single and double bonds in the carbon chain. In trans-hexatriene r("C_1=C_2") = 1.339 Å and r("C_3=C_4") = 1.346 Å compared to 1.338 Å for the "double" bond in butadiene; r("C_2-C_3") = 1.449 Å compared to 1.454 Å for the "single" bond in butadiene. "Double" bonds increase in length; "single" bonds decrease in length.
Khan, Adil Mehmood; Siddiqi, Muhammad Hameed; Lee, Seok-Won
2013-01-01
Smartphone-based activity recognition (SP-AR) recognizes users' activities using the embedded accelerometer sensor. Only a small number of previous works can be classified as online systems, i.e., the whole process (pre-processing, feature extraction, and classification) is performed on the device. Most of these online systems use either a high sampling rate (SR) or long data-window (DW) to achieve high accuracy, resulting in short battery life or delayed system response, respectively. This paper introduces a real-time/online SP-AR system that solves this problem. Exploratory data analysis was performed on acceleration signals of 6 activities, collected from 30 subjects, to show that these signals are generated by an autoregressive (AR) process, and an accurate AR-model in this case can be built using a low SR (20 Hz) and a small DW (3 s). The high within class variance resulting from placing the phone at different positions was reduced using kernel discriminant analysis to achieve position-independent recognition. Neural networks were used as classifiers. Unlike previous works, true subject-independent evaluation was performed, where 10 new subjects evaluated the system at their homes for 1 week. The results show that our features outperformed three commonly used features by 40% in terms of accuracy for the given SR and DW. PMID:24084108
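The autoregressive modelling the abstract relies on can be sketched generically: AR(p) coefficients fitted by least squares to a short window of samples serve as compact features. This is a generic AR estimator, not the paper's exact pipeline; the order, seed, and noise level are assumptions:

```python
import numpy as np

def ar_features(x: np.ndarray, order: int = 2) -> np.ndarray:
    """Least-squares AR(p) coefficients of a 1-D signal, usable as
    classification features (a sketch of the AR-modelling idea)."""
    N = len(x)
    # Each row regresses x[t] on its `order` previous samples.
    X = np.column_stack([x[order - k : N - k] for k in range(1, order + 1)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# A 3 s window at 20 Hz (60 samples), as in the abstract, of a
# synthetic AR(2) process x[t] = 0.6 x[t-1] - 0.3 x[t-2] + noise.
rng = np.random.default_rng(5)
x = np.zeros(60)
for t in range(2, 60):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + 0.1 * rng.standard_normal()

print(ar_features(x, order=2))   # estimates near (0.6, -0.3)
```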
Guo, Penghong; Rivera, Daniel E.; Downs, Danielle S.; Savage, Jennifer S.
2016-01-01
Excessive gestational weight gain (i.e., weight gain during pregnancy) is a significant public health concern, and has been the recent focus of novel, control systems-based interventions. This paper develops a control-oriented dynamical systems model based on a first-principles energy balance model from the literature, which is evaluated against participant data from a study targeted to obese and overweight pregnant women. The results indicate significant under-reporting of energy intake among the participant population. A series of approaches based on system identification and state estimation are developed in the paper to better understand and characterize the extent of under-reporting; these range from back-calculating energy intake from a closed-form of the energy balance model, to a constrained semi-physical identification approach that estimates the extent of systematic under-reporting in the presence of noise and possibly missing data. Additionally, we describe an adaptive algorithm based on Kalman filtering to estimate energy intake in real-time. The approaches are illustrated with data from both simulated and actual intervention participants. PMID:27570366
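The Kalman-filtering idea for real-time intake estimation can be sketched with a two-state energy balance model: body weight evolves as weight + (intake − expenditure)/ρ, only weight is observed, and intake is modelled as a random walk so the filter can track it. All noise levels, the expenditure value, and ρ below are assumptions for the sketch, not the paper's identified model:

```python
import numpy as np

rho = 7700.0      # kcal per kg of weight change (textbook approximation)
dt = 1.0          # days
expend = 2200.0   # assumed known daily energy expenditure, kcal

# State x = [weight (kg), daily energy intake (kcal)].
F = np.array([[1.0, dt / rho],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])        # only body weight is observed
Q = np.diag([1e-4, 50.0**2])      # process noise (intake random walk)
R = np.array([[0.25**2]])         # scale noise: sd 0.25 kg

def estimate_intake(weight_obs):
    x = np.array([weight_obs[0], 2000.0])   # initial intake guess
    P = np.diag([1.0, 500.0**2])
    history = []
    for z in weight_obs:
        x = F @ x + np.array([-expend * dt / rho, 0.0])   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                               # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        history.append(x[1])
    return np.array(history)

# 60 days of synthetic data with a constant true intake of 2500 kcal/day.
rng = np.random.default_rng(1)
true_weight = 70.0 + np.cumsum(np.full(60, (2500.0 - expend) * dt / rho))
obs = true_weight + 0.25 * rng.standard_normal(60)
intake_est = estimate_intake(obs)
print(intake_est[-1])   # should move from the 2000 guess toward 2500
```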
van Donkelaar, Aaron; Martin, Randall V; Spurr, Robert J D; Burnett, Richard T
2015-09-01
We used a geographically weighted regression (GWR) statistical model to represent bias of fine particulate matter concentrations (PM2.5) derived from a 1 km optimal estimate (OE) aerosol optical depth (AOD) satellite retrieval that used AOD-to-PM2.5 relationships from a chemical transport model (CTM) for 2004-2008 over North America. This hybrid approach combined the geophysical understanding and global applicability intrinsic to the CTM relationships with the knowledge provided by observational constraints. Adjusting the OE PM2.5 estimates according to the GWR-predicted bias yielded significant improvement compared with unadjusted long-term mean values (R(2) = 0.82 versus R(2) = 0.62), even when a large fraction (70%) of sites were withheld for cross-validation (R(2) = 0.78) and developed seasonal skill (R(2) = 0.62-0.89). The effect of individual GWR predictors on OE PM2.5 estimates additionally provided insight into the sources of uncertainty for global satellite-derived PM2.5 estimates. These predictor-driven effects imply that local variability in surface elevation and urban emissions are important sources of uncertainty in geophysical calculations of the AOD-to-PM2.5 relationship used in satellite-derived PM2.5 estimates over North America, and potentially worldwide.
Danaei, Goodarz; Robins, James M.; Young, Jessica; Hu, Frank B.; Manson, JoAnn E; Hernán, Miguel A.
2016-01-01
Background The evidence on the effect of weight loss on coronary heart disease (CHD) or mortality has been mixed. The effect estimates can be confounded due to undiagnosed diseases that may affect weight loss. Methods We used data from the Nurses’ Health Study to estimate the 26-year risk of CHD under several hypothetical weight loss interventions (e.g. maintain baseline weight, lose 5% of weight every 2 years if overweight/obese). We applied the parametric g-formula and implemented a novel sensitivity analysis for unmeasured confounding due to undiagnosed disease by imposing a lag time for the effect of weight loss on chronic disease. Sensitivity analyses were conducted by using only the first 16 years of follow-up, restricting the analysis to women who had reported intentional weight loss, those who were younger (<49 years old at baseline), and those who never smoked. Results The 26-year risk of CHD under no weight loss intervention was 5.0% (95% Confidence Interval 4.9, 5.3). The estimated risk did not change under hypothetical weight loss interventions using lag times from 0 to 18 years. For a 6-year lag time, the risk ratios of CHD for weight loss compared with no intervention ranged from 1.00 (0.99, 1.02) to 1.02 (0.99, 1.05) for different degrees of weight loss with and without restricting the intervention to participants with no major chronic disease. Similarly, no protective effect of weight loss was estimated for mortality risk. In contrast, we estimated a protective effect of weight loss on risk of type 2 diabetes. The estimated effect of weight loss on CHD and mortality remained null in all sensitivity analyses. Conclusion We estimated that maintaining weight or losing weight after becoming overweight or obese does not reduce the risk of CHD or death in this cohort of middle-aged US women. Unmeasured confounding, measurement error, and model misspecification are possible explanations but they did not prevent us from estimating a beneficial effect of
Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.
Tvedebrink, Torben; Morling, Niels
2015-12-01
The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios, where the alternative hypothesis is assumed not to encompass close relatives. However, this approach implies that important factors present in real human populations are discarded. This approach may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original as several authors previously have published results accounting for close familial relationships. However, we revisit the discussion to increase the awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. If accounting for a monozygotic twin in the weight of evidence, it implies that the likelihood ratio is truncated at a maximal value depending on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, then data relevant for the Danish society suggests that the threshold of likelihood ratios should approximately be between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept
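The truncation logic can be made concrete: if an unrecognised monozygotic co-twin cannot be excluded, the likelihood ratio is capped at roughly one over the joint probability of that event. The functional form and the numbers below are an illustrative sketch, not the paper's exact derivation or its Danish figures:

```python
def lr_ceiling(p_mz_cotwin: float, p_unrecognised: float) -> float:
    """Upper bound on the likelihood ratio when an unrecognised
    monozygotic twin cannot be excluded: roughly
    1 / (P(MZ co-twin) * P(twin goes unrecognised)).
    Illustrative simplification of the truncation idea."""
    return 1.0 / (p_mz_cotwin * p_unrecognised)

# Assumed inputs: MZ co-twin prevalence ~0.004, and 1 in 600 twins
# unrecognised by the justice system.
print(f"{lr_ceiling(0.004, 1 / 600):,.0f}")   # -> 150,000
```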
Roberts, Graham J; McDonald, Fraser; Neil, Monica; Lucas, Victoria S
2014-08-01
The mathematical principle of weighting averages to determine the most appropriate numerical outcome is well established in economic and social studies. It has seen little application in forensic dentistry. This study re-evaluated the data from a previous study of age assessment at the 10 year threshold. A semiautomatic process of weighting averages by n-td, x-tds, sd-tds, se-tds, 1/sd-tds, 1/se-tds was prepared in an Excel worksheet and the different weighted mean values reported. In addition the Fixed Effects and Random Effects models for Meta-Analysis were used and applied to the same data sets. In conclusion it has been shown that the most accurate age estimation method is to use the Random Effects Model for the mathematical procedures.
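The inverse-variance (fixed-effects) weighting the abstract invokes is the classical meta-analysis estimator: each study's estimate is weighted by 1/SE². A minimal sketch with hypothetical age estimates (not the study's data):

```python
import math

def fixed_effect_mean(estimates, std_errors):
    """Fixed-effects (inverse-variance) weighted mean: each estimate
    is weighted by 1/SE**2; the pooled SE is sqrt(1 / sum of weights)."""
    weights = [1.0 / se**2 for se in std_errors]
    mean = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se

# Three hypothetical age estimates (years) with their standard errors.
m, se = fixed_effect_mean([10.2, 9.8, 10.5], [0.3, 0.5, 0.4])
print(round(m, 3), round(se, 3))
```

The Random Effects model the study prefers adds a between-study variance component to each weight; the fixed-effects form above is the simpler base case.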
Deshpande, Amol A.; Madhavan, P.; Deshpande, Girish R.; Chandel, Ravi Kumar; Yarbagi, Kaviraj M.; Joshi, Alok R.; Moses Babu, J.; Murali Krishna, R.; Rao, I. M.
2016-01-01
Fondaparinux sodium is a synthetic low-molecular-weight heparin (LMWH). This medication is an anticoagulant or a blood thinner, prescribed for the treatment of pulmonary embolism and prevention and treatment of deep vein thrombosis. Its determination in the presence of related impurities was studied and validated by a novel ion-pair HPLC method. The separation of the drug and its degradation products was achieved with the polymer-based PLRPs column (250 mm × 4.6 mm; 5 μm) in gradient elution mode. The mixture of 100 mM n-hexylamine and 100 mM acetic acid in water was used as buffer solution. Mobile phase A and mobile phase B were prepared by mixing the buffer and acetonitrile in the ratio of 90:10 (v/v) and 20:80 (v/v), respectively. Mobile phases were delivered in isocratic mode (2% B for 0–5 min) followed by gradient mode (2–85% B in 5–60 min). An Evaporative Light Scattering Detector (ELSD) was connected to the LC system to detect the responses of chromatographic separation. Further, the drug was subjected to stress studies for acidic, basic, oxidative, photolytic, and thermal degradations as per ICH guidelines and the drug was found to be labile in acid, base hydrolysis, and oxidation, while stable in neutral, thermal, and photolytic degradation conditions. The method provided linear responses over the concentration range of the LOQ to 0.30% for each impurity with respect to the analyte concentration of 12.5 mg/mL, and regression analysis showed a correlation coefficient value (r2) of more than 0.99 for all the impurities. The LOD and LOQ were found to be 1.4 µg/mL and 4.1 µg/mL, respectively, for fondaparinux. The developed ion-pair method was validated as per ICH guidelines with respect to accuracy, selectivity, precision, linearity, and robustness. PMID:27110496
Ngo, Van A; Kim, Ilsoo; Allen, Toby W; Noskov, Sergei Y
2016-03-08
Nonequilibrium pulling simulations have been a useful approach for investigating a variety of physical and biological problems. The major target in the simulations is to reconstruct reliable potentials of mean force (PMFs) or unperturbed free-energy profiles for quantitatively addressing both equilibrium mechanistic properties and contributions from nonequilibrium processes. While several current nonequilibrium methods were shown to be accurate in computing free-energy profiles in systems with relatively simple dynamics, they have proved to be unsuitable in more complicated systems. To extend the applicability of nonequilibrium sampling, we demonstrate a novel method that combines Minh-Adib's bidirectional estimator with nonlinear WHAM equations to reconstruct and assess PMFs from relatively fast pulling trajectories. We test the method in a one-dimensional model system and in a system of an antibiotic gramicidin-A (gA) channel, which is considered a significant challenge for nonequilibrium sampling. We identify key parameters for efficiently performing pulling simulations to improve and ensure the convergence and accuracy of estimated PMFs. We show that a few pulling trajectories of a relatively fast pulling speed v = 10 Å/ns can return a fair estimate of the PMF of a single potassium ion in gA.
Boligon, A A; Silva, J A V; Sesana, R C; Sesana, J C; Junqueira, J B; Albuquerque, L G
2010-04-01
Data from 129,575 Nellore cattle born between 1993 and 2006, belonging to the Jacarezinho cattle-raising farm, were used to estimate genetic parameters for scrotal circumference measured at 9 (SC9), 12 (SC12), and 18 (SC18) mo of age and testicular volume measured at the same ages (TV9, TV12, and TV18) and to determine their correlation with weaning weight (WW) and yearling weight (YW), to provide information for the definition of selection criteria in beef cattle. Estimates of (co)variance components were calculated by the REML method applying an animal model in single- and multiple-trait analysis. The following heritability estimates and their respective SE were obtained for WW, YW, SC9, SC12, SC18, TV9, TV12, and TV18: 0.33 +/- 0.02, 0.37 +/- 0.03, 0.29 +/- 0.03, 0.39 +/- 0.04, 0.42 +/- 0.03, 0.19 +/- 0.04, 0.26 +/- 0.05, and 0.39 +/- 0.04, respectively. The genetic correlation between WW and YW was positive and high (0.80 +/- 0.04), indicating that these traits are mainly determined by the same genes. Genetic correlations between the growth traits and scrotal circumference measures were positive and of low to moderate magnitude, ranging from 0.23 +/- 0.04 to 0.38 +/- 0.04. On the other hand, increased genetic associations were estimated between scrotal circumference and testicular volume at different ages (0.61 +/- 0.04 to 0.86 +/- 0.04). Selection for greater scrotal circumference in males should result in greater WW, YW, and testicular volume. In conclusion, in view of the difficulty in measuring testicular volume, there is no need to change the selection criterion from scrotal circumference to testicular volume in genetic breeding programs of Zebu breeds.
Belge, Bénédicte; Coche, Emmanuel; Pasquet, Agnès; Vanoverschelde, Jean-Louis J; Gerber, Bernhard L
2006-07-01
Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134+/-51 and 67+/-56 ml) were similar to those by MR (137+/-57 and 70+/-60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55+/-21 vs. 56+/-21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3+/-1.8 vs. 8.8+/-1.9 mm and 12.7+/-3.4 vs. 13.3+/-3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54+/-30 vs. 51+/-31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR.
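The ejection fraction compared between modalities is a simple function of the two volumes:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """LV ejection fraction (%) from end-diastolic and end-systolic
    volumes: 100 * (EDV - ESV) / EDV."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Using the mean MDCT volumes reported in the abstract:
print(round(ejection_fraction(134, 67), 1))   # -> 50.0
```

Note this differs from the reported mean EF of 55% because EF is computed per patient and then averaged; the EF of the mean volumes is not the mean of the EFs.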
Zhang, Lifan; Zhou, Xiang; Michal, Jennifer J; Ding, Bo; Li, Rui; Jiang, Zhihua
2014-01-01
Birth weight is an economically important trait in pig production because it directly impacts piglet growth and survival rate. In the present study, we performed a genome wide survey of candidate genes and pathways associated with individual birth weight (IBW) using the Illumina PorcineSNP60 BeadChip on 24 high (HEBV) and 24 low estimated breeding value (LEBV) animals. These animals were selected from a reference population of 522 individuals produced by three sires and six dam lines, which were crossbreds with multiple breeds. After quality-control, 43,257 SNPs (single nucleotide polymorphisms), including 42,243 autosomal SNPs and 1,014 SNPs on chromosome X, were used in the data analysis. A total of 27 differentially selected regions (DSRs), including 1 on Sus scrofa chromosome 1 (SSC1), 1 on SSC4, 2 on SSC5, 4 on SSC6, 2 on SSC7, 5 on SSC8, 3 on SSC9, 1 on SSC14, 3 on SSC18, and 5 on SSCX, were identified to show the genome wide separations between the HEBV and LEBV groups for IBW in piglets. A DSR with the largest number of significant SNPs (including 7 top 0.1% and 31 top 5% SNPs) was located on SSC6, while another DSR with the largest genetic differences in F_ST was found on SSC18. These regions harbor known functionally important genes involved in growth and development, such as TNFRSF9 (tumor necrosis factor receptor superfamily member 9), CA6 (carbonic anhydrase VI) and MDFIC (MyoD family inhibitor domain containing). A DSR rich in imprinting genes appeared on SSC9, which included PEG10 (paternally expressed 10), SGCE (sarcoglycan, epsilon), PPP1R9A (protein phosphatase 1, regulatory subunit 9A) and ASB4 (ankyrin repeat and SOCS box containing 4). More importantly, our present study provided evidence to support six quantitative trait loci (QTL) regions for pig birth weight, six QTL regions for average birth weight (ABW) and three QTL regions for litter birth weight (LBW) reported previously by other groups. Furthermore, gene ontology analysis with 183 genes
Siow, Bernard; Drobnjak, Ivana; Chatterjee, Aritrick; Lythgoe, Mark F; Alexander, Daniel C
2012-01-01
There has been increasing interest in nuclear magnetic resonance (NMR) techniques that are sensitive to diffusion of molecules containing NMR visible nuclei for the estimation of microstructure parameters. A microstructure parameter of particular interest is pore radius distribution. A recent in silico study optimised the shape of the gradient waveform in diffusion weighted spin-echo experiments for estimating pore size. The study demonstrated that optimised gradient waveform (GEN) protocols improve pore radius estimates compared to optimised pulse gradient spin-echo (PGSE) protocols, particularly at shorter length scales. This study assesses the feasibility of implementing GEN protocols on a small bore 9.4 T scanner and verifies their additional sensitivity to pore radius. We implement GEN and PGSE protocols optimised for pore radii of 1, 2.5, 5, 7.5, 10 μm and constrained to maximum gradient strengths of 40, 80, 200 mT m(-1). We construct microstructure phantoms, which have a single pore radius for each phantom, using microcapillary fibres. The measured signal shows good agreement with simulated signal, strongly indicating that the GEN waveforms can be implemented on a 9.4 T system. We also demonstrate that GEN protocols provide improved sensitivity to the smaller pore radii when compared to optimised PGSE protocols, particularly at the lower gradient amplitudes investigated in this study. Our results suggest that this improved sensitivity of GEN protocols would be reflected in clinical scenarios.
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2016-01-01
Purpose: Estimating obesity prevalence using self-reported height and weight is an economical and effective method and is often used in national surveys. However, self-reporting of height and weight can involve misreporting of those variables, which has been found to be associated with the size of the individual. This study investigated the biases in…
Sperduto, Paul W.; Kased, Norbert; Roberge, David; Xu, Zhiyuan; Shanley, Ryan; Luo, Xianghua; Sneed, Penny K.; Chao, Samuel T.; Weil, Robert J.; Suh, John; Bhatt, Amit; Jensen, Ashley W.; Brown, Paul D.; Shih, Helen A.; Kirkpatrick, John; Gaspar, Laurie E.; Fiveash, John B.; Chiang, Veronica; Knisely, Jonathan P.S.; Sperduto, Christina Maria; Lin, Nancy; Mehta, Minesh
2012-01-01
Purpose Our group has previously published the Graded Prognostic Assessment (GPA), a prognostic index for patients with brain metastases. Updates have been published with refinements to create diagnosis-specific Graded Prognostic Assessment indices. The purpose of this report is to present the updated diagnosis-specific GPA indices in a single, unified, user-friendly report to allow ease of access and use by treating physicians. Methods A multi-institutional retrospective (1985 to 2007) database of 3,940 patients with newly diagnosed brain metastases underwent univariate and multivariate analyses of prognostic factors associated with outcomes by primary site and treatment. Significant prognostic factors were used to define the diagnosis-specific GPA prognostic indices. A GPA of 4.0 correlates with the best prognosis, whereas a GPA of 0.0 corresponds with the worst prognosis. Results Significant prognostic factors varied by diagnosis. For lung cancer, prognostic factors were Karnofsky performance score, age, presence of extracranial metastases, and number of brain metastases, confirming the original Lung-GPA. For melanoma and renal cell cancer, prognostic factors were Karnofsky performance score and the number of brain metastases. For breast cancer, prognostic factors were tumor subtype, Karnofsky performance score, and age. For GI cancer, the only prognostic factor was the Karnofsky performance score. The median survival times by GPA score and diagnosis were determined. Conclusion Prognostic factors for patients with brain metastases vary by diagnosis, and for each diagnosis, a robust separation into different GPA scores was discerned, implying considerable heterogeneity in outcome, even within a single tumor type. In summary, these indices and related worksheet provide an accurate and facile diagnosis-specific tool to estimate survival, potentially select appropriate treatment, and stratify clinical trials for patients with brain metastases. PMID:22203767
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1990-01-01
A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
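The idea of estimating combining weights directly from observed samples can be sketched with a simplified stand-in for the sliding-window ML estimator: correlate each array element against a reference element over the window, then combine with the conjugate weights. The gains, noise level, and window length below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000   # samples in one sliding window

# Assumed complex gains of a seven-element feed array (illustrative).
g = np.array([1.0,
              0.8 * np.exp(1j * 0.5),
              0.6 * np.exp(-1j * 1.2),
              0.9 * np.exp(1j * 2.0),
              0.7,
              0.5 * np.exp(1j * 0.3),
              0.85 * np.exp(-1j * 0.7)])

s = np.exp(1j * 2 * np.pi * rng.random(n))   # unit-modulus signal samples
noise = (rng.standard_normal((7, n)) + 1j * rng.standard_normal((7, n))) / np.sqrt(2)
x = g[:, None] * s + 0.5 * noise             # observed element outputs

# Estimate weights from the observed samples: correlate every element
# with a reference element (element 0) over the window.
ref = x[0]
w = (x @ ref.conj()) / n                     # ~ g_i * conj(g_0)
y = (w.conj()[:, None] * x).sum(axis=0)      # combined output

def snr(sig):
    a = (sig @ s.conj()) / n                 # complex signal amplitude
    return np.abs(a)**2 / np.mean(np.abs(sig - a * s)**2)

print(snr(x[0]), snr(y))   # combining should raise the SNR well above
                           # any single element's
```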
Yang, Lu; Mester, Zoltán; Sturgeon, Ralph E; Meija, Juris
2012-03-06
The much anticipated overhaul of the International System of Units (SI) will result in new definitions of base units in terms of fundamental constants. However, redefinition of the kilogram in terms of the Planck constant (h) cannot proceed without consistency between the Avogadro and Planck constants, which are both related through the Rydberg constant. In this work, an independent assessment of the atomic weight of silicon in a highly enriched (28)Si crystal supplied by the International Avogadro Coordination (IAC) was performed. This recent analytical approach, based on dissolution with NaOH and its isotopic characterization by multicollector inductively coupled plasma mass spectrometry, is critically evaluated. The resultant atomic weight A(r)(Si) = 27.976 968 39(24)(k=1) differs significantly from the most recent value of A(r)(Si) = 27.976 970 27(23)(k=1). Using the results generated herein for A(r)(Si) along with other IAC measurement results for mass, volume, and the lattice spacing, the estimate of the Avogadro constant becomes N(A) = 6.022 140 40(19) × 10(23) mol(-1).
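The quantity A_r(Si) is the amount-fraction-weighted mean of the isotope masses. A sketch with standard isotope masses and illustrative amount fractions for a highly enriched 28Si material (the fractions are assumptions, not the IAC crystal's certified values):

```python
# Atomic weight as the abundance-weighted mean of isotope masses.
# Masses (u) are standard literature values; the amount fractions are
# illustrative for a highly enriched 28Si crystal.
masses = {28: 27.97692653, 29: 28.97649466, 30: 29.97377014}
fractions = {28: 0.99995, 29: 0.00004, 30: 0.00001}

a_r = sum(fractions[i] * masses[i] for i in masses)
print(f"A_r(Si) = {a_r:.8f}")
```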
NASA Astrophysics Data System (ADS)
Zhang, Yonggen; Schaap, Marcel G.
2017-04-01
Pedotransfer functions (PTFs) have been widely used to predict soil hydraulic parameters in lieu of expensive laboratory or field measurements. Rosetta (Schaap et al., 2001, denoted as Rosetta1) is one of many PTFs and is based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method, which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. In this study, we present an improved set of hierarchical pedotransfer functions (Rosetta3) that unify the water retention and Ks submodels into one. Parameter uncertainty of the fit of the VG curve to the original retention data is used in the ANN calibration procedure to reduce bias of parameters predicted by the new PTF. One thousand bootstrap replicas were used to calibrate the new models, compared to 60 or 100 in Rosetta1, thus allowing the uni-variate and bi-variate probability distributions of predicted parameters to be quantified in greater detail. We determined the optimal weights for VG parameters and Ks, the optimal number of hidden nodes in the ANN, and the number of bootstrap replicas required for statistically stable estimates. Results show that matric potential-dependent bias was reduced significantly, while root mean square error (RMSE) for water content was reduced modestly; RMSE for Ks was increased by 0.9% (H3w) to 3.3% (H5w) in the new models on the log scale of Ks compared with the Rosetta1 model. It was found that estimated distributions of parameters were mildly non-Gaussian and could instead be described rather well with heavy-tailed α-stable distributions. On the other hand, arithmetic means had only a small estimation bias for most textures when compared with the mean-like 'shift' parameter of the α-stable distributions. Arithmetic means and (co-)variances are therefore still recommended as summary statistics of the estimated distributions. However, it
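The VG retention curve whose parameters Rosetta-type PTFs predict has a closed form. A sketch using the standard van Genuchten (1980) equation with the Mualem constraint m = 1 − 1/n; the loam-like parameter values are illustrative, not Rosetta output:

```python
import numpy as np

def vg_water_content(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) retention curve: volumetric water content
    as a function of matric potential h (cm), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h))**n)**m

# Illustrative loam-like parameters (assumed):
theta = vg_water_content(np.array([0.0, 330.0, 15000.0]),
                         theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56)
print(theta.round(3))   # saturation, "field capacity", wilting point
```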
Austin, Peter C; Stuart, Elizabeth A
2015-12-10
The propensity score is defined as a subject's probability of treatment selection, conditional on observed baseline covariates. Weighting subjects by the inverse probability of treatment received creates a synthetic sample in which treatment assignment is independent of measured baseline covariates. Inverse probability of treatment weighting (IPTW) using the propensity score allows one to obtain unbiased estimates of average treatment effects. However, these estimates are only valid if there are no residual systematic differences in observed baseline characteristics between treated and control subjects in the sample weighted by the estimated inverse probability of treatment. We report on a systematic literature review, in which we found that the use of IPTW has increased rapidly in recent years, but that in the most recent year, a majority of studies did not formally examine whether weighting balanced measured covariates between treatment groups. We then proceed to describe a suite of quantitative and qualitative methods that allow one to assess whether measured baseline covariates are balanced between treatment groups in the weighted sample. The quantitative methods use the weighted standardized difference to compare means, prevalences, higher-order moments, and interactions. The qualitative methods employ graphical methods to compare the distribution of continuous baseline covariates between treated and control subjects in the weighted sample. Finally, we illustrate the application of these methods in an empirical case study. We propose a formal set of balance diagnostics that contribute towards an evolving concept of 'best practice' when using IPTW to estimate causal treatment effects using observational data.
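A toy sketch of the balance diagnostic described above: form inverse-probability-of-treatment weights from propensity scores and compute the weighted standardized difference for one baseline covariate. All data and propensity scores below are synthetic.

```python
# IPTW balance check on a single covariate (synthetic data).

def iptw_weights(treated, ps):
    # w = 1/e for treated subjects, 1/(1-e) for controls
    return [1.0 / e if t else 1.0 / (1.0 - e) for t, e in zip(treated, ps)]

def weighted_mean_var(x, w):
    sw = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, x)) / sw
    var = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x)) / sw
    return mean, var

def weighted_std_diff(x, treated, w):
    xt = [(xi, wi) for xi, ti, wi in zip(x, treated, w) if ti]
    xc = [(xi, wi) for xi, ti, wi in zip(x, treated, w) if not ti]
    mt, vt = weighted_mean_var([a for a, _ in xt], [b for _, b in xt])
    mc, vc = weighted_mean_var([a for a, _ in xc], [b for _, b in xc])
    return (mt - mc) / ((vt + vc) / 2.0) ** 0.5

x       = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # a baseline covariate
treated = [1,   1,   0,   0,   1,   0]
ps      = [0.6, 0.7, 0.4, 0.3, 0.8, 0.5]    # assumed propensity scores
w = iptw_weights(treated, ps)
d = weighted_std_diff(x, treated, w)
```

A common rule of thumb is that |d| below about 0.1 indicates acceptable balance in the weighted sample; in practice this is repeated for every measured covariate, including higher-order moments and interactions.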
NASA Astrophysics Data System (ADS)
Du, Lin; Shi, Shuo; Gong, Wei; Yang, Jian; Sun, Jia; Mao, Feiyue
2016-06-01
Hyperspectral LiDAR (HSL) is a novel tool in the field of active remote sensing that has been widely used in many domains because of its ability to acquire spectral information. It plays an indispensable role especially in the precise monitoring of nitrogen in green plants. The existing HSL system used for nitrogen status monitoring has a multi-channel detector, which can improve the spectral resolution and receiving range, but may result in data redundancy, difficulty in system integration, and high cost. Thus, it is necessary to identify the nitrogen-sensitive feature wavelengths within the spectral range. The present study addresses this problem by assigning a feature weighting to each centre wavelength of the HSL system using matrix coefficient analysis and a divergence threshold. The feature weighting is a criterion for amending the centre wavelengths of the detector to suit different purposes, especially the estimation of leaf nitrogen content (LNC) in rice. In this way, the wavelengths highly correlated with LNC can be ranked in descending order and used sequentially to estimate rice LNC. In this paper, an HSL system based on a wide-spectrum emitter and a 32-channel detector was used to collect the reflectance spectra of rice leaves. These spectra cover a range of 538 nm - 910 nm with a resolution of 12 nm; within this range, the 32 wavelengths are strongly absorbed by chlorophyll in green plants. The relationship between rice LNC and the reflectance-based spectra is modeled using partial least squares (PLS) and support vector machines (SVMs) based on calibration and validation datasets, respectively. The results indicate that I) the wavelength selection method of HSL based on feature weighting is effective in choosing nitrogen-sensitive wavelengths and can readily be co-adapted with the HSL hardware; II) the chosen wavelength has a high correlation with rice LNC which can be
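The descending-order ranking of wavelengths by their association with LNC can be sketched as follows. Note this uses a plain Pearson correlation as a stand-in for the paper's matrix-coefficient/divergence-threshold weighting, and the reflectance values, LNC values, and channel subset are all synthetic:

```python
# Rank HSL channels by |correlation| with leaf nitrogen content (LNC).

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

wavelengths = [538, 550, 670, 910]      # nm, subset of the 32 channels
reflectance = {                          # one reflectance list per channel
    538: [0.10, 0.12, 0.15, 0.11],
    550: [0.30, 0.28, 0.26, 0.31],
    670: [0.05, 0.07, 0.09, 0.06],
    910: [0.45, 0.44, 0.46, 0.45],
}
lnc = [2.1, 1.9, 1.6, 2.0]               # synthetic LNC values

ranked = sorted(wavelengths,
                key=lambda wl: abs(pearson(reflectance[wl], lnc)),
                reverse=True)            # most nitrogen-sensitive first
```

The channels at the head of `ranked` would then be used sequentially in the LNC regression models (PLS, SVM), which is the selection behavior the abstract describes.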
Hasan, Md Kamrul; Anas, Emran Mohammad Abu; Alam, S Kaisar; Lee, Soo Yeol
2012-10-01
Ultrasound elastography is emerging as a medical imaging modality with enormous potential for effective discrimination of pathological changes in soft tissue. It maps the tissue elasticity or strain due to a mechanical deformation applied to the tissue. The strain image, most often calculated from the derivative of the local displacement field, is highly noisy because of the decorrelation effect caused mainly by unstable free-hand scanning and/or irregular tissue motion; consequently, improving the SNR of the strain image is still a challenging problem in this area. In this paper, a novel approach using nearest-neighbor weighted least-squares is presented for direct estimation of the 'mean' axial strain for high-quality strain imaging. Like other reported time/frequency-domain schemes, the proposed method exploits the fact that the post-compression rf echo signal is a time-scaled and shifted replica of the pre-compression rf echo signal. However, the elegance of our technique is that it directly computes the mean strain without explicitly using any post-filter and/or previous local displacement/strain estimates, as is usually done in conventional approaches. It is implemented in the short-time Fourier transform domain through a nearest-neighbor weighted least-squares-based Fourier spectrum equalization technique. As the local tissue strain is expected to maintain continuity with its neighbors, we show here that the mean strain at the interrogative window can be directly computed from the common stretching factor that minimizes a cost function derived from the exponentially weighted windowed pre- and post-compression rf echo segments in both the lateral and axial directions. The performance of our algorithm is verified for up to 8% applied strain using simulation and experimental phantom data, and the results reveal that the SNR of the strain image can be significantly improved compared to other reported algorithms in the literature. The efficacy of the algorithm is also
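As a simplified analogue (not the paper's STFT-domain spectrum equalization itself), the 'mean' axial strain over a window can be read off as the weighted least-squares slope of displacement versus depth, since strain is the depth derivative of displacement. The data and weights below are synthetic:

```python
# Mean axial strain as a weighted least-squares slope (synthetic data).

def weighted_ls_slope(depth, disp, w):
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, depth)) / sw
    my = sum(wi * yi for wi, yi in zip(w, disp)) / sw
    num = sum(wi * (xi - mx) * (yi - my)
              for wi, xi, yi in zip(w, depth, disp))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, depth))
    return num / den

depth = [0.0, 1.0, 2.0, 3.0, 4.0]            # mm
disp  = [0.00, 0.021, 0.039, 0.062, 0.080]   # mm, ~2% compression + noise
w     = [1.0, 2.0, 2.0, 2.0, 1.0]            # e.g. nearest-neighbor weights
strain = weighted_ls_slope(depth, disp, w)
```

Fitting a single slope over the whole window, rather than differentiating point-by-point displacement estimates, is what suppresses the noise amplification that plagues derivative-based strain images.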
Shuaib, Muhammad; Becker, Stan; Rahman, Md. Mokhlesur; Peters, David H.
2011-01-01
Due to an urgent need for information on the coverage of health services for women and children after the fall of the Taliban regime in Afghanistan, a multiple indicator cluster survey (MICS) was conducted in 2003 using the outdated 1979 census as the sampling frame. When 2004 pre-census data became available, population-sampling weights were generated based on the survey-sampling scheme. Using these weights, population estimates for seven maternal and child healthcare-coverage indicators were generated and compared with the unweighted MICS 2003 estimates. The use of sample weights provided unbiased estimates of population parameters. Comparison of weighted and unweighted estimates showed some wide differences for individual provincial estimates and confidence intervals. However, the mean, median, and absolute mean of the differences between weighted and unweighted estimates and their confidence intervals were close to zero for all indicators at the national level. Ranking of the five highest and the five lowest provinces on weighted and unweighted estimates also yielded similar results. The general consistency of results suggests that outdated sampling frames can be appropriate for use in similar situations to obtain initial estimates from household surveys to guide policy and programming directions. However, the power to detect change from these estimates is lower than originally planned, requiring a greater tolerance for error when the data are used as a baseline for evaluation. The generalizability of using outdated sampling frames in similar settings is qualified by the specific characteristics of the MICS 2003—low replacement rate of clusters and zero probability of inclusion of clusters created after the 1979 census. PMID:21957678
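The weighted versus unweighted coverage comparison above can be sketched with synthetic cluster data; the counts and weights below are invented for illustration:

```python
# Weighted vs. unweighted estimate of a coverage indicator from clusters.

def weighted_proportion(covered, n, weights):
    num = sum(w * c for w, c in zip(weights, covered))
    den = sum(w * m for w, m in zip(weights, n))
    return num / den

covered = [30, 10, 45]        # women with the service, per cluster
n       = [50, 40, 60]        # women sampled, per cluster
weights = [1.2, 0.8, 1.0]     # sampling weights (e.g. from newer census data)

p_w  = weighted_proportion(covered, n, weights)   # design-weighted estimate
p_uw = sum(covered) / sum(n)                      # unweighted estimate
```

The study's finding was that differences like `p_w - p_uw` were large for some individual provinces but averaged out to near zero nationally.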
Shen, Jincheng; Wang, Lu; Taylor, Jeremy M G
2016-11-28
Prostate cancer patients are closely followed after the initial therapy and salvage treatment may be prescribed to prevent or delay cancer recurrence. The salvage treatment decision is usually made dynamically based on the patient's evolving history of disease status and other time-dependent clinical covariates. A multi-center prostate cancer observational study has provided us data on longitudinal prostate specific antigen (PSA) measurements, time-varying salvage treatment, and cancer recurrence time. These data enable us to estimate the best dynamic regime of salvage treatment, while accounting for the complicated confounding of time-varying covariates present in the data. A Random Forest based method is used to model the probability of regime adherence and inverse probability weights are used to account for the complexity of selection bias in regime adherence. The optimal regime is then identified by the largest restricted mean survival time. We conduct simulation studies with different PSA trends to mimic both simple and complex regime adherence mechanisms. The proposed method can efficiently accommodate complex and possibly unknown adherence mechanisms, and it is robust to cases where the proportional hazards assumption is violated. We apply the method to data collected from the observational study and estimate the best salvage treatment regime in managing the risk of prostate cancer recurrence.
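The restricted mean survival time used above to rank regimes is the area under the survival curve up to a horizon tau. A minimal sketch with a synthetic Kaplan-Meier-style step function (the weighting by regime-adherence probability is omitted here):

```python
# Restricted mean survival time (RMST): area under S(t) from 0 to tau.

def rmst(times, surv, tau):
    """times: sorted event times; surv: S(t) just after each event time."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0   # S(0) = 1
    for t, s in zip(times, surv):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)        # final partial segment up to tau
    return area

times = [1.0, 2.0, 4.0]        # years at which events occur
surv  = [0.9, 0.7, 0.5]        # survival probability after each event
value = rmst(times, surv, tau=5.0)
```

In the regime comparison, each candidate salvage-treatment rule gets an inverse-probability-weighted survival curve, and the rule with the largest RMST is selected as optimal.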
NASA Astrophysics Data System (ADS)
Wang, Jyun-Guo; Tai, Shen-Chuan; Lin, Cheng-Jian
2016-08-01
This study proposes a hybrid of a recurrent fuzzy cerebellar model articulation controller (RFCMAC) and a weighted strategy for solving single-image visibility in a degraded image. The proposed RFCMAC model is used to estimate the transmission map. The average value of the brightest 1% in a hazy image is calculated for atmospheric light estimation. A new adaptive weighted estimation is then used to refine the transmission map and remove the halo artifact from the sharp edges. Experimental results show that the proposed method has better dehazing capability compared to state-of-the-art techniques and is suitable for real-world applications.
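The atmospheric-light step described above (averaging the brightest 1% of pixels in the hazy image) can be sketched as follows, using synthetic grayscale intensities:

```python
# Atmospheric light estimate: mean of the brightest 1% of pixel values.

def atmospheric_light(pixels, top_frac=0.01):
    k = max(1, int(len(pixels) * top_frac))     # at least one pixel
    brightest = sorted(pixels, reverse=True)[:k]
    return sum(brightest) / len(brightest)

pixels = list(range(256)) * 4    # 1024 synthetic intensities, 0..255
A = atmospheric_light(pixels)    # mean of the 10 brightest values
```

The estimate A then feeds the standard hazy-image model I(x) = J(x)t(x) + A(1 - t(x)), where the RFCMAC model supplies the transmission map t(x).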
Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Burns, Douglas A.; Button, Daniel T.; Riva-Murray, Karen
2013-01-01
To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered.
NASA Astrophysics Data System (ADS)
Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas
2013-03-01
Using a Chebyshev power series approach, accurate descriptions of the first higher-order (LP11) mode of graded-index fibers with three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, together with their approximate analytic formulations. We show that while two and three Chebyshev points in the LP11 mode approximation give fairly accurate results, the values based on our calculations involving four Chebyshev points match excellently with available exact numerical results.
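The rapid gain in accuracy from adding Chebyshev points can be illustrated with NumPy's Chebyshev utilities. This approximates cos rather than the fiber mode field, purely to show the convergence behavior:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_approx_error(f, n, grid):
    """Interpolate f at n Chebyshev points; return max error on grid."""
    x = C.chebpts1(n)                 # n Chebyshev points of the first kind
    coef = C.chebfit(x, f(x), n - 1)  # degree n-1 interpolant
    return np.max(np.abs(C.chebval(grid, coef) - f(grid)))

grid = np.linspace(-1.0, 1.0, 201)
errs = [cheb_approx_error(np.cos, n, grid) for n in (2, 3, 4, 6)]
# the maximum error shrinks rapidly as points are added
```

For smooth functions the error of a Chebyshev interpolant decays roughly geometrically with the number of points, which is why three or four points already suffice for the mode quantities in the paper.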
NASA Astrophysics Data System (ADS)
Rebello, N. Sanjay
2012-02-01
Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual score and the class mean score on each exam. Students were given extra credit worth 1% of the exam points for estimating their score within 2% of the actual score and another 1% extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimations with the actual scores to investigate the relationship between estimation accuracy and exam performance as well as trends over the semester.
Bernardes, P A; Grossi, D A; Savegnago, R P; Buzanskas, M E; Urbinati, I; Bezerra, L A F; Lôbo, R B; Munari, D P
2015-11-01
The Tabapuã breed is a beef cattle Brazilian breed known for its sexual precocity and desirable characteristics for tropical conditions. However, this is a newly formed breed and few studies have been conducted regarding genetic parameters and genetic trends for its reproductive traits. The objective of the present study was to estimate the genetic parameters, genetic trends, and relative selection efficiency for weaning weight adjusted to 210 d of age (W210), age at first calving (AFC), average calving interval (ACI), first calving interval (CI1), and accumulated productivity (ACP) among Tabapuã beef cattle. Pedigree data on 15,241 Tabapuã animals born between 1958 and 2011 and phenotype records from 7,340 cows born between 1970 and 2011 were supplied by the National Association of Breeders and Researchers (Associação Nacional de Criadores e Pesquisadores). Analysis through the least squares method assisted in defining the fixed effects that were considered within the models. The estimates for the genetic parameters were obtained through REML, using a multitrait animal model. The likelihood ratio test applied for W210 was significant (P < 0.05) for the inclusion of maternal additive genetic and permanent environmental effects in the model. Genetic trends were calculated through linear regression of the EBV of the animals, according to the year of birth. The heritability estimates obtained ranged from 0.04 ± 0.03 for CI1 to 0.25 ± 0.05 for W210. The genetic correlations ranged from 0.004 ± 0.19 for W210-AFC to 0.93 ± 0.12 for ACI-CI1. The genetic trend was significant (P < 0.05) and favorable for CI1 and the maternal genetic effect of W210 and was significant (P < 0.05) and unfavorable for AFC, ACI, and ACP. The ACP could be used in the selection index to assist the breeding goal of improved productive and reproductive performance. The genetic trends indicated small and unfavorable genetic changes for AFC, ACI, and ACP in light of the recent
Wenzel, Tom P.
2016-08-22
This report recalculates the estimated relationship between vehicle mass and societal fatality risk, using alternative groupings by vehicle weight, to test whether the trend of decreasing fatality risk from mass reduction as case vehicle mass increases holds over smaller increments of the range in case vehicle masses. The NHTSA baseline regression model estimates the relationship using two weight groups for cars and light trucks; we re-estimated the mass reduction coefficients using four, six, and eight bins of vehicle mass. The estimated effect of mass reduction on societal fatality risk was not consistent over the range in vehicle masses in these weight bins. These results suggest that the relationship indicated by the NHTSA baseline model is a result of other, unmeasured attributes of the mix of vehicles in the lighter vs. heavier weight bins, and not necessarily the result of a correlation between mass reduction and societal fatality risk. An analysis of the average vehicle, driver, and crash characteristics across the various weight groupings did not reveal any strong trends that might explain the lack of a consistent trend of decreasing fatality risk from mass reduction in heavier vehicles.
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially
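The two-parameter Weibull age model described above generalizes the exponential case, which it recovers at shape k = 1; a minimal sketch of the CDF and median (parameter values are illustrative, not the calibrated ones):

```python
import math

# Two-parameter Weibull model for groundwater age distributions.

def weibull_cdf(t, k, lam):
    """Fraction of base flow younger than age t (shape k, scale lam)."""
    return 1.0 - math.exp(-((t / lam) ** k))

def weibull_median(k, lam):
    """Age below which half the base flow falls."""
    return lam * math.log(2.0) ** (1.0 / k)

# k = 1 reduces to the exponential age model of a simple shallow aquifer:
m_exp = weibull_median(1.0, 20.0)     # = 20 * ln 2 years
# k < 1 steepens the early arrivals and fattens the old-water tail,
# the kind of deviation 3-D geometry and dual porosity produce:
m_heavy = weibull_median(0.7, 20.0)
```

The extra shape parameter is exactly the "slope" control the abstract describes: with k fixed at 1 only the median can move, while k < 1 or k > 1 tilts the distribution toward younger or more uniform ages.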
Woo, Kyong-Je; Kim, Eun-Ji; Lee, Kyeong-Tae; Mun, Goo-Hyun
2016-09-01
Background: Preoperative estimation of abdominal flap volume is valuable for breast reconstruction, especially in lean patients. The purpose of this study was to develop a formula to estimate the weight of the deep inferior epigastric artery perforator (DIEP) flap using unidimensional parameters. Methods: We retrospectively collected data on 100 consecutive patients who underwent breast reconstruction using the DIEP flap. Multiple linear regression analysis was used to develop a formula to estimate the weight of the flap. Predictor variables included body mass index, height of the flap, width of the flap, and flap thickness on computed tomography angiographic images at three paraumbilical sites: 5 cm right, left, and inferior from the umbilicus. We then prospectively tested the accuracy of the developed formula in 38 consecutive patients who underwent breast reconstruction with free DIEP flaps. Results: A calculation formula and a smartphone application, DIEP-W, were developed from the retrospective analysis (R² = 92.7%, p < 0.001). In the prospective study, the average estimated weight was 96.3% of the actual weight, giving the formula a mean absolute percentage error of 7.7% (an average difference of 45 g). The flap size in the prospective group was significantly smaller (p < 0.001) and donor-site complications were fewer (p = 0.002) than in the retrospective group. Conclusion: Surgeons can easily calculate the DIEP flap weight with varying flap dimensions in real time using this formula during preoperative planning and intraoperative design. Estimating the flap weight facilitates economical use of the flap, which can lead to reduced donor-site complications.
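A linear flap-weight formula of the kind the study develops can be sketched as below. The coefficients are hypothetical placeholders (the published formula and its fitted coefficients are not reproduced here), and the mean-absolute-percentage-error check mirrors the accuracy metric reported:

```python
# Hypothetical linear DIEP flap-weight formula; coefficients are assumed,
# not the study's published values.

def estimate_flap_weight(width_cm, height_cm, thickness_cm, bmi):
    coeffs = {"intercept": -450.0, "width": 18.0, "height": 14.0,
              "thickness": 95.0, "bmi": 12.0}   # illustrative only
    return (coeffs["intercept"]
            + coeffs["width"] * width_cm
            + coeffs["height"] * height_cm
            + coeffs["thickness"] * thickness_cm
            + coeffs["bmi"] * bmi)              # grams

def mape(actual, predicted):
    """Mean absolute percentage error, the validation metric reported."""
    return sum(abs(a - p) / a
               for a, p in zip(actual, predicted)) / len(actual) * 100.0

w = estimate_flap_weight(width_cm=28, height_cm=12, thickness_cm=3.0, bmi=22)
```

Evaluating such a formula from a few unidimensional CT measurements is what makes real-time intraoperative use (and a smartphone app) practical.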
Carroli, Guillermo; Widmer, Mariana; Neerup Jensen, Lisa; Giordano, Daniel; Abdel Aleem, Hany; Talegawkar, Sameera A.; Benachi, Alexandra; Diemert, Anke; Tshefu Kitoto, Antoinette; Thinkhamrop, Jadsada; Lumbiganon, Pisake; Tabor, Ann; Kriplani, Alka; Gonzalez Perez, Rogelio; Hecher, Kurt; Hanson, Mark A.; Gülmezoglu, A. Metin; Platt, Lawrence D.
2017-01-01
Background: Perinatal mortality and morbidity continue to be major global health challenges strongly associated with prematurity and reduced fetal growth, an issue of further interest given the mounting evidence that fetal growth in general is linked to degrees of risk of common noncommunicable diseases in adulthood. Against this background, WHO made it a high priority to provide the present fetal growth charts for estimated fetal weight (EFW) and common ultrasound biometric measurements intended for worldwide use. Methods and Findings: We conducted a multinational prospective observational longitudinal study of fetal growth in low-risk singleton pregnancies of women of high or middle socioeconomic status and without known environmental constraints on fetal growth. Centers in ten countries (Argentina, Brazil, Democratic Republic of the Congo, Denmark, Egypt, France, Germany, India, Norway, and Thailand) recruited participants who had reliable information on last menstrual period and gestational age confirmed by crown–rump length measured at 8–13 wk of gestation. Participants had anthropometric and nutritional assessments and seven scheduled ultrasound examinations during pregnancy. Fifty-two participants withdrew consent, and 1,387 participated in the study. At study entry, median maternal age was 28 y (interquartile range [IQR] 25–31), median height was 162 cm (IQR 157–168), median weight was 61 kg (IQR 55–68), 58% of the women were nulliparous, and median daily caloric intake was 1,840 cal (IQR 1,487–2,222). The median pregnancy duration was 39 wk (IQR 38–40) although there were significant differences between countries, the largest difference being 12 d (95% CI 8–16). The median birthweight was 3,300 g (IQR 2,980–3,615). There were differences in birthweight between countries, e.g., India had significantly smaller neonates than the other countries, even after adjusting for gestational age. Thirty-one women had a miscarriage, and three fetuses had
Sadinski, Meredith; Medved, Milica; Karademir, Ibrahim; Wang, Shiyang; Peng, Yahui; Jiang, Yulei; Sammet, Steffen; Karczmar, Gregory; Oto, Aytekin
2015-01-01
Purpose: The purpose of the study is to determine the short-term reproducibility of the apparent diffusion coefficient (ADC) estimated from diffusion-weighted magnetic resonance (DW-MR) imaging of the prostate. Methods: Fourteen patients with biopsy-proven prostate cancer were studied under an Institutional Review Board-approved protocol. Each patient underwent two consecutive and identical DW-MR scans on a 3T system. ADC values were calculated from each scan and a deformable registration was performed to align corresponding images. The prostate and cancerous regions of interest (ROIs) were independently analyzed by two radiologists. The prostate volume was analyzed by sextant. Per-voxel absolute and relative percentage variations in ADC were compared between sextants. Per-voxel and per-ROI variations in ADC were calculated for cancerous ROIs. Results: Per-voxel absolute difference in ADC in the prostate ranged from 0 to 1.60 × 10⁻³ mm²/s (per-voxel relative difference 0% to 200%, mean 10.5%). Variation in ADC was largest in the posterior apex (0% to 200%, mean 11.6%). The difference in ADC variation between sextants was not statistically significant. Cancer ROIs' per-voxel variation in ADC ranged from 0.001 × 10⁻³ to 0.841 × 10⁻³ mm²/s (0% to 67.4%, mean 11.2%) and per-ROI variation ranged from 0 to 0.463 × 10⁻³ mm²/s (mean 0.122 × 10⁻³ mm²/s). Conclusions: Variation in ADC within the human prostate is reasonably small, on the order of 10%. PMID:25805558
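The per-voxel relative variation reported above is the absolute ADC difference between the two repeated scans divided by the voxel's mean ADC; a sketch with synthetic values:

```python
# Per-voxel relative ADC difference between two repeated scans,
# as a percentage of the voxel's mean ADC (units 1e-3 mm^2/s; synthetic).

def relative_diff_pct(scan_a, scan_b):
    return [abs(x - y) / ((x + y) / 2.0) * 100.0
            for x, y in zip(scan_a, scan_b)]

scan1 = [1.20, 0.85, 1.05, 1.60]   # ADC per registered voxel, scan 1
scan2 = [1.10, 0.90, 1.05, 1.40]   # same voxels, repeat scan
diffs = relative_diff_pct(scan1, scan2)
mean_diff = sum(diffs) / len(diffs)
```

Deformable registration (as in the study) is what makes this voxel-wise pairing between the two scans meaningful.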
Jensen, Bente R; Hovgaard-Hansen, Line; Cappelen, Katrine L
2016-08-01
Running on a lower-body positive-pressure (LBPP) treadmill allows effects of weight support on leg muscle activation to be assessed systematically, and has the potential to facilitate rehabilitation and prevent overloading. The aim was to study the effect of running with weight support on leg muscle activation and to estimate relative knee and ankle joint forces. Runners performed 6-min running sessions at 2.22 m/s and 3.33 m/s, at 100%, 80%, 60%, 40%, and 20% body weight (BW). Surface electromyography, ground reaction force, and running characteristics were measured. Relative knee and ankle joint forces were estimated. Leg muscles responded differently to unweighting during running, reflecting different relative contribution to propulsion and antigravity forces. At 20% BW, knee extensor EMGpeak decreased to 22% at 2.22 m/s and 28% at 3.33 m/s of 100% BW values. Plantar flexors decreased to 52% and 58% at 20% BW, while activity of biceps femoris muscle remained unchanged. Unweighting with LBPP reduced estimated joint force significantly although less than proportional to the degree of weight support (ankle). It was concluded that leg muscle activation adapted to the new biomechanical environment, and the effect of unweighting on estimated knee force was more pronounced than on ankle force.
ERIC Educational Resources Information Center
Qian, Jiahe
2006-01-01
Weighting and variance estimation are two statistical issues involved in survey data analysis for large-scale assessment programs such as the Higher Education Information and Communication Technology (ICT) Literacy Assessment. Because survey data are always acquired by probability sampling, to draw unbiased or almost unbiased inferences for the…
ERIC Educational Resources Information Center
Warne, Russell T.
2016-01-01
Recently Kim (2016) published a meta-analysis on the effects of enrichment programs for gifted students. She found that these programs produced substantial effects for academic achievement (g = 0.96) and socioemotional outcomes (g = 0.55). However, given current theory and empirical research these estimates of the benefits of enrichment programs…
Scott, P K; Finley, B L; Sung, H M; Schulze, R H; Turner, D B
1997-07-01
The primary health concern associated with chromite ore processing residues (COPR) at sites in Hudson County, NJ, is the inhalation of Cr(VI) suspended from surface soils. Since health-based soil standards for Cr(VI) will be derived using the inhalation pathway, soil suspension modeling will be necessary to estimate site-specific, health-based soil cleanup levels (HBSCLs). The purpose of this study was to identify the most appropriate particulate emission and air dispersion models for estimating soil suspension at these sites based on their theoretical underpinnings, scientific acceptability, and past performance. The identified modeling approach, the AP-42 particulate emission model and the fugitive dust model (FDM), was used to calculate concentrations of airborne Cr(VI) and TSP at two COPR sites. These estimated concentrations were then compared to concentrations measured at each site. The TSP concentrations calculated using the AP-42/FDM soil suspension modeling approach were all within a factor of 3 of the measured concentrations. The majority of the estimated air concentrations were greater than the measured, indicating that the AP-42/FDM approach tends to overestimate on-site concentrations. The site-specific Cr(VI) HBSCLs for these two sites calculated using this conservative soil suspension modeling approach ranged from 190 to 420 mg/kg.
Coplen, T.B.; Peiser, H.S.
1998-01-01
International commissions and national committees for atomic weights (mean relative atomic masses) have recommended regularly updated, best values for these atomic weights as applicable to terrestrial sources of the chemical elements. Presented here is a historically complete listing starting with the values in F. W. Clarke's 1882 recalculation, followed by the recommended values in the annual reports of the American Chemical Society's Atomic Weights Commission. From 1903, an International Commission published such reports and its values (scaled to an atomic weight of 16 for oxygen) are here used in preference to those of national committees of Britain, Germany, Spain, Switzerland, and the U.S.A. We have, however, made scaling adjustments from Ar(16O) to Ar(12C) where not negligible. From 1920, this International Commission constituted itself under the International Union of Pure and Applied Chemistry (IUPAC). Since then, IUPAC has published reports (mostly biennially) listing the recommended atomic weights, which are reproduced here. Since 1979, these values have been called the "standard atomic weights" and, since 1969, all values have been published, with their estimated uncertainties. Few of the earlier values were published with uncertainties. Nevertheless, we assessed such uncertainties on the basis of our understanding of the likely contemporary judgement of the values' reliability. While neglecting remaining uncertainties of 1997 values, we derive "differences" and a retrospective index of reliability of atomic-weight values in relation to assessments of uncertainties at the time of their publication. A striking improvement in reliability appears to have been achieved since the commissions have imposed upon themselves the rule of recording estimated uncertainties from all recognized sources of error.
Padula, Amy M; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B
2012-11-01
Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000-2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants.
NASA Astrophysics Data System (ADS)
McCook, G. P.; Guinan, E. F.; Saumon, D.; Kang, Y. W.
1997-05-01
CM Draconis (Gl 630.1; Vmax = +12.93) is an important eclipsing binary consisting of two dM4.5e stars with an orbital period of 1.2684 days. This binary is a high-velocity star (s = 164 km/s) and the brighter member of a common proper motion pair with a cool, faint white dwarf companion (LP 101-16). CM Dra and its white dwarf companion were once considered by Zwicky to belong to a class of "pygmy stars", but they turned out to be ordinary old, cool white dwarfs or faint red dwarfs. Lacy (ApJ 218,444L) determined the first orbital and physical properties of CM Dra from the analysis of his light and radial velocity curves. In addition to providing directly measured masses, radii, and luminosities for low-mass stars, CM Dra was also recognized by Lacy, and later by Paczynski and Sienkiewicz (ApJ 286,332), as an important laboratory for cosmology: as a possible old Pop II object, it may allow a determination of the primordial helium abundance. Recently, Metcalfe et al. (ApJ 456,356) obtained accurate RV measures for CM Dra and recomputed refined elements along with its helium abundance. Starting in 1995, we have been carrying out intensive RI photoelectric photometry of CM Dra to obtain well-defined, accurate light curves so that its fundamental properties can be improved and, at the same time, to search for evidence of planets around the binary from planetary transit eclipses. During 1996 and 1997 well-defined light curves were secured and these were combined with the RV measures of Metcalfe et al. (1996) to determine the orbital and physical parameters of the system, including a refined orbital period. A recent version of the Wilson-Devinney program was used to analyze the data. New radii, masses, mean densities, Teff, and luminosities were found, as well as a re-determination of the helium abundance (Y). The results of the recent analyses of the light and RV curves will be presented and modelling results discussed. This research is supported by NSF grants AST-9315365
Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M
2016-08-01
Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, consistent with the anticipated better performance of the pp-LFER model. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) and in ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), a span that brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air.
Lanzman, Rotem S; Heusch, Philipp; Aissa, Joel; Schleich, Christoph; Thomas, Christoph; Sawicki, Lino M; Antoch, Gerald; Kröpil, Patric
2016-01-01
Objective: To assess the value of body mass index (BMI) in comparison with body weight as a surrogate parameter for the calculation of size-specific dose estimates (SSDEs) in thoracoabdominal CT. Methods: 401 CT examinations in 235 patients (196 chest, 205 abdomen; 95 females, 140 males; age 62.5 ± 15.0 years) were analysed with regard to weight, height and BMI (kg/m²). Effective diameter (Deff, cm) was assessed on axial CT images. The correlation between BMI, weight and Deff was calculated. SSDEs were calculated based on Deff, weight and BMI, and lookup tables were developed. Results: Overall height, weight, BMI and Deff were 172.5 ± 9.9 cm, 79.5 ± 19.1 kg, 26.6 ± 5.6 kg/m² and 30.1 ± 4.3 cm, respectively. There was a significant correlation between Deff and BMI as well as weight (r = 0.85 and r = 0.84; p < 0.05, respectively). Correlation was significantly better for BMI in abdominal CT (r = 0.89 vs r = 0.84; p < 0.05), whereas it was better for weight in chest CT (r = 0.87 vs r = 0.81; p < 0.05). Surrogate-based SSDEs did not differ significantly from the reference standard, with a median absolute relative difference of 4.2% per patient (interquartile range: 3.1–7.89%; full range: 0–25.3%). Conclusion: BMI and weight exhibit a significant correlation with Deff in adult patients and can be used as surrogates in the calculation of SSDEs. Using the herein-developed lookup charts, SSDEs can be calculated based on patients' weight and BMI. Advances in knowledge: In abdominal CT, BMI has a superior correlation with effective diameter compared with weight, whereas weight is superior in chest CT. Patients' BMI and weight can be used as surrogates in the calculation of SSDEs. PMID:26693878
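The surrogate-based SSDE calculation can be sketched in a few lines. The exponential conversion-factor form follows AAPM-style SSDE tables, and the linear BMI-to-Deff fit stands in for the study's lookup charts; all coefficients below are illustrative assumptions, not values published in the abstract.

```python
import numpy as np

def deff_from_bmi(bmi, slope=0.55, intercept=15.5):
    """Hypothetical linear surrogate fit Deff ≈ intercept + slope·BMI (cm),
    standing in for the study's BMI lookup chart. Coefficients are chosen
    only so the cohort mean BMI (26.6) maps near the cohort mean Deff (30.1)."""
    return intercept + slope * bmi

def ssde_from_deff(ctdi_vol, deff_cm, a=3.70, b=0.0367):
    """SSDE = f(Deff) × CTDIvol, with an AAPM-style exponential conversion
    factor f = a·exp(-b·Deff). a and b are illustrative, not official values."""
    return ctdi_vol * a * np.exp(-b * deff_cm)

# Example: CTDIvol of 10 mGy for a patient with the cohort-average BMI
deff = deff_from_bmi(26.6)
ssde = ssde_from_deff(ctdi_vol=10.0, deff_cm=deff)
```

With these toy coefficients the conversion factor exceeds 1 for small patients and falls below 1 for large ones, which is the qualitative behaviour SSDE corrections are meant to capture.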
Forman, Michele R; Zhu, Yeyi; Hernandez, Ladia M; Himes, John H; Dong, Yongquan; Danish, Robert K; James, Kyla E; Caulfield, Laura E; Kerver, Jean M; Arab, Lenore; Voss, Paula; Hale, Daniel E; Kanafani, Nadim; Hirschfeld, Steven
2014-09-01
Surrogate measures are needed when recumbent length or height is unobtainable or unreliable. Arm span has been used as a surrogate but is not feasible in children with shoulder or arm contractures. Ulnar length is not usually impaired by joint deformities, yet its utility as a surrogate has not been adequately studied. In this cross-sectional study, we aimed to examine the accuracy and reliability of ulnar length measured by different tools as a surrogate measure of recumbent length and height. Anthropometrics [recumbent length, height, arm span, and ulnar length by caliper (ULC), ruler (ULR), and grid (ULG)] were measured in 1479 healthy infants and children aged <6 y across 8 study centers in the United States. Multivariate mixed-effects linear regression models for recumbent length and height were developed by using ulnar length and arm span as surrogate measures. The agreement between the measured length or height and the values predicted by ULC, ULR, ULG, and arm span was examined by Bland-Altman plots. All 3 measures of ulnar length and arm span were highly correlated with length and height. The degree of precision of prediction equations for length by ULC, ULR, and ULG (R² = 0.95, 0.95, and 0.92, respectively) was comparable with that by arm span (R² = 0.97) using age, sex, and ethnicity as covariates; however, height prediction by ULC (R² = 0.87), ULR (R² = 0.85), and ULG (R² = 0.88) was less comparable with arm span (R² = 0.94). Our study demonstrates that arm span and ULC, ULR, or ULG can serve as accurate and reliable surrogate measures of recumbent length and height in healthy children; however, ULC, ULR, and ULG tend to slightly overestimate length and height in young infants and children. Further testing of ulnar length as a surrogate is warranted in physically impaired or nonambulatory children.
ERIC Educational Resources Information Center
Samejima, Fumiko
A method is proposed that increases the accuracies of estimation of the operating characteristics of discrete item responses, especially when the true operating characteristic is represented by a steep curve, and also at the lower and upper ends of the ability distribution where the estimation tends to be inaccurate because of the smaller number…
Technology Transfer Automated Retrieval System (TEKTRAN)
The efficacy of live animal, real-time, B-mode ultrasound (US) estimates of carcass traits as (partial) predictors of carcass composition warrants investigation in sheep of varying genetic and environmental backgrounds. Our objectives were to 1) evaluate US estimates of corresponding carcass measure...
NASA Technical Reports Server (NTRS)
MacConochie, Ian O.; White, Nancy H.; Mills, Janelle C.
2004-01-01
A program entitled Weights, Areas, and Mass Properties (WAMI) is centered around an array of menus containing constants that can be used in various mass-estimating relationships for the ultimate purpose of obtaining the mass properties of Earth-to-Orbit transports. Current Shuttle mass property data were relied upon heavily for baseline equation constant values, from which other options were derived.
Förster, Alex; Mürle, Bettina; Böhme, Johannes; Al-Zghloul, Mansour; Kerl, Hans U; Wenz, Holger; Groden, Christoph
2016-10-01
Although lacunar infarction accounts for approximately 25% of ischemic strokes, collateral blood flow through anastomoses is not well evaluated in lacunar infarction. In 111 lacunar infarction patients, we analyzed diffusion-weighted images, perfusion-weighted images, and blood flow on dynamic four-dimensional angiograms generated by use of Signal Processing In NMR-Software. Blood flow was classified as absent (type 1), from periphery to center (type 2), from center to periphery (type 3), or a combination of types 2 and 3 (type 4). On diffusion-weighted images, lacunar infarction was found in the basal ganglia (11.7%), internal capsule (24.3%), corona radiata (30.6%), thalamus (24.3%), and brainstem (9.0%). In 58 (52.2%) patients, the perfusion-weighted image showed a circumscribed hypoperfusion, in one (0.9%) a circumscribed hyperperfusion, whereas the remainder were normal. In 36 (62.1%) patients, a larger perfusion deficit (>7 mm) was observed. In these, blood flow was classified as type 1 in 4 (11.1%), type 2 in 17 (47.2%), type 3 in 9 (25.0%), and type 4 in 6 (16.7%) patients. Patients with lacunar infarction in the posterior circulation more often demonstrated blood flow of type 2 and less often of type 3 (p = 0.01). Detailed examination and grading of blood flow in lacunar infarction by use of dynamic four-dimensional angiograms is feasible and may serve to better characterize this stroke subtype.
ERIC Educational Resources Information Center
Lien, Diana S.; Evans, William
2005-01-01
Substantial increases in cigarette taxes result in a decrease in smoking by pregnant women, with a consequent improvement in infant birth weight. These conclusions are based on data from four states that opted to raise cigarette taxes by a large margin.
ERIC Educational Resources Information Center
Lee, Sunghee; Satter, Delight E.; Ponce, Ninez A.
2009-01-01
Racial classification is a paramount concern in data collection and analysis for American Indians and Alaska Natives (AI/ANs) and has far-reaching implications in health research. We examine how different racial classifications affect survey weights and consequently change health-related indicators for the AI/AN population in California. Using a…
Reported maternal education is an important predictor of pregnancy outcomes. Like income, it is believed to allow women to locate in more favorable conditions than less educated or affluent peers. We examine the effect of reported educational attainment on term birth weight (birt...
Technology Transfer Automated Retrieval System (TEKTRAN)
Birth weight (BWT) and calving difficulty (CD) were recorded on 4,579 first parity females from the Germplasm Evaluation (GPE) program at the U.S. Meat Animal Research Center (USMARC). Both traits were analyzed using a bivariate animal model with direct and maternal effects. Calving difficulty was...
Romero-Vivas, C M E; Llinás, H; Falconar, A K I
2007-11-01
The ability of a simple sweeping method, coupled with calibration factors, to accurately estimate the total numbers of Aedes aegypti (L.) (Diptera: Culicidae) pupae in water-storage containers (20-6412-liter capacities at different water levels) throughout their main dengue virus transmission temperature range was evaluated. Using this method, one set of three calibration factors was derived that could accurately estimate the total Ae. aegypti pupae in their principal breeding sites, large water-storage containers, found throughout the world. No significant differences were obtained using the method at different altitudes (14-1630 m above sea level) that included the range of temperatures (20-30 degrees C) at which dengue virus transmission occurs in the world. In addition, no significant differences were found in the results obtained between and within the 10 different teams that applied this method; the method was therefore extremely robust. Using this method, one person could estimate the Ae. aegypti pupae in each large water-storage container in only 5 min, compared with the 45-90 min needed by two people to collect and count the total pupal population in each. Because the method was rapid to perform and did not disturb the sediment layers in these domestic water-storage containers, it was more acceptable to residents and is therefore ideally suited for routine surveillance purposes and for assessing the efficacy of Ae. aegypti control programs in dengue virus-endemic areas throughout the world.
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Two important results are that error estimates are provided and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
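The idea above can be sketched for a stand-in test problem, the 1D harmonic oscillator (exact ground-state energy 0.5 in atomic units); this is an illustration of second-order finite differences plus one Richardson step, not the paper's actual code or mesh.

```python
import numpy as np

def ground_state_energy(n):
    """Lowest eigenvalue of H = -1/2 d²/dx² + x²/2 on [-8, 8], discretized
    with the standard 3-point finite difference on an n-point mesh."""
    x, h = np.linspace(-8.0, 8.0, n, retstep=True)
    main = 1.0 / h**2 + 0.5 * x**2            # diagonal: kinetic + potential
    off = -0.5 / h**2 * np.ones(n - 1)        # off-diagonal kinetic terms
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

e_h = ground_state_energy(201)    # mesh spacing h = 0.08
e_h2 = ground_state_energy(401)   # mesh spacing h/2 = 0.04

# The scheme's error is ~C·h², so one Richardson extrapolation step
# cancels the leading term:
e_rich = (4.0 * e_h2 - e_h) / 3.0
```

The extrapolated value is several orders of magnitude closer to the exact 0.5 than either raw eigenvalue, which is the effect the abstract describes.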
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
traditionally used to estimate spinal cord NTCP may not apply to the dosimetry of SRS. Further research with additional NTCP models is needed.
Cano-Ramírez, Claudia; Santiago-Hernández, Alejandro; Rivera-Orduña, Flor Nohemí; Pineda-Mendoza, Rosa María; Zúñiga, Gerardo; Hidalgo-Lara, María Eugenia
2017-02-01
Here, we describe a zymographic method for the simultaneous detection of enzymatic activity and molecular weight (MW) estimation, following a single electrophoresis step. This involved separating cellulase and xylanase activities from bacteria and fungi, obtained from different sources, such as commercial extracts, crude extract and purified proteins, under denaturing conditions, by 10% polyacrylamide gel electrophoresis, using polyacrylamide gels copolymerized with 1% (w/v) carboxymethylcellulose or beechwood xylan as substrates. Then, enzymes were refolded by treatment with 2.5% Triton X-100 in an appropriate buffer for each enzymatic activity, and visualized by Coomassie blue staining for MW estimation. Finally, Congo red staining revealed bio-active cellulase and xylanase bands after electrophoretic separation of the proteins in the preparations. This method may provide a useful additional tool for screening of particular cellulase and xylanase producers, identification and MW estimation of polypeptides that manifest these activities, and for monitoring and control of fungal and bacterial cellulase and xylanase production.
NASA Astrophysics Data System (ADS)
Herman, Jay R.
2010-12-01
Multiple scattering radiative transfer results are used to calculate action spectrum weighted irradiances and fractional irradiance changes in terms of a power law in ozone Ω, U(Ω/200)^(−RAF), where the new radiation amplification factor (RAF) is just a function of solar zenith angle. Including Rayleigh scattering caused small differences in the estimated 30 year changes in action spectrum-weighted irradiances compared to estimates that neglect multiple scattering. The radiative transfer results are applied to several action spectra and to an instrument response function corresponding to the Solar Light 501 meter. The effects of changing ozone on two plant damage action spectra are shown for plants with high sensitivity to UVB (280-315 nm) and those with lower sensitivity, showing that the probability of plant damage for the latter has increased since 1979, especially at middle to high latitudes in the Southern Hemisphere. Similarly, there has been an increase in rates of erythemal skin damage and pre-vitamin D3 production corresponding to measured ozone decreases. An example conversion function is derived to obtain erythemal irradiances and the UV index from measurements with the Solar Light 501 instrument response function. An analytic expression is given to convert changes in erythemal irradiances to changes in CIE vitamin-D action spectrum weighted irradiances.
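The power law U(Ω/200)^(−RAF) can be applied directly to estimate fractional irradiance changes from an ozone change; a minimal sketch, in which the RAF value and ozone amounts are illustrative, not ones derived in the paper:

```python
def irradiance_ratio(omega_new, omega_old, raf):
    """Fractional change in action-spectrum-weighted irradiance implied by
    the power law I = U * (omega/200)**(-raf): the constant U cancels,
    leaving (omega_new/omega_old)**(-raf) - 1."""
    return (omega_new / omega_old) ** (-raf) - 1.0

# Hypothetical example: an erythemal-like RAF of 1.2 and a 3% ozone
# decrease from 300 DU. Smaller ozone gives more UV, so the change is positive.
change = irradiance_ratio(0.97 * 300.0, 300.0, 1.2)   # ≈ +3.7%
```

Because the exponent varies with solar zenith angle in the paper's formulation, a real calculation would evaluate RAF per geometry rather than use one fixed value.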
NASA Technical Reports Server (NTRS)
Herman, Jay R.
2010-01-01
Multiple scattering radiative transfer results are used to calculate action spectrum weighted irradiances and fractional irradiance changes in terms of a power law in ozone Ω, U(Ω/200)^(−RAF), where the new radiation amplification factor (RAF) is just a function of solar zenith angle. Including Rayleigh scattering caused small differences in the estimated 30 year changes in action spectrum-weighted irradiances compared to estimates that neglect multiple scattering. The radiative transfer results are applied to several action spectra and to an instrument response function corresponding to the Solar Light 501 meter. The effects of changing ozone on two plant damage action spectra are shown for plants with high sensitivity to UVB (280-315 nm) and those with lower sensitivity, showing that the probability of plant damage for the latter has increased since 1979, especially at middle to high latitudes in the Southern Hemisphere. Similarly, there has been an increase in rates of erythemal skin damage and pre-vitamin D3 production corresponding to measured ozone decreases. An example conversion function is derived to obtain erythemal irradiances and the UV index from measurements with the Solar Light 501 instrument response function. An analytic expression is given to convert changes in erythemal irradiances to changes in CIE vitamin-D action spectrum weighted irradiances.
Estimating equations for biomarker based exposure estimation under non-steady-state conditions.
Bartell, Scott M; Johnson, Wesley O
2011-06-13
Unrealistic steady-state assumptions are often used to estimate toxicant exposure rates from biomarkers. A biomarker may instead be modeled as a weighted sum of historical time-varying exposures. Estimating equations are derived for a zero-inflated gamma distribution for daily exposures with a known exposure frequency. Simulation studies suggest that the estimating equations can provide accurate estimates of exposure magnitude at any reasonable sample size, and reasonable estimates of the exposure variance at larger sample sizes.
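The modeling idea, a biomarker as a weighted sum of zero-inflated gamma daily exposures, can be illustrated with a toy simulation. The exponentially decaying weights assume simple one-compartment kinetics, and all parameters below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-inflated gamma daily intakes: exposure occurs with probability p,
# and nonzero amounts follow Gamma(shape, scale). (Illustrative parameters.)
p, shape, scale, days = 0.4, 2.0, 1.5, 365
exposed = rng.random(days) < p
intake = np.where(exposed, rng.gamma(shape, scale, days), 0.0)

# One-compartment kinetics: today's biomarker is a weighted sum of past
# intakes, with weights decaying by a half-life of t_half days.
t_half = 30.0
k = np.log(2.0) / t_half
w = np.exp(-k * np.arange(days))        # weight for lag 0, 1, 2, ...
biomarker = np.sum(w * intake[::-1])    # most recent day gets weight 1

# A naive steady-state back-calculation of the mean daily intake rate,
# of the kind the abstract calls unrealistic:
naive_rate = biomarker * k
true_rate = p * shape * scale           # actual mean daily intake
```

When exposures are intermittent and recent history dominates the weighted sum, the steady-state estimate can deviate substantially from the true rate, which is the motivation for the estimating-equation approach.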
Zhang, Tianhao; Liu, Gang; Zhu, Zhongmin; Gong, Wei; Ji, Yuxi; Huang, Yusi
2016-01-01
The real-time estimation of ambient particulate matter with diameter no greater than 2.5 μm (PM2.5) is currently quite limited in China. A semi-physical geographically weighted regression (GWR) model was adopted to estimate PM2.5 mass concentrations at national scale using the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth product fused by the Dark Target (DT) and Deep Blue (DB) algorithms, combined with meteorological parameters. The fitting results could explain over 80% of the variability in the corresponding PM2.5 mass concentrations, although the model tends to overestimate when measurements are low and to underestimate when they are high. Based on World Health Organization standards, the results indicate that most regions in China suffered severe PM2.5 pollution during winter. Seasonal average PM2.5 mass concentrations predicted by the model indicate that residential regions, namely the Jing-Jin-Ji Region and Central China, faced a challenge from fine particles. Moreover, estimation deviations caused primarily by the spatially uneven distribution of monitoring sites and by elevation changes within relatively small regions are discussed. In summary, real-time PM2.5 was estimated effectively by the satellite-based semi-physical GWR model, and the results could provide reasonable references for assessing health impacts and offer guidance on air quality management in China. PMID:27706054
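The core step of geographically weighted regression is a local weighted least-squares fit at each target location, with weights from a spatial kernel. A minimal sketch follows, using synthetic data in place of AOD/meteorology and a Gaussian kernel; this is an illustration of the GWR mechanism, not the authors' semi-physical model.

```python
import numpy as np

def gwr_predict(coords, X, y, site, x0, bandwidth):
    """Predict y at `site` (with covariates x0) by a local weighted
    least-squares fit: a sketch of the core GWR computation."""
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian distance kernel
    Xd = np.column_stack([np.ones(len(y)), X])     # add intercept column
    A = Xd.T @ (w[:, None] * Xd)
    b = Xd.T @ (w * y)
    beta = np.linalg.solve(A, b)                   # local coefficients
    return float(np.concatenate(([1.0], x0)) @ beta)

# Toy data: "PM2.5" driven by "AOD" with a coefficient that drifts in space,
# which is exactly the situation GWR is designed for.
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(300, 2))
aod = rng.uniform(0.1, 1.0, 300)
pm25 = 5.0 + (10.0 + coords[:, 0]) * aod + rng.normal(0.0, 0.5, 300)

pred = gwr_predict(coords, aod[:, None], pm25,
                   site=np.array([2.0, 5.0]), x0=np.array([0.5]),
                   bandwidth=2.0)   # locally, pm25 ≈ 5 + 12·aod, so pred ≈ 11
```

A global (ordinary) regression would instead fit one average slope for the whole domain and miss the spatial drift; the bandwidth controls how local the fit is.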
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
level 5 organizations. Defects identified here for CMM level 1 and level 5 are captured from Capers Jones, who has identified software delivered... Jones, Capers, "Software Assessments, Benchmarks, and Best Practices", Addison-Wesley Professional, April 2000. 1. At the AV-8B Joint System Support
Effect of clothing weight on body weight
Technology Transfer Automated Retrieval System (TEKTRAN)
Background: In clinical settings, it is common to measure weight of clothed patients and estimate a correction for the weight of clothing, but we can find no papers in the medical literature regarding the variability in clothing weight with weather, season, and gender. Methods: Fifty adults (35 wom...
Ramos, Yuddy; St-Onge, Benoît; Blanchet, Jean-Pierre; Smargiassi, Audrey
2016-06-01
Air pollution is a major environmental and health problem, especially in urban agglomerations. Estimating personal exposure to fine particulate matter (PM2.5) remains a great challenge because it requires numerous point measurements to explain the daily spatial variation in pollutant levels. Furthermore, meteorological variables have considerable effects on the dispersion and distribution of pollutants, which also depend on spatio-temporal emission patterns. In this study we developed a hybrid interpolation technique that combines the inverse distance-weighted (IDW) method with kriging with external drift (KED), and applied it to daily PM2.5 levels observed at 10 monitoring stations. This provided us with downscaled high-resolution maps of PM2.5 for the Island of Montreal. For the KED interpolation, we used spatio-temporal daily meteorological estimates and spatial covariates such as land use and vegetation density. Separate KED and IDW daily estimation models for the year 2010 were developed for each of six synoptic weather classes. These clusters were derived using principal component analysis and unsupervised hierarchical classification. The results of the interpolation models were assessed with a leave-one-station-out cross-validation. The performance of the hybrid model was better than that of KED or IDW alone for all six synoptic weather classes (daily R² of 0.66-0.93 and RMSE of 2.54-1.89 μg/m³).
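The IDW half of the hybrid can be sketched in a few lines (the KED step, which requires covariates and a variogram model, is not shown; the station coordinates and PM2.5 values below are hypothetical):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse-distance-weighted interpolation at one query point:
    a weighted mean of observations with weights 1/d**power."""
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    if np.any(d < 1e-12):                  # query coincides with a station
        return float(z_obs[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_obs) / np.sum(w))

# Hypothetical daily PM2.5 (μg/m³) at three monitoring stations (km grid)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
pm25 = np.array([12.0, 20.0, 8.0])

est = idw(stations, pm25, np.array([1.0, 1.0]))   # dominated by the nearest station
```

Because IDW weights fall off with distance alone, it cannot exploit meteorological or land-use covariates; that is precisely what the KED component adds in the hybrid.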
Jansen, Rob T P; Laeven, Mark; Kardol, Wim
2002-06-01
The analytical processes in clinical laboratories should be considered to be non-stationary, non-ergodic and probably non-stochastic processes. Both the process mean and the process standard deviation vary. The variation can be different at different levels of concentration. This behavior is shown in five examples of different analytical systems: alkaline phosphatase on the Hitachi 911 analyzer (Roche), vitamin B12 on the Access analyzer (Beckman), prothrombin time and activated partial thromboplastin time on the STA Compact analyzer (Roche) and PO2 on the ABL 520 analyzer (Radiometer). A model is proposed to assess the status of a process. An exponentially weighted moving average and standard deviation was used to estimate process mean and standard deviation. Process means were estimated overall and for each control level. The process standard deviation was estimated in terms of within-run standard deviation. Limits were defined in accordance with state of the art- or biological variance-derived cut-offs. The examples given are real, not simulated, data. Individual control sample results were normalized to a target value and target standard deviation. The normalized values were used in the exponentially weighted algorithm. The weighting factor was based on a process time constant, which was estimated from the period between two calibration or maintenance procedures. The proposed system was compared with Westgard rules. The Westgard rules perform well, despite the underlying presumption of ergodicity. This is mainly caused by the introduction of the starting rule of 12s, which proves essential to prevent a large number of rule violations. The probability of reporting a test result with an analytical error that exceeds the total allowable error was calculated for the proposed system as well as for the Westgard rules. The proposed method performed better. The proposed algorithm was implemented in a computer program running on computers to which the analyzers were
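The exponentially weighted tracking of process mean and standard deviation described above can be sketched as follows. The recursion and the shift scenario are simplified assumptions for illustration, not the authors' exact algorithm; as in the paper, the weighting factor could be tied to a process time constant (e.g. lam = 1 - exp(-1/tau)).

```python
import numpy as np

def ewma_qc(z, lam):
    """Exponentially weighted estimates of process mean and SD from
    normalized control results z (target value 0, target SD 1)."""
    mean, var = 0.0, 1.0            # start at the target
    means, sds = [], []
    for zi in z:
        mean = (1.0 - lam) * mean + lam * zi           # EWMA of the mean
        var = (1.0 - lam) * var + lam * (zi - mean) ** 2  # EWMA of the variance
        means.append(mean)
        sds.append(var ** 0.5)
    return np.array(means), np.array(sds)

rng = np.random.default_rng(2)
# 200 in-control normalized results, then a +2 SD shift in the process mean
z = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])
m, s = ewma_qc(z, lam=0.1)
```

The EWMA mean hovers near 0 while the process is in control and then climbs toward 2 after the shift; control limits would be drawn from state-of-the-art or biological-variance cut-offs as in the paper.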
Sparse and accurate high resolution SAR imaging
NASA Astrophysics Data System (ADS)
Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian
2012-05-01
We investigate the usage of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high-resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user parameter-free. IAA has been shown to reconstruct SAR images with excellent side lobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then enforce sparsity by using MAP with a sparsity-inducing prior. By coupling these two methods, we can produce sparse and accurate high-resolution images that are conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications where problem sizes are quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. Via the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high side lobes of the FFT. With our approach, however, clear edges, boundaries, and textures of the vehicles are obtained.
Zhang, Tianhao; Gong, Wei; Wang, Wei; Ji, Yuxi; Zhu, Zhongmin; Huang, Yusi
2016-01-01
Highly accurate data on the spatial distribution of ambient fine particulate matter (<2.5 μm: PM2.5) are currently quite limited in China. By introducing NO2 and the Enhanced Vegetation Index (EVI) into the Geographically Weighted Regression (GWR) model, a newly developed GWR model combined with a fused Aerosol Optical Depth (AOD) product and meteorological parameters could explain approximately 87% of the variability in the corresponding PM2.5 mass concentrations. Estimation accuracy increased markedly relative to the original GWR model without NO2 and EVI, with cross-validation R² rising from 0.77 to 0.87. Both models tended to overestimate when measurements were low and underestimate when they were high, where the exact boundary value depended greatly on the dependent variable. There was still severe PM2.5 pollution in many residential areas until 2015; however, policy-driven energy conservation and emission reduction reduced not only the severity of PM2.5 pollution but also, to a certain extent, its spatial range from 2014 to 2015. The accuracy of satellite-derived PM2.5 still has limitations for regions with insufficient ground monitoring stations and for desert areas. Generally, the use of NO2 and EVI in GWR models could estimate PM2.5 at the national scale more effectively than previous GWR models. The results of this study could provide a reasonable reference for assessing health impacts, and could be used to examine the effectiveness of emission control strategies under implementation in China. PMID:27941628
You, Wei; Zang, Zengliang; Zhang, Lifeng; Li, Yi; Wang, Weiqi
2016-05-01
Taking advantage of their continuous spatial coverage, satellite-derived aerosol optical depth (AOD) products have been widely used to assess the spatial and temporal characteristics of fine particulate matter (PM2.5) on the ground and its effects on human health. However, national-scale ground-level PM2.5 estimation is still very limited because of the lack of ground PM2.5 measurements to calibrate the model in China. In this study, a national-scale geographically weighted regression (GWR) model was developed to estimate ground-level PM2.5 concentration based on satellite AODs, newly released nationwide hourly PM2.5 concentrations, and meteorological parameters. The results showed good agreement between satellite-retrieved and ground-observed PM2.5 concentrations at 943 stations in China. The overall cross-validation (CV) R² is 0.76 and the root mean squared prediction error (RMSE) is 22.26 μg/m³ for MODIS-derived AOD. The MISR-derived AOD also exhibits comparable performance, with a CV R² of 0.81 and an RMSE of 27.46 μg/m³. Annual PM2.5 concentrations retrieved by either MODIS or MISR AOD indicated that most residential community areas exceeded the new annual Chinese PM2.5 National Standard level 2. These results suggest that this approach is useful for estimating large-scale ground-level PM2.5 distributions, especially for regions without PM monitoring sites.
Donato, Mary M.
2006-01-01
Streamflow and trace-metal concentration data collected at 10 locations in the Spokane River basin of northern Idaho and eastern Washington during 1999-2004 were used as input for the U.S. Geological Survey software, LOADEST, to estimate annual loads and mean flow-weighted concentrations of total and dissolved cadmium, lead, and zinc. Cadmium composed less than 1 percent of the total metal load at all stations; lead constituted from 6 to 42 percent of the total load at stations upstream from Coeur d'Alene Lake and from 2 to 4 percent at stations downstream of the lake. Zinc composed more than 90 percent of the total metal load at 6 of the 10 stations examined in this study. Trace-metal loads were lowest at the station on Pine Creek below Amy Gulch, where the mean annual total cadmium load for 1999-2004 was 39 kilograms per year (kg/yr), the mean estimated total lead load was about 1,700 kg/yr, and the mean annual total zinc load was 14,000 kg/yr. The trace-metal loads at stations on North Fork Coeur d'Alene River at Enaville, Ninemile Creek, and Canyon Creek also were relatively low. Trace-metal loads were highest at the station at Coeur d'Alene River near Harrison. The mean annual total cadmium load was 3,400 kg/yr, the mean total lead load was 240,000 kg/yr, and the mean total zinc load was 510,000 kg/yr for 1999-2004. Trace-metal loads at the station at South Fork Coeur d'Alene River near Pinehurst and the three stations on the Spokane River downstream of Coeur d'Alene Lake also were relatively high. Differences in metal loads, particularly lead, between stations upstream and downstream of Coeur d'Alene Lake likely are due to trapping and retention of metals in lakebed sediments. LOADEST software was used to estimate loads for water years 1999-2001 for many of the same sites discussed in this report. Overall, results from this study and those from a previous study are in good agreement. Observed differences between the two studies are attributable to streamflow
Weighting Regressions by Propensity Scores
ERIC Educational Resources Information Center
Freedman, David A.; Berk, Richard A.
2008-01-01
Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…
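A minimal sketch of the weighting the abstract describes, assuming propensity scores have already been estimated (e.g., by a logistic model). The helper names are hypothetical:

```python
import numpy as np

def iptw_weights(treated, e):
    """Inverse-probability-of-treatment weights for the ATE:
    1/e for treated units, 1/(1-e) for controls."""
    treated = np.asarray(treated, float)
    e = np.asarray(e, float)
    return treated / e + (1 - treated) / (1 - e)

def weighted_ls(X, y, w):
    """Weighted least squares via the normal equations (X'WX) b = X'Wy."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Note the abstract's caveat applies directly: propensity scores near 0 or 1 produce extreme weights, which inflate the random error of the weighted estimates.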
Davis, M E; Simmen, R C M
2010-02-01
Data for the current study were obtained from a divergent selection experiment in which the selection criterion was the average serum IGF-I concentration of 3 postweaning blood samples collected from purebred Angus calves. Multiple trait derivative-free REML procedures were used to obtain estimates of inbreeding depression for IGF-I concentration and for BW and BW gains measured from birth to the conclusion of a 140-d postweaning performance test. Included in the analysis were 3,243 animals in the A(-1) matrix, 2,182 of which had valid records for IGF-I concentration. Over the course of the entire selection experiment, inbreeding of the calf averaged 3.3% (SD = 3.1%) and inbreeding of the dam averaged 1.8% (SD = 2.7%). Mean inbreeding levels at the end of the study were 6.82 +/- 0.38% and 4.20 +/- 0.36% for calves and dams, respectively. Annual rates of increase in inbreeding of calves and dams were 0.36 +/- 0.01 (P < 0.0001) and 0.25 +/- 0.01%/yr (P < 0.0001), respectively. Insulin-like growth factor I concentration at d 28 (IGF28), 42 (IGF42), and 56 (IGF56) of the 140-d postweaning test and mean IGF-I concentration decreased by 0.62 +/- 0.88, 1.86 +/- 0.96, 1.92 +/- 0.89, and 1.48 +/- 0.76 ng/mL per 1% increase in inbreeding of calf. Only the regression coefficient for IGF56 differed significantly from zero, although the regression coefficients for IGF42 and mean IGF-I approached significance (P < 0.10). Increases in inbreeding levels of the dams also tended to result in reduced IGF-I concentrations, although the regression coefficients were not significantly different from zero. Inbreeding of calf had highly significant negative effects on all BW and BW gain traits examined, except for birth weight, with regression coefficients ranging from -0.74 +/- 0.20 kg/% increase in calf inbreeding for postweaning BW gain to -1.68 +/- 0.33 kg/% increase in calf inbreeding for off-test BW. Inbreeding of dam had a significant negative effect on birth weight of progeny and
ERIC Educational Resources Information Center
Zhang, Jinming; Lu, Ting
2007-01-01
In practical applications of item response theory (IRT), item parameters are usually estimated first from a calibration sample. After treating these estimates as fixed and known, ability parameters are then estimated. However, the statistical inferences based on the estimated abilities can be misleading if the uncertainty of the item parameter…
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
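The paper's schemes are not reproduced here, but the standard fourth-order central difference for a first derivative conveys the single-step explicit flavor:

```python
import math

# Textbook fourth-order central difference for f'(x):
# f'(x) ~ (-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)) / (12 h),
# with leading truncation error (h^4 / 30) f'''''(x).
def d1_central4(f, x, h):
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)
```

For d/dx sin(x) at x = 0 with h = 0.1 this is already accurate to a few parts in a million; the paper's eleventh-order and spectral-like schemes push the resolution much further per grid point.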
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order because of the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
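For context, a conventional monotone slope limiter (Fritsch-Carlson style) can be sketched as below; the paper's contribution is a relaxed, median-based variant that avoids the accuracy loss this version incurs at extrema. The function name is illustrative:

```python
import math

def limited_slopes(x, y):
    """Nodal derivatives for a monotone piecewise cubic Hermite interpolant:
    zero the slope at sign changes (local extrema), otherwise cap the
    centered slope at 3x the smaller adjacent secant slope."""
    n = len(x)
    d = [(y[i+1] - y[i]) / (x[i+1] - x[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]            # simple one-sided endpoint choice
    for i in range(1, n - 1):
        if d[i-1] * d[i] <= 0:
            m[i] = 0.0                   # extremum: flatten to preserve shape
        else:
            c = 0.5 * (d[i-1] + d[i])    # centered estimate
            cap = 3.0 * min(abs(d[i-1]), abs(d[i]))
            m[i] = math.copysign(min(abs(c), cap), c)
    return m
```

The `m[i] = 0` branch at extrema is exactly where accuracy drops to second order, which is the behavior the paper's median-based relaxation corrects.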
Hwang, Jun Hyun; Ryu, Dong Hee; Park, Soon-Woo
2015-08-01
We investigated the interaction effect between body weight perception and chronic disease comorbidities on body weight control behavior in overweight/obese Korean adults. We analyzed data from 9,138 overweight/obese adults ≥20 yr of age from a nationally representative cross-sectional survey. Multiple logistic regression using an interaction model was performed to estimate the effect of chronic disease comorbidities on weight control behavior regarding weight perception. Adjusted odds ratios for weight control behavior tended to increase significantly with an increasing number of comorbidities in men regardless of weight perception (P<0.05 for trend), suggesting no interaction. Unlike women who perceived their weight accurately, women who under-perceived their weight did not show significant improvements in weight control behavior even with an increasing number of comorbidities. Thus, a significant interaction between weight perception and comorbidities was found only in women (P=0.031 for interaction). The effect of the relationship between accurate weight perception and chronic disease comorbidities on weight control behavior varied by sex. Improving awareness of body image is particularly necessary for overweight and obese women to prevent complications.
The value of body weight measurement to assess dehydration in children.
Pruvost, Isabelle; Dubos, François; Chazard, Emmanuel; Hue, Valérie; Duhamel, Alain; Martinot, Alain
2013-01-01
Dehydration secondary to gastroenteritis is one of the most common reasons for office visits and hospital admissions. The indicator most commonly used to estimate dehydration status is acute weight loss. Post-illness weight gain is considered the gold standard for determining the true level of dehydration and is widely used to estimate weight loss in research. To determine the value of post-illness weight gain as a gold standard for acute dehydration, we conducted a prospective cohort study in which 293 children, aged 1 month to 2 years, with acute diarrhea were followed for 7 days during a 3-year period. The main outcome measures were an accurate pre-illness weight (if available within 8 days before the diarrhea), post-illness weight, and theoretical weight (predicted from the child's individual growth chart). Post-illness weight was measured for 231 (79%) and both theoretical and post-illness weights were obtained for 111 (39%). Only 62 (21%) had an accurate pre-illness weight. The correlation between post-illness and theoretical weight was excellent (0.978), but bootstrapped linear regression analysis showed that post-illness weight underestimated theoretical weight by 0.48 kg (95% CI: 0.06-0.79, p<0.02). The mean difference in the calculated fluid deficit was 4.0% of body weight (95% CI: 3.2-4.7, p<0.0001). Theoretical weight overestimated accurate pre-illness weight by 0.21 kg (95% CI: 0.08-0.34, p = 0.002). Post-illness weight underestimated pre-illness weight by 0.19 kg (95% CI: 0.03-0.36, p = 0.02). The prevalence of 5% dehydration according to post-illness weight (21%) was significantly lower than the prevalence estimated by either theoretical weight (60%) or clinical assessment (66%, p<0.0001). These data suggest that post-illness weight is of little value as a gold standard to determine the true level of dehydration. The performance of dehydration signs or scales determined by using post-illness weight as a gold standard has to be reconsidered.
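The fluid-deficit arithmetic underlying these comparisons is simple: the deficit is the weight regained on rehydration expressed as a fraction of the rehydrated (reference) weight. A sketch, with an illustrative function name:

```python
def fluid_deficit_pct(ill_weight_kg, reference_weight_kg):
    """Fluid deficit as a percentage of body weight, where the reference
    weight is the pre-illness, post-illness, or theoretical weight."""
    return 100.0 * (reference_weight_kg - ill_weight_kg) / reference_weight_kg
```

A 10.0 kg child weighing 9.5 kg at presentation would be classified as 5% dehydrated; the study's point is that the classification changes substantially depending on which reference weight is used.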
Schneider, André; Nguyen, Christophe
2011-01-01
Organic acids released from plant roots can form complexes with cadmium (Cd) in the soil solution and influence metal bioavailability not only through the nature and concentration of the complexes but also through their lability. The lability of a complex influences its ability to buffer changes in the concentration of free Cd ions; it depends on the association (k_a) and dissociation (k_d) rate constants. A resin exchange method was used to estimate k_d and the conditional association constant, which depends on the calcium (Ca) concentration in solution. The constants were estimated for oxalate, citrate, and malate, three low-molecular-weight organic acids commonly exuded by plant roots and expected to strongly influence Cd uptake by plants. For all three organic acids, the conditional association and dissociation estimates were around 2.5 × 10 m mol s and 1.3 × 10 s, respectively. Based on the literature, these values indicate that the complexes formed between Cd and low-molecular-weight organic acids may be less labile than complexes formed with soluble soil organic matter but more labile than those formed with aminopolycarboxylic chelates.
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
Maximum entropy principle and partial probability weighted moments
NASA Astrophysics Data System (ADS)
Deng, Jian; Pandey, M. D.; Xie, W. C.
2012-05-01
Maximum entropy principle (MaxEnt) is usually used for estimating the probability density function under specified moment constraints. The density function is then integrated to obtain the cumulative distribution function, which needs to be inverted to obtain a quantile corresponding to some specified probability. In such analysis, consideration of higher-order moments is important for accurate modelling of the distribution tail. There are three drawbacks to this conventional methodology: (1) estimates of higher-order (>2) moments from a small sample of data tend to be highly biased; (2) it can only handle complete, noncensored samples; (3) only probability weighted moments of integer orders have been utilized. These difficulties inevitably induce bias and inaccuracy in the resultant quantile estimates and have therefore been the main impediments to the application of the MaxEnt principle in extreme quantile estimation. This paper attempts to overcome these problems and presents a distribution-free method for estimating the quantile function of a non-negative random variable using the principle of maximum partial entropy subject to constraints of the partial probability weighted moments estimated from a censored sample. The main contributions include: (1) new concepts, i.e., partial entropy, fractional partial probability weighted moments, and the partial Kullback-Leibler measure, are defined; (2) the maximum entropy principle is re-formulated to be constrained by fractional partial probability weighted moments; (3) new distribution-free quantile functions are derived. Numerical analyses are performed to assess the accuracy of extreme value estimates computed from censored samples.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
John-Denny, Blessy; McGarvey, Kathryn; Hann, Alexandra; Pegiazoglou, Ioannis; Peat, Jennifer
2017-01-01
Objective To prospectively compare the actual weights of Australian children in an ethnically diverse metropolitan setting with the predicted weights using the Paediatric Advanced Weight Prediction in the Emergency Room (PAWPER) tape, Broselow tape, Mercy system and calculated weights using the updated Advanced Paediatric Life Support (APLS), Luscombe and Owens and Best Guess formulae. Methods A prospective, cross-sectional, observational, blinded, convenience study conducted at the Children's Hospital at Westmead Paediatric Emergency Department in Sydney, Australia. Comparisons were made using Bland-Altman plots, mean difference, limits of agreement and estimated weight within 10% and 20% of actual weight. Results 199 patients were enrolled in the study with a mean actual weight of 27.2 kg (SD 17.2). Length-based tools, with or without body habitus adjustment, performed better than age-based formulae. When measuring estimated weight within 10% of actual weight, PAWPER performed best with 73%, followed by Mercy (69%), PAWPER with no adjustment (62%), Broselow (60%), Best Guess (47%), Luscombe and Owens (41%) and revised APLS (40%). Mean difference was similar across all methods ranging from 0.4 kg (0.0, 0.9) for Mercy to −2.2 kg (−3.5, −0.9) for revised APLS. Limits of agreement were narrower for the length-based tools (−5.9, 6.8 Mercy; −8.3, 5.6 Broselow; −9.0, 7.1 PAWPER adjusted; −12.1, 9.2 PAWPER unadjusted) than the age-based formulae (−18.6, 17.4 Best Guess; −19.4, 15.1 revised APLS, −21.8, 17.7 Luscombe and Owens). Conclusion In an ethnically diverse population, length-based methods with or without body habitus modification are superior to age-based methods for predicting actual body weight. Body habitus modifications increase the accuracy and precision slightly. PMID:27799153
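For reference, two of the age-based formulae compared above are commonly quoted in the forms below (age in years, result in kg). These are sketches for illustration only; the formulae have been revised over time and differ by age band, so they must be checked against current APLS and local guidance before any clinical use:

```python
# Commonly cited age-based paediatric weight estimates (illustrative only).

def weight_luscombe_owens(age_yr):
    """Luscombe and Owens formula, typically quoted for ages 1-12 y."""
    return 3 * age_yr + 7

def weight_original_apls(age_yr):
    """The older single-band APLS formula (the study used the updated,
    age-banded APLS revision)."""
    return 2 * (age_yr + 4)
```

The study's finding is that all such age-based estimates were substantially less accurate than length-based tools (PAWPER, Broselow, Mercy) in this population.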
ERIC Educational Resources Information Center
Nutter, June
1995-01-01
Secondary level physical education teachers can have their students use math concepts while working out on the weight-room equipment. The article explains how students can reinforce math skills while weightlifting by estimating their strength, estimating their power, or calculating other formulas. (SM)
Calabrò, P S; Moraci, N; Suraci, P
2012-03-15
This paper presents the results of laboratory column tests aimed at defining the optimum weight ratio of zero-valent iron (ZVI)/pumice granular mixtures to be used in permeable reactive barriers (PRBs) for the removal of nickel from contaminated groundwater. The tests were carried out feeding the columns with aqueous solutions of nickel nitrate at concentrations of 5 and 50 mg/l using three ZVI/pumice granular mixtures at various weight ratios (10/90, 30/70 and 50/50), for a total of six column tests; two additional tests were carried out using ZVI alone. The most successful compromise between reactivity (higher ZVI content) and long-term hydraulic performance (higher Pumice content) seems to be given by the ZVI/pumice granular mixture with a 30/70 weight ratio.
Dehydration of seabird prey during transport to the colony: Effects on wet weight energy densities
Montevecchi, W.A.; Piatt, John F.
1987-01-01
We present evidence to indicate that dehydration of prey transported by seabirds from capture sites at sea to chicks at colonies inflates estimates of wet weight energy densities. These findings and a comparison of wet and dry weight energy densities reported in the literature emphasize the importance of (i) accurate measurement of the fresh weight and water content of prey, (ii) use of dry weight energy densities in comparisons among species, seasons, and regions, and (iii) cautious interpretation and extrapolation of existing data sets.
Austin, Peter C; Stuart, Elizabeth A
2015-04-30
There is increasing interest in estimating the causal effects of treatments using observational data. Propensity-score matching methods are frequently used to adjust for differences in observed characteristics between treated and control individuals in observational studies. Survival or time-to-event outcomes occur frequently in the medical literature, but the use of propensity score methods in survival analysis has not been thoroughly investigated. This paper compares two approaches for estimating the Average Treatment Effect (ATE) on survival outcomes: Inverse Probability of Treatment Weighting (IPTW) and full matching. The performance of these methods was compared in an extensive set of simulations that varied the extent of confounding and the amount of misspecification of the propensity score model. We found that both IPTW and full matching resulted in estimation of marginal hazard ratios with negligible bias when the ATE was the target estimand and the treatment-selection process was weak to moderate. However, when the treatment-selection process was strong, both methods resulted in biased estimation of the true marginal hazard ratio, even when the propensity score model was correctly specified. When the propensity score model was correctly specified, bias tended to be lower for full matching than for IPTW. The reasons for these biases and for the differences between the two methods appeared to be due to some extreme weights generated for each method. Both methods tended to produce more extreme weights as the magnitude of the effects of covariates on treatment selection increased. Furthermore, more extreme weights were observed for IPTW than for full matching. However, the poorer performance of both methods in the presence of a strong treatment-selection process was mitigated by the use of IPTW with restriction and full matching with a caliper restriction when the propensity score model was correctly specified.
ERIC Educational Resources Information Center
Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.
1999-01-01
Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)
Accurate spectral color measurements
NASA Astrophysics Data System (ADS)
Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.
1999-08-01
Surface color measurement is important in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5-unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference between two samples, but if we also want exact color coordinate values, accuracy problems arise. The values from two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular-included and specular-excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise of measuring time, conditions and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show the accuracy demands a good colorimeter should meet.
Modeling operating weight and axle weight distributions for highway vehicles
Greene, D.L.; Liang, J.C.
1988-07-01
The estimation of highway cost responsibility requires detailed information on vehicle operating weights and axle weights by type of vehicle. Typically, 10-20 vehicle types must be cross-classified by 10-20 registered weight classes and again by 20 or more operating weight categories, resulting in 100-400 relative frequencies to be determined for each vehicle type. For each of these, gross operating weight must be distributed to each axle or axle unit. Given the rarity of many of the heaviest vehicle types, direct estimation of these frequencies and axle weights from traffic classification count statistics and truck weight data may exceed the reliability of even the largest (e.g., 250,000-record) data sources. An alternative is to estimate statistical models of operating weight distributions as functions of registered weight, and models of axle weight shares as functions of operating weight. This paper describes the estimation of such functions using the multinomial logit model (a log-linear model) and the implementation of the modeling framework as a PC-based FORTRAN program. Areas for further research include the addition of highway class and region as explanatory variables in operating weight distribution models, and the development of theory for including registration costs and costs of operating overweight in the modeling framework. 14 refs., 45 figs., 5 tabs.
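The multinomial logit at the core of the approach maps class-specific utilities to a set of shares via a softmax. This is a generic sketch, not the authors' fitted model:

```python
import math

def mnl_shares(utilities):
    """Multinomial logit: P(class k) = exp(u_k) / sum_j exp(u_j).
    In the paper's setting, u_k would be a log-linear function of
    registered weight for operating-weight class k."""
    exps = [math.exp(u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]
```

Because the shares always sum to one, a small number of estimated utility coefficients can smooth the hundreds of relative frequencies that would otherwise have to be estimated cell by cell from sparse truck weight data.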
Weighted Automata and Weighted Logics
NASA Astrophysics Data System (ADS)
Droste, Manfred; Gastin, Paul
In automata theory, a fundamental result of Büchi and Elgot states that the recognizable languages are precisely the ones definable by sentences of monadic second order logic. We will present a generalization of this result to the context of weighted automata. We develop syntax and semantics of a quantitative logic; like the behaviors of weighted automata, the semantics of sentences of our logic are formal power series describing ‘how often’ the sentence is true for a given word. Our main result shows that if the weights are taken in an arbitrary semiring, then the behaviors of weighted automata are precisely the series definable by sentences of our quantitative logic. We achieve a similar characterization for weighted Büchi automata acting on infinite words, if the underlying semiring satisfies suitable completeness assumptions. Moreover, if the semiring is additively locally finite or locally finite, then natural extensions of our weighted logic still have the same expressive power as weighted automata.
Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping
2015-06-26
In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). Accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared with the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on the universal dn/dc for quantification of polysaccharides and their fractions is much simpler, more rapid, and accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on the universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.
Epidemic spreading on complex networks with general degree and weight distributions
NASA Astrophysics Data System (ADS)
Wang, Wei; Tang, Ming; Zhang, Hai-Feng; Gao, Hui; Do, Younghae; Liu, Zong-Hua
2014-10-01
The spread of disease on complex networks has attracted wide attention in the physics community. Recent works have demonstrated that heterogeneous degree and weight distributions have a significant influence on the epidemic dynamics. In this study, a novel edge-weight-based compartmental approach is developed to estimate the epidemic threshold and epidemic size (final infected density) on networks with general degree and weight distributions, and remarkable agreement with numerics is obtained. The approach remains applicable even in networks with strongly heterogeneous degree and weight distributions. We then propose an edge-weight-based removal strategy with different biases and find that such a strategy can effectively control the spread of an epidemic when the highly weighted edges are preferentially removed, especially when the weight distribution of a network is extremely heterogeneous. The theoretical results from the suggested method can accurately predict the removal effectiveness described above.
Sauer, Helene; Dammann, Dirk; Zipfel, Stephan; Teufel, Martin; Junne, Florian; Enck, Paul; Giel, Katrin Elisabeth; Mack, Isabelle
2016-01-01
Objective The aim of the study was to investigate whether obese children and adolescents have a disturbed body representation as compared to normal-weight participants matched for age and gender and whether their body representation changes in the course of an inpatient weight-reduction program. Methods Sixty obese (OBE) and 27 normal-weight (NW) children and adolescents (age: 9–17) were assessed for body representation using a multi-method approach. Therefore, we assessed body size estimation, tactile size estimation, heartbeat detection accuracy, and attitudes towards one’s own body. OBE were examined upon admission and before discharge of an inpatient weight-reduction program. NW served as cross-sectional control group. Results Body size estimation and heartbeat detection accuracy were similar in OBE and NW. OBE overestimated sizes in tactile size estimation and were more dissatisfied with their body as compared to NW. In OBE but not in NW, several measures of body size estimation correlated with negative body evaluation. After weight-loss treatment, OBE had improved in heartbeat detection accuracy and were less dissatisfied with their body. None of the assessed variables predicted weight-loss success. Conclusions Although OBE children and adolescents generally perceived their body size and internal status of the body accurately, weight reduction improved their heartbeat detection accuracy and body dissatisfaction. PMID:27875563
Unbiased bootstrap error estimation for linear discriminant analysis.
Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R
2014-12-01
Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
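The convex combination at issue can be written directly. The 0.632 default is the classical fixed weight; the paper derives sample-size- and Bayes-error-dependent weights instead:

```python
def convex_bootstrap_error(err_resub, err_boot, w=0.632):
    """Convex bootstrap error estimator: a weighted combination of the
    (optimistically biased) resubstitution error and the (pessimistically
    biased) basic bootstrap error."""
    return (1 - w) * err_resub + w * err_boot
```

For example, a classifier with zero resubstitution error and 0.5 basic bootstrap error gets a 0.632-weighted estimate of 0.316; the paper shows that the unbiased choice of w can deviate significantly from 0.632.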
Estimating potential evapotranspiration with improved radiation estimation
Technology Transfer Automated Retrieval System (TEKTRAN)
Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...
Random Weighted Sobolev Inequalities and Application to Quantum Ergodicity
NASA Astrophysics Data System (ADS)
Robert, Didier; Thomann, Laurent
2015-05-01
This paper is a continuation of Poiret et al. (Ann Henri Poincaré 16:651-689, 2015), where we studied a randomisation method based on the Laplacian with harmonic potential. Here we extend our previous results to the case of any polynomial and confining potential V on . We construct measures, under concentration type assumptions, on the support of which we prove optimal weighted Sobolev estimates on . This construction relies on accurate estimates on the spectral function in a non-compact configuration space. Then we prove random quantum ergodicity results without specific assumption on the classical dynamics. Finally, we prove that almost all bases of Hermite functions are quantum uniquely ergodic.
Forman, Michele R.; Zhu, Yeyi; Hernandez, Ladia M.; Himes, John H.; Dong, Yongquan; Danish, Robert K.; James, Kyla E.; Caulfield, Laura E.; Kerver, Jean M.; Arab, Lenore; Voss, Paula; Hale, Daniel E.; Kanafani, Nadim; Hirschfeld, Steven
2014-01-01
Surrogate measures are needed when recumbent length or height is unobtainable or unreliable. Arm span has been used as a surrogate but is not feasible in children with shoulder or arm contractures. Ulnar length is not usually impaired by joint deformities, yet its utility as a surrogate has not been adequately studied. In this cross-sectional study, we aimed to examine the accuracy and reliability of ulnar length measured by different tools as a surrogate measure of recumbent length and height. Anthropometrics [recumbent length, height, arm span, and ulnar length by caliper (ULC), ruler (ULR), and grid (ULG)] were measured in 1479 healthy infants and children aged <6 y across 8 study centers in the United States. Multivariate mixed-effects linear regression models for recumbent length and height were developed by using ulnar length and arm span as surrogate measures. The agreement between the measured length or height and the predicted values by ULC, ULR, ULG, and arm span were examined by Bland-Altman plots. All 3 measures of ulnar length and arm span were highly correlated with length and height. The degree of precision of prediction equations for length by ULC, ULR, and ULG (R2 = 0.95, 0.95, and 0.92, respectively) was comparable with that by arm span (R2 = 0.97) using age, sex, and ethnicity as covariates; however, height prediction by ULC (R2 = 0.87), ULR (R2 = 0.85), and ULG (R2 = 0.88) was less comparable with arm span (R2 = 0.94). Our study demonstrates that arm span and ULC, ULR, or ULG can serve as accurate and reliable surrogate measures of recumbent length and height in healthy children; however, ULC, ULR, and ULG tend to slightly overestimate length and height in young infants and children. Further testing of ulnar length as a surrogate is warranted in physically impaired or nonambulatory children. PMID:25031329
Accurate bulk density determination of irregularly shaped translucent and opaque aerogels
NASA Astrophysics Data System (ADS)
Petkov, M. P.; Jones, S. M.
2016-05-01
We present a volumetric method for accurate determination of the bulk density of aerogels, calculated from the extrapolated weight of the dry pure solid and volume estimates based on Archimedes' principle of volume displacement, using packed 100 μm monodispersed glass spheres as a "quasi-fluid" medium. Hard-particle packing theory is invoked to demonstrate the reproducibility of the apparent density of the quasi-fluid. Accuracy rivaling that of the refractive index method is demonstrated for both translucent and opaque aerogels with different absorptive properties, as well as for aerogels with regular and irregular shapes.
NASA Technical Reports Server (NTRS)
Howard, W. H.; Young, D. R.
1972-01-01
Device applies compressive force to bone to minimize loss of bone calcium during weightlessness or bedrest. Force is applied through weights, or hydraulic, pneumatic or electrically actuated devices. Device is lightweight and easy to maintain and operate.
Fast and accurate registration techniques for affine and nonrigid alignment of MR brain images.
Liu, Jia-Xiu; Chen, Yong-Sheng; Chen, Li-Fen
2010-01-01
Registration of magnetic resonance brain images is a geometric operation that determines point-wise correspondences between two brains. It remains a difficult task due to the highly convoluted structure of the brain. This paper presents novel methods, Brain Image Registration Tools (BIRT), that can rapidly and accurately register brain images by utilizing brain structure information estimated from image derivatives. Source and target image spaces are related by an affine transformation and a non-rigid deformation. The deformation field is modeled by a set of Wendland's radial basis functions hierarchically deployed near the salient brain structures. In general, nonlinear optimization is heavily engaged in the parameter estimation for the affine/non-rigid transformation, and good initial estimates are thus essential to registration performance. In this work, the affine registration is initialized by a rigid transformation, which can robustly estimate the orientation and position differences of brain images. The parameters of the affine/non-rigid transformation are then hierarchically estimated in a coarse-to-fine manner by maximizing an image similarity measure, the correlation ratio, between the involved images. T1-weighted brain magnetic resonance images were utilized for performance evaluation. Our experimental results using four 3-D image sets demonstrated that BIRT can efficiently align images with high accuracy compared to several other algorithms, and is thus well suited to applications that perform registration intensively. Moreover, a voxel-based morphometric study quantitatively indicated that accurate registration can improve both the sensitivity and specificity of statistical inference results.
Yin, Xianyong; Cheng, Hui; Lin, Yan; Wineinger, Nathan E; Zhou, Fusheng; Sheng, Yujun; Yang, Chao; Li, Pan; Li, Feng; Shen, Changbing; Yang, Sen; Schork, Nicholas J; Zhang, Xuejun
2015-01-01
With a growing number of common variants identified, mainly through genome-wide association studies (GWASs), there is great interest in incorporating the findings into screening of individuals at high risk of psoriasis. The purpose of this study was to establish genetic prediction models and evaluate their discriminatory ability for psoriasis in the Han Chinese population. We built the genetic prediction models through a weighted polygenic risk score (PRS) using 14 susceptibility variants in 8,819 samples. We found that the risk of psoriasis among individuals in the top quartile of PRS was significantly larger than among those in the lowest quartile of PRS (OR = 28.20, P < 2.0×10(-16)). We also observed statistically significant associations between the PRS and family history and early age of onset of psoriasis. We also built a predictive model with all 14 known susceptibility variants and alcohol consumption, which achieved an area under the curve statistic of ~0.88. Our study suggests that the 14 known psoriasis susceptibility loci have discriminating potential, and that the PRS is also associated with family history and age of onset. This is the most accurate genetic predictive model for psoriasis to date.
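A weighted PRS is simply a weighted count of risk alleles. The following sketch uses made-up genotypes and effect sizes (not the study's 14 psoriasis loci) to illustrate the construction and the quartile grouping used to compare top and bottom risk groups.

```python
import numpy as np

def weighted_prs(genotypes, log_odds):
    """Weighted polygenic risk score: for each individual, sum over variants
    of the risk-allele count (0, 1, or 2) times the per-allele effect size
    (log odds ratio from GWAS)."""
    return genotypes @ log_odds

# Hypothetical example: 5 individuals x 4 variants (illustrative values only).
rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(5, 4))           # genotype dosages in {0, 1, 2}
beta = np.array([0.25, 0.10, 0.40, 0.15])     # assumed log odds ratios

scores = weighted_prs(G, beta)
# Assign each individual to a PRS quartile (0 = lowest, 3 = highest).
quartile = np.digitize(scores, np.quantile(scores, [0.25, 0.5, 0.75]))
print(scores.round(2), quartile)
```

In the study, odds ratios are then computed by contrasting disease prevalence between the top and bottom quartile groups.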
Floyd, Jessica; ter Kuile, Feiko; Cairns, Matt
2017-01-01
Background Malaria transmission has declined substantially in the 21st century, but pregnant women in areas of sustained transmission still require protection to prevent the adverse pregnancy and birth outcomes associated with malaria in pregnancy (MiP). A recent call to action has been issued to address the continuing low coverage of intermittent preventive treatment of malaria in pregnancy (IPTp). This call has, however, been questioned by some, in part due to concerns about resistance to sulphadoxine-pyrimethamine (SP), the only drug currently recommended for IPTp. Methods and findings Using an existing mathematical model of MiP, we combined estimates of the changing endemicity of malaria across Africa with maps of SP resistance mutations and current coverage of antenatal access and IPTp with SP (IPTp-SP) across Africa. Using estimates of the relationship between SP resistance mutations and the parasitological efficacy of SP during pregnancy, we estimated the varying impact of IPTp-SP across Africa and the incremental value of enhancing IPTp-SP uptake to match current antenatal care (ANC) coverage. The risks of MiP and malaria-attributable low birthweight (mLBW) in unprotected pregnancies (i.e., those not using insecticide-treated nets [ITNs]) leading to live births fell by 37% (33%–41% 95% credible interval [crI]) and 31% (27%–34% 95% crI), respectively, from 2000 to 2015 across endemic areas in sub-Saharan Africa. However, these gains are fragile, and coverage is far from optimal. In 2015, 9.5 million (8.3 million–10.4 million 95% crI) of 30.6 million pregnancies in these areas would still have been infected with Plasmodium falciparum without intervention, leading to 750,000 (390,000–1.1 million 95% crI) mLBW deliveries. In all, 6.6 million (5.6 million–7.3 million 95% crI) of these 9.5 million (69.3%) pregnancies at risk of infection (and 53.4% [16.3 million/30.6 million] of all pregnancies) occurred in settings with near-perfect SP curative
Illuminant spectrum estimation at a pixel.
Ratnasingam, Sivalogeswaran; Hernández-Andrés, Javier
2011-04-01
In this paper, an algorithm is proposed to estimate the spectral power distribution of a light source at a pixel. The first step of the algorithm is forming a two-dimensional illuminant invariant chromaticity space. In estimating the illuminant spectrum, generalized inverse estimation and Wiener estimation methods were applied. The chromaticity space was divided into small grids and a weight matrix was used to estimate the illuminant spectrum illuminating the pixels that fall within a grid. The algorithm was tested using a different number of sensor responses to determine the optimum number of sensors for accurate colorimetric and spectral reproduction. To investigate the performance of the algorithm realistically, the responses were multiplied with Gaussian noise and then quantized to 10 bits. The algorithm was tested with standard and measured data. Based on the results presented, the algorithm can be used with six sensors to obtain a colorimetrically good estimate of the illuminant spectrum at a pixel.
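Wiener estimation recovers a spectrum from a handful of sensor responses given a prior covariance over spectra. This toy sketch, with all matrices and parameters assumed rather than taken from the paper, uses six Gaussian sensor bands, matching the sensor count the authors found adequate.

```python
import numpy as np

def wiener_estimate(r, A, Cs, Cn):
    """Wiener estimate of a spectrum s from sensor responses r = A s + noise:
    s_hat = Cs A^T (A Cs A^T + Cn)^{-1} r, where Cs is the prior covariance
    of spectra and Cn the noise covariance."""
    W = Cs @ A.T @ np.linalg.inv(A @ Cs @ A.T + Cn)
    return W @ r

# Toy setup: 31 spectral bands, 6 Gaussian sensor sensitivities, and a
# smooth-spectrum prior from a squared-exponential kernel (all assumed).
nb, ns = 31, 6
wl = np.linspace(0.0, 1.0, nb)
centers = np.linspace(0.1, 0.9, ns)
A = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 0.08) ** 2)
Cs = np.exp(-0.5 * ((wl[:, None] - wl[None, :]) / 0.15) ** 2)
Cn = 1e-4 * np.eye(ns)

s_true = np.exp(-0.5 * ((wl - 0.55) / 0.2) ** 2)   # smooth illuminant
r = A @ s_true + 1e-3 * np.random.default_rng(2).normal(size=ns)
s_hat = wiener_estimate(r, A, Cs, Cn)
print(float(np.corrcoef(s_true, s_hat)[0, 1]))
```

With a smooth prior and low noise, six bands suffice for a close spectral reconstruction, which is consistent with the paper's finding on sensor count.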
Wong, Angelita Pui-Yee; Pipitone, Jon; Park, Min Tae M; Dickie, Erin W; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Chakravarty, M Mallar; Pausova, Zdenka; Paus, Tomáš
2014-07-01
The pituitary gland is a key structure in the hypothalamic-pituitary-gonadal (HPG) axis--it plays an important role in sexual maturation during puberty. Despite its small size, its volume can be quantified using magnetic resonance imaging (MRI). Here, we study a cohort of 962 typically developing adolescents from the Saguenay Youth Study and estimate pituitary volumes using a newly developed multi-atlas segmentation method known as the MAGeT Brain algorithm. We found that age and puberty stage (controlled for age) each predicts adjusted pituitary volumes (controlled for total brain volume) in both males and females. Controlling for the effects of age and puberty stage, total testosterone and estradiol levels also predict adjusted pituitary volumes in males and pre-menarche females, respectively. These findings demonstrate that the pituitary gland grows during adolescence, and its volume relates to circulating plasma-levels of sex steroids in both males and females.
Estimating toner usage with laser electrophotographic printers
NASA Astrophysics Data System (ADS)
Wang, Lu; Abramsohn, Dennis; Ives, Thom; Shaw, Mark; Allebach, Jan
2013-02-01
Accurate estimation of toner usage is an area of ongoing importance for laser, electrophotographic (EP) printers. We propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and scanned pages. We then form a weighted sum of these pixel values to predict overall toner usage on the printed page. The weights are chosen by least-squares regression against toner usage measured with a set of printed test pages. Our two-stage predictor significantly outperforms existing methods that are based on a simple pixel counting strategy in terms of both accuracy and robustness of the predictions.
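The second stage, choosing page-level weights by least-squares regression, can be sketched as follows; the feature binning and all numeric values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fit_pixel_weights(F, t):
    """Stage 2 of the predictor: given per-page features F (pages x bins of
    summed predicted pixel absorptance) and measured toner usage t, choose
    the weights by least-squares regression."""
    w, *_ = np.linalg.lstsq(F, t, rcond=None)
    return w

# Hypothetical training data: 20 pages, absorptance histogram with 8 bins.
rng = np.random.default_rng(3)
F = rng.uniform(0.0, 1.0, (20, 8))
w_true = np.array([0.0, 0.1, 0.2, 0.35, 0.5, 0.7, 0.85, 1.0])  # assumed
t = F @ w_true + 0.01 * rng.normal(size=20)   # measured usage, small noise

w = fit_pixel_weights(F, t)
predicted = F @ w   # predicted toner usage for each training page
print(w.round(2))
```

At prediction time, a new page's pixel-level absorptance estimates are binned the same way and dotted with the learned weights.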
Hsu, Ya-Wen; Liou, Tsan-Hon; Liou, Yiing Mei; Chen, Hsin-Jen; Chien, Li-Yin
2016-01-01
Children and adolescents often attempt to lose weight, and such attempts may be associated with misperceptions of body weight. Previous studies have emphasized establishing correlations between eating disorders and an overestimated perception of body weight, but few studies have focused on an underestimated perception of body weight. The objective of this study was to explore the relationship between misperceptions of body weight and weight-related risk factors, such as eating disorders, inactivity, and unhealthy behaviors, among overweight children who underestimated their body weight. We conducted a cross-sectional, descriptive study between December 1, 2006 and February 15, 2007. A total of 29,313 children and adolescents studying in grades 4-12 were enrolled in this nationwide, cross-sectional survey, and they were asked to complete questionnaires. A multivariate logistic regression using maximum likelihood estimates was used. The prevalence of body weight misperception was 43.2% (26.4% overestimation and 16.8% underestimation). Factors associated with the underestimated perception of weight among overweight children were parental obesity, dietary control for weight loss, breakfast consumption, self-induced vomiting as a weight control strategy, fried food consumption, engaging in vigorous physical activities, and sleeping for >8 hours per day (odds ratios = 0.86, 0.42, 0.88, 1.37, 1.13, 1.11, and 1.17, respectively). In conclusion, the early establishment of an accurate perception of body weight may mitigate unhealthy behaviors.
NASA Technical Reports Server (NTRS)
Chatterji, Gano
2011-01-01
Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.
Jafari, Shoja; Razzagzadeh, Sarain
2016-03-01
Genetic parameters of growth rates, Kleiber ratios, and fat-tail dimensions were estimated using 22,253 records in the present study. The studied traits were average daily gain from birth to weaning, average daily gain from 9 months of age to yearling, Kleiber ratio from birth to weaning, Kleiber ratio from 9 months of age to yearling, fat-tail length, fat-tail width, and fat-tail thickness. Each trait was fitted by four different animal models, which are differentiated by including or excluding maternal effects. Besides estimates of the genetic and phenotypic correlations among the studied traits, their association with birth-to-yearling live body weights was investigated using a series of bivariate animal models. The direct heritabilities ranged from 0.04 to 0.20, which indicated a wide range of additive genetic variances of the traits. Genetic and phenotypic correlations between the main traits ranged from -0.11 to 0.99 and -0.08 to 0.95, respectively. The results indicated that the traits could be improved by including them in the selection index due to their moderate to high heritability estimates.
Cheung, Yin Bun; Chan, Jerry Kok Yen; Tint, Mya Thway; Godfrey, Keith M.; Gluckman, Peter D.; Kwek, Kenneth; Saw, Seang Mei; Chong, Yap-Seng; Lee, Yung Seng; Yap, Fabian; Lek, Ngee
2016-01-01
Objective Inaccurate parental perception of their child’s weight status is commonly reported in Western countries. It is unclear whether similar misperception exists in Asian populations. This study aimed to evaluate the ability of Singaporean mothers to accurately describe their three-year-old child’s weight status verbally and visually. Methods At three years post-delivery, weight and height of the children were measured. Body mass index (BMI) was calculated and converted into actual weight status using International Obesity Task Force criteria. The mothers were blinded to their child’s measurements and asked to verbally and visually describe what they perceived was their child’s actual weight status. Agreement between actual and described weight status was assessed using Cohen’s Kappa statistic (κ). Results Of 1237 recruited participants, 66.4% (n = 821) with complete data on mothers’ verbal and visual perceptions and children’s anthropometric measurements were analysed. Nearly thirty percent of the mothers were unable to describe their child’s weight status accurately. In verbal description, 17.9% under-estimated and 11.8% over-estimated their child’s weight status. In visual description, 10.4% under-estimated and 19.6% over-estimated their child’s weight status. Many mothers of underweight children over-estimated (verbal 51.6%; visual 88.8%), and many mothers of overweight and obese children under-estimated (verbal 82.6%; visual 73.9%), their child’s weight status. In contrast, significantly fewer mothers of normal-weight children were inaccurate (verbal 16.8%; visual 8.8%). Birth order (p<0.001), maternal (p = 0.004) and child’s weight status (p<0.001) were associated with consistently inaccurate verbal and visual descriptions. Conclusions Singaporean mothers, especially those of underweight and overweight children, may not be able to perceive their young child’s weight status accurately. To facilitate prevention of childhood
Determining the Statistical Significance of Relative Weights
ERIC Educational Resources Information Center
Tonidandel, Scott; LeBreton, James M.; Johnson, Jeff W.
2009-01-01
Relative weight analysis is a procedure for estimating the relative importance of correlated predictors in a regression equation. Because the sampling distribution of relative weights is unknown, researchers using relative weight analysis are unable to make judgments regarding the statistical significance of the relative weights. J. W. Johnson…
Carr, D.B.; Tolley, H.D.
1982-12-01
This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
Accurate estimation of the elastic properties of porous fibers
Thissell, W.R.; Zurek, A.K.; Addessio, F.
1997-05-01
A procedure is described to calculate polycrystalline anisotropic fiber elastic properties with cylindrical symmetry and porosity. It uses a preferred orientation model (the Tome ellipsoidal self-consistent model) for the determination of anisotropic elastic properties for the case of highly oriented carbon fibers. The model predictions, corrected for porosity, are compared to back-calculated fiber elastic properties of an IM6/3501-6 unidirectional composite whose elastic properties have been determined via resonant ultrasound spectroscopy. The Halpin-Tsai equations used to back-calculate fiber elastic properties are found to be inappropriate for anisotropic composite constituents. Modifications are proposed to the Halpin-Tsai equations to expand their applicability to anisotropic reinforcement materials.
Ensemble estimators for multivariate entropy estimation.
Sricharan, Kumar; Wei, Dennis; Hero, Alfred O
2013-07-01
The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^(-γ/d)), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, intrinsic dimension estimators and other examples. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension-invariant rate of O(T^(-1)). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample.
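The idea of cancelling leading bias terms with an affine combination (weights summing to one) can be illustrated with two hypothetical base estimators whose biases are assumed known and of opposite sign; this is a toy analogue of the paper's convex weight-selection problem, not its actual estimator.

```python
import numpy as np

def weighted_combination(estimates, weights):
    """Weighted affine combination of an ensemble of base estimators.
    Weights must sum to 1; the paper chooses them by convex optimization
    so that the leading bias terms of the base estimators cancel."""
    weights = np.asarray(weights, dtype=float)
    assert abs(weights.sum() - 1.0) < 1e-9
    return float(np.dot(weights, estimates))

# Two hypothetical base estimators of a quantity with true value 1.0,
# carrying opposite leading biases +b/n and -2b/n (assumed for illustration):
n, b, truth = 100, 5.0, 1.0
e1 = truth + b / n        # biased high
e2 = truth - 2 * b / n    # biased low

# Solving w1*b - w2*2b = 0 together with w1 + w2 = 1 gives w = (2/3, 1/3),
# which cancels the leading bias exactly in this toy model.
combined = weighted_combination([e1, e2], [2 / 3, 1 / 3])
print(combined)
```

In the paper the same cancellation is engineered across an ensemble of k-NN-type estimators, with the weights found by an offline convex program.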
Francoeur, Richard B
2016-01-01
Addressing subsyndromal depression in cerebrovascular conditions, diabetes, and obesity reduces morbidity and risk of major depression. However, depression may be masked because self-reported symptoms may not reveal dysphoric (sad) mood. In this study, the first wave (2,812 elders) from the New Haven Epidemiological Study of the Elderly (EPESE) was used. These population-weighted data combined a stratified, systematic, clustered random sample from independent residences and a census of senior housing. Physical conditions included progressive cerebrovascular disease (CVD; hypertension, silent CVD, stroke, and vascular cognitive impairment [VCI]) and co-occurring excess weight and/or diabetes. These conditions and interactions (clusters) simultaneously predicted 20 depression items and a latent trait of depression in participants with subsyndromal (including subthreshold) depression (11≤ Center for Epidemiologic Studies Depression Scale [CES-D] score ≤27). The option for maximum likelihood estimation with standard errors that are robust to non-normality and non-independence in complex random samples (MLR) in Mplus and an innovation created by the author were used for estimating unbiased effects from latent trait models with exhaustive specification. Symptom profiles reveal masked depression in 1) older males, related to the metabolic syndrome (hypertension–overweight–diabetes; silent CVD–overweight; and silent CVD–diabetes) and 2) older females or the full sample, related to several diabetes and/or overweight clusters that involve stroke or VCI. Several other disease clusters are equivocal regarding masked depression; a couple do emphasize dysphoric mood. Replicating findings could identify subgroups for cost-effective screening of subsyndromal depression. PMID:28003768
NASA Technical Reports Server (NTRS)
1995-01-01
The Attitude Adjuster is a system for weight repositioning corresponding to a SCUBA diver's changing positions. Compact tubes on the diver's air tank permit controlled movement of lead balls within the Adjuster, automatically repositioning when the diver changes position. Manufactured by Think Tank Technologies, the system is light and small, reducing drag and energy requirements and contributing to lower air consumption. The Mid-Continent Technology Transfer Center helped the company with both technical and business information and arranged for the testing at Marshall Space Flight Center's Weightlessness Environmental Training Facility for astronauts.
Reconstructing Weighted Networks from Dynamics
NASA Astrophysics Data System (ADS)
Ching, Emily S. C.; Lai, P. Y.; Leung, C. Y.
2015-03-01
The knowledge of how the different nodes of a network interact or link with one another is crucial for the understanding of the collective behavior and the functionality of the network. We have recently developed a method that can reconstruct both the links and their relative coupling strength of bidirectional weighted networks. Our method requires only measurements of node dynamics as input and is based on a relation between the pseudo-inverse of the matrix of the correlation of the node dynamics and the Laplacian matrix of the weighted network. Using several examples of different dynamics, we demonstrate that our method can accurately reconstruct the connectivity as well as the weights of the links for weighted random and weighted scale-free networks with both linear and nonlinear dynamics. The work of ESCC and CYL has been supported by the Hong Kong Research Grants Council under Grant No. CUHK 14300914.
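For symmetric linear (Ornstein-Uhlenbeck-type) node dynamics, the stationary covariance of the node states and the weighted Laplacian are pseudo-inverses of each other up to a noise scale, which is the relation the method exploits. A minimal sketch under that assumption (the three-node network and dynamics model are illustrative):

```python
import numpy as np

def laplacian(W):
    """Weighted graph Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(1)) - W

def reconstruct_weights(C, sigma2=1.0):
    """Recover the weighted Laplacian from the node-dynamics covariance via
    L_hat = (sigma2/2) * pinv(C), then read the link weights off the
    (negated) off-diagonal entries."""
    L_hat = 0.5 * sigma2 * np.linalg.pinv(C)
    W_hat = -L_hat.copy()
    np.fill_diagonal(W_hat, 0.0)
    return W_hat

# Toy weighted network; for x' = -L x + noise the stationary covariance is
# C = (sigma2/2) * pinv(L) on the subspace orthogonal to the constant mode.
W = np.array([[0.0, 2.0, 0.5],
              [2.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])
C = 0.5 * np.linalg.pinv(laplacian(W))
W_hat = reconstruct_weights(C)
print(np.allclose(W_hat, W, atol=1e-8))  # → True
```

In practice C would be estimated from measured node time series rather than computed analytically, and the accuracy of the recovered weights then depends on the length of the recordings.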
Conley, Amanda; Boardman, Jason D.
2011-01-01
Objective This paper examines the association between weight overestimation and symptoms of disordered eating behaviors using a nationally representative sample of young women. Method We use data from Wave III of the National Longitudinal Study of Adolescent Health to compare self-reported weight (in pounds) to measured weight obtained by interviewers using a scale. Focusing on normal-weight women between the ages of 18 and 24 (n = 2,805), we compare the discrepancy between self-reported and measured weight among women with and without any disordered eating behaviors. Results Women who over-report their weight by at least five percent are significantly more likely than those who either under-report or accurately report their weight to exhibit disordered eating behaviors. These results persist despite controlling for distorted body image. Conclusion Our findings support both motivational and perceptual bias explanations for overestimating weight among those who exhibit disordered eating behaviors. We argue that weight overestimation, together with other important information regarding women's nutrition, exercise, mental health, and health-related behaviors, should be treated as a potential indicator for the diagnosis of an eating disorder among young normal-weight women. PMID:17497706
Link Prediction in Weighted Networks: A Weighted Mutual Information Model
Zhu, Boyao; Xia, Yongxiang
2016-01-01
The link-prediction problem is an open issue in data mining and knowledge discovery, which attracts researchers from disparate scientific communities. A wealth of methods have been proposed to deal with this problem. Among these approaches, most are applied in unweighted networks, with only a few taking the weights of links into consideration. In this paper, we present a weighted model for undirected and weighted networks based on the mutual information of local network structures, where link weights are applied to further enhance the distinguishable extent of candidate links. Empirical experiments are conducted on four weighted networks, and results show that the proposed method can provide more accurate predictions than not only traditional unweighted indices but also typical weighted indices. Furthermore, some in-depth discussions on the effects of weak ties in link prediction as well as the potential to predict link weights are also given. This work may shed light on the design of algorithms for link prediction in weighted networks. PMID:26849659
Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Can, Ismail Ozgur; Aksoy, Sema; Kazimoglu, Cemal
2016-03-01
Radiation exposure during forensic age estimation raises ethical concerns, so it is important to avoid repeated radiation exposure where advanced ultrasonography (USG) and magnetic resonance imaging (MRI) are available. The purpose of this study was to investigate the utility of 3.0-T MRI in determining the degree of ossification of the distal femoral and proximal tibial epiphyses in a sample of the Turkish population. We retrospectively evaluated coronal T2-weighted turbo spin-echo sequences from MRI of 503 patients (305 males, 198 females; age 10-30 years) using a five-stage method. Intra- and interobserver variations were very low (intraobserver reliability was κ=0.919 for the distal femoral epiphysis and κ=0.961 for the proximal tibial epiphysis, and interobserver reliability was κ=0.836 for the distal femoral epiphysis and κ=0.885 for the proximal tibial epiphysis). Spearman's rank correlation analysis indicated a significant positive relationship between age and the extent of ossification of the distal femoral and proximal tibial epiphyses (p<0.001). Comparison of male and female data revealed significant between-gender differences in the ages at first attainment of stages 2, 3, and 4 ossification of the distal femoral epiphysis and stages 1 and 4 ossification of the proximal tibial epiphysis (p<0.05). The earliest ages at which stages 3, 4, and 5 ossification was evident in the distal femoral epiphysis were 14, 17, and 22 years in males and 13, 16, and 21 years in females, respectively. Stages 3, 4, and 5 ossification of the proximal tibial epiphysis was first noted at ages 14, 17, and 18 years in males and 13, 15, and 16 years in females, respectively. MRI of the distal femoral and proximal tibial epiphyses is an alternative, noninvasive, and reliable technique for age estimation.
Comparing Measured Bullet Weight with Manufacturer Specifications
2012-02-01
Figure 3: Bullet weights for 62 grain Berger Flat Base Varmint. Berger 115 grain VLD (0.257): the weight tolerance claimed by Berger ... showed the smallest variation among the .257 inch bullets considered here, and is the most accurate bullet ever tested in the 25-06 Remington 700 Sendero.
Oligomeric cationic polymethacrylates: a comparison of methods for determining molecular weight.
Locock, Katherine E S; Meagher, Laurence; Haeussler, Matthias
2014-02-18
This study compares three common laboratory methods, size-exclusion chromatography (SEC), ¹H nuclear magnetic resonance (NMR), and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF), to determine the molecular weight of oligomeric cationic copolymers. The potential bias of each method was examined across a series of polymers that varied in molecular weight and cationic character (both choice of cation (amine versus guanidine) and relative proportion present). SEC was found to be the least accurate, overestimating Mn by an average of 140%, owing to the lack of appropriate cationic standards available and the complexity involved in estimating the hydrodynamic volume of copolymers. MALDI-TOF approximated Mn well for the highly monodisperse (Đ < 1.1), low molecular weight (degree of polymerization (DP) < 50) species but appeared unsuitable for the largest polymers in the series due to the mass bias associated with the technique. ¹H NMR was found to estimate Mn most accurately in this study, differing from theoretical values by only 5.2%. ¹H NMR end-group analysis is therefore an inexpensive and facile primary quantitative method to estimate the molecular weight of oligomeric cationic polymethacrylates if suitably distinct end-group signals are present in the spectrum.
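End-group analysis reduces to a ratio of per-proton integrals. A minimal sketch with hypothetical integrals and masses (illustrative values, not data from the paper):

```python
def mn_from_endgroup(I_repeat, H_repeat, I_end, H_end, M_repeat, M_end):
    """NMR end-group analysis: the degree of polymerisation DP is the
    per-proton repeat-unit integral divided by the per-proton end-group
    integral; then Mn = DP * M_repeat + M_end."""
    dp = (I_repeat / H_repeat) / (I_end / H_end)
    return dp * M_repeat + M_end

# Hypothetical spectrum: repeat-unit signal integrating to 60 for 2 protons,
# end-group signal integrating to 3 for 3 protons; assumed masses for a
# methacrylate repeat unit and chain ends.
mn = mn_from_endgroup(I_repeat=60.0, H_repeat=2, I_end=3.0, H_end=3,
                      M_repeat=142.2, M_end=166.0)
print(round(mn, 1))  # → 4432.0
```

The method only works when the end-group resonance is well resolved from the backbone signals, which is the caveat the abstract notes.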
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
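IPEG4's actual cost model and coefficients are not given in this abstract; the sketch below only illustrates the general shape of such a price-per-unit roll-up from the listed inputs, with annualization factors that are purely assumed:

```python
# Simplified, hypothetical price-per-unit roll-up from IPEG-style inputs.
# The capital-recovery and floor-space factors are illustrative assumptions,
# not the real IPEG4 coefficients.

def price_per_unit(equipment_cost, floor_space_m2, annual_labor_cost,
                   annual_materials_cost, annual_utilities_cost,
                   annual_production_volume,
                   equipment_recovery=0.2,    # assumed annual capital recovery factor
                   space_cost_per_m2=500.0):  # assumed annual cost per m^2
    """Roll annualized costs into a price per unit of production."""
    annual_cost = (equipment_cost * equipment_recovery
                   + floor_space_m2 * space_cost_per_m2
                   + annual_labor_cost
                   + annual_materials_cost
                   + annual_utilities_cost)
    return annual_cost / annual_production_volume

print(price_per_unit(1_000_000, 200, 300_000, 150_000, 50_000, 100_000))
# -> (200000 + 100000 + 300000 + 150000 + 50000) / 100000 = 8.0
```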
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
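The scaling idea can be sketched as follows: if the coil signal m(t) is proportional to displacement, its second derivative is proportional to acceleration, so fitting a weight factor w in a(t) ≈ w·m''(t) by least squares lets displacement be recovered as w·m(t). The synthetic signals, sample rate, and least-squares fit below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

fs = 1000.0                                     # sample rate in Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
true_depth = 0.04 * np.sin(2 * np.pi * 2 * t)   # 4 cm compressions at 2 Hz

k = 25.0                                        # unknown coil gain (signal units per metre)
magnetic = k * true_depth                       # coil signal, proportional to depth
accel = np.gradient(np.gradient(true_depth, t), t)  # "measured" acceleration

# Least-squares weight factor w relating m''(t) to a(t)
m_dd = np.gradient(np.gradient(magnetic, t), t)
w = float(np.dot(m_dd, accel) / np.dot(m_dd, m_dd))

est_depth = w * magnetic                        # estimated displacement waveform
print(float(np.max(np.abs(est_depth - true_depth))) < 0.002)  # within 2 mm -> True
```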
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
2012-01-01
Background Self-reported anthropometric data are commonly used to estimate prevalence of obesity in population and community-based studies. We aim to: 1) Determine whether survey participants are able and willing to self-report height and weight; 2) Assess the accuracy of self-reported compared to measured anthropometric data in a community-based sample of young people. Methods Participants (16–29 years) of a behaviour survey, recruited at a Melbourne music festival (January 2011), were asked to self-report height and weight; researchers independently weighed and measured a sub-sample. Body Mass Index was calculated and overweight/obesity classified as ≥25 kg/m2. Differences between measured and self-reported values were assessed using paired t-test/Wilcoxon signed ranks test. Accurate report of height and weight were defined as <2 cm and <2 kg difference between self-report and measured values, respectively. Agreement between classification of overweight/obesity by self-report and measured values was assessed using McNemar’s test. Results Of 1405 survey participants, 82% of males and 72% of females self-reported their height and weight. Among 67 participants who were also independently measured, self-reported height and weight were significantly less than measured height (p=0.01) and weight (p<0.01) among females, but no differences were detected among males. Overall, 52% accurately self-reported height, 30% under-reported, and 18% over-reported; 34% accurately self-reported weight, 52% under-reported and 13% over-reported. More females (70%) than males (35%) under-reported weight (p=0.01). Prevalence of overweight/obesity was 33% based on self-report data and 39% based on measured data (p=0.16). Conclusions Self-reported measurements may underestimate weight but accurately identified overweight/obesity in the majority of this sample of young people. PMID:23170838
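The BMI computation and cutoff used in the study are straightforward; a minimal sketch (the sample values are arbitrary):

```python
# BMI = weight / height^2, with overweight/obesity classified as >= 25 kg/m^2,
# as in the study above.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def overweight_or_obese(weight_kg, height_m):
    return bmi(weight_kg, height_m) >= 25.0

print(round(bmi(80.0, 1.75), 1))          # 80 / 1.75^2 -> 26.1
print(overweight_or_obese(80.0, 1.75))    # True
```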
Multi sensor transducer and weight factor
NASA Technical Reports Server (NTRS)
Immer, Christopher D. (Inventor); Lane, John (Inventor); Eckhoff, Anthony J. (Inventor); Perotti, Jose M. (Inventor)
2004-01-01
A multi-sensor transducer and processing method allow in situ monitoring of sensor accuracy and transducer `health`. In one embodiment, the transducer has multiple sensors to provide corresponding output signals in response to a stimulus, such as pressure. A processor applies individual weight factors to each of the output signals and provides a single transducer output that reduces the contribution from inaccurate sensors. The weight factors can be updated and stored. The processor can use the weight factors to provide a `health` of the transducer based upon the number of accurate versus inaccurate sensors in the transducer.
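One way to picture the weighted fusion is below. The median-based weight update is an assumption chosen for illustration, not the patented rule:

```python
# Sketch of weighted multi-sensor fusion: redundant sensors observe the same
# stimulus, and per-sensor weights suppress outliers. The specific update rule
# (zero weight for readings far from the median) is a hypothetical stand-in.

def fuse(readings, weights):
    """Single transducer output: normalized weighted sum of sensor readings."""
    return sum(r * w for r, w in zip(readings, weights)) / sum(weights)

def update_weights(readings, tol=0.5):
    """Down-weight sensors that disagree with the median reading."""
    median = sorted(readings)[len(readings) // 2]
    return [1.0 if abs(r - median) <= tol else 0.0 for r in readings]

readings = [10.1, 9.9, 10.0, 14.2]        # the fourth sensor has drifted
weights = update_weights(readings)         # [1.0, 1.0, 1.0, 0.0]
health = sum(weights) / len(weights)       # fraction of accurate sensors
print(round(fuse(readings, weights), 2), health)  # 10.0 0.75
```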
On numerically accurate finite element
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.
Schroeder, Jonathan P.; Van Riper, David C.
2014-01-01
Areal interpolation transforms data for a variable of interest from a set of source zones to estimate the same variable's distribution over a set of target zones. One common practice has been to guide interpolation by using ancillary control zones that are related to the variable of interest's spatial distribution. This guidance typically involves using source zone data to estimate the density of the variable of interest within each control zone. This article introduces a novel approach to density estimation, the geographically weighted expectation-maximization (GWEM) algorithm, which combines features of two previously used techniques, the expectation-maximization (EM) algorithm and geographically weighted regression. The EM algorithm provides a framework for incorporating proper constraints on data distributions, and using geographical weighting allows estimated control-zone density ratios to vary spatially. We assess the accuracy of GWEM by applying it with land-use/land-cover ancillary data to population counts from a nationwide sample of 1980 United States census tract pairs. We find that GWEM generally is more accurate in this setting than several previously studied methods. Because target-density weighting (TDW)—using 1970 tract densities to guide interpolation—outperforms GWEM in many cases, we also consider two GWEM-TDW hybrid approaches, and find them to improve estimates substantially. PMID:24653524
Schröder, Julian; Cheng, Bastian; Ebinger, Martin; Köhrmann, Martin; Wu, Ona; Kang, Dong-Wha; Liebeskind, David S.; Tourdias, Thomas; Singer, Oliver C.; Christensen, Soren; Campbell, Bruce; Luby, Marie; Warach, Steven; Fiehler, Jens; Fiebach, Jochen B.; Gerloff, Christian; Thomalla, Götz
2016-01-01
Background and Purpose The Alberta Stroke Program Early Computed Tomographic Score (ASPECTS) has been used to estimate diffusion-weighted imaging (DWI) lesion volume in acute stroke. We aimed to assess correlations of DWI-ASPECTS with lesion volume in different middle cerebral artery (MCA) subregions and reproduce existing ASPECTS thresholds of a malignant profile defined by lesion volume ≥100 mL. Methods We analyzed data of patients with MCA stroke from a prospective observational study of DWI and fluid-attenuated inversion recovery in acute stroke. DWI-ASPECTS and lesion volume were calculated. The population was divided into subgroups based on lesion localization (superficial MCA territory, deep MCA territory, or both). Correlation of ASPECTS and infarct volume was calculated, and receiver-operating characteristic curve analysis was performed to identify the optimal ASPECTS threshold for ≥100-mL lesion volume. Results A total of 496 patients were included. There was a significant negative correlation between ASPECTS and DWI lesion volume (r=−0.78; P<0.0001). With regard to lesion localization, the correlation was weaker in the deep MCA region (r=−0.19; P=0.038) than for superficial (r=−0.72; P<0.001) or combined superficial and deep MCA lesions (r=−0.72; P<0.001). Receiver-operating characteristic analysis revealed ASPECTS≤6 as the best cutoff to identify ≥100-mL DWI lesion volume; however, the positive predictive value was low (0.35). Conclusions ASPECTS has limitations when lesion location is not considered. Identification of patients with a malignant profile by DWI-ASPECTS may be unreliable. ASPECTS may be a useful tool for the evaluation of noncontrast computed tomography. However, if MRI is used, ASPECTS seems dispensable because lesion volume can easily be quantified on DWI maps. PMID:25316278
Accurate method for computing correlated color temperature.
Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier
2016-06-27
For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives highly accurate predictions below 0.0012 K for light sources with CCTs ranging from 500 K to 10^{6} K.
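The Newton iteration the paper applies can be sketched generically. The real objective is the squared chromaticity distance to the Planckian locus, built from CIE 1-nm tristimulus summations; here a toy smooth function with a minimum at an assumed 6500 K stands in for it:

```python
import math

def newton_minimize(f1, f2, t0, tol=1e-9, max_iter=50):
    """Newton's method for a 1-D minimum: T <- T - f'(T) / f''(T),
    given the first (f1) and second (f2) derivatives of the objective."""
    t = t0
    for _ in range(max_iter):
        step = f1(t) / f2(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# Toy stand-in objective with its minimum at T = 6500 K; think of f as a
# smooth "distance to the Planckian locus" as a function of temperature.
f1 = lambda t: math.sinh((t - 6500.0) / 1000.0) / 1000.0   # f'(T)
f2 = lambda t: math.cosh((t - 6500.0) / 1000.0) / 1.0e6    # f''(T)
print(round(newton_minimize(f1, f2, 5000.0), 3))  # -> 6500.0
```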
Misperceptions of weight status among adolescents: sociodemographic and behavioral correlates
Bodde, Amy E; Beebe, Timothy J; Chen, Laura P; Jenkins, Sarah; Perez-Vergara, Kelly; Finney Rutten, Lila J; Ziegenfuss, Jeanette Y
2014-01-01
Objective Accurate perceptions of weight status are important motivational triggers for weight loss among overweight or obese individuals, yet weight misperception is prevalent. To identify and characterize individuals holding misperceptions around their weight status, it may be informative for clinicians to assess self-reported body mass index (BMI) classification (ie, underweight, normal, overweight, obese) in addition to clinical weight measurement. Methods Self-reported weight classification data from the 2007 Current Visit Information – Child and Adolescent Survey collected at Mayo Clinic in Rochester, MN, were compared with measured clinical height and weight for 2,993 adolescents. Results While, overall, 74.2% of adolescents accurately reported their weight status, females, younger adolescents, and proxy (vs self) reporters were more accurate. Controlling for demographic and behavioral characteristics, the higher an individual’s BMI percentile, the less likely there was agreement between self-report and measured BMI percentile. Those with high BMI who misperceive their weight status were less likely than accurate perceivers to attempt weight loss. Conclusion Adolescents’ and proxies’ misperception of weight status increases with BMI percentile. Obtaining an adolescent’s self-perceived weight status in addition to measured height and weight offers clinicians valuable baseline information to discuss motivation for weight loss. PMID:25525400
Accurate ab Initio Spin Densities.
Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus
2012-06-12
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740].
Variance estimation for stratified propensity score estimators.
Williamson, E J; Morley, R; Lucas, A; Carpenter, J R
2012-07-10
Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score--the conditional probability of receiving the treatment given observed covariates--is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
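The closing bootstrap suggestion can be sketched on synthetic data. Note two simplifications flagged in the comments: the propensity scores are treated as known rather than re-estimated inside each bootstrap replicate (a faithful bootstrap would refit the propensity model), and the stratified estimator is a crude equal-size-quintile version:

```python
import random
random.seed(0)

def stratified_effect(data, n_strata=5):
    """Stratify on the propensity score and average the within-stratum
    treated-minus-control outcome differences. Records are (pscore, z, y)."""
    data = sorted(data, key=lambda r: r[0])
    size = len(data) // n_strata
    diffs = []
    for i in range(n_strata):
        stratum = data[i * size:(i + 1) * size]
        treated = [y for p, z, y in stratum if z == 1]
        control = [y for p, z, y in stratum if z == 0]
        if treated and control:
            diffs.append(sum(treated) / len(treated) - sum(control) / len(control))
    return sum(diffs) / len(diffs)

def bootstrap_se(data, n_boot=200):
    """Bootstrap SE of the stratified estimate. NOTE: a faithful bootstrap
    would re-estimate the propensity score within each replicate; this
    simplified sketch treats the scores as fixed."""
    estimates = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        estimates.append(stratified_effect(resample))
    mean = sum(estimates) / len(estimates)
    return (sum((e - mean) ** 2 for e in estimates) / (len(estimates) - 1)) ** 0.5

# Synthetic data with a true treatment effect of 1.0.
data = []
for _ in range(500):
    x = random.random()
    pscore = 0.3 + 0.4 * x
    z = 1 if random.random() < pscore else 0
    y = 2.0 * x + 1.0 * z + random.gauss(0.0, 1.0)
    data.append((pscore, z, y))

print(round(stratified_effect(data), 2), round(bootstrap_se(data), 2))
```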
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models.