Regularization Based Iterative Point Match Weighting for Accurate Rigid Transformation Estimation.
Liu, Yonghuai; De Dominicis, Luigi; Wei, Baogang; Chen, Liang; Martin, Ralph R
2015-09-01
Feature extraction and matching (FEM) for 3D shapes finds numerous applications in computer graphics and vision for object modeling, retrieval, morphing, and recognition. However, unavoidable incorrect matches lead to inaccurate estimation of the transformation relating different datasets. Inspired by AdaBoost, this paper proposes a novel iterative re-weighting method to tackle the challenging problem of evaluating point matches established by typical FEM methods. Weights are used to indicate the degree of belief that each point match is correct. Our method has three key steps: (i) estimation of the underlying transformation using weighted least squares, (ii) penalty parameter estimation via minimization of the weighted variance of the matching errors, and (iii) weight re-estimation taking into account both matching errors and information learnt in previous iterations. A comparative study, based on real shapes captured by two laser scanners, shows that the proposed method outperforms four other state-of-the-art methods in terms of evaluating point matches between overlapping shapes established by two typical FEM methods, resulting in more accurate estimates of the underlying transformation. This improved transformation can be used to better initialize the iterative closest point algorithm and its variants, making 3D shape registration more likely to succeed. PMID:26357287
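Step (i) of the pipeline above, estimating the rigid transformation by weighted least squares, can be sketched with the standard weighted Kabsch/SVD solution. This is a generic illustration of weighted rigid-transform fitting, not the authors' exact formulation; the function name and weighting scheme are assumptions.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Estimate rotation R and translation t minimizing
    sum_i w_i * ||R @ P[i] + t - Q[i]||^2 (weighted Kabsch)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    # Weighted centroids of the two point sets
    p_bar = w @ P
    q_bar = w @ Q
    # Weighted cross-covariance between the centered sets
    H = (P - p_bar).T @ np.diag(w) @ (Q - q_bar)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

Down-weighting a suspect match simply shrinks its `w_i`, so the iterative re-weighting loop can call this solver unchanged at every iteration.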
Ultrasound Fetal Weight Estimation: How Accurate Are We Now Under Emergency Conditions?
Dimassi, Kaouther; Douik, Fatma; Ajroudi, Mariem; Triki, Amel; Gara, Mohamed Faouzi
2015-10-01
The primary aim of this study was to evaluate the accuracy of sonographic estimation of fetal weight when performed at due date by first-line sonographers. This was a prospective study including 500 singleton pregnancies. Ultrasound examinations were performed by residents on delivery day. Estimated fetal weights (EFWs) were calculated and compared with the corresponding birth weights. The median absolute difference between EFW and birth weight was 200 g (100-330). This difference was within ±10% in 75.2% of the cases. The median absolute percentage error was 5.53% (2.70%-10.03%). Linear regression analysis revealed a good correlation between EFW and birth weight (r = 0.79, p < 0.0001). According to Bland-Altman analysis, bias was -85.06 g (95% limits of agreement: -663.33 to 494.21). In conclusion, EFWs calculated by residents were as accurate as those calculated by experienced sonographers. Nevertheless, predictive performance remains limited, with a low sensitivity in the diagnosis of macrosomia. PMID:26164286
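The study's two headline accuracy metrics, the median absolute percentage error and the share of estimates within ±10% of birth weight, are simple to compute. A minimal sketch (the weights below are made-up illustration values, not study data):

```python
import statistics

def efw_summary(estimated, actual):
    """Median absolute percentage error and share of estimated fetal
    weights within +/-10% of the corresponding birth weights."""
    abs_pct = [abs(e - a) / a * 100.0 for e, a in zip(estimated, actual)]
    within_10 = 100.0 * sum(p <= 10.0 for p in abs_pct) / len(abs_pct)
    return statistics.median(abs_pct), within_10
```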
Accurate pose estimation for forensic identification
NASA Astrophysics Data System (ADS)
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims to identify subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, a need made more pressing by the quick evolution of space programs and the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance in the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, these can be consolidated through a specific analysis activity involving several techniques and implying additional effort and time. The present empirical approach thus yields approximate values (i.e. not necessarily accurate or consistent), inducing inaccuracy in the results as well as, consequently, difficulties in ranking the performance of multiple options, and an increase in processing time. This is a classic but insufficiently discussed difficulty of preliminary design system studies. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast and easy-to-use weight or mass fraction prediction method. Additionally, such a method should allow a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator
31 CFR 205.24 - How are accurate estimates maintained?
Code of Federal Regulations, 2010 CFR
2010-07-01
Treasury-State Agreement, § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
Micromagnetometer calibration for accurate orientation estimation.
Zhang, Zhi-Qiang; Yang, Guang-Zhong
2015-02-01
Micromagnetometers, together with inertial sensors, are widely used for attitude estimation in a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the summation of the offset and hard iron error as the combined bias and then amalgamate the alignment difference and all the other errors into a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that best fits the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and less than 3° root mean square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
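The first calibration step, fitting an ellipsoid to the raw readings to recover the combined bias, can be illustrated with a simplified axis-aligned variant solved as a linear least-squares problem. The paper fits a general ellipsoid; this sketch is a deliberately reduced assumption-laden version.

```python
import numpy as np

def magnetometer_bias(samples):
    """Fit an axis-aligned ellipsoid a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = 1
    to raw readings (n x 3) and return its centre, i.e. the combined bias."""
    x, y, z = samples[:, 0], samples[:, 1], samples[:, 2]
    A = np.column_stack([x**2, y**2, z**2, x, y, z])
    coef, *_ = np.linalg.lstsq(A, np.ones(len(samples)), rcond=None)
    a, b, c, d, e, f = coef
    # Centre of the fitted quadric: complete the square in each axis
    return np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
```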
Robust ODF smoothing for accurate estimation of fiber orientation.
Beladi, Somaieh; Pathirana, Pubudu N; Brotchie, Peter
2010-01-01
Q-ball imaging was presented as a model-free, linear and multimodal diffusion-sensitive approach to reconstruct the diffusion orientation distribution function (ODF) using diffusion-weighted MRI data. ODFs are widely used to estimate fiber orientations. A smoothness constraint is imposed to achieve a balance between angular resolution and noise stability in ODF reconstruction, and different regularization methods have been proposed for this purpose. However, these methods are not robust and are quite sensitive to the global regularization parameter. Although numerical methods such as the L-curve test can be used to define a globally appropriate regularization parameter, no single value is suitable for all regions of interest. This may result in over-smoothing and potentially in neglecting an existing fiber population. In this paper, we propose to include an interpolation step prior to the spherical harmonic decomposition. This interpolation step, based on Delaunay triangulation, provides a reliable, robust and accurate smoothing approach. The method is easy to implement and does not require other numerical methods to define the required parameters. The fiber orientations estimated using this approach are also more accurate than those obtained with other common approaches. PMID:21096202
Calculating weighted estimates of peak streamflow statistics
Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.
2012-01-01
According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
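For two independent estimates, the variance-weighted combination described above has a closed form: each estimate is weighted inversely to its variance, and the combined variance is smaller than either input. A minimal sketch (names and example numbers are illustrative, not from the publication):

```python
def weighted_estimate(x_site, var_site, x_regr, var_regr):
    """Combine an at-site estimate and a regional-regression estimate
    by inverse-variance weighting, assuming independence."""
    w_site = var_regr / (var_site + var_regr)
    x_comb = w_site * x_site + (1.0 - w_site) * x_regr
    # Combined variance of two independent inverse-variance-weighted estimates
    var_comb = (var_site * var_regr) / (var_site + var_regr)
    return x_comb, var_comb
```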
Efficient and Accurate WLAN Positioning with Weighted Graphs
NASA Astrophysics Data System (ADS)
Hansen, René; Thomsen, Bent
This paper concerns indoor location determination using existing WLAN infrastructures and WLAN-enabled mobile devices. The location fingerprinting technique performs localization by first constructing a radio map of signal strengths from nearby access points. The radio map is subsequently searched using a classification algorithm to determine a location estimate. This paper addresses two distinct challenges of location fingerprinting incurred by positioning moving users. Firstly, movement affects the positioning accuracy negatively due to increased signal strength fluctuations. Secondly, tracking moving users requires low latency, which translates into efficient computations on a mobile device with limited capabilities. We present a technique that simultaneously improves positioning accuracy and computational efficiency: a weighted graph model of the indoor environment restricts the search to the subset of locations in the radio map that are feasible to reach from a previously estimated position. The technique is general and can be used on top of any existing location system. Our results indicate that we are able to achieve dynamic localization accuracy similar to static localization, effectively countering the adverse effects of the added signal fluctuations caused by movement. However, as some of our experiments testify, any location system is fundamentally constrained by the underlying environment. We give pointers to research which allows such problems to be detected early and thereby avoided before deploying a system.
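The core idea, restricting the fingerprint search to candidates reachable in the weighted graph from the previous estimate, can be sketched as follows. The graph layout, costs, and nearest-neighbour matching rule here are illustrative assumptions, not the paper's exact algorithm.

```python
import heapq
import math

def localize(fingerprint, radio_map, graph, prev, max_cost):
    """Return the radio-map location whose stored signal vector is closest
    to the observed fingerprint, considering only locations reachable from
    the previous estimate within max_cost along the weighted graph."""
    # Dijkstra over the graph (node -> {neighbour: edge_weight})
    dist = {prev: 0.0}
    heap = [(0.0, prev)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd <= max_cost and nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    # Nearest neighbour in signal space among the feasible candidates only
    return min(dist, key=lambda loc: math.dist(radio_map[loc], fingerprint))
```

Pruning the candidate set this way both speeds up the search and discards locations whose signal signatures happen to match but which are physically implausible given the user's last position.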
Weighted conditional least-squares estimation
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.
A live weight-heart girth relationship for accurate dosing of east African shorthorn zebu cattle.
Lesosky, Maia; Dumas, Sarah; Conradie, Ilana; Handel, Ian Graham; Jennings, Amy; Thumbi, Samuel; Toye, Phillip; Bronsvoort, Barend Mark de Clare
2013-01-01
The accurate estimation of livestock weights is important for many aspects of livestock management including nutrition, production and appropriate dosing of pharmaceuticals. Subtherapeutic dosing has been shown to accelerate pathogen resistance which can have subsequent widespread impacts. There are a number of published models for the prediction of live weight from morphometric measurements of cattle, but many of these models use measurements difficult to gather and include complicated age, size and gender stratification. In this paper, we use data from the Infectious Diseases of East Africa calf cohort study and additional data collected at local markets in western Kenya to develop a simple model based on heart girth circumference to predict live weight of east African shorthorn zebu (SHZ) cattle. SHZ cattle are widespread throughout eastern and southern Africa and are economically important multipurpose animals. We demonstrate model accuracy by splitting the data into training and validation subsets and comparing fitted and predicted values. The final model is weight^0.262 = 0.95 + 0.022 × girth, which has an R² value of 0.98 and 95% prediction intervals that fall within the ±20% body weight error band regarded as acceptable when dosing livestock. This model provides a highly reliable and accurate method for predicting weights of SHZ cattle using a single heart girth measurement which can be easily obtained with a tape measure in the field setting. PMID:22923040
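The published model is a one-liner to apply in the field: inverting weight^0.262 = 0.95 + 0.022 × girth gives weight directly from a single tape measurement. The function name and units (girth in cm, weight in kg, as in the study) are the only assumptions here.

```python
def zebu_weight_kg(girth_cm):
    """Predict SHZ live weight (kg) from heart girth (cm) by inverting
    the published model weight**0.262 = 0.95 + 0.022 * girth."""
    return (0.95 + 0.022 * girth_cm) ** (1.0 / 0.262)
```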
Analytical Fuselage and Wing Weight Estimation of Transport Aircraft
NASA Technical Reports Server (NTRS)
Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.
1996-01-01
A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. Integration of the resulting computer program, PDCYL, has been made into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and corresponding actual weights were determined.
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Code of Federal Regulations, 2010 CFR
2010-01-01
9 CFR, Animals and Animal Products, PROCEDURES AND REQUIREMENTS FOR ACCURATE WEIGHTS: § 442.3 Scale requirements for accurate weights...
Validation of an Improved Pediatric Weight Estimation Strategy
Abdel-Rahman, Susan M.; Ahlers, Nichole; Holmes, Anne; Wright, Krista; Harris, Ann; Weigel, Jaylene; Hill, Talita; Baird, Kim; Michaels, Marla; Kearns, Gregory L.
2013-01-01
OBJECTIVES To validate the recently described Mercy method for weight estimation in an independent cohort of children living in the United States. METHODS Anthropometric data including weight, height, humeral length, and mid upper arm circumference were collected from 976 otherwise healthy children (2 months to 14 years old). The data were used to examine the predictive performances of the Mercy method and four other weight estimation strategies (the Advanced Pediatric Life Support [APLS] method, the Broselow tape, and the Luscombe and Owens and the Nelson methods). RESULTS The Mercy method demonstrated accuracy comparable to that observed in the original study (mean error: −0.3 kg; mean percentage error: −0.3%; root mean square error: 2.62 kg; 95% limits of agreement: 0.83–1.19). This method estimated weight within 20% of actual for 95% of children compared with 58.7% for APLS, 78% for Broselow, 54.4% for Luscombe and Owens, and 70.4% for Nelson. Furthermore, the Mercy method was the only weight estimation strategy which enabled prediction of weight in all of the children enrolled. CONCLUSIONS The Mercy method proved to be highly accurate and more robust than existing weight estimation strategies across a wider range of age and body mass index values, thereby making it superior to other existing approaches. PMID:23798905
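The comparison metrics used above (mean error, mean percentage error, RMSE, and the share of predictions within 20% of actual weight) can be computed with a few lines. A sketch with made-up illustration values, not study data:

```python
import math

def weight_method_metrics(predicted, actual):
    """Bias and agreement metrics for comparing weight-estimation methods."""
    n = len(predicted)
    errors = [p - a for p, a in zip(predicted, actual)]
    pct_errors = [(p - a) / a * 100.0 for p, a in zip(predicted, actual)]
    return {
        "mean_error": sum(errors) / n,
        "mean_pct_error": sum(pct_errors) / n,
        "rmse": math.sqrt(sum(e * e for e in errors) / n),
        "pct_within_20": 100.0 * sum(abs(pe) <= 20.0 for pe in pct_errors) / n,
    }
```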
Estimating Weight in Children With Down Syndrome
Rahm, Ginny; Abdel-Rahman, Susan M.
2015-01-01
Objective. Significant attention has been paid to weight estimation in settings where scales are impractical or unavailable; however, no studies have evaluated the performance of published weight estimation methods in children with Down syndrome. This study was designed to evaluate the predictive performance of various methods in this population with well-established differences in height and weight for age. Methods. This was a prospective study of children aged 0 to 18 years with Down syndrome. Anthropometric measurements including height, weight, humeral length, and mid-upper arm circumference were collected and applied to 4 distinct weight estimation strategies based on age (APLS), length (Broselow), habitus (Cattermole), and length plus habitus (Mercy). Predictive performance was evaluated by examining residual error (RE), percentage error (PE), root mean square error (RMSE), limits of agreement, and intraclass correlation coefficients. Results. A total of 318 children distributed across age, gender, and body mass index percentile were enrolled. APLS and Mercy showed the smallest degree of bias (PE = 7.8 ± 24.5% and −3.9 ± 12.4%, respectively). Broselow suffered the most extreme underestimation (−63%), whereas the APLS suffered the greatest degree of overestimation (107%). Mercy demonstrated the highest intraclass correlation coefficient (0.987 vs 0.867-0.885) and predicted weight within 20% of actual in the largest proportion of participants (88% vs 40% to 76%). All methods were less robust in children with Down syndrome than reported for unaffected children. Conclusions. Mercy offered the best option for weight estimation in children with Down syndrome. Additional anthropometric data collected in this special population would allow investigators to refine existing weight estimation strategies specifically for these children. PMID:27335936
Accurate parameter estimation for unbalanced three-phase system.
Chen, Yuan; So, Hing Cheung
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, a nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation than the complex least mean square (CLMS) and augmented CLMS algorithms. PMID:25162056
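The flavour of such an estimator can be sketched for the simplest case: a single tone whose amplitude and phase are solved linearly at each trial frequency, with Newton-Raphson refinement of the frequency on the projected cost. This is a generic single-tone NLS sketch under those assumptions, not the authors' multi-parameter three-phase formulation; the numerical derivatives are an illustrative shortcut.

```python
import numpy as np

def nls_frequency(x, t, f0, iters=20):
    """Refine a frequency estimate by Newton-Raphson on the NLS cost.
    For each trial frequency the best A*cos + B*sin fit is linear."""
    def cost(f):
        H = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        ab, *_ = np.linalg.lstsq(H, x, rcond=None)
        r = x - H @ ab
        return r @ r
    f, h = f0, 1e-5
    for _ in range(iters):
        # Central-difference first and second derivatives of the cost
        d1 = (cost(f + h) - cost(f - h)) / (2 * h)
        d2 = (cost(f + h) - 2 * cost(f) + cost(f - h)) / h**2
        if d2 <= 0:
            f -= 1e-3 * np.sign(d1)  # guard: fall back to a tiny descent step
        else:
            f -= d1 / d2             # Newton-Raphson step
    return f
```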
Accurate pose estimation using single marker single camera calibration system
NASA Astrophysics Data System (ADS)
Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal
2013-03-01
Visual marker-based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker-based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and on pre-evaluation of pose-estimation errors, making them offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose an online method, based on the Scaled Unscented Transform (SUT), to accurately model the error in the estimated pose and translation of a camera using a single marker. The pose of each marker can thus be estimated with highly accurate calibration results, independent of the order of the image sequence. This removes the need for multiple markers and an offline estimation system to calculate camera pose in an AR application.
Accurate measure by weight of liquids in industry
Muller, M.R.
1992-12-12
This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.
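The plunger logic amounts to adjusting the dispensed volume for the liquid's thermal expansion so the delivered mass stays on target. A minimal sketch of that correction; the expansion coefficient, reference temperature, and linearized density model are illustrative assumptions, not values from the report.

```python
def compensated_volume(target_mass_g, density_g_per_ml, temp_c,
                       ref_temp_c=20.0, expansion_per_c=2.1e-4):
    """Volume (mL) to dispense so the delivered mass stays on target as
    the liquid expands or contracts with temperature. Uses a linearized
    density model rho(T) = rho0 * (1 - beta * (T - T0)); coefficient is a
    water-like placeholder."""
    density = density_g_per_ml * (1.0 - expansion_per_c * (temp_c - ref_temp_c))
    return target_mass_g / density
```

Warmer liquid is less dense, so the plunger must enlarge the measured volume slightly to keep the dispensed mass constant.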
Sonography in Fetal Birth Weight Estimation
ERIC Educational Resources Information Center
Akinola, R. A.; Akinola, O. I.; Oyekan, O. O.
2009-01-01
The estimation of fetal birth weight is an important factor in the management of high risk pregnancies. The information and knowledge gained through this study, comparing a combination of various fetal parameters using computer assisted analysis, will help the obstetrician to screen the high risk pregnancies, monitor the growth and development,…
Generalized weighted ratio method for accurate turbidity measurement over a wide range.
Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying
2015-12-14
Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination for all test samples with a turbidity ranging from 160 NTU to 4000 NTU. PMID:26699060
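The abstract does not give the exact structure of the ratio function, so the following sketch assumes a simple weighted ratio T ≈ (a·I)/(b·I) over the angle intensities, calibrated by linear least squares after rearranging to a·I − T(b·I) = 0 with b₀ fixed to 1. All names and the rearrangement are illustrative assumptions.

```python
import numpy as np

def fit_weighted_ratio(I, T):
    """Calibrate a weighted-ratio turbidity model T ~= (a . I) / (b . I)
    from reference samples: I is (n_samples, n_angles) scattered
    intensities, T the known turbidities. With b[0] fixed to 1 the
    rearranged equation is linear in the remaining coefficients."""
    I = np.asarray(I, float)
    T = np.asarray(T, float)
    n, m = I.shape
    # Unknowns: a (m values) and b[1:] (m-1 values); b[0]*T*I[:,0] moves to RHS
    A = np.hstack([I, -T[:, None] * I[:, 1:]])
    rhs = T * I[:, 0]
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b_rest = coef[:m], coef[m:]
    return a, np.concatenate([[1.0], b_rest])

def predict_turbidity(I, a, b):
    """Apply the calibrated weighted-ratio model to new intensity rows."""
    I = np.asarray(I, float)
    return (I @ a) / (I @ b)
```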
Influence Re-weighted G-Estimation.
Rich, Benjamin; Moodie, Erica E M; A Stephens, David
2016-05-01
Individualized medicine is an area that is growing, both in clinical and statistical settings, where in the latter, personalized treatment strategies are often referred to as dynamic treatment regimens. Estimation of the optimal dynamic treatment regime has focused primarily on semi-parametric approaches, some of which are said to be doubly robust in that they give rise to consistent estimators provided at least one of two models is correctly specified. In particular, the locally efficient doubly robust g-estimation is robust to misspecification of the treatment-free outcome model so long as the propensity model is specified correctly, at the cost of an increase in variability. In this paper, we propose data-adaptive weighting schemes that serve to decrease the impact of influential points and thus stabilize the estimator. In doing so, we provide a doubly robust g-estimator that is also robust in the sense of Hampel (15). PMID:26234949
Structural Weight Estimation for Launch Vehicles
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd
2002-01-01
This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist at several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power and platform-independent programming languages such as Java provide new means to create analysis tools of greater depth which can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
Fast and accurate estimation for astrophysical problems in large databases
NASA Astrophysics Data System (ADS)
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
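The connectivity-preserving parametrization described in this thesis abstract is in the spirit of a diffusion map: build a Markov random walk over the data set and embed points with the leading non-trivial eigenvectors of the transition matrix. A minimal sketch (not the thesis's exact construction; the Gaussian kernel bandwidth `eps` and the toy data are assumptions):

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2):
    """Embed points using eigenvectors of a Markov random walk on the data."""
    # Gaussian affinities from pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    # Row-normalize: transition matrix of the random walk over the data set
    P = W / W.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1)
    idx = order[1:n_components + 1]
    return vecs.real[:, idx] * vals.real[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))   # toy "survey": 50 objects, 5 features each
Y = diffusion_map(X)
```

For survey-scale databases the dense eigendecomposition here would be replaced by sparse nearest-neighbor affinities and iterative eigensolvers.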
Cook, Andrea J.; Elmore, Joann G.; Zhu, Weiwei; Jackson, Sara L.; Carney, Patricia A.; Flowers, Chris; Onega, Tracy; Geller, Berta; Rosenberg, Robert D.; Miglioretti, Diana L.
2013-01-01
Objective: To determine if U.S. radiologists accurately estimate their own interpretive performance of screening mammography and how they compare their performance to their peers'. Materials and Methods: 174 radiologists from six Breast Cancer Surveillance Consortium (BCSC) registries completed a mailed survey between 2005 and 2006. Radiologists' estimated and actual recall, false positive, and cancer detection rates and positive predictive value of biopsy recommendation (PPV2) for screening mammography were compared. Radiologists' ratings of their performance as lower, similar, or higher than their peers were compared to their actual performance. Associations with radiologist characteristics were estimated using weighted generalized linear models. The study was approved by the institutional review boards of the participating sites, informed consent was obtained from radiologists, and procedures were HIPAA compliant. Results: While most radiologists accurately estimated their cancer detection and recall rates (74% and 78% of radiologists), fewer accurately estimated their false positive rate and PPV2 (19% and 26%). Radiologists reported having similar (43%) or lower (31%) recall rates and similar (52%) or lower (33%) false positive rates compared to their peers, and similar (72%) or higher (23%) cancer detection rates and similar (72%) or higher (38%) PPV2. Estimation accuracy did not differ by radiologists' characteristics except radiologists who interpret ≤1,000 mammograms annually were less accurate at estimating their recall rates. Conclusion: Radiologists perceive their performance to be better than it actually is and at least as good as their peers. Radiologists have particular difficulty estimating their false positive rates and PPV2. PMID:22915414
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Accurate and robust estimation of camera parameters using RANSAC
NASA Astrophysics Data System (ADS)
Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He
2013-03-01
Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on 2D planar target sometimes fails to give reliable and accurate results due to the inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on RANSAC algorithm is proposed to detect the unreliability and provide the corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulative and real experiments have been carried out to evaluate the performance of the proposed method and the results show that the proposed method is robust under large noise condition and quite efficient to improve the calibration accuracy compared with the original state.
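The paper's full calibration pipeline is not reproduced here, but the core RANSAC idea it builds on — fit a model to random minimal samples and keep the one with the largest inlier consensus — can be sketched on a simple line-fitting problem (the threshold and iteration count are illustrative assumptions):

```python
import numpy as np

def ransac_line(x, y, n_iter=200, thresh=0.1, rng=None):
    """Fit y = a*x + b robustly: sample minimal 2-point models, keep the
    one with the largest inlier consensus, then refit on the inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit on the consensus set only
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return a, b, best_inliers

x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0
y[::10] += 5.0          # inject gross outliers (mislocalized features)
a, b, inl = ransac_line(x, y)
```

In the calibration setting the "model" is the camera parameter set and the residual is the reprojection error of each feature point, but the consensus logic is the same.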
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present-day O3 radiative forcing produced by models.
9 CFR 201.71 - Scales; accurate weights, repairs, adjustments or replacements after inspection.
Code of Federal Regulations, 2014 CFR
2014-01-01
... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. These materials are incorporated as they exist on the date... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Scales; accurate weights, repairs... AGRICULTURE REGULATIONS UNDER THE PACKERS AND STOCKYARDS ACT Services § 201.71 Scales; accurate...
Accurate estimators of correlation functions in Fourier space
NASA Astrophysics Data System (ADS)
Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.
2016-08-01
Efficient estimators of Fourier-space statistics for a large number of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates of, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud-In-Cell algorithm results in a significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
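A one-dimensional toy version of the two ingredients above — Cloud-In-Cell mass assignment and grid interlacing — can be sketched as follows. The half-cell phase factor is the standard correction that realigns the shifted grid before averaging; the grid size and particle count are arbitrary choices for illustration:

```python
import numpy as np

def cic_assign(pos, n_grid, shift=0.0):
    """Cloud-In-Cell assignment of unit-mass particles with positions in [0,1)."""
    rho = np.zeros(n_grid)
    x = (pos * n_grid + shift) % n_grid
    i = np.floor(x).astype(int)
    f = x - i                       # fractional distance to the left cell
    np.add.at(rho, i % n_grid, 1.0 - f)
    np.add.at(rho, (i + 1) % n_grid, f)
    return rho

def interlaced_density_k(pos, n_grid):
    """Average the FFTs of the normal grid and a half-cell-shifted grid;
    the phase factor realigns the shifted grid, cancelling odd images."""
    rho0 = np.fft.rfft(cic_assign(pos, n_grid))
    rho1 = np.fft.rfft(cic_assign(pos, n_grid, shift=0.5))
    k = np.arange(len(rho0))
    phase = np.exp(1j * np.pi * k / n_grid)
    return 0.5 * (rho0 + rho1 * phase)

rng = np.random.default_rng(1)
pos = rng.random(10000)
rho_k = interlaced_density_k(pos, 64)
```

The paper works in three dimensions and with fourth-order kernels, but the interlacing step generalizes directly: each axis contributes a half-cell phase shift.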
Weighted estimates for tangential boundary behaviour
NASA Astrophysics Data System (ADS)
Krotov, V. G.; Smovzh, L. V.
2006-02-01
Let (X, \mu, d) be a space of homogeneous type (here d is a quasimetric and \mu a measure). A function \varepsilon of modulus-of-continuity kind gives rise to approach regions \Gamma_\varepsilon(x) at the boundary of \mathbf{X} = X \times [0,1), where for a point x \in X, \Gamma_\varepsilon(x) = \{(y,t) \in \mathbf{X} : d(x,y) < \varepsilon(1-t)\}. These are 'tangential' regions if \lim_{t \to +0} \varepsilon(t)/t = \infty. Weighted L^p-estimates are proved for the corresponding maximal functions of integral operators. Applications of these estimates to potentials in \mathbb{R}^n and to multipliers of homogeneous expansions of holomorphic functions in the Hardy classes in the unit ball of \mathbb{C}^n are presented.
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities, such as Cube and Sphere (CUSP) imaging, provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion
Yadav, Nagesh; Bleakley, Chris
2014-01-01
Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584
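A heavily simplified, one-dimensional analogue of this kind of filter — gyroscope dead reckoning as the prediction step and magnetometer headings as measurements, without the paper's adaptive cost function or distortion detection — might look like this (all noise levels and the constant-turn scenario are assumptions for illustration):

```python
import numpy as np

def particle_filter_heading(gyro, mag, n_particles=500, gyro_std=0.02,
                            mag_std=0.2, dt=0.01, rng=None):
    """Track a 1D heading: propagate particles with gyro rates,
    weight them by magnetometer likelihood, resample each step."""
    rng = rng or np.random.default_rng(0)
    particles = rng.normal(0.0, 0.1, n_particles)
    estimates = []
    for w_gyro, z_mag in zip(gyro, mag):
        # Predict: dead-reckon with the gyro rate plus process noise
        particles = particles + w_gyro * dt + rng.normal(0, gyro_std, n_particles)
        # Update: Gaussian likelihood of the magnetometer heading
        w = np.exp(-0.5 * ((z_mag - particles) / mag_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))   # posterior mean
        # Resample in proportion to the weights
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)

# Constant turn rate of 1 rad/s with noisy gyro and magnetometer readings
rng = np.random.default_rng(2)
t = np.arange(1000) * 0.01
true_heading = 1.0 * t
gyro = np.full(1000, 1.0) + rng.normal(0, 0.05, 1000)
mag = true_heading + rng.normal(0, 0.2, 1000)
est = particle_filter_heading(gyro, mag)
```

The paper's contribution is precisely what this sketch omits: detecting magnetic distortion via dip angle and field strength, and then down-weighting the magnetometer relative to gyro dead reckoning.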
Weight Estimation Tool for Children Aged 6 to 59 Months in Limited-Resource Settings
2016-01-01
Importance: A simple, reliable anthropometric tool for rapid estimation of weight in children would be useful in limited-resource settings where current weight estimation tools are not uniformly reliable, nearly all global under-five mortality occurs, severe acute malnutrition is a significant contributor in approximately one-third of under-five mortality, and a weight scale may not be immediately available in emergencies to first-response providers. Objective: To determine the accuracy and precision of mid-upper arm circumference (MUAC) and height as weight estimation tools in children under five years of age in low-to-middle income countries. Design: This was a retrospective observational study. Data were collected in 560 nutritional surveys during 1992–2006 using a modified Expanded Program of Immunization two-stage cluster sample design. Setting: Locations with high prevalence of acute and chronic malnutrition. Participants: A total of 453,990 children met inclusion criteria (age 6–59 months; weight ≤ 25 kg; MUAC 80–200 mm) and exclusion criteria (bilateral pitting edema; biologically implausible weight-for-height z-score (WHZ), weight-for-age z-score (WAZ), and height-for-age z-score (HAZ) values). Exposures: Weight was estimated using Broselow Tape, Hong Kong formula, and database MUAC alone, height alone, and height and MUAC combined. Main Outcomes and Measures: Mean percentage difference between true and estimated weight, proportion of estimates accurate to within ± 25% and ± 10% of true weight, weighted Kappa statistic, and Bland-Altman bias were reported as measures of tool accuracy. Standard deviation of mean percentage difference and Bland-Altman 95% limits of agreement were reported as measures of tool precision. Results: Database height was a more accurate and precise predictor of weight compared to Broselow Tape 2007 [B], Broselow Tape 2011 [A], and MUAC. Mean percentage difference between true and estimated weight was +0.49% (SD = 10
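The accuracy and precision measures used in this study (mean percentage difference, proportion of estimates within a tolerance, Bland-Altman bias and limits of agreement) are straightforward to compute. A sketch on synthetic data — the 5% multiplicative error model is an assumption for illustration, not the study's data:

```python
import numpy as np

def weight_tool_metrics(true_w, est_w):
    """Accuracy/precision metrics for comparing weight-estimation tools."""
    pct_diff = 100.0 * (est_w - true_w) / true_w
    diff = est_w - true_w
    return {
        "mean_pct_diff": pct_diff.mean(),          # accuracy
        "sd_pct_diff": pct_diff.std(ddof=1),       # precision
        "within_10pct": np.mean(np.abs(pct_diff) <= 10.0),
        "bland_altman_bias": diff.mean(),
        # 95% limits of agreement: bias +/- 1.96 SD of the differences
        "loa_low": diff.mean() - 1.96 * diff.std(ddof=1),
        "loa_high": diff.mean() + 1.96 * diff.std(ddof=1),
    }

rng = np.random.default_rng(3)
true_w = rng.uniform(5, 25, 1000)                  # kg, ages ~6-59 months
est_w = true_w * (1 + rng.normal(0, 0.05, 1000))   # hypothetical 5% error tool
m = weight_tool_metrics(true_w, est_w)
```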
Hwang, Beomsoo; Jeon, Doyoung
2015-01-01
In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, which contains the measurements of dynamic effects of the human body such as the inertial, Coriolis, and gravitational torques as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can be used to estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
NASA Astrophysics Data System (ADS)
Rhee, Young Min
2000-10-01
A modified method to construct an accurate potential energy surface by interpolation is presented. The modification is based on the use of Cartesian coordinates in the weighting function. The translational and rotational invariance of the potential is incorporated by a proper definition of the distance between two Cartesian configurations. A numerical algorithm to find the distance is developed. It is shown that the present method is more exact in describing a planar system compared to the previous methods with weightings in internal coordinates. The applicability of the method to reactive systems is also demonstrated by performing classical trajectory simulations on the surface.
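Interpolation-based potential energy surfaces of this kind weight stored ab initio values by a decaying function of the distance to each data configuration. A toy Shepard-style sketch with inverse-power weights on a quadratic test surface — it omits the paper's key ingredient, the rotation/translation-invariant Cartesian distance, and all parameter choices are assumptions:

```python
import numpy as np

def shepard_interpolate(q, data_pts, data_vals, p=4, eps=1e-12):
    """Potential at configuration q as a weighted sum of stored values,
    with weights decaying as an inverse power of the distance."""
    d = np.linalg.norm(data_pts - q, axis=1)
    w = 1.0 / (d ** p + eps)    # eps guards the d = 0 case
    w /= w.sum()
    return np.sum(w * data_vals)

# Toy 2D "surface": V(x, y) = x^2 + y^2 sampled on a grid
xs = np.linspace(-1, 1, 11)
pts = np.array([(x, y) for x in xs for y in xs])
vals = (pts ** 2).sum(axis=1)
v = shepard_interpolate(np.array([0.25, 0.25]), pts, vals)
```

In the paper's method the distance between two Cartesian configurations is minimized over rigid-body translations and rotations, which is what makes the weighting invariant for trajectory simulations.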
On Relevance Weight Estimation and Query Expansion.
ERIC Educational Resources Information Center
Robertson, S. E.
1986-01-01
A Bayesian argument is used to suggest modifications to the Robertson and Jones relevance weighting formula to accommodate the addition to the query of terms taken from the relevant documents identified during the search. (Author)
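The Robertson/Sparck Jones relevance weight that this work modifies is, in its standard 0.5-corrected form, simple to compute (the example counts below are made up):

```python
import math

def rsj_weight(r, R, n, N):
    """Robertson/Sparck Jones relevance weight with the usual 0.5 correction.
    r = relevant docs containing the term, R = relevant docs retrieved,
    n = docs containing the term, N = collection size."""
    return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                    ((n - r + 0.5) * (R - r + 0.5)))

# A term occurring in 8 of 10 relevant docs but only 20 of 1000 overall
w = rsj_weight(r=8, R=10, n=20, N=1000)
```

Terms concentrated in the known relevant documents receive large positive weights, which is what makes relevance-feedback query expansion effective.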
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
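The two decision functions the participants fell between — drawing a sample from the posterior versus taking its maximum — are easy to state concretely (the bimodal posterior below is an illustrative assumption, not the study's stimulus distribution):

```python
import numpy as np

def make_estimate(posterior, values, rule, rng=None):
    """Turn a posterior over discrete values into a single estimate."""
    rng = rng or np.random.default_rng(0)
    if rule == "sample":            # probability matching
        return rng.choice(values, p=posterior)
    if rule == "max":               # maximum a posteriori
        return values[np.argmax(posterior)]
    raise ValueError(rule)

# Bimodal posterior over counts 1..10 (modes at 2 and 8)
values = np.arange(1, 11)
posterior = np.array([0.02, 0.30, 0.05, 0.02, 0.02,
                      0.02, 0.05, 0.40, 0.10, 0.02])
map_est = make_estimate(posterior, values, "max")
```

Repeated "sample" responses scatter across both modes in proportion to their mass, whereas "max" always returns the higher mode; the study fits where observed responses fall between these extremes.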
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Technology Transfer Automated Retrieval System (TEKTRAN)
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...
ERIC Educational Resources Information Center
Geller, Josie; Srikameswaran, Suja; Zaitsoff, Shannon L.; Cockell, Sarah J.; Poole, Gary D.
2003-01-01
Examined parents' awareness of their daughters' attitudes, beliefs, and feelings about their bodies. Sixty-six adolescent daughters completed an eating disorder scale, a body figure rating scale, and made ratings of their shape and weight. Greater discrepancies between parents' estimates of daughters' body esteem and daughters' self-reported body…
Accurate response surface approximations for weight equations based on structural optimization
NASA Astrophysics Data System (ADS)
Papila, Melih
Accurate weight prediction methods are vitally important for aircraft design optimization. Therefore, designers seek weight prediction techniques with low computational cost and high accuracy, and usually require a compromise between the two. The compromise can be achieved by combining stress analysis and response surface (RS) methodology. While stress analysis provides accurate weight information, RS techniques help to transmit effectively this information to the optimization procedure. The focus of this dissertation is structural weight equations in the form of RS approximations and their accuracy when fitted to results of structural optimizations that are based on finite element analyses. Use of RS methodology filters out the numerical noise in structural optimization results and provides a smooth weight function that can easily be used in gradient-based configuration optimization. In engineering applications RS approximations of low order polynomials are widely used, but the weight may not be modeled well by low-order polynomials, leading to bias errors. In addition, some structural optimization results may have high-amplitude errors (outliers) that may severely affect the accuracy of the weight equation. Statistical techniques associated with RS methodology are sought in order to deal with these two difficulties: (1) high-amplitude numerical noise (outliers) and (2) approximation model inadequacy. The investigation starts with reducing approximation error by identifying and repairing outliers. A potential reason for outliers in optimization results is premature convergence, and outliers of such nature may be corrected by employing different convergence settings. It is demonstrated that outlier repair can lead to accuracy improvements over the more standard approach of removing outliers. The adequacy of approximation is then studied by a modified lack-of-fit approach, and RS errors due to the approximation model are reduced by using higher order polynomials. In
Weight estimation techniques for composite airplanes in general aviation industry
NASA Technical Reports Server (NTRS)
Paramasivam, T.; Horn, W. J.; Ritter, J.
1986-01-01
Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
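A regression fit of this kind — a power-law component weight equation linearized by taking logarithms — can be sketched as follows. The functional form, the coefficients, and all data values are made up for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical wing-weight equation W = a * S^b * n^c fitted by
# log-linear least squares to a small airplane database
# (S = wing area in ft^2, n = ultimate load factor; values invented).
S = np.array([120., 150., 170., 200., 230., 260.])
n = np.array([3.8, 3.8, 4.4, 3.8, 4.4, 4.4])
noise = np.random.default_rng(4).normal(0, 0.02, 6)
W = 0.04 * S**1.2 * n**0.8 * (1 + noise)

# Regression on logs: ln W = ln a + b ln S + c ln n
A = np.column_stack([np.ones(len(S)), np.log(S), np.log(n)])
coef, *_ = np.linalg.lstsq(A, np.log(W), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
```

Material properties enter such equations as additional regressors (e.g. specific strength), which is how the paper's equations cover both metallic and composite construction.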
Image analysis for estimating the weight of live animals
NASA Astrophysics Data System (ADS)
Schofield, C. P.; Marchant, John A.
1991-02-01
Many components of animal production have been automated, for example weighing, feeding, identification and yield recording on cattle, pigs, poultry and fish. However, some of these tasks still require a considerable degree of human input, and more effective automation could lead to better husbandry. For example, if the weight of pigs could be monitored more often without increasing labour input, then this information could be used to measure growth rates and control fat level, allowing accurate prediction of market dates and optimum carcass quality to be achieved with improved welfare at minimum cost. Some aspects of animal production have defied automation. For example, attending to the well-being of housed animals is the preserve of the expert stockman. He gathers visual data about the animals in his charge (in more plain words, goes and looks at their condition and behaviour) and processes this data to draw conclusions and take actions. Automatically collecting data on well-being implies that the animals are not disturbed from their normal environment, otherwise false conclusions will be drawn. Computer image analysis could provide the data required without the need to disturb the animals. This paper describes new work at the Institute of Engineering Research which uses image analysis to estimate the weight of pigs as a starting point for the wider range of applications which have been identified. In particular, a technique has been developed to
Carels, Robert A; Konrad, Krista; Harper, Jessica
2007-09-01
People frequently place foods into "health" or "diet" categories. This study examined whether (1) evaluations of "healthiness/unhealthiness" influence "caloric" estimation accuracy, (2) people evaluate foods for "healthiness/unhealthiness" or "weight gain/loss" differently, and (3) food evaluations differ by gender, diet status, and weight. Also, undergraduate dieters attempting to lose weight on their own were compared to obese weight loss program participants. Undergraduate students (N=101) rated eight "healthy" and "unhealthy" foods on perceived "healthiness/unhealthiness," "weight loss/gain capacity" and "caloric" content. Open-ended questions inquiring why a food was "healthy/unhealthy" or would "contribute to weight gain/loss" were coded into independent food categories (e.g., high fat). Results indicate that calories were systematically underestimated in healthy/weight loss foods, while they were systematically overestimated in unhealthy/weight gain foods. Dieters were more accurate at estimating "calories" of healthy foods and more attentive to the foods' fat, "calorie", and sugar content than non-dieters. Overweight participants commented more on fat and sugar content than normal weight participants. Undergraduate dieters used fewer categories for evaluating foods than weight loss program participants. Individual difference characteristics, such as diet-status, weight, and gender, influence people's perceptions of foods' healthiness or capacity to influence weight, and in some instances systematically bias their estimates of the caloric content of foods. PMID:17428574
Using the Mercy Method for Weight Estimation in Indian Children
Batmanabane, Gitanjali; Jena, Pradeep Kumar; Dikshit, Roshan
2015-01-01
This study was designed to compare the performance of a new weight estimation strategy (Mercy Method) with 12 existing weight estimation methods (APLS, Best Guess, Broselow, Leffler, Luscombe-Owens, Nelson, Shann, Theron, Traub-Johnson, Traub-Kichen) in children from India. Otherwise healthy children, 2 months to 16 years, were enrolled and weight, height, humeral length (HL), and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights and the slope, intercept, and Pearson correlation coefficient estimated. Agreement between estimated weight and actual weight was determined using Bland–Altman plots with log-transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), and root mean square error (RMSE). Three hundred seventy-five children (7.5 ± 4.3 years, 22.1 ± 12.3 kg, 116.2 ± 26.3 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = .967 vs .517-.844). The MM also demonstrated the lowest ME, MPE, and RMSE. Finally, the MM estimated weight within 20% of actual for nearly all children (96%) as opposed to the other methods for which these values ranged from 14% to 63%. The MM performed extremely well in Indian children with performance characteristics comparable to those observed for US children in whom the method was developed. It appears that the MM can be used in Indian children without modification, extending the utility of this weight estimation strategy beyond Western populations. PMID:27335932
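The agreement statistics used in the study above (ME, MPE, RMSE, and the fraction of estimates falling within 20% of actual weight) can be computed directly. The sketch below uses the conventional definitions (error = estimated minus actual) and invented illustrative weights, not the study's data.

```python
import numpy as np

def weight_prediction_metrics(actual, estimated):
    """Agreement metrics commonly used to compare weight estimation
    methods: mean error (ME), mean percentage error (MPE), root mean
    square error (RMSE), and percent of estimates within 20% of actual."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    error = estimated - actual
    me = error.mean()                               # mean error, kg
    mpe = (100.0 * error / actual).mean()           # mean percentage error, %
    rmse = np.sqrt((error ** 2).mean())             # root mean square error, kg
    within_20 = 100.0 * np.mean(np.abs(error / actual) <= 0.20)
    return me, mpe, rmse, within_20

# Hypothetical example: five children, actual vs. estimated weights (kg)
actual = [12.0, 20.5, 31.0, 18.2, 45.0]
estimated = [12.6, 19.8, 33.0, 18.0, 43.5]
me, mpe, rmse, pct20 = weight_prediction_metrics(actual, estimated)
```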
How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?
NASA Astrophysics Data System (ADS)
Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.
2002-12-01
The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at an 8 km interval. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance. With increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km window gave the best compromise between spatial resolution and variance. Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in
Zhang, Xinyue; Lourenco, Daniela; Aguilar, Ignacio; Legarra, Andres; Misztal, Ignacy
2016-01-01
former. Manhattan plots had higher resolution with 5 and 100 QTL. Using a common weight for a window of 20 SNP that sums or averages the SNP variance enhances accuracy of predicting GEBV and provides accurate estimation of marker effects. PMID:27594861
Accurate feature detection and estimation using nonlinear and multiresolution analysis
NASA Astrophysics Data System (ADS)
Rudin, Leonid; Osher, Stanley
1994-11-01
A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part, on this work.
NASA Technical Reports Server (NTRS)
Sensmeier, Mark D.; Samareh, Jamshid A.
2005-01-01
An approach is proposed for the application of rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.
Estimating the weight of generally configured dual wing systems
NASA Technical Reports Server (NTRS)
Cronin, D. L.; Somnay, R. J.
1985-01-01
Formulas available for the weight estimation of monoplane wings are not appropriate for generally configured dual wing systems. In the present paper a method is described which simultaneously generates a structural weight estimate and a fully stressed, quasi-optimal structure for a model of a dual wing system. The method is fast and inexpensive, making it ideally suited to preliminary design. To illustrate the method, a dual wing system and a conventional wing system are sized; numerical computation is suitably fast in both cases, and convergence to a final configuration is quite rapid. To demonstrate the validity of the method, a conventional wing is sized and the weight obtained by the present method is compared to the weight determined by a reputable weight estimation formula. The results are shown to be very close.
Pros, Cons, and Alternatives to Weight Based Cost Estimating
NASA Technical Reports Server (NTRS)
Joyner, Claude R.; Lauriem, Jonathan R.; Levack, Daniel H.; Zapata, Edgar
2011-01-01
Many cost estimating tools use weight as a major parameter in projecting the cost. This is often combined with modifying factors such as complexity, technical maturity of design, environment of operation, etc. to increase the fidelity of the estimate. For a set of conceptual designs, all meeting the same requirements, increased weight can be a major driver in increased cost. However, once a design is fixed, increased weight generally decreases cost, while decreased weight generally increases cost - and the relationship is not linear. Alternative approaches to estimating cost without using weight (except perhaps for materials costs) have been attempted to try to produce a tool usable throughout the design process - from concept studies through development. This paper will address the pros and cons of using weight based models for cost estimating, using liquid rocket engines as the example. It will then examine approaches that minimize the impact of weight based cost estimating. The Rocket Engine Cost Model (RECM) is an attribute based model developed internally by Pratt & Whitney Rocketdyne for NASA. RECM will be presented primarily to show a successful method to use design and programmatic parameters instead of weight to estimate both design and development costs and production costs. An operations model developed by KSC, the Launch and Landing Effects Ground Operations model (LLEGO), will also be discussed.
NASA Technical Reports Server (NTRS)
Grissom, D. S.; Schneider, W. C.
1971-01-01
The determination of a base line (minimum weight) design for the primary structure of the living quarters modules in an earth-orbiting space base was investigated. Although the design is preliminary in nature, the supporting analysis is sufficiently thorough to provide a reasonably accurate weight estimate of the major components that are considered to comprise the structural weight of the space base.
Evaluation of the Mercy weight estimation method in Ouelessebougou, Mali
2014-01-01
Background This study evaluated the performance of a new weight estimation strategy (Mercy Method) with four existing weight-estimation methods (APLS, ARC, Broselow, and Nelson) in children from Ouelessebougou, Mali. Methods Otherwise healthy children, 2 mos to 16 yrs, were enrolled and weight, height, humeral length (HL) and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights. Agreement between estimated and actual weight was determined using Bland-Altman plots with log-transformation. Predictive performance of each method was assessed using residual error (RE), percentage error (PE), root mean square error (RMSE), and percent predicted within 10, 20 and 30% of actual weight. Results 473 children (8.1 ± 4.8 yr, 25.1 ± 14.5 kg, 120.9 ± 29.5 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = 0.97 vs. 0.80-0.94). The MM also demonstrated the lowest ME (0.06 vs. 0.92-4.1 kg), MPE (1.6 vs. 7.8-19.8%) and RMSE (2.6 vs. 3.0-6.7). Finally, the MM estimated weight within 20% of actual for nearly all children (97%) as opposed to the other methods for which these values ranged from 50-69%. Conclusions The MM performed extremely well in Malian children, with performance characteristics comparable to those observed in U.S. and Indian children, and could be used in sub-Saharan African children without modification, extending the utility of this weight estimation strategy. PMID:24650051
Accurate tempo estimation based on harmonic + noise decomposition
NASA Astrophysics Data System (ADS)
Alonso, Miguel; Richard, Gael; David, Bertrand
2006-12-01
We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as tempo. Our proposal is validated using a manually annotated test-base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John
2016-01-01
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
Bioaccessibility tests accurately estimate bioavailability of lead to quail.
Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S
2016-09-01
Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of
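The regressions described above pair an in vivo response (relative bioavailability) with an in vitro measurement (bioaccessibility) for each test; slope and coefficient of determination were the selection criteria. A minimal sketch of one such fit, using invented illustrative values rather than the paper's measurements:

```python
import numpy as np

# Hypothetical paired measurements for one in vitro test: relative
# bioavailability from the quail feeding study (%) vs. bioaccessibility
# measured in vitro (%). Values are illustrative, not from the paper.
bioavailability = np.array([33.0, 41.0, 48.0, 55.0, 63.0])
bioaccessibility = np.array([40.0, 52.0, 58.0, 70.0, 79.0])

# Least-squares line: bioaccessibility = slope * bioavailability + intercept
slope, intercept = np.polyfit(bioavailability, bioaccessibility, 1)

# Coefficient of determination (r^2), one of the selection criteria
r = np.corrcoef(bioavailability, bioaccessibility)[0, 1]
r_squared = r ** 2
```

A test that "performs very well" under the paper's criteria would show a positive slope and an r^2 near 1.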
An evaluation of study design for estimating a time-of-day noise weighting
NASA Technical Reports Server (NTRS)
Fields, J. M.
1986-01-01
The relative importance of daytime and nighttime noise of the same noise level is represented by a time-of-day weight in noise annoyance models. The high correlations between daytime and nighttime noise were regarded as a major reason that previous social surveys of noise annoyance could not accurately estimate the value of the time-of-day weight. Study designs which would reduce the correlation between daytime and nighttime noise are described. It is concluded that designs based on short term variations in nighttime noise levels would not be able to provide valid measures of response to nighttime noise. The accuracy of the estimate of the time-of-day weight is predicted for designs which are based on long term variations in nighttime noise levels. For these designs it is predicted that it is not possible to form satisfactorily precise estimates of the time-of-day weighting.
High-Resolution Tsunami Inundation Simulations Based on Accurate Estimations of Coastal Waveforms
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.; Furumura, T.
2015-12-01
We evaluate the accuracy of high-resolution tsunami inundation simulations in detail using the actual observational data of the 2011 Tohoku-Oki earthquake (Mw9.0) and investigate the methodologies to improve the simulation accuracy. Due to the recent development of parallel computing technologies, high-resolution tsunami inundation simulations are conducted more commonly than before. To evaluate how accurately these simulations can reproduce inundation processes, we test several types of simulation configurations on a parallel computer, where we can utilize the observational data (e.g., offshore and coastal waveforms and inundation properties) that are recorded during the Tohoku-Oki earthquake. Before discussing the accuracy of inundation processes on land, the incident waves at coastal sites must be accurately estimated. However, for megathrust earthquakes, it is difficult to find the tsunami source that can provide accurate estimations of tsunami waveforms at every coastal site because of the complex spatiotemporal distribution of the source and the limitation of observation. To overcome this issue, we employ a site-specific source inversion approach that increases the estimation accuracy within a specific coastal site by applying appropriate weighting to the observational data in the inversion process. We applied our source inversion technique to the Tohoku tsunami and conducted inundation simulations using 5-m resolution digital elevation model data (DEM) for the coastal area around Miyako Bay and Sendai Bay. The estimated waveforms at the coastal wave gauges of these bays successfully agree with the observed waveforms. However, the simulations overestimate the inundation extent, indicating the necessity to improve the inundation model. We find that the value of Manning's roughness coefficient should be modified from the often-used value of n = 0.025 to n = 0.033 to obtain proper results at both cities. In this presentation, the simulation results with several
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity, and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
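A minimal sketch of the weighted CQR idea for a simple linear model: one intercept per quantile level, a shared slope, and a weight per level. Fixed weights and a generic optimizer are used here (the paper's data-driven weighting scheme and fast algorithm are not reproduced), so this is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def weighted_cqr(x, y, taus, weights):
    """Weighted composite quantile regression for y = beta * x + e:
    minimize the weighted sum of check losses over several quantile
    levels, sharing the slope beta across levels."""
    K = len(taus)

    def objective(params):
        bs, beta = params[:K], params[K]
        return sum(
            w * check_loss(y - bs[k] - beta * x, tau).sum()
            for k, (tau, w) in enumerate(zip(taus, weights))
        )

    res = minimize(objective, np.zeros(K + 1), method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x[K]  # the common slope estimate

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.standard_t(df=3, size=200)  # heavy-tailed errors
beta_hat = weighted_cqr(x, y, taus=[0.25, 0.5, 0.75], weights=[1.0, 2.0, 1.0])
```

With heavy-tailed (Student's t) errors, the composite quantile fit recovers the slope robustly where ordinary least squares would be more variable.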
ERIC Educational Resources Information Center
Natale, Ruby; Uhlhorn, Susan B.; Lopez-Mitnik, Gabriela; Camejo, Stephanie; Englebert, Nicole; Delamater, Alan M.; Messiah, Sarah E.
2016-01-01
Background: One in four preschool-age children in the United States are currently overweight or obese. Previous studies have shown that caregivers of this age group often have difficulty accurately recognizing their child's weight status. The purpose of this study was to examine factors associated with accurate/inaccurate perception of child body…
Weighted measurement fusion Kalman estimator for multisensor descriptor system
NASA Astrophysics Data System (ADS)
Dou, Yinfeng; Ran, Chenjian; Gao, Yuan
2016-08-01
For the multisensor linear stochastic descriptor system with correlated measurement noises, the fused measurement can be obtained based on the weighted least square (WLS) method, and the reduced-order state components are obtained by applying the singular value decomposition method. Then, the multisensor descriptor system is transformed to a fused reduced-order non-descriptor system with correlated noise, and the weighted measurement fusion (WMF) Kalman estimator of this reduced-order subsystem is presented. According to the relationship between the presented non-descriptor system and the original descriptor system, the WMF Kalman estimator and its estimation error variance matrix for the original multisensor descriptor system are presented. The presented WMF Kalman estimator has global optimality and, compared with the state fusion method, avoids computing the cross-covariances of the local Kalman estimators. A simulation example of a three-sensor stochastic dynamic input-output system in economics verifies the effectiveness.
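The WLS fusion step described above can be sketched for a simple (non-descriptor) case: stacking the sensor measurements and weighting by the inverse noise covariance gives the minimum-variance linear fusion. The matrices below are illustrative; the descriptor-system reduction is not reproduced.

```python
import numpy as np

def wls_fuse(H, R, z):
    """Weighted-least-squares measurement fusion: sensors observe the
    same state x through z = H x + v with noise covariance R. The WLS
    fused estimate is x_hat = (H^T R^-1 H)^-1 H^T R^-1 z."""
    Rinv = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ Rinv @ H)   # fused estimation error covariance
    return P @ H.T @ Rinv @ z, P

# Illustrative numbers: a scalar state observed directly by three sensors
# (uncorrelated noise here for simplicity; a full R handles correlation).
H = np.array([[1.0], [1.0], [1.0]])
R = np.diag([1.0, 4.0, 2.0])
z = np.array([10.2, 9.5, 10.4])
x_hat, P = wls_fuse(H, R, z)
```

The more precise sensors (smaller noise variance) dominate the fused estimate, and the fused variance P is smaller than any single sensor's.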
Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal
2015-01-01
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
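The basic scale-up estimator underlying NSUM can be sketched as follows, with optional sample weights as discussed above. The paper's new estimator and recursive trimming procedure are not reproduced here, and all values are hypothetical.

```python
import numpy as np

def nsum_estimate(m, c, N_total, weights=None):
    """Basic network scale-up estimate: respondents report how many
    people they know in the hidden group (m_i); their personal network
    sizes (c_i) are estimated from groups of known size. Then
    N_hidden ~= N_total * sum(w_i * m_i) / sum(w_i * c_i),
    with optional survey weights w_i."""
    m = np.asarray(m, dtype=float)
    c = np.asarray(c, dtype=float)
    w = np.ones_like(m) if weights is None else np.asarray(weights, float)
    return N_total * (w * m).sum() / (w * c).sum()

# Hypothetical mini-survey: 5 respondents in a population of 1.9 million
m = [1, 0, 2, 0, 1]            # alters known in the hidden group
c = [250, 300, 400, 150, 350]  # estimated personal network sizes
estimate = nsum_estimate(m, c, N_total=1_900_000)
```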
Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging
NASA Astrophysics Data System (ADS)
Reich, M.; Heipke, C.
2015-08-01
In this paper we present an approach for a weighted rotation averaging to estimate absolute rotations from relative rotations between two images for a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for a global image orientation. Often relative rotations are not free from outliers, thus we use the redundancy in available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using the prior information we derived from the estimation of the relative rotations. Because we use the Lie algebra of SO(3) for averaging no subsequent adaptation of the results has to be performed but the lossless projection to the manifold. We evaluate our approach on synthetic and real data. Our approach often is able to detect and eliminate all outliers from the relative rotations even if very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state-of-the-art in recent publications to global image orientation we achieve best results in the examined datasets.
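A simplified sketch of weighted averaging in the Lie algebra of SO(3): it averages several noisy estimates of a single rotation (not the full multi-image orientation problem above) by iterating a log-map of residuals, a weighted mean, and an exponential-map update. It assumes SciPy's `Rotation` class; weights and rotations are invented.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def lie_average(rotations, weights, n_iter=50):
    """Weighted intrinsic mean of rotations: map residuals into the Lie
    algebra so(3) (rotation vectors), take their weighted mean, and
    update the estimate via the exponential map until convergence."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    mean = rotations[0]
    for _ in range(n_iter):
        # residual rotation vectors relative to the current mean (log map)
        residuals = np.array([(r * mean.inv()).as_rotvec() for r in rotations])
        step = (w[:, None] * residuals).sum(axis=0)
        if np.linalg.norm(step) < 1e-12:
            break
        mean = Rotation.from_rotvec(step) * mean  # exponential-map update
    return mean

# Hypothetical: noisy copies of a common rotation, plus a downweighted outlier
true = Rotation.from_rotvec([0.1, 0.2, -0.3])
obs = [true,
       Rotation.from_rotvec([0.11, 0.19, -0.31]),
       Rotation.from_rotvec([1.0, 0.0, 0.0])]     # outlier
avg = lie_average(obs, weights=[1.0, 1.0, 0.01])
```

Because averaging happens in the tangent space, the result stays on the rotation manifold with no subsequent re-projection needed, and small weights effectively suppress outlying rotations.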
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Sensmeier, Mark D.; Stewart, Bret A.
2006-01-01
Algorithms for rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process have been developed. Application of these algorithms should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. Recent enhancements to this approach include the porting of the algorithms to a platform-independent software language Python, and modifications to specifically consider morphing aircraft-type configurations. Two sample cases which illustrate these recent developments are presented.
Browning, Sharon R.; Browning, Brian L.
2015-01-01
Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package. PMID:26299365
Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.
2012-01-01
Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings, and are used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.
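The non-optimum factor calculation described above reduces to a simple ratio of as-built to reference weight. A minimal sketch (the gore weights below are illustrative placeholders, not values from the study):

```python
def non_optimum_factor(actual_weight, reference_weight):
    """Non-optimum factor: ratio of the as-built (flight hardware)
    weight to the idealized reference weight from a sizing model."""
    return actual_weight / reference_weight

# Hypothetical gore weights (kg): actual values from a mass-properties
# database vs. reference weights from a coarse (pressure-sized) model.
actual = [52.0, 48.5, 61.2]
reference = [30.0, 33.0, 29.0]

factors = [non_optimum_factor(a, r) for a, r in zip(actual, reference)]
average_nof = sum(factors) / len(factors)
```

Multiplying a model's reference weight by such an average factor yields the heavier, more realistic weight estimate the abstract describes.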
EDIN0613P weight estimating program. [for launch vehicles]
NASA Technical Reports Server (NTRS)
Hirsch, G. N.
1976-01-01
The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. It is part of an overall simulation technique called the EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights, and derives a large amount of overall vehicle geometry. It is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). Used flexibly, and with awareness of the limits that generalized input imposes on output depth and accuracy, the program can be a useful estimating tool at the conceptual design stage of a launch vehicle.
Weighted estimating equations with nonignorably missing response data.
Troxel, A B; Lipsitz, S R; Brennan, T A
1997-09-01
We propose weighted estimating equations for data with nonignorable nonresponse in order to reduce the bias that can occur with a complete case analysis. A survey concerning medical practice guidelines, malpractice litigation, and settlement provides the framework. The survey was sent to recipients in two waves: those who responded on the first or second wave are used to estimate a nonignorable nonresponse model, while the fraction of recipients who never responded is used to allow the percentage of missing data to change with each wave. We use the structure of the GEE of Liang and Zeger (1986, Biometrika 73, 13-22), adding weights equal to the inverse probability of being observed. We present simulations demonstrating the bias that can occur with an unweighted analysis and use the survey data to illustrate the methods. PMID:9290219
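The weighting idea above can be illustrated with the simplest possible estimating equation, a mean. A hedged sketch, not the authors' GEE implementation: each observed response is weighted by the inverse of its estimated probability of being observed.

```python
def ipw_mean(values, obs_probs):
    """Inverse-probability-weighted mean: each observed response y_i is
    weighted by 1/Pr(observed), in the spirit of the weighted
    estimating equations described above."""
    weights = [1.0 / p for p in obs_probs]
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)

# Illustrative responders: the first had only a 50% chance of being
# observed, so it is up-weighted to stand in for similar nonresponders.
est = ipw_mean([1.0, 2.0, 3.0], [0.5, 1.0, 1.0])
```

With all probabilities equal to 1 this reduces to the unweighted (complete case) mean, which is where the bias the abstract describes comes from when nonresponse is ignored.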
Dynamic consensus estimation of weighted average on directed graphs
NASA Astrophysics Data System (ADS)
Li, Shuai; Guo, Yi
2015-07-01
Recent applications call for distributed weighted-average estimation over sensor networks, where sensor measurement accuracy or environmental conditions need to be taken into account in the final consensus group decision. In this paper, we propose a new dynamic consensus filter design to estimate, in a distributed fashion, the weighted average of sensors' inputs on directed graphs. Based on recent advances in the field, we modify the existing proportional-integral consensus filter protocol to remove the requirement of bi-directional gain exchange between neighbouring sensors, so that the algorithm works on directed graphs where bi-directional communications are not possible. To compensate for the asymmetric structure that this removal introduces into the system, sufficient gain conditions are obtained for the filter protocols to guarantee convergence. It is rigorously proved that the proposed filter protocol converges to the weighted average of constant inputs asymptotically, and to the weighted average of time-varying inputs with a bounded error. Simulations verify the effectiveness of the proposed protocols.
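The paper's proportional-integral filter is not reproduced here; as an illustrative alternative with the same goal, a ratio ("push-sum") consensus sketch also recovers the weighted average of constant inputs on a strongly connected directed graph with a column-stochastic mixing matrix (graph, inputs, and weights below are illustrative):

```python
import numpy as np

def push_sum_weighted_average(x, w, A, iters=200):
    """Ratio (push-sum) consensus: each node iterates a numerator
    (w_i * x_i) and a denominator (w_i) through the column-stochastic
    matrix A; every node's ratio converges to sum(w*x)/sum(w)."""
    num = np.asarray(w, float) * np.asarray(x, float)
    den = np.asarray(w, float).copy()
    for _ in range(iters):
        num = A @ num
        den = A @ den
    return num / den

# 3-node directed ring: each node keeps half its mass, pushes half onward.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
x = [1.0, 2.0, 4.0]   # sensor inputs
w = [1.0, 2.0, 1.0]   # per-sensor weights (e.g. measurement accuracy)
estimates = push_sum_weighted_average(x, w, A)
```

Here every node converges to (1·1 + 2·2 + 1·4)/(1 + 2 + 1) = 2.25 even though communication is one-directional around the ring.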
Are In-Bed Electronic Weights Recorded in the Medical Record Accurate?
Gerl, Heather; Miko, Alexandra; Nelson, Mandy; Godaire, Lori
2016-01-01
This study found large discrepancies between in-bed weights recorded in the medical record and carefully obtained standing weights with a calibrated, electronic bedside scale. This discrepancy appears to be related to inadequate bed calibration before patient admission and having excessive linen, clothing, and/or equipment on the bed during weighing by caregivers. PMID:27522846
Performance and Weight Estimates for an Advanced Open Rotor Engine
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.; Tong, Michael T.
2012-01-01
NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow for the achievement of this objective by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Following that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines and are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.
48 CFR 52.247-8 - Estimated Weights or Quantities Not Guaranteed.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Estimated Weights or... Provisions and Clauses 52.247-8 Estimated Weights or Quantities Not Guaranteed. As prescribed in 47.207-3(e... transportation-related services when weights or quantities are estimates: Estimated Weights or Quantities...
48 CFR 52.247-8 - Estimated Weights or Quantities Not Guaranteed.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Estimated Weights or... Provisions and Clauses 52.247-8 Estimated Weights or Quantities Not Guaranteed. As prescribed in 47.207-3(e... transportation-related services when weights or quantities are estimates: Estimated Weights or Quantities...
NASA Astrophysics Data System (ADS)
Vizireanu, D. N.; Halunga, S. V.
2012-04-01
A simple, fast and accurate algorithm for estimating the amplitude of sinusoidal signals for DSP-based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed for cases where the sampled signal is affected by white Gaussian noise and where the samples are quantized on a given number of bits.
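The paper's eight-sample analytical formula is not given in the abstract; as a hedged stand-in, a least-squares fit at a known frequency also estimates the amplitude, via the quadrature decomposition A·sin(2πft + φ) = a·sin(2πft) + b·cos(2πft):

```python
import numpy as np

def estimate_amplitude(samples, t, freq):
    """Least-squares amplitude estimate of a sinusoid of known
    frequency: fit y ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t), then
    A = sqrt(a**2 + b**2)."""
    M = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t)])
    (a, b), *_ = np.linalg.lstsq(M, samples, rcond=None)
    return np.hypot(a, b)

# Illustrative: 64 samples at 1 kHz of a 50 Hz sinusoid, amplitude 2.
t = np.arange(64) / 1000.0
y = 2.0 * np.sin(2 * np.pi * 50.0 * t + 0.7)
amp = estimate_amplitude(y, t, 50.0)
```

With additive white Gaussian noise the same fit remains the maximum-likelihood amplitude estimate; the paper's contribution is doing comparably well with only eight samples and a closed-form expression.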
Digital combining-weight estimation for broadband sources using maximum-likelihood estimates
NASA Technical Reports Server (NTRS)
Rodemich, E. R.; Vilnrotter, V. A.
1994-01-01
An algorithm previously described for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system is compared with the maximum-likelihood estimate. The maximum-likelihood approach provides some improvement in performance, with an increase in computational complexity. However, the maximum-likelihood algorithm is simple enough to allow implementation on a PC-based combining system.
Blanc, Ann K.; Wardlaw, Tessa
2005-01-01
OBJECTIVE: To critically examine the data used to produce estimates of the proportion of infants with low birth weight in developing countries and to describe biases in these data. To assess the effect of adjustment procedures on the estimates and propose a modified estimation procedure for international reporting purposes. METHODS: Mothers' reports about their recent births in 62 nationally representative Demographic and Health Surveys (DHS) conducted between 1990 and 2000 were analysed. The proportion of infants weighed at birth, characteristics of those weighed, extent of misreporting, and mothers' subjective assessments of their children's size at birth were examined. FINDINGS: In many developing countries the majority of infants were not weighed at birth. Those who were weighed were more likely to have mothers who live in urban areas and are educated, and to be born in a medical facility with assistance from medically trained personnel. Birth weights reported by mothers are "heaped" on multiples of 500 grams. CONCLUSION: Current survey-based estimates of the prevalence of low birth weight are biased substantially downwards. Two adjustments to reported data are recommended: a weighting procedure that combines reported birth weights with mothers' assessment of the child's size at birth, and categorization of one-quarter of the infants reported to have a birth weight of exactly 2500 grams as having low birth weight. Averaged over all surveys, these procedures increased the proportion classified as having low birth weight by 25%. We also recommend that the proportion of infants not weighed at birth be routinely reported. Efforts are needed to increase the weighing of newborns and the recording of their weights. PMID:15798841
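The recommended heaping correction, counting one quarter of the infants reported at exactly 2500 g as low birth weight, can be sketched directly (the sample weights are illustrative, and the mothers' size-assessment weighting step is omitted):

```python
def adjusted_lbw_proportion(birth_weights_g):
    """Adjusted low-birth-weight proportion: infants under 2500 g count
    as LBW, plus one quarter of those reported at exactly 2500 g, the
    heaping correction recommended in the abstract above."""
    n = len(birth_weights_g)
    below = sum(1 for w in birth_weights_g if w < 2500)
    at_2500 = sum(1 for w in birth_weights_g if w == 2500)
    return (below + 0.25 * at_2500) / n

# Hypothetical reported weights, with heaping at exactly 2500 g.
weights_g = [2000, 2500, 2500, 2500, 2500, 3000, 3200, 3400]
p_lbw = adjusted_lbw_proportion(weights_g)
```

In this toy sample the unadjusted proportion would be 1/8 = 12.5%; the correction raises it to 25%, illustrating the direction of the bias the authors report.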
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The method computes gap fraction using a single unsaturated raw DCP image, corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
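Once a canopy image has been classified, the final counting step is simply the fraction of sky pixels. A minimal sketch assuming a plain brightness threshold (the scattering correction and sky-image reconstruction described above are not modeled):

```python
import numpy as np

def gap_fraction(image, threshold):
    """Gap fraction from a canopy image: the share of pixels brighter
    than the sky/canopy threshold (bright pixels = sky = gap)."""
    sky = np.asarray(image) > threshold
    return sky.mean()

# Illustrative 2x2 "image": two bright sky pixels, two dark canopy pixels.
gf = gap_fraction([[0.9, 0.1], [0.8, 0.2]], threshold=0.5)
```

The paper's point is precisely that this threshold should not be chosen subjectively; their correction steps make the count stable across exposure values and sky conditions.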
A new geometric-based model to accurately estimate arm and leg inertial estimates.
Wicke, Jason; Dumas, Geneviève A
2014-06-01
Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole-body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the ellipse axes were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
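The volume computation implied by the slice model can be sketched as a sum of elliptical cross-sections; the semi-axis values and density below are illustrative assumptions, and the sectioned-ellipse case for adjoining segments is omitted:

```python
import math

def segment_volume(semi_axes_frontal, semi_axes_sagittal, slice_thickness):
    """Volume of a limb segment modeled as a stack of elliptical slices:
    V = sum over slices of pi * a_i * b_i * dz, with semi-axis pairs
    (a_i, b_i) measured from frontal and sagittal photographs."""
    return sum(math.pi * a * b * slice_thickness
               for a, b in zip(semi_axes_frontal, semi_axes_sagittal))

# Illustrative forearm-like segment: 10 slices, 3 cm thick, circular here.
a = [0.05] * 10   # frontal-plane semi-axes (m), hypothetical
b = [0.05] * 10   # sagittal-plane semi-axes (m), hypothetical
v = segment_volume(a, b, 0.03)
mass = v * 1050.0  # times an assumed (here uniform) density, kg/m^3
```

In the actual model the density varies along the segment and by sex, so mass would be a density-weighted sum over slices rather than a single multiplication.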
Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.
Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro
2016-01-12
The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy. PMID:26605696
Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua
2012-01-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
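The fit-then-differentiate step can be sketched for a 2D tract: fit a 2nd-order polynomial and evaluate κ = |y''| / (1 + y'²)^(3/2). The sample points below are a noise-free parabola with known apex curvature, purely for illustration (real tracts are 3D and noisy):

```python
import numpy as np

def fitted_curvature(x, y, x0=0.0):
    """Fit tract points to a 2nd-order polynomial, then return the
    plane-curve curvature kappa = |y''| / (1 + y'**2)**1.5 at x0.
    Fitting suppresses the noise that inflates pointwise curvature."""
    c2, c1, c0 = np.polyfit(x, y, 2)   # highest-degree coefficient first
    dy = 2 * c2 * x0 + c1              # first derivative at x0
    d2y = 2 * c2                       # constant second derivative
    return abs(d2y) / (1 + dy ** 2) ** 1.5

# Parabola y = 0.5 * k * x**2 has curvature k at its apex; use k = 5 /m.
x = np.linspace(-0.05, 0.05, 21)
y = 0.5 * 5.0 * x ** 2
kappa = fitted_curvature(x, y)
```

With noisy tracking data the same fit returns a smoothed curvature, which is the mechanism behind the accuracy gains reported above.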
Bowden, Jack; Davey Smith, George; Haycock, Philip C.
2016-01-01
Developments in genome-wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse-variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite-sample Type 1 error rates than the inverse-variance weighted method, and is complementary to the recently proposed MR-Egger (Mendelian randomization-Egger) regression method. In analyses of the causal effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol on coronary artery disease risk, the inverse-variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR-Egger regression methods suggest a null effect of high-density lipoprotein cholesterol that corresponds with the experimental evidence. Both median-based and MR-Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants. PMID:27061298
Bowden, Jack; Davey Smith, George; Haycock, Philip C; Burgess, Stephen
2016-05-01
Developments in genome-wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse-variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite-sample Type 1 error rates than the inverse-variance weighted method, and is complementary to the recently proposed MR-Egger (Mendelian randomization-Egger) regression method. In analyses of the causal effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol on coronary artery disease risk, the inverse-variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR-Egger regression methods suggest a null effect of high-density lipoprotein cholesterol that corresponds with the experimental evidence. Both median-based and MR-Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants. PMID:27061298
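A minimal sketch of a weighted median (a step-function version; the authors use an interpolated variant) shows the claimed robustness: a few wildly invalid instruments do not move the estimate the way a weighted mean would.

```python
def weighted_median(estimates, weights):
    """Weighted median: the value at which the cumulative normalized
    weight first reaches 50%. Consistent even when up to half of the
    total weight comes from invalid instruments."""
    pairs = sorted(zip(estimates, weights))
    total = sum(weights)
    cum = 0.0
    for value, w in pairs:
        cum += w
        if cum >= 0.5 * total:
            return value

# Illustrative per-variant causal-effect ratios: three valid instruments
# near 1.0 and two invalid outliers; equal weights for simplicity.
wm = weighted_median([1.0, 1.1, 0.9, 5.0, 6.0], [1.0] * 5)
```

The weighted mean of these ratios would be 2.8, dragged by the two invalid instruments; the weighted median stays at 1.1. In practice the weights would be inverse-variance weights of the variant-specific estimates.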
Bayesian hemodynamic parameter estimation by bolus tracking perfusion weighted imaging.
Boutelier, Timothé; Kudo, Koshuke; Pautot, Fabrice; Sasaki, Makoto
2012-07-01
A delay-insensitive probabilistic method for estimating hemodynamic parameters, delays, theoretical residue functions, and concentration time curves by computed tomography (CT) and magnetic resonance (MR) perfusion weighted imaging is presented. Only a mild stationarity hypothesis is made beyond the standard perfusion model. New microvascular parameters with simple hemodynamic interpretation are naturally introduced. Simulations on standard digital phantoms show that the method outperforms the oscillating singular value decomposition (oSVD) method in terms of goodness-of-fit, linearity, statistical and systematic errors on all parameters, especially at low signal-to-noise ratios (SNRs). Delay is always estimated sharply with user-supplied resolution and is purely arterial, by contrast to oSVD time-to-maximum TMAX that is very noisy and biased by mean transit time (MTT), blood volume, and SNR. Residue functions and signals estimates do not suffer overfitting anymore. One CT acute stroke case confirms simulation results and highlights the ability of the method to reliably estimate MTT when SNR is low. Delays look promising for delineating the arterial occlusion territory and collateral circulation. PMID:22410325
Flocke, N
2009-08-14
In this paper it is shown that shifted Jacobi polynomials Gn(p,q,x) can be used in connection with the Gaussian quadrature modified moment technique to greatly enhance the accuracy of evaluation of Rys roots and weights used in Gaussian integral evaluation in quantum chemistry. A general four-term inhomogeneous recurrence relation is derived for the shifted Jacobi polynomial modified moments over the Rys weight function e^(-Tx)/√x. It is shown that for q = 1/2 this general four-term inhomogeneous recurrence relation reduces to a three-term p-dependent inhomogeneous recurrence relation. Adjusting p to proper values depending on the Rys exponential parameter T, the method is capable of delivering highly accurate results for a large number of roots and weights in the most difficult to treat intermediate T range. Examples are shown, and detailed formulas together with practical suggestions for their efficient implementation are also provided. PMID:19691378
NASA Astrophysics Data System (ADS)
Flocke, N.
2009-08-01
In this paper it is shown that shifted Jacobi polynomials Gn(p,q,x) can be used in connection with the Gaussian quadrature modified moment technique to greatly enhance the accuracy of evaluation of Rys roots and weights used in Gaussian integral evaluation in quantum chemistry. A general four-term inhomogeneous recurrence relation is derived for the shifted Jacobi polynomial modified moments over the Rys weight function e^(-Tx)/√x. It is shown that for q = 1/2 this general four-term inhomogeneous recurrence relation reduces to a three-term p-dependent inhomogeneous recurrence relation. Adjusting p to proper values depending on the Rys exponential parameter T, the method is capable of delivering highly accurate results for a large number of roots and weights in the most difficult to treat intermediate T range. Examples are shown, and detailed formulas together with practical suggestions for their efficient implementation are also provided.
Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.
Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide
2003-03-15
Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks. PMID:12680675
An improved Q estimation approach: the weighted centroid frequency shift method
NASA Astrophysics Data System (ADS)
Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Dong, Chunhui; Tao, Yonghui; Zhou, Yatao
2016-06-01
Seismic wave propagation in subsurface media suffers from absorption, which can be quantified by the quality factor Q. Accurate estimation of the Q factor is of great importance for the resolution enhancement of seismic data, precise imaging and interpretation, and reservoir prediction and characterization. The centroid frequency shift method (CFS) is currently one of the most commonly used Q estimation methods. However, for seismic data that contain noise, the accuracy and stability of Q extracted using CFS depend on the choice of frequency band. In order to reduce the influence of frequency band choices and obtain Q with greater precision and robustness, we present an improved CFS Q measurement approach, the weighted CFS method (WCFS), which incorporates a Gaussian weighting coefficient into the calculation procedure of the conventional CFS. The basic idea is to enhance the proportion of advantageous frequencies in the amplitude spectrum and reduce the weight of disadvantageous frequencies. In this novel method, we first construct a Gauss function using the centroid frequency and variance of the reference wavelet. Then we employ it as the weighting coefficient for the amplitude spectrum of the original signal. Finally, the conventional CFS is adopted for the weighted amplitude spectrum to extract the Q factor. Numerical tests of noise-free synthetic data demonstrate that the WCFS is feasible and efficient, and produces more accurate results than the conventional CFS. Tests for noisy synthetic data indicate that the new method has better anti-noise capability than the CFS. The application to field vertical seismic profile (VSP) data further demonstrates its validity.
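Under the Gaussian-spectrum assumption of the underlying CFS theory (Quan and Harris), the weighting idea can be sketched as follows; details such as the exact weighting coefficient may differ from the paper's:

```python
import numpy as np

def centroid_and_variance(f, S):
    """Spectral centroid and variance of an amplitude spectrum S(f)."""
    fc = np.sum(f * S) / np.sum(S)
    var = np.sum((f - fc) ** 2 * S) / np.sum(S)
    return fc, var

def wcfs_q(f, S_ref, S_atten, t):
    """Weighted CFS sketch: build a Gaussian weight from the reference
    wavelet's centroid and variance, weight both spectra, then apply
    the CFS formula Q = pi * t * var_ref / (fc_ref - fc_atten)."""
    fc, var = centroid_and_variance(f, S_ref)
    g = np.exp(-(f - fc) ** 2 / (2 * var))   # Gaussian weighting coefficient
    fc_r, var_r = centroid_and_variance(f, S_ref * g)
    fc_s, _ = centroid_and_variance(f, S_atten * g)
    return np.pi * t * var_r / (fc_r - fc_s)

# Synthetic Gaussian source spectrum attenuated with true Q = 50 over
# a traveltime of 0.5 s (all values illustrative).
f = np.linspace(0.0, 120.0, 12001)
S_ref = np.exp(-(f - 40.0) ** 2 / (2 * 8.0 ** 2))
t, Q_true = 0.5, 50.0
S_att = S_ref * np.exp(-np.pi * f * t / Q_true)
Q_est = wcfs_q(f, S_ref, S_att, t)    # recovers a value near Q_true
```

For an exactly Gaussian spectrum the Gaussian weighting leaves the CFS formula consistent (both the variance and the centroid shift are halved), which is why the weighted estimate still recovers Q; its advantage appears when noise contaminates the spectral tails.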
Estimating topological properties of weighted networks from limited information.
Cimini, Giulio; Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego
2015-10-01
A problem typically encountered when studying complex systems is the limitedness of the information available on their topology, which hinders our understanding of their structure and of the dynamical processes taking place on them. A paramount example is provided by financial networks, whose data are privacy protected: Banks publicly disclose only their aggregate exposure towards other banks, keeping individual exposures towards each single bank secret. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here, we develop a reconstruction method, based on statistical mechanics concepts, that makes use of the empirical link density in a highly nontrivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems. PMID:26565153
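The first (degree-estimation) step can be sketched with a fitness-model ansatz p_ij = z·s_i·s_j / (1 + z·s_i·s_j), calibrating z so the expected link count matches the empirical density; the functional form here is an assumption for illustration, and the subsequent maximum-entropy inference step is omitted:

```python
def calibrate_z(strengths, target_links, lo=1e-12, hi=1e12, iters=200):
    """Choose z so that the expected number of links,
    sum_{i<j} z*s_i*s_j / (1 + z*s_i*s_j), matches the link count
    implied by the empirical density (bisection on a log scale)."""
    def expected_links(z):
        n = len(strengths)
        return sum(z * strengths[i] * strengths[j]
                   / (1 + z * strengths[i] * strengths[j])
                   for i in range(n) for j in range(i + 1, n))
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if expected_links(mid) < target_links:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

def expected_degrees(strengths, z):
    """Expected degree of each node under the calibrated link
    probabilities; these feed the maximum-entropy step."""
    n = len(strengths)
    return [sum(z * strengths[i] * strengths[j]
                / (1 + z * strengths[i] * strengths[j])
                for j in range(n) if j != i) for i in range(n)]

# Illustrative: five nodes with known strengths and link density 0.4
# (4 links out of 10 possible pairs).
s = [1.0, 2.0, 3.0, 4.0, 5.0]
z = calibrate_z(s, target_links=4.0)
k_hat = expected_degrees(s, z)
```

The sum of the expected degrees equals twice the target link count, and the heterogeneous k_hat values are what prevent the unrealistically dense or homogeneous reconstructions the abstract criticizes.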
Estimating topological properties of weighted networks from limited information
NASA Astrophysics Data System (ADS)
Cimini, Giulio; Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego
2015-10-01
A problem typically encountered when studying complex systems is the limitedness of the information available on their topology, which hinders our understanding of their structure and of the dynamical processes taking place on them. A paramount example is provided by financial networks, whose data are privacy protected: Banks publicly disclose only their aggregate exposure towards other banks, keeping individual exposures towards each single bank secret. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here, we develop a reconstruction method, based on statistical mechanics concepts, that makes use of the empirical link density in a highly nontrivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems.
Estimating topological properties of weighted networks from limited information
NASA Astrophysics Data System (ADS)
Gabrielli, Andrea; Cimini, Giulio; Garlaschelli, Diego; Squartini, Angelo
A typical problem met when studying complex systems is the limited information available on their topology, which hinders our understanding of their structural and dynamical properties. A paramount example is provided by financial networks, whose data are privacy protected. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here we develop a reconstruction method, based on statistical mechanics concepts, that exploits the empirical link density in a highly non-trivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems. Acknowledgement to the ``Growthcom'' ICT EC project (Grant No. 611272) and the ``Crisislab'' Italian project.
Burward-Hoy, J. M.; Geist, W. H.; Krick, M. S.; Mayo, D. R.
2004-01-01
Neutron multiplicity counting is a technique for the rapid, nondestructive measurement of plutonium mass in pure and impure materials. This technique is very powerful because it uses the measured coincidence count rates to determine the sample mass without requiring a set of representative standards for calibration. Interpreting measured singles, doubles, and triples count rates using the three-parameter standard point model accurately determines plutonium mass, neutron multiplication, and the ratio of (α,n) to spontaneous-fission neutrons (α) for oxides of moderate mass. However, underlying standard point model assumptions - including constant neutron energy and constant multiplication throughout the sample - cause significant biases for the mass, multiplication, and alpha in measurements of metal and large, dense oxides.
National Airspace System Delay Estimation Using Weather Weighted Traffic Counts
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.; Sridhar, Banavar
2004-01-01
Assessment of National Airspace System performance, which is usually measured in terms of delays resulting from the application of traffic flow management initiatives in response to weather conditions, volume, equipment outages and runway conditions, is needed both for guiding flow control decisions during the day of operations and for post-operations analysis. Comparison of the actual delay, resulting from the traffic flow management initiatives, with the expected delay, based on traffic demand and other conditions, provides the assessment of the National Airspace System performance. This paper provides a method for estimating delay using the expected traffic demand and weather. In order to identify the cause of delays, 517 days of National Airspace System delay data reported by the Federal Aviation Administration's Operations Network were analyzed. This analysis shows that weather is the most important causal factor for delays, followed by equipment and runway delays. Guided by these results, the concept of weather weighted traffic counts as a measure of system delay is described. Examples are given to show the variation of these counts as a function of time of the day. The various datasets, consisting of aircraft position data, enroute severe weather data, surface wind speed and visibility data, reported delay data and number of aircraft handled by the Centers data, and their sources are described. The procedure for selecting reference days on which traffic was minimally impacted by weather is described. Different traffic demand on each reference day of the week, determined by analysis of 42 days of traffic and delay data, was used as the expected traffic demand for each day of the week. Next, the method for computing the weather weighted traffic counts using the expected traffic demand, derived from reference days, and the expanded regions around severe weather cells is discussed. It is shown via a numerical example that this approach improves the dynamic range
Natale, Ruby; Uhlhorn, Susan B; Lopez-Mitnik, Gabriela; Camejo, Stephanie; Englebert, Nicole; Delamater, Alan M; Messiah, Sarah E
2016-04-01
Background: One in four preschool-age children in the United States are currently overweight or obese. Previous studies have shown that caregivers of this age group often have difficulty accurately recognizing their child's weight status. The purpose of this study was to examine factors associated with accurate/inaccurate perception of child body mass index (BMI) among a multicultural sample of caregivers who were predominantly low-income and foreign-born. Methods: A total of 980 caregivers (72% Hispanic, 71% born outside of the United States) of preschool-age children (N = 1,105) were asked if their child was normal weight, overweight, or obese. Answers were compared to actual child BMI percentile category via chi-square analysis. Logistic regression analysis was performed to assess predictors of accurate perception of child BMI percentile category. Results: More than one third of preschoolers were either overweight (18.4%) or obese (16.5%). The majority (92%) of caregivers of an overweight/obese child inaccurately perceived that their child was in a normal BMI category. Overall, foreign-born caregivers were significantly less likely to accurately perceive their child's BMI percentile category versus U.S.-born caregivers (odds ratio [OR] = 0.65, 95% confidence interval [CI] = 0.48-0.88). Specifically, those born in South America (OR = 0.59, 95% CI = 0.36-0.98), Central America/Mexico (OR = 0.59, 95% CI = 0.41-0.85), and Caribbean Hispanic nations (OR = 0.54, 95% CI = 0.35-0.83) were significantly less likely to accurately perceive their child's BMI category versus U.S.-born caregivers. Conclusions: The results of this study suggest that foreign-born caregivers of U.S. preschool-age overweight/obese children in particular do not accurately perceive their child's BMI status. Health care professionals serving foreign-born caregivers may consider additional culturally appropriate healthy weight counseling for these families. PMID:26304710
Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D
2016-06-15
The receiver operating characteristic (ROC) curve is a popular technique with applications, for example, in investigating the accuracy of a biomarker to delineate between disease and non-disease groups. A common measure of accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks only at the area with certain specificities (i.e., true negative rates), and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with a plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement both for one biomarker test and for the comparison of two correlated biomarkers because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on whether inference is based on the AUC or the pAUC, we can reach different decisions about the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26790540
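The nonparametric pAUC estimator discussed above restricts the usual Mann-Whitney pair counting to healthy-group scores above a plug-in sample quantile. A minimal sketch (function name, quantile method, and tie handling are illustrative choices, not the authors' code):

```python
import numpy as np

def pauc(disease, healthy, fpr_max=0.2):
    """Nonparametric partial AUC over false-positive rates in [0, fpr_max].

    Counts concordant pairs, but only those in which the healthy score
    exceeds the empirical (1 - fpr_max) quantile of the healthy group --
    the plug-in sample quantile of the U-statistic formulation.
    """
    disease = np.asarray(disease, dtype=float)
    healthy = np.asarray(healthy, dtype=float)
    q = np.quantile(healthy, 1.0 - fpr_max)  # plug-in specificity threshold
    total = 0.0
    for h in healthy[healthy >= q]:
        total += np.sum(disease > h) + 0.5 * np.sum(disease == h)
    return total / (len(disease) * len(healthy))
```

With perfect separation the statistic reaches roughly fpr_max (the full area of the restricted strip), and it is zero when every disease score lies below the retained healthy scores.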
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images and define a fitness function to measure relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples using simulated, experimental, and patient data collected with the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785
Accurate estimation of object location in an image sequence using helicopter flight data
NASA Technical Reports Server (NTRS)
Tang, Yuan-Liang; Kasturi, Rangachar
1994-01-01
In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.
Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar
NASA Astrophysics Data System (ADS)
Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru
Orbit estimation of space debris, objects of no inherent value orbiting the earth, is a task that is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect the smaller debris, coherent integration is effective in improving SNR (Signal-to-Noise Ratio). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm is proposed for echo detection and orbit estimation of the faint echoes from space debris. The algorithm exploits the characteristics of the evaluation function. Experiments show the proposed algorithm improves SNR by 8.32 dB and enables sufficiently accurate estimation of orbital parameters to allow re-tracking with a single radar.
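Coherent integration of N in-phase pulses raises SNR by roughly 10·log10(N) dB, because the signal amplitude grows by N while the noise power grows only by N. A small simulation illustrating that principle (this is not the authors' detection algorithm, only the textbook effect their 8.32 dB gain reflects):

```python
import numpy as np

def coherent_integration_gain(n_pulses, n_trials=20000, seed=0):
    """Estimate the SNR gain (dB) from coherently summing n_pulses echoes.

    Each trial draws n_pulses copies of a unit-amplitude echo in complex
    white noise of unit power; summing in phase grows the signal power by
    n_pulses**2 but the noise power only by n_pulses, so the expected
    gain is 10*log10(n_pulses).
    """
    rng = np.random.default_rng(seed)
    noise = (rng.standard_normal((n_trials, n_pulses)) +
             1j * rng.standard_normal((n_trials, n_pulses))) / np.sqrt(2)
    single = 1.0 + noise[:, 0]                 # one pulse
    summed = n_pulses + noise.sum(axis=1)      # coherent sum of n_pulses
    snr_single = 1.0 / np.mean(np.abs(single - 1.0) ** 2)
    snr_summed = n_pulses ** 2 / np.mean(np.abs(summed - n_pulses) ** 2)
    return 10 * np.log10(snr_summed / snr_single)
```

The practical difficulty the abstract addresses is precisely that the phase history of a moving, unknown target must be estimated before such a sum can be formed.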
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal to noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
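The per-iteration coupling of a derivative-free swarm with a gradient-based refinement can be sketched as follows. Plain projected gradient descent stands in for the trust-region-reflective solver, and all parameter values (inertia, acceleration constants, step size) are illustrative defaults, not those used in the study:

```python
import numpy as np

def hybrid_fit(loss, grad, bounds, n_particles=20, n_iter=50, seed=0):
    """Toy hybrid optimizer: one particle-swarm sweep per iteration,
    followed by gradient refinement of the swarm's best particle."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = lo + rng.random((n_particles, len(lo))) * (hi - lo)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        # Derivative-free sweep: standard PSO velocity/position update.
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        # Gradient-based refinement of the incumbent best candidate.
        cand = pbest[pbest_f.argmin()].copy()
        for _ in range(20):
            cand = np.clip(cand - 0.2 * grad(cand), lo, hi)
        if loss(cand) < loss(gbest):
            gbest = cand
    return gbest
```

The swarm supplies global exploration and rough initial guesses; the gradient step supplies fast local convergence, which is the division of labor the abstract describes.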
Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness
2015-01-01
Background: Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results: On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions: The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability: http://sourceforge.net/projects/viquas/ PMID:26678073
Information-geometric measures for estimation of connection weight under correlated inputs.
Nie, Yimin; Tatsuno, Masami
2012-12-01
The brain processes information in a highly parallel manner. Determination of the relationship between neural spikes and synaptic connections plays a key role in the analysis of electrophysiological data. Information geometry (IG) has been proposed as a powerful analysis tool for multiple spike data, providing useful insights into the statistical interactions within a population of neurons. Previous work has demonstrated that IG measures can be used to infer the connection weight between two neurons in a neural network. This property is useful in neuroscience because it provides a way to estimate learning-induced changes in synaptic strengths from extracellular neuronal recordings. A previous study has shown, however, that this property holds only when inputs to neurons are not correlated. Since neurons in the brain often receive common inputs, this would hinder the application of the IG method to real data. We investigated the two-neuron IG measures in higher-order log-linear models to overcome this limitation. First, we mathematically showed that the estimation of uniformly connected synaptic weights can be improved by taking into account higher-order log-linear models. Second, we numerically showed that the estimation can be improved for more general asymmetrically connected networks. Considering the estimated number of synaptic connections in the brain, we showed that the two-neuron IG measure calculated by the fourth- or fifth-order log-linear model would provide an accurate estimation of connection strength to within approximately 10% error. These studies suggest that the two-neuron IG measure with higher-order log-linear expansion is a robust estimator of connection weight even under correlated inputs, providing a useful analytical tool for real multineuronal spike data. PMID:22970877
NASA Astrophysics Data System (ADS)
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of any ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK/T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgery such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error, which may cause poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation[1] or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the exactly measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and another post-LASIK patient agreed very well with their visual outcomes after cataract surgery.
Njoku, Charles; Emechebe, Cajethan; Odusolu, Patience; Abeshi, Sylvestre
2014-01-01
Information on fetal weight is of importance to obstetricians in the management of pregnancy and delivery. The objective of this study was to compare the accuracy of clinical and sonographic methods of predicting fetal weight at term. This prospective comparative study of 200 parturients was conducted at the University of Calabar Teaching Hospital, Calabar. The study participants were mothers with singleton term pregnancies admitted for delivery. The mean absolute percentage errors of the clinical and ultrasound methods were 11.16 ± 9.48% and 9.036 ± 7.61%, respectively, and the difference was not statistically significant (P = 0.205). The accuracy within 10% of actual birth weight was 69.5% and 72% for clinical estimation of fetal weight and ultrasound, respectively, and the difference was not statistically significant (P = 0.755). The accuracy of fetal weight estimation using Dare's formula is comparable to ultrasound estimates for predicting birth weight at term.
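The two headline statistics used in this and the ultrasound fetal-weight study above (mean absolute percentage error and the share of estimates within ±10% of actual birth weight) can be computed with a few lines. A hedged sketch; the function name is an illustrative choice:

```python
import numpy as np

def mape_stats(estimates, actual):
    """Mean absolute percentage error and percentage of estimates
    falling within +/-10% of the actual values."""
    est = np.asarray(estimates, dtype=float)
    act = np.asarray(actual, dtype=float)
    ape = np.abs(est - act) / act * 100.0        # absolute % error per case
    return ape.mean(), np.mean(ape <= 10.0) * 100.0
```

Reporting both numbers matters: a small mean error can coexist with a large tail of estimates outside the clinically acceptable ±10% band.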
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for a Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Cancilla, John C; Díaz-Rodríguez, Pablo; Matute, Gemma; Torrecilla, José S
2015-02-14
The estimation of the density and refractive index of ternary mixtures comprising the ionic liquid (IL) 1-butyl-3-methylimidazolium tetrafluoroborate, 2-propanol, and water at a fixed temperature of 298.15 K has been attempted through artificial neural networks. The obtained results indicate that the selection of this mathematical approach was a well-suited option. The mean prediction errors obtained, after simulating with a dataset never involved in the training process of the model, were 0.050% and 0.227% for refractive index and density estimation, respectively. These accurate results, which were attained using only the composition of the dissolutions (mass fractions), imply that ternary mixtures similar to the one analyzed can most likely be easily evaluated utilizing this algorithmic tool. In addition, different chemical processes involving ILs can be monitored precisely, and furthermore, the purity of the compounds in the studied mixtures can be indirectly assessed thanks to the high accuracy of the model. PMID:25583241
Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti
2016-01-01
The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397
Lamb mode selection for accurate wall loss estimation via guided wave tomography
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-18
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, to compare their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to S0; however, the attenuation from A0 when a liquid loading was present was much higher than S0. A0 was less sensitive to the presence of coatings on the surface than S0.
Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet
2016-05-01
Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) following Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. PMID:26851474
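The correction pipeline implied above (calibrate an individual HR-VO2 relation from the step test, then apply it to heat-corrected heart rate) can be sketched in a few lines. The linear calibration and the function signature are illustrative assumptions; the study's exact protocol is described in the source and in Vogt et al.:

```python
import numpy as np

def vo2_from_hr(hr_work, delta_hr_thermal, hr_calib, vo2_calib):
    """Estimate work VO2 from heart rate after removing the thermal component.

    hr_calib / vo2_calib: paired step-test measurements used to fit an
    individual linear HR-VO2 relation (a common simplification; the
    true relation need not be linear over the full HR range).
    """
    slope, intercept = np.polyfit(hr_calib, vo2_calib, 1)
    hr_corrected = hr_work - delta_hr_thermal   # strip the thermal component
    return slope * hr_corrected + intercept
```

Skipping the subtraction and feeding raw HR into the same line reproduces the overestimation reported in the abstract: the thermal rise in HR is attributed to metabolic load.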
Granata, Daniele; Carnevale, Vincenzo
2016-01-01
The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
Hybridization modeling of oligonucleotide SNP arrays for accurate DNA copy number estimation
Wan, Lin; Sun, Kelian; Ding, Qi; Cui, Yuehua; Li, Ming; Wen, Yalu; Elston, Robert C.; Qian, Minping; Fu, Wenjiang J
2009-01-01
Affymetrix SNP arrays have been widely used for single-nucleotide polymorphism (SNP) genotype calling and DNA copy number variation inference. Although numerous methods have achieved high accuracy in these fields, most studies have paid little attention to the modeling of hybridization of probes to off-target allele sequences, which can affect the accuracy greatly. In this study, we address this issue and demonstrate that hybridization with mismatch nucleotides (HWMMN) occurs in all SNP probe-sets and has a critical effect on the estimation of allelic concentrations (ACs). We study sequence binding through binding free energy and then binding affinity, and develop a probe intensity composite representation (PICR) model. The PICR model allows the estimation of ACs at a given SNP through statistical regression. Furthermore, we demonstrate with cell-line data of known true copy numbers that the PICR model can achieve reasonable accuracy in copy number estimation at a single SNP locus, by using the ratio of the estimated AC of each sample to that of the reference sample, and can reveal subtle genotype structure of SNPs at abnormal loci. We also demonstrate with HapMap data that the PICR model yields accurate SNP genotype calls consistently across samples, laboratories and even across array platforms. PMID:19586935
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set was collected during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
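The three-step pipeline described above (rate the index velocity to channel-mean velocity, compute instantaneous discharge, then low-pass filter out the tides) can be sketched as follows. A simple moving average stands in for the tidal filter, and the linear rating is an illustrative assumption; operational ratings and filters (e.g. Godin-type) are more elaborate:

```python
import numpy as np

def net_discharge(index_v, adcp_index_v, adcp_mean_v, area, window):
    """Rate an index velocity to channel-mean velocity, then filter tides.

    adcp_index_v / adcp_mean_v: concurrent calibration pairs from the
    acoustic Doppler discharge measurements.
    window: filter length in samples, spanning at least one tidal cycle.
    """
    # Step 1: linear rating of index velocity to mean channel velocity.
    slope, intercept = np.polyfit(adcp_index_v, adcp_mean_v, 1)
    mean_v = slope * np.asarray(index_v, dtype=float) + intercept
    # Step 2: instantaneous discharge from velocity and channel area.
    q = mean_v * area
    # Step 3: low-pass filter to leave the net (residual) discharge.
    kernel = np.ones(window) / window
    return np.convolve(q, kernel, mode="valid")
```

Because the tidal oscillation can dwarf the residual flow, errors in the rating step propagate directly into the filtered net discharge, which is why the study singles out calibration error as dominant.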
MIDAS robust trend estimator for accurate GPS station velocities without step detection
NASA Astrophysics Data System (ADS)
Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien
2016-03-01
Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i < j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
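The pair-selection and outlier-trimming logic described above can be sketched as follows. This is a simplified reading of the algorithm, not the published implementation: the tolerance around the one-year separation, the MAD-based scale estimate, and the 2-sigma trim are illustrative choices.

```python
import numpy as np

def midas_trend(t, x, tol=0.001):
    """MIDAS-style trend (sketch): median of slopes from data pairs
    separated by ~1 year, then outlier removal and a second median.
    t is in years; tol sets how close to 1 yr a pair must be."""
    slopes = []
    for i in range(len(t)):
        # find the first sample at least (1 - tol) yr after sample i
        for j in range(i + 1, len(t)):
            if t[j] - t[i] >= 1.0 - tol:
                break
        else:
            continue
        if t[j] - t[i] <= 1.0 + tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    slopes = np.array(slopes)
    med = np.median(slopes)
    # robust scatter: 1.4826 * MAD approximates a standard deviation
    sigma = 1.4826 * np.median(np.abs(slopes - med))
    kept = slopes[np.abs(slopes - med) < 2.0 * sigma]
    return np.median(kept) if kept.size else med
```

Because the one-year pairs spanning a step produce one-sided outlier slopes, the trimmed median recovers the underlying rate even with an undetected offset in the series.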
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
Accurate estimation of the RMS emittance from single current amplifier data
Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.
2002-05-31
This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H⁻ ion source.
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and objects is one of the most common reasons for remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration. Accurately identifying motion blur direction and length is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters by using the Radon transform. However, serious noise in actual remote sensing images often obscures these stripes, making the parameters difficult to calculate and producing relatively large errors. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristic of noisy remote sensing images is analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. Motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A
2016-05-01
The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion, such as goodness-of-fit statistics, which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach, and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available. PMID:27013261
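The core ideas here, a beta-binomial likelihood plus grid-based numerical integration of the posterior instead of MCMC, can be sketched compactly. This is not psignifit 4; the logistic form, the fixed slope and overdispersion, the flat prior, and the grid bounds are all simplifying assumptions for illustration.

```python
import numpy as np
from math import lgamma

def log_betabinom(k, n, p, eta):
    """Beta-binomial log pmf with mean p and overdispersion eta in (0,1):
    alpha = p*nu, beta = (1-p)*nu with nu = 1/eta - 1."""
    nu = 1.0 / eta - 1.0
    a, b = p * nu, (1.0 - p) * nu
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))

def posterior_threshold(x, k, n, thresholds, slope=1.0, eta=0.1):
    """Grid posterior (flat prior) over the threshold m of a logistic
    psychometric function psi(x) = 1 / (1 + exp(-slope * (x - m)))."""
    logp = np.zeros(len(thresholds))
    for i, m in enumerate(thresholds):
        p = 1.0 / (1.0 + np.exp(-slope * (x - m)))
        p = np.clip(p, 1e-6, 1 - 1e-6)
        logp[i] = sum(log_betabinom(ki, ni, pi, eta)
                      for ki, ni, pi in zip(k, n, p))
    w = np.exp(logp - logp.max())
    return w / w.sum()
```

Grid integration like this stays deterministic and needs no convergence diagnostics, which is the practical advantage over MCMC the abstract points to.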
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate the full 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method. PMID:23893759
NASA Astrophysics Data System (ADS)
Rosenthal, Yair; Lohmann, George P.
2002-09-01
Paired δ18O and Mg/Ca measurements on the same foraminiferal shells offer the ability to independently estimate sea surface temperature (SST) changes and assess their temporal relationship to the growth and decay of continental ice sheets. The accuracy of this method is confounded, however, by the absence of a quantitative method to correct Mg/Ca records for alteration by dissolution. Here we describe dissolution-corrected calibrations for Mg/Ca-paleothermometry in which the preexponent constant is a function of size-normalized shell weight: (1) for G. ruber (212-300 μm), (Mg/Ca)ruber = (0.025 wt + 0.11) e^(0.095T), and (2) for G. sacculifer (355-425 μm), (Mg/Ca)sacc = (0.0032 wt + 0.181) e^(0.095T). The new calibrations improve the accuracy of SST estimates and are globally applicable. With this correction, eastern equatorial Atlantic SST during the Last Glacial Maximum is estimated to be 2.9° ± 0.4°C colder than today.
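Inverting the G. ruber calibration above for temperature is a one-line computation; the following sketch simply solves Mg/Ca = (0.025 wt + 0.11) e^(0.095T) for T, with the constants taken directly from the abstract.

```python
import math

def sst_from_mgca_ruber(mgca, shell_weight_ug):
    """Dissolution-corrected SST (deg C) for G. ruber (212-300 um),
    inverting Mg/Ca = (0.025 * wt + 0.11) * exp(0.095 * T).
    mgca in mmol/mol, shell weight in micrograms."""
    pre = 0.025 * shell_weight_ug + 0.11
    return math.log(mgca / pre) / 0.095
```

For a given shell weight, a round trip through the forward calibration and this inverse recovers the input temperature exactly.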
Quick and accurate estimation of the elastic constants using the minimum image method
NASA Astrophysics Data System (ADS)
Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.
2015-04-01
A method for determining the elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting by the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that when determining the elastic constants, the contribution of long-range interactions cannot be ignored, because ignoring it leads to erroneous results. In addition, the simulations have revealed that including the interactions of each particle with all its minimum image neighbors, even in the case of small systems, yields results very close to the values of the elastic constants in the thermodynamic limit. This enables quick and accurate estimation of the elastic constants using very small samples.
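The minimum image convention at the heart of the MIM can be sketched as follows; the cubic box, the absence of a cutoff, and the brute-force pair loop are illustrative simplifications, not the paper's MC implementation.

```python
import numpy as np

def min_image_disp(ri, rj, box):
    """Displacement from particle j to particle i under the minimum image
    convention for a cubic periodic box of side `box`."""
    d = ri - rj
    return d - box * np.round(d / box)

def lj_energy(pos, box, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy, summing each pair once and using the
    minimum image of every neighbor (no cutoff)."""
    n = len(pos)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(min_image_disp(pos[i], pos[j], box))
            sr6 = (sigma / r) ** 6
            e += 4.0 * eps * (sr6 * sr6 - sr6)
    return e
```

Two particles near opposite faces of the box interact through the boundary at their short image distance, which is exactly the long-range contribution the abstract says cannot be ignored.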
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
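One plausible reading of the Archimedes-based rule in this abstract is sketched below: a sphere occupies 2/3 of its circumscribing cylinder, so volume is estimated as (2/3) × cross-sectional area × hidden depth, with the hidden depth assumed equal to the measured minor axis and a multiplicative 'unellipticity' correction. The exact formula and coefficient definition in the paper may differ.

```python
import math

def biovolume(area, minor_axis, unellipticity=1.0):
    """Biovolume from a 2D image: Archimedes' sphere-in-cylinder ratio (2/3)
    applied to (cross-sectional area x hidden depth), where the hidden depth
    is assumed equal to the measured minor axis. `unellipticity` = 1 for
    elliptic outlines; other values are a hypothetical correction."""
    return (2.0 / 3.0) * area * minor_axis * unellipticity
```

For a sphere this is exact: with area πr² and minor axis 2r it returns (4/3)πr³, and the same identity holds for any spheroid viewed along a principal axis.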
Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks
NASA Astrophysics Data System (ADS)
Bouchaala, F.; Ali, M. Y.
2014-12-01
The attenuation experienced by a seismic wave during its propagation can be divided into two main parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of the energy due to the medium heterogeneities, whereas intrinsic attenuation is an inelastic phenomenon, mainly due to fluid-grain friction during the wave passage. The intrinsic attenuation is directly related to the physical characteristics of the medium, so this parameter can be used for media characterization and fluid detection, which is beneficial for the oil and gas industry. The intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; therefore the accuracy of the intrinsic attenuation depends directly on the accuracy of the total attenuation and the scattering. The total attenuation can be estimated from the recorded waves using in situ methods such as the spectral ratio and frequency shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate is strongly dependent on the layer thicknesses, especially for media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies proposed assumptions for the choice of the layer thickness, but these showed limitations, especially in the case of carbonate rocks. In this study we established a relationship between the layer thicknesses and the frequency of the propagation, through mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship through synthetic tests and real data from a VSP carried out over an onshore oilfield in the emirate of Abu Dhabi in the United Arab Emirates, primarily composed of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering.
Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry
NASA Astrophysics Data System (ADS)
van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.
2016-03-01
Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
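The triangulation step, recovering a 3D point from its pixel coordinates in two views, is standard multi-view geometry and can be sketched with a linear (DLT) solve. The projection matrices and point below are hypothetical; the paper's C-arm geometry and needle-detection stages are not modeled.

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of a 3D point from its pixel coordinates
    u1, u2 in two views with 3x4 projection matrices P1, P2. Each image
    observation contributes two rows to a homogeneous system A X = 0,
    solved via SVD (null vector = last row of V^T)."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With noise-free correspondences the reconstruction is exact; with noisy detections the SVD gives the least-squares solution, which is why a wider angular separation between views improves depth accuracy.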
Can student health professionals accurately estimate alcohol content in commonly occurring drinks?
Sinclair, Julia; Searle, Emma
2016-01-01
Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344
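The calculation the students were being tested on is simple arithmetic; a sketch using the UK definition of a unit (10 ml, or 8 g, of pure ethanol) follows. The example drinks are illustrative, not taken from the study's ten test items.

```python
def alcohol_units(volume_ml, abv_percent):
    """UK units of alcohol: one unit = 10 ml of pure ethanol,
    so units = volume (ml) x ABV (%) / 1000."""
    return volume_ml * abv_percent / 1000.0

def alcohol_grams(volume_ml, abv_percent, density=0.789):
    """Grams of ethanol, using ethanol's density of ~0.789 g/ml."""
    return volume_ml * abv_percent / 100.0 * density
```

For example, a 175 ml glass of 13% ABV wine is 2.275 units, which helps explain why wine was among the drinks most often underestimated.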
NASA Astrophysics Data System (ADS)
Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray
2016-06-01
Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of diffuse attenuation Kd and beam attenuation c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.
Weight estimation of unconventional structures by structural optimization
NASA Technical Reports Server (NTRS)
Miura, Hirokazu; Shyu, Albert
1986-01-01
Automated techniques used in structural optimization technology are presented, with emphasis on modifications of finite element models to obtain an optimal material distribution for minimum weight while satisfying the prescribed design requirements. It is anticipated that the future development of computer-aided engineering (CAE) systems will provide environments where structural analysis, design optimization, and weight evaluation modules are integrated, sharing a common data base. Structural optimization capabilities obtained by integrating a finite element structural analysis program and a numerical optimization code are developed and applied to two illustrative examples: marine gear housing structural weight minimization and joined wing structures.
Discrete state model and accurate estimation of loop entropy of RNA secondary structures.
Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie
2008-03-28
Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops of longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurement. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. This suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
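The Jacobson-Stockmayer extrapolation that the paper benchmarks against can be sketched as a one-line formula; the logarithmic coefficient of 1.75 is the value commonly used in nearest-neighbor RNA models and is an assumption here, as are the reference values in the example.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def js_loop_entropy(n, n_ref, s_ref, coeff=1.75):
    """Jacobson-Stockmayer extrapolation of loop-closure entropy (sketch):
    S(n) = S(n_ref) - coeff * R * ln(n / n_ref), i.e. the entropic
    penalty grows logarithmically with loop length n."""
    return s_ref - coeff * R * math.log(n / n_ref)
```

The paper's finding is that this logarithmic extrapolation works well for hairpins but misestimates bulges, internal loops, and multibranch loops, which motivates their sampled, excluded-volume-aware estimates.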
A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mack, Robert J.
1999-01-01
During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature with the shocks causing these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. Like other weight-estimation methods, weights were determined at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.
NASA Astrophysics Data System (ADS)
Bollmann, J.
2014-04-01
A circular polarizer is used for the first time to image coccoliths without the extinction pattern of crossed polarized light at maximum interference colour. The combination of a circular polarizer with retardation measurements based on grey values derived from theoretical calculations allows, for the first time, accurate calculation of the weight of single coccoliths thinner than 1.37 μm. The weight estimates of 364 Holocene coccoliths using this new method are in good agreement with published volumetric estimates. A robust calibration method based on the measurement of a calibration target of known retardation enables the comparison of data between different imaging systems. Therefore, the new method overcomes the shortcomings of the error-prone empirical calibration procedure of a previously reported method based on the birefringence of calcite. Furthermore, it greatly simplifies the identification of coccolithophore species on the light microscope as well as the calculation of the area, and thus the weight, of a coccolith.
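The retardation-to-weight chain underlying such methods can be sketched as follows. This is a simplified illustration, not the paper's calibrated procedure: it assumes the maximum birefringence of calcite (~0.172) applies regardless of crystal orientation, which is exactly the kind of approximation the calibration-target approach is designed to control.

```python
def coccolith_mass_pg(area_um2, retardation_nm, birefringence=0.172,
                      density_pg_per_um3=2.71):
    """Mass (pg) of a thin calcite particle from polarized-light retardation:
    thickness = retardation / birefringence, then
    mass = area * thickness * density. Constants are nominal calcite values
    (1 g/cm^3 = 1 pg/um^3, so calcite density is 2.71 pg/um^3)."""
    thickness_um = (retardation_nm / 1000.0) / birefringence
    return area_um2 * thickness_um * density_pg_per_um3
```

For instance, a 10 μm² particle showing 172 nm of retardation corresponds under these assumptions to a 1 μm thickness and about 27 pg of calcite.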
Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)
1995-01-01
It has been claimed that either oculomotor or static-depth cues provide the signals about self-rotation necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile range) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
Correa, John B; Apolzan, John W; Shepard, Desti N; Heil, Daniel P; Rood, Jennifer C; Martin, Corby K
2016-07-01
Activity monitors such as the Actical accelerometer, the Sensewear armband, and the Intelligent Device for Energy Expenditure and Activity (IDEEA) are commonly validated against gold standards (e.g., doubly labeled water, or DLW) to determine whether they accurately measure total daily energy expenditure (TEE) or activity energy expenditure (AEE). However, little research has assessed whether these parameters or others (e.g., posture allocation) predict body weight change over time. The aims of this study were to (i) test whether estimated energy expenditure or posture allocation from the devices was associated with weight change during and following a low-calorie diet (LCD) and (ii) compare free-living TEE and AEE predictions from the devices against DLW before weight change. Eighty-seven participants from 2 clinical trials wore 2 of the 3 devices simultaneously for 1 week of a 2-week DLW period. Participants then completed an 8-week LCD and were weighed at the start and end of the LCD and 6 and 12 months after the LCD. More time spent walking at baseline, measured by the IDEEA, significantly predicted greater weight loss during the 8-week LCD. Measures of posture allocation demonstrated medium effect sizes in their relationships with weight change. Bland-Altman analyses indicated that the Sensewear and the IDEEA accurately estimated TEE, and the IDEEA accurately measured AEE. The results suggest that the ability of energy expenditure and posture allocation to predict weight change is limited, and the accuracy of TEE and AEE measurements varies across activity monitoring devices, with multi-sensor monitors demonstrating stronger validity. PMID:27270210
Development and validation of GFR-estimating equations using diabetes, transplant and weight
Stevens, Lesley A.; Schmid, Christopher H.; Zhang, Yaping L.; Coresh, Josef; Manzi, Jane; Landis, Richard; Bakoush, Omran; Contreras, Gabriel; Genuth, Saul; Klintmalm, Goran B.; Poggio, Emilio; Rossing, Peter; Rule, Andrew D.; Weir, Matthew R.; Kusek, John; Greene, Tom; Levey, Andrew S.
2010-01-01
Background. We have reported a new equation (CKD-EPI equation) that reduces bias and improves accuracy for GFR estimation compared to the MDRD study equation while using the same four basic predictor variables: creatinine, age, sex and race. Here, we describe the development and validation of this equation as well as other equations that incorporate diabetes, transplant and weight as additional predictor variables. Methods. Linear regression was used to relate log-measured GFR (mGFR) to sex, race, diabetes, transplant, weight, various transformations of creatinine and age with and without interactions. Equations were developed in a pooled database of 10 studies [2/3 (N = 5504) for development and 1/3 (N = 2750) for internal validation], and final model selection occurred in 16 additional studies [external validation (N = 3896)]. Results. The mean mGFR was 68, 67 and 68 ml/min/1.73 m² in the development, internal validation and external validation datasets, respectively. In external validation, an equation that included a linear age term and spline terms in creatinine to account for a reduction in the magnitude of the slope at low serum creatinine values exhibited the best performance (bias = 2.5, RMSE = 0.250) among models using the four basic predictor variables. Addition of terms for diabetes and transplant did not improve performance. Equations with weight showed a small improvement in the subgroup with BMI <20 kg/m². Conclusions. The CKD-EPI equation, based on creatinine, age, sex and race, has been validated and is more accurate than the MDRD study equation. The addition of weight, diabetes and transplant does not significantly improve equation performance. PMID:19793928
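The "spline terms in creatinine" structure described above is visible in the published 2009 CKD-EPI creatinine equation, which can be sketched as follows. The constants are quoted from the literature from memory and should be checked against the original publication; the weight/diabetes/transplant variants developed in this paper are not implemented here.

```python
def ckd_epi_2009(scr_mg_dl, age, female, black):
    """2009 CKD-EPI creatinine equation (ml/min/1.73 m^2). The two-piece
    spline in creatinine is expressed via the min/max terms around the
    knot kappa, giving a shallower slope at low serum creatinine."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

At the knot (creatinine equal to kappa) both spline terms equal one, so the estimate reduces to 141 x 0.993^age times the sex/race factors.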
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
48 CFR 52.247-20 - Estimated Quantities or Weights for Evaluation of Offers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Weights for Evaluation of Offers. 52.247-20 Section 52.247-20 Federal Acquisition Regulations System... Text of Provisions and Clauses 52.247-20 Estimated Quantities or Weights for Evaluation of Offers. As... transportation-related services when quantities or weights of shipments between each origin and destination...
48 CFR 52.247-20 - Estimated Quantities or Weights for Evaluation of Offers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Weights for Evaluation of Offers. 52.247-20 Section 52.247-20 Federal Acquisition Regulations System... Text of Provisions and Clauses 52.247-20 Estimated Quantities or Weights for Evaluation of Offers. As... transportation-related services when quantities or weights of shipments between each origin and destination...
Operational Weight Estimations of Commercial Jet Transport Aircraft
NASA Technical Reports Server (NTRS)
Anderson, Joseph L.
1972-01-01
In evaluating current or proposed commercial transport airplanes, no ready means has been available for determining weights so that airplanes within this class can be compared. This paper describes the development of such comparative tools and presents them. The major design characteristics of current American jet transport airplanes were collected, and these data were correlated by means of regression analysis to develop weight relationships for these airplanes as functions of their operational requirements. The characteristics of 23 airplanes were assembled and examined in terms of the effects of the number of people carried, the cargo load, and the operating range. These airplane characteristics were correlated for the airplanes in one of three subclasses: the small twin-engine jet transports, the conventional three- and four-engine jets, and the new wide-body jets.
Imani, Farsad; Karimi Rouzbahani, Hamid Reza; Goudarzi, Mehrdad; Tarrahi, Mohammad Javad; Ebrahim Soltani, Alireza
2016-01-01
Background: During anesthesia, continuous body temperature monitoring is essential, especially in children. Anesthesia can increase the risk of body temperature loss by three to four times, and hypothermia in children results in increased morbidity and mortality. Since the measurement points of the core body temperature are not easily accessible, near-core sites, such as the rectum, are used. Objectives: The purpose of this study was to measure the skin temperature over the carotid artery and compare it with the rectal temperature, in order to propose a model for accurate estimation of near-core body temperature. Patients and Methods: In total, 124 patients within the age range of 2 - 6 years, undergoing elective surgery, were selected. The temperatures of the rectum and of the skin over the carotid artery were measured. The patients were then randomly divided into two groups (each including 62 subjects), namely the modeling group (MG) and the validation group (VG). First, in the modeling group, the average temperatures of the rectum and of the skin over the carotid artery were measured separately, and the appropriate model was determined according to the significance of the model's coefficients. The obtained model was then used to predict the rectal temperature in the second group (VG). The correlation of the predicted values with the real values (the measured rectal temperatures) in the second group was investigated, and the difference between the average values of the two groups was tested for significance. Results: In the modeling group, the average rectal and carotid temperatures were 36.47 ± 0.54°C and 35.45 ± 0.62°C, respectively. The final model was: Rectal temperature = 0.561 × carotid temperature + 16.583. The predicted values were calculated from the regression model and compared with the measured rectal values, showing no significant difference (P = 0.361). Conclusions: The present study was the first in which the rectal temperature was compared with that of the skin over the carotid artery.
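The reported regression model can be applied directly. As a sanity check, the modeling group's mean carotid skin temperature (35.45 °C) maps almost exactly onto its mean rectal temperature (36.47 °C):

```python
# The study's fitted model: predict rectal temperature (degrees C)
# from the temperature of the skin over the carotid artery.
def predict_rectal_temp(carotid_temp_c):
    """Rectal temperature estimate per the reported regression model."""
    return 0.561 * carotid_temp_c + 16.583

# Mean carotid skin temperature from the modeling group
predicted = predict_rectal_temp(35.45)   # ~36.47 degrees C
```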
2012-01-01
Background Few equations have been developed in veterinary medicine, compared to human medicine, to predict body composition. The present study evaluated the influence of weight loss on biometry (BIO), bioimpedance analysis (BIA) and ultrasonography (US) in cats, proposing equations to estimate fat mass (FM) and lean mass (LM), with dual energy x-ray absorptiometry (DXA) as the reference method. Sixteen gonadectomized obese cats (8 males and 8 females) in a weight loss program were used. DXA, BIO, BIA and US were performed in the obese state (T0; obese animals), after 10% weight loss (T1) and after 20% weight loss (T2). Stepwise regression was used to analyze the relationship between the dependent variables (FM, LM) determined by DXA and the independent variables obtained by BIO, BIA and US. The best models were evaluated by simple regression analysis, and the predicted means were compared with those determined by DXA to verify the accuracy of the equations. Results The independent variables determined by BIO, BIA and US that best correlated (p < 0.005) with the dependent variables (FM and LM) were BW (body weight), TC (thoracic circumference), PC (pelvic circumference), R (resistance) and SFLT (subcutaneous fat layer thickness). Using Mallows' Cp statistic, the p value and r2, 19 equations were selected (12 for FM, 7 for LM); however, only 7 equations accurately predicted the FM, and one the LM, of cats. Conclusions The two-variable equations are preferable in the clinical routine because they are effective and simple, and offer an alternative method to estimate body composition. To estimate lean mass, equations using body weight together with biometric measures can be proposed; to estimate fat mass, equations using body weight together with bioimpedance analysis can be proposed. PMID:22781317
Information Weighted Consensus for Distributed Estimation in Vision Networks
ERIC Educational Resources Information Center
Kamal, Ahmed Tashrif
2013-01-01
Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…
The weight of nations: an estimation of adult human biomass
2012-01-01
Background The energy requirement of species at each trophic level in an ecological pyramid is a function of the number of organisms and their average mass. Regarding human populations, although considerable attention is given to estimating the number of people, much less is given to estimating average mass, despite evidence that average body mass is increasing. We estimate global human biomass, its distribution by region and the proportion of biomass due to overweight and obesity. Methods For each country we used data on body mass index (BMI) and height distribution to estimate average adult body mass. We calculated total biomass as the product of population size and average body mass. We estimated the percentage of the population that is overweight (BMI > 25) and obese (BMI > 30) and the biomass due to overweight and obesity. Results In 2005, global adult human biomass was approximately 287 million tonnes, of which 15 million tonnes were due to overweight (BMI > 25), a mass equivalent to that of 242 million people of average body mass (5% of global human biomass). Biomass due to obesity was 3.5 million tonnes, the mass equivalent of 56 million people of average body mass (1.2% of human biomass). North America has 6% of the world population but 34% of biomass due to obesity. Asia has 61% of the world population but 13% of biomass due to obesity. One tonne of human biomass corresponds to approximately 12 adults in North America and 17 adults in Asia. If all countries had the BMI distribution of the USA, the increase in human biomass of 58 million tonnes would be equivalent in mass to an extra 935 million people of average body mass, and have energy requirements equivalent to that of 473 million adults. Conclusions Increasing population fatness could have the same implications for world food energy demands as an extra half a billion people living on the earth. PMID:22709383
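The mass-equivalence statements in this abstract can be checked with simple arithmetic; each implies a global average adult body mass of roughly 62 kg.

```python
# Checking the mass-equivalence arithmetic reported in the abstract.
TONNE = 1000.0  # kg per tonne

# 15 million tonnes of biomass due to overweight = 242 million people
avg_mass_from_overweight = (15e6 * TONNE) / 242e6   # ~62 kg per adult

# 3.5 million tonnes of biomass due to obesity = 56 million people
avg_mass_from_obesity = (3.5e6 * TONNE) / 56e6      # 62.5 kg per adult

# "One tonne of human biomass corresponds to approximately 12 adults
# in North America and 17 adults in Asia."
avg_mass_north_america = TONNE / 12                 # ~83.3 kg per adult
avg_mass_asia = TONNE / 17                          # ~58.8 kg per adult
```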
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
NASA Technical Reports Server (NTRS)
Mullen, J., Jr.
1978-01-01
The implementation of the changes to the program for Wing Aeroelastic Design and the development of a program to estimate aircraft fuselage weights are described. The equations to implement the modified planform description, the stiffened panel skin representation, the trim loads calculation, and the flutter constraint approximation are presented. A comparison of the wing model with the actual F-5A weight and material distributions and loads is given. The equations and program techniques used for the estimation of aircraft fuselage weights are described. These equations were incorporated as a computer code, and the weight predictions of this program are compared with data from the C-141.
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
Schneider, Iris K.; Parzuchowski, Michal; Wojciszke, Bogdan; Schwarz, Norbert; Koole, Sander L.
2015-01-01
Previous work suggests that perceived importance of an object influences estimates of its weight. Specifically, important books were estimated to be heavier than non-important books. However, the experimental set-up of these studies may have suffered from a potential confound and findings may be confined to books only. Addressing this, we investigate the effect of importance on weight estimates by examining whether the importance of information stored on a data storage device (USB-stick or portable hard drive) can alter weight estimates. Results show that people thinking a USB-stick holds important tax information (vs. expired tax information vs. no information) estimate it to be heavier (Experiment 1) compared to people who do not. Similarly, people who are told a portable hard drive holds personally relevant information (vs. irrelevant), also estimate the drive to be heavier (Experiments 2A,B). PMID:25620942
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.
2007-09-01
BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air-polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The model's high predictive accuracy is due to its accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of the calculated cross-section data used in our model with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.
Application of parametric weight and cost estimating relationships to future transport aircraft
NASA Technical Reports Server (NTRS)
Beltramo, M. N.; Morris, M. A.; Anderson, J. L.
1979-01-01
A model comprising system-level weight and cost estimating relationships (CERs) for transport aircraft is presented. To determine the production cost of a future aircraft, its weight is first estimated from performance parameters, and the cost is then estimated as a function of weight. For initial evaluation, CERs were applied to the actual system weights of six aircraft (3 military and 3 commercial) with mean empty weights ranging from 30,000 to 300,000 lb. The resulting cost estimates were compared with actual costs; the average absolute error was only 4.3%. The model was then applied to five aircraft still in the design phase (Boeing 757, 767 and 777, and BAC HS146-100 and HS146-200). While the estimates for the 757 and 767 are within 2 to 3 percent of their assumed break-even costs, it is recognized that these are very sensitive to the validity of the estimated weights, the inflation factor, the amount assumed for nonrecurring costs, etc., and it is suggested that the model be used in conjunction with other information such as RDT&E cost estimates and market forecasts. The model will help NASA evaluate new technologies and the production costs of future aircraft.
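A common functional form for such cost-estimating relationships is a power law in weight, fitted as a straight line in log-log space. The sketch below uses made-up weights and costs purely for illustration; the report's actual CER coefficients and data are not given here.

```python
import numpy as np

# Illustrative cost-estimating relationship: cost = a * weight^b,
# fit as a straight line in log-log space. The weight/cost pairs
# below are hypothetical placeholders, not data from the report.
weights_lb = np.array([30e3, 60e3, 100e3, 150e3, 300e3])
costs_musd = np.array([5.0, 8.5, 12.0, 16.0, 26.0])

b, log_a = np.polyfit(np.log(weights_lb), np.log(costs_musd), 1)
a = np.exp(log_a)

def cer_cost(weight_lb):
    """Estimated production cost (M$) for a given weight, per the fitted CER."""
    return a * weight_lb ** b
```

An exponent b between 0 and 1 encodes the usual economy of scale: cost grows with weight, but less than proportionally.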
Zhu, Fangqiang; Hummer, Gerhard
2012-01-01
The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this paper, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimal allocation of the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354
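A minimal numerical sketch of the traditional fixed-point iteration discussed above (not the paper's superlinear likelihood optimizer) is shown below, on synthetic two-window umbrella-sampling histograms; all parameters are illustrative.

```python
import numpy as np

# Synthetic umbrella-sampling setup: two harmonic bias windows on a
# 1-D reaction coordinate.
beta = 1.0
x = np.linspace(-2.0, 2.0, 81)                    # bin centers
centers = np.array([-0.5, 0.5])                   # window centers
k_spring = 10.0
U = 0.5 * k_spring * (x[None, :] - centers[:, None]) ** 2  # bias U_i(x)

# Idealized histogram counts: each window's samples follow the biased
# Boltzmann distribution for an assumed true free energy F(x) = x^2/2.
true_F = 0.5 * x ** 2
counts = np.exp(-beta * (true_F[None, :] + U))
counts *= 1000.0 / counts.sum(axis=1, keepdims=True)  # 1000 samples/window
N = counts.sum(axis=1)                            # samples per window

# Fixed-point (direct) iteration of the coupled WHAM equations
f = np.zeros(2)                                   # window free energies
for _ in range(5000):
    denom = np.sum(N[:, None] * np.exp(beta * (f[:, None] - U)), axis=0)
    p = counts.sum(axis=0) / denom                # unbiased distribution
    f_new = -np.log(np.sum(p[None, :] * np.exp(-beta * U), axis=1)) / beta
    f_new -= f_new[0]                             # fix the gauge: f_0 = 0
    if np.max(np.abs(f_new - f)) < 1e-12:
        break
    f = f_new

F_est = -np.log(p) / beta                         # recovered free energy profile
```

With these idealized (noise-free) histograms the iteration recovers the assumed free energy profile exactly, up to an additive constant. The paper's point is that this direct iteration can converge slowly; maximizing the equivalent likelihood with a quasi-Newton optimizer reaches the same fixed point faster.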
Reference Models for Structural Technology Assessment and Weight Estimation
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd
2005-01-01
Previously the Exploration Concepts Branch of NASA Langley Research Center has developed techniques for automating the preliminary design level of launch vehicle airframe structural analysis for purposes of enhancing historical regression based mass estimating relationships. This past work was useful and greatly reduced design time, however its application area was very narrow in terms of being able to handle a large variety in structural and vehicle general arrangement alternatives. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA based JAVA processing procedures and associated process control classes coupled with the general utility of Loft and HSLoad make it possible to create generic program template files for analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, wings, through full air and space vehicle general arrangements.
Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.
Technology Transfer Automated Retrieval System (TEKTRAN)
Aims: To simplify the determination of the nuclear condition of the pathogenic Rhizoctonia, which currently needs to be performed either using two fluorescent dyes, thus is more costly and time-consuming, or using only one fluorescent dye, and thus less accurate. Methods and Results: A red primary ...
Precision of sugarcane biomass estimates in pot studies using fresh and dry weights
Technology Transfer Automated Retrieval System (TEKTRAN)
Sugarcane (Saccharum spp.) field studies generally report fresh weight (FW) rather than dry weight (DW) due to logistical difficulties in drying large amounts of biomass. Pot studies often measure biomass of young plants with DW under the assumption that DW provides a more precise estimate of treatm...
Empirical expressions for estimating length and weight of axial-flow components of VTOL powerplants
NASA Technical Reports Server (NTRS)
Sagerser, D. A.; Lieblein, S.; Krebs, R. P.
1971-01-01
Simplified equations are presented for estimating the length and weight of major powerplant components of VTOL aircraft. The equations were developed from correlations of lift and cruise engine data. Components involved include fan, fan duct, compressor, combustor, turbine, structure, and accessories. Comparisons of actual and calculated total engine weights are included for several representative engines.
Shen, Yan; Lou, Shuqin; Wang, Xin
2014-03-20
The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters. PMID:24663461
A method to estimate weight and dimensions of large and small gas turbine engines
NASA Technical Reports Server (NTRS)
Onat, E.; Klees, G. W.
1979-01-01
A computerized method was developed to estimate the weight and envelope dimensions of large and small gas turbine engines to within ±5% to ±10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis, are discussed.
On estimating mean lifetimes by a weighted sum of lifetime measurements
NASA Astrophysics Data System (ADS)
Prosper, Harrison Bertrand
1987-10-01
Given N lifetime measurements an estimate of the mean lifetime can be obtained from a weighted sum of these measurements. We derive exact expressions for the probability density function, the moment-generating function, and the cumulative distribution function for the weighted sum. We indicate how these results might be used in the estimation of particle lifetimes. The probability distribution function of Yost for the distribution of lifetime measurements with finite measurement error is our starting point.
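A common concrete choice for such a weighted sum is the inverse-variance weighting sketched below. This is an illustrative special case, assumed for the example, not necessarily the weighting analyzed in the paper.

```python
import numpy as np

def weighted_mean_lifetime(t, sigma):
    """Inverse-variance weighted estimate of a mean lifetime.

    t     : individual lifetime measurements
    sigma : their (finite) measurement errors
    Returns (weighted mean, its standard error).
    """
    t = np.asarray(t, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2   # weights w_i = 1/sigma_i^2
    tau = np.sum(w * t) / np.sum(w)                 # weighted sum estimate
    err = 1.0 / np.sqrt(np.sum(w))                  # naive standard error
    return tau, err
```

For equal errors this reduces to the ordinary sample mean; with unequal errors, the less precise measurements are down-weighted.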
Technical note: tree truthing: how accurate are substrate estimates in primate field studies?
Bezanson, Michelle; Watts, Sean M; Jobin, Matthew J
2012-04-01
Field studies of primate positional behavior typically rely on ground-level estimates of substrate size, angle, and canopy location. These estimates potentially influence the identification of positional modes by the observer recording behaviors. In this study we aim to test ground-level estimates against direct measurements of support angles, diameters, and canopy heights in trees at La Suerte Biological Research Station in Costa Rica. After reviewing methods that have been used by past researchers, we provide data collected within trees that are compared to estimates obtained from the ground. We climbed five trees and measured 20 supports. Four observers collected measurements of each support from different locations on the ground. Diameter estimates varied from the direct tree measures by 0-28 cm (Mean: 5.44 ± 4.55). Substrate angles varied by 1-55° (Mean: 14.76 ± 14.02). Height in the tree was best estimated using a clinometer, as estimates made with a two-meter reference placed by the tree varied by 3-11 meters (Mean: 5.31 ± 2.44). We determined that the best support size estimates were those generated relative to the size of the focal animal and divided into broader categories. Support angles were best estimated in 5° increments and then checked using a Haglöf clinometer in combination with a laser pointer. We conclude that three major factors should be addressed when estimating support features: observer error (e.g., experience and distance from the target), support deformity, and how support size and angle influence the positional mode selected by a primate individual. PMID:22371099
Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2016-06-01
The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize the hydraulic actuators, the performances of the linear/linearization based techniques for the state estimation are strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent-Riccati-Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted and comparisons with the largely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE based technique for applications characterized by not negligible nonlinearities such as dead zone and frictions.
ACCURATE ESTIMATIONS OF STELLAR AND INTERSTELLAR TRANSITION LINES OF TRIPLY IONIZED GERMANIUM
Dutta, Narendra Nath; Majumder, Sonjoy
2011-08-10
In this paper, we report weighted oscillator strengths of E1 transitions and transition probabilities of E2 transitions among different low-lying states of triply ionized germanium, using the highly correlated relativistic coupled-cluster (RCC) method. Given the abundance of Ge IV in the solar system, planetary nebulae, white dwarf stars, etc., the study of such transitions is important from an astrophysical point of view. The weighted oscillator strengths of the E1 transitions are presented in both length and velocity gauge forms to check the accuracy of the calculations. We find excellent agreement between calculated and experimental excitation energies. Oscillator strengths of a few transitions, where studied in the literature via other theoretical and experimental approaches, are compared with our RCC calculations.
Development of weight and cost estimates for lifting surfaces with active controls
NASA Technical Reports Server (NTRS)
Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.
1976-01-01
Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.
Development of a conceptual flight vehicle design weight estimation method library and documentation
NASA Astrophysics Data System (ADS)
Walker, Andrew S.
The state of the art in estimating the volumetric size and mass of flight vehicles is held today by an elite group of engineers in the aerospace conceptual design industry. This is not a skill readily accessible or taught in academia. To estimate flight vehicle mass properties, many aerospace engineering students are encouraged to read the latest design textbooks, learn how to use a few basic statistical equations, and plunge into the details of parametric mass properties analysis. Specifications for, and a prototype of, a standardized engineering "tool-box" of conceptual and preliminary design weight estimation methods were developed to manage the growing and ever-changing body of weight estimation knowledge. This also bridges the gap in mass properties education for aerospace engineering students, and the Weight Method Library will serve as a living document for future aerospace students. This "tool-box" consists of a weight estimation method bibliography containing unclassified, open-source literature for the conceptual and preliminary flight vehicle design phases. Transport aircraft validation cases have been applied to each entry in the AVD Weight Method Library in order to provide a sense of context and applicability for each method. The weight methodology validation results indicate consensus and agreement among the individual methods. This generic specification of a method library will be applicable for use by other disciplines within the AVD Lab, post-graduate design labs, or engineering design professionals.
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
ERIC Educational Resources Information Center
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Univariate and Default Standard Unit Biases in Estimation of Body Weight and Caloric Content
ERIC Educational Resources Information Center
Geier, Andrew B.; Rozin, Paul
2009-01-01
College students estimated the weight of adult women from either photographs or a live presentation by a set of models and estimated the calories in 1 of 2 actual meals. The 2 meals had the same items, but 1 had larger portion sizes than the other. The results suggest: (a) Judgments are biased toward transforming the example in question to the…
Accurate estimate of α variation and isotope shift parameters in Na and Mg+
NASA Astrophysics Data System (ADS)
Sahoo, B. K.
2010-12-01
We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.
Essink-Bot, Marie-Louise; Pereira, Joaquin; Packer, Claire; Schwarzinger, Michael; Burstrom, Kristina
2002-01-01
OBJECTIVE: To investigate the sources of cross-national variation in disability-adjusted life-years (DALYs) in the European Disability Weights Project. METHODS: Disability weights for 15 disease stages were derived empirically in five countries by means of a standardized procedure, and the cross-national differences in visual analogue scale (VAS) scores were analysed. For each country the burden of dementia in women, used as an illustrative example, was estimated in DALYs. An analysis was performed of the relative effects of cross-national variations in demography, epidemiology and disability weights on DALY estimates. FINDINGS: Cross-national comparison of VAS scores showed almost identical ranking orders. After standardization for population size and age structure of the populations, the DALY rates per 100,000 women ranged from 1050 in France to 1404 in the Netherlands. Because of uncertainties in the epidemiological data, the extent to which these differences reflected true variation between countries was difficult to estimate. The use of European rather than country-specific disability weights did not lead to a significant change in the burden-of-disease estimates for dementia. CONCLUSIONS: Sound epidemiological data are the first requirement for burden-of-disease estimation and relevant between-country comparisons. DALY estimates for dementia were relatively insensitive to differences in disability weights between European countries. PMID:12219156
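As a toy illustration of how disability weights enter a burden-of-disease calculation (the years-lived-with-disability component of a DALY), with all numbers hypothetical rather than taken from the project:

```python
# Hypothetical YLD computation for one disease stage:
# YLD = prevalent cases * disability weight * average stage duration.
cases = 10_000
disability_weight = 0.66     # assumed weight for a severe stage (illustrative)
duration_years = 1.0
yld = cases * disability_weight * duration_years   # person-years lost

# Expressed as a rate per 100,000 women, the unit used in the abstract
population = 5_000_000
yld_rate_per_100k = yld / population * 100_000
```

Cross-national differences in any of the three factors (epidemiology via cases and duration, or the elicited disability weight) propagate linearly into the YLD, which is why the abstract separates their relative effects.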
Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2006-01-01
Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.
Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle
NASA Technical Reports Server (NTRS)
VanEepoel, John; Thienel, Julie; Sanner, Robert M.
2006-01-01
In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
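The hash-table binning idea can be sketched in a few lines: key each bin by its integer grid coordinates, so only occupied bins consume memory. The function name and parameters below are illustrative, not taken from the paper's C++ implementation.

```python
import random
from collections import defaultdict

def bash_density(points, bin_width):
    """Bin points into a hash table keyed by integer bin coordinates.

    Only occupied bins are stored, so memory grows with the number of
    occupied bins rather than exponentially with the dimension.
    """
    counts = defaultdict(int)
    for p in points:
        key = tuple(int(x // bin_width) for x in p)
        counts[key] += 1
    n = len(points)
    volume = bin_width ** len(points[0])
    # Density estimate in each occupied bin: fraction of points / bin volume.
    density = {k: c / (n * volume) for k, c in counts.items()}
    return density, counts

random.seed(0)
pts = [tuple(random.gauss(0.0, 1.0) for _ in range(5)) for _ in range(1000)]
density, counts = bash_density(pts, 0.5)
```

In 5 dimensions a dense 20-bins-per-axis array would need 20^5 = 3.2 million cells; the hash table stores only the bins the 1000 points actually occupy.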
Spectral estimation from laser scanner data for accurate color rendering of objects
NASA Astrophysics Data System (ADS)
Baribeau, Rejean
2002-06-01
Estimation methods are studied for the recovery of the spectral reflectance across the visible range from the sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are assessed using the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
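The principal-component approach can be sketched as: learn a low-dimensional basis from training reflectances, then solve for the basis weights from the three laser samples. Everything below (the synthetic training set, grid, and function names) is illustrative, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.arange(400, 701, 10)                # visible range on a 10 nm grid

# Synthetic training reflectances: random mixtures of three broad Gaussians.
centers = np.array([450.0, 550.0, 650.0])
basis = np.exp(-((wl[:, None] - centers) / 60.0) ** 2)   # shape (31, 3)
train = (basis @ rng.uniform(0, 1, (3, 200))).T          # 200 spectra

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = vt[:3].T                              # first three principal components

# Sensing at three laser wavelengths (nearest grid points to 455/540/610 nm).
idx = [int(np.argmin(np.abs(wl - w))) for w in (455, 540, 610)]

def recover(samples3):
    """Estimate the full spectrum from three samples via the PC basis."""
    w = np.linalg.solve(pcs[idx], samples3 - mean[idx])
    return mean + pcs @ w

true = train[0]
est = recover(true[idx])
rms = float(np.sqrt(np.mean((est - true) ** 2)))
```

Because the synthetic spectra here lie exactly in a three-dimensional space, the recovery is essentially exact; real reflectances only approximately do, which is why the paper reports a residual ΔE94 error.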
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1985-01-01
Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. Discussing the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans requires a massive dataset covering a wide area. Several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods for determining highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove otherwise imperceptible contaminants and yield reliably accurate ages. To evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and thus usable in chronological research alongside recent ages. Here, we introduce the methodological frameworks and archaeological applications.
Zhang, Peng; Zhou, Ning; Abdollahi, Ali
2013-09-10
A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
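The Least Median of Squares estimator minimizes the median squared residual instead of the sum, which lets it tolerate up to roughly half the points being outliers. A minimal random-sampling sketch on a synthetic line (not the paper's colonoscopy pipeline; function names are ours):

```python
import random

def fit_ls(pts):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def fit_lms(pts, trials=500, seed=1):
    """Least Median of Squares: among random 2-point lines, keep the one
    whose median squared residual over all points is smallest."""
    rnd = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(trials):
        (x1, y1), (x2, y2) = rnd.sample(pts, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        res = sorted((y - a * x - b) ** 2 for x, y in pts)
        med = res[len(res) // 2]
        if med < best_med:
            best, best_med = (a, b), med
    return best

rnd = random.Random(0)
pts = [(float(x), 2 * x + 1 + rnd.gauss(0, 0.05)) for x in range(30)]
pts += [(float(x), 40.0) for x in range(5)]    # gross outliers
a_ls, b_ls = fit_ls(pts)                       # pulled off by the outliers
a_lms, b_lms = fit_lms(pts)                    # stays near slope 2, intercept 1
```

The same idea carries over to egomotion estimation: each random sample proposes motion parameters, and the median residual over all flow vectors scores the proposal.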
How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?
Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.
2010-01-01
We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774
Accurate Angle Estimator for High-Frame-Rate 2-D Vector Flow Imaging.
Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Hansen, Kristoffer Lindskov; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt
2016-06-01
This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, a 360° uniform behavior on the angle estimation is observed with a median angle bias of 1.01° and a median angle SD of 1.8°. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5° with SDs around 1°. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles. PMID:27093598
NASA Astrophysics Data System (ADS)
Saslow, Wayne M.
2014-04-01
Three common approaches to F = ma (with F and a vectors) are: (1) as an exactly true definition of force F in terms of measured inertial mass m and measured acceleration a; (2) as an exactly true axiom relating measured values of a, F and m; and (3) as an imperfect but accurately true physical law relating measured a to measured F, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a and F, where a is normally specified using distance and time as standard units, and F from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate), that balance-scale weight W is proportional to m, and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force, the newton, a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time, the second, a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980
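At its core, ARGO regresses current flu activity on its own recent history plus exogenous search-volume terms. Below is a bare-bones autoregression-with-exogenous-input (ARX) sketch on synthetic data; ARGO itself uses many query terms, L1 regularization, and dynamic retraining, none of which are shown here, and the function names are ours.

```python
import numpy as np

def fit_argo_like(y, x, p=3):
    """Least-squares fit of y[t] = c + sum_i a_i*y[t-i] + b*x[t].

    y: disease-activity series; x: one exogenous search-volume series.
    """
    rows = [np.r_[1.0, y[t - p:t][::-1], x[t]] for t in range(p, len(y))]
    coef, *_ = np.linalg.lstsq(np.array(rows), y[p:], rcond=None)
    return coef

def predict(coef, y_hist, x_now, p=3):
    """One-step-ahead prediction from the last p observations."""
    return float(coef @ np.r_[1.0, y_hist[-p:][::-1], x_now])

# Synthetic demo where search volume drives activity.
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t] + rng.normal(0, 0.01)

coef = fit_argo_like(y, x, p=3)                # order: [c, a1, a2, a3, b]
pred = predict(coef, y[:-1], x[-1], p=3)       # in-sample one-step check
```

Because the synthetic series actually follows the assumed model, the fit recovers the autoregressive coefficient (0.5) and the exogenous weight (0.8) almost exactly.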
Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle
NASA Astrophysics Data System (ADS)
Timinis, Constantinos; Pitris, Costas
2016-03-01
The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained, and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.
Monaco, James P; Madabhushi, Anant
2012-12-01
Many estimation tasks require Bayesian classifiers capable of adjusting their performance (e.g. sensitivity/specificity). In situations where the optimal classification decision can be identified by an exhaustive search over all possible classes, means for adjusting classifier performance, such as probability thresholding or weighting the a posteriori probabilities, are well established. Unfortunately, analogous methods compatible with Markov random fields (i.e. large collections of dependent random variables) are noticeably absent from the literature. Consequently, most Markov random field (MRF) based classification systems typically restrict their performance to a single, static operating point (i.e. a paired sensitivity/specificity). To address this deficiency, we previously introduced an extension of maximum posterior marginals (MPM) estimation that allows certain classes to be weighted more heavily than others, thus providing a means for varying classifier performance. However, this extension is not appropriate for the more popular maximum a posteriori (MAP) estimation. Thus, a strategy for varying the performance of MAP estimators is still needed. Such a strategy is essential for several reasons: (1) the MAP cost function may be more appropriate in certain classification tasks than the MPM cost function, (2) the literature provides a surfeit of MAP estimation implementations, several of which are considerably faster than the typical Markov Chain Monte Carlo methods used for MPM, and (3) MAP estimation is used far more often than MPM. Consequently, in this paper we introduce multiplicative weighted MAP (MWMAP) estimation, achieved via the incorporation of multiplicative weights into the MAP cost function, which allows certain classes to be preferred over others. This creates a natural bias for specific classes, and consequently a means for adjusting classifier performance. Similarly, we show how this multiplicative weighting strategy can be applied to the MPM
Techniques for accurate estimation of net discharge in a tidal channel
Simpson, Michael R.; Bland, Roger
1999-01-01
An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.
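The index velocity calibration procedure studied above fits a rating between the meter's index velocity and the ADCP-measured mean channel velocity, then multiplies by the cross-sectional area to get discharge. A minimal sketch with hypothetical calibration numbers (the paper's comprehensive method additionally accounts for stage-dependent area and error sources):

```python
def fit_rating(index_v, mean_v):
    """Least-squares rating mean_v = a + b * index_v from calibration
    pairs (meter index velocity vs. ADCP-measured mean velocity)."""
    n = len(index_v)
    mx = sum(index_v) / n
    my = sum(mean_v) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(index_v, mean_v))
         / sum((x - mx) ** 2 for x in index_v))
    return my - b * mx, b

def discharge(a, b, index_v, area_m2):
    """Net discharge (m^3/s) = cross-sectional area x rated mean velocity.
    A negative index velocity (flood tide) gives a negative discharge."""
    return area_m2 * (a + b * index_v)

# Hypothetical calibration pairs (m/s) spanning ebb and flood conditions.
idx = [-0.8, -0.4, 0.0, 0.3, 0.7, 1.1]
meas = [-0.74, -0.38, 0.01, 0.29, 0.66, 1.02]
a, b = fit_rating(idx, meas)
q = discharge(a, b, 0.5, 800.0)   # 800 m^2 section, index velocity 0.5 m/s
```

Errors in the rating coefficients a and b propagate directly into every subsequent discharge estimate, which is why calibration error dominated the error budget in this study.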
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1991-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
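The bound described above can be illustrated with a textbook zero-failure calculation, which is an assumption here, not necessarily the report's exact formulation: for a fixed shape parameter β, a (1 − α)-confidence lower bound on reliability from unfailed units tested for times T_i is R_L(t; β) = α^(t^β / Σ T_i^β), and taking the minimum over a range of β removes the need to estimate β.

```python
def weibull_zero_failure_bound(t, test_times, alpha=0.05, betas=None):
    """Lower confidence bound on Weibull reliability at time t from a
    zero-failure test, minimized over a range of shape parameters beta.

    For fixed beta the classical zero-failure bound is
        R_L(t; beta) = alpha ** (t**beta / sum_i T_i**beta);
    minimizing over beta avoids assuming or estimating beta.
    """
    if betas is None:
        betas = [0.5 + 0.05 * i for i in range(91)]   # grid over 0.5 .. 5.0
    def rl(beta):
        return alpha ** (t ** beta / sum(T ** beta for T in test_times))
    return min(rl(b) for b in betas)

# Ten units each survive a 1000-hour test with no failures; bound the
# reliability at 500 hours with 95% confidence, beta-free.
bound = weibull_zero_failure_bound(500.0, [1000.0] * 10)
```

Because the target time is shorter than the test time, the bound here is attained at the low end of the β range; with mixed test times the minimum can fall in the interior, matching the report's unique-global-minimum observation.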
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1990-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2012-04-01
Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the Sudano-Sahelian zone, rain-fed agriculture, which represents 93% of cultivated areas and is the means of support of 70% of the active population, is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models, which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity), have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact of implementing different strategies, such as crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts, on yields, (ii) for early warning systems and (iii) to assess future food security. Yet the successful application of these models depends on the accuracy of their climatic drivers. In the Sudano-Sahelian zone, the quality of precipitation estimates is thus a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as an input for crop models, they determine the performance of the simulated yield; hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based daily rainfall products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and
Plant DNA Barcodes Can Accurately Estimate Species Richness in Poorly Known Floras
Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew
2011-01-01
Background: Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Methodology/Principal Findings: Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies, but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation that in some angiosperm families occurs as an inversion obscuring the monophyly of species. Conclusions/Significance: We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways. PMID:22096501
Lupaşcu, Carmen Alina; Tegolo, Domenico; Trucco, Emanuele
2013-12-01
We present an algorithm estimating the width of retinal vessels in fundus camera images. The algorithm uses a novel parametric surface model of the cross-sectional intensities of vessels, and ensembles of bagged decision trees to estimate the local width from the parameters of the best-fit surface. We report comparative tests with REVIEW, currently the public database of reference for retinal width estimation, containing 16 images with 193 annotated vessel segments and 5066 profile points annotated manually by three independent experts. Comparative tests are reported also with our own set of 378 vessel widths selected sparsely in 38 images from the Tayside Scotland diabetic retinopathy screening programme and annotated manually by two clinicians. We obtain considerably better accuracies compared to leading methods in REVIEW tests and in Tayside tests. An important advantage of our method is its stability (success rate, i.e., meaningful measurement returned, of 100% on all REVIEW data sets and on the Tayside data set) compared to a variety of methods from the literature. We also find that results depend crucially on testing data and conditions, and discuss criteria for selecting a training set yielding optimal accuracy. PMID:24001930
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
Golmakani, Nahid; Khaleghinezhad, Khosheh; Dadgar, Selmeh; Hashempor, Majid; Baharian, Nosrat
2015-01-01
Introduction: In developing countries, hemorrhage accounts for 30% of maternal deaths. Postpartum hemorrhage has been defined as blood loss of around 500 ml or more after completion of the third stage of labor. Most cases of postpartum hemorrhage occur during the first hour after birth. The most common reason for bleeding in the early hours after childbirth is uterine atony. Bleeding during delivery is usually assessed by visual estimation by the midwife, which has a high error rate; however, studies have shown that the use of a standard can improve the estimation. The aim of this research is to compare the estimation of postpartum hemorrhage using the weighting method and the National Guideline for postpartum hemorrhage estimation. Materials and Methods: This descriptive study was conducted on 112 females in the Omolbanin Maternity Department of Mashhad over a six-month period, from November 2012 to May 2013. Accessible sampling was used. The data collection tools were case selection, observation and interview forms. For postpartum hemorrhage estimation, after the third stage of labor was complete, the quantity of bleeding was estimated in the first and second hours after delivery by the midwife in charge, using the National Guideline for vaginal delivery provided by the Maternal Health Office. In addition, after visual estimation using the National Guideline, the sheets under the parturient in the first and second hours after delivery were exchanged and weighed. The data were analyzed using descriptive statistics and the t-test. Results: A significant difference was found between the estimated blood loss based on the weighting method and that based on the National Guideline (weighting method 62.68 ± 16.858 cc vs. National Guideline 45.31 ± 13.484 cc in the first hour after delivery, P = 0.000; weighting method 41.26 ± 10.518 vs. National Guideline 30.24 ± 8.439 in the second hour after delivery, P = 0.000). Conclusions
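The weighting (gravimetric) method compared above amounts to converting the mass gained by the sheets into a volume. A minimal sketch, assuming a blood density of about 1.06 g/ml (a typical literature value, not a figure from this study):

```python
def blood_loss_ml(wet_g, dry_g, density_g_per_ml=1.06):
    """Gravimetric (weighting) estimate of blood loss: weigh sheets or
    pads dry and soaked, and convert the mass gain to a volume."""
    if wet_g < dry_g:
        raise ValueError("wet weight must be at least the dry weight")
    return (wet_g - dry_g) / density_g_per_ml

# A sheet weighing 120 g dry and 186 g soaked gives roughly 62 ml,
# on the order of the study's first-hour weighting-method mean.
loss = blood_loss_ml(186.0, 120.0)
```

The calculation is trivial; the study's point is that this objective conversion consistently exceeds visual estimates, which tend to understate the loss.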
Higher Accurate Estimation of Axial and Bending Stiffnesses of Plates Clamped by Bolts
NASA Astrophysics Data System (ADS)
Naruse, Tomohiro; Shibutani, Yoji
Equivalent stiffnesses of clamped plates must be prescribed not only to evaluate the strength of bolted joints by the “joint diagram” scheme but also to perform structural analyses of practical structures with many bolted joints. We estimated the axial and bending stiffnesses of clamped plates using Finite Element (FE) analyses that take into account the contact conditions on the bearing surfaces and between the plates. FE models were constructed for bolted joints tightened with M8, M10, M12 and M16 bolts and plate thicknesses of 3.2, 4.5, 6.0 and 9.0 mm, and the axial and bending compliances were precisely evaluated. These compliances were compared with those from the VDI 2230 (2003) code, which assumes an equivalent conical compressive stress field in the plate. The code gives an axial stiffness larger by 11% and a bending stiffness larger by 22%, and it cannot be applied to clamped plates of different thicknesses; it therefore yields a lower bolt stress (an unsafe estimate). We modified the vertical angle tangent, tanφ, of the equivalent cone by adding a term in the logarithm of the thickness ratio t1/t2 and fitting to the analysis results. The modified tanφ estimates the axial compliance with an error of -1.5% to 6.8% and the bending compliance with an error of -6.5% to 10%, and it can take the thickness difference into consideration.
Accurate estimation of airborne ultrasonic time-of-flight for overlapping echoes.
Sarabia, Esther G; Llata, Jose R; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P
2013-01-01
In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774
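The correlation-based time-of-flight estimation that the paper compares against simple thresholding can be sketched as a matched filter: correlate the received signal with the transmitted burst and locate the peak. The sampling rate, burst shape, noise level and delay below are illustrative assumptions, not the paper's experimental values:

```python
import numpy as np

fs = 1_000_000                     # sampling rate (Hz), assumed
f0 = 40_000                        # typical air-ultrasound resonance (Hz)
t = np.arange(0, 125e-6, 1 / fs)   # short 5-cycle excitation burst
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

true_delay = 700                   # samples (700 us at 1 MHz), assumed
echo = np.zeros(4096)
echo[true_delay:true_delay + pulse.size] += pulse
echo += 0.05 * np.random.default_rng(1).normal(size=echo.size)

# Matched-filter (cross-correlation) estimate of the time of flight:
# the lag of the correlation peak gives the delay in samples.
corr = np.correlate(echo, pulse, mode="valid")
tof_samples = int(np.argmax(corr))
print(f"estimated ToF: {tof_samples / fs * 1e6:.0f} us")
```

Unlike a fixed amplitude threshold, the correlation peak is robust to noise; separating overlapping echoes, as in the paper, additionally requires modeling each echo's contribution to the correlation.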
NASA Astrophysics Data System (ADS)
Gang, Jin; Yiqi, Zhuang; Yue, Yin; Miao, Cui
2015-03-01
A novel digitally controlled automatic gain control (AGC) loop circuitry for the global navigation satellite system (GNSS) receiver chip is presented. The entire AGC loop contains a programmable gain amplifier (PGA), an AGC circuit and an analog-to-digital converter (ADC), implemented and measured in a 0.18 μm complementary metal-oxide-semiconductor (CMOS) process. A binary-weighted approach is proposed in the PGA to achieve wide dB-linear gain control with small gain error. With binary-weighted cascaded amplifiers for coarse gain control and a parallel binary-weighted trans-conductance amplifier array for fine gain control, the PGA provides a 64 dB dynamic range from -4 to 60 dB in 1.14 dB gain steps with a gain error of less than 0.15 dB. Based on the Gaussian noise statistics of the GNSS signal, a digital AGC circuit is also proposed with low area and fast settling. The feed-backward AGC loop occupies an area of 0.27 mm² and settles in less than 165 μs while consuming an average current of 1.92 mA at 1.8 V.
Neural network-based visual body weight estimation for drug dosage finding
NASA Astrophysics Data System (ADS)
Pfitzner, Christian; May, Stefan; Nüchter, Andreas
2016-03-01
Body-weight-adapted drug dosages are important for emergency treatments: inaccuracies in body weight estimation may lead to inaccurate drug dosing. This paper describes an improved approach to estimating the body weight of emergency patients in a trauma room, based on images from an RGB-D and a thermal camera. The improvements concern several aspects: fusion of the RGB-D and thermal images eases filtering and segmentation of the patient's body from the background, while robustness and accuracy are gained by an artificial neural network that takes geometric features derived from the sensors, e.g. the patient's volume and shape parameters, as input. Preliminary experiments with 69 patients show an accuracy close to 90 percent with less than 10 percent relative error, and the results are compared with the physician's estimate, the patient's statement and an established anthropometric method.
NASA Astrophysics Data System (ADS)
Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
The success of dental implant-supported prostheses is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implant model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points on the implant's main axis; (2) a simulated CBCT volume of the known implant model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
An Energy-Efficient Strategy for Accurate Distance Estimation in Wireless Sensor Networks
Tarrío, Paula; Bernardos, Ana M.; Casar, José R.
2012-01-01
In line with recent research efforts to conceive energy-saving protocols and algorithms and power-sensitive network architectures, in this paper we propose a transmission strategy to minimize the energy consumption in a sensor network when using a localization technique based on the measurement of the received signal strength (RSS) or the time of arrival (TOA) of the received signal. In particular, we find the transmission power and the packet transmission rate that jointly minimize the total consumed energy, while ensuring at the same time a desired accuracy in the RSS or TOA measurements. We also propose some corrections to these theoretical results to take into account the effects of shadowing and packet loss in the propagation channel. The proposed strategy is shown to be effective in realistic scenarios, providing energy savings with respect to other transmission strategies while guaranteeing a given accuracy in the distance estimates, which in turn serves to guarantee a desired accuracy in the localization result. PMID:23202218
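RSS-based ranging of the kind this strategy supports is commonly built on a log-distance path-loss model, which relates received power to distance and is trivially inverted. A minimal sketch, with an assumed reference power and path-loss exponent (not values from the paper):

```python
import math

# Log-distance path-loss model (assumed parameters, for illustration):
P0 = -40.0   # RSS at reference distance d0 = 1 m, in dBm
n_pl = 2.5   # path-loss exponent (2 in free space, higher indoors)

def distance_to_rss(d: float) -> float:
    """Forward model: RSS = P0 - 10 * n * log10(d / d0)."""
    return P0 - 10 * n_pl * math.log10(d)

def rss_to_distance(rss_dbm: float) -> float:
    """Invert the log-distance model to estimate range."""
    return 10 ** ((P0 - rss_dbm) / (10 * n_pl))

d_est = rss_to_distance(distance_to_rss(5.0))
print(d_est)  # recovers 5.0 m in the noiseless case
```

In practice shadowing adds noise to each RSS sample; averaging over several packets reduces its variance, which is exactly the accuracy-versus-energy trade-off the paper optimizes.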
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed to apply this technique to gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
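The core idea, iteratively adjusting per-system data weights until the weighted solution is consistent with each subset's actual scatter, can be caricatured with a small weighted least squares example. The two-"system" setup and the residual-variance reweighting rule below are an illustrative simplification, not Lerch's actual subset-solution algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two tracking "systems" observe the same 2-parameter linear model with
# different, unknown noise levels. Starting from uniform weights, each
# subset's weight is recalibrated from its own residual variance.
beta_true = np.array([1.0, -0.5])
X = rng.normal(size=(200, 2))
sigma = np.where(np.arange(200) < 100, 0.1, 1.0)  # system 1 vs system 2
y = X @ beta_true + rng.normal(size=200) * sigma

w = np.ones(200)
for _ in range(10):
    Xw = X * w[:, None]                         # rows scaled by weights
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # weighted least squares
    resid = y - X @ beta
    for grp in (slice(0, 100), slice(100, 200)):
        w[grp] = 1.0 / np.var(resid[grp])       # inverse-variance weight

print(beta)  # close to beta_true, dominated by the precise system
```

After calibration the precise system carries far more weight than the noisy one, and the parameter error estimates implied by the weights match the observed scatter, the same self-consistency requirement the paper imposes.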
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
NASA Technical Reports Server (NTRS)
Martinovic, Zoran N.; Cerro, Jeffrey A.
2002-01-01
This is an interim user's manual for current procedures used in the Vehicle Analysis Branch at NASA Langley Research Center, Hampton, Virginia, for launch vehicle structural subsystem weight estimation based on finite element modeling and structural analysis. The process is intended to complement traditional methods of conceptual and early preliminary structural design, such as the application of empirical weight estimation or the application of classical engineering design equations and criteria on one-dimensional "line" models. Functions of two commercially available software codes are coupled together. Vehicle modeling and analysis are done using SDRC/I-DEAS, and structural sizing is performed with the Collier Research Corp. HyperSizer program.
NASA Astrophysics Data System (ADS)
Tan, Zhiqiang; Xia, Junchao; Zhang, Bin W.; Levy, Ronald M.
2016-01-01
The weighted histogram analysis method (WHAM), including its binless extension, has been developed independently in several different contexts, and is widely used in chemistry, physics, and statistics for computing free energies and expectations from multiple ensembles. However, this method, while statistically efficient, is computationally costly or even infeasible when a large number, hundreds or more, of distributions are studied. We develop a local WHAM method from the perspective of simulations of simulations (SOS), using generalized serial tempering (GST) to resample simulated data from multiple ensembles. The local WHAM equations based on one jump attempt per GST cycle can be solved by optimization algorithms orders of magnitude faster than standard implementations of global WHAM, while yielding free energy estimates of similar accuracy to global WHAM. Moreover, we propose an adaptive SOS procedure for solving local WHAM equations stochastically when multiple jump attempts are performed per GST cycle. Such a stochastic procedure can lead to more accurate estimates of equilibrium distributions than local WHAM with one jump attempt per cycle. The proposed methods are broadly applicable when the original data to be "WHAMMED" are obtained properly by any sampling algorithm, including serial tempering and parallel tempering (replica exchange). To illustrate the methods, we estimated absolute binding free energies and binding energy distributions using the binding energy distribution analysis method from one- and two-dimensional replica exchange molecular dynamics simulations for the beta-cyclodextrin-heptanoate host-guest system. In addition to the computational advantage of handling large datasets, our two-dimensional WHAM analysis also demonstrates that accurate results similar to those from well-converged data can be obtained from simulations for which sampling is limited and not fully equilibrated.
[Research on maize multispectral image accurate segmentation and chlorophyll index estimation].
Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e
2015-01-01
In order to rapidly acquire maize growth information in the field, a non-destructive method of measuring the maize chlorophyll content index was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China; the crop was Zheng-dan 958 planted in an experiment field of about 1 000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system, fixed vertically to the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to represent the chlorophyll content index. Secondly, after image smoothing using an adaptive smoothing filter, the NIR maize image was selected for segmenting the maize leaves from the background, because the gray-level histogram showed a large difference between plant and soil background. The NIR image segmentation algorithm followed preliminary and accurate segmentation steps: (1) The results of the OTSU image segmentation method and a variable threshold algorithm were compared, revealing that the latter was the better one for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation, and dilation and erosion were used to optimize the segmented image. (2) A region labeling algorithm was used to segment corn plants from the soil and weed background with an accuracy of 95.59%. The multi-spectral image of the maize canopy was then accurately segmented in the R, G and B bands separately. Thirdly, image parameters were abstracted based on the segmented visible and NIR images. The average gray
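The global Otsu thresholding that the paper compares against its variable-threshold method can be sketched in a few lines: pick the gray level that maximizes the between-class variance of the histogram. The synthetic NIR-like image below (bright "plant" block on dark "soil") is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 8-bit "NIR" image: bright plant pixels on dark soil
img = rng.normal(60, 10, size=(64, 64))
img[16:48, 16:48] = rng.normal(180, 12, size=(32, 32))  # plant region
img = np.clip(img, 0, 255).astype(np.uint8)

def otsu_threshold(image: np.ndarray) -> int:
    """Global Otsu threshold: maximize between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

thr = otsu_threshold(img)
mask = img > thr
print(thr, mask.mean())  # threshold between the two modes; ~25% plant
```

A variable (local-statistics) threshold, as chosen in the paper, replaces the single global `thr` with one computed per neighborhood, which helps under uneven field illumination.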
The challenges of accurately estimating time of long bone injury in children.
Pickett, Tracy A
2015-07-01
The ability to determine the time an injury occurred can be of crucial significance in forensic medicine and holds special relevance to the investigation of child abuse. However, dating paediatric long bone injury, including fractures, is nuanced by complexities specific to the paediatric population. These challenges include the ability to identify bone injury in a growing or only partially-calcified skeleton, different injury patterns seen within the spectrum of the paediatric population, the effects of bone growth on healing as a separate entity from injury, differential healing rates seen at different ages, and the relative scarcity of information regarding healing rates in children, especially the very young. The challenges posed by these factors are compounded by a lack of consistency in defining and categorizing healing parameters. This paper sets out the primary limitations of existing knowledge regarding estimating timing of paediatric bone injury. Consideration and understanding of the multitude of factors affecting bone injury and healing in children will assist those providing opinion in the medical-legal forum. PMID:26048508
NASA Astrophysics Data System (ADS)
Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick
2009-06-01
Optical measurements of the sound field inside a glass tube, near the material under test, allow not only the reflection and absorption coefficients but also their confidence intervals to be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of testing two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access the time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information (higher-order time correlations than MSMs capture) that is available in every MD trajectory. The NM strategy is insensitive to the fine details of the states used and works well when a fine time-discretization (i.e., a small "lag time") is used. PMID:27340835
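The quantity at stake, the mean first-passage time between two states, can be estimated directly from trajectory data with no Markov assumption at all: simply record how long each realization takes to reach the target. The toy reflected random walk below stands in for an MD trajectory; it is not the protein data analyzed in the paper:

```python
import random
import statistics

random.seed(4)
L = 10  # target state B; each walk starts in state A = 0

def first_passage_time() -> int:
    """Steps for a symmetric +/-1 walk to reach L, reflecting at 0."""
    x, t = 0, 0
    while x < L:
        x = 1 if x == 0 else x + random.choice((-1, 1))
        t += 1
    return t

# Direct, assumption-free MFPT estimate over many realizations
mfpt = statistics.mean(first_passage_time() for _ in range(2000))
print(mfpt)  # theory for this walk: L**2 = 100
```

An MSM built on a coarse state decomposition of the same trajectories can misestimate this quantity when the states are not truly Markovian; the direct estimate has no such bias, which is the paper's point, at the cost of needing enough complete passage events.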
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, along with the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
Estimation of Disability Weights in the General Population of South Korea Using a Paired Comparison.
Ock, Minsu; Ahn, Jeonghoon; Yoon, Seok-Jun; Jo, Min-Woo
2016-01-01
We estimated the disability weights in the South Korean population by using a paired comparison-only model wherein 'full health' and 'being dead' were included as anchor points, without resorting to a cardinal method such as person trade-off. The study was conducted via 2 types of survey: a household survey involving computer-assisted face-to-face interviews and a web-based survey (similar to that of the GBD 2010 disability weight study). With regard to the valuation methods, paired comparison, visual analogue scale (VAS), and standard gamble (SG) were used in the household survey, whereas paired comparison and population health equivalence (PHE) were used in the web-based survey. Accordingly, we described a total of 258 health states, with 'full health' and 'being dead' designated as anchor points. In the analysis, 4 models were considered: a paired comparison-only model; a hybrid model between paired comparison and PHE; a VAS model; and an SG model. A total of 2,728 and 3,188 individuals participated in the household and web-based surveys, respectively. The Pearson correlation coefficients between the disability weights of health states in the GBD 2010 study and those of the current models were 0.802 for Model 2, 0.796 for Model 1, 0.681 for Model 3, and 0.574 for Model 4 (all P-values < 0.001). The discrimination of values according to health state severity was most suitable in Model 1. Based on these results, the paired comparison-only model was selected as the best model for estimating disability weights in South Korea, and for maintaining simplicity in the analysis. Thus, disability weights can be more easily estimated by using paired comparison alone, with 'full health' and 'being dead' included among the health states. As noted in our study, we believe that additional evidence regarding the universality of disability weights can be obtained by using this simplified methodology of estimating disability weights. PMID:27606626
Grosse, Scott D; Chaugule, Shraddha S; Hay, Joel W
2015-01-01
Estimates of preference-weighted health outcomes or health state utilities are needed to assess improvements in health in terms of quality-adjusted life-years. Gains in quality-adjusted life-years are used to assess the cost–effectiveness of prophylactic use of clotting factor compared with on-demand treatment among people with hemophilia, a congenital bleeding disorder. Published estimates of health utilities for people with hemophilia vary, contributing to uncertainty in the estimates of cost–effectiveness of prophylaxis. Challenges in estimating utility weights for the purpose of evaluating hemophilia treatment include selection bias in observational data, difficulty in adjusting for predictors of health-related quality of life and lack of preference-based data comparing adults with lifetime or primary prophylaxis versus no prophylaxis living within the same country and healthcare system. PMID:25585817
Estimation of breed-specific heterosis effects for birth, weaning, and yearling weight in cattle.
Schiermiester, L N; Thallman, R M; Kuehn, L A; Kachman, S D; Spangler, M L
2015-01-01
Heterosis, assumed proportional to expected breed heterozygosity, was calculated for 6834 individuals with birth, weaning and yearling weight records from Cycle VII and advanced generations of the U.S. Meat Animal Research Center (USMARC) Germplasm Evaluation (GPE) project. Breeds represented in these data included: Angus, Hereford, Red Angus, Charolais, Gelbvieh, Simmental, Limousin and Composite MARC III. Heterosis was further estimated by proportions of British × British (B × B), British × Continental (B × C) and Continental × Continental (C × C) crosses and by breed-specific combinations. Model 1 fitted fixed covariates for heterosis within biological types while Model 2 fitted random breed-specific combinations nested within the fixed biological type covariates. Direct heritability estimates (SE) for birth, weaning, and yearling weight for Model 1 were 0.42 (0.04), 0.22 (0.03), and 0.39 (0.05), respectively. The direct heritability estimates (SE) of birth, weaning, and yearling weight for Model 2 were the same as Model 1, except yearling weight heritability was 0.38 (0.05). The B × B, B × C, and C × C heterosis estimates for birth weight were 0.47 (0.37), 0.75 (0.32), and 0.73 (0.54) kg, respectively. The B × B, B × C, and C × C heterosis estimates for weaning weight were 6.43 (1.80), 8.65 (1.54), and 5.86 (2.57) kg, respectively. Yearling weight estimates for B × B, B × C, and C × C heterosis were 17.59 (3.06), 13.88 (2.63), and 9.12 (4.34) kg, respectively. Differences did exist among estimates of breed-specific heterosis for weaning and yearling weight, although the variance component associated with breed-specific heterosis was not significant. These results illustrate that there are differences in breed-specific heterosis and exploiting these differences can lead to varying levels of heterosis among mating plans. PMID:25568356
ProViDE: A software tool for accurate estimation of viral diversity in metagenomic samples
Ghosh, Tarini Shankar; Mohammed, Monzoorul Haque; Komanduri, Dinakar; Mande, Sharmila Shekhar
2011-01-01
Given the absence of universal marker genes in the viral kingdom, researchers typically use BLAST (with stringent E-values) for taxonomic classification of viral metagenomic sequences. Since the majority of metagenomic sequences originate from hitherto unknown viral groups, using stringent E-values results in most sequences remaining unclassified. Furthermore, using less stringent E-values results in a high number of incorrect taxonomic assignments. The SOrt-ITEMS algorithm provides an approach to address the above issues. Based on alignment parameters, SOrt-ITEMS follows an elaborate work-flow for assigning reads originating from hitherto unknown archaeal/bacterial genomes. In SOrt-ITEMS, alignment parameter thresholds were generated by observing patterns of sequence divergence within and across various taxonomic groups belonging to the bacterial and archaeal kingdoms. However, many taxonomic groups within the viral kingdom lack a typical Linnean-like taxonomic hierarchy. In this paper, we present ProViDE (Program for Viral Diversity Estimation), an algorithm that uses a customized set of alignment parameter thresholds, specifically suited for viral metagenomic sequences. These thresholds capture the pattern of sequence divergence and the non-uniform taxonomic hierarchy observed within/across various taxonomic groups of the viral kingdom. Validation results indicate that the percentage of 'correct' assignments by ProViDE is around 1.7 to 3 times higher than that by the widely used similarity based method MEGAN. The misclassification rate of ProViDE is around 3 to 19% (as compared to 5 to 42% by MEGAN), indicating significantly better assignment accuracy. The ProViDE software and a supplementary file (containing supplementary figures and tables referred to in this article) are available for download from http://metagenomics.atc.tcs.com/binning/ProViDE/ PMID:21544173
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from the real digital object image. The method, which flexibly adapts to any form of object image, achieves high measurement accuracy along with low computational complexity, owing to the maximum-likelihood procedure implemented to obtain the best fit instead of a least-squares method, and to the Levenberg-Marquardt algorithm used for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network as a tool for analysing observations and detecting faint moving objects in frames.
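A much simpler subpixel estimator than the paper's maximum-likelihood Gaussian fit, an intensity-weighted centroid over binned photon counts, already illustrates why subpixel accuracy is attainable from a pixelated image. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate an object image: photons scattered around a true subpixel
# centre with a Gaussian point-spread function, then binned into pixels.
true_xy = np.array([12.34, 7.81])
photons = rng.normal(true_xy, 1.2, size=(20_000, 2))
img, _, _ = np.histogram2d(photons[:, 0], photons[:, 1],
                           bins=(24, 16), range=((0, 24), (0, 16)))

# Intensity-weighted centroid: pixel centres weighted by counts.
xs = np.arange(24) + 0.5
ys = np.arange(16) + 0.5
total = img.sum()
x_hat = (img.sum(axis=1) * xs).sum() / total
y_hat = (img.sum(axis=0) * ys).sum() / total
print(x_hat, y_hat)  # close to the true (12.34, 7.81)
```

The paper's maximum-likelihood fit of the full subpixel Gaussian model improves on this centroid, particularly for faint objects on noisy backgrounds, where plain centroiding is biased by background counts.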
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation
ter Horst, Arjan C.; Koppen, Mathieu; Selen, Luc P. J.; Medendorp, W. Pieter
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement. PMID:26658990
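The reliability-proportional weighting described above is the standard maximum-likelihood cue-combination rule: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. The noise levels and single-trial estimates below are illustrative, not the study's data:

```python
import math

# Maximum-likelihood combination of two displacement cues. Each cue's
# weight is proportional to its reliability (inverse variance).
sigma_vis, sigma_vest = 2.0, 4.0   # per-cue displacement noise (cm)
w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
w_vest = 1.0 - w_vis

d_vis, d_vest = 30.0, 26.0         # single-trial cue estimates (cm)
d_hat = w_vis * d_vis + w_vest * d_vest
sigma_hat = math.sqrt(1.0 / (1.0 / sigma_vis**2 + 1.0 / sigma_vest**2))

print(w_vis, d_hat, sigma_hat)  # 0.8, 29.2, ~1.79 cm
```

Lowering visual coherence, as in the experiment, raises `sigma_vis` and shifts weight toward the vestibular cue, which is exactly the trial-to-trial reweighting the study reports.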
Martinson, K L; Coleman, R C; Rendahl, A K; Fang, Z; McCue, M E
2014-05-01
Excessive BW has become a major health issue in the equine (Equus caballus) industry. The objectives were to determine if the addition of neck circumference and height improved existing BW estimation equations, to develop an equation for estimation of ideal BW, and to develop a method for assessing the likelihood of being overweight in adult equids. Six hundred and twenty-nine adult horses and ponies that met the following criteria were measured and weighed at 2 horse shows in September 2011 in Minnesota: age ≥ 3 yr, height ≥ 112 cm, and nonpregnant. Personnel assessed BCS on a scale of 1 to 9 and measured wither height at the third thoracic vertebra, body length from the point of shoulder to the point of the buttock, neck and girth circumference, and weight using a portable livestock scale. Individuals were grouped into breed types on the basis of existing knowledge and were confirmed with multivariate ANOVA analysis of morphometric measurements. Equations for estimated and ideal BW were developed using linear regression modeling. For estimated BW, the model was fit using all individuals and all morphometric measurements. For ideal BW, the model was fit using individuals with a BCS of 5; breed type, height, and body length were considered as these measurements are not affected by adiposity. A BW score to assess the likelihood of being overweight was developed by fitting a proportional odds logistic regression model on BCS using the difference between ideal and estimated BW, the neck to height ratio, and the girth to height ratio as predictors; this score was then standardized using the data from individuals with a BCS of 5. Breed types included Arabian, stock, and pony. Mean (± SD) BCS was 5.6 ± 0.9. BW (kg) was estimated by taking [girth (cm)^1.486 × length (cm)^0.554 × height (cm)^0.599 × neck (cm)^0.173]/3,596, 3,606, and 3,441 for Arabians, ponies, and stock horses, respectively (R^2 = 0.92; mean-squared error (MSE) = 22 kg). Ideal BW (kg) was
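The regression can be evaluated directly. The sketch below assumes the exponent on girth is 1.486 and uses the breed-specific denominators given in the abstract; the function name and breed keys are illustrative:

```python
def estimated_bw_kg(girth_cm, length_cm, height_cm, neck_cm, breed_type):
    """Estimated body weight (kg) from the abstract's regression equation.

    Breed-specific denominators from the abstract: Arabians 3,596,
    ponies 3,606, stock horses 3,441. Exponent on girth assumed 1.486.
    """
    denom = {"arabian": 3596.0, "pony": 3606.0, "stock": 3441.0}[breed_type]
    return (girth_cm ** 1.486 * length_cm ** 0.554
            * height_cm ** 0.599 * neck_cm ** 0.173) / denom
```

For typical stock-horse measurements this produces weights in the plausible adult range, and the estimate grows monotonically with girth, the most heavily weighted measurement.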
Figure of merit of diamond power devices based on accurately estimated impact ionization processes
NASA Astrophysics Data System (ADS)
Hiraiwa, Atsushi; Kawarada, Hiroshi
2013-07-01
Although a high breakdown voltage or field is considered a major advantage of diamond, there has been a large difference in breakdown voltages or fields of diamond devices in the literature. Most of these apparently contradictory results did not correctly reflect material properties because of specific device designs, such as punch-through structure and insufficient edge termination. Once these data were removed, the remaining few results, including a record-high breakdown field of 20 MV/cm, were theoretically reproduced, exactly calculating ionization integrals based on the ionization coefficients that were obtained after compensating for possible errors involved in reported theoretical values. In this compensation, we newly developed a method for extracting an ionization coefficient from an arbitrary relationship between breakdown voltage and doping density in Chynoweth's framework. The breakdown field of diamond was estimated to depend on the doping density more strongly than that of other materials, and accordingly needs to be compared at the same doping density. The figure of merit (FOM) of diamond devices, obtained using these breakdown data, was comparable to the FOMs of 4H-SiC and wurtzite GaN devices at room temperature, but was projected to be larger than the latter by more than one order of magnitude at higher temperatures of about 300 °C. Considering the relatively undeveloped state of diamond technology, there is room for further enhancement of the diamond FOM, improving breakdown voltage and mobility. Through these investigations, junction breakdown was found to be initiated by electrons or holes in a p⁻-type or n⁻-type drift layer, respectively. The breakdown voltages in the two types of drift layers differed from each other in a strict sense but were practically the same. Hence, we do not need to care about the conduction type of drift layers, but should rather exactly calculate the ionization integral without approximating ionization coefficients by a power
Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.
NASA Astrophysics Data System (ADS)
Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke
2013-04-01
temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules
A Computer Code for Gas Turbine Engine Weight And Disk Life Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Ghosn, Louis J.; Halliwell, Ian; Wickenheiser, Tim (Technical Monitor)
2002-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. In this paper, the major enhancements to NASA's engine-weight estimate computer code (WATE) are described. These enhancements include the incorporation of improved weight-calculation routines for the compressor and turbine disks using the finite-difference technique. Furthermore, the stress distribution for various disk geometries was also incorporated, for a life-prediction module to calculate disk life. A material database, consisting of the material data of most of the commonly-used aerospace materials, has also been incorporated into WATE. Collectively, these enhancements provide a more realistic and systematic way to calculate the engine weight. They also provide additional insight into the design trade-off between engine life and engine weight. To demonstrate the new capabilities, the enhanced WATE code is used to perform an engine weight/life trade-off assessment on a production aircraft engine.
Valchev, Nikola; Zijdewind, Inge; Keysers, Christian; Gazzola, Valeria; Avenanti, Alessio; Maurits, Natasha M.
2015-01-01
Seeing others performing an action induces the observers' motor cortex to "resonate" with the observed action. Transcranial magnetic stimulation (TMS) studies suggest that such motor resonance reflects the encoding of various motor features of the observed action, including the apparent motor effort. However, it is unclear whether such encoding requires direct observation or whether force requirements can be inferred when the moving body part is partially occluded. To address this issue, we presented participants with videos of a right hand lifting a box of three different weights and asked them to estimate its weight. During each trial we delivered one transcranial magnetic stimulation (TMS) pulse over the left primary motor cortex of the observer and recorded the motor evoked potentials (MEPs) from three muscles of the right hand (first dorsal interosseous, FDI, abductor digiti minimi, ADM, and brachioradialis, BR). Importantly, because the hand shown in the videos was hidden behind a screen, only the contractions in the actor's BR muscle under the bare skin were observable during the entire videos, while the contractions in the actor's FDI and ADM muscles were hidden during the grasp and actual lift. The amplitudes of the MEPs recorded from the BR (observable) and FDI (hidden) muscle increased with the weight of the box. These findings indicate that the modulation of motor excitability induced by action observation extends to the cortical representation of muscles with contractions that could not be observed. Thus, motor resonance appears to reflect force requirements of observed lifting actions even when the moving body part is occluded from view. PMID:25462196
Menon, Manoj; Langdon, Jonathan; McAleavey, Stephen
2015-01-01
In elastography, displacement estimation is often performed using cross-correlation based techniques, assuming fully-developed, homogeneous speckle. In the presence of a local, large variation in echo amplitude, such as a reflection from a vessel wall, this assumption does not hold true, resulting in a biased displacement estimate. Normalizing the echo by its envelope before displacement estimation reduces this effect at the cost of larger jitter errors. An algorithm is proposed to reduce amplitude-dependent bias in displacement estimates while avoiding a large increase in the jitter error magnitude. The algorithm involves “Envelope-Weighted Normalization” (EWN) of echo data before displacement estimation. A parametric analysis was conducted to find the optimum parameters with which this technique could be implemented. The EWN technique was found to significantly reduce the RMS error of the displacement estimates showing the greatest improvements when utilizing longer window lengths and higher ultrasonic frequencies. PMID:20687275
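As a rough sketch of the idea, echo data can be divided by a power of its analytic-signal envelope before cross-correlation; the `alpha` parameter and this exact formulation are illustrative assumptions, not the paper's EWN definition:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_weighted_normalize(echo, alpha=0.5, eps=1e-12):
    """Envelope-weighted normalization (EWN) of echo data, as a sketch.

    Dividing the echo by envelope**alpha (alpha in [0, 1]) trades
    amplitude-dependent bias against jitter: alpha=0 leaves the echo
    unchanged, alpha=1 is full envelope normalization, and intermediate
    values give the compromise the abstract describes.
    """
    env = np.abs(hilbert(echo))        # analytic-signal envelope
    return echo / (env ** alpha + eps)
```

Displacement estimation by windowed cross-correlation would then run on the normalized echoes instead of the raw RF data.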
An improved method of estimating molecular weights of volatile organic compounds from their mass spectra has been developed and implemented with an expert system. The method is based on the strong correlation of MAXMASS, the highest mass with an intensity of 5% of the base peak in ...
A fast, personal-computer based method of estimating molecular weights of organic compounds from low resolution mass spectra has been thoroughly evaluated. The method is based on a rule-based pattern recognition/expert system approach which uses empirical linear corrections whic...
Preliminary weight and cost estimates for transport aircraft composite structural design concepts
NASA Technical Reports Server (NTRS)
1973-01-01
Preliminary weight and cost estimates have been prepared for design concepts utilized for a transonic long range transport airframe with extensive applications of advanced composite materials. The design concepts, manufacturing approach, and anticipated details of manufacturing cost reflected in the composite airframe are substantially different from those found in conventional metal structure and offer further evidence of the advantages of advanced composite materials.
NASA Technical Reports Server (NTRS)
Hale, P. L.
1982-01-01
The weight and major envelope dimensions of small aircraft propulsion gas turbine engines are estimated. The computerized method, called WATE-S (Weight Analysis of Turbine Engines-Small) is a derivative of the WATE-2 computer code. WATE-S determines the weight of each major component in the engine including compressors, burners, turbines, heat exchangers, nozzles, propellers, and accessories. A preliminary design approach is used where the stress levels, maximum pressures and temperatures, material properties, geometry, stage loading, hub/tip radius ratio, and mechanical overspeed are used to determine the component weights and dimensions. The accuracy of the method is generally better than ±10 percent as verified by analysis of four small aircraft propulsion gas turbine engines.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
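For reference, the weighted least-squares estimate underlying the discussion has the familiar closed form beta = (X'WX)^(-1) X'W y. A minimal sketch (the empirical covariance construction itself is not reproduced here, and the function name is illustrative):

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Weighted least-squares state estimate: beta = (X'WX)^(-1) X'W y.

    w holds per-observation weights (e.g. inverse measurement variances).
    The residuals returned are the raw material from which an empirical
    state error covariance, as the abstract proposes, could be built.
    """
    W = np.diag(w)
    XtW = X.T @ W
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    residuals = y - X @ beta
    return beta, residuals
```

On exact linear data the estimate reproduces the generating coefficients with zero residuals, regardless of the (positive) weights chosen.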
NASA Technical Reports Server (NTRS)
Wahba, Grace; Deepak, A. (Editor)
1988-01-01
The problem of merging direct and remotely sensed (indirect) data with forecast data to get an estimate of the present state of the atmosphere for the purpose of numerical weather prediction is examined. To carry out this merging optimally, it is necessary to provide an estimate of the relative weights to be given to the observations and forecast. It is possible to do this dynamically from the information to be merged, if the correlation structure of the errors from the various sources is sufficiently different. Some new statistical approaches to doing this are described, and conditions quantified in which such estimates are likely to be good.
Sahu, Nityananda; Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R
2016-07-21
Owing to the steep scaling behavior, highly accurate CCSD(T) calculations, the contemporary gold standard of quantum chemistry, are prohibitively difficult for moderate- and large-sized water clusters even with the high-end hardware. The molecular tailoring approach (MTA), a fragmentation-based technique is found to be useful for enabling such high-level ab initio calculations. The present work reports the CCSD(T) level binding energies of many low-lying isomers of large (H2O)n (n = 16, 17, and 25) clusters employing aug-cc-pVDZ and aug-cc-pVTZ basis sets within the MTA framework. Accurate estimation of the CCSD(T) level binding energies [within 0.3 kcal/mol of the respective full calculation (FC) results] is achieved after effecting the grafting procedure, a protocol for minimizing the errors in the MTA-derived energies arising due to the approximate nature of MTA. The CCSD(T) level grafting procedure presented here hinges upon the well-known fact that the MP2 method, which scales as O(N^5), can be a suitable starting point for approximating to the highly accurate CCSD(T) [that scale as O(N^7)] energies. On account of the requirement of only an MP2-level FC on the entire cluster, the current methodology ultimately leads to a cost-effective solution for the CCSD(T) level accurate binding energies of large-sized water clusters even at the complete basis set limit utilizing off-the-shelf hardware. PMID:27351269
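The grafting idea, as described, amounts to correcting the fragment-based CCSD(T) energy by the fragmentation error measured at the affordable MP2 level. A one-line sketch under that additive assumption (the paper's exact protocol may include further terms):

```python
def grafted_ccsdt_energy(e_mta_ccsdt, e_mta_mp2, e_fc_mp2):
    """Grafting correction sketch:

        E_graft = E_MTA^CCSD(T) + (E_FC^MP2 - E_MTA^MP2)

    The MP2 full calculation (FC) on the whole cluster, which scales only
    as O(N^5), supplies the fragmentation error that is grafted onto the
    fragment-based CCSD(T) energy. Additive form is an assumption
    consistent with the abstract's description.
    """
    return e_mta_ccsdt + (e_fc_mp2 - e_mta_mp2)
```

If MTA overbinds by 0.2 hartree at MP2 level, the same 0.2 is subtracted from the MTA CCSD(T) energy, pulling it toward the (unaffordable) full CCSD(T) result.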
Applying fuzzy logic to estimate the parameters of the length-weight relationship.
Bitar, S D; Campos, C P; Freitas, C E C
2016-05-01
We evaluated three mathematical procedures to estimate the parameters of the relationship between weight and length for Cichla monoculus: ordinary least-squares regression on log-transformed data, non-linear estimation using raw data, and a mix of multivariate analysis and fuzzy logic. Our goal was to find an alternative approach that considers the uncertainties inherent to this biological model. We found that non-linear estimation generated more consistent estimates than least-squares regression. Our results also indicate that it is possible to find consistent estimates of the parameters directly from the centers of mass of each cluster. However, the most important result is the intervals obtained with the fuzzy inference system. PMID:27143051
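The non-linear estimation the authors favor fits the standard length-weight model W = a·L^b on raw data. A minimal sketch using a generic least-squares fitter (starting values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_length_weight(lengths, weights):
    """Fit W = a * L**b on raw data by non-linear least squares.

    This avoids the bias that log-transforming before ordinary
    least-squares regression introduces; parameter names a, b follow
    the usual length-weight convention.
    """
    model = lambda L, a, b: a * L ** b
    (a, b), _ = curve_fit(model, lengths, weights, p0=(0.01, 3.0))
    return a, b
```

On noiseless synthetic data the fitter recovers the generating parameters exactly; on real data the two approaches can disagree, which is the comparison the abstract reports.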
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized. The method adjusts the data weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix
2015-12-01
In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
An Object-oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2008-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc. that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
Large-scale Advanced Propfan (LAP) performance, acoustic and weight estimation, January, 1984
NASA Technical Reports Server (NTRS)
Parzych, D.; Shenkman, A.; Cohen, S.
1985-01-01
In comparison to turbo-prop applications, the Prop-Fan is designed to operate in a significantly higher range of aircraft flight speeds. Two concerns arise regarding operation at very high speeds: aerodynamic performance and noise generation. This data package covers both topics over a broad range of operating conditions for the eight (8) bladed SR-7L Prop-Fan. Operating conditions covered are: flight Mach number 0-0.85; blade tip speed 600-800 ft/sec; and cruise power loading 20-40 SHP/D^2. Prop-Fan weight and weight scaling estimates are also included.
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Chandramouli, V.
2005-10-01
Distance-weighted and data-driven methods are extensively used for estimation of missing rainfall data. The inverse distance weighting method (IDWM) is one of the most frequently used methods for estimating missing rainfall values at a gage based on values recorded at all other available recording gages. In spite of the method's wide success and acceptability, it suffers from major conceptual limitations. Conceptual improvements are incorporated into the IDWM, leading to several modified distance-based methods. A data-driven model that uses artificial neural network concepts and a stochastic interpolation technique, kriging, are also developed and tested in the current study. These methods are tested for estimation of missing precipitation data. Historical precipitation data from 20 rain-gauging stations in the state of Kentucky, USA, are used to test the improved methods and derive conclusions about the efficacy of the incorporated improvements. Results suggest that the conceptual revisions can improve estimation of missing precipitation records by defining better weighting parameters and surrogate measures for distances that are used in the IDWM.
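The baseline IDWM that the study improves upon is simple to state: each gauge's value is weighted by an inverse power of its distance to the target gauge. A minimal sketch (function name and data layout are illustrative):

```python
def idw_estimate(target_xy, gauges, power=2.0):
    """Classical inverse-distance-weighted estimate at a missing gauge.

    gauges: list of ((x, y), value) pairs for gauges with observations.
    Each observation is weighted by 1/d**power, so nearer gauges
    dominate the estimate.
    """
    num = den = 0.0
    for (x, y), value in gauges:
        d2 = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        w = 1.0 / d2 ** (power / 2.0)  # 1/d^power
        num += w * value
        den += w
    return num / den
```

Two equidistant gauges simply average; move the target toward one gauge and the estimate is pulled toward that gauge's value. The modified methods in the study replace the raw distance with better surrogate measures and tune the weighting parameters.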
Reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization
Shi, Xin; Zhao, Xiangmo; Hui, Fei; Ma, Junyan; Yang, Lan
2014-10-06
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the standpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from statistical data can be improved mainly through additional packet exchange, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed from the collaborative sensing of clock timestamps, with the fusion weights defined by the covariance of the sync errors of the different clock deviations. Extensive simulation results show that the proposed approach achieves better performance in terms of sync overhead and sync accuracy.
Regularized Estimate of the Weight Vector of an Adaptive Interference Canceller
NASA Astrophysics Data System (ADS)
Ermolayev, V. T.; Sorokin, I. S.; Flaksman, A. G.; Yastrebov, A. V.
2016-05-01
We consider an adaptive multi-channel interference canceller, which ensures the minimum value of the average output power of interference. It is proposed to form the weight vector of such a canceller as the power-vector expansion. It is shown that this approach allows one to obtain an exact analytical solution for the optimal weight vector by using the procedure of the power-vector orthogonalization. In the case of a limited number of the input-process samples, the solution becomes ill-defined and its regularization is required. An effective regularization method, which ensures a high degree of the interference suppression and does not involve the procedure of inversion of the correlation matrix of interference, is proposed, which significantly reduces the computational cost of the weight-vector estimation.
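For context, the standard regularized alternative that does require solving with the interference correlation matrix is diagonal loading, w = (R + λI)^(-1) p; the paper's contribution is precisely to avoid this step via power-vector orthogonalization. A sketch of that conventional baseline (treating p as a Wiener-style cross-correlation vector is an assumption):

```python
import numpy as np

def regularized_weights(R, p, loading=1e-2):
    """Diagonally loaded weight estimate: w = (R + loading*I)^(-1) p.

    When only a limited number of input samples is available, the sample
    correlation matrix R is ill-conditioned; adding loading*I regularizes
    the solve. This is the conventional approach the power-vector method
    is positioned against.
    """
    n = R.shape[0]
    return np.linalg.solve(R + loading * np.eye(n), p)
```

With R = I the loaded solution is just p shrunk by a factor 1/(1 + loading), which makes the regularization's bias explicit.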
2011-01-01
Background: Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results: We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions: The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers: This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
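A stochastic ensemble Kalman filter update, sketched below, is a simplified stand-in for the Local Ensemble Transform Kalman Filter the study uses (the LETKF differs in its deterministic, localized analysis; all names here are illustrative):

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, rng):
    """One stochastic EnKF analysis step.

    ensemble: (n_members, n_state) prior forecast ensemble.
    obs: observation vector; H: linear observation operator;
    obs_var: observation error variance (scalar, for simplicity).
    """
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)   # state anomalies
    Y = ensemble @ H.T                     # predicted observations per member
    Yp = Y - Y.mean(axis=0)
    Pyy = Yp.T @ Yp / (n - 1) + obs_var * np.eye(len(obs))
    Pxy = X.T @ Yp / (n - 1)
    K = Pxy @ np.linalg.inv(Pyy)           # Kalman gain from ensemble stats
    # Perturbed observations keep the analysis ensemble spread consistent.
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(obs)))
    return ensemble + (perturbed - Y) @ K.T
```

With a precise observation, the analysis ensemble mean moves almost all the way from the prior mean to the observed value, which is the "shadowing" behavior the abstract exploits over successive 60-day cycles.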
Veale, David; Gledhill, Lucinda J; Christodoulou, Polyxeni; Hodsoll, John
2016-09-01
Our aim was to systematically review the prevalence of body dysmorphic disorder (BDD) in a variety of settings. Weighted prevalence estimate and 95% confidence intervals in each study were calculated. The weighted prevalence of BDD in adults in the community was estimated to be 1.9%; in adolescents 2.2%; in student populations 3.3%; in adult psychiatric inpatients 7.4%; in adolescent psychiatric inpatients 7.4%; in adult psychiatric outpatients 5.8%; in general cosmetic surgery 13.2%; in rhinoplasty surgery 20.1%; in orthognathic surgery 11.2%; in orthodontics/cosmetic dentistry settings 5.2%; in dermatology outpatients 11.3%; in cosmetic dermatology outpatients 9.2%; and in acne dermatology clinics 11.1%. Women outnumbered men in the majority of settings but not in cosmetic or dermatological settings. BDD is common in some psychiatric and cosmetic settings but is poorly identified. PMID:27498379
Image haze removal using a hybrid of fuzzy inference system and weighted estimation
NASA Astrophysics Data System (ADS)
Wang, Jyun-Guo; Tai, Shen-Chuan; Lin, Cheng-Jian
2015-05-01
The attenuation of the light transmitted through air can reduce image quality when taking a photograph outdoors, especially in a hazy environment. Hazy images often lack sufficient information for image recognition systems to operate effectively. In order to solve the aforementioned problems, this study proposes a hybrid method combining fuzzy theory with weighted estimation for the removal of haze from images. A transmission map is first created based on fuzzy theory. According to the transmission map, the proposed method automatically finds the possible atmospheric lights and refines the atmospheric lights by mixing these candidates. Weighted estimation is then employed to generate a refined transmission map, which removes the halo artifact from around the sharp edges. Experimental results demonstrate the superiority of the proposed method over existing methods with regard to contrast, color depth, and the elimination of halo artifacts.
An Object-Oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2009-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides the component flow data (airflows, temperatures, pressures, etc.) required for sizing the components and computing their weights. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It would also facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case.
Weighted least square estimates of the parameters of a model of survivorship probabilities.
Mitra, S
1987-06-01
"A weighted regression has been fitted to estimate the parameters of a model involving functions of survivorship probability and age. Earlier, the parameters were estimated by the method of ordinary least squares and the results were very encouraging. However, a multiple regression equation passing through the origin has been found appropriate for the present model from statistical consideration. Fortunately, this method, while methodologically more sophisticated, has a slight edge over the former as evidenced by the respective measures of reproducibility in the model and actual life tables selected for this study." PMID:12281212
NASA Astrophysics Data System (ADS)
Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan
2015-10-01
Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.
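A toy illustration of the interframe-averaging idea above: per-frame least-squares channel estimates are averaged across adjacent frames, which reduces the noise in the estimate. Uniform weights are used here as a placeholder; the paper's exact weighting scheme is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
K, F = 64, 8                       # subcarriers, adjacent frames
h = rng.normal(size=K) + 1j * rng.normal(size=K)   # true channel (static)

pilots = np.ones(K)                # known pilot symbols
noise = 0.5 * (rng.normal(size=(F, K)) + 1j * rng.normal(size=(F, K)))
Y = pilots * h + noise             # received pilots over F frames

H_ls = Y / pilots                  # per-frame least-squares estimates

# Weighted interframe averaging (uniform weights as an assumption;
# weights could instead favor frames closer in time)
wts = np.full(F, 1.0 / F)
H_wifa = np.tensordot(wts, H_ls, axes=1)

mse_single = np.mean(np.abs(H_ls[0] - h) ** 2)
mse_wifa = np.mean(np.abs(H_wifa - h) ** 2)
print(mse_single, mse_wifa)
```

For a channel that is static across the averaged frames, the estimation MSE drops roughly in proportion to the number of frames averaged.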
NASA Astrophysics Data System (ADS)
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of, and production from, turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm thick, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original oil in place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied: estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, recovering an estimated 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
NASA Astrophysics Data System (ADS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-02-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations, or some combination of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would additionally shorten the acquisition time by about 8 times.
Estimation of missing rainfall data in Pahang using modified spatial interpolation weighting methods
NASA Astrophysics Data System (ADS)
Azman, Muhammad Az-zuhri; Zakaria, Roslinazairimah; Ahmad Radi, Noor Fadhilah
2015-02-01
In meteorological and hydrological research, missing rainfall data are one of the recurring challenges that researchers must face. Rainfall data go missing because of incorrect measurement techniques, relocation of rain stations, and instrument malfunction. Finding a suitable method to handle the missing data is critical before proceeding to the next level of data analysis. Most researchers use spatial interpolation methods to estimate the missing rainfall data at a particular target station based on the available rainfall data at its neighboring stations. Spatial interpolation is one of the traditional weighting approaches and can also take into account the strength of correlation between stations. This study uses modified spatial interpolation weighting methods to estimate the missing rainfall data in Pahang, assuming that only the target station has missing values. A new modified method combining the normal ratio and inverse distance weighting with correlation, abbreviated NRIDC, is proposed, which incorporates the correlation coefficient. The performance of the modified spatial interpolation weighting methods is assessed using the similarity index (S-index), mean absolute error (MAE), and coefficient of correlation (R) for different percentages of missing values (5%-30%).
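The weighting idea can be sketched as below; note that the exact combination of distance and correlation used here (w_i proportional to r_i / d_i^p) is an illustrative assumption, not necessarily the paper's NRIDC formula, and all station data are hypothetical.

```python
import numpy as np

def estimate_missing(neighbor_values, distances, correlations, p=2):
    """Estimate a target station's rainfall from neighboring stations,
    weighting each neighbor by inverse distance combined with its
    correlation coefficient with the target (illustrative formula)."""
    w = correlations / distances ** p
    return float(np.sum(w * neighbor_values) / np.sum(w))

# Hypothetical neighboring stations
values = np.array([12.0, 15.0, 9.0])      # observed rainfall (mm)
dists = np.array([5.0, 10.0, 20.0])       # distance to target (km)
corrs = np.array([0.9, 0.8, 0.6])         # correlation with target station

print(estimate_missing(values, dists, corrs))
```

The estimate is a convex combination of the neighbor values, so it always lies between the smallest and largest neighboring observation; nearer, better-correlated stations dominate.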
Formant frequency estimation of high-pitched vowels using weighted linear prediction.
Alku, Paavo; Pohjalainen, Jouni; Vainio, Martti; Laukkanen, Anne-Maria; Story, Brad H
2013-08-01
All-pole modeling is a widely used formant estimation method, but its performance is known to deteriorate for high-pitched voices. In order to address this problem, several all-pole modeling methods robust to fundamental frequency have been proposed. This study compares five such previously known methods and introduces a technique, Weighted Linear Prediction with Attenuated Main Excitation (WLP-AME). WLP-AME utilizes temporally weighted linear prediction (LP) in which the square of the prediction error is multiplied by a given parametric weighting function. The weighting downgrades the contribution of the main excitation of the vocal tract in optimizing the filter coefficients. Consequently, the resulting all-pole model is affected more by the characteristics of the vocal tract leading to less biased formant estimates. By using synthetic vowels created with a physical modeling approach, the results showed that WLP-AME yields improved formant frequencies for high-pitched sounds in comparison to the previously known methods (e.g., relative error in the first formant of the vowel [a] decreased from 11% to 3% when conventional LP was replaced with WLP-AME). Experiments conducted on natural vowels indicate that the formants detected by WLP-AME changed in a more regular manner between repetitions of different pitch than those computed by conventional LP. PMID:23927127
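The core of temporally weighted linear prediction can be sketched as a weighted normal-equation solve; this is generic WLP, and the paper's AME weighting function would simply be supplied as the weight vector w (uniform weights reduce it to conventional LP).

```python
import numpy as np

def wlp(s, p, w):
    """Weighted linear prediction of order p: choose coefficients a to
    minimize sum_n w[n] * (s[n] - sum_k a[k] * s[n-k])^2."""
    n = np.arange(p, len(s))
    # Lagged-signal matrix: column k-1 holds s[n-k]
    S = np.stack([s[n - k] for k in range(1, p + 1)], axis=1)
    Ws = w[n, None] * S                       # apply temporal weights
    return np.linalg.solve(S.T @ Ws, Ws.T @ s[n])

# Synthetic AR(2) signal; with uniform weights WLP reduces to ordinary LP
rng = np.random.default_rng(3)
a_true = np.array([1.5, -0.8])
s = np.zeros(2000)
for n in range(2, len(s)):
    s[n] = a_true @ s[n - 2:n][::-1] + 0.1 * rng.normal()

a_hat = wlp(s, 2, np.ones_like(s))
print(a_hat)
```

In the WLP-AME setting, w would be chosen to attenuate samples near the main glottal excitation so the fit is dominated by the closed-phase portion of the cycle.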
NASA Technical Reports Server (NTRS)
Jensen, J. K.; Wright, R. L.
1981-01-01
Estimates of total spacecraft weight and packaging options were made for three conceptual designs of a microwave radiometer spacecraft. Erectable structures were found to be slightly lighter than deployable structures but could be packaged in one-tenth the volume. The tension rim concept, an unconventional design approach, was found to be the lightest and transportable to orbit in the least number of shuttle flights.
Effects of a limited class of nonlinearities on estimates of relative weights
NASA Astrophysics Data System (ADS)
Richards, Virginia M.
2002-02-01
Perturbation analyses have been applied in recent years to determine the relative contribution of individual stimulus components in detection and discrimination tasks. Responses to stimulus samples are compared to stimulus parameters to determine the details of the decision rule. Often, a linear model is assumed and it is of interest to determine the relative contribution of different stimulus elements to the decision. Here, biases in estimated relative weights are considered for the case where the decision variable is given by D = (Σ_i (α_i X_i^n)^k)^m and the stimulus components, the X_i, are normally distributed, of equal variance, and mutually independent. The α_i are the ``true'' combination weights, and n, k, and m are positive reals. The method used to estimate relative weights is the correlation coefficient between the X_i and the observer's responses. Estimates of relative α_i do not depend on m but may depend on the mean values of the X_i and the values of n and k (a dependence on the variance, σ_i^2, holds even for linear transformations).
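The correlation-based weight estimation can be sketched for the baseline linear case (n = k = 1), where the component-decision correlations recover the true relative weights; the decision variable itself stands in for the observer's responses here, which is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
T, m = 20000, 3                     # trials, stimulus components
alpha = np.array([1.0, 2.0, 3.0])   # "true" combination weights
X = rng.normal(size=(T, m))         # independent, equal-variance components

# Decision variable D = (sum_i (alpha_i X_i^n)^k)^m with n = k = m = 1,
# i.e. the linear baseline against which biases would be measured
D = X @ alpha

# Weight estimates: correlation between each component and the decision
r = np.array([np.corrcoef(X[:, i], D)[0, 1] for i in range(m)])
rel = r / r.max()
print(rel)
```

Repeating this with n or k different from 1 (and a binarized response) would expose the biases the abstract analyzes; in the linear case the relative estimates match α_i / max(α_i) up to sampling error.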
Logan, Corina J; Palmstrom, Christin R
2015-01-01
There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858
A plan for accurate estimation of daily area-mean rainfall during the CaPE experiment
NASA Technical Reports Server (NTRS)
Duchon, Claude E.
1992-01-01
The Convection and Precipitation/Electrification (CaPE) experiment took place in east central Florida from 8 July to 18 August, 1991. There were five research themes associated with CaPE. In broad terms they are: investigation of the evolution of the electric field in convective clouds, determination of meteorological and electrical conditions associated with lightning, development of mesoscale numerical forecasts (2-12 hr) and nowcasts (less than 2 hr) of convective initiation and remote estimation of rainfall. It is the last theme coupled with numerous raingage and streamgage measurements, satellite and aircraft remote sensing, radiosondes and other meteorological measurements in the atmospheric boundary layer that provide the basis for determining the hydrologic cycle for the CaPE experiment area. The largest component of the hydrologic cycle in this region is rainfall. An accurate determination of daily area-mean rainfall is important in correctly modeling its apportionment into runoff, infiltration and evapotranspiration. In order to achieve this goal a research plan was devised and initial analysis begun. The overall research plan is discussed with special emphasis placed on the adjustment of radar rainfall estimates to raingage rainfall.
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner, Holmes,…
Ellis, Alan R.; Dusetzina, Stacie B.; Hansen, Richard A.; Gaynes, Bradley N.; Farley, Joel F.; Stürmer, Til
2013-01-01
Purpose: The choice of propensity score (PS) implementation influences treatment effect estimates not only because different methods estimate different quantities, but also because different estimators respond in different ways to phenomena such as treatment effect heterogeneity and limited availability of potential matches. Using effectiveness data, we describe lessons learned from sensitivity analyses with matched and weighted estimates. Methods: With subsample data (N=1,292) from Sequenced Treatment Alternatives to Relieve Depression, a 2001–2004 effectiveness trial of depression treatments, we implemented PS matching and weighting to estimate the treatment effect in the treated and conducted multiple sensitivity analyses. Results: Matching and weighting both balanced covariates but yielded different samples and treatment effect estimates (matched RR 1.00, 95% CI:0.75–1.34; weighted RR 1.28, 95% CI:0.97–1.69). In sensitivity analyses, as increasing numbers of observations at both ends of the PS distribution were excluded from the weighted analysis, weighted estimates approached the matched estimate (weighted RR 1.04, 95% CI 0.77–1.39 after excluding all observations below the 5th percentile of the treated and above the 95th percentile of the untreated). Treatment appeared to have benefits only in the highest and lowest PS strata. Conclusions: Matched and weighted estimates differed due to incomplete matching, sensitivity of weighted estimates to extreme observations, and possibly treatment effect heterogeneity. PS analysis requires identifying the population and treatment effect of interest, selecting an appropriate implementation method, and conducting and reporting sensitivity analyses. Weighted estimation especially should include sensitivity analyses relating to influential observations, such as those treated contrary to prediction. PMID:23280682
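The PS weighting used to target the treatment effect in the treated can be illustrated with standard inverse-probability weights (treated weight 1, untreated weight PS/(1-PS)); the data and PS model below are synthetic, not from the trial.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)                         # a confounder
ps = 1 / (1 + np.exp(-x))                      # true propensity score
t = rng.uniform(size=n) < ps                   # treatment assignment

# Weights targeting the treatment effect in the treated (ATT):
# treated keep weight 1, untreated are reweighted by ps / (1 - ps)
w = np.where(t, 1.0, ps / (1 - ps))

# Covariate balance check: the weighted mean of x among the untreated
# should match the unweighted mean of x among the treated
mean_treated = x[t].mean()
mean_untreated_w = np.average(x[~t], weights=w[~t])
print(mean_treated, mean_untreated_w)
```

Untreated subjects with a PS near 1 receive very large weights, which is exactly the sensitivity to extreme observations the abstract's trimming analyses probe.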
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-01-01
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay nevertheless plays a critical role in increasing the accuracy of InSAR measurements, yet few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS. PMID:27420066
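The per-block robust estimation of a phase-elevation ratio can be sketched with a generic iteratively reweighted least-squares (IRLS) fit using Huber weights; the paper's expanded multi-block, multi-weighted scheme is more elaborate, and the data below are synthetic.

```python
import numpy as np

def huber_irls(h, phi, k=1.345, iters=50):
    """Robustly fit phi ~ a*h + b (phase vs. elevation) by IRLS with
    Huber weights, so outlying pixels (e.g. deforming areas) are
    downweighted rather than biasing the ratio estimate."""
    A = np.stack([h, np.ones_like(h)], axis=1)
    beta = np.linalg.lstsq(A, phi, rcond=None)[0]
    for _ in range(iters):
        r = phi - A @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale (MAD)
        u = np.abs(r) / (k * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)         # Huber weights
        Aw = w[:, None] * A
        beta = np.linalg.solve(A.T @ Aw, Aw.T @ phi)
    return beta

rng = np.random.default_rng(6)
h = rng.uniform(0.0, 2000.0, size=500)               # elevation (m)
a_true, b_true = -2e-3, 0.5                          # ratio (rad/m), offset
phi = a_true * h + b_true + 0.05 * rng.normal(size=500)
phi[:50] += 3.0                                      # deformation "outliers"

a_hat, b_hat = huber_irls(h, phi)
print(a_hat, b_hat)
```

An ordinary least-squares fit on the same data would be pulled noticeably by the 10% contaminated pixels; the Huber-weighted fit recovers the underlying ratio.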
Evaluation of Bias in Estimates of Early Childhood Obesity From Parent-Reported Heights and Weights
Weden, Margaret M.; Lau, Christopher; Brownell, Peter; Nazarov, Zafar; Fernandes, Meenakshi
2014-01-01
Objectives. We evaluated bias in estimated obesity prevalence owing to error in parental reporting. We also evaluated bias mitigation through application of Centers for Disease Control and Prevention’s biologically implausible value (BIV) cutoffs. Methods. We simulated obesity prevalence of children aged 2 to 5 years in 2 panel surveys after counterfactually substituting parameters estimated from 1999–2008 National Health and Nutrition Examination Survey data for prevalence of extreme height and weight and for proportions obese in extreme height or weight categories. Results. Heights reported below the first and fifth height-for-age percentiles explained between one half and two thirds, respectively, of total bias in obesity prevalence. Bias was reduced by one tenth when excluding cases with height-for-age and weight-for-age BIVs and by one fifth when excluding cases with body mass–index-for-age BIVs. Applying BIVs, however, resulted in incorrect exclusion of nonnegligible proportions of obese children. Conclusions. Correcting the reporting of children’s heights in the first percentile alone may reduce overestimation of early childhood obesity prevalence in surveys with parental reporting by one half to two thirds. Excluding BIVs has limited effectiveness in mitigating this bias. PMID:24832432
NASA Astrophysics Data System (ADS)
Beirle, Steffen; Hörmann, Christoph; Jöckel, Patrick; Liu, Song; Penning de Vries, Marloes; Pozzer, Andrea; Sihler, Holger; Valks, Pieter; Wagner, Thomas
2016-07-01
The STRatospheric Estimation Algorithm from Mainz (STREAM) determines stratospheric columns of NO2 which are needed for the retrieval of tropospheric columns from satellite observations. It is based on the total column measurements over clean, remote regions as well as over clouded scenes where the tropospheric column is effectively shielded. The contribution of individual satellite measurements to the stratospheric estimate is controlled by various weighting factors. STREAM is a flexible and robust algorithm and does not require input from chemical transport models. It was developed as a verification algorithm for the upcoming satellite instrument TROPOMI, as a complement to the operational stratospheric correction based on data assimilation. STREAM was successfully applied to the UV/vis satellite instruments GOME 1/2, SCIAMACHY, and OMI. It overcomes some of the artifacts of previous algorithms, as it is capable of reproducing gradients of stratospheric NO2, e.g., related to the polar vortex, and reduces interpolation errors over continents. Based on synthetic input data, the uncertainty of STREAM was quantified as about 0.1-0.2 × 1015 molecules cm-2, in accordance with the typical deviations between stratospheric estimates from different algorithms compared in this study.
Model selection in the weighted generalized estimating equations for longitudinal data with dropout.
Gosho, Masahiko
2016-05-01
We propose criteria for variable selection in the mean model and for the selection of a working correlation structure in longitudinal data with dropout missingness using weighted generalized estimating equations. The proposed criteria are based on a weighted quasi-likelihood function and a penalty term. Our simulation results show that the proposed criteria frequently select the correct model in candidate mean models. The proposed criteria also have good performance in selecting the working correlation structure for binary and normal outcomes. We illustrate our approaches using two empirical examples. In the first example, we use data from a randomized double-blind study to test the cancer-preventing effects of beta carotene. In the second example, we use longitudinal CD4 count data from a randomized double-blind study. PMID:26509243
Fuel Consumption Reduction and Weight Estimate of an Intercooled-Recuperated Turboprop Engine
NASA Astrophysics Data System (ADS)
Andriani, Roberto; Ghezzi, Umberto; Ingenito, Antonella; Gamma, Fausto
2012-09-01
The introduction of intercooling and regeneration in a gas turbine engine can improve performance and reduce fuel consumption. Moreover, as a direct consequence of the fuel saved, pollutant emissions can also be greatly reduced. The turboprop appears to be the gas turbine engine best suited to intercooling and heat recuperation, thanks to its relatively small mass flow rate and the small fraction of propulsive power provided by the exhaust nozzle. However, the extra weight and drag due to the heat exchangers must be carefully considered. An intercooled-recuperated turboprop engine is studied by means of a numerical thermodynamic code that computes the thermal cycle and simulates engine behavior at different operating conditions. The main aero-engine performance figures, such as specific power and specific fuel consumption, are then evaluated from the cycle analysis. The fuel saved, the pollution reduction, and the engine weight are then estimated for an example case.
NASA Astrophysics Data System (ADS)
Garden, Anna L.; Paulot, Fabien; Crounse, John D.; Maxwell-Cameron, Isobel J.; Wennberg, Paul O.; Kjaergaard, Henrik G.
2009-05-01
We have calculated relative energies and dipole moments of the stable conformers of nitrous acid, ethanol, ethylene glycol and propanone nitrate using a range of ab initio methods and basis sets. We have used these to calculate conformationally weighted dipole moments that are useful in estimates of collision rates between molecules and ions. We find that the average error in the conformationally weighted dipole moments is less than 5% for CCSD(T) with the aug-cc-pVTZ basis set, less than 10% for B3LYP/6-31G(d) and less than 20% for B3LYP/6-31+G(d) and B3LYP/aug-cc-pVTZ.
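Conformational weighting of dipole moments is typically a Boltzmann average over conformer energies; the sketch below shows that average for a hypothetical two-conformer molecule (degeneracies and vibrational corrections are ignored, and the numbers are illustrative).

```python
import numpy as np

def weighted_dipole(energies_kj, dipoles_debye, T=298.15):
    """Boltzmann-weighted (conformationally averaged) dipole moment.
    energies_kj: conformer energies relative to the minimum (kJ/mol).
    dipoles_debye: corresponding conformer dipole moments (Debye)."""
    R = 8.314462618e-3                 # gas constant, kJ/(mol*K)
    w = np.exp(-np.asarray(energies_kj) / (R * T))
    w /= w.sum()                       # normalized Boltzmann populations
    return float(np.dot(w, dipoles_debye))

# Hypothetical two-conformer molecule (e.g. an anti/gauche pair)
energies = [0.0, 2.0]                  # kJ/mol above the global minimum
dipoles = [1.8, 2.6]                   # Debye
print(weighted_dipole(energies, dipoles))
```

The weighted moment always lies between the smallest and largest conformer dipoles, and small errors in relative conformer energies shift the populations exponentially, which is why the abstract's method/basis-set comparison matters.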
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
NASA Astrophysics Data System (ADS)
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
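The two likelihood computations compared above can be sketched for a single fully correlated systematic error: the exact multivariate Gaussian (requiring the covariance solve) versus Monte Carlo marginalization over sampled systematic errors. The residuals and uncertainties below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
res = np.array([0.3, -0.1, 0.4, 0.2, 0.1])   # residuals: model - experiment
sig_r, sig_s = 0.25, 0.30                    # random / systematic sigmas
n = len(res)

# Exact likelihood: multivariate Gaussian with covariance
# Sigma = sig_r^2 * I + sig_s^2 * J (one shared systematic error)
Sigma = sig_r**2 * np.eye(n) + sig_s**2 * np.ones((n, n))
L_exact = np.exp(-0.5 * res @ np.linalg.solve(Sigma, res)) / \
          np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))

# Sampling estimate: marginalize the systematic error by Monte Carlo
S = 200000
delta = rng.normal(0.0, sig_s, size=S)       # sampled systematic shifts
logs = -0.5 * ((res[None, :] - delta[:, None]) / sig_r) ** 2
per_sample = np.exp(logs.sum(axis=1)) / (sig_r * np.sqrt(2 * np.pi)) ** n
L_mc = per_sample.mean()

print(L_exact, L_mc)
```

The Monte Carlo estimate converges to the exact value as the sample size grows, which mirrors the equivalence argued in the abstract; its slow convergence relative to a single matrix solve is the practical drawback reported there.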
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
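A schematic reading of the COD metric described above: differentiate the spectrum, autocorrelate the derivative, and take the lag of the first minimum as the width measure. The details (windowing, normalisation, minimum detection) are assumptions for illustration, not the authors' implementation; a faster spectral modulation, as produced by a larger scatterer, should yield a narrower correlation.

```python
import math

def cod_bandwidth(spectrum):
    """Sketch of a Correlation-of-the-Derivative style width metric:
    the lag of the first local minimum of the autocorrelation of the
    spectrum's first derivative (the derivative is self-normalising)."""
    d = [b - a for a, b in zip(spectrum, spectrum[1:])]   # first derivative
    n = len(d)
    mean = sum(d) / n
    d = [v - mean for v in d]
    denom = sum(v * v for v in d)
    corr = [sum(d[i] * d[i + lag] for i in range(n - lag)) / denom
            for lag in range(n // 2)]
    for lag in range(1, len(corr) - 1):
        if corr[lag] < corr[lag - 1] and corr[lag] <= corr[lag + 1]:
            return lag                                    # first local minimum
    return len(corr) - 1

# Synthetic "spectra": a fast modulation (10-sample period) and a slow one
# (40-sample period); the fast one should give the smaller COD width.
fast = [math.cos(2.0 * math.pi * i / 10.0) for i in range(200)]
slow = [math.cos(2.0 * math.pi * i / 40.0) for i in range(200)]
```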
Subramanian, Swetha; Mast, T Douglas
2015-10-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462
NASA Astrophysics Data System (ADS)
Subramanian, Swetha; Mast, T. Douglas
2015-09-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
Distributed weighted least-squares estimation with fast convergence for large-scale systems
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976
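The distributed solution of the global weighted least-squares problem can be illustrated with a bare Jacobi-style iteration on a two-sub-system toy problem: each node repeatedly re-solves its own unknown from its row of the normal equations, using only its neighbours' latest estimates. This is only the communication pattern; the paper's algorithms additionally use an optimized scaling parameter and preconditioning to maximize the convergence rate.

```python
def normal_equations(A, w, y):
    """Build M = A^T W A and b = A^T W y for diagonal weights w."""
    n = len(A[0])
    M = [[sum(w[k] * A[k][i] * A[k][j] for k in range(len(A)))
          for j in range(n)] for i in range(n)]
    b = [sum(w[k] * A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    return M, b

def distributed_wls(M, b, n_iter=100):
    """Jacobi-style distributed solve of M x = b: node i re-solves its own
    unknown from its row, using only neighbours' latest estimates; all
    nodes update simultaneously each round."""
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iter):
        x = [(b[i] - sum(M[i][j] * x[j] for j in range(n) if j != i)) / M[i][i]
             for i in range(n)]
    return x
```

Plain Jacobi converges here because the toy normal matrix is diagonally dominant; in general the convergence rate depends on the network coupling, which is exactly what the paper's scaling and preconditioning address.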
Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji
2011-12-15
Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because physical and chemical properties of a measuring object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS) which utilizes a newly defined similarity between samples is proposed to estimate active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to the conventional PLS using wavelengths selected on the basis of variable importance on the projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. PMID:22001843
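The just-in-time idea behind LW-PLS can be conveyed with a one-variable stand-in: weight the calibration samples by a Gaussian similarity to the query and fit a weighted least-squares line per query. Real LW-PLS instead fits a weighted PLS model in latent-variable space with the paper's newly defined similarity; the data and bandwidth below are illustrative.

```python
import math

def lw_predict(x_cal, y_cal, x_query, bandwidth):
    """Locally weighted linear regression with a Gaussian similarity weight:
    a per-query weighted least-squares fit evaluated at the query point."""
    w = [math.exp(-0.5 * ((x - x_query) / bandwidth) ** 2) for x in x_cal]
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x_cal)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, y_cal)) / sw
    sxx = sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x_cal))
    sxy = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x_cal, y_cal))
    return ym + (sxy / sxx) * (x_query - xm)

# A nonlinear calibration map that a single global linear model cannot follow
x_cal = [i / 10.0 for i in range(11)]
y_cal = [x * x for x in x_cal]
```

A global linear fit of this data predicts 0.35 at x = 0.5 (it passes through the means), whereas the local fit lands close to the true 0.25.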
Towards higher sensitivity and stability of axon diameter estimation with diffusion‐weighted MRI
Alexander, Daniel C.; Kurniawan, Nyoman D.; Reutens, David C.; Yang, Zhengyi
2016-01-01
Diffusion‐weighted MRI is an important tool for in vivo and non‐invasive axon morphometry. The ActiveAx technique utilises an optimised acquisition protocol to infer orientationally invariant indices of axon diameter and density by fitting a model of white matter to the acquired data. In this study, we investigated the factors that influence the sensitivity to small‐diameter axons, namely the gradient strength of the acquisition protocol and the model fitting routine. Diffusion‐weighted ex vivo images of the mouse brain were acquired using 16.4‐T MRI with high (G max of 300 mT/m) and ultra‐high (G max of 1350 mT/m) gradient strength acquisitions. The estimated axon diameter indices of the mid‐sagittal corpus callosum were validated using electron microscopy. In addition, a dictionary‐based fitting routine was employed and evaluated. Axon diameter indices were closer to electron microscopy measures when higher gradient strengths were employed. Despite the improvement, estimated axon diameter indices (a lower bound of ~1.8 μm) remained higher than the measurements obtained using electron microscopy (~1.2 μm). We further observed that limitations of pulsed gradient spin echo (PGSE) acquisition sequences and axonal dispersion could also influence the sensitivity with which axon diameter indices could be estimated. Our results highlight the influence of acquisition protocol, tissue model and model fitting, in addition to gradient strength, on advanced microstructural diffusion‐weighted imaging techniques. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd. PMID:26748471
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
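The weight pathology described in this abstract follows directly from the standard information-criterion averaging recipe, sketched below (this is the textbook recipe, not the paper's two-stage method): because weights decay exponentially in the criterion differences, even a modest gap between models drives the best model's weight toward 100%.

```python
import math

def averaging_weights(ic_values):
    """Model-averaging weights from information-criterion values
    (AIC/BIC/KIC-style): w_k proportional to exp(-delta_k / 2),
    where delta_k = IC_k - min(IC)."""
    ic_min = min(ic_values)
    raw = [math.exp(-0.5 * (ic - ic_min)) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]
```

With criterion values of 100, 130 and 128 the best model receives essentially all the weight; only when the values are a few units apart do the alternatives retain meaningful weight, which is why inflated criterion differences (from ignoring model error) produce the unrealistic near-100% weights.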
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
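The weighted averages approach described above reduces to two small computations: an abundance-weighted mean of each chemical constituent per taxon, and a rescaling of the resulting optima onto the 0-10 tolerance scale. The numbers below are made up for illustration (a single constituent, e.g. BOD); the study combines six constituents.

```python
def weighted_average_optimum(abundances, gradient):
    """Abundance-weighted mean of one chemical constituent for one taxon."""
    return sum(a * g for a, g in zip(abundances, gradient)) / sum(abundances)

def rescale_to_tolerance(optima, lo=0.0, hi=10.0):
    """Linearly rescale taxon optima onto the 0-10 tolerance scale, so the
    most sensitive taxon maps to 0 and the most tolerant to 10."""
    mn, mx = min(optima), max(optima)
    return [lo + (hi - lo) * (o - mn) / (mx - mn) for o in optima]
```

A taxon found mostly at low BOD sites gets a low optimum, and after rescaling a low tolerance value, matching the abstract's 0 = very sensitive, 10 = highly tolerant convention.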
Hayakawa, Carole K; Spanier, Jerome; Venugopalan, Vasan
2014-02-01
We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029
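The two estimators compared in this abstract can be demonstrated in a deliberately simplified 1-D slab walk (the paper treats the full radiative transport problem; geometry, coefficients and sample sizes here are illustrative). Discrete weighting samples free paths from the total coefficient and pays the albedo at each collision; continuous weighting samples from the scattering coefficient alone and attenuates the weight exponentially along the path. Both are unbiased for the same transmittance.

```python
import math
import random

def transmittance_discrete(mu_s, mu_a, L, n_photons, seed=0):
    """Discrete absorption weighting: collisions sampled from mu_t = mu_s +
    mu_a; weight multiplied by the albedo mu_s/mu_t at each interaction."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    tally = 0.0
    for _ in range(n_photons):
        x, direction, w = 0.0, 1, 1.0
        while True:
            x += direction * rng.expovariate(mu_t)
            if x >= L:                       # transmitted through the slab
                tally += w
                break
            if x <= 0.0:                     # escaped backwards
                break
            w *= mu_s / mu_t                 # absorb a weight fraction
            direction = rng.choice((-1, 1))  # isotropic 1-D scattering
    return tally / n_photons

def transmittance_continuous(mu_s, mu_a, L, n_photons, seed=0):
    """Continuous absorption weighting: collisions sampled from mu_s only;
    weight decays as exp(-mu_a * path) between interaction points."""
    rng = random.Random(seed)
    tally = 0.0
    for _ in range(n_photons):
        x, direction, w = 0.0, 1, 1.0
        while True:
            s = rng.expovariate(mu_s)
            if direction > 0 and x + s >= L:
                tally += w * math.exp(-mu_a * (L - x))
                break
            if direction < 0 and x - s <= 0.0:
                break                        # escaped backwards, no tally
            x += direction * s
            w *= math.exp(-mu_a * s)
            direction = rng.choice((-1, 1))
    return tally / n_photons
```

The two estimates agree within Monte Carlo error; the paper's point is that their variances differ systematically with absorption strength and detector location, even though their means coincide.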
Feature weight estimation for gene selection: a local hyperlinear learning approach
2014-01-01
Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes, buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust in terms of degradation of noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
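The RELIEF baseline that LHR builds on can be sketched in a few lines: a feature gains weight when it separates each sample from its nearest miss (closest sample of another class) more than from its nearest hit (closest same-class sample). LHR replaces these global nearest neighbours with a local-hyperlinear approximation; the tiny dataset below is made up to show the margin idea only.

```python
def relief_weights(X, y):
    """Classic RELIEF feature weighting over all samples: for each sample,
    reward features by the nearest-miss distance minus the nearest-hit
    distance, averaged over the dataset."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for i in range(n):
        hit = min((j for j in range(n) if j != i and y[j] == y[i]),
                  key=lambda j: sum((X[i][k] - X[j][k]) ** 2 for k in range(p)))
        miss = min((j for j in range(n) if y[j] != y[i]),
                   key=lambda j: sum((X[i][k] - X[j][k]) ** 2 for k in range(p)))
        for k in range(p):
            w[k] += (abs(X[i][k] - X[miss][k]) - abs(X[i][k] - X[hit][k])) / n
    return w
```

On two features where only the first separates the classes, the first weight comes out strongly positive and the second near zero, which is the signal gene-selection methods threshold on.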
NASA Astrophysics Data System (ADS)
Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.
2015-12-01
Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies occupy only a small fraction (< 1 %) of a typically resolved target pixel (e.g., from Landsat 7 or MODIS), accurate determination of the hotspot's size and temperature is problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary most remotely sensed volcanic hotspots fall below. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
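The Dual-Band un-mixing at the heart of this experiment solves two Planck-radiance equations for the two unknowns, hotspot fraction p and hotspot temperature. A minimal sketch, with illustrative wavelengths of 4 µm and 11 µm standing in for the mid-wave and thermal bands and a known background temperature, eliminates p using band 1 and bisects on the hotspot temperature (the band-ratio mismatch is monotone in this regime):

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    """Black-body spectral radiance B(lam, T) in W sr^-1 m^-3."""
    return (2.0 * H * C ** 2 / lam ** 5) / math.expm1(H * C / (lam * KB * T))

def dual_band_unmix(L1, L2, lam1, lam2, T_bg, T_lo=350.0, T_hi=2000.0):
    """Solve L_i = p*B(lam_i, T_hot) + (1-p)*B(lam_i, T_bg) for (p, T_hot)
    given the background temperature, by bisection on T_hot."""
    def mismatch(T_hot):
        p = (L1 - planck(lam1, T_bg)) / (planck(lam1, T_hot) - planck(lam1, T_bg))
        return p * planck(lam2, T_hot) + (1.0 - p) * planck(lam2, T_bg) - L2
    for _ in range(100):
        T_mid = 0.5 * (T_lo + T_hi)
        if mismatch(T_mid) > 0.0:     # model too bright at 11 um -> hotter
            T_lo = T_mid
        else:
            T_hi = T_mid
    T_hot = 0.5 * (T_lo + T_hi)
    p = (L1 - planck(lam1, T_bg)) / (planck(lam1, T_hot) - planck(lam1, T_bg))
    return p, T_hot
```

On noiseless synthetic radiances the inversion recovers p and T_hot exactly; the experiment's point is how quickly this breaks down once the true hotspot fraction drops toward a few percent of the pixel and sensor noise enters.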
Real-time combining of residual carrier array signals using ML weight estimates
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.; Rodemich, Eugene R.; Dolinar, Samuel J., Jr.
1992-01-01
A real-time digital signal combining system for use with array feeds is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed residual carrier samples in each channel using a 'sliding-window' implementation of a maximum-likelihood (ML) parameter estimator. It is shown that with averaging times of about 0.1 s, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the array feed, even in the presence of severe wind gusts and similar disturbances.
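The sliding-window combining scheme described above can be sketched for the simplest case of equal noise in every channel: estimate each channel's complex amplitude as a running mean of its residual carrier samples, then combine with the conjugate estimates as weights. Window length, gains and noise levels below are invented for illustration; the real system handles unequal noise variances and a full ML parameter estimator.

```python
import random

def ml_combine(samples, window):
    """Sliding-window combining for a residual-carrier array: per channel,
    the complex gain estimate is the mean over the last `window` samples,
    and its conjugate is the combining weight (ML under equal white
    Gaussian noise in all channels)."""
    n_ch, n = len(samples), len(samples[0])
    combined = []
    for t in range(n):
        lo = max(0, t - window + 1)
        z = 0j
        for ch in range(n_ch):
            g_hat = sum(samples[ch][lo:t + 1]) / (t + 1 - lo)
            z += g_hat.conjugate() * samples[ch][t]
        combined.append(z)
    return combined

# Seven channels with unit gain; noise sigma = 0.5 per quadrature component
rng = random.Random(3)
samples = [[1.0 + complex(rng.gauss(0.0, 0.5), rng.gauss(0.0, 0.5))
            for _ in range(2000)] for _ in range(7)]
combined = ml_combine(samples, window=200)
```

Once the window has filled, the combined output settles near the coherent sum of the channel powers, illustrating how conjugate weighting aligns the channel phases before summation.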
ERIC Educational Resources Information Center
Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.
1997-01-01
This review finds that formula-based procedures can be used in place of empirical validation for estimating population validity or in place of empirical cross-validation for estimating population cross-validity. Discusses conditions under which the equal weights procedure is a viable alternative. (SLD)
Technology Transfer Automated Retrieval System (TEKTRAN)
Records of 105,645 Nellore calves born from 1977 to 1994 in eight different regions of Brazil were used to estimate genetic parameters for weaning weight (kg). The objective of this study was to estimate genetic and environmental parameters and evaluate genotype x environment interaction for weaning...
Ahlberg, C M; Kuehn, L A; Thallman, R M; Kachman, S D; Snelling, W M; Spangler, M L
2016-05-01
Birth weight (BWT) and calving difficulty (CD) were recorded on 4,579 first-parity females from the Germplasm Evaluation Program at the U.S. Meat Animal Research Center (USMARC). Both traits were analyzed using a bivariate animal model with direct and maternal effects. Calving difficulty was transformed from the USMARC scores to corresponding z-scores from the standard normal distribution based on the incidence rate of the USMARC scores. Breed fraction covariates were included to estimate breed differences. Heritability estimates (SE) for BWT direct, CD direct, BWT maternal, and CD maternal were 0.34 (0.10), 0.29 (0.10), 0.15 (0.08), and 0.13 (0.08), respectively. Calving difficulty direct breed effects, expressed as deviations from Angus, ranged from -0.13 to 0.77, and maternal breed effects ranged from -0.27 to 0.36. Hereford-, Angus-, Gelbvieh-, and Brangus-sired calves would be the least likely to require assistance at birth, whereas Chiangus-, Charolais-, and Limousin-sired calves would be the most likely to require assistance at birth. Maternal breed effects for CD were least for Simmental and Charolais and greatest for Red Angus and Chiangus. Results showed that the diverse biological types of cattle have different effects on both BWT and CD. Furthermore, results provide a mechanism whereby beef cattle producers can compare EBV for CD direct and maternal arising from disjoined and breed-specific genetic evaluations. PMID:27285683
NASA Astrophysics Data System (ADS)
Cook, Tessa S.; Chadalavada, Seetharam C.; Boonn, William W.
2013-03-01
One of the biggest challenges in dose monitoring is customization of CT dose estimates to the patient. Patient size remains a highly significant variable. One metric that has previously been used for patient size is patient weight, though this is often criticized as inaccurate. In this work, we compare patients' weight to their effective diameters obtained from a CT scan of the chest or the abdomen. CT exams of the chest (N=163) and abdomen/pelvis (N=168) performed on adult patients in July 2012 were randomly selected for analysis. The effective diameter of the patient for each exam was determined using the central slice of the scan region for each exam using eXposure™ (Radimetrics, Inc., Toronto, Canada). In some cases, the same patient had both a chest and abdominopelvic CT, so effective diameters from both regions were analyzed. In this small sample size, there appears to be a linear relationship between patient weight and effective diameter when measured in the mid-chest and mid-abdomen of adult patients. However, for each weight, patient effective diameter can vary by 5 cm from the regression line in both the chest and the abdomen. A 5-cm difference corresponds to a difference of approximately 0.2 in the chest and 0.3 in the abdomen/pelvis for the correction factors recommended for size-specific dose estimation by the AAPM. This preliminary data suggests that weight-based CT protocoling may in fact be appropriate for some adults. However, more work is needed to identify those patients in whom weight-based protocoling is not appropriate.
NASA Technical Reports Server (NTRS)
Bagri, Durgadas S.; Majid, Walid
2009-01-01
At present, spacecraft angular position is determined with the Deep Space Network (DSN) using group delay estimates from very long baseline interferometer (VLBI) phase measurements employing differential one-way ranging (DOR) tones. As an alternative to this approach, we propose estimating the position of a spacecraft to half-fringe-cycle accuracy from the time variations between measured and calculated phases on DSN VLBI baseline(s) as the Earth rotates. Combining the fringe location of the target with the phase allows high accuracy for the spacecraft angular position estimate. This can be achieved using telemetry signals of at least 4-8 MSamples/sec data rate or DOR tones.
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2011-01-01
The purposes of this study were to generate correction equations for self-reported height and weight quartiles and to test the accuracy of the body mass index (BMI) classification based on corrected self-reported height and weight among 739 male and 434 female college students. The BMIqc (from height and weight quartile-specific, corrected…
An Illustration of Inverse Probability Weighting to Estimate Policy-Relevant Causal Effects.
Edwards, Jessie K; Cole, Stephen R; Lesko, Catherine R; Mathews, W Christopher; Moore, Richard D; Mugavero, Michael J; Westreich, Daniel
2016-08-15
Traditional epidemiologic approaches allow us to compare counterfactual outcomes under 2 exposure distributions, usually 100% exposed and 100% unexposed. However, to estimate the population health effect of a proposed intervention, one may wish to compare factual outcomes under the observed exposure distribution to counterfactual outcomes under the exposure distribution produced by an intervention. Here, we used inverse probability weights to compare the 5-year mortality risk under observed antiretroviral therapy treatment plans to the 5-year mortality risk that would have been observed under an intervention in which all patients initiated therapy immediately upon entry into care among patients positive for human immunodeficiency virus in the US Centers for AIDS Research Network of Integrated Clinical Systems multisite cohort study between 1998 and 2013. Therapy-naïve patients (n = 14,700) were followed from entry into care until death, loss to follow-up, or censoring at 5 years or on December 31, 2013. The 5-year cumulative incidence of mortality was 11.65% under observed treatment plans and 10.10% under the intervention, yielding a risk difference of -1.57% (95% confidence interval: -3.08, -0.06). Comparing outcomes under the intervention with outcomes under observed treatment plans provides meaningful information about the potential consequences of new US guidelines to treat all patients with human immunodeficiency virus regardless of CD4 cell count under actual clinical conditions. PMID:27469514
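The inverse-probability-weighting idea illustrated in this article can be sketched on a hypothetical data-generating process (the numbers and variable names below are invented, not from the study): a confounder L raises both the chance of treatment and the outcome risk, so the naive risk among the treated is biased, while the weighted estimate recovers the counterfactual risk had everyone been treated.

```python
import random

def ipw_risk_all_treated(records, propensity):
    """Horvitz-Thompson IPW estimate of the counterfactual risk had everyone
    initiated treatment: mean of 1{A=1} * Y / P(A=1 | L)."""
    return sum(a * y / propensity(l) for l, a, y in records) / len(records)

# Hypothetical cohort: L confounds treatment A and outcome Y.
rng = random.Random(7)
records = []
for _ in range(200_000):
    l = 1 if rng.random() < 0.5 else 0
    a = 1 if rng.random() < (0.8 if l else 0.3) else 0
    y = 1 if rng.random() < 0.3 + 0.3 * l - 0.15 * a else 0
    records.append((l, a, y))

ipw = ipw_risk_all_treated(records, lambda l: 0.8 if l else 0.3)
naive = (sum(y for _, a, y in records if a) /
         sum(1 for _, a, _ in records if a))   # confounded comparison
```

Here the true treat-everyone risk is 0.30 by construction; the naive treated-only risk is pulled toward the high-risk stratum (about 0.37) because high-L patients are treated more often, which is exactly the bias the weights undo.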
Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng
2015-01-01
Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades with time. In this paper, a wearable multi-sensor system is designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5°, relative to the reference path, for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively. PMID:25961384
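The magnetometer part of such a system reduces, in the undisturbed case, to a tilt-compensated compass: roll and pitch come from the accelerometer, and the magnetic vector is rotated back into the horizontal plane before taking the heading. This is only the magnetic baseline (axis conventions follow the common x-forward, y-right, z-down choice with a flat-rest accelerometer reading of (0, 0, +g)); the paper fuses it with gyroscopes in a quaternion UKF to ride out magnetic disturbances.

```python
import math

def tilt_compensated_heading(acc, mag):
    """Heading (degrees, 0-360) from a 3-axis accelerometer and magnetometer:
    roll/pitch from the gravity vector, then de-rotate the magnetic vector
    into the horizontal plane."""
    ax, ay, az = acc
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    mx, my, mz = mag
    bx = (mx * math.cos(pitch) + my * math.sin(pitch) * math.sin(roll)
          + mz * math.sin(pitch) * math.cos(roll))
    by = my * math.cos(roll) - mz * math.sin(roll)
    return math.degrees(math.atan2(-by, bx)) % 360.0

# Field: 20 uT horizontal (north), 45 uT vertical (down); g = 9.81 m/s^2.
heading_level = tilt_compensated_heading((0.0, 0.0, 9.81), (20.0, 0.0, 45.0))
heading_east = tilt_compensated_heading((0.0, 0.0, 9.81), (0.0, -20.0, 45.0))
t = math.radians(30.0)   # pitched 30 degrees nose-up, still facing north
heading_pitched = tilt_compensated_heading(
    (-9.81 * math.sin(t), 0.0, 9.81 * math.cos(t)),
    (20.0 * math.cos(t) - 45.0 * math.sin(t), 0.0,
     20.0 * math.sin(t) + 45.0 * math.cos(t)))
```

Without the de-rotation step, the large vertical field component leaks into the horizontal reading as soon as the sensor tilts, which is one reason magnetometer-only heading is fragile on a walking subject.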
Global estimates of water-vapor-weighted mean temperature of the atmosphere for GPS applications
NASA Astrophysics Data System (ADS)
Wang, Junhong; Zhang, Liangying; Dai, Aiguo
2005-11-01
Water-vapor-weighted atmospheric mean temperature, Tm, is a key parameter in the retrieval of atmospheric precipitable water (PW) from ground-based Global Positioning System (GPS) measurements of zenith path delay (ZPD), as the accuracy of the GPS-derived PW is proportional to the accuracy of Tm. We compare and analyze global estimates of Tm from three different data sets from 1997 to 2002: the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year reanalysis (ERA-40), the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis, and the newly released Integrated Global Radiosonde Archive (IGRA) data set. Temperature and humidity profiles from both the ERA-40 and NCEP/NCAR reanalyses produce reasonable Tm estimates compared with those from the IGRA soundings. The ERA-40, however, is a better option for global Tm estimation because of its better performance and its higher spatial resolution. Tm is found to increase from below 255 K in polar regions to 295-300 K in the tropics, with small longitudinal variations. Tm has an annual range of ~2-4 K in the tropics and 20-35 K over much of Eurasia and northern North America. The day-to-day Tm variations are 1-3 K over most low latitudes and 4-7 K (2-4 K) in winter (summer) Northern Hemispheric land areas. Diurnal variations of Tm are generally small, with mean-to-peak amplitudes less than 0.5 K over most oceans and 0.5-1.5 K over most land areas and a local time of maximum around 16-20 LST. The commonly used Tm-Ts relationship from Bevis et al. (1992) is evaluated using the ERA-40 data. Tm derived from this relationship (referred to as Tmb) has a cold bias in the tropics and subtropics (-1 to -6 K, largest in marine stratiform cloud regions) and a warm bias in the middle and high latitudes (2-5 K, largest over mountain regions). The random error in Tmb is much smaller than the bias. A serious problem in Tmb is its erroneous large diurnal cycle owing to
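The role of Tm in the GPS-PW retrieval can be sketched with the widely quoted pieces of the chain: the Bevis et al. (1992) surface-temperature proxy Tm ≈ 70.2 + 0.72·Ts evaluated in this abstract, and the dimensionless factor Π in PW = Π · ZWD. The refractivity constants below (k2' = 22.1 K/hPa, k3 = 3.739e5 K²/hPa) are commonly used values; exact numbers vary slightly between refractivity compilations.

```python
def mean_temperature_bevis(ts_kelvin):
    """Bevis et al. (1992) proxy: Tm ~ 70.2 + 0.72 * Ts (both in kelvin)."""
    return 70.2 + 0.72 * ts_kelvin

def pw_conversion_factor(tm_kelvin):
    """Dimensionless Pi in PW = Pi * ZWD, using commonly quoted constants:
    k2' = 22.1 K/hPa, k3 = 3.739e5 K^2/hPa, liquid-water density
    1000 kg/m^3, R_v = 461.5 J/(kg K)."""
    rho_w, r_v = 1000.0, 461.5
    k2_prime, k3 = 22.1, 3.739e5
    return 1.0e8 / (rho_w * r_v * (k2_prime + k3 / tm_kelvin))
```

Π comes out near 0.15-0.16 for typical Tm, so a 100 mm zenith wet delay maps to roughly 15-16 mm of PW, and an error in Tm propagates into PW at about the same relative size, which is why the biases in the Tm-Ts relationship quantified above matter.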
Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris
2012-01-01
Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95%CI = 652 – 964) and 704 birds in 2011 (95%CI = 579 – 837). Point-transect surveys yielded population estimates with improved precision which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of models used to estimate density and population size is expected to improve as the data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95%CI = 2,037 – 3,965) and 2,461 birds in 2011 (95%CI = 1,682 – 3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density, and consequently, relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers was similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-transect methods, thereby improving precision and resulting population size and trend estimation. The method is also better suited for the steep and uneven terrain of Nihoa
NASA Technical Reports Server (NTRS)
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups since the observed satellite orbit dynamics are sensitive to the above mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of different geodetic parameter groups achieved by combining SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
Mello, Beatriz; Schrago, Carlos G
2014-01-01
Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333
2011-01-01
Background: The purpose of this study is to explore how a patient's height and weight can be used to predict the effective dose to a reference phantom with similar height and weight from a chest abdomen pelvis computed tomography scan when machine-based parameters are unknown. Since machine-based scanning parameters can be misplaced or lost, a predictive model will enable the medical professional to quantify a patient's cumulative radiation dose. Methods: One hundred mathematical phantoms of varying heights and weights were defined within an x-ray Monte Carlo based software code in order to calculate organ absorbed doses and effective doses from a chest abdomen pelvis scan. Regression analysis was used to develop an effective dose predictive model. The regression model was experimentally verified using anthropomorphic phantoms and validated against a real patient population. Results: Estimates of the effective doses as calculated by the predictive model were within 10% of the estimates of the effective doses using experimentally measured absorbed doses within the anthropomorphic phantoms. Comparisons of the patient population effective doses show that the predictive model is within 33% of current methods of estimating effective dose using machine-based parameters. Conclusions: A patient's height and weight can be used to estimate the effective dose from a chest abdomen pelvis computed tomography scan. The presented predictive model can be used interchangeably with current effective dose estimating techniques that rely on computed tomography machine-based techniques. PMID:22004072
How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates
ERIC Educational Resources Information Center
Otterbach, Steffen; Sousa-Poza, Alfonso
2010-01-01
This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…
NASA Technical Reports Server (NTRS)
Feiveson, A. H. (Principal Investigator)
1979-01-01
The use of a weighted aggregation technique to improve the precision of the overall LACIE estimate is considered. The manner in which a weighted aggregation technique is implemented given a set of weights is described. The problem of variance estimation is discussed and the question of how to obtain the weights in an operational environment is addressed.
Sahin, A; Ulutas, Z; Yilmaz Adkinson, A; Adkinson, R W
2012-06-01
A study was conducted to assess the influence of genetic and environmental factors on Brown Swiss calf birth weight, and to estimate variance components, genetic parameters, and breeding values. Data were collected on 1,761 Brown Swiss calves born from 1990 to 2005 in the Konuklar State Farm in Turkey. Mean birth weight for all calves was 39.3 ± 0.09 kg. Least squares mean birth weights for male and female Brown Swiss calves were 40.3 ± 0.02 and 39.0 ± 0.02 kg, respectively. Variance components, genetic parameters, and breeding values for birth weight in Brown Swiss calves were estimated by restricted maximum likelihood (REML)-best linear unbiased prediction (BLUP) procedures using the MTDFREML (multiple-trait derivative-free restricted maximum likelihood) program employing an animal model. Direct heritability (h²(d)), maternal heritability (h²(m)), total heritability (h²(T)), r(am), and c(am) estimates were 0.12, 0.09, 0.23, -0.58, and -0.06, respectively. The estimated maternal permanent environmental variance expressed as a proportion of the phenotypic variance (c²) was 0.05. Breeding values were estimated for the trait and used to evaluate genetic trends across the time period investigated. The linear regression of the genetic trend was not different from zero. No genetic trend for birth weight was expected, since there had been no direct selection pressure on the trait. Absence of a trend confirms that there was no change due to selection pressure on correlated traits. Genetic and environmental parameter estimates were similar to literature values, indicating that effective selection methods used in more developed improvement programs would be effective in Turkey as well. PMID:22203217
Holman, C D; Arnold-Reed, D E; de Klerk, N; McComb, C; English, D R
2001-03-01
A psychometric experiment in causal inference was performed on 159 Australian and New Zealand epidemiologists. Subjects each decided whether to attribute causality to 12 summaries of evidence concerning a disease and a chemical exposure. The 1,748 unique summaries embodied predetermined distributions of 19 characteristics generated by computerized evidence simulation. Effects of characteristics of evidence on causal attribution were estimated from logistic regression, and interactions were identified from a regression tree analysis. Factors with the strongest influence on the odds of causal attribution were statistical significance (odds ratio = 4.5 if 0.001 ≤ P < 0.05 and 7.2 if P < 0.001, vs P ≥ 0.05); refutation of alternative explanations (odds ratio = 8.1 for no known confounder vs none adjusted); strength of association (odds ratio = 2.0 if 1.5 < relative risk ≤ 2.0 and 3.6 if relative risk > 2.0, vs relative risk ≤ 1.5); and adjunct information concerning biological, factual, and theoretical coherence. The refutation of confounding reduced the cutpoint in the regression tree for decision-making based on strength of association. The effect of the number of supportive studies reached saturation after it exceeded 12 studies. There was evidence of flawed logic in the responses concerning specificity of effects of exposure and a tendency to discount evidence if the P-value was a "near miss" (0.050 < P < 0.065). Evidential weights based on regression coefficients for causal criteria can be applied to actual scientific evidence. PMID:11246588
NASA Astrophysics Data System (ADS)
Li, Changsong; Watanabe, Masayuki; Mitani, Yasunori; Monchusi, Bessie
In order to effectively monitor and control interarea oscillations in a power system, it is crucial to obtain full knowledge of each oscillation mode, such as participation level, mode distribution, frequency and damping. This paper presents an approach to estimate the participation weight of a generator in an oscillation mode based on synchronized phasor measurements and auto-spectrum analysis. The participation weight is a quantity defined in this paper to indicate the relative participation of one generator in one oscillation mode of interest. Compared to the traditional participation factor computed from model-based modal analysis, the participation weight is defined and estimated directly from measurements of system output signals. The input-output relationship for a constant-parameter linear system subjected to a stationary white noise disturbance is introduced to establish the connection between measurable power system response quantities and the participation weight. Auto-spectrum analysis is adopted to estimate the participation weight, with demonstrative examples that include both simulations and practical cases using measured phasor data from the CampusWAMS.
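A minimal sketch of the idea above, under the simplifying assumption that each generator's participation weight is read off as the normalized auto-spectrum value of its measured signal at the mode frequency (the paper's exact estimator may differ):

```python
import numpy as np

def auto_spectrum(x, fs):
    """Simple periodogram (auto-spectrum) of a measured signal."""
    x = np.asarray(x, float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, spec

def participation_weights(signals, fs, mode_freq):
    """Relative participation of each generator in one oscillation mode."""
    peaks = []
    for x in signals:
        freqs, spec = auto_spectrum(x, fs)
        k = np.argmin(np.abs(freqs - mode_freq))   # bin nearest the mode
        peaks.append(spec[k])
    peaks = np.array(peaks)
    return peaks / peaks.sum()                     # normalize across generators

# two synthetic "generator" outputs sharing a 0.5 Hz inter-area mode,
# with different participation (amplitudes 1.0 vs 0.3) plus noise
fs = 10.0
t = np.arange(0, 200, 1 / fs)
rng = np.random.default_rng(0)
g1 = 1.0 * np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)
g2 = 0.3 * np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)
w = participation_weights([g1, g2], fs, mode_freq=0.5)
```

The generator with the larger modal amplitude receives the larger weight, mirroring how a participation factor ranks machines in model-based modal analysis.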
Matsuura, Akihiro; Irimajiri, Mami; Matsuzaki, Kunihiro; Hiraguri, Yuko; Nakanowatari, Toshihiko; Yamazaki, Atusi; Hodate, Koichi
2013-01-01
The aim of this study was to establish a method for estimating loading capacity for Japanese native horses by gait analysis using an accelerometer. Six mares of Japanese native horses were used. The acceleration of each horse was recorded during walking and trotting along a straight course at a sampling frequency of 200 Hz. Each horse performed 12 tests: one test with a loaded weight of 80 kg (First 80 kg) followed by 10 tests with random loaded weights between 85 kg and 130 kg and a final test with a loaded weight of 80 kg again. The time series of acceleration was subjected to fast Fourier transformation, and the autocorrelation coefficient was calculated. The first two peaks of the autocorrelation were defined as symmetry and regularity of the gait. At trot, symmetries in the 100, 110, and 125 kg tests were significantly lower than that in First 80 kg (P < 0.05, by analysis of covariance and Sidak's test). These results imply that the maximum permissible load weight is less than 100 kg, which is 29% of the body weight of Japanese native horses. Our method is a widely applicable and welfare-friendly method for estimating maximum permissible load weights of horses. PMID:23302086
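The autocorrelation-based gait metrics described above can be illustrated on a synthetic acceleration trace. The stride frequency, amplitudes, and the step/stride peak interpretation below are assumptions for demonstration, not the study's measured values:

```python
import numpy as np

def autocorrelation(x):
    """Autocorrelation via FFT (Wiener-Khinchin), zero-padded, normalized."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                  # zero-pad to avoid wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]                        # acf[0] == 1 after normalization

fs = 200.0                                     # sampling rate used in the study, Hz
t = np.arange(0, 10, 1 / fs)
stride_hz = 1.5                                # hypothetical stride frequency
step_hz = 2 * stride_hz                        # two steps per stride
# synthetic vertical acceleration: strong step component, weaker stride component
accel = np.sin(2 * np.pi * step_hz * t) + 0.3 * np.sin(2 * np.pi * stride_hz * t)

acf = autocorrelation(accel)
half_stride = int(fs / step_hz)                # lag of half a stride (one step)
full_stride = int(fs / stride_hz)              # lag of a full stride
symmetry, regularity = acf[half_stride], acf[full_stride]
```

A symmetric gait repeats every step, so the autocorrelation peak at the half-stride lag ("symmetry") stays high; loading that disturbs one side of the gait lowers that first peak, which is the effect the study detected above 100 kg.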
NASA Astrophysics Data System (ADS)
Wu, Shunguang; Hong, Lang
2008-04-01
A framework is given for simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable using only a single scan of HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel using multi-scan measurements. Moreover, to improve the estimation accuracy in large noise and/or false alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors have been significantly reduced by using the template information.
Technology Transfer Automated Retrieval System (TEKTRAN)
Objective: To describe energy intake reporting by gender, weight status, and interview sequence and to compare reported intakes to the Estimated Energy Requirement at different levels of physical activity. Methods: Energy intake was self-reported by 24-hour recall on two occasions (day 1 and day 2)...
NASA Astrophysics Data System (ADS)
Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim
2011-09-01
Digital image processing methods represent a viable and well acknowledged alternative to strain gauges and interferometric techniques for determining full-field displacements and strains in materials under stress. This paper presents an image adaptive technique for dense motion and strain estimation using high-resolution speckle images that show the analyzed material in its original and deformed states. The algorithm starts by dividing the speckle image showing the original state into irregular cells, taking into consideration both spatial and gradient image information present. Subsequently the Newton-Raphson digital image correlation technique is applied to calculate the corresponding motion for each cell. Adaptive spatial regularization in the form of the Geman-McClure robust spatial estimator is employed to increase the spatial consistency of the motion components of a cell with respect to the components of neighbouring cells. To obtain the final strain information, local least-squares fitting using a linear displacement model is performed on the horizontal and vertical displacement fields. To evaluate the presented image partitioning and strain estimation techniques, two numerical and two real experiments are employed. The numerical experiments simulate the deformation of a specimen with constant strain across the surface as well as small rigid-body rotations, while the real experiments consist of specimens that undergo uniaxial stress. The results indicate very good accuracy of the recovered strains as well as better rotation insensitivity compared to classical techniques.
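The Geman-McClure robust estimator mentioned above, rho(r) = r² / (sigma² + r²), saturates for large residuals and therefore down-weights outliers. A toy IRLS (iteratively reweighted least squares) example, a deliberate simplification of its spatial-regularization role in the paper, shows the effect:

```python
import numpy as np

def geman_mcclure_irls(y, sigma=1.0, iters=50):
    """Robust location estimate: IRLS with Geman-McClure weights.

    rho(r) = r^2 / (sigma^2 + r^2); the IRLS weight (psi(r)/r up to a
    constant) is sigma^2 / (sigma^2 + r^2)^2, which vanishes for outliers.
    """
    mu = np.mean(y)                               # least-squares start
    for _ in range(iters):
        r = y - mu
        w = sigma**2 / (sigma**2 + r**2) ** 2     # small weight for large |r|
        mu = np.sum(w * y) / np.sum(w)
    return mu

y = np.array([1.00, 1.10, 0.90, 1.05, 0.95, 10.0])  # one gross outlier
plain_mean = y.mean()                 # pulled toward the outlier (2.5)
robust = geman_mcclure_irls(y)        # stays near the inlier cluster (~1.0)
```

In the paper's setting the residual is the disagreement between a cell's motion and its neighbours', so a single badly matched cell does not drag the whole displacement field.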
An exploratory investigation of weight estimation techniques for hypersonic flight vehicles
NASA Technical Reports Server (NTRS)
Cook, E. L.
1981-01-01
The three basic methods of weight prediction (fixed-fraction, statistical correlation, and point stress analysis) and some of the computer programs that have been developed to implement them are discussed. A modified version of the WAATS (Weights Analysis of Advanced Transportation Systems) program is presented, along with input data forms and an example problem.
Clisby, Nathan
2010-02-01
We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33 × 10^6 steps. Consequently the critical exponent ν for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is ν = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
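For orientation, a naive pivot move on the 2D square lattice is sketched below. The paper's contribution is precisely the data structure that makes each pivot attempt fast enough for cubic-lattice walks of tens of millions of steps; this O(N)-per-attempt toy version only illustrates the move itself:

```python
import random

# The 7 non-identity lattice symmetries of the square lattice (rotations
# and reflections), applied to displacements from the pivot site.
SYMMETRIES = [
    lambda x, y: (y, -x), lambda x, y: (-x, -y), lambda x, y: (-y, x),
    lambda x, y: (x, -y), lambda x, y: (-x, y),
    lambda x, y: (y, x),  lambda x, y: (-y, -x),
]

def pivot_once(walk):
    """Attempt one pivot move; return the new walk, or the old one if rejected."""
    p = random.randrange(1, len(walk) - 1)     # choose a pivot site
    op = random.choice(SYMMETRIES)
    px, py = walk[p]
    tail = []
    for x, y in walk[p + 1:]:                  # apply symmetry to the tail
        dx, dy = op(x - px, y - py)
        tail.append((px + dx, py + dy))
    head = set(walk[:p + 1])
    if any(site in head for site in tail):     # self-intersection: reject
        return walk
    return walk[:p + 1] + tail

random.seed(1)
walk = [(i, 0) for i in range(50)]             # straight initial walk
for _ in range(500):
    walk = pivot_once(walk)
```

Each accepted move changes a macroscopic fraction of the walk, which is why the pivot algorithm decorrelates global observables so quickly compared with local-move Monte Carlo.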
Shockley, Keith R.
2016-01-01
High-throughput in vitro screening experiments can be used to generate concentration-response data for large chemical libraries. It is often desirable to estimate the concentration needed to achieve a particular effect, or potency, for each chemical tested in an assay. Potency estimates can be used to directly compare chemical profiles and prioritize compounds for confirmation studies, or employed as input data for prediction modeling and association mapping. The concentration for half-maximal activity derived from the Hill equation model (i.e., AC50) is the most common potency measure applied in pharmacological research and toxicity testing. However, the AC50 parameter is subject to large uncertainty for many concentration-response relationships. In this study we introduce a new measure of potency based on a weighted Shannon entropy measure termed the weighted entropy score (WES). Our potency estimator (Point of Departure, PODWES) is defined as the concentration producing the maximum rate of change in weighted entropy along a concentration-response profile. This approach provides a new tool for potency estimation that does not depend on the assumption of monotonicity or any other pre-specified concentration-response relationship. PODWES estimates potency with greater precision and less bias compared to the conventional AC50 assessed across a range of simulated conditions. PMID:27302286
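For context, the conventional AC50 that PODWES is benchmarked against comes from fitting the Hill equation to concentration-response data. The sketch below uses synthetic data and a brute-force least-squares fit (real assay pipelines use proper nonlinear optimizers); it is not an implementation of the paper's weighted entropy score:

```python
import numpy as np

def hill(conc, top, ac50, n):
    """Hill equation: response rises from 0 to `top` with midpoint AC50."""
    return top * conc**n / (ac50**n + conc**n)

# synthetic 12-point concentration-response curve with noise
conc = np.logspace(-3, 2, 12)
rng = np.random.default_rng(42)
resp = hill(conc, 100.0, 0.5, 1.2) + rng.normal(0, 2, conc.size)

# brute-force least squares over (AC50, n); `top` solved analytically
best = (np.inf, None, None, None)
for ac50 in np.logspace(-2, 1, 120):
    for n in np.linspace(0.5, 3.0, 60):
        shape = conc**n / (ac50**n + conc**n)
        top = float(shape @ resp / (shape @ shape))   # least-squares amplitude
        sse = float(np.sum((resp - top * shape) ** 2))
        if sse < best[0]:
            best = (sse, top, ac50, n)
sse, top_hat, ac50_hat, n_hat = best
```

With a well-sampled monotone curve like this, AC50 is recovered cleanly; the paper's point is that for shallow, noisy, or non-monotone profiles the AC50 fit becomes unstable, motivating an entropy-based potency measure.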
Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E
2014-03-01
Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data around concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r² = 0.90-0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for
Lee, Sunghee; Satter, Delight E; Ponce, Ninez A
2009-01-01
Racial classification is a paramount concern in data collection and analysis for American Indians and Alaska Natives (AI/ANs) and has far-reaching implications in health research. We examine how different racial classifications affect survey weights and consequently change health-related indicators for the AI/AN population in California. Using a very large random population-based sample of AI/ANs, we compared the impact of three weighting strategies on counts and rates of selected health indicators. We found that different weights examined in this study did not change the percentage estimates of health-related variables for AI/ANs, but did influence the population total estimates dramatically. In survey data, different racial classifications and tabulations of AI/ANs could yield discrepancies in weighted estimates for the AI/AN population. Policy makers need to be aware that the choice of racial classification schemes for this racial-political group can generally influence the data they use for decision making. PMID:20052630
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let Sn[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of convergence of Sn[f] in weighted Lp space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-x²/2) is the Hermite weight function, then we obtain sufficient conditions for the corresponding weighted inequalities to hold, where k = 0, 1, 2, ..., r.
Landwehr, J.M.; Matalas, N.C.; Wallis, J.R.
1979-01-01
Results were derived from Monte Carlo experiments by using both independent and serially correlated Gumbel numbers. The method of probability weighted moments was seen to compare favourably with two other techniques.
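For the Gumbel distribution studied here, the probability weighted moments method reduces to closed-form expressions in the first two sample PWMs. The following is a sketch of those standard results, checked against synthetic data:

```python
import math
import random

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_pwm(sample):
    """PWM fit of Gumbel(location xi, scale alpha).

    For a sorted sample x(1) <= ... <= x(n):
      b0 = sample mean,  b1 = (1/n) * sum_j ((j-1)/(n-1)) * x(j),
    and the PWM estimators are
      alpha = (2*b1 - b0) / ln 2,   xi = b0 - EULER_GAMMA * alpha.
    """
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(j * xj for j, xj in enumerate(x)) / (n * (n - 1))
    alpha = (2 * b1 - b0) / math.log(2)
    xi = b0 - EULER_GAMMA * alpha
    return xi, alpha

# check on synthetic Gumbel(xi=10, alpha=3) data via inverse-CDF sampling
random.seed(0)
data = [10 - 3 * math.log(-math.log(random.random())) for _ in range(20000)]
xi_hat, alpha_hat = gumbel_pwm(data)
```

Because the PWM estimators are linear in the order statistics, they are less sensitive to extreme sample values than the method of moments, which is one reason they compared favourably in the Monte Carlo experiments.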
Using machine vision to estimate bird weight in the poultry industry
NASA Astrophysics Data System (ADS)
Lotufo, Roberto A.; Taube-Netto, Miguel; Conejo, Eduardo; Hoyos, Francisco J. d.
1999-03-01
This work describes a real-time continuous broiler weighing system based on machine vision, used for size sort planning in a process plant. We demonstrate that this technology can be used successfully as an alternative to classical electromechanical carcass weighing systems. A digitized silhouette image of the carcass hung by its feet is divided into six regions: the legs, the wings, the neck and the breast. Once the parts are separated, their individual areas are measured and used in a polynomial equation that predicts the overall bird weight. A sample of 1400 birds was collected, labeled and weighed on a precision scale, on different days and at different hours. We found an error standard deviation of 78 grams for broilers weighing from 750 to 2100 grams. The morphological image processing algorithms proved robust in extracting the individual parts of the carcass.
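The prediction step can be sketched as a least-squares fit of weight against the measured region areas. The region set, units, and coefficients below are synthetic stand-ins; the real system used a polynomial calibrated on its 1400-bird sample:

```python
import numpy as np

# synthetic calibration data: 4 region areas per bird (hypothetical
# "legs, wings, neck, breast" features) and a known linear ground truth
rng = np.random.default_rng(7)
n = 400
areas = rng.uniform(50, 200, size=(n, 4))       # cm^2, synthetic
true_coef = np.array([2.0, 1.5, 0.8, 4.0])      # hypothetical grams per cm^2
weight = areas @ true_coef + 100.0 + rng.normal(0, 20, n)  # grams, scale noise

# fit intercept + linear terms by ordinary least squares
X = np.column_stack([np.ones(n), areas])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
pred = X @ coef
residual_sd = np.std(weight - pred)             # analogous to the 78 g figure
```

The residual standard deviation of such a fit is the direct analogue of the 78 g error figure reported above, so it is the natural quantity to track when recalibrating the model.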
Her, Namguk; Amy, Gary; Foss, David; Chow, Jaeweon
2002-08-01
High performance size exclusion chromatography (HPSEC) with ultraviolet absorbance (UVA) detection has been widely utilized to estimate the molecular weight (MW) and MW distribution of natural organic matter (NOM). However, the estimation of MW with UVA detection is inherently inaccurate because UVA at 254 nm only detects limited components (mostly pi bonded molecules) of NOM, and the molar absorptivity of these different NOM constituents is not equal. In comparison, a SEC chromatogram obtained with a DOC detector showed significant differences compared to a corresponding UVA chromatogram, resulting in different MW values as well as different estimates of polydispersivity. The MWs of Suwannee River humic acid (SRHA), Suwannee River fulvic acid (SRFA), and various mixtures thereof were estimated with HPSEC coupled with UVA and DOC detectors. The results show that UVA is not an adequate detector for quantitative analysis of MW estimation but rather can be used only for limited qualitative analysis. The NOM in several natural waters (Irvine Ranch, California groundwater, and Barr Lake, Colorado surface water) were also characterized to demonstrate the different MWs obtained with the two detectors. The results of the SEC-DOC chromatograms revealed NOM constituent peaks that went undetected by UVA. Utilizing online DOC detection, a better representation of NOM MWs was suggested, with NOM displaying higher weight-averaged MW (Mw) and lower number-averaged MW (Mn) as well as higher polydispersivity. A method for estimation of the MWs of NOM fractional components and polydispersivities is presented. PMID:12188370
Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo
2016-01-01
Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255
Schuh, V; Sír, J; Galliová, J; Svandová, E
1966-01-01
A comparison of the weight and photometric methods of primary assay of BCG vaccine has been made, using a vaccine prepared in albumin-free medium but containing Tween 80. In the weight method, the bacteria were trapped on a membrane filter; for photometry a Pulfrich Elpho photometer and an instrument of Czech origin were used. The photometric results were the more precise, provided that the measurements were made within two days of completion of growth; after this time the optical density of the suspension began to decrease slowly. The lack of precision of the weighing method is probably due to the small weight of culture deposit (which was almost on the limit of accuracy of the analytical balance) and to difficulties in the manipulation of the ultrafilter. PMID:5335458
Williamson, Nathan H; Nydén, Magnus; Röding, Magnus
2016-06-01
We present comprehensive derivations for the statistical models and methods for the use of pulsed gradient spin echo (PGSE) NMR to characterize the molecular weight distribution of polymers via the well-known scaling law relating diffusion coefficients and molecular weights. We cover the lognormal and gamma distribution models and linear combinations of these distributions. Although the focus is on methodology, we illustrate the use experimentally with three polystyrene samples, comparing the NMR results to gel permeation chromatography (GPC) measurements, test the accuracy and noise-sensitivity on simulated data, and provide code for implementation. PMID:27116223
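The lognormal model described above can be sketched numerically: PGSE-measured diffusion coefficients map to molecular weights through the scaling law D = K * M^(-nu), and for a lognormal MW distribution with log-standard-deviation s the polydispersity index is exp(s²). The K and nu values below are placeholders, not calibrated polystyrene constants:

```python
import numpy as np

K, NU = 1.0e-9, 0.55   # placeholder scaling-law constants (assumption)

def mw_from_diffusion(d):
    """Invert the scaling law D = K * M**(-NU) to recover M."""
    return (K / np.asarray(d)) ** (1.0 / NU)

# lognormal molecular weight distribution: ln M ~ Normal(mu, s^2)
rng = np.random.default_rng(3)
mu, s = np.log(1.0e5), 0.4
m = rng.lognormal(mu, s, 200000)

d = K * m ** (-NU)                 # forward scaling law: MW -> diffusion
m_back = mw_from_diffusion(d)      # inversion recovers the MWs exactly

mn = m.mean()                      # number-averaged MW
mw = np.mean(m**2) / m.mean()      # weight-averaged MW = E[M^2]/E[M]
pdi = mw / mn                      # -> exp(s^2), about 1.17 here
```

Since ln M is an affine function of ln D under the scaling law, a lognormal distribution of diffusion coefficients implies a lognormal MW distribution, which is what makes this model family convenient for comparison against GPC.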
Magis, David
2015-03-01
Warm (in Psychometrika, 54, 427-450, 1989) established the equivalence between the so-called Jeffreys modal and the weighted likelihood estimators of proficiency level with some dichotomous item response models. The purpose of this note is to extend this result to polytomous item response models. First, a general condition is derived to ensure the perfect equivalence between these two estimators. Second, it is shown that this condition is fulfilled by two broad classes of polytomous models including, among others, the partial credit, rating scale, graded response, and nominal response models. PMID:24282130
Kamphuis, Claudia; Burke, Jennie K; Taukiri, Sarah; Petch, Susan-Fay; Turner, Sally-Anne
2016-08-01
Dairy cows grazing pasture and milked using automated milking systems (AMS) have lower milking frequencies than indoor-fed cows milked using AMS. Therefore, milk recording intervals used for herd testing indoor-fed cows may not be suitable for cows on pasture-based farms. We hypothesised that accurate standardised 24 h estimates could be determined for AMS herds with milk recording intervals shorter than the Gold Standard (48 h), but that the optimum milk recording interval would depend on the herd average for milking frequency. The Gold Standard protocol was applied on five commercial dairy farms with AMS between December 2011 and February 2013. From 12 milk recording test periods, involving 2211 cow-test days and 8049 cow milkings, standardised 24 h estimates for milk volume and milk composition were calculated for the Gold Standard protocol and compared with those collected during nine alternative sampling scenarios, including six shorter sampling periods and three in which a fixed number of milk samples per cow were collected. The results suggest that a 48 h milk recording protocol is unnecessarily long for collecting accurate estimates during milk recording on pasture-based AMS farms. Collection of only two milk samples per cow was optimal in terms of high concordance correlation coefficients for milk volume and components and a low proportion of missed cow-test days. Further research is required to determine the effects of diurnal variation in milk composition on standardised 24 h estimates for milk volume and components before a protocol based on a fixed number of samples could be considered. Based on the results of this study, New Zealand has adopted a split protocol for herd testing based on the average milking frequency of the herd (NZ Herd Test Standard 8100:2015). PMID:27600967
Estimation of breed-specific heterosis effects for birth, weaning, and yearling weight in cattle
Technology Transfer Automated Retrieval System (TEKTRAN)
Heterosis, assumed proportional to expected breed heterozygosity, was calculated for 6,834 individuals with birth, weaning and yearling weight records from Cycle VII and advanced generations of the U.S. Meat Animal Research Center (USMARC) Germplasm Evaluation (GPE) project. Breeds represented in t...
Parent-reported height and weight as sources of bias in survey estimates of childhood obesity.
Weden, Margaret M; Brownell, Peter B; Rendall, Michael S; Lau, Christopher; Fernandes, Meenakshi; Nazarov, Zafar
2013-08-01
Parental reporting of height and weight was evaluated for US children aged 2-13 years. The prevalence of obesity (defined as a body mass index value (calculated as weight (kg)/height (m)²) in the 95th percentile or higher) and its height and weight components were compared in child supplements of 2 nationally representative surveys: the 1996-2008 Children of the National Longitudinal Survey of Youth 1979 Cohort (NLSY79-Child) and the 1997 Child Development Supplement of the Panel Study of Income Dynamics (PSID-CDS). Sociodemographic differences in parent reporting error were analyzed. Error was largest for children aged 2-5 years. Underreporting of height, not overreporting of weight, generated a strong upward bias in obesity prevalence at those ages. Frequencies of parent-reported heights below the Centers for Disease Control and Prevention's (Atlanta, Georgia) first percentile were implausibly high at 16.5% (95% confidence interval (CI): 14.3, 19.0) in the NLSY79-Child and 20.6% (95% CI: 16.0, 26.3) in the PSID-CDS. They were highest among low-income children at 33.2% (95% CI: 22.4, 46.1) in the PSID-CDS and 26.2% (95% CI: 20.2, 33.2) in the NLSY79-Child. Bias in the reporting of obesity decreased with children's age and reversed direction at ages 12-13 years. Underreporting of weight increased with age, and underreporting of height decreased with age. We recommend caution to researchers who use parent-reported heights, especially for very young children, and offer practical solutions for survey data collection and research on child obesity. PMID:23785115
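The BMI definition quoted above translates directly into code; the numbers below are hypothetical and only illustrate the mechanism the abstract describes, namely that under-reported height inflates BMI quadratically:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m**2

# A child of 20 kg whose true height of 1.10 m is reported 5 cm short
# moves from a BMI of about 16.5 to about 18.1, purely from the height error.
print(bmi(20.0, 1.10), bmi(20.0, 1.05))
```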
NASA Astrophysics Data System (ADS)
Forkert, Nils Daniel; Kaesemann, Philipp; Fiehler, Jens; Thomalla, Götz
2012-03-01
Acute stroke is a major cause of death and disability among adults in the western hemisphere. Time-resolved perfusion-weighted (PWI) and diffusion-weighted (DWI) MR datasets are typically used for the estimation of tissue-at-risk, which is an important variable for acute stroke therapy decision-making. Although several parameters that can be estimated from PWI concentration curves have been proposed for tissue-at-risk definition in the past, the time-to-peak (TTP) or time-to-max (Tmax) parameter is used most frequently in recent trials. Unfortunately, there is no clear consensus on which method should be used for estimation of Tmax or TTP maps. Consequently, tissue-at-risk estimation and the ensuing treatment decision might vary considerably with the method used. In this work, 5 PWI datasets of acute stroke patients were used to calculate TTP or Tmax maps using 10 different estimation techniques. The resulting maps were segmented using a typical threshold of +4 s and the corresponding PWI lesions were calculated. The first results suggest that the TTP or Tmax method used has a major impact on the resulting tissue-at-risk volume. Numerically, the calculated volumes differed by up to a factor of 3. In general, the deconvolution-based Tmax techniques estimate the ischemic penumbra to be smaller than the direct TTP-based techniques do. In conclusion, the comparison of different methods for TTP or Tmax estimation revealed high variation in the resulting tissue-at-risk volume, which might lead to different therapy decisions. Therefore, a consensus on how TTP or Tmax maps should be calculated seems necessary.
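A minimal sketch of direct TTP estimation and the +4 s thresholding mentioned above, on synthetic concentration curves. The grid size and the use of the map median as the reference value are assumptions for illustration; trials typically reference the contralateral hemisphere instead:

```python
import numpy as np

# Toy PWI dataset: concentration-time curves on a 4x4 grid, 40 time points
# sampled every 1 s (all values are illustrative).
t = np.arange(40.0)                       # acquisition times in seconds
rng = np.random.default_rng(1)
peak_times = rng.uniform(8, 20, size=(4, 4))
curves = np.exp(-0.5 * ((t - peak_times[..., None]) / 3.0) ** 2)

# Direct TTP estimation: time of the maximum of each concentration curve
ttp = t[np.argmax(curves, axis=-1)]

# Segment the "lesion" with the typical +4 s threshold relative to a
# reference value (here the median of the map, as a stand-in).
lesion = ttp > (np.median(ttp) + 4.0)
print(ttp.shape, int(lesion.sum()))
```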
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.
2015-01-01
A structural concept called pultruded rod stitched efficient unitized structure (PRSEUS) was developed by the Boeing Company to address the complex structural design aspects associated with a pressurized hybrid wing body (HWB) aircraft configuration. While PRSEUS was an enabling technology for the pressurized HWB structure, limited investigation of PRSEUS for other aircraft structures, such as circular fuselages and wings, has been done. Therefore, a study was undertaken to investigate the potential weight savings afforded by using the PRSEUS concept for a commercial transport wing. The study applied PRSEUS to the Advanced Subsonic Technology (AST) Program composite semi-span test article, which was sized using three load cases. The initial PRSEUS design was developed by matching cross-sectional stiffnesses for each stringer/skin combination within the wing covers, then the design was modified to ensure that the PRSEUS design satisfied the design criteria. It was found that the PRSEUS wing design exhibited weight savings over the blade-stiffened composite AST Program wing of nearly 9%, and a weight savings of 49% and 29% for the lower and upper covers, respectively, compared to an equivalent metallic wing.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
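A small simulation of the multiplicative-error setting discussed above: weighted least squares with weights proportional to 1/z² (using the observed values as a proxy for the unknown truth), together with an estimate of the variance of unit weight from the weighted residuals. The model and noise level are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 1, n)
A = np.column_stack([np.ones(n), x])      # design matrix for z = a + b*x
truth = A @ np.array([100.0, 50.0])       # true heights
sigma = 0.01                              # relative error level (1%)
z = truth * (1.0 + sigma * rng.standard_normal(n))   # multiplicative errors

# Weighted LS: error variance is proportional to the squared signal,
# so weights ~ 1/z^2.
w = 1.0 / z**2
W = np.diag(w)
beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ z)

# Estimate of the variance of unit weight from the weighted residuals;
# it should recover sigma^2 = 1e-4.
v = z - A @ beta
sigma0_sq = (v * w * v).sum() / (n - 2)
print(beta, sigma0_sq)
```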
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
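The weighting logic described above can be sketched with a toy cohort in which the censoring probability is known exactly; in practice it would be estimated from the measured common predictors (e.g., by logistic regression), which is where the assumptions discussed in the abstract can fail:

```python
import numpy as np

# Toy cohort: a binary common predictor x of censoring and outcome, and an
# artificial-censoring indicator (1 = censored, e.g. exposed to intervention).
rng = np.random.default_rng(3)
n = 1000
x = rng.binomial(1, 0.5, n)
p_cens = np.where(x == 1, 0.4, 0.1)            # true censoring probability
censored = rng.binomial(1, p_cens).astype(bool)

# Inverse probability-of-censoring weights for the uncensored:
# w = 1 / P(uncensored | x). Here P is known; in practice it is estimated.
w = np.zeros(n)
w[~censored] = 1.0 / (1.0 - p_cens[~censored])

# The weighted mean of x among the uncensored recovers the full-cohort mean,
# undoing the selection induced by the artificial censoring.
naive = x[~censored].mean()
weighted = np.average(x[~censored], weights=w[~censored])
print(naive, weighted)
```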
Hyperspectral imaging based biomass and nitrogen content estimations from light-weight UAV
NASA Astrophysics Data System (ADS)
Pölönen, I.; Saari, H.; Kaivosoja, J.; Honkavaara, E.; Pesonen, L.
2013-10-01
Hyperspectral imaging based precise fertilization is a challenge in northern Europe because of cloud conditions. In this paper we introduce schemes for biomass and nitrogen content estimation from hyperspectral images. In this research we used a Fabry-Perot interferometer based hyperspectral imager that enables hyperspectral imaging from lightweight UAVs. During the summers of 2011 and 2012, imaging and flight campaigns were carried out on a Finnish test field. The estimation method uses features from linear and non-linear unmixing and vegetation indices. The results showed that the concept of a small hyperspectral imager, a UAV and data analysis is ready for operational use.
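As a concrete instance of the vegetation-index features mentioned above, NDVI can be computed from the red and near-infrared bands of a hyperspectral cube. The band centres and cube dimensions below are illustrative, not the actual Fabry-Perot imager configuration:

```python
import numpy as np

# Toy hyperspectral cube (rows x cols x bands) with illustrative band centres
wavelengths = np.array([550.0, 670.0, 710.0, 780.0])   # nm
rng = np.random.default_rng(4)
cube = rng.uniform(0.05, 0.6, size=(5, 5, wavelengths.size))

# NDVI = (NIR - red) / (NIR + red), using the bands nearest 780 and 670 nm
red = cube[..., np.argmin(np.abs(wavelengths - 670.0))]
nir = cube[..., np.argmin(np.abs(wavelengths - 780.0))]
ndvi = (nir - red) / (nir + red)
print(ndvi.shape)
```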
NASA Astrophysics Data System (ADS)
Rhode, Katherine L.; Osiensky, James L.; Miller, Stanley M.
2007-12-01
Summary: Most previous investigations to evaluate the "effective" or average transmissivity in heterogeneous environments have used calculations based on areas to weight the effects of each heterogeneity. Analysis of spatial volumetric variations within the cone of depression expressed at the potentiometric surface offers a more general solution to evaluate the meaning of transmissivity (T) and storativity (S) values derived from aquifer tests in these environments. The method of [Cooper Jr., H.H., Jacob, C.E., 1946. A generalized graphical method for evaluating formation constants and summarizing well-field history. Eos Trans. AGU 27(4), 526-534] is used to demonstrate that T variations reflected in slope changes in plots of the pumping well drawdown data correspond to changes in the volumetric weighted mean transmissivity (VWMT) over time. Lognormal distributions of transmissivity are represented by block heterogeneities within two simulated aquifers, for both spatially random and spatially correlated data sets. By analyzing the volumetric evolution of the cone of depression observed in the potentiometric surface, the nature of T averaging within the cone of depression as a function of time is illustrated. Volumetric analysis shows that the average aquifer T varies with time as the cone of depression progressively envelops different heterogeneities. The initial trend is controlled primarily by the heterogeneities directly surrounding the pumping center. If steady-shape conditions are not achieved, late-time VWMT values do not approach an asymptotic limit.
Guo, Penghong; Rivera, Daniel E.; Downs, Danielle S.; Savage, Jennifer S.
2016-01-01
Excessive gestational weight gain (i.e., weight gain during pregnancy) is a significant public health concern, and has been the recent focus of novel, control systems-based interventions. This paper develops a control-oriented dynamical systems model based on a first-principles energy balance model from the literature, which is evaluated against participant data from a study targeted to obese and overweight pregnant women. The results indicate significant under-reporting of energy intake among the participant population. A series of approaches based on system identification and state estimation are developed in the paper to better understand and characterize the extent of under-reporting; these range from back-calculating energy intake from a closed-form of the energy balance model, to a constrained semi-physical identification approach that estimates the extent of systematic under-reporting in the presence of noise and possibly missing data. Additionally, we describe an adaptive algorithm based on Kalman filtering to estimate energy intake in real-time. The approaches are illustrated with data from both simulated and actual intervention participants. PMID:27570366
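In the spirit of the Kalman-filtering estimator mentioned above, here is a minimal sketch that treats daily energy intake as a hidden state driving weight change through a first-principles energy balance. The tissue energy density, expenditure, noise levels and convergence horizon are all assumptions for illustration, not the paper's model:

```python
import numpy as np

# State = [body weight (kg), daily energy intake (kcal/d)];
# weight dynamics: dW = (intake - expenditure) / rho per day.
rho = 7700.0            # assumed kcal per kg of tissue
expenditure = 2400.0    # assumed known daily expenditure (kcal/d)
dt = 1.0                # one-day steps

F = np.array([[1.0, dt / rho], [0.0, 1.0]])    # state transition
H = np.array([[1.0, 0.0]])                      # only weight is observed
Q = np.diag([1e-4, 10.0**2])                    # process noise (intake drifts)
R = np.array([[0.25]])                          # scale noise variance (kg^2)

rng = np.random.default_rng(9)
true_intake = 2700.0                            # constant true intake
w = 70.0                                        # true weight
x = np.array([70.0, 2400.0])                    # initial state guess
P = np.diag([1.0, 500.0**2])
estimates = []
for _ in range(200):
    w += dt * (true_intake - expenditure) / rho           # true weight update
    z = w + rng.normal(0, 0.5)                            # noisy scale reading
    # Predict (offset the transition by the known expenditure term)
    x = F @ x + np.array([-dt * expenditure / rho, 0.0])
    P = F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[1])
print(estimates[-1])    # should approach the true intake of 2700 kcal/d
```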
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from a complex biological medium.
van Donkelaar, Aaron; Martin, Randall V; Spurr, Robert J D; Burnett, Richard T
2015-09-01
We used a geographically weighted regression (GWR) statistical model to represent bias of fine particulate matter concentrations (PM2.5) derived from a 1 km optimal estimate (OE) aerosol optical depth (AOD) satellite retrieval that used AOD-to-PM2.5 relationships from a chemical transport model (CTM) for 2004-2008 over North America. This hybrid approach combined the geophysical understanding and global applicability intrinsic to the CTM relationships with the knowledge provided by observational constraints. Adjusting the OE PM2.5 estimates according to the GWR-predicted bias yielded significant improvement compared with unadjusted long-term mean values (R² = 0.82 versus R² = 0.62), even when a large fraction (70%) of sites were withheld for cross-validation (R² = 0.78), and developed seasonal skill (R² = 0.62-0.89). The effect of individual GWR predictors on OE PM2.5 estimates additionally provided insight into the sources of uncertainty for global satellite-derived PM2.5 estimates. These predictor-driven effects imply that local variability in surface elevation and urban emissions are important sources of uncertainty in geophysical calculations of the AOD-to-PM2.5 relationship used in satellite-derived PM2.5 estimates over North America, and potentially worldwide. PMID:26261937
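The adjust-then-validate pattern described above can be sketched on synthetic data. The GWR prediction is replaced here by a noisy stand-in, since the point is only the mechanics of subtracting a predicted bias and checking R² against monitors:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
truth = rng.uniform(2, 25, n)                  # monitor PM2.5 (ug/m^3)
bias = 0.2 * truth - 1.0                       # spatially varying bias (toy)
oe = truth + bias + rng.normal(0, 1.5, n)      # satellite OE estimate

predicted_bias = bias + rng.normal(0, 0.5, n)  # stand-in for the GWR output
adjusted = oe - predicted_bias                 # bias-adjusted PM2.5

def r2(y, yhat):
    """Coefficient of determination of yhat as a predictor of y."""
    ss_res = ((y - yhat) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

print(r2(truth, oe), r2(truth, adjusted))      # adjustment should raise R^2
```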
Object size can influence perceived weight independent of visual estimates of the volume of material
Plaisier, Myrthe A.; Smeets, Jeroen B.J.
2015-01-01
The size-weight illusion is the phenomenon that the smaller of two equally heavy objects is perceived to be heavier than the larger object when lifted. One explanation for this illusion is that heaviness perception is influenced by our expectations, and larger objects are expected to be heavier than smaller ones because they contain more material. If this would be the entire explanation, the illusion should disappear if we make objects larger while keeping the volume of visible material the same (i.e. objects with visible holes). Here we tested this prediction. Our results show that perceived heaviness decreased with object size regardless of whether objects visibly contained the same volume of material or not. This indicates that object size can influence perceived heaviness, even when it can be seen that differently sized objects contain the same volume of material. PMID:26626051
Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.
Tvedebrink, Torben; Morling, Niels
2015-12-01
The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios, where the alternative hypothesis is assumed not to encompass close relatives. However, this approach implies that important factors present in real human populations are discarded. This approach may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original as several authors previously have published results accounting for close familial relationships. However, we revisit the discussion to increase the awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. If accounting for a monozygotic twin in the weight of evidence, it implies that the likelihood ratio is truncated at a maximal value depending on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, then data relevant for the Danish society suggests that the threshold of likelihood ratios should approximately be between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept
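The likelihood-ratio truncation described above can be stated directly in code: with probability p that an unrecognised monozygotic twin shares the full profile, the LR is effectively capped at 1/p. The prevalence and recognition numbers below are illustrative placeholders, not the Danish figures from the paper:

```python
def capped_lr(lr: float, p_mz_twin: float, p_unrecognised: float) -> float:
    """Truncate an LR at the bound implied by an unrecognised MZ twin."""
    p = p_mz_twin * p_unrecognised      # prob. of an unrecognised MZ twin
    return min(lr, 1.0 / p)

# A random-match-probability LR of 1e12 collapses to the twin-driven cap of
# 1/(0.003 * 0.1) ~ 3333; a modest LR of 1e3 passes through unchanged.
print(capped_lr(1e12, 0.003, 0.1), capped_lr(1e3, 0.003, 0.1))
```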
Thorup, V M; Edwards, D; Friggens, N C
2012-04-01
Precise energy balance estimates for individual cows are of great importance to monitor health, reproduction, and feed management. Energy balance is usually calculated as energy input minus output (EB(inout)), requiring measurements of feed intake and energy output sources (milk, maintenance, activity, growth, and pregnancy). Except for milk yield, direct measurements of the other sources are difficult to obtain in practice, and estimates contain considerable error sources, limiting on-farm use. Alternatively, energy balance can be estimated from body reserve changes (EB(body)) using body weight (BW) and body condition score (BCS). Automated weighing systems exist and new technology performing semi-automated body condition scoring has emerged, so frequent automated BW and BCS measurements are feasible. We present a method to derive individual EB(body) estimates from frequently measured BW and BCS and evaluate the performance of the estimated EB(body) against the traditional EB(inout) method. From 76 Danish Holstein and Jersey cows, parity 1 or 2+, on a glycerol-rich or a whole grain-rich total mixed ration, BW was measured automatically at each milking. The BW was corrected for the weight of milk produced and for gutfill. Changes in BW and BCS were used to calculate changes in body protein, body lipid, and EB(body) during the first 150 d in milk. The EB(body) was compared with the traditional EB(inout) by isolating the term within EB(inout) associated with most uncertainty; that is, feed energy content (FEC); FEC=(EB(body)+EMilk+EMaintenance+Eactivity)/dry matter intake, where the energy requirements are for milk produced (EMilk), maintenance (EMaintenance), and activity (EActivity). Estimated FEC agreed well with FEC values derived from tables (the mean estimate was 0.21 MJ of effective energy/kg of dry matter or 2.2% higher than the mean table value). Further, the FEC profile did not suggest systematic bias in EB(body) with stage of lactation. The EB
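The feed-energy-content identity quoted above translates directly into code; the numeric values are illustrative, in MJ of effective energy and kg of dry matter per day:

```python
def feed_energy_content(eb_body, e_milk, e_maintenance, e_activity, dmi):
    """FEC = (EB_body + E_milk + E_maintenance + E_activity) / DMI,
    as defined in the abstract above."""
    return (eb_body + e_milk + e_maintenance + e_activity) / dmi

# A cow mobilising 10 MJ/d of body reserves while producing 90 MJ/d in milk:
print(feed_energy_content(-10.0, 90.0, 35.0, 5.0, 20.0))  # -> 6.0 MJ/kg DM
```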
Deshpande, Amol A; Madhavan, P; Deshpande, Girish R; Chandel, Ravi Kumar; Yarbagi, Kaviraj M; Joshi, Alok R; Moses Babu, J; Murali Krishna, R; Rao, I M
2016-01-01
Fondaparinux sodium is a synthetic low-molecular-weight heparin (LMWH). This medication is an anticoagulant or a blood thinner, prescribed for the treatment of pulmonary embolism and prevention and treatment of deep vein thrombosis. Its determination in the presence of related impurities was studied and validated by a novel ion-pair HPLC method. The separation of the drug and its degradation products was achieved with the polymer-based PLRPs column (250 mm × 4.6 mm; 5 μm) in gradient elution mode. The mixture of 100 mM n-hexylamine and 100 mM acetic acid in water was used as buffer solution. Mobile phase A and mobile phase B were prepared by mixing the buffer and acetonitrile in the ratio of 90:10 (v/v) and 20:80 (v/v), respectively. Mobile phases were delivered in isocratic mode (2% B for 0-5 min) followed by gradient mode (2-85% B in 5-60 min). An Evaporative Light Scattering Detector (ELSD) was connected to the LC system to detect the responses of chromatographic separation. Further, the drug was subjected to stress studies for acidic, basic, oxidative, photolytic, and thermal degradations as per ICH guidelines and the drug was found to be labile in acid, base hydrolysis, and oxidation, while stable in neutral, thermal, and photolytic degradation conditions. The method provided linear responses over the concentration range of the LOQ to 0.30% for each impurity with respect to the analyte concentration of 12.5 mg/mL, and regression analysis showed a correlation coefficient value (r²) of more than 0.99 for all the impurities. The LOD and LOQ were found to be 1.4 µg/mL and 4.1 µg/mL, respectively, for fondaparinux. The developed ion-pair method was validated as per ICH guidelines with respect to accuracy, selectivity, precision, linearity, and robustness. PMID:27110496
Estimates of genetic parameters for visual scores and daily weight gain in Brangus animals.
Queiroz, S A; Oliveira, J A; Costa, G Z; Fries, L A
2011-05-01
(Co)variance components were estimated for visual scores of conformation (CY), early finishing (PY) and muscling (MY) at 550 days of age (yearling), average daily gain from weaning to yearling (GWY), conformation (CW), early finishing (PW) and muscling (MW) scores at weaning, and average daily gain from birth to weaning (GBW) in animals forming the Brazilian Brangus breed born between 1986 and 2002, from the livestock files of GenSys Consultants Associados S/C Ltda. The data set contained 53 683; 45 136; 52 937; 56 471; 24 531; 21 166; 24 006 and 25 419 records for CW, PW, MW, GBW, CY, PY, MY and GWY, respectively. Data were analyzed by the restricted maximum likelihood method using single- and two-trait animal models. Direct heritability estimates obtained by single-trait analysis were 0.12, 0.14, 0.13 and 0.14 for CY, PY and MY scores and GWY, respectively. A positive association was observed between the same visual scores at weaning and yearling, with correlations ranging from 0.64 to 0.94. Estimated correlations between GBW and weaning and yearling scores ranged from 0.60 to 0.77. The genetic correlation between GBW and GWY was low (0.10), whereas correlations of 0.55, 0.37 and 0.47 were observed between GWY and CY, PY and MY, respectively. Moreover, GWY showed a weak correlation with CW (0.10), PW (-0.08) and MW (-0.03) scores. These results indicate that selection on the traits studied would result in a small response. In addition, selection based on average daily gain may have an indirect effect on visual scores, as the correlations between GWY and the yearling visual scores were moderate. PMID:22440022
Iterative weighted risk estimation for nonlinear image restoration with analysis priors
NASA Astrophysics Data System (ADS)
Ramani, Sathish; Rosen, Jeffrey; Liu, Zhihao; Fessler, Jeffrey A.
2012-03-01
Image acquisition systems invariably introduce blur, which necessitates the use of deblurring algorithms for image restoration. Restoration techniques involving regularization require appropriate selection of the regularization parameter that controls the quality of the restored result. We focus on the problem of automatic adjustment of this parameter for nonlinear image restoration using analysis-type regularizers such as total variation (TV). For this purpose, we use two variants of Stein's unbiased risk estimate (SURE), Predicted-SURE and Projected-SURE, that are applicable for parameter selection in inverse problems involving Gaussian noise. These estimates require the Jacobian matrix of the restoration algorithm evaluated with respect to the data. We derive analytical expressions to recursively update the desired Jacobian matrix for a fast variant of the iterative reweighted least-squares restoration algorithm that can accommodate a variety of regularization criteria. Our method can also be used to compute a nonlinear version of the generalized cross-validation (NGCV) measure for parameter tuning. We demonstrate using simulations that Predicted-SURE, Projected-SURE, and NGCV-based adjustment of the regularization parameter yields near-MSE-optimal results for image restoration using TV, an analysis-type ℓ1 regularization, and a smooth convex edge-preserving regularizer.
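SURE-based parameter selection as described above can be illustrated on the simplest analysis-type problem where SURE has a closed form: soft thresholding of noisy coefficients (the Donoho-Johnstone SureShrink setting). The paper's Predicted/Projected-SURE generalize this same idea to iterative restoration; the signal below is invented for the demonstration:

```python
import numpy as np

# y ~ N(theta, sigma^2 I) with a sparse true signal theta
rng = np.random.default_rng(6)
n, sigma = 5000, 1.0
theta = np.zeros(n)
theta[:200] = 5.0
y = theta + sigma * rng.standard_normal(n)

def sure_soft(y, t, sigma=1.0):
    """Stein's unbiased estimate of the risk of soft-thresholding at t:
    -n*sigma^2 + sum(min(|y|, t)^2) + 2*sigma^2 * #{|y| > t}."""
    return (y.size * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(y) <= t)
            + np.sum(np.minimum(np.abs(y), t) ** 2))

ts = np.linspace(0.0, 4.0, 81)
risks = np.array([sure_soft(y, t, sigma) for t in ts])
t_sure = ts[np.argmin(risks)]            # data-driven threshold

# Compare with the (normally unknown) true-MSE-optimal threshold
mses = np.array([np.sum((np.sign(y) * np.maximum(np.abs(y) - t, 0) - theta)**2)
                 for t in ts])
t_mse = ts[np.argmin(mses)]
print(t_sure, t_mse)                     # the two should nearly coincide
```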
Doubly-robust dynamic treatment regimen estimation via weighted least squares.
Wallace, Michael P; Moodie, Erica E M
2015-09-01
Personalized medicine is a rapidly expanding area of health research wherein patient level information is used to inform their treatment. Dynamic treatment regimens (DTRs) are a means of formalizing the sequence of treatment decisions that characterize personalized management plans. Identifying the DTR which optimizes expected patient outcome is of obvious interest and numerous methods have been proposed for this purpose. We present a new approach which builds on two established methods: Q-learning and G-estimation, offering the doubly robust property of the latter but with ease of implementation much more akin to the former. We outline the underlying theory, provide simulation studies that demonstrate the double-robustness and efficiency properties of our approach, and illustrate its use on data from the Promotion of Breastfeeding Intervention Trial. PMID:25854539
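A single-stage sketch of the weighted-least-squares idea behind the approach above: regress the outcome on treatment-by-covariate "blip" terms with weights w = |A − π(x)|, the weight choice that confers double robustness. The data-generating model is invented for illustration, with a deliberately misspecified treatment-free component but a correct propensity:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.uniform(0, 2, n)
p_a = 1 / (1 + np.exp(-(x - 1)))              # true propensity P(A=1|x)
a = rng.binomial(1, p_a)
# Outcome: treatment-free part x + x^2 plus blip a*(psi0 + psi1*x),
# with true psi = (1, -1)
y = x + x**2 + a * (1.0 - 1.0 * x) + rng.normal(0, 1, n)

w = np.abs(a - p_a)                           # dWOLS-style weights
# Treatment-free model misspecified (linear in x only, x^2 omitted):
X = np.column_stack([np.ones(n), x, a, a * x])
WX = X * w[:, None]
beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
psi0, psi1 = beta[2], beta[3]
print(psi0, psi1)      # should recover roughly (1, -1) despite misspecification
```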
Ngo, Van A; Kim, Ilsoo; Allen, Toby W; Noskov, Sergei Y
2016-03-01
Nonequilibrium pulling simulations have been a useful approach for investigating a variety of physical and biological problems. The major target in the simulations is to reconstruct reliable potentials of mean force (PMFs) or unperturbed free-energy profiles for quantitatively addressing both equilibrium mechanistic properties and contributions from nonequilibrium processes. While several current nonequilibrium methods were shown to be accurate in computing free-energy profiles in systems with relatively simple dynamics, they have proved to be unsuitable in more complicated systems. To extend the applicability of nonequilibrium sampling, we demonstrate a novel method that combines Minh-Adib's bidirectional estimator with nonlinear WHAM equations to reconstruct and assess PMFs from relatively fast pulling trajectories. We test the method in a one-dimensional model system and in a system of an antibiotic gramicidin-A (gA) channel, which is considered a significant challenge for nonequilibrium sampling. We identify key parameters for efficiently performing pulling simulations to improve and ensure the convergence and accuracy of estimated PMFs. We show that a few pulling trajectories of a relatively fast pulling speed v = 10 Å/ns can return a fair estimate of the PMF of a single potassium ion in gA. PMID:26799775
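As a toy check of the unidirectional Jarzynski estimator that bidirectional schemes such as Minh-Adib's build on, one can use Gaussian work distributions, for which Crooks' theorem fixes the free-energy difference analytically (ΔF = ⟨W⟩ − βσ²/2). All values are in reduced units and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
beta = 1.0                       # 1/kT in reduced units
dF_true, s = 2.0, 1.5            # target free-energy difference, work std
mu_f = dF_true + beta * s**2 / 2 # Gaussian forward-work mean implied by Crooks
W = rng.normal(mu_f, s, 20_000)  # forward nonequilibrium work samples

# Jarzynski equality: exp(-beta*dF) = <exp(-beta*W)>;
# log-sum-exp keeps the exponential average numerically stable.
logmean = np.logaddexp.reduce(-beta * W) - np.log(W.size)
dF_jar = -logmean / beta
print(dF_jar)                    # should be close to dF_true = 2.0
```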
Zhang, Lifan; Zhou, Xiang; Michal, Jennifer J.; Ding, Bo; Li, Rui; Jiang, Zhihua
2014-01-01
Birth weight is an economically important trait in pig production because it directly impacts piglet growth and survival rate. In the present study, we performed a genome-wide survey of candidate genes and pathways associated with individual birth weight (IBW) using the Illumina PorcineSNP60 BeadChip on 24 high (HEBV) and 24 low estimated breeding value (LEBV) animals. These animals were selected from a reference population of 522 individuals produced by three sires and six dam lines, which were crossbreds with multiple breeds. After quality control, 43,257 SNPs (single nucleotide polymorphisms), including 42,243 autosomal SNPs and 1,014 SNPs on chromosome X, were used in the data analysis. A total of 27 differentially selected regions (DSRs), including 1 on Sus scrofa chromosome 1 (SSC1), 1 on SSC4, 2 on SSC5, 4 on SSC6, 2 on SSC7, 5 on SSC8, 3 on SSC9, 1 on SSC14, 3 on SSC18, and 5 on SSCX, were identified to show genome-wide separations between the HEBV and LEBV groups for IBW in piglets. The DSR with the largest number of significant SNPs (including 7 top 0.1% and 31 top 5% SNPs) was located on SSC6, while the DSR with the largest genetic differences in FST was found on SSC18. These regions harbor known functionally important genes involved in growth and development, such as TNFRSF9 (tumor necrosis factor receptor superfamily member 9), CA6 (carbonic anhydrase VI) and MDFIC (MyoD family inhibitor domain containing). A DSR rich in imprinted genes appeared on SSC9, which included PEG10 (paternally expressed 10), SGCE (sarcoglycan, epsilon), PPP1R9A (protein phosphatase 1, regulatory subunit 9A) and ASB4 (ankyrin repeat and SOCS box containing 4). More importantly, our present study provided evidence to support six quantitative trait loci (QTL) regions for pig birth weight, six QTL regions for average birth weight (ABW) and three QTL regions for litter birth weight (LBW) reported previously by other groups. Furthermore, gene ontology analysis with 183 genes
Callaghan, L C; Walker, J D
2015-02-01
The risk of accidental over-dosing of obese children poses challenges to anaesthetists during dose calculations for drugs with serious side-effects, such as analgesics. For many drugs, dosing scalars such as ideal body weight and lean body mass are recommended instead of total body weight during weight-based dose calculations. However, the complex current methods of obtaining these dosing scalars are impractical in the peri-operative setting. Arbitrary dose adjustments and guesswork are, unfortunately, tempting solutions for the time-pressured anaesthetist. The study's aim was to develop and validate an accurate, convenient alternative. A nomogram was created and its performance compared with the standard calculation method by volunteers using measurements from 108 obese children. The nomogram was as accurate (bias 0.12 kg vs -0.41 kg, respectively, p = 0.4), faster (mean (SD) time taken 2.8 (1.0) min vs 3.3 (0.9) min, respectively; p = 0.003) and less likely to result in mistakes (significant errors 3% vs 19%, respectively, p = 0.001). We present a system that simplifies estimation of ideal body weight and lean body mass in obese children, providing foundations for safer drug dose calculation. PMID:25289986
Estimation of the weighted CTDI∞ for multislice CT examinations
Li Xinhua; Zhang Da; Liu, Bob
2012-02-15
Purpose: The aim of this study was to examine the variations of CT dose index (CTDI) efficiencies, ε(CTDI100) = CTDI100/CTDI∞, with bowtie filters and CT scanner types. Methods: This was an extension of our previous study [Li, Zhang, and Liu, Phys. Med. Biol. 56, 5789-5803 (2011)]. A validated Monte Carlo program was used to calculate ε(CTDI100) on a Siemens Somatom Definition scanner. The ε(CTDI100) dependencies on tube voltages and beam widths were tested in previous studies. The influences of different bowtie filters and CT scanner types were examined in this work. The authors tested the variations of ε(CTDI100) with bowtie filters on the Siemens Definition scanner. The authors also analyzed the published CTDI measurements of four independent studies on five scanners of four models from three manufacturers. Results: On the Siemens Definition scanner, the difference in ε(CTDIw) between using the head and body bowtie filters was 2.5% (maximum) in the CT scans of the 32-cm phantom, and 1.7% (maximum) in the CT scans of the 16-cm phantom. Compared with CTDIw, the weighted CTDI∞ increased by 30.5% (on average) in the 32-cm phantom, and by 20.0% (on average) in the 16-cm phantom. These results were approximately the same for 80-140 kV and 1-40 mm beam widths (4.2% maximum deviation). The differences in ε(CTDI100) between the simulations and the direct measurements of four previous studies were 1.3%-5.0% at the center/periphery of the 16-cm/32-cm phantom (on average). Conclusions: Compared with CTDIvol, the equilibrium dose for large scan lengths is 30.5% higher in the 32-cm phantom, and is 20.0% higher in the 16-cm phantom. The relative increases are practically independent of tube voltages (80-140 kV), beam widths (up to 4 cm), and the CT scanners covered in this study.
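As a back-of-the-envelope illustration of the reported conversion (a sketch using only the average increases quoted above, not a dosimetry calculation):

```python
# Scale a measured CTDIvol to the large-scan-length equilibrium dose using
# the average increases reported above: +30.5% for the 32-cm body phantom
# and +20.0% for the 16-cm head phantom.
def equilibrium_dose(ctdi_vol_mgy: float, phantom: str = "body") -> float:
    factor = {"body": 1.305, "head": 1.200}[phantom]
    return ctdi_vol_mgy * factor

body_dose = equilibrium_dose(10.0, "body")   # ~13.05 mGy
head_dose = equilibrium_dose(10.0, "head")   # ~12.0 mGy
```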
ERIC Educational Resources Information Center
Mozumdar, Arupendra; Liguori, Gary
2016-01-01
Purpose: Estimating obesity prevalence using self-reported height and weight is an economical and effective method and is often used in national surveys. However, self-reported height and weight can be misreported, and the misreporting has been found to be associated with the size of the individual. This study investigated the biases in…
Walton, K. W.; Scott, P. J.
1964-01-01
Studies have been made of the factors affecting the specificity of the interaction between high molecular weight dextran sulphate and low-density lipoproteins, both in pure solution and in serum. The results have been used in the development of a simple assay method for the serum concentration of low-density lipoproteins in small volumes of serum. The results obtained by this assay procedure have been found to correlate acceptably with parallel estimations of low-density lipoproteins by an ultracentrifugal technique and by paper electrophoresis. The technique has been applied to a survey of serum levels of these proteins in a normal population. The results have been compared with data in the literature. Satisfactory agreement was found between mean levels, matched for age and sex, between the dextran sulphate method and those methods based ultimately on chemical estimation of one or more components of the isolated lipoproteins. A systematic difference was observed when the dextran sulphate method was compared with estimates based on analytical ultracentrifugation or turbidimetry using amylopectin sulphate. Some indication of the range of application of the dextran sulphate method in clinical chemistry is provided. PMID:14227432
Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing
B. Olinger
2005-07-01
Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
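The underlying Archimedes relation can be sketched as follows (a generic hydrostatic-weighing formula with air-buoyancy correction; the density values and weighings below are illustrative round numbers, not data from the report):

```python
# Density from weighings in air (m_air) and in water (m_water), in grams:
#   rho = m_air * (rho_water - rho_air) / (m_air - m_water) + rho_air
# rho_water and rho_air are illustrative values near room temperature.
def hydrostatic_density(m_air_g, m_water_g, rho_water=0.9982, rho_air=0.0012):
    """Sample density in g/cm^3 via hydrostatic weighing."""
    return m_air_g * (rho_water - rho_air) / (m_air_g - m_water_g) + rho_air

rho = hydrostatic_density(10.000, 4.500)   # ~1.814 g/cm^3 for this toy input
```

In practice, as the abstract notes, the water density must be taken at the measured bath temperature and the balance readings corrected for thermal equilibration.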
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1990-01-01
A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
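The weight-estimation step can be sketched in a few lines on synthetic data (correlating each element's samples against a known signal replica stands in for one window of the sliding-window ML estimator described above):

```python
import numpy as np

rng = np.random.default_rng(3)
n_elem, n_samp = 7, 4000                      # seven-element feed array
g = rng.normal(size=n_elem) + 1j * rng.normal(size=n_elem)   # element gains
s = np.exp(2j * np.pi * 0.01 * np.arange(n_samp))            # unit-power tone
noise = (rng.normal(size=(n_elem, n_samp))
         + 1j * rng.normal(size=(n_elem, n_samp))) / np.sqrt(2)
r = np.outer(g, s) + noise                    # received array samples

# Estimate combining weights from the observed samples by correlating
# against the signal replica (one block of a sliding-window estimator)
w = (r @ s.conj()) / n_samp                   # ~ g

# Output SNR of the weighted sum versus the ideal matched-weight SNR
snr = np.abs(np.vdot(w, g)) ** 2 / np.vdot(w, w).real
snr_ideal = np.vdot(g, g).real
loss_db = 10 * np.log10(snr_ideal / snr)      # combining loss, dB
```

With a few thousand samples per window the combining loss is a small fraction of a dB, consistent with the ~0.1 dB figure quoted above.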
Salazar, Alejandro; Ojeda, Begoña; Dueñas, María; Fernández, Fernando; Failde, Inmaculada
2016-08-30
Missing data are a common problem in clinical and epidemiological research, especially in longitudinal studies. Despite many methodological advances in recent decades, many papers on clinical trials and epidemiological studies do not report using principled statistical methods to accommodate missing data or use ineffective or inappropriate techniques. Two refined techniques are presented here: generalized estimating equations (GEEs) and weighted generalized estimating equations (WGEEs). These techniques are an extension of generalized linear models to longitudinal or clustered data, where observations are no longer independent. They can appropriately handle missing data when the missingness is completely at random (GEE and WGEE) or at random (WGEE) and do not require the outcome to be normally distributed. Our aim is to describe and illustrate with a real example, in a simple and accessible way to researchers, these techniques for handling missing data in the context of longitudinal studies subject to dropout and show how to implement them in R. We apply them to assess the evolution of health-related quality of life in coronary patients in a data set subject to dropout. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27059703
Austin, Peter C; Stuart, Elizabeth A
2015-12-10
The propensity score is defined as a subject's probability of treatment selection, conditional on observed baseline covariates. Weighting subjects by the inverse probability of treatment received creates a synthetic sample in which treatment assignment is independent of measured baseline covariates. Inverse probability of treatment weighting (IPTW) using the propensity score allows one to obtain unbiased estimates of average treatment effects. However, these estimates are only valid if there are no residual systematic differences in observed baseline characteristics between treated and control subjects in the sample weighted by the estimated inverse probability of treatment. We report on a systematic literature review, in which we found that the use of IPTW has increased rapidly in recent years, but that in the most recent year, a majority of studies did not formally examine whether weighting balanced measured covariates between treatment groups. We then proceed to describe a suite of quantitative and qualitative methods that allow one to assess whether measured baseline covariates are balanced between treatment groups in the weighted sample. The quantitative methods use the weighted standardized difference to compare means, prevalences, higher-order moments, and interactions. The qualitative methods employ graphical methods to compare the distribution of continuous baseline covariates between treated and control subjects in the weighted sample. Finally, we illustrate the application of these methods in an empirical case study. We propose a formal set of balance diagnostics that contribute towards an evolving concept of 'best practice' when using IPTW to estimate causal treatment effects using observational data. PMID:26238958
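The weighted standardized difference used in these balance diagnostics can be sketched as follows (synthetic data; for brevity the true propensity score stands in for an estimated one):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                    # a measured baseline covariate
e = 1.0 / (1.0 + np.exp(-0.8 * x))        # propensity score (true, for brevity)
z = rng.binomial(1, e)                    # treatment depends on x -> confounding
w = np.where(z == 1, 1.0 / e, 1.0 / (1.0 - e))   # IPTW weights

def wmean(v, wt):
    return np.sum(wt * v) / np.sum(wt)

def wvar(v, wt):
    m = wmean(v, wt)
    return np.sum(wt * (v - m) ** 2) / np.sum(wt)

def weighted_std_diff(v, grp, wt):
    mt, mc = wmean(v[grp == 1], wt[grp == 1]), wmean(v[grp == 0], wt[grp == 0])
    st, sc = wvar(v[grp == 1], wt[grp == 1]), wvar(v[grp == 0], wt[grp == 0])
    return (mt - mc) / np.sqrt((st + sc) / 2.0)

d_raw = weighted_std_diff(x, z, np.ones(n))   # imbalance before weighting
d_iptw = weighted_std_diff(x, z, w)           # should be near zero
```

A weighted standardized difference near zero in the weighted sample is the quantitative check the review found most studies omitted.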
NASA Astrophysics Data System (ADS)
Du, Lin; Shi, Shuo; Gong, Wei; Yang, Jian; Sun, Jia; Mao, Feiyue
2016-06-01
Hyperspectral LiDAR (HSL) is a novel tool in the field of active remote sensing, which has been widely used in many domains because of its advantageous ability to acquire spectral information. Especially in the precise monitoring of nitrogen in green plants, HSL plays an indispensable role. The existing HSL system used for nitrogen status monitoring has a multi-channel detector, which can improve the spectral resolution and receiving range, but may result in data redundancy, difficulty in system integration and high cost as well. Thus, it is necessary and urgent to pick out the nitrogen-sensitive feature wavelengths among the spectral range. The present study, aiming at solving this problem, assigns a feature weighting to each centre wavelength of the HSL system by using matrix coefficient analysis and a divergence threshold. The feature weighting is a criterion for amending the centre wavelengths of the detector to accommodate different purposes, especially the estimation of leaf nitrogen content (LNC) in rice. In this way, the wavelengths highly correlated with LNC can be ranked in descending order and used to estimate rice LNC sequentially. In this paper, an HSL system based on a wide-spectrum emitter and a 32-channel detector was used to collect the reflectance spectra of rice leaves. The spectra collected by HSL cover a range of 538 nm - 910 nm with a resolution of 12 nm, and these 32 wavelengths are strongly absorbed by chlorophyll in green plants. The relationship between rice LNC and the reflectance-based spectra is modeled using partial least squares (PLS) and support vector machines (SVMs) based on calibration and validation datasets, respectively. The results indicate that I) the wavelength selection method of HSL based on feature weighting is effective in choosing the nitrogen-sensitive wavelengths and can be readily accommodated by the HSL system hardware; II) the chosen wavelengths have a high correlation with rice LNC which can be
Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente
2012-11-30
An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to provide, simultaneously, accurate information about the wine content of 20 relevant aroma compounds and about their relative transfer rates to the headspace, and hence about the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample were introduced in a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of multiple extractions makes it possible to obtain a highly reliable estimate of the total amount of volatile compound present in the sample and a second parameter, β, which is simply the proportion of volatile not transferred to the trap in one extraction cycle, but which seems to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample has revealed the existence of significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long chain fatty acid ethyl esters. It is expected that these differences, likely linked to sulphur dioxide and some unknown specific compositional aspects of the wine matrix, can be responsible for relevant sensory changes, and may even be the cause explaining why the same aroma composition can produce different aroma perceptions in two different wines. PMID:23102525
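The MHE estimate described above follows from a geometric series: if successive peak areas decay as A_i = A_1·β^(i-1), the total amount (in area units) is A_1/(1-β). A minimal sketch with hypothetical peak areas:

```python
import numpy as np

# Multiple headspace extraction: successive peak areas decay geometrically,
# so beta (the fraction NOT transferred to the trap in one cycle) is obtained
# from the slope of ln(A_i) versus cycle index, and the total analyte amount
# is the geometric sum A_1 / (1 - beta).
areas = np.array([1000.0, 620.0, 384.4, 238.3])   # hypothetical 4-cycle areas
idx = np.arange(len(areas))
slope, intercept = np.polyfit(idx, np.log(areas), 1)
beta = np.exp(slope)                 # ~0.62 for this toy series
total = np.exp(intercept) / (1.0 - beta)
```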
Bradley, Paul M; Journey, Celeste A; Brigham, Mark E; Burns, Douglas A; Button, Daniel T; Riva-Murray, Karen
2013-01-01
To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered. PMID:22982552
Hossain, Ahmed; Beyene, Joseph
2013-12-01
MicroRNAs (miRNAs) are short non-coding RNAs that play critical roles in numerous cellular processes through post-transcriptional functions. The aberrant role of miRNAs has been reported in a number of diseases. A robust computational method is vital to discover novel miRNAs where level of noise varies dramatically across the different miRNAs. In this paper, we propose a flexible rank-based procedure for estimating a weighted log partial area under the receiver operating characteristic (ROC) curve statistic for selecting differentially expressed miRNAs. The statistic combines results taking partial area under the curve (pAUC) and their corresponding variance. The proposed method does not involve complicated formulas and does not require advanced programming skills. Two real datasets are analyzed to illustrate the method and a simulation study is carried out to assess the performance of different miRNA ranking statistics. We conclude that the proposed method offers robust results with large samples for miRNA expression data, and the method can be used as an alternative analytical tool for identifying a list of target miRNAs for further biological and clinical investigation. PMID:24246291
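The core ingredient, the partial area under the ROC curve, can be sketched as follows (a plain rank-based pAUC on toy scores; the paper's variance weighting and log transform are not reproduced):

```python
import numpy as np

def partial_auc(pos, neg, fpr_max=0.2):
    """Rank-based partial AUC over false-positive rates in [0, fpr_max]."""
    scores = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    labels = labels[np.argsort(-scores)]          # sweep threshold downwards
    tpr = np.concatenate([[0.0], np.cumsum(labels) / len(pos)])
    fpr = np.concatenate([[0.0], np.cumsum(1.0 - labels) / len(neg)])
    keep = fpr <= fpr_max
    f, t = fpr[keep], tpr[keep]
    if f[-1] < fpr_max:                           # extend curve to exactly fpr_max
        f = np.append(f, fpr_max)
        t = np.append(t, np.interp(fpr_max, fpr, tpr))
    return 0.5 * np.sum((f[1:] - f[:-1]) * (t[1:] + t[:-1]))  # trapezoid rule

# Perfect separation attains the maximum pAUC of fpr_max (here 0.2)
pauc_perfect = partial_auc(np.array([3.0, 4.0, 5.0]), np.array([0.0, 1.0, 2.0]))
```

In the paper's setting one such pAUC is computed per miRNA and then combined with its variance into the ranking statistic.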
Chittawatanarat, Kaweesak; Pruenglampoo, Sakda; Trakulhoon, Vibul; Ungpinitpong, Winai; Patumanond, Jayanton
2012-01-01
Background Many medical procedures routinely use body weight as a parameter for calculation. However, these measurements are not always available. In addition, the commonly used visual estimation has had high error rates. Therefore, the aim of this study was to develop a predictive equation for body weight using body circumferences. Methods A prospective study was performed in healthy volunteers. Body weight, height, and eight circumferential level parameters including neck, arm, chest, waist, umbilical level, hip, thigh, and calf were recorded. Linear regression equations were developed in a modeling sample group divided by sex and age (younger <60 years and older ≥60 years). Original regression equations were modified to simple equations by coefficient and intercept adjustment. These equations were tested in an independent validation sample. Results A total of 2000 volunteers were included in this study. These were randomly separated into two groups (1000 in each of the modeling and validation groups). Equations using height and one covariate circumference were developed. After the covariate selection processes, the covariate circumferences of chest, waist, umbilical level, and hip were selected for single covariate equations (Sco). To reduce the body somatotype difference, combination covariate circumferences were created by summation of the chest and one torso circumference (waist, umbilical level, or hip) and used in the development of combination covariate equations (Cco). Of these equations, Cco had significantly higher 10% threshold error tolerance compared with Sco (mean percentage error tolerance of Cco versus Sco [95% confidence interval; 95% CI]: 76.9 [74.2–79.6] versus 70.3 [68.4–72.3]; P < 0.01, respectively). Although simple covariate equations had more evident errors than the original covariate equations, there was comparable error tolerance between the types of equations (original versus simple: 74.5 [71.9–77.1] versus 71.7 [69.2
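The equation-development step (height plus a combination circumference, fitted by linear regression) can be sketched with synthetic data (all coefficients and measurements below are invented for illustration, not the paper's equations):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
height = rng.normal(165.0, 8.0, n)    # cm (synthetic stand-in measurements)
chest = rng.normal(95.0, 10.0, n)     # cm
waist = rng.normal(85.0, 12.0, n)     # cm
combo = chest + waist                 # "combination covariate" circumference

# Invented ground truth for body weight (kg) plus measurement noise
weight = 0.45 * height + 0.35 * combo - 60.0 + rng.normal(0.0, 3.0, n)

# Fit weight ~ height + combo by ordinary least squares
X = np.column_stack([np.ones(n), height, combo])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
pred = X @ coef
# Fraction of predictions within the 10% error-tolerance threshold
within_10pct = np.mean(np.abs(pred - weight) / weight <= 0.10)
```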
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially
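The Weibull form referred to above can be written down directly; a minimal sketch (parameter values are illustrative only):

```python
import numpy as np

def weibull_cdf(t, k, lam):
    """Cumulative age distribution F(t) = 1 - exp(-(t/lam)**k)."""
    return 1.0 - np.exp(-(t / lam) ** k)

t = np.linspace(0.0, 100.0, 201)       # travel time, years
expo = weibull_cdf(t, 1.0, 20.0)       # k = 1 recovers the exponential model
steep = weibull_cdf(t, 2.0, 20.0)      # k > 1 steepens the slope of the curve

# lam sets the timescale: the median age is lam * ln(2)**(1/k)
median_age = 20.0 * np.log(2) ** (1 / 2.0)
```

The extra shape parameter k is what lets the fit deviate from the exponential curve of the simple shallow-aquifer model.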
NASA Astrophysics Data System (ADS)
Rebello, N. Sanjay
2012-02-01
Research has shown students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each exam to estimate their individual score and the class mean score. Students received extra credit worth 1% of the exam points for estimating their own score within 2% of the actual score, and another 1% of extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.
Thompson, J A; Schweitzer, L E; Nelson, R L
1996-07-01
Increasing specific leaf weight (SLW) may improve leaf apparent photosynthesis (AP) in soybean [Glycine max (L.) Merr.] but screening for SLW and AP is laborious. The objectives of this study were (i) to determine the time course of SLW and chlorophyll concentration in experimental lines selected for differences in SLW and (ii) to evaluate the potential use of the Minolta 502 SPAD meter as a rapid estimator of SLW, AP and chlorophyll concentration in leaves of soybean. In 1991 and 1992, sixteen experimental lines representing extremes in SLW were grown at Urbana, IL, and West Lafayette, IN, with three replications at each location. SPAD values, SLW and AP were measured at the R2 (full flower), R4 (full pod) and R5 (beginning seed) growth stages. In 1992 SLW, SPAD values and chlorophyll concentration were measured weekly. Seasonal patterns of SPAD values, SLW, and chlorophyll concentration were very similar through R5. After R5, SLW continued to increase but SPAD values and chlorophyll concentration declined. SPAD values and SLW were highly correlated at the R2, R4 and R5 stages at both locations and in both years. Environmental conditions during this research were not suitable for maximum AP expression, which is likely why AP and SPAD values were correlated only at the R4 growth stage at Urbana in 1992. SPAD measurements were consistent across diverse environments and effectively separated the high SLW lines from the low SLW lines. Measuring with the Minolta 502 SPAD meter is rapid, simple and non-destructive and could be an alternative method for direct selection for SLW. PMID:24271528
Jensen, Bente R; Hovgaard-Hansen, Line; Cappelen, Katrine L
2016-08-01
Running on a lower-body positive-pressure (LBPP) treadmill allows effects of weight support on leg muscle activation to be assessed systematically, and has the potential to facilitate rehabilitation and prevent overloading. The aim was to study the effect of running with weight support on leg muscle activation and to estimate relative knee and ankle joint forces. Runners performed 6-min running sessions at 2.22 m/s and 3.33 m/s, at 100%, 80%, 60%, 40%, and 20% body weight (BW). Surface electromyography, ground reaction force, and running characteristics were measured. Relative knee and ankle joint forces were estimated. Leg muscles responded differently to unweighting during running, reflecting different relative contribution to propulsion and antigravity forces. At 20% BW, knee extensor EMGpeak decreased to 22% at 2.22 m/s and 28% at 3.33 m/s of 100% BW values. Plantar flexors decreased to 52% and 58% at 20% BW, while activity of biceps femoris muscle remained unchanged. Unweighting with LBPP reduced estimated joint force significantly although less than proportional to the degree of weight support (ankle). It was concluded that leg muscle activation adapted to the new biomechanical environment, and the effect of unweighting on estimated knee force was more pronounced than on ankle force. PMID:26957520
Technology Transfer Automated Retrieval System (TEKTRAN)
Phosphorus sorption data for soil of the Pembroke classification are recorded at high replication — 10 experiments at each of 7 initial concentrations — for characterizing the data error structure through variance function estimation. The results permit the assignment of reliable weights for the su...
ERIC Educational Resources Information Center
Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua
2012-01-01
For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned with larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…
Coplen, T.B.; Peiser, H.S.
1998-01-01
International commissions and national committees for atomic weights (mean relative atomic masses) have recommended regularly updated, best values for these atomic weights as applicable to terrestrial sources of the chemical elements. Presented here is a historically complete listing starting with the values in F. W. Clarke's 1882 recalculation, followed by the recommended values in the annual reports of the American Chemical Society's Atomic Weights Commission. From 1903, an International Commission published such reports and its values (scaled to an atomic weight of 16 for oxygen) are here used in preference to those of national committees of Britain, Germany, Spain, Switzerland, and the U.S.A. We have, however, made scaling adjustments from Ar(16O) to Ar(12C) where not negligible. From 1920, this International Commission constituted itself under the International Union of Pure and Applied Chemistry (IUPAC). Since then, IUPAC has published reports (mostly biennially) listing the recommended atomic weights, which are reproduced here. Since 1979, these values have been called the "standard atomic weights" and, since 1969, all values have been published, with their estimated uncertainties. Few of the earlier values were published with uncertainties. Nevertheless, we assessed such uncertainties on the basis of our understanding of the likely contemporary judgement of the values' reliability. While neglecting remaining uncertainties of 1997 values, we derive "differences" and a retrospective index of reliability of atomic-weight values in relation to assessments of uncertainties at the time of their publication. A striking improvement in reliability appears to have been achieved since the commissions have imposed upon themselves the rule of recording estimated uncertainties from all recognized sources of error.
Padula, Amy M; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B
2012-11-01
Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000-2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants. PMID:23045474
NASA Technical Reports Server (NTRS)
MacConochie, Ian O.; White, Nancy H.; Mills, Janelle C.
2004-01-01
A program entitled Weights, Areas, and Mass Properties (WAMI) is centered around an array of menus containing constants that can be used in various mass estimating relationships, for the ultimate purpose of obtaining the mass properties of Earth-to-Orbit Transports. Current Shuttle mass property data were relied upon heavily for baseline equation constant values, from which other options were derived.
Technology Transfer Automated Retrieval System (TEKTRAN)
The efficacy of live animal, real-time, B-mode ultrasound (US) estimates of carcass traits as (partial) predictors of carcass composition warrants investigation in sheep of varying genetic and environmental backgrounds. Our objectives were to 1) evaluate US estimates of corresponding carcass measure...
ERIC Educational Resources Information Center
Lee, Sunghee; Satter, Delight E.; Ponce, Ninez A.
2009-01-01
Racial classification is a paramount concern in data collection and analysis for American Indians and Alaska Natives (AI/ANs) and has far-reaching implications in health research. We examine how different racial classifications affect survey weights and consequently change health-related indicators for the AI/AN population in California. Using a…
Technology Transfer Automated Retrieval System (TEKTRAN)
Birth weight (BWT) and calving difficulty (CD) were recorded on 4,579 first parity females from the Germplasm Evaluation (GPE) program at the U.S. Meat Animal Research Center (USMARC). Both traits were analyzed using a bivariate animal model with direct and maternal effects. Calving difficulty was...
Reported maternal education is an important predictor of pregnancy outcomes. Like income, it is believed to allow women to locate in more favorable conditions than less educated or affluent peers. We examine the effect of reported educational attainment on term birth weight (birt...
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this study was to investigate the phenotypic relationship between common health disorders in dairy cows and lactation persistency, uncorrelated with 305 d yield. The relationships with peak yield and days in milk (DIM) at peak were also studied. Daily milk weights and treatment inc...
Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M
2016-08-01
Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS: values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, confirming the anticipated better performance of the pp-LFER model over COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. PMID:27179237
Validity of Mothers' Reports of Children's Weight in Japan.
Nosaka, Nobuyuki; Fujiwara, Takeo; Knaup, Emily; Okada, Ayumi; Tsukahara, Hirokazu
2016-08-01
Estimation methods for pediatric weight have not been evaluated for Japanese children. This study aimed to assess the accuracy of mothers' reports of their children's weight in Japan. We also evaluated potential alternatives to the estimation of weight, including the Broselow tape (BT), Advanced Pediatric Life Support (APLS), and Park's formulae. We prospectively collected cross-sectional data on a convenience sample of 237 children aged less than 10 years who presented to a general pediatric outpatient clinic with their mothers. Each weight estimation method was evaluated using Bland-Altman plots and by calculating the proportion within 10% and 20% of the measured weight. Mothers' reports of weight were the most accurate method, with 94.9% within 10% of the measured weight, the lowest mean difference (0.27 kg), and the shortest 95% limits of agreement (-1.4 to 1.9 kg). The BT was the most reliable alternative, followed by the APLS and Park's formulae. Mothers' reports of their children's weight are more accurate than other weight estimation methods. When no report of a child's weight by the mother is available, BT is the best alternative. When an age-based formula is the only option, the APLS formula is preferred. PMID:27549669
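The age-based APLS rule referenced above is commonly published as weight (kg) = 2 × (age in years + 4); the abstract does not restate it, so the exact form used here is an assumption. A minimal sketch:

```python
def apls_weight_kg(age_years: float) -> float:
    """APLS age-based weight estimate, commonly published as
    weight (kg) = 2 * (age in years + 4), for roughly ages 1-10.
    (Assumed form; not restated in the abstract above.)"""
    return 2.0 * (age_years + 4.0)

# a 6-year-old would be estimated at 20 kg
estimate = apls_weight_kg(6)
```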
Ríos-Utrera, A; Cundiff, L V; Gregory, K E; Koch, R M; Dikeman, M E; Koohmaraie, M; Van Vleck, L D
2006-01-01
The influence of different levels of adjusted fat thickness (AFT) and HCW slaughter end points (covariates) on estimates of breed and retained heterosis effects was studied for 14 carcass traits from serially slaughtered purebred and composite steers from the US Meat Animal Research Center (MARC). Contrasts among breed solutions were estimated at 0.7, 1.1, and 1.5 cm of AFT, and at 295.1, 340.5, and 385.9 kg of HCW. For constant slaughter age, contrasts were adjusted to the overall mean (432.5 d). Breed effects for Red Poll, Hereford, Limousin, Braunvieh, Pinzgauer, Gelbvieh, Simmental, Charolais, MARC I, MARC II, and MARC III were estimated as deviations from Angus. In addition, purebreds were pooled into 3 groups based on lean-to-fat ratio, and then differences were estimated among groups. Retention of combined individual and maternal heterosis was estimated for each composite. Mean retained heterosis for the 3 composites also was estimated. Breed rankings and expression of heterosis varied within and among end points. For example, Charolais had greater (P < 0.05) dressing percentages than Angus at the 2 largest levels of AFT and smaller (P < 0.01) percentages at the 2 largest levels of HCW, whereas the 2 breeds did not differ (P > or = 0.05) at a constant age. The MARC III composite produced 9.7 kg more (P < 0.01) fat than Angus at AFT of 0.7 cm, but 7.9 kg less (P < 0.05) at AFT of 1.5 cm. For MARC III, the estimate of retained heterosis for HCW was significant (P < 0.05) at the lowest level of AFT, but at the intermediate and greatest levels estimates were nil. The pattern was the same for MARC I and MARC III for LM area. Adjustment for age resulted in near zero estimates of retained heterosis for AFT, and similarly, adjustment for HCW resulted in nil estimates of retained heterosis for LM area. For actual retail product as a percentage of HCW, the estimate of retained heterosis for MARC III was negative (-1.27%; P < 0.05) at 0.7 cm but was significantly
NASA Astrophysics Data System (ADS)
Zouch, Wassim; Slima, Mohamed Ben; Feki, Imed; Derambure, Philippe; Taleb-Ahmed, Abdelmalik; Hamida, Ahmed Ben
2010-12-01
A new nonparametric method, based on the smooth weighted-minimum-norm (WMN) focal underdetermined-system solver (FOCUSS), for electrical cerebral activity localization using electroencephalography measurements is proposed. This method iteratively adjusts the spatial sources by reducing the size of the lead-field and the weighting matrix. Thus, an enhancement of source localization is obtained, as well as a reduction of the computational complexity. The performance of the proposed method, in terms of localization errors, robustness, and computation time, is compared with the WMN-FOCUSS and nonshrinking smooth WMN-FOCUSS methods as well as with standard generalized inverse methods (unweighted minimum norm, WMN, and FOCUSS). Simulation results for single-source localization confirm the effectiveness and robustness of the proposed method with respect to the reconstruction accuracy of a simulated single dipole.
Minimum variance beamformer weights revisited.
Moiseev, Alexander; Doesburg, Sam M; Grunau, Ruth E; Ribary, Urs
2015-10-15
Adaptive minimum variance beamformers are widely used analysis tools in MEG and EEG. When the target brain activity presents in the form of spatially localized responses, the procedure usually involves two steps. First, positions and orientations of the sources of interest are determined. Second, the filter weights are calculated and source time courses reconstructed. This last step is the object of the current study. Despite different approaches utilized at the source localization stage, basic expressions for the weights have the same form, dictated by the minimum variance condition. These classic expressions involve covariance matrix of the measured field, which includes contributions from both the sources of interest and the noise background. We show analytically that the same weights can alternatively be obtained, if the full field covariance is replaced with that of the noise, provided the beamformer points to the true sources precisely. In practice, however, a certain mismatch is always inevitable. We show that such mismatch results in partial suppression of the true sources if the traditional weights are used. To avoid this effect, the "alternative" weights based on properly estimated noise covariance should be applied at the second, source time course reconstruction step. We demonstrate mathematically and using simulated and real data that in many situations the alternative weights provide significantly better time course reconstruction quality than the traditional ones. In particular, they a) improve source-level SNR and yield more accurately reconstructed waveforms; b) provide more accurate estimates of inter-source correlations; and c) reduce the adverse influence of the source correlations on the performance of single-source beamformers, which are used most often. Importantly, the alternative weights come at no additional computational cost, as the structure of the expressions remains the same. PMID:26143207
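The two weight variants compared above (full-field covariance vs. noise covariance) can be sketched for a single source. This is a generic LCMV illustration under simulated data, not the authors' implementation; the leadfield and covariances below are hypothetical:

```python
import numpy as np

def lcmv_weights(cov, leadfield):
    """Minimum variance (LCMV) weights for one source topography.
    Returns weights w with unit gain: w @ leadfield == 1."""
    ci_h = np.linalg.solve(cov, leadfield)   # C^{-1} h
    return ci_h / (leadfield @ ci_h)         # normalize for unit gain

rng = np.random.default_rng(0)
h = rng.normal(size=8)                       # hypothetical leadfield
noise = rng.normal(size=(500, 8))
C_noise = noise.T @ noise / 500              # noise covariance
data = noise + rng.normal(size=(500, 1)) * h # add a source to the noise
C_full = data.T @ data / 500                 # full data covariance

w_traditional = lcmv_weights(C_full, h)      # classic weights
w_alternative = lcmv_weights(C_noise, h)     # noise-covariance weights
# both satisfy the unit-gain constraint on the pointed-at source
assert np.isclose(w_traditional @ h, 1.0)
assert np.isclose(w_alternative @ h, 1.0)
```

The structure of the expression is identical in both cases, which is why the alternative weights come at no extra computational cost.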
NASA Astrophysics Data System (ADS)
Charbonneau, David; Harps-N Collaboration
2015-01-01
Although the NASA Kepler Mission has determined the physical sizes of hundreds of small planets, and we have in many cases characterized the star in detail, we know virtually nothing about the planetary masses: There are only 7 planets smaller than 2.5 Earth radii for which there exist published mass estimates with a precision better than 20 percent, the bare minimum value required to begin to distinguish between different models of composition.HARPS-N is an ultra-stable fiber-fed high-resolution spectrograph optimized for the measurement of very precise radial velocities. We have 80 nights of guaranteed time per year, of which half are dedicated to the study of small Kepler planets.In preparation for the 2014 season, we compared all available Kepler Objects of Interest to identify the ones for which our 40 nights could be used most profitably. We analyzed the Kepler light curves to constrain the stellar rotation periods, the lifetimes of active regions on the stellar surface, and the noise that would result in our radial velocities. We assumed various mass-radius relations to estimate the observing time required to achieve a mass measurement with a precision of 15%, giving preference to stars that had been well characterized through asteroseismology. We began by monitoring our long list of targets. Based on preliminary results we then selected our final short list, gathering typically 70 observations per target during summer 2014.These resulting mass measurements will have a significant impact on our understanding of these so-called super-Earths and small Neptunes. They would form a core dataset with which the international astronomical community can meaningfully seek to understand these objects and their formation in a quantitative fashion.HARPS-N was funded by the Swiss Space Office, the Harvard Origin of Life Initiative, the Scottish Universities Physics Alliance, the University of Geneva, the Smithsonian Astrophysical Observatory, the Italian National
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
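Richardson's extrapolation, the core idea above, can be illustrated on a toy problem: combining finite-difference estimates at two mesh sizes cancels the leading error term. This is a generic sketch (a derivative rather than the paper's Schrödinger eigenproblem):

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """One Richardson extrapolation step: combining the h and h/2
    estimates cancels the leading O(h^2) error term, leaving O(h^4)."""
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

h = 0.1
crude = central_diff(math.sin, 1.0, h)
refined = richardson(math.sin, 1.0, h)
exact = math.cos(1.0)
# the extrapolated estimate is markedly closer to the true derivative
assert abs(refined - exact) < abs(crude - exact)
```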
NASA Astrophysics Data System (ADS)
Herman, Jay R.
2010-12-01
Multiple scattering radiative transfer results are used to calculate action spectrum weighted irradiances and fractional irradiance changes in terms of a power law in ozone Ω, U(Ω/200)^(-RAF), where the new radiation amplification factor (RAF) is just a function of solar zenith angle. Including Rayleigh scattering caused small differences in the estimated 30 year changes in action spectrum-weighted irradiances compared to estimates that neglect multiple scattering. The radiative transfer results are applied to several action spectra and to an instrument response function corresponding to the Solar Light 501 meter. The effects of changing ozone on two plant damage action spectra are shown for plants with high sensitivity to UVB (280-315 nm) and those with lower sensitivity, showing that the probability for plant damage for the latter has increased since 1979, especially at middle to high latitudes in the Southern Hemisphere. Similarly, there has been an increase in rates of erythemal skin damage and pre-vitamin D3 production corresponding to measured ozone decreases. An example conversion function is derived to obtain erythemal irradiances and the UV index from measurements with the Solar Light 501 instrument response function. An analytic expression is given to convert changes in erythemal irradiances to changes in CIE vitamin-D action spectrum weighted irradiances.
NASA Technical Reports Server (NTRS)
Herman, Jay R.
2010-01-01
Multiple scattering radiative transfer results are used to calculate action spectrum weighted irradiances and fractional irradiance changes in terms of a power law in ozone Ω, U(Ω/200)^(-RAF), where the new radiation amplification factor (RAF) is just a function of solar zenith angle. Including Rayleigh scattering caused small differences in the estimated 30 year changes in action spectrum-weighted irradiances compared to estimates that neglect multiple scattering. The radiative transfer results are applied to several action spectra and to an instrument response function corresponding to the Solar Light 501 meter. The effects of changing ozone on two plant damage action spectra are shown for plants with high sensitivity to UVB (280-315 nm) and those with lower sensitivity, showing that the probability for plant damage for the latter has increased since 1979, especially at middle to high latitudes in the Southern Hemisphere. Similarly, there has been an increase in rates of erythemal skin damage and pre-vitamin D3 production corresponding to measured ozone decreases. An example conversion function is derived to obtain erythemal irradiances and the UV index from measurements with the Solar Light 501 instrument response function. An analytic expression is given to convert changes in erythemal irradiances to changes in CIE vitamin-D action spectrum weighted irradiances.
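The power-law form above can be evaluated directly to see how a fractional ozone change maps to an irradiance change. The RAF value below is a hypothetical placeholder (the paper derives RAF as a function of solar zenith angle):

```python
def weighted_irradiance(omega_du, raf, u=1.0):
    """Action-spectrum-weighted irradiance in the power-law form
    U * (Omega / 200)**(-RAF), with Omega the ozone column in Dobson units."""
    return u * (omega_du / 200.0) ** (-raf)

# fractional irradiance increase for a 3% ozone decrease from 300 DU,
# using a hypothetical RAF of 1.2 (RAF actually varies with zenith angle)
change = weighted_irradiance(0.97 * 300, 1.2) / weighted_irradiance(300, 1.2) - 1
# a 3% ozone loss raises the weighted irradiance by somewhat more than 3%
```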
Karelis, Antony D; Rabasa-Lhoret, Rémi; Pompilus, Roseline; Messier, Virginie; Strychar, Irene; Brochu, Martin; Aubertin-Leheudre, Mylene
2012-04-01
The purpose of this study was to investigate the relationship between visceral adipose tissue (VAT), estimated with the Bertin index obtained from dual-energy X-ray absorptiometry (DXA), with cardiometabolic risk factors before and after a weight loss program and compare it with VAT measured with computed tomography (CT) scan. The study population for this analysis included 92 nondiabetic overweight and obese sedentary postmenopausal women (age: 58.1 ± 4.7 years, BMI: 31.8 ± 4.2 kg/m(2)) participating in a weight loss intervention that consisted of a caloric restricted diet with and without resistance training (RT). We measured (i) VAT using CT scan, (ii) body composition (using DXA) from which the Bertin index was calculated, (iii) cardiometabolic risk factors such as insulin sensitivity (using the hyperinsulinemic-euglycemic clamp technique), peak oxygen consumption, blood pressure, plasma lipids, C-reactive protein as well as fasting glucose and insulin. VAT levels for both methods significantly decreased after the weight loss intervention. Furthermore, no differences in VAT levels between both methods were observed before (88.0 ± 25.5 vs. 83.8 ± 22.0 cm(2)) and after (76.8 ± 27.8 vs. 73.6 ± 23.2 cm(2)) the weight loss intervention. In addition, the percent change in VAT levels after the weight loss intervention was similar between both methods (-13.0 ± 16.5 vs. -12.5 ± 12.6%). Moreover, similar relationships were observed between both measures of VAT with cardiometabolic risk factors before and after the weight loss intervention. Finally, results from the logistic regression analysis consistently showed that fat mass and lean body mass were independent predictors of pre- and post-VAT levels for both methods in our cohort. In conclusion, estimated visceral fat levels using the Bertin index may be able to trace variations of VAT after weight loss. This index also shows comparable relationships with cardiometabolic risk factors when compared to VAT
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
traditionally used to estimate spinal cord NTCP may not apply to the dosimetry of SRS. Further research with additional NTCP models is needed.
Housami, Fadi; Drake, Marcus; Abrams, Paul
2009-01-01
Objectives: Measurement of bladder weight using ultrasound estimates of bladder wall thickness and bladder volume is an emerging clinical measurement technique that may have a role in the diagnosis of lower urinary tract dysfunction. We have reviewed available literature on this technique to assess current clinical status. Methods: A systematic literature search was carried out within PubMed and MedLine to identify relevant publications. These were then screened for relevance. Preliminary results from our clinical experiments using the technique are also included. Results: We identified 17 published papers concerning the technique which covered clinical studies relating ultrasound-estimated bladder wall thickness to urodynamic diagnosis in men, women, and children together with change in response to treatment of bladder outlet obstruction. The original manual technique has been challenged by a commercially available automated technique. Conclusion: Ultrasound-estimated bladder weight is a promising non-invasive technique for the categorization of storage and voiding disorders in both men and women. Further studies are needed to validate the technique and assess accuracy of diagnosis. PMID:19468439
Effect of clothing weight on body weight
Technology Transfer Automated Retrieval System (TEKTRAN)
Background: In clinical settings, it is common to measure weight of clothed patients and estimate a correction for the weight of clothing, but we can find no papers in the medical literature regarding the variability in clothing weight with weather, season, and gender. Methods: Fifty adults (35 wom...
49 CFR 375.405 - How must I provide a non-binding estimate?
Code of Federal Regulations, 2011 CFR
2011-10-01
... provide reasonably accurate non-binding estimates based upon both the estimated weight or volume of the... a shipper with an estimate based on volume that will later be converted to a weight-based rate, you must provide the shipper an explanation in writing of the formula used to calculate the conversion...
NASA Astrophysics Data System (ADS)
Wang, Gaili; Liu, Liping; Ding, Yuanyuan
2012-05-01
The errors in radar quantitative precipitation estimations consist not only of systematic biases caused by random noise but also of spatially nonuniform biases in radar rainfall at individual rain-gauge stations. In this study, a real-time adjustment to the radar reflectivity-rainfall rate (Z-R) relationship scheme and a gauge-corrected, radar-based estimation scheme with inverse distance weighting interpolation were developed. Based on the characteristics of the two schemes, a two-step correction technique of radar quantitative precipitation estimation is proposed. To minimize the errors between radar quantitative precipitation estimations and rain gauge observations, the real-time adjustment to the Z-R relationship scheme is used to remove systematic bias in the time domain. The gauge-corrected, radar-based estimation scheme is then used to eliminate nonuniform errors in space. Based on radar data and rain gauge observations near the Huaihe River, the two-step correction technique was evaluated using two heavy-precipitation events. The results show that the proposed scheme not only reduced the underestimation of rainfall but also reduced the root-mean-square error and the mean relative error of radar-rain gauge pairs.
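The second correction step above, spreading gauge-minus-radar biases over the grid with inverse distance weighting, might be sketched as follows. This is an additive-bias variant under assumed conventions; the paper's exact formulation may differ:

```python
import numpy as np

def idw_gauge_correction(radar, grid_xy, gauges, gauge_xy, power=2.0):
    """Spread gauge-minus-radar biases over the grid by inverse
    distance weighting (additive-bias sketch, assumed conventions).

    radar    : radar rainfall at grid points, shape (m,)
    grid_xy  : grid coordinates, shape (m, 2)
    gauges   : gauge observations, shape (n,)
    gauge_xy : gauge coordinates, shape (n, 2)
    """
    d = np.linalg.norm(grid_xy[:, None] - gauge_xy[None], axis=2)  # (m, n)
    bias = gauges - radar[d.argmin(axis=0)]   # bias at pixel nearest each gauge
    w = 1.0 / np.maximum(d, 1e-9) ** power    # IDW weights, clamped at zero distance
    w /= w.sum(axis=1, keepdims=True)
    return radar + w @ bias

grid_xy = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
radar = np.array([5., 5., 5., 5.])
gauge_xy = np.array([[0., 0.], [1., 1.]])
gauges = np.array([6., 4.])
corrected = idw_gauge_correction(radar, grid_xy, gauges, gauge_xy)
# grid points coincident with a gauge are pulled to the gauge value
assert np.isclose(corrected[0], 6.0) and np.isclose(corrected[3], 4.0)
```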
Ciarmiello, Andrea; Giovacchini, Giampiero; Guidotti, Claudio; Meniconi, Martina; Lazzeri, Patrizia; Carabelli, Elena; Mansi, Luigi; Mariani, Giuliano; Volterrani, Duccio; Del Sette, Massimo
2013-10-01
To test whether the use of a striatum weighted image may improve registration accuracy and diagnostic outcome in patients with parkinsonian syndromes (PS), weighted images were generated by increasing signal intensity of striatal voxels and used as an intermediate dataset for co-registering the brain image onto the template. Experimental validation was performed using an anthropomorphic striatal phantom. (123)I-FP-CIT SPECT binding ratios were manually determined in 67 PS subjects and compared to those obtained using the unsupervised standard (UWR) and weighted registered (WR) approaches. A normalized cost function was used to evaluate the accuracy of phantom and subject registered images to the template. Reproducibility between unsupervised and manual ratios was assessed using the intra-class correlation coefficient (ICC) and Bland-Altman analysis. The correlation coefficient was used to assess the dependence of semi-quantitative ratios on clinical findings. The weighted method improves the accuracy of brain registration onto the template as determined by cost function in phantom (0.86 ± 0.06 vs. 0.98 ± 0.02; Student's t-test, P = 0.04) and in subject scans (0.69 ± 0.06 vs. 0.53 ± 0.06; Student's t-test, P < 0.0001). Agreement between manual and unsupervised derived binding ratios as measured by ICC was significantly higher on WR as compared to UWR images (0.91 vs. 0.76). Motor UPDRS score was significantly correlated with manual and unsupervised derived binding potential. In phantom as well as in subject studies, correlations were more significant using the WR method (BPm: R(2) = 0.36, P = 0.0001; BPwr: R(2) = 0.368, P = 0.0001; BPuwr: R(2) = 0.300, P = 0.0008). Weighted registration improves the accuracy of binding potential estimates and may be a promising approach to enhance the diagnostic outcome of SPECT imaging, correlation with disease severity, and monitoring of disease progression in parkinsonian syndromes. PMID:23559000
Guan, Yihong; Zhu, Qinfang; Huang, Delai; Zhao, Shuyi; Jan Lo, Li; Peng, Jinrong
2015-01-01
The molecular weight (MW) of a protein can be predicted based on its amino acids (AA) composition. However, in many cases a non-chemically modified protein shows an SDS PAGE-displayed MW larger than its predicted size. Some reports linked this fact to high content of acidic AA in the protein. However, the exact relationship between the acidic AA composition and the SDS PAGE-displayed MW is not established. Zebrafish nucleolar protein Def is composed of 753 AA and shows an SDS PAGE-displayed MW approximately 13 kDa larger than its predicted MW. The first 188 AA in Def is defined by a glutamate-rich region containing ~35.6% of acidic AA. In this report, we analyzed the relationship between the SDS PAGE-displayed MW of thirteen peptides derived from Def and the AA composition in each peptide. We found that the difference between the predicted and SDS PAGE-displayed MW showed a linear correlation with the percentage of acidic AA that fits the equation y = 276.5x - 31.33 (x represents the percentage of acidic AA, 11.4% ≤ x ≤ 51.1%; y represents the average ΔMW per AA). We demonstrated that this equation could be applied to predict the SDS PAGE-displayed MW for thirteen different natural acidic proteins. PMID:26311515
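The fitted equation above can be applied directly. Applying the per-residue shift to the 188-residue glutamate-rich region of Def (35.6% acidic AA) is our reading of the abstract, so treat the worked example as illustrative:

```python
def shift_per_aa(acidic_fraction):
    """Average extra apparent mass per residue (Da) from the fitted
    equation y = 276.5 x - 31.33, valid for 0.114 <= x <= 0.511."""
    if not 0.114 <= acidic_fraction <= 0.511:
        raise ValueError("fit calibrated only for 11.4%-51.1% acidic AA")
    return 276.5 * acidic_fraction - 31.33

# the 188-residue glutamate-rich region of Def at 35.6% acidic AA
shift_da = 188 * shift_per_aa(0.356)
# roughly 12.6 kDa, consistent with the ~13 kDa anomaly reported for Def
```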
Rühm, W; Walsh, L
2007-01-01
Currently, most analyses of the A-bomb survivors' solid tumour and leukaemia data are based on a constant neutron relative biological effectiveness (RBE) value of 10 that is applied to all survivors, independent of their distance to the hypocentre at the time of bombing. The results of these analyses are then used as a major basis for current risk estimates suggested by the International Commission on Radiological Protection (ICRP) for use in international safety guidelines. It is shown here that (i) a constant value of 10 is not consistent with weighting factors recommended by the ICRP for neutrons and (ii) it does not account for the hardening of the neutron spectra in Hiroshima and Nagasaki, which takes place with increasing distance from the hypocentres. The purpose of this paper is to present new RBE values for the neutrons, calculated as a function of distance from the hypocentres for both cities that are consistent with the ICRP60 neutron weighting factor. If based on neutron spectra from the DS86 dosimetry system, these calculations suggest values of about 31 at 1000 m and 23 at 2000 m ground range in Hiroshima, while the corresponding values for Nagasaki are 24 and 22. If the neutron weighting factor that is consistent with ICRP92 is used, the corresponding values are about 23 and 21 for Hiroshima and 21 and 20 for Nagasaki, respectively. It is concluded that the current risk estimates will be subject to some changes in view of the changed RBE values. This conclusion does not change significantly if the new doses from the Dosimetry System DS02 are used. PMID:17533156
Jansen, Rob T P; Laeven, Mark; Kardol, Wim
2002-06-01
The analytical processes in clinical laboratories should be considered to be non-stationary, non-ergodic and probably non-stochastic processes. Both the process mean and the process standard deviation vary. The variation can be different at different levels of concentration. This behavior is shown in five examples of different analytical systems: alkaline phosphatase on the Hitachi 911 analyzer (Roche), vitamin B12 on the Access analyzer (Beckman), prothrombin time and activated partial thromboplastin time on the STA Compact analyzer (Roche) and PO2 on the ABL 520 analyzer (Radiometer). A model is proposed to assess the status of a process. An exponentially weighted moving average and standard deviation was used to estimate process mean and standard deviation. Process means were estimated overall and for each control level. The process standard deviation was estimated in terms of within-run standard deviation. Limits were defined in accordance with state of the art- or biological variance-derived cut-offs. The examples given are real, not simulated, data. Individual control sample results were normalized to a target value and target standard deviation. The normalized values were used in the exponentially weighted algorithm. The weighting factor was based on a process time constant, which was estimated from the period between two calibration or maintenance procedures. The proposed system was compared with Westgard rules. The Westgard rules perform well, despite the underlying presumption of ergodicity. This is mainly caused by the introduction of the starting rule of 12s, which proves essential to prevent a large number of rule violations. The probability of reporting a test result with an analytical error that exceeds the total allowable error was calculated for the proposed system as well as for the Westgard rules. The proposed method performed better. The proposed algorithm was implemented in a computer program running on computers to which the analyzers were
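The exponentially weighted estimates of process mean and standard deviation described above can be sketched as a simple recursive update. The weighting factor below is a placeholder; the paper derives it from a process time constant between calibration or maintenance procedures:

```python
def ewma_update(mean, var, x, lam):
    """One exponentially weighted update of the process mean and
    variance; lam is the weighting factor (0 < lam <= 1)."""
    new_mean = lam * x + (1 - lam) * mean
    new_var = lam * (x - mean) ** 2 + (1 - lam) * var
    return new_mean, new_var

# control results normalized to target value 0 and target SD 1
results = [0.2, -0.1, 0.4, 2.8, 0.1]
mean, var = 0.0, 1.0
alarms = []
for x in results:
    mean, var = ewma_update(mean, var, x, lam=0.2)
    alarms.append(abs(mean) > 2.0)   # flag drift beyond a chosen cut-off
```

Unlike single-result rules, the smoothed mean absorbs the isolated 2.8 outlier here, so no alarm is raised.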
Chioccioli, Maurizio; Hankamer, Ben; Ross, Ian L.
2014-01-01
Dry weight biomass is an important parameter in algaculture. Direct measurement requires weighing milligram quantities of dried biomass, which is problematic for small volume systems containing few cells, such as laboratory studies and high throughput assays in microwell plates. In these cases indirect methods must be used, inducing measurement artefacts which vary in severity with the cell type and conditions employed. Here, we utilise flow cytometry pulse width data for the estimation of cell density and biomass, using Chlorella vulgaris and Chlamydomonas reinhardtii as model algae, and compare it to optical density methods. Measurement of cell concentration by flow cytometry was shown to be more sensitive than optical density at 750 nm (OD750) for monitoring culture growth. However, neither cell concentration nor optical density correlates well to biomass when growth conditions vary. Compared to the growth of C. vulgaris in TAP (tris-acetate-phosphate) medium, cells grown in TAP + glucose displayed a slowed cell division rate and a 2-fold increased dry biomass accumulation compared to growth without glucose. This was accompanied by increased cellular volume. Laser scattering characteristics during flow cytometry were used to estimate cell diameters, and an empirical but nonlinear relationship was shown between flow cytometric pulse width and dry weight biomass per cell. This relationship could be linearised by the use of hypertonic conditions (1 M NaCl) to dehydrate the cells, as shown by density gradient centrifugation. Flow cytometry for biomass estimation is easy to perform, sensitive and offers more comprehensive information than optical density measurements. In addition, periodic flow cytometry measurements can be used to calibrate OD750 measurements for both convenience and accuracy. This approach is particularly useful for small samples and where cellular characteristics, especially cell size, are expected to vary during growth. PMID
Kriging without negative weights
Szidarovszky, F.; Baafi, E.Y.; Kim, Y.C.
1987-08-01
Under a constant drift, the linear kriging estimator is considered as a weighted average of n available sample values. Kriging weights are determined such that the estimator is unbiased and optimal. To meet these requirements, negative kriging weights are sometimes found. Use of negative weights can produce negative block grades, which makes no practical sense. In some applications, all kriging weights may be required to be nonnegative. In this paper, a derivation of a set of nonlinear equations with the nonnegative constraint is presented. A numerical algorithm also is developed for the solution of the new set of kriging equations.
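The ordinary kriging system described above, with unbiasedness enforced through a Lagrange multiplier, can be sketched as follows. This unconstrained solve is the classic formulation that can yield negative weights, which the paper's constrained equations are designed to avoid:

```python
import numpy as np

def ordinary_kriging_weights(gamma, gamma0):
    """Solve the ordinary kriging system: the semivariogram matrix
    between samples is bordered by a Lagrange multiplier row/column
    that enforces the unbiasedness condition sum(w) = 1."""
    n = len(gamma0)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    sol = np.linalg.solve(A, np.append(gamma0, 1.0))
    return sol[:n]   # drop the multiplier

# 1-D samples at 0, 1 and 1.2, target at 0.5, linear variogram g(h) = h
x = np.array([0.0, 1.0, 1.2])
gamma = np.abs(x[:, None] - x[None, :])
gamma0 = np.abs(x - 0.5)
w = ordinary_kriging_weights(gamma, gamma0)
# here the screened sample at 1.2 gets exactly zero weight; with smoother
# variogram models such screened samples can receive negative weights
assert np.allclose(w, [0.5, 0.5, 0.0]) and np.isclose(w.sum(), 1.0)
```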
Anthropometric approximation of body weight in unresponsive stroke patients
Lorenz, M W; Graf, M; Henke, C; Hermans, M; Ziemann, U; Sitzer, M; Foerch, C
2007-01-01
Background and purpose Thrombolysis of acute ischaemic stroke is based strictly on body weight to ensure efficacy and to prevent bleeding complications. Many candidate stroke patients are unable to communicate their body weight, and there is often neither the means nor the time to weigh the patient. Instead, weight is estimated visually by the attending physician, but this is known to be inaccurate. Methods Based on a large general population sample of nearly 7000 subjects, we constructed approximation formulae for estimating body weight from simple anthropometric measurements (body height, and waist and hip circumference). These formulae were validated in a sample of 178 consecutive inpatients admitted to our stroke unit, and their accuracy was compared with the best visual estimation of two experienced physicians. Results The simplest formula gave the most accurate approximation (mean absolute difference 3.1 (2.6) kg), which was considerably better than the best visual estimation (physician 1: 6.5 (5.2) kg; physician 2: 7.4 (5.7) kg). It reduced the proportion of weight approximations mismatched by >10% from 31.5% and 40.4% (physicians 1 and 2, respectively) to 6.2% (anthropometric approximation). Only the patient's own estimation was more accurate (mean absolute difference 2.7 (2.4) kg). Conclusions By using an approximation formula based on simple anthropometric measurements (body height, and waist and hip circumference), it is possible to obtain a quick and accurate approximation of body weight. In situations where the exact weight of unresponsive patients cannot be ascertained quickly, we recommend using this approximation method rather than visual estimation. PMID:17494978
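The abstract does not reproduce the paper's approximation formula, but its form (a linear combination of height, waist and hip circumference) can be sketched as follows. The coefficients below are invented placeholders to show the shape of such a formula, not the study's fitted values:

```python
def approx_weight_kg(height_cm, waist_cm, hip_cm):
    """Anthropometric body-weight approximation (sketch only).
    Coefficients are hypothetical illustrations, NOT the paper's
    regression estimates."""
    return 0.3 * height_cm + 0.5 * waist_cm + 0.4 * hip_cm - 50.0
```

In practice the coefficients would be fitted by regression on a large population sample, as the authors did with nearly 7000 subjects.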
You, Wei; Zang, Zengliang; Zhang, Lifeng; Li, Yi; Wang, Weiqi
2016-05-01
Taking advantage of their continuous spatial coverage, satellite-derived aerosol optical depth (AOD) products have been widely used to assess the spatial and temporal characteristics of fine particulate matter (PM2.5) on the ground and their effects on human health. However, national-scale ground-level PM2.5 estimation is still very limited because of the lack of ground PM2.5 measurements to calibrate the model in China. In this study, a national-scale geographically weighted regression (GWR) model was developed to estimate ground-level PM2.5 concentration based on satellite AODs, newly released nationwide hourly PM2.5 concentrations, and meteorological parameters. The results showed good agreement between satellite-retrieved and ground-observed PM2.5 concentrations at 943 stations in China. The overall cross-validation (CV) R² is 0.76 and the root mean squared prediction error (RMSE) is 22.26 μg/m³ for MODIS-derived AOD. The MISR-derived AOD also exhibits comparable performance, with a CV R² of 0.81 and an RMSE of 27.46 μg/m³. Annual PM2.5 concentrations retrieved by either MODIS or MISR AOD indicated that most residential community areas exceeded the new annual Chinese PM2.5 National Standard level 2. These results suggest that this approach is useful for estimating large-scale ground-level PM2.5 distributions, especially for regions without PM monitoring sites. PMID:26780051
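The core of a GWR model is a locally weighted least-squares fit, with weights that decay with distance from the prediction site. A minimal sketch, with a Gaussian kernel and illustrative variable names (the study's actual predictors and bandwidth selection are not reproduced here):

```python
import numpy as np

def gwr_predict(X, y, coords, x0, s0, bandwidth):
    """Geographically weighted regression at location s0: weighted
    least squares with Gaussian distance-decay weights (a sketch)."""
    d = np.linalg.norm(coords - s0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
    Xb = np.column_stack([np.ones(len(y)), X])   # add intercept column
    A = Xb.T * w                                 # weighted design, Xb' diag(w)
    beta = np.linalg.solve(A @ Xb, A @ y)        # local coefficients
    return np.concatenate([[1.0], np.atleast_1d(x0)]) @ beta

# Tiny synthetic check: stations with a spatially constant AOD-PM2.5 link
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, (50, 2))             # station locations
aod = rng.uniform(0.1, 1.0, (50, 1))             # predictor (e.g., AOD)
pm25 = 5.0 + 30.0 * aod[:, 0]                    # noiseless linear link
pred = gwr_predict(aod, pm25, coords, 0.5, coords[0], 2.0)
```

Because the coefficients are re-estimated at every location, GWR lets the AOD-PM2.5 relationship vary across a country-scale domain.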
Donato, Mary M.
2006-01-01
Streamflow and trace-metal concentration data collected at 10 locations in the Spokane River basin of northern Idaho and eastern Washington during 1999-2004 were used as input for the U.S. Geological Survey software, LOADEST, to estimate annual loads and mean flow-weighted concentrations of total and dissolved cadmium, lead, and zinc. Cadmium composed less than 1 percent of the total metal load at all stations; lead constituted from 6 to 42 percent of the total load at stations upstream from Coeur d'Alene Lake and from 2 to 4 percent at stations downstream of the lake. Zinc composed more than 90 percent of the total metal load at 6 of the 10 stations examined in this study. Trace-metal loads were lowest at the station on Pine Creek below Amy Gulch, where the mean annual total cadmium load for 1999-2004 was 39 kilograms per year (kg/yr), the mean estimated total lead load was about 1,700 kg/yr, and the mean annual total zinc load was 14,000 kg/yr. The trace-metal loads at stations on North Fork Coeur d'Alene River at Enaville, Ninemile Creek, and Canyon Creek also were relatively low. Trace-metal loads were highest at the station at Coeur d'Alene River near Harrison. The mean annual total cadmium load was 3,400 kg/yr, the mean total lead load was 240,000 kg/yr, and the mean total zinc load was 510,000 kg/yr for 1999-2004. Trace-metal loads at the station at South Fork Coeur d'Alene River near Pinehurst and the three stations on the Spokane River downstream of Coeur d'Alene Lake also were relatively high. Differences in metal loads, particularly lead, between stations upstream and downstream of Coeur d'Alene Lake likely are due to trapping and retention of metals in lakebed sediments. LOADEST software was used to estimate loads for water years 1999-2001 for many of the same sites discussed in this report. Overall, results from this study and those from a previous study are in good agreement. Observed differences between the two studies are attributable to streamflow
ERIC Educational Resources Information Center
Rom, Mark Carl
2011-01-01
Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…
Predict amine solution properties accurately
Cheng, S.; Meisen, A.; Chakma, A.
1996-02-01
Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, graphical data are not convenient for computer-based calculations. The equations developed here allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
Hwang, Jun Hyun; Ryu, Dong Hee; Park, Soon-Woo
2015-08-01
We investigated the interaction effect between body weight perception and chronic disease comorbidities on body weight control behavior in overweight/obese Korean adults. We analyzed data from 9,138 overweight/obese adults ≥20 yr of age from a nationally representative cross-sectional survey. Multiple logistic regression using an interaction model was performed to estimate the effect of chronic disease comorbidities on weight control behavior regarding weight perception. Adjusted odds ratios for weight control behavior tended to increase significantly with an increasing number of comorbidities in men regardless of weight perception (P<0.05 for trend), suggesting no interaction. Unlike women who perceived their weight accurately, women who under-perceived their weight did not show significant improvements in weight control behavior even with an increasing number of comorbidities. Thus, a significant interaction between weight perception and comorbidities was found only in women (P=0.031 for interaction). The effect of the relationship between accurate weight perception and chronic disease comorbidities on weight control behavior varied by sex. Improving awareness of body image is particularly necessary for overweight and obese women to prevent complications. PMID:26240477
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
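The baseline the abstract refers to is the classic shape-preserving cubic (PCHIP, Fritsch-Carlson style), which is monotone but only second-order accurate near extrema. A minimal demonstration using SciPy's implementation, with illustrative data:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone nondecreasing data: a shape-preserving cubic stays monotone
# between knots, unlike an unconstrained cubic spline, which can overshoot.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.9, 1.0, 1.0])   # illustrative values

pchip = PchipInterpolator(x, y)
xs = np.linspace(0.0, 4.0, 401)
ys = pchip(xs)
```

The paper's contribution is a different construction that keeps this monotonicity while recovering uniform third- and fourth-order accuracy; PCHIP here only illustrates the constrained baseline.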
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
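The convergence behavior of high-order stencils can be checked numerically: halving the grid spacing should reduce the error by 2 to the power of the order. The standard fourth-order central difference below is a simple stand-in for the high-order schemes discussed, not the paper's specific algorithms:

```python
import numpy as np

def d1_fourth(f, x, h):
    """Standard fourth-order central difference for f'(x) -- a simple
    illustration of order-of-accuracy, not the paper's schemes."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# Halving h should cut the error by roughly 2**4 = 16
err_h  = abs(d1_fourth(np.sin, 1.0, 1e-2) - np.cos(1.0))
err_h2 = abs(d1_fourth(np.sin, 1.0, 5e-3) - np.cos(1.0))
```

This error-ratio test is the usual way of confirming the formal order of a finite difference scheme before long-time propagation runs.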
A Simple Model Predicting Individual Weight Change in Humans.
Thomas, Diana M; Martin, Corby K; Heymsfield, Steven; Redman, Leanne M; Schoeller, Dale A; Levine, James A
2011-11-01
Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated to numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates to final weight data from two recent underfeeding studies and one overfeeding study. Mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating reliability in individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and determining individual dietary adherence during weight change studies. PMID:24707319
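The one-dimensional energy-balance idea can be sketched as a single ODE: weight changes at a rate set by the gap between intake and expenditure. The energy density and linear expenditure term below are illustrative assumptions, not the paper's fitted model:

```python
# Minimal energy-balance sketch. RHO and the linear expenditure term
# k*W are illustrative assumptions, NOT the paper's fitted parameters.
RHO = 7700.0  # assumed energy content of weight change, kcal per kg

def simulate_weight(w0_kg, intake_kcal_day, kcal_per_kg_day, days, dt=1.0):
    """Forward-Euler integration of dW/dt = (I - k*W) / rho."""
    w = w0_kg
    for _ in range(int(days / dt)):
        w += dt * (intake_kcal_day - kcal_per_kg_day * w) / RHO
    return w
```

At steady state the model gives W = I/k, so a sustained intake change moves weight toward a new equilibrium rather than changing it indefinitely, which is the qualitative behavior such models capture.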
... heart failure, and kidney disease. Good nutrition and exercise can help in losing weight. Eating extra calories within a well-balanced diet and treating any underlying medical problems can help to add weight.
... obese. Achieving a healthy weight can help you control your cholesterol, blood pressure and blood sugar. It ... use more calories than you eat. A weight-control strategy might include Choosing low-fat, low-calorie ...
Sran, Meena M; Khan, Karim M; Keiver, Kathy; Chew, Jason B; McKay, Heather A; Oxland, Thomas R
2005-12-01
Biomechanical studies of the thoracic spine often scan cadaveric segments by dual energy X-ray absorptiometry (DXA) to obtain measures of bone mass. Only one study has reported the accuracy of lateral scans of thoracic vertebral bodies. The accuracy of DXA scans of thoracic spine segments and of anterior-posterior (AP) thoracic scans has not been investigated. We have examined the accuracy of AP and lateral thoracic DXA scans by comparison with ash weight, the gold standard for measuring bone mineral content (BMC). We have also compared three methods of estimating volumetric bone mineral density (vBMD) with a novel standard: ash weight (g)/bone volume (cm³) as measured by computed tomography (CT). Twelve T5-T8 spine segments were scanned with DXA (AP and lateral) and CT. The T6 vertebrae were excised, the posterior elements removed, and then the vertebral bodies were ashed in a muffle furnace. We proposed a new method of estimating vBMD and compared it with two previously published methods. BMC values from lateral DXA scans displayed the strongest correlation with ash weight (r=0.99) and were on average 12.8% higher (p<0.001). As expected, BMC (AP or lateral) was more strongly correlated with ash weight than areal bone mineral density (aBMD; AP: r=0.54, or lateral: r=0.71) or estimated vBMD. Estimates of vBMD with any of the three methods were strongly and similarly correlated with volumetric BMD calculated by dividing ash weight by CT-derived volume. These data suggest that readily available DXA scanning is an appropriate surrogate measure for thoracic spine bone mineral and that the lateral scan might be the scan method of choice. PMID:15616862
ERIC Educational Resources Information Center
Nutter, June
1995-01-01
Secondary level physical education teachers can have their students use math concepts while working out on the weight-room equipment. The article explains how students can reinforce math skills while weightlifting by estimating their strength, estimating their power, or calculating other formulas. (SM)
Towards an accurate bioimpedance identification
NASA Astrophysics Data System (ADS)
Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.
2013-04-01
This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σ_nZ, and the stochastic nonlinear distortions, σ_ZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal the importance of which system identification framework should be used.
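The final fitting step, weighted complex nonlinear least squares on a Cole model, can be sketched by stacking real and imaginary residuals, each scaled by a per-frequency weight. The synthetic spectrum and starting values below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def cole(p, f):
    """Cole model Z(f) = Rinf + (R0 - Rinf) / (1 + (j*2*pi*f*tau)**alpha)."""
    r0, rinf, tau, alpha = p
    return rinf + (r0 - rinf) / (1.0 + (1j * 2 * np.pi * f * tau) ** alpha)

def fit_cole(f, z, w, p0):
    """Weighted complex nonlinear least squares: real and imaginary
    residuals stacked, each scaled by a per-frequency weight (e.g.,
    an inverse noise standard deviation)."""
    def resid(p):
        r = (cole(p, f) - z) * w
        return np.concatenate([r.real, r.imag])
    lb = [0.0, 0.0, 1e-9, 0.1]
    ub = [np.inf, np.inf, 1.0, 1.0]
    return least_squares(resid, p0, bounds=(lb, ub),
                         x_scale=[1e3, 1e2, 1e-4, 1.0]).x

# Noiseless synthetic spectrum with unit weights (illustrative values)
f = np.logspace(1, 5, 40)
z_true = cole([1000.0, 100.0, 1e-4, 0.8], f)
p_hat = fit_cole(f, z_true, np.ones_like(f), [900.0, 80.0, 2e-4, 0.9])
```

In the paper's setting the weights would come from the LPM noise estimates at each measurement frequency, which is what distinguishes the weighted from the unweighted CNLS fit.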
Calabrò, P S; Moraci, N; Suraci, P
2012-03-15
This paper presents the results of laboratory column tests aimed at defining the optimum weight ratio of zero-valent iron (ZVI)/pumice granular mixtures to be used in permeable reactive barriers (PRBs) for the removal of nickel from contaminated groundwater. The tests were carried out feeding the columns with aqueous solutions of nickel nitrate at concentrations of 5 and 50 mg/l using three ZVI/pumice granular mixtures at various weight ratios (10/90, 30/70 and 50/50), for a total of six column tests; two additional tests were carried out using ZVI alone. The most successful compromise between reactivity (higher ZVI content) and long-term hydraulic performance (higher Pumice content) seems to be given by the ZVI/pumice granular mixture with a 30/70 weight ratio. PMID:21885195
NASA Astrophysics Data System (ADS)
Itano, Wayne M.; Ramsey, Norman F.
1993-07-01
The paper discusses current methods for accurate measurement of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Modeling operating weight and axle weight distributions for highway vehicles
Greene, D.L.; Liang, J.C.
1988-07-01
The estimation of highway cost responsibility requires detailed information on vehicle operating weights and axle weights by type of vehicle. Typically, 10-20 vehicle types must be cross-classified by 10-20 registered weight classes and again by 20 or more operating weight categories, resulting in 100-400 relative frequencies to be determined for each vehicle type. For each of these, gross operating weight must be distributed to each axle or axle unit. Given the rarity of many of the heaviest vehicle types, direct estimation of these frequencies and axle weights from traffic classification count statistics and truck weight data may exceed the reliability of even the largest (e.g., 250,000-record) data sources. An alternative is to estimate statistical models of operating weight distributions as functions of registered weight, and models of axle weight shares as functions of operating weight. This paper describes the estimation of such functions using the multinomial logit model (a log-linear model) and the implementation of the modeling framework as a PC-based FORTRAN program. Areas for further research include the addition of highway class and region as explanatory variables in operating weight distribution models, and the development of theory for including registration costs and costs of operating overweight in the modeling framework. 14 refs., 45 figs., 5 tabs.
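A multinomial logit share model of the kind described maps a continuous explanatory variable (here gross operating weight) through linear utilities and a softmax to a set of shares that sum to one. The coefficient values below are illustrative, not the report's estimates:

```python
import numpy as np

def axle_weight_shares(gross_weight, coeffs):
    """Multinomial-logit (softmax) model of axle-unit weight shares as
    a function of gross operating weight (a sketch). `coeffs` rows are
    (intercept, slope) per axle unit; values are illustrative."""
    u = coeffs[:, 0] + coeffs[:, 1] * gross_weight   # linear utilities
    e = np.exp(u - u.max())                          # numerically stable softmax
    return e / e.sum()

# Illustrative three-axle-unit example
coeffs = np.array([[0.0, 0.010],
                   [0.5, -0.005],
                   [-0.2, 0.000]])
shares = axle_weight_shares(30.0, coeffs)
```

Because the shares are guaranteed positive and sum to one, the same functional form works for distributing operating weight over axle units at any gross weight.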
ERIC Educational Resources Information Center
Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.
1999-01-01
Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)
Epidemic spreading on complex networks with general degree and weight distributions
NASA Astrophysics Data System (ADS)
Wang, Wei; Tang, Ming; Zhang, Hai-Feng; Gao, Hui; Do, Younghae; Liu, Zong-Hua
2014-10-01
The spread of disease on complex networks has attracted wide attention in the physics community. Recent works have demonstrated that heterogeneous degree and weight distributions have a significant influence on the epidemic dynamics. In this study, a novel edge-weight-based compartmental approach is developed to estimate the epidemic threshold and epidemic size (final infected density) on networks with general degree and weight distributions, and a remarkable agreement with numerics is obtained. The approach remains applicable even in networks with strongly heterogeneous degree and weight distributions. We then propose an edge-weight-based removal strategy with different biases and find that such a strategy can effectively control the spread of epidemics when the highly weighted edges are preferentially removed, especially when the weight distribution of a network is extremely heterogeneous. The theoretical results from the suggested method can accurately predict the above removal effectiveness.
Fujita, Masahiro; Yajima, Tomonari; Iijima, Kazuaki; Sato, Kiyoshi
2012-05-01
The uncertainty in pesticide residue levels (UPRL) associated with sampling size was estimated using individual acetamiprid and cypermethrin residue data from preharvested apple, broccoli, cabbage, grape, and sweet pepper samples. The relative standard deviation from the mean of each sampling size (n = 2^x, where x = 1-6) of randomly selected samples was defined as the UPRL for each sampling size. The estimated UPRLs, which were calculated on the basis of the regulatory sampling size recommended by the OECD Guidelines on Crop Field Trials (weights from 1 to 5 kg, and commodity unit numbers from 12 to 24), ranged from 2.1% for cypermethrin in sweet peppers to 14.6% for cypermethrin in cabbage samples. The percentages of commodity exceeding the maximum residue limits (MRLs) specified by the Japanese Food Sanitation Law may be predicted from the equation derived from this study, which was based on samples of various size ranges with mean residue levels below the MRL. The estimated UPRLs have confirmed that sufficient sampling weight and numbers are required for analysis and/or re-examination of subsamples to provide accurate values of pesticide residue levels for the enforcement of MRLs. The equation derived from the present study would aid the estimation of more accurate residue levels even from small sampling sizes. PMID:22475588
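The UPRL definition above (relative standard deviation of the mean over random subsamples of size n) can be sketched by resampling. The residue data below are synthetic, generated only to illustrate that the uncertainty shrinks as sampling size grows:

```python
import numpy as np

def uprl_percent(residues, n, trials=2000, seed=0):
    """Relative standard deviation (%) of the mean residue level over
    random subsamples of size n drawn without replacement -- a sketch
    of the paper's UPRL definition."""
    rng = np.random.default_rng(seed)
    means = np.array([rng.choice(residues, size=n, replace=False).mean()
                      for _ in range(trials)])
    return 100.0 * means.std() / residues.mean()

# Synthetic individual-unit residue levels (arbitrary units)
rng = np.random.default_rng(42)
residues = rng.lognormal(mean=0.0, sigma=0.5, size=64)
u4, u32 = uprl_percent(residues, 4), uprl_percent(residues, 32)
```

Running this on real per-unit residue data would reproduce the paper's observation that larger sampling sizes yield more reliable mean residue estimates for MRL enforcement.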
Simultaneous estimation of phase and its pth order derivatives
NASA Astrophysics Data System (ADS)
Kulkarni, Rishikesh; Rastogi, Pramod
2016-05-01
One unaddressed challenge in optical metrology has been the measurement of higher order derivatives of rough specimens subjected to loading. In this paper, we investigate an approach that allows for the simultaneous estimation of the phase and its higher order derivatives from a noisy interference field. The interference phase is represented as a weighted linear combination of linearly independent Fourier basis functions. The interference field is represented as a state space model with the weights of the basis functions as the elements of the state vector. These weights are accurately estimated by employing the extended Kalman filter. The interference phase and phase derivatives are subsequently computed using the estimated weights. Since the Fourier basis functions are infinitely differentiable, phase derivatives of any arbitrary order can be estimated. Simulation and experimental results are provided to substantiate the effectiveness of the proposed method in the presence of high noise.
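The key property exploited above is that once the basis weights are estimated, derivatives of any order follow by differentiating the basis analytically. A sketch of that final evaluation step, using a simple cos/sin basis as a stand-in for the paper's Fourier basis:

```python
import numpy as np

def phase_and_derivatives(weights, freqs, x, order=1):
    """Evaluate the phase (a weighted sum of cos/sin basis functions)
    and its derivatives up to `order` by differentiating the basis
    analytically. `weights` is a list of (cos, sin) weight pairs, one
    per frequency; the basis choice is an illustrative assumption."""
    out = []
    for k in range(order + 1):
        val = 0.0
        for (wc, ws), f in zip(weights, freqs):
            a = 2 * np.pi * f
            # the k-th derivative of cos/sin shifts the argument by k*pi/2
            val += a**k * (wc * np.cos(a * x + k * np.pi / 2)
                           + ws * np.sin(a * x + k * np.pi / 2))
        out.append(val)
    return out

p = phase_and_derivatives([(1.0, 0.0)], [1.0], 0.0, order=2)
```

Because every basis function is infinitely differentiable, the same loop yields a pth-order derivative for any p, with no numerical differentiation of noisy data.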
Automatically determining lag phase in grapes to assist yield estimation practices
Technology Transfer Automated Retrieval System (TEKTRAN)
Estimating grapevine yield is an important though often difficult task. Accurate yield projections ensure that enough physical infrastructure is available to process fruit that cannot be stored for later handling. Crop estimation is based on recording cluster numbers and cluster weights of represent...
NASA Technical Reports Server (NTRS)
Howard, W. H.; Young, D. R.
1972-01-01
Device applies compressive force to bone to minimize loss of bone calcium during weightlessness or bedrest. Force is applied through weights, or hydraulic, pneumatic or electrically actuated devices. Device is lightweight and easy to maintain and operate.
Random Weighted Sobolev Inequalities and Application to Quantum Ergodicity
NASA Astrophysics Data System (ADS)
Robert, Didier; Thomann, Laurent
2015-05-01
This paper is a continuation of Poiret et al. (Ann Henri Poincaré 16:651-689, 2015), where we studied a randomisation method based on the Laplacian with harmonic potential. Here we extend our previous results to the case of any polynomial and confining potential V on . We construct measures, under concentration type assumptions, on the support of which we prove optimal weighted Sobolev estimates on . This construction relies on accurate estimates on the spectral function in a non-compact configuration space. Then we prove random quantum ergodicity results without specific assumption on the classical dynamics. Finally, we prove that almost all bases of Hermite functions are quantum uniquely ergodic.
Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping
2015-06-26
In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. Firstly, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). Accuracy of the developed method for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan, was determined, and their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD) analysis, the developed HPSEC-MALLS-RID method based on universal dn/dc for the quantification of polysaccharides and their fractions is much simpler, more rapid, and more accurate, with no need for individual polysaccharide standards or calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
On Latent Trait Estimation in Multidimensional Compensatory Item Response Models.
Wang, Chun
2015-06-01
Making inferences from IRT-based test scores requires accurate and reliable methods of person parameter estimation. Given an already calibrated set of item parameters, the latent trait could be estimated either via maximum likelihood estimation (MLE) or using Bayesian methods such as maximum a posteriori (MAP) estimation or expected a posteriori (EAP) estimation. In addition, Warm's (Psychometrika 54:427-450, 1989) weighted likelihood estimation method was proposed to reduce the bias of the latent trait estimate in unidimensional models. In this paper, we extend the weighted MLE method to multidimensional models. This new method, denoted as multivariate weighted MLE (MWLE), is proposed to reduce the bias of the MLE even for short tests. MWLE is compared to alternative estimators (i.e., MLE, MAP and EAP) and shown, both analytically and through simulation studies, to be more accurate in terms of bias than MLE while maintaining a similar variance. In contrast, Bayesian estimators (i.e., MAP and EAP) result in biased estimates with smaller variability. PMID:24604245
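The baseline estimator whose bias motivates the weighted-likelihood correction is plain MLE of the latent trait given calibrated item parameters. A unidimensional 2PL sketch (the item parameters below are illustrative, and this shows MLE only, not the paper's MWLE):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mle_theta(responses, a, b):
    """MLE of a unidimensional latent trait under a 2PL model, given
    already calibrated discriminations `a` and difficulties `b`."""
    def negloglik(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probabilities
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(responses * np.log(p)
                       + (1 - responses) * np.log(1 - p))
    return minimize_scalar(negloglik, bounds=(-6, 6), method="bounded").x

a = np.ones(5)                              # item discriminations (illustrative)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # item difficulties (illustrative)
theta_hi = mle_theta(np.array([1, 1, 1, 1, 0]), a, b)
theta_lo = mle_theta(np.array([1, 0, 0, 0, 0]), a, b)
```

Warm's correction, and the paper's multivariate extension, modify the likelihood with a weight function so that short tests like this five-item example yield less biased trait estimates.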
Michael, Denna; Kanjala, Chifundo; Calvert, Clara; Pretorius, Carel; Wringe, Alison; Todd, Jim; Mtenga, Balthazar; Isingo, Raphael; Zaba, Basia; Urassa, Mark
2014-01-01
Introduction Spectrum epidemiological models are used by UNAIDS to provide global, regional and national HIV estimates and projections, which are then used for evidence-based health planning for HIV services. However, there are no validations of the Spectrum model against empirical serological and mortality data from populations in sub-Saharan Africa. Methods Serologic, demographic and verbal autopsy data have been regularly collected among over 30,000 residents in north-western Tanzania since 1994. Five-year age-specific mortality rates (ASMRs) per 1,000 person years and the probability of dying between 15 and 60 years of age (45Q15) were calculated and compared with the Spectrum model outputs. Mortality trends by HIV status are shown for periods before the introduction of antiretroviral therapy (1994–1999, 2000–2004) and the first 5 years afterwards (2005–2009). Results Among 30–34 year olds of both sexes, observed ASMRs per 1,000 person years were 13.33 (95% CI: 10.75–16.52) in the period 1994–1999, 11.03 (95% CI: 8.84–13.77) in 2000–2004, and 6.22 (95% CI: 4.75–8.15) in 2005–2009. Among the same age group, the ASMRs estimated by the Spectrum model were 10.55, 11.13 and 8.15 for the periods 1994–1999, 2000–2004 and 2005–2009, respectively. The cohort data, for both sexes combined, showed that the 45Q15 declined from 39% (95% CI: 27–55%) in 1994 to 22% (95% CI: 17–29%) in 2009, whereas the Spectrum model predicted a decline from 43% in 1994 to 37% in 2009. Conclusion From 1994 to 2009, the observed decrease in ASMRs was steeper in younger age groups than that predicted by the Spectrum model, perhaps because the Spectrum model under-estimated the ASMRs in 30–34 year olds in 1994–99. However, the Spectrum model predicted a greater decrease in 45Q15 mortality than observed in the cohort, although the reasons for this over-estimate are unclear. PMID:24438873
Dubey, S. K.; Duddelly, S.; Jangala, H.; Saha, R. N.
2013-01-01
A reliable, rapid and sensitive isocratic reverse-phase high-performance liquid chromatography method has been developed and validated for the assay of ketorolac tromethamine in tablets and ophthalmic dosage forms using diclofenac sodium as an internal standard. An isocratic separation of ketorolac tromethamine was achieved on an Oyster BDS (150×4.6 mm i.d., 5 μm particle size) column using a mobile phase of methanol:acetonitrile:sodium dihydrogen phosphate (20 mM; pH 5.5) (50:10:40, %v/v) at a flow rate of 1.0 ml/min. The eluents were monitored at 322 nm for ketorolac and at 282 nm for diclofenac sodium with a photodiode array detector. The retention times of ketorolac and diclofenac sodium were found to be 1.9 min and 4.6 min, respectively. Response was a linear function of drug concentration in the range of 0.01-15 μg/ml (R²=0.994; linear regression model using weighting factor 1/x²) with limits of detection and quantification of 0.002 μg/ml and 0.007 μg/ml, respectively. The % recovery and % relative standard deviation values indicated the method was accurate and precise. PMID:23901166
Accurate bulk density determination of irregularly shaped translucent and opaque aerogels
NASA Astrophysics Data System (ADS)
Petkov, M. P.; Jones, S. M.
2016-05-01
We present a volumetric method for accurate determination of the bulk density of aerogels, calculated from the extrapolated weight of the dry pure solid and volume estimates based on Archimedes' principle of volume displacement, using packed 100 μm-sized monodispersed glass spheres as a "quasi-fluid" medium. Hard-particle packing theory is invoked to demonstrate the reproducibility of the apparent density of the quasi-fluid. Accuracy rivaling that of the refractive index method is demonstrated for both translucent and opaque aerogels with different absorptive properties, as well as for aerogels with regular and irregular shapes.
Hsu, Ya-Wen; Liou, Tsan-Hon; Liou, Yiing Mei; Chen, Hsin-Jen; Chien, Li-Yin
2016-01-01
Children and adolescents often try to lose weight, and weight-loss behavior may be associated with misperceptions of weight. Previous studies have emphasized establishing correlations between eating disorders and an overestimated perception of body weight, but few studies have focused on an underestimated perception of body weight. The objective of this study was to explore the relationship between misperceptions of body weight and weight-related risk factors, such as eating disorders, inactivity, and unhealthy behaviors, among overweight children who underestimated their body weight. We conducted a cross-sectional, descriptive study between December 1, 2006 and February 15, 2007. A total of 29,313 children and adolescents studying in grades 4-12 were enrolled in this nationwide, cross-sectional survey, and they were asked to complete questionnaires. A multivariate logistic regression using maximum likelihood estimates was used. The prevalence of body weight misperception was 43.2% (26.4% overestimation and 16.8% underestimation). Factors associated with the underestimated perception of weight among overweight children were parental obesity, dietary control for weight loss, breakfast consumption, self-induced vomiting as a weight control strategy, fried food consumption, engaging in vigorous physical activities, and sleeping for >8 hours per day (odds ratios=0.86, 0.42, 0.88, 1.37, 1.13, 1.11, and 1.17, respectively). In conclusion, the early establishment of an accurate perception of body weight may mitigate unhealthy behaviors. PMID:26965769
Estimating potential evapotranspiration with improved radiation estimation
Technology Transfer Automated Retrieval System (TEKTRAN)
Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...
The Robust Weighted Multi-Objective Game
2015-01-01
This paper studies a class of multi-objective n-person non-zero-sum games through a robust weighted approach in which each player has more than one competing objective. This robust weighted multi-objective game model assumes that each player attaches a set of weights to its objectives instead of having access to accurate weights. Each player wishes to minimize its maximum weighted-sum objective, where the maximization is taken over the set of weights. To address this new model, a new equilibrium concept, the robust weighted Nash equilibrium, is introduced. The existence of this new concept is proven under suitable assumptions about the multi-objective payoffs. PMID:26406986
Estimating toner usage with laser electrophotographic printers
NASA Astrophysics Data System (ADS)
Wang, Lu; Abramsohn, Dennis; Ives, Thom; Shaw, Mark; Allebach, Jan
2013-02-01
Accurate estimation of toner usage is an area of ongoing importance for laser electrophotographic (EP) printers. We propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and scanned pages. We then form a weighted sum of these pixel values to predict overall toner usage on the printed page. The weights are chosen by least-squares regression to toner usage measured with a set of printed test pages. Our two-stage predictor significantly outperforms existing methods that are based on a simple pixel counting strategy in terms of both accuracy and robustness of the predictions.
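A minimal sketch of the second stage, assuming the per-pixel predictions are already summarized as per-page features (all data here are synthetic stand-ins for scanned pages and measured toner usage):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 output (assumed given): per-page feature vectors, e.g. a
# histogram of predicted pixel absorptances for each printed test page.
n_pages, n_bins = 40, 8
X = rng.uniform(0.0, 1.0, size=(n_pages, n_bins))

# Measured toner usage for the training pages, simulated here from a
# "true" weight vector plus noise (in a real system this would come from
# weighing cartridges before and after printing the test set).
w_true = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 1.7, 2.3, 3.0])
y = X @ w_true + rng.normal(0.0, 0.05, n_pages)

# Stage 2: choose the weights by least-squares regression to measured usage.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict toner usage of a new page as a weighted sum of its pixel features.
x_new = rng.uniform(0.0, 1.0, n_bins)
usage_pred = float(x_new @ w_hat)
```
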
NASA Astrophysics Data System (ADS)
Schlittenhardt, J.
A comparison of regional and teleseismic log rms (root-mean-square) Lg amplitude measurements has been made for 14 underground nuclear explosions from the East Kazakh test site recorded both by the BRV (Borovoye) station in Kazakhstan and the GRF (Gräfenberg) array in Germany. The log rms Lg amplitudes observed at the BRV regional station at a distance of 690 km and at the teleseismic GRF array at a distance exceeding 4700 km show very similar relative values (standard deviation 0.048 magnitude units) for underground explosions of different sizes at the Shagan River test site. This result, as well as the comparison of BRV rms Lg magnitudes (calculated from the log rms amplitudes using an appropriate calibration) with magnitude determinations for P waves of global seismic networks (standard deviation 0.054 magnitude units), points to a high precision in estimating the relative source sizes of explosions from Lg-based single-station data. Similar results were also obtained by other investigators (Patton, 1988; Ringdal et al., 1992) using Lg data from different stations at different distances. Additionally, GRF log rms Lg and P-coda amplitude measurements were made for a larger data set from Novaya Zemlya and East Kazakh explosions, which were supplemented with mb(Lg) amplitude measurements using a modified version of Nuttli's (1973, 1986a) method. From this test of the relative performance of the three different magnitude scales, it was found that the Lg- and P-coda-based magnitudes performed equally well, whereas the modified Nuttli mb(Lg) magnitudes show greater scatter when compared to the worldwide mb reference magnitudes. Whether this result indicates that the rms amplitude measurements are superior to the zero-to-peak amplitude measurement of a single cycle used for the modified Nuttli method, however, cannot be finally assessed, since the calculated mb(Lg) magnitudes are only preliminary until appropriate attenuation corrections are available for the
Cheung, Yin Bun; Chan, Jerry Kok Yen; Tint, Mya Thway; Godfrey, Keith M.; Gluckman, Peter D.; Kwek, Kenneth; Saw, Seang Mei; Chong, Yap-Seng; Lee, Yung Seng; Yap, Fabian; Lek, Ngee
2016-01-01
Objective Inaccurate parental perception of their child’s weight status is commonly reported in Western countries. It is unclear whether similar misperception exists in Asian populations. This study aimed to evaluate the ability of Singaporean mothers to accurately describe their three-year-old child’s weight status verbally and visually. Methods At three years post-delivery, weight and height of the children were measured. Body mass index (BMI) was calculated and converted into actual weight status using International Obesity Task Force criteria. The mothers were blinded to their child’s measurements and asked to verbally and visually describe what they perceived was their child’s actual weight status. Agreement between actual and described weight status was assessed using Cohen’s Kappa statistic (κ). Results Of 1237 recruited participants, 66.4% (n = 821) with complete data on mothers’ verbal and visual perceptions and children’s anthropometric measurements were analysed. Nearly thirty percent of the mothers were unable to describe their child’s weight status accurately. In verbal description, 17.9% under-estimated and 11.8% over-estimated their child’s weight status. In visual description, 10.4% under-estimated and 19.6% over-estimated their child’s weight status. Many mothers of underweight children over-estimated (verbal 51.6%; visual 88.8%), and many mothers of overweight and obese children under-estimated (verbal 82.6%; visual 73.9%), their child’s weight status. In contrast, significantly fewer mothers of normal-weight children were inaccurate (verbal 16.8%; visual 8.8%). Birth order (p<0.001), maternal (p = 0.004) and child’s weight status (p<0.001) were associated with consistently inaccurate verbal and visual descriptions. Conclusions Singaporean mothers, especially those of underweight and overweight children, may not be able to perceive their young child’s weight status accurately. To facilitate prevention of childhood
Rapid estimates of relative water content.
Smart, R E
1974-02-01
Relative water content may be accurately estimated using the ratio of tissue fresh weight to tissue turgid weight, termed here relative tissue weight. That relative water content and relative tissue weight are linearly related is demonstrated algebraically. The mean value of r² for grapevine (Vitis vinifera L. cv. Shiraz) leaf tissue over eight separate sampling occasions was 0.993. Similarly high values were obtained for maize (Zea mays cv. Cornell M-3) (0.998) and apple (Malus sylvestris cv. Northern Spy) (0.997) using a range of leaf ages. The proposal by Downey and Miller (1971. Rapid measurements of relative turgidity in maize (Zea mays L.). New Phytol. 70: 555-560) that relative water content in maize may be estimated from water uptake was also investigated for grapevine leaves; this was found to be a less reliable estimate than that obtained with relative tissue weight. With either method, there is a need for calibration, although this could be achieved for relative tissue weight at least with only a few subsamples. PMID:16658686
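The calibration idea can be sketched numerically. Assuming hypothetical fresh, turgid, and oven-dry weights for a few leaf subsamples, relative tissue weight is fitted to relative water content with a simple linear regression:

```python
import numpy as np

# Hypothetical leaf-disc measurements (grams): fresh, turgid, and oven-dry
fresh  = np.array([0.80, 0.86, 0.92, 0.97, 1.02])
turgid = np.array([1.05, 1.06, 1.05, 1.04, 1.06])
dry    = np.array([0.20, 0.21, 0.20, 0.19, 0.21])

# Relative water content (definition) and relative tissue weight (the
# proposed rapid proxy, needing no oven-dry weight once calibrated)
rwc = (fresh - dry) / (turgid - dry)
rtw = fresh / turgid

# Calibration: RWC is (algebraically) close to a linear function of RTW,
# so a few subsamples suffice to fit slope and intercept.
slope, intercept = np.polyfit(rtw, rwc, 1)
r2 = np.corrcoef(rtw, rwc)[0, 1] ** 2
rwc_est = slope * rtw + intercept
```
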
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
NASA Technical Reports Server (NTRS)
1995-01-01
The Attitude Adjuster is a system for weight repositioning corresponding to a SCUBA diver's changing positions. Compact tubes on the diver's air tank permit controlled movement of lead balls within the Adjuster, automatically repositioning when the diver changes position. Manufactured by Think Tank Technologies, the system is light and small, reducing drag and energy requirements and contributing to lower air consumption. The Mid-Continent Technology Transfer Center helped the company with both technical and business information and arranged for the testing at Marshall Space Flight Center's Weightlessness Environmental Training Facility for astronauts.
How to accurately bypass damage
Broyde, Suse; Patel, Dinshaw J.
2016-01-01
Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203
[Controlled weight bearing after osteosynthesis].
Perren, T; Matter, P
1993-01-01
Patient compliance with postoperative partial weight bearing can be a difficult management problem. The problem may be intentional or unintentional. There is no objective way to assess the amount of weight placed on the lower extremity by the patient. It is our clinical suspicion that patients place more weight than is desirable on the affected limb. There are few reports in the literature on this topic. One study has confirmed our suspicion of poor patient compliance with postoperative weight bearing. Our goal is to develop a system to accurately assess weight bearing and to improve this aspect of postoperative fracture care. Through an active feedback device we hope to improve patient education and understanding. We plan to study the clinical applications of a pressure-sensitive shoe insert device. Our ultimate goal is to improve upon the present device and to study the clinical applications of its use. PMID:8123330
Link Prediction in Weighted Networks: A Weighted Mutual Information Model
Zhu, Boyao; Xia, Yongxiang
2016-01-01
The link-prediction problem is an open issue in data mining and knowledge discovery, which attracts researchers from disparate scientific communities. A wealth of methods have been proposed to deal with this problem. Among these approaches, most are applied in unweighted networks, with only a few taking the weights of links into consideration. In this paper, we present a weighted model for undirected and weighted networks based on the mutual information of local network structures, where link weights are applied to further enhance the distinguishable extent of candidate links. Empirical experiments are conducted on four weighted networks, and results show that the proposed method can provide more accurate predictions than not only traditional unweighted indices but also typical weighted indices. Furthermore, some in-depth discussions on the effects of weak ties in link prediction as well as the potential to predict link weights are also given. This work may shed light on the design of algorithms for link prediction in weighted networks. PMID:26849659
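As a simplified illustration of using link weights in prediction (this is a plain weighted common-neighbours score, not the paper's weighted mutual information model; the toy graph is assumed):

```python
from collections import defaultdict

# Undirected weighted graph as an adjacency dict: g[u][v] = link weight
g = defaultdict(dict)

def add_edge(u, v, w):
    g[u][v] = w
    g[v][u] = w

add_edge("a", "b", 3.0)
add_edge("a", "c", 1.0)
add_edge("b", "d", 2.0)
add_edge("c", "d", 0.5)
add_edge("b", "e", 1.0)

def weighted_cn(u, v):
    """Weighted common-neighbours score: sum over shared neighbours z of
    (w(u,z) + w(z,v)) / 2. Heavier two-hop paths raise the score."""
    shared = set(g[u]) & set(g[v])
    return sum((g[u][z] + g[z][v]) / 2.0 for z in shared)

# Rank all non-adjacent pairs by score (higher = more likely future link)
nodes = sorted(g)
candidates = [(weighted_cn(u, v), u, v)
              for i, u in enumerate(nodes) for v in nodes[i + 1:]
              if v not in g[u]]
candidates.sort(reverse=True)
```
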
Ensemble estimators for multivariate entropy estimation
Sricharan, Kumar; Wei, Dennis; Hero, Alfred O.
2015-01-01
The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^(-γ/d)), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, and intrinsic dimension estimators. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster, dimension-invariant rate of O(T^(-1)). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample. PMID:25897177
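The weighted affine combination can be illustrated with a toy version of the weight selection: choose weights summing to one that annihilate assumed leading bias terms, via an equality-constrained least-squares (KKT) system. The bias coefficients and member estimates below are illustrative, not derived from the paper:

```python
import numpy as np

# Bias-coefficient matrix Gamma: Gamma[k, l] is the coefficient of the
# k-th slowly-decaying bias term for ensemble member l (e.g. k-NN
# estimators with different k; the values are made up for illustration).
Gamma = np.array([[1.0, 2.0, 3.0],
                  [1.0, 4.0, 9.0]])
L = Gamma.shape[1]

# Minimize ||Gamma @ w||^2 subject to sum(w) = 1 (affine combination).
# KKT system: [2 G^T G, 1; 1^T, 0] [w; lam] = [0; 1]
A = np.zeros((L + 1, L + 1))
A[:L, :L] = 2.0 * Gamma.T @ Gamma
A[:L, L] = 1.0
A[L, :L] = 1.0
rhs = np.zeros(L + 1)
rhs[L] = 1.0
w = np.linalg.solve(A, rhs)[:L]

# Combined estimate: weighted affine combination of member estimates,
# much like Richardson extrapolation (hypothetical member outputs).
estimates = np.array([2.31, 2.18, 2.09])
combined = float(w @ estimates)
```

The negative weights that appear are expected: cancelling the shared bias terms requires extrapolating beyond the individual estimates, exactly as in classical extrapolation schemes.
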
Pregnancy Weight Gain Calculator
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700
Accurate Prediction of Docked Protein Structure Similarity.
Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit
2015-09-01
One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, and desolvation forces) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown, and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807
Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Can, Ismail Ozgur; Aksoy, Sema; Kazimoglu, Cemal
2016-03-01
Radiation exposure during forensic age estimation is associated with ethical implications. It is important to prevent repetitive radiation exposure when conducting advanced ultrasonography (USG) and magnetic resonance imaging (MRI). The purpose of this study was to investigate the utility of 3.0-T MRI in determining the degree of ossification of the distal femoral and proximal tibial epiphyses in a Turkish population sample. We retrospectively evaluated coronal T2-weighted and turbo spin-echo sequences taken upon MRI of 503 patients (305 males, 198 females; age 10-30 years) using a five-stage method. Intra- and interobserver variations were very low (intraobserver reliability: κ=0.919 for the distal femoral epiphysis and κ=0.961 for the proximal tibial epiphysis; interobserver reliability: κ=0.836 and κ=0.885, respectively). Spearman's rank correlation analysis indicated a significant positive relationship between age and the extent of ossification of the distal femoral and proximal tibial epiphyses (p<0.001). Comparison of male and female data revealed significant between-gender differences in the ages at first attainment of stages 2, 3, and 4 ossification of the distal femoral epiphysis and stages 1 and 4 ossification of the proximal tibial epiphysis (p<0.05). The earliest ages at which ossification of stages 3, 4, and 5 was evident in the distal femoral epiphysis were 14, 17, and 22 years in males and 13, 16, and 21 years in females, respectively. Stages 3, 4, and 5 ossification of the proximal tibial epiphysis was first noted at ages 14, 17, and 18 years in males and 13, 15, and 16 years in females, respectively. MRI of the distal femoral and proximal tibial epiphyses is an alternative, noninvasive, and reliable technique to estimate age. PMID:26797254
Multi sensor transducer and weight factor
NASA Technical Reports Server (NTRS)
Immer, Christopher D. (Inventor); Lane, John (Inventor); Eckhoff, Anthony J. (Inventor); Perotti, Jose M. (Inventor)
2004-01-01
A multi-sensor transducer and processing method allow in-situ monitoring of the sensor accuracy and transducer 'health'. In one embodiment, the transducer has multiple sensors that provide corresponding output signals in response to a stimulus, such as pressure. A processor applies individual weight factors to each of the output signals and provides a single transducer output that reduces the contribution from inaccurate sensors. The weight factors can be updated and stored. The processor can use the weight factors to provide a 'health' of the transducer based upon the number of accurate versus inaccurate sensors in the transducer.
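A minimal sketch of the weighted-fusion idea, assuming a median-based consistency check and a multiplicative down-weighting rule (the patent text does not specify these details):

```python
import numpy as np

def update_weights(readings, weights, tol=0.5, decay=0.5):
    """Halve the weight of any sensor deviating from the median reading
    by more than `tol`, then renormalize. Weights persist across
    measurements, so a consistently inaccurate sensor fades out."""
    dev = np.abs(readings - np.median(readings))
    new_w = np.where(dev <= tol, weights, weights * decay)
    return new_w / new_w.sum()

def transducer_output(readings, weights):
    """Single transducer value: weighted average of the sensor outputs."""
    return float(np.dot(weights, readings))

def transducer_health(weights):
    """'Health' indicator: share of total weight held by sensors still
    near the uniform weight level (i.e. still trusted)."""
    uniform = 1.0 / len(weights)
    return float(np.sum(weights[weights >= 0.5 * uniform]))

# Four redundant pressure sensors; sensor 3 is stuck reading high.
w = np.full(4, 0.25)
readings = np.array([10.1, 9.9, 10.0, 14.0])
for _ in range(5):          # repeated measurements of the same stimulus
    w = update_weights(readings, w)
out = transducer_output(readings, w)   # dominated by the three good sensors
```
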
2012-01-01
Background Self-reported anthropometric data are commonly used to estimate the prevalence of obesity in population and community-based studies. We aim to: 1) determine whether survey participants are able and willing to self-report height and weight; 2) assess the accuracy of self-reported compared to measured anthropometric data in a community-based sample of young people. Methods Participants (16–29 years) of a behaviour survey, recruited at a Melbourne music festival (January 2011), were asked to self-report height and weight; researchers independently weighed and measured a sub-sample. Body Mass Index was calculated and overweight/obesity classified as ≥25 kg/m². Differences between measured and self-reported values were assessed using the paired t-test/Wilcoxon signed ranks test. Accurate reports of height and weight were defined as <2 cm and <2 kg difference between self-reported and measured values, respectively. Agreement between classification of overweight/obesity by self-reported and measured values was assessed using McNemar's test. Results Of 1405 survey participants, 82% of males and 72% of females self-reported their height and weight. Among 67 participants who were also independently measured, self-reported height and weight were significantly less than measured height (p=0.01) and weight (p<0.01) among females, but no differences were detected among males. Overall, 52% accurately self-reported height, 30% under-reported, and 18% over-reported; 34% accurately self-reported weight, 52% under-reported and 13% over-reported. More females (70%) than males (35%) under-reported weight (p=0.01). Prevalence of overweight/obesity was 33% based on self-reported data and 39% based on measured data (p=0.16). Conclusions Self-reported measurements may underestimate weight but accurately identified overweight/obesity in the majority of this sample of young people. PMID:23170838
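The BMI computation and the study's accuracy definitions can be sketched directly (the participant values below are hypothetical):

```python
def bmi(weight_kg, height_cm):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    h = height_cm / 100.0
    return weight_kg / (h * h)

def classify(bmi_value, cutoff=25.0):
    """Study classification: overweight/obese at BMI >= 25 kg/m^2."""
    return "overweight/obese" if bmi_value >= cutoff else "not overweight"

def report_accurate(self_cm, self_kg, meas_cm, meas_kg):
    """Study definitions: height accurate if within 2 cm of the measured
    value, weight accurate if within 2 kg."""
    return abs(self_cm - meas_cm) < 2.0, abs(self_kg - meas_kg) < 2.0

# A hypothetical participant who under-reports weight by 3 kg (an
# inaccurate report) yet still lands in the correct BMI class,
# mirroring the study's conclusion:
h_ok, w_ok = report_accurate(170, 80, 170, 83)
same_class = classify(bmi(80, 170)) == classify(bmi(83, 170))
```
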
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Bei, Li; Dong, Liang; Xiuhua, Ma
2016-07-01
In order to reconstruct spectral reflectance accurately, a new method of spectral reflectance reconstruction based on a weighted measurement matrix is proposed in this paper. By optimizing the measurement matrix between spectral reflectance and the response of a camera, the method improves reconstruction accuracy. The new method is a combination of three common reflectance reconstruction methods: the pseudoinverse method, the Wiener estimation method, and the principal component analysis method. The new measurement matrix is obtained by weighting the measurement matrices of these three methods, and the weights are chosen by minimizing the color difference. Results show that the CIE1976 color difference and RMSE values of the weighted reconstructed spectra are lower than those of the three common reconstruction methods, and the spectral matching accuracy (GFC) of the method is higher than 0.99, indicating high reconstruction accuracy.
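A sketch of one ingredient, the pseudoinverse reconstruction matrix, under the assumption that reflectance spectra lie on a low-dimensional smooth basis (the camera sensitivities and training spectra below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: spectra sampled at 31 wavelengths, a 6-channel
# camera, and reflectances spanned by 5 smooth basis functions (real
# reflectances are similarly low-dimensional, which is what makes
# reconstruction from a few channels possible).
n_wl, n_ch, n_basis, n = 31, 6, 5, 200
x = np.linspace(0, 1, n_wl)
B = np.exp(-((x[:, None] - np.linspace(0.1, 0.9, n_basis)) ** 2) / 0.02)
R_train = B @ rng.uniform(0, 1, (n_basis, n))   # n_wl x n training spectra
S = rng.uniform(0.2, 1.0, (n_ch, n_wl))         # camera sensitivities
C_train = S @ R_train                           # camera responses

# Pseudoinverse method: one of the three matrices the paper combines.
M_pinv = R_train @ np.linalg.pinv(C_train)

# The weighted approach would blend this with Wiener and PCA matrices,
# M = w1*M_pinv + w2*M_wiener + w3*M_pca, with weights chosen to minimize
# colour difference; only the pseudoinverse term is shown here.
r_new = B @ rng.uniform(0, 1, n_basis)          # unseen test spectrum
r_rec = M_pinv @ (S @ r_new)
rmse = float(np.sqrt(np.mean((r_rec - r_new) ** 2)))
```
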
Weighting climate model projections using observational constraints
Gillett, Nathan P.
2015-01-01
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081–2100 relative to 1986–2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5–95% warming range of 0.8–2.5 K is somewhat lower than the unweighted range of 1.1–2.6 K reported in the IPCC AR5. PMID:26438283
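A toy version of likelihood-based model weighting: weight each model by the Gaussian likelihood of its simulated TCR under an observationally constrained TCR estimate, then form weighted projection quantiles (all numbers are illustrative, not from the IPCC ensemble):

```python
import numpy as np

# Hypothetical ensemble: each model's Transient Climate Response (K) and
# its projected 2081-2100 warming under RCP4.5 (K); values are made up.
tcr  = np.array([1.3, 1.6, 1.8, 2.0, 2.2, 2.5, 2.7])
warm = np.array([1.1, 1.4, 1.6, 1.8, 2.0, 2.3, 2.6])

# Observational constraint: an assumed TCR estimate (mean, sd) from a
# detection and attribution analysis.
tcr_obs, tcr_sd = 1.7, 0.3

# Weight each model by the Gaussian likelihood of its TCR, normalized.
w = np.exp(-0.5 * ((tcr - tcr_obs) / tcr_sd) ** 2)
w /= w.sum()

def weighted_quantile(values, weights, q):
    """Quantile of a weighted sample (simple cumulative-weight rule)."""
    order = np.argsort(values)
    v, cw = values[order], np.cumsum(weights[order])
    return float(np.interp(q, cw, v))

low, high = (weighted_quantile(warm, w, q) for q in (0.05, 0.95))
mean = float(np.dot(w, warm))
# Models whose TCR is far from the constraint contribute little, so the
# weighted 5-95% range narrows relative to the raw ensemble spread.
```
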
Accuracy of hands v. household measures as portion size estimation aids.
Gibson, Alice A; Hsu, Michelle S H; Rangan, Anna M; Seimon, Radhika V; Lee, Crystal M Y; Das, Arpita; Finch, Charles H; Sainsbury, Amanda
2016-01-01
Accurate estimation of food portion size is critical in dietary studies. Hands are potentially useful as portion size estimation aids; however, their accuracy has not been tested. The aim of the present study was to test the accuracy of a novel portion size estimation method using the width of the fingers as a 'ruler' to measure the dimensions of foods ('finger width method'), as well as fists and thumb or finger tips. These hand measures were also compared with household measures (cups and spoons). A total of sixty-seven participants (70% female; age 32.7 (SD 13.7) years; BMI 23.2 (SD 3.5) kg/m²) attended a 1.5 h session in which they estimated the portion sizes of forty-two pre-weighed foods and liquids. Hand measurements were used in conjunction with geometric formulas to convert estimations to volumes. Volumes determined with hand and household methods were converted to estimated weights using density factors. Estimated weights were compared with true weights, and the percentage difference from the true weight was used to compare accuracy between the hand and household methods. Of geometrically shaped foods and liquids estimated with the finger width method, 80% were within ±25% of the true weight of the food, and 13% were within ±10%, in contrast to 29% of those estimated with the household method being within ±25% of the true weight of the food, and 8% being within ±10%. For foods that closely resemble a geometric shape, the finger width method provides a novel and acceptably accurate method of estimating portion size. PMID:27547392
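The finger width method's geometry-to-weight conversion can be sketched for a cylindrical item (the finger width, density, and weights below are assumptions for illustration):

```python
import math

FINGER_WIDTH_CM = 1.8   # assumed per-person calibration value

def cylinder_weight(width_fingers, height_fingers, density_g_per_cm3):
    """Estimate food weight: finger counts -> dimensions -> volume ->
    weight via a density factor (for a roughly cylindrical item such as
    a glass of liquid or a round slice)."""
    d = width_fingers * FINGER_WIDTH_CM     # diameter in cm
    h = height_fingers * FINGER_WIDTH_CM    # height in cm
    volume = math.pi * (d / 2.0) ** 2 * h   # cm^3
    return volume * density_g_per_cm3       # grams

# Example: a glass of water about 4 fingers wide and 5 fingers tall
est = cylinder_weight(4, 5, 1.0)   # density of water = 1 g/cm^3
true_weight = 330.0                # grams, hypothetical weighed value
pct_error = 100.0 * (est - true_weight) / true_weight
within_25 = abs(pct_error) <= 25.0   # the study's +/-25% accuracy band
```
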
Misperceptions of weight status among adolescents: sociodemographic and behavioral correlates
Bodde, Amy E; Beebe, Timothy J; Chen, Laura P; Jenkins, Sarah; Perez-Vergara, Kelly; Finney Rutten, Lila J; Ziegenfuss, Jeanette Y
2014-01-01
Objective Accurate perceptions of weight status are important motivational triggers for weight loss among overweight or obese individuals, yet weight misperception is prevalent. To identify and characterize individuals holding misperceptions around their weight status, it may be informative for clinicians to assess self-reported body mass index (BMI) classification (ie, underweight, normal, overweight, obese) in addition to clinical weight measurement. Methods Self-reported weight classification data from the 2007 Current Visit Information – Child and Adolescent Survey collected at Mayo Clinic in Rochester, MN, were compared with measured clinical height and weight for 2,993 adolescents. Results While, overall, 74.2% of adolescents accurately reported their weight status, females, younger adolescents, and proxy (vs self) reporters were more accurate. Controlling for demographic and behavioral characteristics, the higher an individual’s BMI percentile, the less likely there was agreement between self-report and measured BMI percentile. Those with high BMI who misperceive their weight status were less likely than accurate perceivers to attempt weight loss. Conclusion Adolescents’ and proxies’ misperception of weight status increases with BMI percentile. Obtaining an adolescent’s self-perceived weight status in addition to measured height and weight offers clinicians valuable baseline information to discuss motivation for weight loss. PMID:25525400
Accurate calculation of diffraction-limited encircled and ensquared energy.
Andersen, Torben B
2015-09-01
Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that fall outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
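The calibration step described here (a weight factor fitted between the second derivative of the magnetic waveform and the measured acceleration, then applied to the magnetic waveform to recover depth) can be sketched with synthetic, noise-free signals; the 2 Hz compression profile and the coil gain below are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Synthetic illustration: assume the magnetic (coil) waveform is
# proportional to the true compression displacement.
fs = 100.0                                   # sample rate, Hz (assumed)
t = np.arange(0.0, 5.0, 1.0 / fs)
true_depth = 0.02 * (1 - np.cos(2 * np.pi * 2 * t))   # 0..4 cm, in metres
gain = 250.0                                 # unknown coil gain
magnetic = gain * true_depth                 # the "magnetic waveform"
accel = np.gradient(np.gradient(true_depth, t), t)    # measured acceleration

# Weight factor: least-squares fit of the measured acceleration against
# the second derivative of the magnetic waveform.
d2m = np.gradient(np.gradient(magnetic, t), t)
k = np.dot(d2m, accel) / np.dot(d2m, d2m)

# Estimated displacement waveform: weight factor times magnetic waveform.
est_depth = k * magnetic
err_mm = float(np.max(np.abs(est_depth - true_depth)) * 1000)
```

With real signals the fit would absorb sensor noise; here the relation is exact, so the recovered factor equals the inverse gain.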
Fast and Accurate Construction of Confidence Intervals for Heritability.
Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran
2016-06-01
Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
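The core idea behind bootstrap confidence intervals for a bounded variance-ratio parameter can be sketched generically; this is NOT the ALBI method or a REML/LMM estimator, only an illustration of estimating the sampling distribution of an estimator by parametric re-simulation and taking percentiles.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_h2(g, e):
    """Toy 'heritability': a variance-component ratio, bounded in [0, 1]."""
    vg, ve = g.var(ddof=1), e.var(ddof=1)
    return vg / (vg + ve)

def bootstrap_ci(h2_hat, n, n_boot=2000, alpha=0.05):
    """Percentile bootstrap: re-simulate at the fitted value, re-estimate."""
    boots = [estimate_h2(rng.normal(0, np.sqrt(h2_hat), n),
                         rng.normal(0, np.sqrt(1 - h2_hat), n))
             for _ in range(n_boot)]
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])

g = rng.normal(0, np.sqrt(0.3), 500)   # genetic component, true h2 = 0.3
e = rng.normal(0, np.sqrt(0.7), 500)   # environmental component
h2_hat = estimate_h2(g, e)
lo, hi = bootstrap_ci(h2_hat, n=500)
```

Unlike asymptotic SE-based intervals, the percentile interval respects the [0, 1] boundary by construction, which is the abstract's motivation for a bootstrap approach.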
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Lo, Wing-Sze; Ho, Sai-Yin; Mak, Kwok-Kei; Lai, Yuen-Kwan; Lam, Tai-Hing
2009-01-01
Background Weight comments are commonly received by adolescents, but the accuracy of the comments and their effects on weight misperception are unclear. We assessed the prevalence and accuracy of weight comments received by Chinese adolescents from different sources and their relation to weight misperception. Methods In the Hong Kong Student Obesity Surveillance (HKSOS) project 2006–07, 22,612 students aged 11–18 (41.5% boys) completed a questionnaire on obesity. Students responded if family members, peers and professionals had seriously commented over the past 30 days that they were "too fat" or "too thin" in two separate questions. The accuracy of the comments was judged against the actual weight status derived from self-reported height and weight. Self-perceived weight status was also reported and any discordance with the actual weight status denoted weight misperception. Logistic regression yielded adjusted odds ratios for weight misperception by the type of weight comments received. Results One in three students received weight comments, and the mother was the most common source of weight comments. Health professionals were the most accurate source of weight comments, yet less than half the comments were correct. Adolescents receiving incorrect comments had increased risk of having weight misperception in all weight status groups. Receiving conflicting comments was positively associated with weight misperception among normal weight adolescents. In contrast, underweight and overweight/obese adolescents receiving correct weight comments were less likely to have weight misperception. Conclusion Weight comments, mostly incorrect, were commonly received by Chinese adolescents in Hong Kong, and such incorrect comments were associated with weight misperception. PMID:19642972
Rapid Weight Loss in Sports with Weight Classes.
Khodaee, Morteza; Olewinski, Lucianne; Shadgan, Babak; Kiningham, Robert R
2015-01-01
Weight-sensitive sports are popular among elite and nonelite athletes. Rapid weight loss (RWL) practice has been an essential part of many of these sports for many decades. Due to the limited epidemiological studies on the prevalence of RWL, its true prevalence is unknown. It is estimated that more than half of athletes in weight-class sports have practiced RWL during the competitive periods. As RWL can have significant physical, physiological, and psychological negative effects on athletes, its practice has been discouraged for many years. It seems that appropriate rule changes have had the biggest impact on the practice of RWL in sports like wrestling. An individualized and well-planned gradual and safe weight loss program under the supervision of a team of coaching staff, athletic trainers, sports nutritionists, and sports physicians is recommended. PMID:26561763
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Schröder, Julian; Cheng, Bastian; Ebinger, Martin; Köhrmann, Martin; Wu, Ona; Kang, Dong-Wha; Liebeskind, David S.; Tourdias, Thomas; Singer, Oliver C.; Christensen, Soren; Campbell, Bruce; Luby, Marie; Warach, Steven; Fiehler, Jens; Fiebach, Jochen B.; Gerloff, Christian; Thomalla, Götz
2016-01-01
Background and Purpose Alberta Stroke Program Early Computed Tomographic Score (ASPECTS) has been used to estimate diffusion-weighted imaging (DWI) lesion volume in acute stroke. We aimed to assess correlations of DWI-ASPECTS with lesion volume in different middle cerebral artery (MCA) subregions and reproduce existing ASPECTS thresholds of a malignant profile defined by lesion volume ≥100 mL. Methods We analyzed data of patients with MCA stroke from a prospective observational study of DWI and fluid-attenuated inversion recovery in acute stroke. DWI-ASPECTS and lesion volume were calculated. The population was divided into subgroups based on lesion localization (superficial MCA territory, deep MCA territory, or both). Correlation of ASPECTS and infarct volume was calculated, and receiver-operating characteristic (ROC) curve analysis was performed to identify the optimal ASPECTS threshold for ≥100-mL lesion volume. Results A total of 496 patients were included. There was a significant negative correlation between ASPECTS and DWI lesion volume (r=−0.78; P<0.0001). With regard to lesion localization, correlation was weaker in the deep MCA region (r=−0.19; P=0.038) when compared with superficial (r=−0.72; P<0.001) or combined superficial and deep MCA lesions (r=−0.72; P<0.001). ROC analysis revealed ASPECTS ≤6 as the best cutoff to identify ≥100-mL DWI lesion volume; however, the positive predictive value was low (0.35). Conclusions ASPECTS has limitations when lesion location is not considered. Identification of patients with malignant profile by DWI-ASPECTS may be unreliable. ASPECTS may be a useful tool for the evaluation of noncontrast computed tomography. However, if MRI is used, ASPECTS seems dispensable because lesion volume can easily be quantified on DWI maps. PMID:25316278
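The threshold search described here (ROC analysis picking the ASPECTS cutoff that best identifies a ≥100 mL lesion) can be sketched with Youden's J statistic; the synthetic data generation below is an assumption for illustration, not the study cohort.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: lower ASPECTS tends to accompany larger lesions.
n = 500
aspects = rng.integers(0, 11, n)                           # scores 0..10
volume = np.clip(250 - 22 * aspects + rng.normal(0, 40, n), 0, None)
malignant = volume >= 100                                  # outcome of interest

def best_cutoff(scores, outcome):
    """Best 'score <= c' rule by Youden's J = sensitivity + specificity - 1."""
    best_c, best_j = None, -1.0
    for c in np.unique(scores):
        pred = scores <= c
        sens = (pred & outcome).sum() / outcome.sum()
        spec = (~pred & ~outcome).sum() / (~outcome).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = int(c), j
    return best_c, best_j

cutoff, j = best_cutoff(aspects, malignant)
ppv = (malignant & (aspects <= cutoff)).sum() / (aspects <= cutoff).sum()
```

A high-J cutoff can still have a poor positive predictive value when the outcome is rare or the score overlaps between groups, which mirrors the abstract's finding of a low PPV at the selected threshold.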
Technology Transfer Automated Retrieval System (TEKTRAN)
Having accurate estimates of the cost of irrigation is important when making irrigation decisions. Estimates of fixed costs are critical for investment decisions. Operating cost estimates can assist in decisions regarding additional irrigations. This fact sheet examines the costs associated with ...
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
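The listed inputs map naturally onto a back-of-envelope unit-price calculation; the straight-line annualization and every figure below are illustrative assumptions, not the IPEG4 model or its data.

```python
# Sketch: annualized cost categories divided by annual production volume.
# The cost categories follow the abstract; the annualization scheme and
# the numbers are assumptions for illustration.
def price_per_unit(equipment_cost, equipment_life_yr, space_cost_yr,
                   labor_cost_yr, materials_cost_yr, utilities_cost_yr,
                   annual_volume):
    annualized = (equipment_cost / equipment_life_yr + space_cost_yr
                  + labor_cost_yr + materials_cost_yr + utilities_cost_yr)
    return annualized / annual_volume

price = price_per_unit(equipment_cost=500_000, equipment_life_yr=10,
                       space_cost_yr=20_000, labor_cost_yr=150_000,
                       materials_cost_yr=80_000, utilities_cost_yr=30_000,
                       annual_volume=100_000)
# (50_000 + 20_000 + 150_000 + 80_000 + 30_000) / 100_000 = 3.30 per unit
```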
African American Women's Perception of Their Own Weight Status Compared to Measured Weight Status
Technology Transfer Automated Retrieval System (TEKTRAN)
Previous research indicates that African American (AA) women may be more accepting of larger body sizes compared with women of other races. This study assessed whether AA women perceived their own weight status accurately, when compared with their actual weight classification. Participants were 528 ...
Informed Test Component Weighting.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
2001-01-01
Identifies and evaluates alternative methods for weighting tests. Presents formulas for composite reliability and validity as a function of component weights and suggests a rational process that identifies and considers trade-offs in determining weights. Discusses drawbacks to implicit weighting and explicit weighting and the difficulty of…
Correlates of Low Birth Weight
Hazarika, Jayant; Dutta, Sudip
2014-01-01
Background. Low birth weight is the single most important factor that determines the chances of child survival. A recent annual estimation indicated that nearly 8 million infants are born with low birth weight in India. The infant mortality rate is about 20 times greater for all low birth weight babies. Methods. A matched case–control study was conducted on 130 low birth weight babies and 130 controls for 12 months (from August 1, 2007, to July 31, 2008) at the Central Referral Hospital, Tadong, East District of Sikkim, India. Data were analyzed using the Statistical Package for Social Sciences, version 10.0 for Windows. Chi-square test and multiple logistic regression were applied. A P value less than .05 was considered as significant. Results. In the first phase of this study, 711 newborn babies, borne by 680 mothers, were screened at the Central Referral Hospital of Sikkim during the 1-year study period, and the proportion of low birth weight babies was determined to be 130 (18.3%). Conclusion. Multiple logistic regression analysis, conducted in the second phase, revealed that low or middle socioeconomic status, maternal underweight, twin pregnancy, previous history of delivery of low birth weight babies, smoking and consumption of alcohol during pregnancy, and congenital anomalies had independent significant association with low birth weight in this study population.
Accurate adiabatic correction in the hydrogen molecule
Pachucki, Krzysztof; Komasa, Jacek
2014-12-14
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.
Accurate adiabatic correction in the hydrogen molecule
NASA Astrophysics Data System (ADS)
Pachucki, Krzysztof; Komasa, Jacek
2014-12-01
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.
Accurate ab Initio Spin Densities
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740]. PMID:22707921
Weight status and the perception of body image in men
Gardner, Rick M
2014-01-01
Understanding the role of body size in relation to the accuracy of body image perception in men is an important topic because of the implications for avoiding and treating obesity, and it may serve as a potential diagnostic criterion for eating disorders. The early research on this topic produced mixed findings. About one-half of the early studies showed that obese men overestimated their body size, with the remaining half providing accurate estimates. Later, improvements in research technology and methodology provided a clearer indication of the role of weight status in body image perception. Research in our laboratory has also produced diverse findings, including that obese subjects sometimes overestimate their body size. However, when examining our findings across several studies, obese subjects had about the same level of accuracy in estimating their body size as normal-weight subjects. Studies in our laboratory also permitted the separation of sensory and nonsensory factors in body image perception. In all but one instance, no differences were found overall between the ability of obese and normal-weight subjects to detect overall changes in body size. Importantly, however, obese subjects are better at detecting changes in their body size when the image is distorted to be too thin as compared to too wide. Both obese and normal-weight men require about a 3%–7% change in the width of their body size in order to detect the change reliably. Correlations between a range of body mass index values and body size estimation accuracy indicated no relationship between these variables. Numerous studies in other laboratories asked men to place their body size into discrete categories, ranging from thin to obese. Researchers found that overweight and obese men underestimate their weight status, and that men are less accurate in their categorizations than are women. Cultural influences have been found to be important, with body size underestimations occurring in cultures
Feng, Hui; Jiang, Ni; Huang, Chenglong; Fang, Wei; Yang, Wanneng; Chen, Guoxing; Xiong, Lizhong; Liu, Qian
2013-09-01
Biomass is an important component of plant phenomics, and the existing methods for biomass estimation for individual plants are either destructive or lack accuracy. In this study, a hyperspectral imaging system was developed for the accurate prediction of the above-ground biomass of individual rice plants in the visible and near-infrared spectral region. First, the structure of the system and the influence of various parameters on the camera acquisition speed were established. Then the system was used to image 152 rice plants, which were selected from the rice mini-core collection, in two stages: the tillering to elongation (T-E) stage and the booting to heading (B-H) stage. Several variables were extracted from the images. Next, linear stepwise regression analysis and 5-fold cross-validation were used to select effective variables for model construction and test the stability of the model, respectively. For the T-E stage, the R² value was 0.940 for the fresh weight (FW) and 0.935 for the dry weight (DW). For the B-H stage, the R² value was 0.891 for the FW and 0.783 for the DW. Moreover, estimations of the biomass using visible light images were also calculated. These comparisons showed that hyperspectral imaging performed better than the visible light imaging. Therefore, this study provides not only a stable hyperspectral imaging platform but also an accurate and nondestructive method for the prediction of biomass for individual rice plants. PMID:24089866
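The evaluation step named here (5-fold cross-validated R² for a linear model) can be sketched as below; the image-derived variables and their weights are synthetic placeholders, and no stepwise selection is performed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins: 152 plants, 6 image-derived variables, a linear
# relationship to "fresh weight" plus noise (assumptions for illustration).
n, p = 152, 6
X = rng.normal(0.0, 1.0, (n, p))
w = np.array([1.5, -0.7, 0.9, 0.0, 0.3, 0.2])
y = X @ w + rng.normal(0.0, 0.5, n)

def cv_r2(X, y, k=5):
    """Mean out-of-fold R^2 of an ordinary least-squares fit."""
    idx = rng.permutation(len(y))
    r2s = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        Xt = np.column_stack([np.ones(train.size), X[train]])
        coef, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)
        Xv = np.column_stack([np.ones(fold.size), X[fold]])
        resid = y[fold] - Xv @ coef
        r2s.append(1 - np.sum(resid ** 2)
                     / np.sum((y[fold] - y[fold].mean()) ** 2))
    return float(np.mean(r2s))

r2 = cv_r2(X, y)
```

Out-of-fold R² guards against the optimism of in-sample fit, which is why the study reports cross-validated stability alongside the regression.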
... Measure and Interpret Weight Status Adult Body Mass Index or BMI Body Mass Index (BMI) is a person's weight in kilograms divided ... finding your height and weight in this BMI Index Chart 1 . If your BMI is less than ...
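The definition in the snippet above (weight in kilograms divided by height in metres squared) is straightforward to sketch together with the standard adult cut-points; this is a generic illustration, not the page's own tool.

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Standard adult cut-points.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

b = bmi(70, 1.75)   # 70 kg at 1.75 m -> about 22.9, "normal"
```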
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed (the pseudo-Thellier protocol), which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Parametric study of helicopter aircraft systems costs and weights
NASA Technical Reports Server (NTRS)
Beltramo, M. N.
1980-01-01
Weight estimating relationships (WERs) and recurring production cost estimating relationships (CERs) were developed for helicopters at the system level. The WERs estimate system level weight based on performance or design characteristics which are available during concept formulation or the preliminary design phase. The CER (or CERs in some cases) for each system utilize weight (either actual or estimated using the appropriate WER) and production quantity as the key parameters.
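One common functional form for cost estimating relationships of the kind described is a power law in weight and quantity, fitted in log space; a negative quantity exponent captures a learning effect. The form and every number below are illustrative assumptions, not the relationships from the report.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observed" systems: cost = a * W^b * Q^c with noise.
m = 40
weight = rng.uniform(50.0, 500.0, m)            # system weight
qty = rng.integers(1, 200, m).astype(float)     # production quantity
a, b, c = 120.0, 0.8, -0.15                     # assumed true parameters
cost = a * weight ** b * qty ** c * rng.lognormal(0.0, 0.05, m)

# Log-linear least squares: log(cost) = log(a) + b*log(W) + c*log(Q).
X = np.column_stack([np.ones(m), np.log(weight), np.log(qty)])
coef, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
a_hat, b_hat, c_hat = float(np.exp(coef[0])), float(coef[1]), float(coef[2])
```

Fitting in log space makes the multiplicative error additive, which is the usual justification for this parameterization in CER work.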
Accurate Inventories Of Irrigated Land
NASA Technical Reports Server (NTRS)
Wall, S.; Thomas, R.; Brown, C.
1992-01-01
System for taking land-use inventories overcomes two problems in estimating extent of irrigated land: only small portion of large state surveyed in given year, and aerial photographs made on 1 day out of year do not provide adequate picture of areas growing more than one crop per year. Developed for state of California as guide to controlling, protecting, conserving, and distributing water within state. Adapted to any large area in which large amounts of irrigation water needed for agriculture. Combination of satellite images, aerial photography, and ground surveys yields data for computer analysis. Analyst also consults agricultural statistics, current farm reports, weather reports, and maps. These information sources aid in interpreting patterns, colors, textures, and shapes on Landsat images.
Judging body weight from faces: the height-weight illusion.
Schneider, Tobias M; Hecht, Heiko; Carbon, Claus-Christian
2012-01-01
Being able to exploit features of the human face to predict health and fitness can serve as an evolutionary advantage. Surface features such as facial symmetry, averageness, and skin colour are known to influence attractiveness. We sought to determine whether observers are able to extract more complex features, namely body weight. If possible, it could be used as a predictor for health and fitness. For instance, facial adiposity could be taken to indicate a cardiovascular challenge or proneness to infections. Observers seem to be able to glean body weight information from frontal views of a face. Is weight estimation robust across different viewing angles? We showed that participants strongly overestimated body weight for faces photographed from a lower vantage point while underestimating it for faces photographed from a higher vantage point. The perspective distortions of simple facial measures (e.g., width-to-height ratio) that accompany changes in vantage point do not suffice to predict body weight. Instead, more complex patterns must be involved in the height-weight illusion. PMID:22611670
Structural weight analysis of hypersonic aircraft
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1972-01-01
The weights of major structural components of hypersonic, liquid hydrogen fueled aircraft are estimated and discussed. The major components are the body structure, body thermal protection system, tankage, and wing structure. The method of estimating body structure weight is presented in detail while the weights of the other components are estimated by methods given in referenced papers. Two nominal vehicle concepts are considered. The advanced concept employs a wing-body configuration and hot structure with a nonintegral tank, while the potential concept employs an all body configuration and cold, integral pillow tankage structure. Characteristics of these two concepts are discussed and parametric data relating their weight fractions to variations in vehicle shape and size, design criteria and mission requirements, and structural arrangement are presented. Although the potential concept is shown to have a weight advantage over the advanced, it involves more design uncertainties since it is farther removed in design from existing aircraft.
Accurate body composition measures from whole-body silhouettes
Xie, Bowen; Avila, Jesus I.; Ng, Bennett K.; Fan, Bo; Loo, Victoria; Gilsanz, Vicente; Hangartner, Thomas; Kalkwarf, Heidi J.; Lappe, Joan; Oberfield, Sharon; Winer, Karen; Zemel, Babette; Shepherd, John A.
2015-01-01
Purpose: Obesity and its consequences, such as diabetes, are global health issues that burden about 171 × 10⁶ adult individuals worldwide. Fat mass index (FMI, kg/m²), fat-free mass index (FFMI, kg/m²), and percent fat mass may be useful to evaluate under- and overnutrition and muscle development in a clinical or research environment. This proof-of-concept study tested whether frontal whole-body silhouettes could be used to accurately measure body composition parameters using active shape modeling (ASM) techniques. Methods: Binary shape images (silhouettes) were generated from the skin outline of dual-energy x-ray absorptiometry (DXA) whole-body scans of 200 healthy children of ages from 6 to 16 yr. The silhouette shape variation from the average was described using an ASM, which computed principal components for unique modes of shape. Predictive models were derived from the modes for FMI, FFMI, and percent fat using stepwise linear regression. The models were compared to simple models using demographics alone [age, sex, height, weight, and body mass index z-scores (BMIZ)]. Results: The authors found that 95% of the shape variation of the sampled population could be explained using 26 modes. In most cases, the body composition variables could be predicted similarly between demographics-only and shape-only models. However, the combination of shape with demographics improved all estimates of boys and girls compared to the demographics-only model. The best prediction models for FMI, FFMI, and percent fat agreed with the actual measures with R² adj. (the coefficient of determination adjusted for the number of parameters used in the model equation) values of 0.86, 0.95, and 0.75 for boys and 0.90, 0.89, and 0.69 for girls, respectively. Conclusions: Whole-body silhouettes in children may be useful to derive estimates of body composition including FMI, FFMI, and percent fat. These results support the feasibility of measuring body composition variables from simple
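The two-stage pipeline described (principal-component "shape modes" of silhouette vectors, then regression from mode scores to a body-composition target) can be sketched as below; all data are synthetic placeholders, not the DXA-derived silhouettes from the study, and the stepwise selection is replaced by a plain least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: 200 subjects, 40 (x, y) silhouette landmarks each.
n, p = 200, 80
shapes = rng.normal(0.0, 1.0, (n, p))             # aligned silhouettes
target = shapes @ rng.normal(0.0, 0.1, p) + rng.normal(0.0, 0.1, n)

# Shape-model step: principal components of the centered shape matrix.
centered = shapes - shapes.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(explained, 0.95)) + 1     # modes for 95% variance
scores = centered @ vt[:k].T                      # per-subject mode scores

# Regression step: predict the target from the mode scores.
Xd = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(Xd, target, rcond=None)
pred = Xd @ coef
r2 = float(1 - np.sum((target - pred) ** 2)
             / np.sum((target - target.mean()) ** 2))
```

With real, aligned silhouettes the leading modes carry most of the anatomical variation, so far fewer modes are needed than with random data; the study reports 26 modes for 95% of variance.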
Body Weight Relationships in Early Marriage: Weight Relevance, Weight Comparisons, and Weight Talk
Bove, Caron F.; Sobal, Jeffery
2011-01-01
This investigation uncovered processes underlying the dynamics of body weight and body image among individuals involved in nascent heterosexual marital relationships in Upstate New York. In-depth, semi-structured qualitative interviews conducted with 34 informants, 20 women and 14 men, just prior to marriage and again one year later were used to explore continuity and change in cognitive, affective, and behavioral factors relating to body weight and body image at the time of marriage, an important transition in the life course. Three major conceptual themes operated in the process of developing and enacting informants’ body weight relationships with their partner: weight relevance, weight comparisons, and weight talk. Weight relevance encompassed the changing significance of weight during early marriage and included attracting and capturing a mate, relaxing about weight, living healthily, and concentrating on weight. Weight comparisons between partners involved weight relativism, weight competition, weight envy, and weight role models. Weight talk employed pragmatic talk, active and passive reassurance, and complaining and criticizing. Concepts emerging from this investigation may be useful in designing future studies of body weight in adulthood and approaches to managing it. PMID:21864601
Counterexamples concerning a weighted L^2 projection
NASA Astrophysics Data System (ADS)
Xu, Jinchao
1991-10-01
Counterexamples are given to show that some results concerning a weighted L^2 projection presented earlier by Bramble and the author are sharp, i.e., that certain error and stability estimates are impossible in some cases.
Fast Quaternion Attitude Estimation from Two Vector Measurements
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
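The abstract's suboptimal method matches the classical TRIAD solution, which can be sketched directly: build an orthonormal triad from each vector pair and compose them into a rotation matrix. This is the standard textbook construction, not the paper's fast quaternion formulation, and it trusts the first vector pair exactly.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude solution: rotation matrix A such that b ≈ A @ r,
    given two vectors measured in the body frame (b1, b2) and their
    known directions in the reference frame (r1, r2)."""
    def frame(v1, v2):
        # Orthonormal triad anchored on v1, with v2 fixing the plane.
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, b2) @ frame(r1, r2).T

# Example: recover a known 90° yaw from noise-free sun/field directions.
A = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
r1, r2 = np.array([1., 0., 0.]), np.array([0., 0., 1.])
A_est = triad(A @ r1, A @ r2, r1, r2)
```

With noise-free measurements TRIAD recovers the rotation exactly; the paper's optimal estimator improves on it when both observations are noisy and weighted differently.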
Fact Sheet Proven Weight Loss Methods What can weight loss do for you? Losing weight can improve your health in a number of ways. It can lower ... at www.hormone.org/Spanish .
Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you cannot lose weight through diet and exercise or have serious health problems caused by obesity. There are different types of weight loss surgery. They often limit the ...
Estimating the path-average rainwater content and updraft speed along a microwave link
NASA Technical Reports Server (NTRS)
Jameson, Arthur R.
1993-01-01
There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A(38-25), to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of polarization propagation differential phase shift (Phi-DP) divided by (1-R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Phi-DP) produces more accurate estimates of W than does A38-25. In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.
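The estimation chain the abstract describes (attenuation ratio → mass-weighted drop size Dm → axis ratio R → rainwater content from Phi-DP/(1-R)) can be sketched as a pipeline. The linear calibration functions and every coefficient below are purely illustrative placeholders; the paper derives the actual relationships from drop-size physics and simulation.

```python
import numpy as np

def mass_weighted_diameter(a38, a25):
    """Estimate Dm (mm) from the attenuation ratio A38/A25.
    Hypothetical linear fit, for illustration only."""
    return 0.5 + 1.2 * (a38 / a25 - 1.0)

def mean_axis_ratio(dm):
    """Mass-weighted mean raindrop axis ratio from Dm.
    Hypothetical fit: larger drops flatten more."""
    return np.clip(1.0 - 0.06 * dm, 0.5, 1.0)

def rainwater_content(phi_dp, dm, c=0.04):
    """W from differential phase shift Phi-DP and Dm, following the
    abstract's Phi-DP / (1 - R) form; c is a placeholder constant."""
    return c * phi_dp / (1.0 - mean_axis_ratio(dm))

# Example along one link: attenuations in dB/km at 38 and 25 GHz,
# Phi-DP in deg/km (all values invented for the sketch).
dm = mass_weighted_diameter(a38=8.0, a25=5.0)
w = rainwater_content(phi_dp=2.0, dm=dm)
```

The point of the structure is that Dm enters W only through the axis ratio, so the ratio-based estimator is less sensitive to the drop-size distribution than the A38-25 difference.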
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
PREVENTING WEIGHT REGAIN AFTER WEIGHT LOSS
Technology Transfer Automated Retrieval System (TEKTRAN)
For most dieters, a regaining of lost weight is an all too common experience. Indeed, virtually all interventions for weight loss show limited or even poor long-term effectiveness. This sobering reality was reflected in a comprehensive review of nonsurgical treatments of obesity conducted by the Ins...
Phase estimation from noisy phase fringe patterns using linearly independent basis functions
NASA Astrophysics Data System (ADS)
Kulkarni, Rishikesh; Rastogi, Pramod
2015-12-01
A novel technique is proposed for obtaining unwrapped phase estimation from a highly noisy exponential phase field. In this technique, the interference phase is represented as a linear combination of linearly independent and pre-defined basis functions along each row/column of the phase field at a time. Consequently, the problem of phase estimation is converted into the problem of estimating the weights of the basis functions. The extended Kalman filter formulation allows for accurate estimation of these weights. The simulation results indicate that the formulation offers strong robustness to noise in the phase estimation. Experimental results obtained using digital holographic interferometry and digital speckle pattern interferometry setups are provided to demonstrate the practical applicability of the proposed method.
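The basis-weight reformulation can be illustrated on one row of a synthetic exponential phase field. This sketch substitutes a simpler estimator for the paper's extended Kalman filter: it unwraps the noisy phase with numpy and solves for the weights by least squares. The polynomial basis, noise level, and true weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_basis = 256, 4
x = np.linspace(-1.0, 1.0, n_pix)

# Pre-defined linearly independent basis functions (monomials here).
B = np.vander(x, n_basis, increasing=True)
w_true = np.array([1.0, 8.0, -5.0, 3.0])
phase = B @ w_true                      # true continuous phase along one row

# Noisy exponential phase field, as an interferometer would record it.
noise = 0.2 * (rng.normal(size=n_pix) + 1j * rng.normal(size=n_pix))
field = np.exp(1j * phase) + noise

# Stand-in for the EKF: unwrap the noisy phase, then fit the basis weights.
unwrapped = np.unwrap(np.angle(field))
w_hat, *_ = np.linalg.lstsq(B, unwrapped, rcond=None)
```

Note that unwrapping recovers the phase only up to an additive multiple of 2π, which shows up entirely in the constant-basis weight; the non-constant weights are recovered directly. The EKF in the paper avoids explicit unwrapping, which is what gives it its robustness at high noise.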