NASA Technical Reports Server (NTRS)
Kranz, David William
2010-01-01
The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and to compare the accuracy of measurements made using these materials. The test was conducted to obtain a clearer understanding of which of these materials may yield the highest measurement accuracy in comparison to the completed tile bond. The results will in turn be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments: hot external aerothermal environments in high-Mach-number flight lead to high structural temperatures, and the height differences between adjacent tiles are critical during high-Mach reentries. The Space Shuttle Thermal Protection System is a delicate and highly calculated system. The thermal tiles on the orbiter are measured to an accuracy of 0.001 inch, and the accuracy of these tile measurements is critical to a successful reentry. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that Bees' Wax yielded a higher rate of accuracy when compared with the baseline test. In addition, testing for the effect of experience level on accuracy yielded no evidence of a difference. Lastly, the use of the Trammel tool versus the Shim Pack yielded variable differences across those tests.
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate the availability and prices of key staples, which in turn can inform decisions about targeting humanitarian responses such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatial and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for prediction. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
Comparing diagnostic tests on benefit-risk.
Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott
2016-01-01
Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit-risk may be more conclusive because the clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation of quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure: the average of sensitivity and specificity weighted for prevalence and the relative importance of false positive and false negative testing errors, also interpretable in terms of the cost-benefit ratio of treating non-diseased versus diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer, with test-positive subjects being referred to colonoscopy.
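The diagnostic yield table and weighted accuracy described above can be sketched numerically. The sensitivity, specificity, prevalence, and cost-benefit ratio below are invented for illustration, and the weighted-accuracy formula is one plausible reading of the verbal definition in the abstract, not necessarily the authors' exact expression.

```python
# Sketch of a diagnostic yield table and a prevalence/cost-weighted accuracy.
# All numeric inputs (sens, spec, prev, r) are hypothetical illustration values.

def diagnostic_yield(sens, spec, prev, n=100_000):
    """Expected counts of TP/FN/TN/FP test results in a population of size n."""
    diseased = n * prev
    healthy = n - diseased
    return {
        "TP": sens * diseased,
        "FN": (1 - sens) * diseased,
        "TN": spec * healthy,
        "FP": (1 - spec) * healthy,
    }

def weighted_accuracy(sens, spec, prev, r):
    """One plausible form: average of sensitivity and specificity weighted for
    prevalence and the relative importance r of false positive errors."""
    w = prev / (prev + r * (1 - prev))
    return w * sens + (1 - w) * spec

table = diagnostic_yield(sens=0.90, spec=0.85, prev=0.01)
print(table)  # roughly {'TP': 900, 'FN': 100, 'TN': 84150, 'FP': 14850}
print(round(weighted_accuracy(0.90, 0.85, prev=0.01, r=0.1), 3))
```

With a rare disease and a small cost-benefit ratio r, the weighted accuracy leans heavily on specificity, which is exactly why accuracy-only comparisons can mislead.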
Wheat productivity estimates using LANDSAT data
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Colwell, J. E. (Principal Investigator); Rice, D. P.; Bresnahan, P. A.
1977-01-01
The author has identified the following significant results. Large area LANDSAT yield estimates were generated. These results were compared with estimates computed using a meteorological yield model (CCEA). Both of these estimates were compared with Kansas Crop and Livestock Reporting Service (KCLRS) estimates of yield, in an attempt to assess the relative and absolute accuracy of the LANDSAT and CCEA estimates. Results were inconclusive. A large area direct wheat prediction procedure was implemented. Initial results have produced a wheat production estimate comparable with the KCLRS estimate.
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested whether using aerial measurements of canopy temperature, and green and red normalized difference vegetation index, as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on the training and test sets, and grain yield on the training set, were modeled as multivariate and compared to univariate models with grain yield on the training set only. Cross-validation accuracies were estimated within and across environments, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic, prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured by high-throughput phenotyping could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
Cow genotyping strategies for genomic selection in a small dairy cattle population.
Jenko, J; Wiggans, G R; Cooper, T A; Eaglen, S A E; Luff, W G de L; Bichard, M; Pong-Wong, R; Woolliams, J A
2017-01-01
This study compares how different cow genotyping strategies increase the accuracy of genomic estimated breeding values (EBV) in dairy cattle breeds with low numbers. In these breeds, few sires have progeny records, and genotyping cows can improve the accuracy of genomic EBV. The Guernsey breed is a small dairy cattle breed with approximately 14,000 recorded individuals worldwide. Predictions of phenotypes of milk yield, fat yield, protein yield, and calving interval were made for Guernsey cows from England and Guernsey Island using genomic EBV, with training sets including 197 de-regressed proofs of genotyped bulls and cows selected from among 1,440 genotyped cows using different genotyping strategies. Accuracies of predictions were tested using 10-fold cross-validation among the cows. Genomic EBV were predicted using 4 different methods: (1) pedigree BLUP, (2) genomic BLUP using only bulls, (3) univariate genomic BLUP using bulls and cows, and (4) bivariate genomic BLUP. Genotyping cows with phenotypes and using their data for the prediction of single nucleotide polymorphism effects increased the correlation between genomic EBV and phenotypes, compared with using only bulls, by 0.163±0.022 for milk yield, 0.111±0.021 for fat yield, and 0.113±0.018 for protein yield; a decrease of 0.014±0.010 for calving interval from a low base was the only exception. Genetic correlations between phenotypes from bulls and cows were approximately 0.6 for all yield traits and significantly different from 1. Only a very small change occurred in the correlation between genomic EBV and phenotypes when using the bivariate model. It was always better to genotype all the cows, but when only half of the cows were genotyped, a divergent selection strategy was better than random or directional selection. Divergent selection of 30% of the cows remained superior for the yield traits in 8 of 10 folds. Copyright © 2017 American Dairy Science Association. 
Published by Elsevier Inc. All rights reserved.
Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues
2014-06-01
The aim of our study was to systematically compare different methodologies in order to establish an evidence-based approach, based on tendon thickness and structure, for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for both maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria, and for different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess the sensitivity, specificity, and accuracy of the different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P<.05). When used as an isolated criterion, MSTT > 6.0 mm provided the best accuracy (93.7%) compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% accuracy for diagnosis. The best overall yield was obtained by both parallel and sequential testing using either MSTT > 6.0 mm or ATS as diagnostic criteria in no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among these parallel and sequential tests that provided the best overall yield, additional analysis revealed that sequential testing first evaluating tendon structure required assessment of 258 criteria (vs. 261 for sequential testing first evaluating tendon thickness and 412 for parallel testing) and demanded a mean of 16.1 s to assess diagnostic criteria and reach the diagnosis (vs. 43.3 s for sequential testing first evaluating tendon thickness and 47.4 s for parallel testing). We found that using either MSTT > 6.0 mm or ATS as diagnostic criteria in both parallel and sequential testing provides the best overall yield for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. 
Among these strategies, a two-step sequential approach first assessing tendon structure was advantageous because it required a lower number of criteria to be assessed and demanded less time to assess diagnostic criteria and reach the diagnosis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
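The parallel versus sequential comparison above turns on how many criteria must be assessed when positivity is defined as "either criterion abnormal". A minimal sketch, with an invented toy cohort and the criterion names used above, illustrates why both strategies classify patients identically while sequential testing assesses fewer criteria:

```python
# Hypothetical per-patient boolean findings; illustrates that OR-combined
# parallel and sequential testing yield the same diagnoses, while sequential
# testing stops after a positive first criterion and so assesses fewer criteria.

def classify(patients, order=("ATS", "MSTT"), sequential=True):
    """Return (positive_count, criteria_assessed) for OR-combined criteria."""
    positives, assessed = 0, 0
    for p in patients:
        if sequential:
            for crit in order:
                assessed += 1
                if p[crit]:          # stop at the first abnormal criterion
                    positives += 1
                    break
        else:                        # parallel: always assess every criterion
            assessed += len(order)
            positives += any(p[c] for c in order)
    return positives, assessed

# Toy cohort: each dict flags whether a criterion is positive (made-up data).
cohort = [{"ATS": True, "MSTT": True}, {"ATS": True, "MSTT": False},
          {"ATS": False, "MSTT": True}, {"ATS": False, "MSTT": False}]
print(classify(cohort, sequential=True))   # (3, 6)
print(classify(cohort, sequential=False))  # (3, 8)
```

Both strategies find the same 3 positives, but the sequential order assesses 6 criteria instead of 8, mirroring the 258 vs. 412 assessments reported above.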
Yield estimation of corn with multispectral data and the potential of using imaging spectrometers
NASA Astrophysics Data System (ADS)
Bach, Heike
1997-05-01
In the frame of the Special Yield Estimation, a regular procedure conducted for the European Union to more accurately estimate agricultural yield, a project was conducted for the State Ministry for Rural Environment, Food and Forestry of Baden-Wuerttemberg, Germany, to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production model is used for yield prediction. Based solely on four LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results independently gathered within the Special Yield Estimation for 23 test fields in the Upper Rhine Valley. The agreement between LANDSAT-based estimates and the Special Yield Estimation shows a relative error of 2.3 percent. The comparison of the results for single fields shows that six weeks before harvest the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMU), which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications, dynamic model compensation (DMC), is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro-platform alignment within 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
Smoot, Betty J.; Wong, Josephine F.; Dodd, Marylin J.
2013-01-01
Objective To compare the diagnostic accuracy of measures of breast cancer–related lymphedema (BCRL). Design Cross-sectional design comparing clinical measures with the criterion standard of previous diagnosis of BCRL. Setting University of California San Francisco Translational Science Clinical Research Center. Participants Women older than 18 years and more than 6 months posttreatment for breast cancer (n=141; 70 with BCRL, 71 without BCRL). Interventions Not applicable. Main Outcome Measures Sensitivity, specificity, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were used to evaluate accuracy. Results A total of 141 women were categorized as having (n=70) or not having (n=71) BCRL based on past diagnosis by a health care provider, which was used as the reference standard. Analyses of ROC curves for the continuous outcomes yielded AUCs of .68 to .88 (P<.001); of the physical measures, bioimpedance spectroscopy yielded the highest accuracy, with an AUC of .88 (95% confidence interval, .80–.96) for women whose dominant arm was the affected arm. The lowest accuracy was found using the 2-cm diagnostic cutoff score to identify previously diagnosed BCRL (AUC, .54–.65). Conclusions Our findings support the use of bioimpedance spectroscopy in the assessment of existing BCRL. Refining diagnostic cutoff values may improve accuracy of diagnosis and warrants further investigation. PMID:21440706
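An AUC like those reported above can be computed directly from a continuous measure via its Mann-Whitney interpretation: the probability that a randomly chosen case scores higher than a randomly chosen control. The scores below are made up for illustration and are not the study's data:

```python
# Minimal sketch of computing an AUC from a continuous diagnostic measure
# (e.g. a bioimpedance-style ratio); all score values here are invented.

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: probability that a random case
    scores higher than a random control, counting ties as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

with_bcrl = [1.18, 1.25, 1.10, 1.32, 1.05]     # hypothetical affected-arm scores
without_bcrl = [1.02, 0.98, 1.07, 1.00, 1.12]  # hypothetical control scores
print(auc(with_bcrl, without_bcrl))            # 0.88
```

An AUC of .5 means the measure discriminates no better than chance, which is why the 2-cm cutoff's AUC of .54 to .65 counts as low accuracy.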
Compact Intraoperative MRI: Stereotactic Accuracy and Future Directions.
Markowitz, Daniel; Lin, Dishen; Salas, Sussan; Kohn, Nina; Schulder, Michael
2017-01-01
Intraoperative imaging must supply data that can be used for accurate stereotactic navigation. This information should be at least as accurate as that acquired from diagnostic imagers. The aim of this study was to compare the stereotactic accuracy of an updated compact intraoperative MRI (iMRI) device based on a 0.15-T magnet to standard surgical navigation on a 1.5-T diagnostic MRI scanner, and to navigation with an earlier model of the same system. The accuracy of each system was assessed using a water-filled phantom model of the brain. Data collected with the new system were compared to those obtained in a previous study assessing the older system. The accuracy of the new iMRI was measured against standard surgical navigation on a 1.5-T MRI using T1-weighted (T1W) images. The mean error with the iMRI using T1W images was lower than that based on images from the 1.5-T scan (1.24 vs. 2.43 mm). T2W images from the newer iMRI yielded a lower navigation error than those acquired with the prior model (1.28 vs. 3.15 mm). Improvements in magnet design can yield progressive increases in accuracy, validating the concept of compact, low-field iMRI. Avoiding the need for registration between image and surgical space increases navigation accuracy. © 2017 S. Karger AG, Basel.
Adjusting site index and age to account for genetic effects in yield equations for loblolly pine
Steven A. Knowe; G. Sam Foster
2010-01-01
Nine combinations of site index curves and age adjustments methods were evaluated for incorporating genetic effects for open-pollinated loblolly pine (Pinus taeda L.) families. An explicit yield system consisting of dominant height, basal area, and merchantable green weight functions was used to compare the accuracy of predictions associated with...
Hopkins, D L; Safari, E; Thompson, J M; Smith, C R
2004-06-01
Lambs of a wide range of types and of mixed sex (ewes and wethers) were slaughtered at a commercial abattoir, and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib, 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down into a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R(2)=0.19, r.s.d.=2.80%), which improved markedly when PGR was replaced by NGR (R(2)=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures, greater prediction accuracy was achieved (R(2)=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R(2)=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. The models based on PCs were superior to those based on regression. It is demonstrated that, with appropriate modeling, the VIAScan® system offers a workable method for predicting lean meat yield automatically.
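A yield model built on principal components of several carcass measurements, as described above, can be sketched as follows. The data are synthetic, and the number of measures and retained components are assumptions for illustration, not the paper's fitted model:

```python
# Sketch of predicting lean meat yield (LMY%) from principal components of
# carcass measurements, in the spirit of the PC models above. The data,
# coefficients, and dimensions here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, k = 360, 8                        # carcasses x VIAScan-style measures
X = rng.normal(size=(n, k))          # synthetic measurement matrix
y = 55 + X @ rng.normal(size=k) + rng.normal(scale=2.0, size=n)  # synthetic LMY%

# Principal components: center the data, then project onto right singular vectors.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt.T[:, :3]               # keep the first 3 PCs

# Ordinary least squares of yield on the retained PCs (plus an intercept).
A = np.column_stack([np.ones(n), pcs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"R^2 with 3 PCs: {r2:.2f}")
```

Regressing on a few PCs rather than the raw correlated measures is what gives the PC models their stability under random splits of the data.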
Random Forests for Global and Regional Crop Yield Predictions.
Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung
2016-01-01
Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato, in comparison with multiple linear regression (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed the MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases, whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales, given its high accuracy and precision, ease of use, and utility in data analysis. RF may, however, result in a loss of accuracy when predicting the extreme ends or responses beyond the boundaries of the training data.
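The RF-versus-MLR comparison can be sketched with scikit-learn on synthetic data. The predictors and yield response below are invented; the point is only that a Random Forest can capture a nonlinear yield response that a linear regression misses, mirroring the RMSE comparison reported above:

```python
# Sketch of the RF vs. MLR benchmark described above, on synthetic "climate"
# predictors with a nonlinear yield response. Data and functional form invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(size=(1500, 3))                      # e.g. temp, precip, soil
y = 5 + 3 * np.sin(3 * X[:, 0]) * X[:, 1] + X[:, 2]  # nonlinear yield response
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def rmse_pct(model):
    """Test-set RMSE expressed as a percentage of the mean observed yield."""
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    return 100 * rmse / y_te.mean()

print("RF  RMSE%:", round(rmse_pct(RandomForestRegressor(random_state=0)), 1))
print("MLR RMSE%:", round(rmse_pct(LinearRegression()), 1))
```

On this nonlinear response the forest's RMSE% is well below the linear model's, the same qualitative ordering the study reports across its real data sets.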
Calus, M P L; de Haas, Y; Veerkamp, R F
2013-10-01
Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, where the analyzed trait, milk fat yield or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. The developed bivariate Bayesian stochastic search variable selection model was able to analyze 2 traits even though animals had measurements on only 1 of the 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases. 
Our results emphasize that the chosen prior values in Bayesian genomic prediction models are especially important in small data sets. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Aihara, Hiroyuki; Kumar, Nitin; Thompson, Christopher C
2018-04-19
An education system for narrow-band imaging (NBI) interpretation requires sufficient exposure to key features. However, access to didactic lectures by experienced teachers is limited in the United States. Our aim was to develop and assess the effectiveness of a colorectal lesion identification tutorial. In the image analysis pretest, subjects, including 9 experts and 8 trainees, interpreted 50 white light (WL) and 50 NBI images of colorectal lesions. Results were not reviewed with subjects. Trainees then participated in an online tutorial emphasizing NBI interpretation in colorectal lesion analysis. A post-test was administered, and diagnostic yields were compared to pre-education diagnostic yields. Under the NBI mode, experts showed higher diagnostic yields (sensitivity 91.5% [87.3-94.4], specificity 90.6% [85.1-94.2], and accuracy 91.1% [88.5-93.7], with substantial interobserver agreement [κ value 0.71]) compared to trainees (sensitivity 89.6% [84.8-93.0], specificity 80.6% [73.5-86.3], and accuracy 86.0% [82.6-89.2], with substantial interobserver agreement [κ value 0.69]). The online tutorial improved the diagnostic yields of trainees to the level of experts (sensitivity 94.1% [90.0-96.6], specificity 89.0% [83.0-93.2], and accuracy 92.0% [89.3-94.7], p < 0.001, with substantial interobserver agreement [κ value 0.78]). This short online tutorial improved diagnostic performance and interobserver agreement. © 2018 S. Karger AG, Basel.
Iftikhar, Imran H; Alghothani, Lana; Sardi, Alejandro; Berkowitz, David; Musani, Ali I
2017-07-01
Transbronchial lung cryobiopsy is increasingly being used for the assessment of diffuse parenchymal lung diseases. Several studies have shown larger biopsy samples and higher yields compared with conventional transbronchial biopsies. However, the higher risk of bleeding and other complications has raised concerns about widespread use of this modality. Our objective was to study the diagnostic accuracy and safety profile of transbronchial lung cryobiopsy and to compare it with video-assisted thoracoscopic surgery (VATS) by reviewing available evidence from the literature. Medline and PubMed were searched from inception until December 2016. Data on diagnostic performance were abstracted by constructing two-by-two contingency tables for each study. Data on a priori selected safety outcomes were collected. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool. Random effects meta-analyses were performed to obtain summary estimates of the diagnostic accuracy. The pooled diagnostic yield, pooled sensitivity, and pooled specificity of transbronchial lung cryobiopsy were 83.7% (76.9-88.8%), 87% (85-89%), and 57% (40-73%), respectively. The pooled diagnostic yield, pooled sensitivity, and pooled specificity of VATS were 92.7% (87.6-95.8%), 91.0% (89-92%), and 58% (31-81%), respectively. The incidence of grade 2 (moderate to severe) endobronchial bleeding after transbronchial lung cryobiopsy and of post-procedural pneumothorax was 4.9% (2.2-10.7%) and 9.5% (5.9-14.9%), respectively. Although the diagnostic test accuracy measures of transbronchial lung cryobiopsy lag behind those of VATS, with an acceptable safety profile and potential cost savings, the former could be considered as an alternative in the evaluation of patients with diffuse parenchymal lung diseases.
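Pooled estimates like those above are typically obtained from a random-effects meta-analysis. A minimal sketch follows, pooling per-study diagnostic yields on the logit scale with a DerSimonian-Laird estimate of between-study variance; the study counts are invented, and this is a generic method sketch rather than the authors' exact procedure:

```python
# Sketch of pooling per-study proportions (e.g. diagnostic yields) with a
# DerSimonian-Laird random-effects model on the logit scale. Counts invented.
import math

studies = [(41, 50), (76, 90), (55, 70), (88, 100)]   # (diagnostic, total)

def logit_pool(studies):
    y, w = [], []
    for k, n in studies:
        p = k / n
        y.append(math.log(p / (1 - p)))               # logit of each yield
        w.append(1 / (1 / k + 1 / (n - k)))           # inverse-variance weight
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)     # between-study variance
    wstar = [1 / (1 / wi + tau2) for wi in w]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    return 1 / (1 + math.exp(-pooled))                # back to a proportion

print(f"pooled yield: {logit_pool(studies):.3f}")
```

When the heterogeneity statistic q is small, tau2 collapses to zero and the estimate reduces to a fixed-effect inverse-variance pool.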
NASA Astrophysics Data System (ADS)
Bach, Heike
1998-07-01
In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the `Special Yield Estimation', a regular procedure conducted for the European Union, to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production-model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications hyperspectral sensors show great potential to further enhance the results for yield prediction with remote sensing.
Accuracy of parameterized proton range models; A comparison
NASA Astrophysics Data System (ADS)
Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.
2018-03-01
An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
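Two of the parameterization families compared above can be sketched directly: the Bragg-Kleeman power law and a cubic-spline interpolation of a range-energy table. The table values below only approximate proton CSDA ranges in water, and the alpha and p constants are textbook-level values, not those fitted in the study:

```python
# Sketch of two range-energy parameterizations: the Bragg-Kleeman power law
# R = alpha * E^p and a cubic spline through a range-energy table. Table values
# are approximate CSDA ranges of protons in water, for illustration only.
import numpy as np
from scipy.interpolate import CubicSpline

energy = np.array([50.0, 100.0, 150.0, 200.0, 250.0])   # MeV (approximate)
crange = np.array([2.22, 7.72, 15.77, 25.96, 37.94])    # cm in water (approximate)

def bragg_kleeman(E, alpha=0.0022, p=1.77):
    """Power-law range-energy relation R = alpha * E^p (E in MeV, R in cm)."""
    return alpha * E ** p

spline = CubicSpline(energy, crange)  # exact at the table knots by construction

for E in (75.0, 180.0):
    print(f"{E:5.0f} MeV: Bragg-Kleeman {bragg_kleeman(E):6.2f} cm, "
          f"spline {float(spline(E)):6.2f} cm")
```

The spline reproduces the tabulated ranges exactly at the knots, which is the sense in which interpolation wins on reproduction accuracy, while the two-parameter power law trades some accuracy for an analytical form that can be differentiated for the energy-loss curve.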
Tekin, Eylul; Roediger, Henry L
2017-01-01
Researchers use a wide range of confidence scales when measuring the relationship between confidence and accuracy in reports from memory, with the highest number usually representing the greatest confidence (e.g., 4-point, 20-point, and 100-point scales). The assumption seems to be that the range of the scale has little bearing on the confidence-accuracy relationship. In two old/new recognition experiments, we directly investigated this assumption using word lists (Experiment 1) and faces (Experiment 2) by employing 4-, 5-, 20-, and 100-point scales. Using confidence-accuracy characteristic (CAC) plots, we asked whether confidence ratings would yield similar CAC plots, indicating comparability in use of the scales. For the comparisons, we divided 100-point and 20-point scales into bins of either four or five and asked, for example, whether confidence ratings of 4, 16-20, and 76-100 would yield similar values. The results show that, for both types of material, the different scales yield similar CAC plots. Notably, when subjects express high confidence, regardless of which scale they use, they are likely to be very accurate (even though they studied 100 words and 50 faces in each list in 2 experiments). The scales seem convertible from one to the other, and choice of scale range probably does not affect research into the relationship between confidence and accuracy. High confidence indicates high accuracy in recognition in the present experiments.
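The bin conversion used for the comparisons above can be made concrete. A minimal sketch, assuming equal-width bins (the paper's exact binning of the 20- and 100-point scales may differ in detail):

```python
# Sketch of collapsing 20- and 100-point confidence ratings into four bins so
# CAC plots can be compared across scales. Equal-width binning is an assumption;
# the edges mirror the example in the text (4 ~ 16-20 ~ 76-100).

def to_four_bins(rating, scale_max):
    """Map a 1..scale_max confidence rating onto bins 1-4 of equal width."""
    if not 1 <= rating <= scale_max:
        raise ValueError("rating outside scale")
    width = scale_max / 4
    return min(4, int((rating - 1) // width) + 1)

assert to_four_bins(4, 4) == 4        # top of the 4-point scale
assert to_four_bins(18, 20) == 4      # 16-20 -> top bin
assert to_four_bins(85, 100) == 4     # 76-100 -> top bin
assert to_four_bins(40, 100) == 2
print("all scale conversions agree")
```

Comparable accuracy within matched bins across scales is exactly what supports the conclusion that the scales are convertible from one to another.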
NASA Astrophysics Data System (ADS)
Nyckowiak, Jedrzej; Lesny, Jacek; Haas, Edwin; Juszczak, Radoslaw; Kiese, Ralf; Butterbach-Bahl, Klaus; Olejnik, Janusz
2014-05-01
Modeling of nitrous oxide emissions from soil is very complex. Many different biological and chemical processes take place in soils which determine the amount of emitted nitrous oxide. Additionally, biogeochemical models contain many detailed factors which may determine fluxes and other simulated variables. We used the LandscapeDNDC model to simulate N2O emissions, crop yields, and soil physical properties for mineral cultivated soils in Poland. Nitrous oxide emissions from soils were modeled for fields with winter wheat, winter rye, spring barley, triticale, potatoes, and alfalfa crops. Simulations were carried out for the plots of the Brody arable experimental station of Poznan University of Life Sciences in western Poland and covered the period 2003-2012. The model accuracy and efficiency were determined by comparing simulation results with measurements of nitrous oxide emissions (measured with static chambers) from about 40 field campaigns. N2O emissions are strongly dependent on temperature and soil water content, hence we also compared simulated soil temperature and soil water content at 10 cm depth with the daily measured values of these driving variables. We also compared simulated yields for each individual experimental plot with yields measured in the period 2003-2012. We conclude that the LandscapeDNDC model is capable of simulating soil N2O emissions, crop yields, and physical properties of soil with satisfactory accuracy and efficiency.
Genomic selection across multiple breeding cycles in applied bread wheat breeding.
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2016-06-01
We evaluated genomic selection across five breeding cycles of bread wheat breeding. Bias of within-cycle cross-validation and methods for improving the prediction accuracy were assessed. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its component traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy estimated from individual cycles by fivefold cross-validation was accordingly substantial for protein yield (17-712 %) and less pronounced for protein content (8-86 %). Cross-validation using the cycles as folds aimed to avoid this bias and reached a maximum prediction accuracy of r = 0.51 for protein content, r = 0.38 for grain yield and r = 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to r = 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before an outlier correction is undertaken which removes lines from the training population. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy to r = 0.19 for this derived trait.
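Cross-validation using breeding cycles as folds, as described above, can be sketched as leave-one-cycle-out validation. This is a hypothetical illustration: ridge regression stands in for the genomic prediction model actually used, and the function name and penalty are assumptions.

```python
import numpy as np

def cycle_cv_accuracy(X, y, cycles, lam=1.0):
    """Leave-one-cycle-out cross-validation: train a ridge-regression
    predictor on all breeding cycles but one, predict the held-out
    cycle, and return the per-cycle correlation r between predicted
    and observed values (a stand-in for genomic prediction accuracy).
    X: (n_lines, n_markers) genotype matrix; y: phenotypes;
    cycles: per-line cycle labels.
    """
    rs = {}
    for c in np.unique(cycles):
        tr, te = cycles != c, cycles == c
        # ridge solution: (X'X + lam*I)^-1 X'y fitted on training cycles
        A = X[tr].T @ X[tr] + lam * np.eye(X.shape[1])
        beta = np.linalg.solve(A, X[tr].T @ y[tr])
        pred = X[te] @ beta
        rs[c] = np.corrcoef(pred, y[te])[0, 1]
    return rs
```

Because whole cycles are held out, the estimate avoids the optimistic bias of within-cycle fivefold cross-validation, in which relatives of the test lines appear in the training fold.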
Nelson, Sarah C.; Stilp, Adrienne M.; Papanicolaou, George J.; Taylor, Kent D.; Rotter, Jerome I.; Thornton, Timothy A.; Laurie, Cathy C.
2016-01-01
Imputation is commonly used in genome-wide association studies to expand the set of genetic variants available for analysis. Larger and more diverse reference panels, such as the final Phase 3 of the 1000 Genomes Project, hold promise for improving imputation accuracy in genetically diverse populations such as Hispanics/Latinos in the USA. Here, we sought to empirically evaluate imputation accuracy when imputing to a 1000 Genomes Phase 3 versus a Phase 1 reference, using participants from the Hispanic Community Health Study/Study of Latinos. Our assessments included calculating the correlation between imputed and observed allelic dosage in a subset of samples genotyped on a supplemental array. We observed that the Phase 3 reference yielded higher accuracy at rare variants, but that the two reference panels were comparable at common variants. At a sample level, the Phase 3 reference improved imputation accuracy in Hispanic/Latino samples from the Caribbean more than for Mainland samples, which we attribute primarily to the additional reference panel samples available in Phase 3. We conclude that a 1000 Genomes Project Phase 3 reference panel can yield improved imputation accuracy compared with Phase 1, particularly for rare variants and for samples of certain genetic ancestry compositions. Our findings can inform imputation design for other genome-wide association studies of participants with diverse ancestries, especially as larger and more diverse reference panels continue to become available. PMID:27346520
Runoff prediction is a cornerstone of water resources planning, and therefore modeling performance is a key issue. This paper investigates the comparative advantages of conceptual versus process- based models in predicting warm season runoff for upland, low-yield micro-catchments...
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Akgöl, Batuhan; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2017-02-01
Early generation genomic selection is superior to conventional phenotypic selection in line breeding and can be strongly improved by including additional information from preliminary yield trials. The selection of lines that enter resource-demanding multi-environment trials is a crucial decision in every line breeding program, as a large amount of resources is allocated to thoroughly testing these potential varietal candidates. We compared conventional phenotypic selection with various genomic selection approaches across multiple years, as well as the merit of integrating phenotypic information from preliminary yield trials into the genomic selection framework. The prediction accuracy using only phenotypic data was rather low (r = 0.21) for grain yield but could be improved by modeling genetic relationships in unreplicated preliminary yield trials (r = 0.33). Genomic selection models were nevertheless found to be superior to conventional phenotypic selection for predicting grain yield performance of lines across years (r = 0.39). We subsequently simplified the problem of predicting untested lines in untested years to predicting tested lines in untested years by combining breeding values from preliminary yield trials and predictions from genomic selection models through a heritability index. This genomic assisted selection led to a 20% increase in prediction accuracy, which could be further enhanced by an appropriate marker selection for both grain yield (r = 0.48) and protein content (r = 0.63). The easy-to-implement and robust genomic assisted selection thus gave a higher prediction accuracy than either conventional phenotypic or genomic selection alone. The proposed method takes the complex inheritance of both low- and high-heritability traits into account and appears capable of supporting breeders in their selection decisions to develop enhanced varieties more efficiently.
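The abstract does not give the exact form of the heritability index used to blend trial breeding values with genomic predictions; as a hedged sketch, one plausible form is an h²-weighted combination, with the weighting scheme here purely an assumption for illustration.

```python
def heritability_index(pheno_bv, gebv, h2):
    """Blend phenotypic breeding values from preliminary yield trials
    with genomic predictions (GEBVs) using the trait heritability h2
    as the weight. The exact index in the paper may differ; this is an
    illustrative h2-weighted combination: high-heritability traits lean
    on the phenotype, low-heritability traits on the genomic prediction.
    """
    if not 0.0 <= h2 <= 1.0:
        raise ValueError("heritability must lie in [0, 1]")
    return [h2 * p + (1.0 - h2) * g for p, g in zip(pheno_bv, gebv)]
```

Such an index naturally covers both cases in the abstract: a highly heritable trait like protein content weights the preliminary-trial value strongly, while grain yield relies more on the genomic prediction.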
Accuracy comparison among different machine learning techniques for detecting malicious codes
NASA Astrophysics Data System (ADS)
Narang, Komal
2016-03-01
In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e. zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code were compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields its best accuracy of 88.9% when a neural network is used as the classifier, with a sensitivity of 95% and a specificity of 82.8%.
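The accuracy, sensitivity and specificity figures quoted above follow the standard confusion-matrix definitions, sketched here (the counts in the usage note are made up for illustration, not taken from the paper):

```python
def classifier_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts.

    tp/fn: malicious files correctly / incorrectly classified;
    tn/fp: benign files correctly / incorrectly classified.
    """
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (malware caught)
    specificity = tn / (tn + fp)   # true-negative rate (benign passed)
    return accuracy, sensitivity, specificity
```

For example, a classifier catching 95 of 100 malware samples while passing 80 of 100 benign samples would score 0.875 accuracy, 0.95 sensitivity, and 0.80 specificity.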
He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L
2018-04-01
SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detailed and approximate coefficients and relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). The proposed method was validated on a high-density (128-channel) EEG dataset with two classes: (1) EEG signals recorded during complex cognitive tasks using Raven's Advanced Progressive Matrices (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via the SVM classifier for approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for detailed coefficients (D5), derived from the 3.90-7.81 Hz sub-band, were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for MLP and NB classifiers were comparable at 97.11-89.63% and 91.60-81.07% for A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied to a public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance using machine learning classifiers compared to extant quantitative feature extraction methods. These results suggest that the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy.
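The relative wavelet energy feature described above can be sketched with a minimal, dependency-free multilevel decomposition. This is an assumption-laden sketch: the Haar wavelet is used here only because it is trivial to implement; the paper uses a 5-level decomposition whose wavelet family is not specified in the abstract.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: approximation (A) and
    detail (D) coefficients of a signal with even length."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def relative_wavelet_energy(x, levels=3):
    """Energy of each sub-band (D1..Dn plus the final approximation)
    divided by the total energy; the relative energies sum to 1 by
    construction, giving a scale-free feature vector per channel."""
    bands = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        bands.append(np.sum(d ** 2))
    bands.append(np.sum(a ** 2))        # final approximation band
    total = sum(bands)
    return [e / total for e in bands]
```

Each EEG channel thus yields one relative-energy value per sub-band, and these are the features that would subsequently be normalized and reduced with FDR and PCA before classification.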
NASA Technical Reports Server (NTRS)
Quattrochi, D. A.
1984-01-01
An initial analysis of LANDSAT 4 Thematic Mapper (TM) data for the discrimination of agricultural, forested wetland, and urban land covers was conducted using a scene of data collected over Arkansas and Tennessee. A classification of agricultural lands derived from multitemporal LANDSAT Multispectral Scanner (MSS) data was compared with a classification of TM data for the same area. Results from this comparative analysis show that the multitemporal MSS classification produced an overall accuracy of 80.91%, while the TM classification yielded an overall classification accuracy of 97.06%.
Nissan, Aviram; Protic, Mladjan; Bilchik, Anton J; Howard, Robin S; Peoples, George E; Stojadinovic, Alexander
2012-09-01
Our randomized controlled trial previously demonstrated improved staging accuracy with targeted nodal assessment and ultrastaging (TNA-us) in colon cancer (CC). Our objective was to test the hypothesis that TNA-us improves disease-free survival (DFS) in CC. In this randomized trial, targeted nodal assessment and ultrastaging resulted in enhanced lymph node diagnostic yield associated with improved staging accuracy, which was further associated with improved disease-free survival in early colon cancer. Clinical parameters of the control (n = 94) and TNA-us (n = 98) groups were comparable. Median (interquartile range) lymph node yield was higher in the TNA-us arm: 16 (12-22) versus 13 (10-18); P = 0.002. Median follow-up was 46 (29-70) months. Overall 5-year DFS was 61% in the control arm and 71% in the TNA-us arm (P = 0.11). Clinical parameters of node-negative patients in the control (n = 51) and TNA-us (n = 55) groups were comparable. Lymph node yield was higher in the TNA-us arm: 15 (12-21) versus 13 (8-18); P = 0.03. Five-year DFS differed significantly between groups with node-negative CC (control 71% vs TNA-us 86%; P = 0.04). Survival among stage II CC alone was higher in the TNA-us group, 83% versus 65%; P = 0.03. Adjuvant chemotherapy use was nearly identical between groups. TNA-us stratified CC prognosis; DFS differed significantly between ultrastaged and conventionally staged node-negative patients [control pN0 72% vs TNA-us pN0(i-) 87%; P = 0.03]. Survival varied according to lymph node yield in patients with node-negative CC [5-year DFS: <12 lymph nodes = 57% vs 12+ lymph nodes = 85%; P = 0.011] but not in stage III CC. TNA-us is associated with improved nodal diagnostic yield and enhanced staging accuracy (stage migration), which is further associated with improved DFS in early CC. This study is registered at clinicaltrials.gov under the registration number: NCT01623258.
A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield
NASA Astrophysics Data System (ADS)
Kazama, Yoriko; Kujirai, Toshihiro
2014-10-01
A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can handle many parameters (such as spectral and weather information together) while reducing the occurrence of multicollinearity, is proposed. Experiments on soybean fields in Brazil using a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. For the SSD-HB model, the mean absolute error between the estimated and actual yield of the target area is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.
Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements
NASA Technical Reports Server (NTRS)
Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry
2008-01-01
Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.
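The functional form of the empirical bias correction is not given in the abstract; its stated accuracy is expressed as a percentage of the measured RH value. As a hypothetical sketch only, if the mean bias at some height and solar condition were summarized as a single fraction of the reading, removing it would look like:

```python
def correct_rh(measured_rh, bias_fraction):
    """Remove a mean relative (fractional) bias from a measured RH value:
    if the sonde reads, say, 4% of reading too moist (bias_fraction=0.04),
    the corrected value is measured / (1 + bias_fraction). The actual
    RS92 correction varies with pressure, RH and solar altitude angle;
    a single scalar is used here purely for illustration.
    """
    return measured_rh / (1.0 + bias_fraction)
```

A lookup of `bias_fraction` as a function of pressure level, RH, and solar angle, derived from the reference comparisons, would replace the scalar in any realistic application.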
Jeong, Seok Hoo; Yoon, Hyun Hwa; Kim, Eui Joo; Kim, Yoon Jae; Kim, Yeon Suk; Cho, Jae Hee
2017-01-01
Abstract Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is an accurate diagnostic method for pancreatic masses, and its accuracy is affected by the FNA method and EUS equipment used. We therefore aimed to elucidate the instrumental and methodologic factors that determine the diagnostic yield of EUS-FNA for pancreatic solid masses without an on-site cytopathology evaluation. We retrospectively reviewed the medical records of 260 patients (265 pancreatic solid masses) who underwent EUS-FNA. We compared a historical conventional-imaging EUS group with a high-resolution imaging group and analyzed the various factors affecting EUS-FNA accuracy. In total, 265 pancreatic solid masses of 260 patients were included in this study. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of EUS-FNA for pancreatic solid masses without on-site cytopathology evaluation were 83.4%, 81.8%, 100.0%, 100.0%, and 34.3%, respectively. In comparison with the conventional-image group, the high-resolution image group showed increased accuracy, sensitivity and specificity of EUS-FNA (71.3% vs 92.7%, 68.9% vs 91.9%, and 100% vs 100%, respectively). On multivariate analysis of various instrumental and methodologic factors, high-resolution imaging (P = 0.040, odds ratio = 3.28) and 3 or more needle passes (P = 0.039, odds ratio = 2.41) were important factors affecting the diagnostic yield for pancreatic solid masses. High-resolution imaging and 3 or more passes were the most significant factors influencing the diagnostic yield of EUS-FNA in patients with pancreatic solid masses without an on-site cytopathologist. PMID:28079803
Accuracy optimization with wavelength tunability in overlay imaging technology
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna
2018-03-01
As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, overlay budget is accordingly being reduced. Overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging based overlay (IBO) measurements by optimizing accuracy rather than contrast precision, including its effect over the total target performance, using wavelength tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and present their quality in identifying the measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.
Comparing Features for Classification of MEG Responses to Motor Imagery.
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. 
Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system.
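The common spatial patterns (CSP) feature extraction that anchors the best-performing combinations above can be sketched with the standard whitening-based construction; this is a generic textbook formulation, not the authors' exact pipeline, and the array shapes are assumptions.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns (CSP) for two-class oscillatory data.

    trials_a / trials_b: iterables of (n_channels, n_samples) trials.
    Returns spatial filters (rows) that maximize variance for class A
    while minimizing it for class B, computed by whitening the composite
    covariance and diagonalizing the whitened class-A covariance.
    """
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # whitening transform from the composite covariance ca + cb
    d, u = np.linalg.eigh(ca + cb)
    p = np.diag(1.0 / np.sqrt(d)) @ u.T
    # diagonalize the whitened class-A covariance
    w, b = np.linalg.eigh(p @ ca @ p.T)
    order = np.argsort(w)[::-1]           # largest class-A variance first
    filters = b[:, order].T @ p
    return filters[:n_filters]
```

The log-variance of each trial projected through these filters is the usual CSP feature fed to the classifier, which is why combinations such as CSP+PSD or SSD+CSP capture both the spatial and spectral profile of the MI-related signals.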
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to gain the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
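The stability-selection half of the scheme above can be sketched with subsample selection frequencies. This is a simplified stand-in: a univariate |correlation| ranking replaces the penalized-regression selector (lasso/elastic net) actually used, and all parameter names are assumptions.

```python
import numpy as np

def selection_frequencies(X, y, top_k=5, n_subsamples=50, seed=0):
    """Stability-selection-style frequencies: repeatedly fit a selector
    on random half-subsamples of the data and record how often each
    variable is chosen. Variables with high frequency are 'stable' and
    unlikely to be false positives. A univariate |correlation| ranking
    stands in here for the penalized-regression selector of the paper.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        xs, ys = X[idx], y[idx]
        # |correlation| of each column of X with y on this subsample
        score = np.abs(np.corrcoef(xs.T, ys)[-1, :-1])
        counts[np.argsort(score)[::-1][:top_k]] += 1
    return counts / n_subsamples
```

PROMISE's contribution is then to tune the amount of penalization (here, the stand-in `top_k`) so that the retained markers keep CV-level prediction accuracy while the frequency threshold controls false positives.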
Testing the accuracy of growth and yield models for southern hardwood forests
H. Michael Rauscher; Michael J. Young; Charles D. Webb; Daniel J. Robison
2000-01-01
The accuracy of ten growth and yield models for Southern Appalachian upland hardwood forests and southern bottomland forests was evaluated. In technical applications, accuracy is the composite of both bias (average error) and precision. Results indicate that GHAT, NATPIS, and a locally calibrated version of NETWIGS may be regarded as being operationally valid...
Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.
Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li
2016-06-07
Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer.
Evidence for Enhanced Interoceptive Accuracy in Professional Musicians
Schirmer-Mokwa, Katharina L.; Fard, Pouyan R.; Zamorano, Anna M.; Finkel, Sebastian; Birbaumer, Niels; Kleber, Boris A.
2015-01-01
Interoception is defined as the perceptual activity involved in the processing of internal bodily signals. While the ability of internal perception is considered a relatively stable trait, recent data suggest that learning to integrate multisensory information can modulate it. Making music is a uniquely rich multisensory experience that has been shown to alter motor, sensory, and multimodal representations in the brains of musicians. We hypothesized that musical training also heightens interoceptive accuracy, comparably to other perceptual modalities. Thirteen professional singers, twelve string players, and thirteen matched non-musicians were examined using a well-established heartbeat discrimination paradigm complemented by self-reported dispositional traits. Results revealed that both groups of musicians displayed higher interoceptive accuracy than non-musicians, whereas no differences were found between singers and string players. Regression analyses showed that accumulated musical practice explained about 49% of the variation in heartbeat perception accuracy in singers but not in string players. Psychometric data yielded a number of psychologically plausible inter-correlations in musicians related to performance anxiety. However, dispositional traits were not a confounding factor on heartbeat discrimination accuracy. Together, these data provide the first evidence indicating that professional musicians show enhanced interoceptive accuracy compared to non-musicians. We argue that musical training largely accounted for this effect. PMID:26733836
Bradac, Ondrej; Steklacova, Anna; Nebrenska, Katerina; Vrana, Jiri; de Lacy, Patricia; Benes, Vladimir
2017-08-01
Frameless stereotactic brain biopsy systems are widely used today. VarioGuide (VG) is a relatively novel frameless system. Its accuracy was studied in a laboratory setting but has not yet been studied in the clinical setting. The purpose of this study was to determine its accuracy and diagnostic yield and to compare these with frame-based (FB) stereotaxy. Overall, 53 patients (33 males and 20 females, 60 ± 15 years old) were enrolled into this prospective, randomized, single-center study. Twenty-six patients were randomized into the FB group and 27 patients into the VG group. The actual trajectory was identified on intraoperative magnetic resonance imaging. The distance between the planned and actual targets and the angle deviation between the planned and actual trajectories were computed. The overall discomfort of the patient was subjectively assessed by the visual analog scale score. The median lesion volume was 5 mL (interquartile range [IQR]: 2-16 mL) (FB) and 16 mL (IQR: 2-27 mL) (VG), P = 0.133. The mean distance of the targets was 2.7 ± 1.1 mm (FB) and 2.9 ± 1.3 mm (VG), P = 0.456. Mean angle deviation was 2.6 ± 1.3 deg (FB) and 3.5 ± 2.1 deg (VG), P = 0.074. Diagnostic yield was 93% (25/27) in VG and 96% (25/26) in FB, P = 1.000. Mean operating time was 47 ± 26 minutes (FB) and 59 ± 31 minutes (VG), P = 0.140. One minor bleeding was encountered in the VG group. Overall patient discomfort was significantly higher in the FB group (visual analog scale score 2.5 ± 2.1 vs. 1.2 ± 0.6, P = 0.004). The VG system proved to be comparable with the "gold standard" of traditional FB stereotaxy in terms of trajectory accuracy, rate of complications and diagnostic yield for patients undergoing brain biopsy. VG is also better accepted by patients. Copyright © 2017 Elsevier Inc. All rights reserved.
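The two accuracy measures reported above, target distance and angular deviation between the planned and actual trajectories, can be sketched directly from entry/target coordinates; the common-coordinate-frame assumption and the function name are illustrative, not from the paper.

```python
import numpy as np

def trajectory_error(planned_target, actual_target,
                     planned_entry, actual_entry):
    """Target distance (same unit as the coordinates, e.g. mm) and
    angular deviation (degrees) between a planned and an actual biopsy
    trajectory. All points are assumed to be in one common (e.g.
    scanner) coordinate frame.
    """
    planned_target = np.asarray(planned_target, float)
    actual_target = np.asarray(actual_target, float)
    dist = np.linalg.norm(actual_target - planned_target)
    v1 = planned_target - np.asarray(planned_entry, float)
    v2 = actual_target - np.asarray(actual_entry, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return dist, angle
```

Averaging these two quantities over each arm of the trial yields the reported 2.7 vs 2.9 mm target distances and 2.6 vs 3.5 degree angle deviations.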
The ability of video image analysis to predict lean meat yield and EUROP score of lamb carcasses.
Einarsson, E; Eythórsdóttir, E; Smith, C R; Jónmundsson, J V
2014-07-01
A total of 862 lamb carcasses that were evaluated by both the VIAscan® and the current EUROP classification system were deboned and the actual yield was measured. Models were derived for predicting lean meat yield of the legs (Leg%), loin (Loin%) and shoulder (Shldr%) using the best VIAscan® variables selected by stepwise regression analysis of a calibration data set (n=603). The equations were tested on a validation data set (n=259). The results showed that the VIAscan® predicted lean meat yield in the leg, loin and shoulder with an R2 of 0.60, 0.31 and 0.47, respectively, whereas the current EUROP system predicted lean yield with an R2 of 0.57, 0.32 and 0.37, respectively, for the three carcass parts. The VIAscan® also predicted the EUROP score of the trial carcasses, using a model derived from an earlier trial. The EUROP classification from VIAscan® and the current system were compared for their ability to explain the variation in lean yield of the whole carcass (LMY%) and trimmed fat (FAT%). The predicted EUROP scores from the VIAscan® explained 36% of the variation in LMY% and 60% of the variation in FAT%, compared with the current EUROP system, which explained 49% and 72%, respectively. The EUROP classification obtained by the VIAscan® was tested against a panel of three expert classifiers (n=696). The VIAscan® classification agreed with 82% of the conformation and 73% of the fat classes assigned by the panel. It was concluded that VIAscan® provides a technology that can directly predict LMY% of lamb carcasses with more accuracy than the current EUROP classification system. The VIAscan® is also capable of classifying lamb carcasses into EUROP classes with an accuracy that fulfils minimum demands for the Icelandic sheep industry.
Although the VIAscan® prediction of the Loin% is low, it is comparable to the current EUROP system, and should not hinder the adoption of the technology to estimate the yield of Icelandic lambs as it delivered a more accurate prediction for the Leg%, Shldr% and overall LMY% with negligible prediction bias.
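The R² values quoted above measure the share of yield variation each prediction model explains. As a reminder of the underlying computation, here is a minimal, dependency-free sketch; the function name and the carcass data are illustrative, not the paper's calibration code:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: the share of variation in the
    observed yields that the model's predictions account for."""
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)                  # total variation
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # unexplained
    return 1.0 - ss_res / ss_tot

# Hypothetical lean-yield percentages for five carcasses
measured  = [55.2, 58.1, 53.7, 60.4, 56.9]
predicted = [55.8, 57.5, 54.2, 59.6, 57.3]
print(round(r_squared(measured, predicted), 2))  # → 0.93
```

An R² of 0.60, as reported for Leg%, means 60% of the between-carcass variation in leg lean yield is explained by the VIAscan® variables.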
Quantitative falls risk estimation through multi-sensor assessment of standing balance.
Greene, Barry R; McGrath, Denise; Walsh, Lorcan; Doheny, Emer P; McKeown, David; Garattini, Chiara; Cunningham, Clodagh; Crosby, Lisa; Caulfield, Brian; Kenny, Rose A
2012-12-01
Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities--a pressure sensitive platform sensor and a body-worn inertial sensor, mounted on the lower back--from 120 community dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. Results obtained using a support vector machine yielded a mean classification accuracy of 71.52% (95% CI: 68.82-74.28) in classifying falls history, obtained using one model classifying all data points. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85-77.17) and 73.33% (95% CI: 69.88-76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42% (95% CI: 56.96-61.88)). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments.
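The accuracies above are reported with 95% confidence intervals, presumably derived from repeated cross-validation runs. As an illustrative sketch (not the authors' procedure), a simpler normal-approximation binomial interval for a classification accuracy can be computed as follows; the participant counts are hypothetical:

```python
import math

def accuracy_with_ci(n_correct, n_total, z=1.96):
    """Point accuracy plus a normal-approximation confidence interval
    for a binomial proportion (z = 1.96 gives ~95% coverage)."""
    p = n_correct / n_total
    half_width = z * math.sqrt(p * (1.0 - p) / n_total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical: 86 of 120 participants correctly classified as fallers/non-fallers
p, lo, hi = accuracy_with_ci(86, 120)
```

For small samples or accuracies near 0 or 1, a Wilson score interval would be a better choice than this normal approximation.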
Cunha, B C N; Belk, K E; Scanga, J A; LeValley, S B; Tatum, J D; Smith, G C
2004-07-01
This study was performed to validate previous equations and to develop and evaluate new regression equations for predicting lamb carcass fabrication yields using outputs from a lamb vision system-hot carcass component (LVS-HCC) and the lamb vision system-chilled carcass LM imaging component (LVS-CCC). Lamb carcasses (n = 149) were selected after slaughter, imaged hot using the LVS-HCC, and chilled for 24 to 48 h at -3 to 1 degrees C. Chilled carcasses yield grades (YG) were assigned on-line by USDA graders and by expert USDA grading supervisors with unlimited time and access to the carcasses. Before fabrication, carcasses were ribbed between the 12th and 13th ribs and imaged using the LVS-CCC. Carcasses were fabricated into bone-in subprimal/primal cuts. Yields calculated included 1) saleable meat yield (SMY); 2) subprimal yield (SPY); and 3) fat yield (FY). On-line (whole-number) USDA YG accounted for 59, 58, and 64%; expert (whole-number) USDA YG explained 59, 59, and 65%; and expert (nearest-tenth) USDA YG accounted for 60, 60, and 67% of the observed variation in SMY, SPY, and FY, respectively. The best prediction equation developed in this trial using LVS-HCC output and hot carcass weight as independent variables explained 68, 62, and 74% of the variation in SMY, SPY, and FY, respectively. Addition of output from LVS-CCC improved predictive accuracy of the equations; the combined output equations explained 72 and 66% of the variability in SMY and SPY, respectively. Accuracy and repeatability of measurement of LM area made with the LVS-CCC also was assessed, and results suggested that use of LVS-CCC provided reasonably accurate (R2 = 0.59) and highly repeatable (repeatability = 0.98) measurements of LM area. 
Compared with USDA YG, use of the dual-component lamb vision system to predict cut yields of lamb carcasses improved accuracy and precision, suggesting that this system could have an application as an objective means for pricing carcasses in a value-based marketing system.
Mulder, C; Mgode, G F; Ellis, H; Valverde, E; Beyene, N; Cox, C; Reid, S E; Van't Hoog, A H; Edwards, T L
2017-11-01
This study addressed enhanced tuberculosis (TB) case finding using detection rats in Tanzania. The objectives were to assess the diagnostic accuracy of detection rats compared with culture and Xpert® MTB/RIF, and to compare enhanced case-finding algorithms using rats in smear-negative presumptive TB patients. In this fully paired diagnostic accuracy study, sputum from new adult presumptive TB patients in Tanzania was tested using smear microscopy, 11 detection rats, culture and Xpert. Of 771 eligible participants, 345 (45%) were culture-positive for Mycobacterium tuberculosis, and 264 (34%) were human immunodeficiency virus (HIV) positive. The sensitivity of the detection rats was up to 75.1% (95%CI 70.1-79.5) when compared with culture, and up to 81.8% (95%CI 76.0-86.5) when compared with Xpert, which was statistically significantly higher than the sensitivity of smear microscopy. The corresponding specificity was 40.6% (95%CI 35.9-45.5) compared with culture. The accuracy of rat detection was independent of HIV status. Using rats for triage, followed by Xpert, would result in a statistically significantly higher yield than rats followed by light-emitting diode fluorescence microscopy, whereas the number of false positives would be significantly lower than when using Xpert alone. Although detection rats did not meet the accuracy criteria for standalone diagnostic or triage testing for presumptive TB, they have additive value as a triage test for enhanced case finding among smear-negative TB patients where more advanced diagnostics are not available.
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds rather than the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing it, and conventional classification methods may not be useful without dimension-reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition is useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction preserves class separability and yields better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, the classification performance of Daubechies wavelets is better than that of the Haar wavelet, although Daubechies takes more time compared to Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
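A single level of the Haar wavelet transform, the simpler of the two wavelets compared here, can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; deeper levels and the Daubechies filters follow the same filter-and-downsample pattern with longer filter taps:

```python
import numpy as np

def haar_level(spectrum):
    """One Haar decomposition level: orthonormal pairwise sums
    (approximation) and differences (detail), halving the number of bands."""
    x = np.asarray(spectrum, dtype=float)
    if x.size % 2:                        # pad odd-length spectra by repeating the last band
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# A hypothetical 8-band pixel spectrum reduced to 4 approximation coefficients
bands = np.array([0.12, 0.15, 0.33, 0.35, 0.60, 0.58, 0.41, 0.40])
approx, detail = haar_level(bands)
```

Keeping only `approx` and recursing on it reduces a few hundred channels to a handful of features while, as the paper notes, preserving the distinctions among spectral signatures.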
Diamond, Kevin R; Farrell, Thomas J; Patterson, Michael S
2003-12-21
Steady-state diffusion theory models of fluorescence in tissue have been investigated for recovering fluorophore concentrations and fluorescence quantum yield. Spatially resolved fluorescence, excitation and emission reflectance were calculated using Monte Carlo simulations, and measured using a multi-fibre probe on tissue-simulating phantoms containing aluminium phthalocyanine tetrasulfonate (AlPcS4), Photofrin, or meso-tetra-(4-sulfonatophenyl)-porphine dihydrochloride (TPPS4). The accuracy of the fluorophore concentration and fluorescence quantum yield recovered by three different models of spatially resolved fluorescence was compared. The models were based on: (a) a weighted difference of the excitation and emission reflectance, (b) fluorescence due to a point excitation source or (c) fluorescence due to a pencil beam excitation source. When literature values for the fluorescence quantum yield were used for each of the fluorophores, the fluorophore absorption coefficient (and hence concentration) at the excitation wavelength (mu(a,x,f)) was recovered with a root-mean-square accuracy of 11.4% using the point source model of fluorescence and 8.0% using the more complicated pencil beam excitation model. The accuracy was calculated over a broad range of optical properties and fluorophore concentrations. The weighted difference of reflectance model performed poorly, with a root-mean-square error in concentration of about 50%. Monte Carlo simulations suggest that there are some situations where the weighted difference of reflectance is as accurate as the other two models, although this was not confirmed experimentally. Estimates of the fluorescence quantum yield in multiple scattering media were also made by determining mu(a,x,f) independently from the fitted absorption spectrum and applying the various diffusion theory models.
The fluorescence quantum yields for AlPcS4 and TPPS4 were calculated to be 0.59 +/- 0.03 and 0.121 +/- 0.001 respectively using the point source model, and 0.63 +/- 0.03 and 0.129 +/- 0.002 using the pencil beam excitation model. These results are consistent with published values.
Increased genomic prediction accuracy in wheat breeding using a large Australian panel.
Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn
2017-12-01
Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom TM Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations and also more accurate compared to prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.
Jia, Cang-Zhi; He, Wen-Ying; Yao, Yu-Hua
2017-03-01
Hydroxylation of proline or lysine residues in proteins is a common post-translational modification event, and such modifications are found in many physiological and pathological processes. Nonetheless, the exact molecular mechanism of hydroxylation remains under investigation. Because experimental identification of hydroxylation is time-consuming and expensive, bioinformatics tools with high accuracy represent desirable alternatives for large-scale, rapid identification of protein hydroxylation sites. In view of this, we developed a support vector machine-based tool, OH-PRED, for the prediction of protein hydroxylation sites using the adapted normal distribution bi-profile Bayes feature extraction in combination with the physicochemical property indexes of the amino acids. In a jackknife cross-validation, OH-PRED yields an accuracy of 91.88% and a Matthews correlation coefficient (MCC) of 0.838 for the prediction of hydroxyproline sites, and an accuracy of 97.42% and an MCC of 0.949 for the prediction of hydroxylysine sites. These results demonstrate that OH-PRED significantly increased the prediction accuracy for hydroxyproline and hydroxylysine sites, by 7.37% and 14.09% respectively, compared with the latest predictor PredHydroxy. In independent tests, OH-PRED also outperforms previously published methods.
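The accuracy and MCC figures above come from standard confusion-matrix arithmetic. A minimal sketch follows; the counts are hypothetical and this is not part of OH-PRED:

```python
import math

def accuracy_and_mcc(tp, tn, fp, fn):
    """Accuracy and Matthews correlation coefficient from the four
    cells of a binary confusion matrix."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, mcc

# Hypothetical counts for a site predictor on 1000 candidate residues
acc, mcc = accuracy_and_mcc(tp=430, tn=489, fp=41, fn=40)
```

Unlike accuracy, the MCC stays informative when the positive and negative classes are imbalanced, which is typical when scanning all candidate residues in a proteome.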
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
In recent works we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM) is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.
Li, Hang; He, Junting; Liu, Qin; Huo, Zhaohui; Liang, Si; Liang, Yong
2011-03-01
A tandem solid-phase extraction (SPE) method connecting two different cartridges (C18 and MCX) in series was developed as the extraction procedure in this study; it provided better extraction yields (>86%) for all analytes and more effective sample purification from endogenous interfering materials than a single cartridge. Analyte separation was achieved on a C18 reversed-phase column at a wavelength of 265 nm by high-performance liquid chromatography (HPLC). The method was validated in terms of extraction yield, precision and accuracy. These assays gave mean accuracy values higher than 89%, with RSD values always less than 3.8%. The method has been successfully applied to plasma samples from rats after oral administration of the target compounds. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Strategies for implementing genomic selection for feed efficiency in dairy cattle breeding schemes.
Wallén, S E; Lillehammer, M; Meuwissen, T H E
2017-08-01
Alternative genomic selection and traditional BLUP breeding schemes were compared for the genetic improvement of feed efficiency in simulated Norwegian Red dairy cattle populations. The change in genetic gain over time and achievable selection accuracy were studied for milk yield and residual feed intake, as a measure of feed efficiency. When including feed efficiency in genomic BLUP schemes, it was possible to achieve high selection accuracies for genomic selection, and all genomic BLUP schemes gave better genetic gain for feed efficiency than BLUP using a pedigree relationship matrix. However, introducing a second trait in the breeding goal caused a reduction in the genetic gain for milk yield. When using contracted test herds with genotyped and feed efficiency recorded cows as a reference population, adding an additional 4,000 new heifers per year to the reference population gave accuracies that were comparable to a male reference population that used progeny testing with 250 daughters per sire. When the test herd consisted of 500 or 1,000 cows, lower genetic gain was found than using progeny test records to update the reference population. It was concluded that to improve difficult to record traits, the use of contracted test herds that had additional recording (e.g., measurements required to calculate feed efficiency) is a viable option, possibly through international collaborations. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin
2018-04-26
Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance.
The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.
Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E
2009-11-01
Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, the accuracy of the prediction of the genome-wide breeding value (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first-lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by cross-validation. The accuracies of GW-EBV prediction were found to vary widely, between 0.12 and 0.62. G-BLUP gave the highest accuracy overall. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and lower bias than health traits with low heritability. To achieve similar accuracy for the health traits, more records will probably be needed.
Assessment of energy crops alternative to maize for biogas production in the Greater Region.
Mayer, Frédéric; Gerin, Patrick A; Noo, Anaïs; Lemaigre, Sébastien; Stilmant, Didier; Schmit, Thomas; Leclech, Nathael; Ruelle, Luc; Gennen, Jerome; von Francken-Welz, Herbert; Foucart, Guy; Flammang, Jos; Weyland, Marc; Delfosse, Philippe
2014-08-01
The biomethane yield of various energy crops, selected among potential alternatives to maize in the Greater Region, was assessed. The biomass yield, the volatile solids (VS) content and the biochemical methane potential (BMP) were measured to calculate the biomethane yield per hectare of all plant species. For all species, the dry matter biomass yield and the VS content were the main factors that influence, respectively, the biomethane yield and the BMP. Both values were predicted with good accuracy by linear regressions using the biomass yield and the VS as independent variables. The perennial crop miscanthus appeared to be the most promising alternative to maize when harvested as green matter in autumn and ensiled. Miscanthus reached a biomethane yield of 5.5 ± 1 × 10³ m³ ha⁻¹ during the second year after establishment, compared to 5.3 ± 1 × 10³ m³ ha⁻¹ for maize under similar crop conditions. Copyright © 2014. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Bosi, F.; Pellegrino, S.
2017-01-01
A molecular formulation of the onset of plasticity is proposed to assess temperature and strain rate effects in anisotropic semi-crystalline rubbery films. The presented plane stress criterion is based on the strain rate-temperature superposition principle and the cooperative theory of yielding, where some parameters are assumed to be material constants, while others are considered to depend on specific modes of deformation. An orthotropic yield function is developed for a linear low density polyethylene thin film. Uniaxial and biaxial inflation experiments were carried out to determine the yield stress of the membrane via a strain recovery method. It is shown that the 3% offset method predicts the uniaxial elastoplastic transition with good accuracy. Both the tensile yield points along the two principal directions of the film and the biaxial yield stresses are found to obey the superposition principle. The proposed yield criterion is compared against experimental measurements, showing excellent agreement over a wide range of deformation rates and temperatures.
Matsuba, Shinji; Tabuchi, Hitoshi; Ohsugi, Hideharu; Enno, Hiroki; Ishitobi, Naofumi; Masumoto, Hiroki; Kiuchi, Yoshiaki
2018-05-09
To predict exudative age-related macular degeneration (AMD), we combined a deep convolutional neural network (DCNN), a machine-learning algorithm, with Optos, an ultra-wide-field fundus imaging system. First, to evaluate the diagnostic accuracy of the DCNN, 364 photographic images (AMD: 137) were amplified, and the area under the curve (AUC), sensitivity and specificity were examined. Furthermore, to compare the diagnostic abilities of the DCNN and six ophthalmologists, we prepared a set of 84 images comprising equal proportions of normal and wet-AMD data, and calculated the correct answer rate, specificity, sensitivity, and response times. The DCNN exhibited 100% sensitivity and 97.31% specificity for wet-AMD images, with an average AUC of 99.76%. Moreover, in the comparison of diagnostic abilities, the average accuracy of the DCNN was 100%. On the other hand, the accuracy of the ophthalmologists, determined only from Optos images without a fundus examination, was 81.9%. A combination of the DCNN with Optos images is no better than a medical examination; however, it can identify exudative AMD with a high level of accuracy. Our system is considered useful for screening and telemedicine.
Kappa and Rater Accuracy: Paradigms and Parameters.
Conger, Anthony J
2017-12-01
Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa (κ). Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another (concordance), using both nonstochastic and stochastic category membership. Using a probability model to express category assignments in terms of rater accuracy and random error, it is shown that observed agreement (Po) depends only on rater accuracy and number of categories; however, expected agreement (Pe) and κ depend additionally on category frequencies. Moreover, category frequencies affect Pe and κ solely through the variance of the category proportions, regardless of the specific frequencies underlying the variance. Paradoxically, some judgment paradigms involving stochastic categories are shown to yield higher κ values than their nonstochastic counterparts. Using the stated probability model, assignments to categories were generated for 552 combinations of paradigms, rater and category parameters, category frequencies, and number of stimuli. Observed means and standard errors for Po, Pe, and κ were fully consistent with theory expectations. Guidelines for interpretation of rater accuracy and reliability are offered, along with a discussion of alternatives to the basic model.
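The quantities discussed above, observed agreement Po, expected agreement Pe, and κ, follow directly from two raters' category assignments. A minimal sketch (the function and ratings are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal category frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n     # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Two hypothetical raters assigning 8 stimuli to categories 0/1/2
a = [0, 0, 1, 1, 2, 2, 0, 1]
b = [0, 0, 1, 2, 2, 2, 1, 1]
kappa = cohens_kappa(a, b)  # ≈ 0.63
```

Note how Pe depends only on the marginal category frequencies, which is exactly why, as the article shows, category frequencies affect κ through the variance of the category proportions while Po does not depend on them.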
Erbe, M; Hayes, B J; Matukumalli, L K; Goswami, S; Bowman, P J; Reich, C M; Mason, B A; Goddard, M E
2012-07-01
Achieving accurate genomic estimated breeding values for dairy cattle requires a very large reference population of genotyped and phenotyped individuals. Assembling such reference populations has been achieved for breeds such as Holstein, but is challenging for breeds with fewer individuals. An alternative is to use a multi-breed reference population, such that smaller breeds gain some advantage in accuracy of genomic estimated breeding values (GEBV) from information from larger breeds. However, this requires that marker-quantitative trait loci associations persist across breeds. Here, we assessed the gain in accuracy of GEBV in Jersey cattle as a result of using a combined Holstein and Jersey reference population, with either 39,745 or 624,213 single nucleotide polymorphism (SNP) markers. The surrogate used for accuracy was the correlation of GEBV with daughter trait deviations in a validation population. Two methods were used to predict breeding values, either a genomic BLUP (GBLUP_mod), or a new method, BayesR, which used a mixture of normal distributions as the prior for SNP effects, including one distribution that set SNP effects to zero. The GBLUP_mod method scaled both the genomic relationship matrix and the additive relationship matrix to a base at the time the breeds diverged, and regressed the genomic relationship matrix to account for sampling errors in estimating relationship coefficients due to a finite number of markers, before combining the 2 matrices. Although these modifications did result in less biased breeding values for Jerseys compared with an unmodified genomic relationship matrix, BayesR gave the highest accuracies of GEBV for the 3 traits investigated (milk yield, fat yield, and protein yield), with an average increase in accuracy compared with GBLUP_mod across the 3 traits of 0.05 for both Jerseys and Holsteins. 
The advantage was limited for either Jerseys or Holsteins in using 624,213 SNP rather than 39,745 SNP (0.01 for Holsteins and 0.03 for Jerseys, averaged across traits). Even this limited and nonsignificant advantage was only observed when BayesR was used. An alternative panel, which extracted the SNP in the transcribed part of the bovine genome from the 624,213 SNP panel (to give 58,532 SNP), performed better, with an increase in accuracy of 0.03 for Jerseys across traits. This panel captures much of the increased genomic content of the 624,213 SNP panel, with the advantage of a greatly reduced number of SNP effects to estimate. Taken together, using this panel, a combined breed reference and using BayesR rather than GBLUP_mod increased the accuracy of GEBV in Jerseys from 0.43 to 0.52, averaged across the 3 traits. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Coniferous forest classification and inventory using Landsat and digital terrain data
NASA Technical Reports Server (NTRS)
Franklin, J.; Logan, T. L.; Woodcock, C. E.; Strahler, A. H.
1986-01-01
Machine-processing techniques were used in a Forest Classification and Inventory System (FOCIS) procedure to extract and process tonal, textural, and terrain information from registered Landsat multispectral and digital terrain data. Using FOCIS as a basis for stratified sampling, the softwood timber volumes of the Klamath National Forest and Eldorado National Forest were estimated within standard errors of 4.8 and 4.0 percent, respectively. The accuracy of these large-area inventories is comparable to the accuracy yielded by use of conventional timber inventory methods, but, because of automation, the FOCIS inventories are more rapid (9-12 months compared to 2-3 years for conventional manual photointerpretation, map compilation and drafting, field sampling, and data processing) and are less costly.
NASA Technical Reports Server (NTRS)
Jensen, J. R.; Tinney, L. R.; Estes, J. E.
1975-01-01
Cropland inventories utilizing high-altitude and Landsat imagery were conducted in Kern County, California. In terms of overall mean relative and absolute inventory accuracies, a Landsat multidate analysis yielded the best results, i.e., 98% accuracy. The 1:125,000 CIR high-altitude inventory is a serious alternative which can be very accurate (97% or more) if imagery is available for a specific study area. The operational remote sensing cropland inventories documented in this study are considered cost-effective: compared to conventional survey costs of $62-66 per 10,000 acres, the Landsat and high-altitude inventories required only 3-5% of this amount, i.e., $1.97-2.98.
Deep Space Navigation with Noncoherent Tracking Data
NASA Technical Reports Server (NTRS)
Ellis, J.
1983-01-01
Navigation capabilities of noncoherent tracking data are evaluated for interplanetary cruise phase and planetary (Venus) flyby orbit determination. Results of a formal covariance analysis are presented which show that a combination of one-way Doppler and delta differential one-way range (delta DOR) yields orbit accuracies comparable to conventional two-way Doppler tracking. For the interplanetary cruise phase, a tracking cycle consisting of a 3-hour Doppler pass and delta DOR from two baselines (one observation per overlap) acquired 3 times a month results in 100-km orbit determination accuracy. For reconstruction of a Venus flyby orbit, 10 days of tracking at encounter consisting of continuous one-way Doppler and delta DOR sampled at one observation per overlap is sufficient to satisfy the accuracy requirements.
ERIC Educational Resources Information Center
Jaffery, Rose; Johnson, Austin H.; Bowler, Mark C.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.; Harrison, Sayward E.
2015-01-01
To date, rater accuracy when using Direct Behavior Rating (DBR) has been evaluated by comparing DBR-derived data to scores yielded through systematic direct observation. The purpose of this study was to evaluate an alternative method for establishing comparison scores using expert-completed DBR alongside best practices in consensus building…
Accuracy of electron densities obtained via Koopmans-compliant hybrid functionals
NASA Astrophysics Data System (ADS)
Elmaslmane, A. R.; Wetherell, J.; Hodgson, M. J. P.; McKenna, K. P.; Godby, R. W.
2018-04-01
We evaluate the accuracy of electron densities and quasiparticle energy gaps given by hybrid functionals by directly comparing these to the exact quantities obtained from solving the many-electron Schrödinger equation. We determine the admixture of Hartree-Fock exchange to approximate exchange-correlation in our hybrid functional via one of several physically justified constraints, including the generalized Koopmans' theorem. We find that hybrid functionals yield strikingly accurate electron densities and gaps in both exchange-dominated and correlated systems. We also discuss the role of the screened Fock operator in the success of hybrid functionals.
Comparing Features for Classification of MEG Responses to Motor Imagery
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Background Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. Methods MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. Results The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results.
Conclusions We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system. PMID:27992574
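The 5-fold cross-validation scheme used above can be illustrated independently of the MEG-specific feature extraction. The sketch below is purely illustrative: synthetic "band-power" features and a simple nearest-class-mean classifier stand in for the CSP/SSD features and the classifiers actually compared, and show how per-fold accuracies are averaged into a single estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for MI trials: 100 trials x 20 band-power features.
# One class gets slightly suppressed power in half the features, loosely
# mimicking the 10/20-Hz suppression during motor imagery.
X = rng.normal(1.0, 0.3, size=(100, 20))
y = np.repeat([0, 1], 50)
X[y == 1, :10] -= 0.25

def five_fold_accuracy(X, y, k=5):
    """Nearest-class-mean classifier evaluated with k-fold cross-validation."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)          # all indices not in this fold
        mu0 = X[train][y[train] == 0].mean(axis=0)
        mu1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[f] - mu0, axis=1)
        d1 = np.linalg.norm(X[f] - mu1, axis=1)
        pred = (d1 < d0).astype(int)          # assign the closer class mean
        accs.append((pred == y[f]).mean())
    return float(np.mean(accs))

print(f"mean 5-fold accuracy: {five_fold_accuracy(X, y):.2f}")
```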
Matías-Guiu, Jordi A; Valles-Salgado, María; Rognoni, Teresa; Hamre-Gil, Frank; Moreno-Ramos, Teresa; Matías-Guiu, Jorge
2017-01-01
Our aim was to evaluate and compare the diagnostic properties of 5 screening tests for the diagnosis of mild Alzheimer disease (AD). We conducted a prospective and cross-sectional study of 92 patients with mild AD and of 68 healthy controls from our Department of Neurology. The diagnostic properties of the following tests were compared: Mini-Mental State Examination (MMSE), Addenbrooke's Cognitive Examination III (ACE-III), Memory Impairment Screen (MIS), Montreal Cognitive Assessment (MoCA), and Rowland Universal Dementia Assessment Scale (RUDAS). All tests yielded high diagnostic accuracy, with the ACE-III achieving the best diagnostic properties. The area under the curve was 0.897 for the ACE-III, 0.889 for the RUDAS, 0.874 for the MMSE, 0.866 for the MIS, and 0.856 for the MoCA. The Mini-ACE score from the ACE-III showed the highest diagnostic capacity (area under the curve 0.939). Memory scores of the ACE-III and of the RUDAS showed a better diagnostic accuracy than those of the MMSE and of the MoCA. All tests, especially the ACE-III, conveyed a higher diagnostic accuracy in patients with full primary education than in the less educated group. Implementing normative data improved the diagnostic accuracy of the ACE-III but not that of the other tests. The ACE-III achieved the highest diagnostic accuracy. This better discrimination was more evident in the more educated group. © 2017 S. Karger AG, Basel.
Walsworth, Matthew K; Doukas, William C; Murphy, Kevin P; Mielcarek, Billie J; Michener, Lori A
2008-01-01
Glenoid labral tears provide a diagnostic challenge. Combinations of items in the patient history and physical examination will provide stronger diagnostic accuracy to suggest the presence or absence of glenoid labral tear than will individual items. Cohort study (diagnosis); Level of evidence, 1. History and examination findings in patients with shoulder pain (N = 55) were compared with arthroscopic findings to determine diagnostic accuracy and intertester reliability. The intertester reliability of the crank, anterior slide, and active compression tests was 0.20 to 0.24. A combined history of popping or catching and positive crank or anterior slide results yielded specificities of 0.91 and 1.00 and positive likelihood ratios of 3.0 and infinity, respectively. A positive anterior slide result combined with either a positive active compression or crank result yielded specificities of 0.91 and positive likelihood ratios of 2.75 and 3.75, respectively. Requiring only a single positive finding in the combination of popping or catching and the anterior slide or crank yielded sensitivities of 0.82 and 0.89 and negative likelihood ratios of 0.31 and 0.33, respectively. The diagnostic accuracy of individual tests in previous studies is quite variable, which may be explained in part by the modest reliability of these tests. The combination of popping or catching with a positive crank or anterior slide result or a positive anterior slide result with a positive active compression or crank test result suggests the presence of a labral tear. The combined absence of popping or catching and a negative anterior slide or crank result suggests the absence of a labral tear.
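The likelihood ratios quoted above follow directly from sensitivity and specificity: LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity, so a specificity of 1.00 makes LR+ infinite, as reported for one combination. A small sketch with illustrative values (not the study's exact sensitivity/specificity pairings):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    lr_pos = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Illustrative values: sensitivity 0.82, specificity 0.91
lr_pos, lr_neg = likelihood_ratios(0.82, 0.91)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")

# A perfectly specific finding rules the diagnosis in: LR+ is infinite
print(likelihood_ratios(0.50, 1.00)[0])
```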
Endelman, Jeffrey B; Carley, Cari A Schmitz; Bethke, Paul C; Coombs, Joseph J; Clough, Mark E; da Silva, Washington L; De Jong, Walter S; Douches, David S; Frederick, Curtis M; Haynes, Kathleen G; Holm, David G; Miller, J Creighton; Muñoz, Patricio R; Navarro, Felix M; Novy, Richard G; Palta, Jiwan P; Porter, Gregory A; Rak, Kyle T; Sathuvalli, Vidyasagar R; Thompson, Asunta L; Yencho, G Craig
2018-05-01
As one of the world's most important food crops, the potato (Solanum tuberosum L.) has spurred innovation in autotetraploid genetics, including in the use of SNP arrays to determine allele dosage at thousands of markers. By combining genotype and pedigree information with phenotype data for economically important traits, the objectives of this study were to (1) partition the genetic variance into additive vs. nonadditive components, and (2) determine the accuracy of genome-wide prediction. Between 2012 and 2017, a training population of 571 clones was evaluated for total yield, specific gravity, and chip fry color. Genomic covariance matrices for additive (G), digenic dominant (D), and additive × additive epistatic (G#G) effects were calculated using 3895 markers, and the numerator relationship matrix (A) was calculated from a 13-generation pedigree. Based on model fit and prediction accuracy, mixed model analysis with G was superior to A for yield and fry color but not specific gravity. The amount of additive genetic variance captured by markers was 20% of the total genetic variance for specific gravity, compared to 45% for yield and fry color. Within the training population, including nonadditive effects improved accuracy and/or bias for all three traits when predicting total genotypic value. When six F1 populations were used for validation, prediction accuracy ranged from 0.06 to 0.63 and was consistently lower (0.13 on average) without allele dosage information. We conclude that genome-wide prediction is feasible in potato and that it will improve selection for breeding value given the substantial amount of nonadditive genetic variance in elite germplasm. Copyright © 2018 by the Genetics Society of America.
Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C
2016-09-01
The DSM-5 was published in 2013 and it included two substantive revisions for gambling disorder (GD). These changes are the reduction in the threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold: first, to assess the reliability, validity and classification accuracy of the DSM-5 diagnostic criteria for GD; second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of the elimination of the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly in reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four, and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.
Hierarchical image segmentation via recursive superpixel with adaptive regularity
NASA Astrophysics Data System (ADS)
Nakamura, Kensuke; Hong, Byung-Woo
2017-11-01
A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is determined dynamically from the local residual during the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details, and it offers the best balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
Prediction of beta-turns from amino acid sequences using the residue-coupled model.
Guruprasad, K; Shukla, S
2003-04-01
We evaluated the prediction of beta-turns from amino acid sequences using the residue-coupled model with an enlarged representative protein data set selected from the Protein Data Bank. Our results show that the probability values derived from a data set comprising 425 protein chains yielded an overall beta-turn prediction accuracy of 68.74%, compared with 94.7% reported earlier on a data set of 30 proteins using the same method. However, we noted that the overall beta-turn prediction accuracy using probability values derived from the 30-protein data set drops to 40.74% when tested on the data set comprising 425 protein chains. In contrast, using probability values derived from the 425-chain data set used in this analysis, the overall beta-turn prediction accuracy was consistent when tested on either the 30-protein data set used earlier (64.62%), a more recent representative data set comprising 619 protein chains (64.66%), or a jackknife data set comprising 476 representative protein chains (63.38%). We therefore recommend the use of probability values derived from the 425 representative protein chains data set reported here, which gives more realistic and consistent predictions of beta-turns from amino acid sequences.
Wang, Z X; Chen, S L; Wang, Q Q; Liu, B; Zhu, J; Shen, J
2015-06-01
The aim of this study was to evaluate the accuracy of magnetic resonance imaging in the detection of triangular fibrocartilage complex injury through a meta-analysis. A comprehensive literature search was conducted before 1 April 2014. All studies comparing magnetic resonance imaging results with arthroscopy or open surgery findings were reviewed, and 25 studies that satisfied the eligibility criteria were included. Data were pooled to yield an overall sensitivity of 0.83 and specificity of 0.82. In detection of central and peripheral tears, magnetic resonance imaging had pooled sensitivities of 0.90 and 0.88 and pooled specificities of 0.97 and 0.97, respectively. Six high-quality studies using Ringler's recommended magnetic resonance imaging parameters were selected for analysis to determine whether optimal imaging protocols yielded better results. The pooled sensitivity and specificity of these six studies were 0.92 and 0.82, respectively. The overall accuracy of magnetic resonance imaging was acceptable. For peripheral tears, the pooled data showed a relatively high accuracy. Magnetic resonance imaging with appropriate parameters is an ideal method for diagnosing different types of triangular fibrocartilage complex tears. © The Author(s) 2015.
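Pooling sensitivity and specificity across studies can be illustrated by aggregating per-study 2x2 tables. The counts below are invented for illustration (they are not the 25 studies in this meta-analysis), and published diagnostic-accuracy meta-analyses typically fit bivariate random-effects models rather than the simple fixed-effect pooling sketched here:

```python
# Each tuple: (true positives, false negatives, true negatives, false positives).
# Illustrative counts only, not data from the studies in the meta-analysis.
studies = [
    (40, 8, 35, 7),
    (25, 5, 30, 6),
    (55, 12, 48, 11),
]

tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies)
fp = sum(s[3] for s in studies)

# Pooled estimates from the aggregated 2x2 table
pooled_sensitivity = tp / (tp + fn)
pooled_specificity = tn / (tn + fp)
print(f"pooled sensitivity = {pooled_sensitivity:.2f}, "
      f"pooled specificity = {pooled_specificity:.2f}")
```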
Decay Properties of K-Vacancy States in Fe X-Fe XVII
NASA Technical Reports Server (NTRS)
Mendoza, C.; Kallman, T. R.; Bautista, M. A.; Palmeri, P.
2003-01-01
We report extensive calculations of the decay properties of fine-structure K-vacancy levels in Fe X-Fe XVII. A large set of level energies, wavelengths, radiative and Auger rates, and fluorescence yields has been computed using three different standard atomic codes, namely Cowan's HFR, AUTOSTRUCTURE and the Breit-Pauli R-matrix package. This multi-code approach is used to study the effects of core relaxation, configuration interaction and the Breit interaction, and enables estimates of statistical accuracy ratings. The Ksigma and KLL Auger widths have been found to be nearly independent of both the outer-electron configuration and electron occupancy, keeping a constant ratio of 1.53 +/- 0.06. By comparing with previous theoretical and measured wavelengths, the accuracy of the present set is determined to be within 2 mÅ. Also, the good agreement found between the different radiative and Auger data sets that have been computed allows us to propose with confidence an accuracy rating of 20% for line fluorescence yields greater than 0.01. Emission and absorption spectral features are predicted, showing good correlation with measurements in both laboratory and astrophysical plasmas.
NASA Astrophysics Data System (ADS)
Rahayu, A. P.; Hartatik, T.; Purnomoadi, A.; Kurnianto, E.
2018-02-01
The aims of this study were to estimate the 305-day first-lactation milk yield of Indonesian Holstein cattle from cumulative monthly and bimonthly test-day records and to analyze the accuracy of these estimates. First-lactation records of 258 dairy cows from 2006 to 2014, comprising 2571 monthly (MTDY) and 1281 bimonthly (BTDY) test-day yield records, were used. Milk yields were estimated by the regression method. Correlation coefficients between actual and estimated milk yield by cumulative MTDY were 0.70, 0.78, 0.83, 0.86, 0.89, 0.92, 0.94 and 0.96 for 2-9 months, respectively, while those by cumulative BTDY were 0.69, 0.81, 0.87 and 0.92 for 2, 4, 6 and 8 months, respectively. The accuracy of the fitted regression models (R2) increased with the number of cumulative test days used. The use of 5 cumulative MTDY was considered sufficient for estimating 305-day first-lactation milk yield, with 80.6% accuracy and a 7% error percentage of estimation. Estimates from MTDY were more accurate than those from BTDY, with a 1.1 to 2% lower error percentage over the same period.
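The extrapolation step described above, regressing actual 305-day yield on a cumulative partial-lactation yield and reporting r, R2, and an error percentage, can be sketched on simulated data. The herd size matches the study, but the yield levels, slope, and noise level below are assumptions chosen only so the code runs end to end:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated herd (illustrative): cumulative 5-month test-day yield vs. actual
# 305-day yield; correlated but not perfectly, as in real lactation records.
n = 258
cum_5mo = rng.normal(2000, 300, n)                  # kg, cumulative 5 months
actual_305 = 1.9 * cum_5mo + rng.normal(0, 280, n)  # kg, full first lactation

# Fit the linear regression used to extrapolate 305-day yield from partial records
slope, intercept = np.polyfit(cum_5mo, actual_305, 1)
predicted_305 = slope * cum_5mo + intercept

# Accuracy measures analogous to those reported: correlation, R2, mean % error
r = np.corrcoef(actual_305, predicted_305)[0, 1]
r2 = r ** 2
mape = np.mean(np.abs(predicted_305 - actual_305) / actual_305) * 100
print(f"r = {r:.2f}, R^2 = {r2:.2f}, mean error = {mape:.1f}%")
```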
Reference-based phasing using the Haplotype Reference Consortium panel.
Loh, Po-Ru; Danecek, Petr; Palamara, Pier Francesco; Fuchsberger, Christian; A Reshef, Yakir; K Finucane, Hilary; Schoenherr, Sebastian; Forer, Lukas; McCarthy, Shane; Abecasis, Goncalo R; Durbin, Richard; L Price, Alkes
2016-11-01
Haplotype phasing is a fundamental problem in medical and population genetics. Phasing is generally performed via statistical phasing in a genotyped cohort, an approach that can yield high accuracy in very large cohorts but attains lower accuracy in smaller cohorts. Here we instead explore the paradigm of reference-based phasing. We introduce a new phasing algorithm, Eagle2, that attains high accuracy across a broad range of cohort sizes by efficiently leveraging information from large external reference panels (such as the Haplotype Reference Consortium; HRC) using a new data structure based on the positional Burrows-Wheeler transform. We demonstrate that Eagle2 attains a ∼20× speedup and ∼10% increase in accuracy compared to reference-based phasing using SHAPEIT2. On European-ancestry samples, Eagle2 with the HRC panel achieves >2× the accuracy of 1000 Genomes-based phasing. Eagle2 is open source and freely available for HRC-based phasing via the Sanger Imputation Service and the Michigan Imputation Server.
Orbit Determination Accuracy for Comets on Earth-Impacting Trajectories
NASA Technical Reports Server (NTRS)
Kay-Bunnell, Linda
2004-01-01
The results presented show the level of orbit determination accuracy obtainable for long-period comets discovered approximately one year before collision with Earth. Preliminary orbits are determined from simulated observations using Gauss' method. Additional measurements are incorporated to improve the solution through the use of a Kalman filter, and include non-gravitational perturbations due to outgassing. Comparisons between observatories in several different circular heliocentric orbits show that observatories in orbits with radii less than 1 AU result in increased orbit determination accuracy for short tracking durations due to increased parallax per unit time. However, an observatory at 1 AU will perform similarly if the tracking duration is increased, and accuracy is significantly improved if additional observatories are positioned at the Sun-Earth Lagrange points L3, L4, or L5. A single observatory at 1 AU capable of both optical and range measurements yields the highest orbit determination accuracy in the shortest amount of time when compared to other systems of observatories.
Korosoglou, Grigorios; Dubart, Alain-Eric; DaSilva, K Gaspar C; Labadze, Nino; Hardt, Stefan; Hansen, Alexander; Bekeredjian, Raffi; Zugck, Christian; Zehelein, Joerg; Katus, Hugo A; Kuecherer, Helmut
2006-01-01
Little is known about the incremental value of real-time myocardial contrast echocardiography (MCE) as an adjunct to pharmacologic stress testing. This study was performed to evaluate the diagnostic value of MCE to detect abnormal myocardial perfusion by technetium Tc 99m sestamibi-single photon emission computed tomography (SPECT) and anatomically significant coronary artery disease (CAD) by angiography. Myocardial contrast echocardiography was performed at rest and during vasodilator stress in consecutive patients (N = 120) undergoing SPECT imaging for known or suspected CAD. Myocardial opacification, wall motion, and tracer uptake were visually analyzed in 12 myocardial segments by 2 pairs of blinded observers. Concordance between the 2 methods was assessed using the kappa statistic. Of 1356 segments, 1025 (76%) were interpretable by MCE, wall motion, and SPECT. Sensitivity of wall motion was 75%, specificity 83%, and accuracy 81% for detecting abnormal myocardial perfusion by SPECT (kappa = 0.53). Myocardial contrast echocardiography and wall motion together yielded significantly higher sensitivity (85% vs 74%, P < .05), specificity of 83%, and accuracy of 85% (kappa = 0.64) for the detection of abnormal myocardial perfusion. In 89 patients who underwent coronary angiography, MCE and wall motion together yielded higher sensitivity (83% vs 64%, P < .05) and accuracy (77% vs 68%, P < .05) but similar specificity (72%) compared with SPECT for the detection of high-grade, stenotic (> or = 75%) coronary lesions. Assessment of myocardial perfusion adds value to conventional stress echocardiography by increasing its sensitivity for the detection of functionally abnormal myocardial perfusion. Myocardial contrast echocardiography and wall motion together provide higher sensitivity and accuracy for detection of CAD compared with SPECT.
Egger, Alexander E; Theiner, Sarah; Kornauth, Christoph; Heffeter, Petra; Berger, Walter; Keppler, Bernhard K; Hartinger, Christian G
2014-09-01
Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) was used to study the spatially-resolved distribution of ruthenium and platinum in viscera (liver, kidney, spleen, and muscle) originating from mice treated with the investigational ruthenium-based antitumor compound KP1339 or cisplatin, a potent, but nephrotoxic clinically-approved platinum-based anticancer drug. Method development was based on homogenized Ru- and Pt-containing samples (22.0 and 0.257 μg g(-1), respectively). Averaging yielded satisfactory precision and accuracy for both concentrations (3-15% and 93-120%, respectively), however when considering only single data points, the highly concentrated Ru sample maintained satisfactory precision and accuracy, while the low concentrated Pt sample yielded low recoveries and precision, which could not be improved by use of internal standards ((115)In, (185)Re or (13)C). Matrix-matched standards were used for quantification in LA-ICP-MS which yielded comparable metal distributions, i.e., enrichment in the cortex of the kidney in comparison with the medulla, a homogenous distribution in the liver and the muscle and areas of enrichment in the spleen. Elemental distributions were assigned to histological structures exceeding 100 μm in size. The accuracy of a quantitative LA-ICP-MS imaging experiment was validated by an independent method using microwave-assisted digestion (MW) followed by direct infusion ICP-MS analysis.
Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K
2018-06-09
The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed with a multivariate imputation by chained equations (MICE) algorithm. We compared models versus each other and baseline clinical scoring systems by using a mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography.
Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
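The AUC comparisons underlying these results can be illustrated with a self-contained sketch. The code below computes AUC via the Mann-Whitney U statistic on synthetic risk scores; the "weak" and "strong" scores are stand-ins for a clinical-only model and an echocardiography-augmented model, and none of the study's data, models, or AUC values are reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count 0.5
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Illustrative risk scores: the "stronger" model separates outcomes better
labels = rng.binomial(1, 0.3, 2000)
weak_score = labels * 0.5 + rng.normal(0, 1.0, 2000)    # clinical-only stand-in
strong_score = labels * 1.6 + rng.normal(0, 1.0, 2000)  # all-features stand-in

print(f"weak AUC = {auc(weak_score, labels):.2f}, "
      f"strong AUC = {auc(strong_score, labels):.2f}")
```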
Nikkels, A F; Debrus, S; Sadzot-Delvaux, C; Piette, J; Rentier, B; Piérard, G E
1995-12-01
Early and specific recognition of varicella zoster virus (VZV) infection is of vital concern in immunocompromised patients. The aim of this study was to compare the diagnostic accuracy of histochemical and immunohistochemical identification of the VZV ORF63 encoded protein (IE63) and of the VZV late protein gE on smears and formalin-fixed paraffin-embedded skin sections taken from lesions clinically diagnosed as varicella (n = 15) and herpes zoster (n = 51). Microscopic examinations of Tzanck smears and skin sections yielded a diagnostic accuracy of Herpesviridae infections in 66.7% (10/15) and 92.3% (12/13) of varicella, and 74.4% (29/39) and 87.8% (43/49) of herpes zoster, respectively. Immunohistochemistry applied to varicella provided a type-specific virus diagnostic accuracy of 86.7% (13/15; IE63) and 100% (15/15; gE) on smears, and of 92.3% for both VZV proteins on skin sections. In herpes zoster, the diagnostic accuracy of immunohistochemistry reached 92.3% (36/39; IE63) and 94.9% (37/39; gE) on smears, and 91.7% (44/48; IE63) and 91.8% (45/49; gE) on skin sections. These findings indicate that the immunohistochemical detection of IE63 and gE on both smears and skin sections yields a higher specificity and sensitivity than standard microscopic assessments.
Effects of log defects on lumber recovery.
James M. Cahill; Vincent S. Cegelka
1989-01-01
The impact of log defects on lumber recovery and the accuracy of cubic log scale deductions were evaluated from log scale and product recovery data for more than 3,000 logs. Lumber tally loss was estimated by comparing the lumber yield of sound logs to that of logs containing defects. The data were collected at several product recovery studies; they represent most of...
2014-01-01
Background Support vector regression (SVR) and Gaussian process regression (GPR) were used for the analysis of electroanalytical experimental data to estimate diffusion coefficients. Results For simulated cyclic voltammograms based on the EC, Eqr, and EqrC mechanisms, these regression algorithms in combination with nonlinear kernel/covariance functions yielded diffusion coefficients with higher accuracy compared with the standard approach of calculating diffusion coefficients relying on the Nicholson-Shain equation. The level of accuracy achieved by SVR and GPR is virtually independent of the rate constants governing the respective reaction steps. Further, the reduction of high-dimensional voltammetric signals by manual selection of typical voltammetric peak features decreased the performance of both regression algorithms compared to a reduction by downsampling or principal component analysis. After training on simulated data sets, diffusion coefficients were estimated by the regression algorithms for experimental data comprising voltammetric signals for three organometallic complexes. Conclusions Estimated diffusion coefficients closely matched the values determined by the parameter-fitting method, while reducing the required computational time considerably for one of the reaction mechanisms. The automated processing of voltammograms according to the regression algorithms yields better results than the conventional analysis of peak-related data. PMID:24987463
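The regression idea above, learning a mapping from a voltammetric feature to a diffusion coefficient after training on simulated data, can be sketched with kernel ridge regression, which coincides with the GPR posterior mean under a Gaussian likelihood. Everything below is an illustrative assumption rather than the paper's setup: a single Randles-Sevcik-style peak-current feature with i_p proportional to the square root of D (reversible case), invented noise levels, and ad hoc hyperparameters, in place of the full voltammograms and tuned SVR/GPR models actually used:

```python
import numpy as np

rng = np.random.default_rng(4)

# Training set: simulated peak currents with i_p ~ sqrt(D), plus measurement
# noise. Units and scale factors are arbitrary for this sketch.
D_train = rng.uniform(1e-6, 1e-5, 80)              # diffusion coefficients, cm^2/s
i_train = np.sqrt(D_train) * 1e4 * (1 + rng.normal(0, 0.02, 80))

def rbf(a, b, ell):
    """RBF (squared-exponential) kernel between two 1-D feature vectors."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Kernel ridge regression: alpha = (K + noise*I)^{-1} y
ell, noise = i_train.std(), 1e-3
K = rbf(i_train, i_train, ell)
alpha = np.linalg.solve(K + noise * np.eye(len(i_train)), D_train)

# Predict D for a new voltammogram's (noise-free, for the sketch) peak current
D_test = 5e-6
i_test = np.array([np.sqrt(D_test) * 1e4])
D_hat = rbf(i_test, i_train, ell) @ alpha
print(f"true D = {D_test:.2e}, estimated D = {D_hat[0]:.2e}")
```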
Van Norman, Ethan R; Nelson, Peter M; Klingbeil, David A
2017-09-01
Educators need recommendations to improve screening practices without limiting students' instructional opportunities. Repurposing previous years' state test scores has shown promise in identifying at-risk students within multitiered systems of support. However, researchers have not directly compared the diagnostic accuracy of previous years' state test scores with data collected during fall screening periods to identify at-risk students. In addition, the benefit of using previous state test scores in conjunction with data from a separate measure to identify at-risk students has not been explored. The diagnostic accuracy of 3 types of screening approaches was tested to predict proficiency on end-of-year high-stakes assessments: state test data obtained during the previous year, data from a different measure administered in the fall, and both measures combined (i.e., a gated model). Extant reading and math data (N = 2,996) from 10 schools in the Midwest were analyzed. When used alone, both measures yielded similar sensitivity and specificity values. The gated model yielded superior specificity values compared with using either measure alone, at the expense of sensitivity. Implications, limitations, and ideas for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Sung, Yun J; Gu, C Charles; Tiwari, Hemant K; Arnett, Donna K; Broeckel, Ulrich; Rao, Dabeeru C
2012-07-01
Genotype imputation provides genotypes for untyped single nucleotide polymorphisms (SNPs) that are present on a reference panel such as those from the HapMap Project. It is popular for increasing statistical power and comparing results across studies using different platforms. Imputation for African American populations is challenging because their linkage disequilibrium blocks are shorter and also because no ideal reference panel is available due to admixture. In this paper, we evaluated three imputation strategies for African Americans. The intersection strategy used a combined panel consisting of SNPs polymorphic in both CEU and YRI. The union strategy used a panel consisting of SNPs polymorphic in either CEU or YRI. The merge strategy merged results from two separate imputations, one using CEU and the other using YRI. Because investigators are increasingly using data from the 1000 Genomes (1KG) Project for genotype imputation, we evaluated both 1KG-based and HapMap-based imputations. We used 23,707 SNPs from chromosomes 21 and 22 on the Affymetrix SNP Array 6.0 genotyped for 1,075 HyperGEN African Americans. We found that 1KG-based imputations provided a substantially larger number of variants than HapMap-based imputations: about three times as many common variants and eight times as many rare and low-frequency variants. This higher yield is expected because the 1KG panel includes more SNPs. Accuracy rates using 1KG data were slightly lower than those using HapMap data before filtering, but slightly higher after filtering. The union strategy provided the highest imputation yield with the next highest accuracy. The intersection strategy provided the lowest imputation yield but the highest accuracy. The merge strategy provided the lowest imputation accuracy. We observed that SNPs polymorphic only in CEU had much lower accuracy, reducing the accuracy of the union strategy.
Our findings suggest that 1KG-based imputations can facilitate discovery of significant associations for SNPs across the whole MAF spectrum. Because the 1KG Project is still under way, we expect that later versions will provide better imputation performance. © 2012 Wiley Periodicals, Inc.
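The intersection and union strategies evaluated above reduce to simple set operations over the SNPs that are polymorphic in each reference population; the SNP IDs below are toy placeholders.

```python
# Toy illustration of the panel strategies as set operations over SNPs
# polymorphic in each HapMap reference population.

ceu_polymorphic = {"rs1", "rs2", "rs3", "rs5"}
yri_polymorphic = {"rs2", "rs3", "rs4", "rs6"}

# Intersection: lowest yield, highest accuracy (per the study).
intersection_panel = ceu_polymorphic & yri_polymorphic

# Union: highest yield, next highest accuracy.
union_panel = ceu_polymorphic | yri_polymorphic

# The merge strategy would instead run two imputations (CEU-only,
# YRI-only) and combine the per-SNP results afterwards.
```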
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of expected outage times, enabling better collective response planning and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard (Cox PH) models) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
Predicting grain yield using canopy hyperspectral reflectance in wheat breeding data.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; de Los Campos, Gustavo; Alvarado, Gregorio; Suchismita, Mondal; Rutkoski, Jessica; González-Pérez, Lorena; Burgueño, Juan
2017-01-01
Modern agriculture uses hyperspectral cameras to obtain hundreds of reflectance measurements at discrete narrow bands covering the whole visible light spectrum and part of the infrared and ultraviolet spectra, depending on the camera. This information is used to construct vegetation indices (VIs) (e.g., the green normalized difference vegetation index or GNDVI, the simple ratio or SRa, etc.), which are used for the prediction of primary traits (e.g., biomass). However, these indices use only some bands and are cultivar-specific; therefore, they lose considerable information and are not robust across cultivars. This study proposes models that use all available bands as predictors to increase prediction accuracy; we compared these approaches with eight conventional VIs constructed using only some bands. The data set comes from CIMMYT's global wheat program and comprises 1,170 genotypes evaluated for grain yield (ton/ha) in five environments (Drought, Irrigated, EarlyHeat, Melgas and Reduced Irrigated); the reflectance data were measured in 250 discrete narrow bands ranging between 392 and 851 nm. The proposed models for the simultaneous analysis of all the bands were ordinary least squares (OLS), Bayes B, principal components with Bayes B, functional B-spline, functional Fourier and functional partial least squares. The results of these models were compared with OLS performed using each of the eight VIs, individually and combined, as predictors. We found that using all bands simultaneously increased prediction accuracy more than using VIs alone. The spline and Fourier models had the best prediction accuracy for each of the nine time-points under study. Combining image data collected at different time-points led to a small increase in prediction accuracy relative to models that use data from a single time-point.
In addition, using as predictor variables only the bands with heritabilities larger than 0.5 improved prediction accuracy in the Drought environment.
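For contrast with the all-band models, a vegetation index such as GNDVI uses just two of the 250 bands. A sketch, with hypothetical reflectance values and band positions:

```python
# GNDVI uses only a green band and a near-infrared band, whereas the
# all-band models above (OLS, Bayes B, functional B-spline/Fourier)
# would use every entry of the spectrum as a predictor.

def gndvi(nir, green):
    """Green NDVI: (NIR - Green) / (NIR + Green)."""
    return (nir - green) / (nir + green)

# Toy 250-band spectrum; band positions below are hypothetical stand-ins
# for a green band and an NIR band within the 392-851 nm range.
spectrum = [0.05 + 0.002 * b for b in range(250)]
green_band, nir_band = spectrum[60], spectrum[200]
index = gndvi(nir_band, green_band)
```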
Towards systematic evaluation of crop model outputs for global land-use models
NASA Astrophysics Data System (ADS)
Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr
2016-04-01
Land provides vital socioeconomic resources to society, though at the cost of substantial environmental degradation. Global integrated models combining high-resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly being used to inform sustainable solutions for agricultural land use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values and spatial variability are weakly constrained, and increasing the accuracy of such datasets has a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land-use GEM to inform on present-day crop yield, water and nutrient input needs for 16 crops x 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data are generated and how they articulate with GLOBIOM in order to transparently identify the performances to be evaluated, as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: i) in space, ii) across crop species, and iii) across management intensities. We will present and discuss measures of these expected performances and weigh the relative contributions of the crop model, input data and data processing steps to performance. We will also compare the obtained yield gaps and main yield-limiting factors against the M3 dataset.
Next steps include iterative improvement of parameter assumptions and evaluation of implications of GGCM performances for intended use in the IIASA EPIC-GLOBIOM model cluster. Our approach helps targeting future efforts at improving GGCM accuracy and would achieve highest efficiency if combined with traditional field-scale evaluation and sensitivity analysis.
Spatial modeling and classification of corneal shape.
Marsolo, Keith; Twa, Michael; Bullimore, Mark A; Parthasarathy, Srinivasan
2007-03-01
One of the most promising applications of data mining is in biomedical data used in patient diagnosis. Any method of data analysis intended to support the clinical decision-making process should meet several criteria: it should capture clinically relevant features, be computationally feasible, and provide easily interpretable results. In an initial study, we examined the feasibility of using Zernike polynomials to represent biomedical instrument data in conjunction with a decision tree classifier to distinguish between diseased and non-diseased eyes. Here, we provide a comprehensive follow-up to that work, examining a second representation, pseudo-Zernike polynomials, to determine whether they provide any increase in classification accuracy. We compare the fidelity of both methods using residual root-mean-square (RMS) error and evaluate accuracy using several classifiers: neural networks, C4.5 decision trees, Voting Feature Intervals, and Naïve Bayes. We also examine the effect of several meta-learning strategies: boosting, bagging, and Random Forests (RFs). We present results comparing accuracy as it relates to dataset and transformation resolution over a larger, more challenging, multi-class dataset. These results show that classification accuracy is similar for both data transformations but differs by classifier. We find that the Zernike polynomials provide better feature representation than the pseudo-Zernikes and that decision trees yield the best balance of classification accuracy and interpretability.
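The fidelity comparison above uses residual root-mean-square error between the measured surface and its polynomial reconstruction; a minimal sketch with toy values:

```python
# Residual RMS fidelity measure: root-mean-square of surface heights
# minus their polynomial reconstruction. The data here are toy values,
# not corneal measurements.
import math

def residual_rms(surface, reconstruction):
    n = len(surface)
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(surface, reconstruction)) / n)

surface = [1.0, 1.2, 0.9, 1.1]
reconstruction = [1.0, 1.1, 1.0, 1.1]  # hypothetical Zernike-style fit
rms = residual_rms(surface, reconstruction)
```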
Mizinga, Kemmy M; Burnett, Thomas J; Brunelle, Sharon L; Wallace, Michael A; Coleman, Mark R
2018-05-01
The U.S. Department of Agriculture, Food Safety Inspection Service regulatory method for monensin, Chemistry Laboratory Guidebook CLG-MON, is a semiquantitative bioautographic method adopted in 1991. Official Method of Analysis (OMA) 2011.24, a modern quantitative and confirmatory LC-tandem MS method, uses no chlorinated solvents and has several advantages, including ease of use, ready availability of reagents and materials, shorter run time, and higher throughput than CLG-MON. Therefore, a bridging study was conducted to support the replacement of method CLG-MON with OMA 2011.24 for regulatory use. Using fortified bovine tissue samples, CLG-MON yielded accuracies of 80-120% in 44 of the 56 samples tested (one sample had no result, six samples had accuracies of >120%, and five samples had accuracies of 40-160%), but the semiquantitative nature of CLG-MON prevented assessment of precision, whereas OMA 2011.24 had accuracies of 88-110% and RSDr of 0.00-15.6%. Incurred residue results corroborated these findings, demonstrating improved accuracy (83.3-114%) and good precision (RSDr of 2.6-20.5%) for OMA 2011.24 compared with CLG-MON (accuracy generally within 80-150%, with exceptions). Furthermore, χ² analysis revealed no statistically significant difference between the two methods. Thus, the microbiological activity of monensin correlated with the determination of monensin A in bovine tissues, and OMA 2011.24 provided improved accuracy and precision over CLG-MON.
NASA Technical Reports Server (NTRS)
Rignot, Eric; Williams, Cynthia; Way, Jobea; Viereck, Leslie
1993-01-01
A maximum a posteriori Bayesian classifier for multifrequency polarimetric SAR data is used to perform a supervised classification of forest types in the floodplains of Alaska. The image classes include white spruce, balsam poplar, black spruce, alder, non-forests, and open water. The authors investigate the effect on classification accuracy of changing environmental conditions and of the frequency and polarization of the signal. The highest classification accuracy (86 percent correctly classified forest pixels, and 91 percent overall) is obtained by combining fully polarimetric L- and C-band data acquired on a date when the forest was just recovering from flooding. The forest map compares favorably with a vegetation map assembled from digitized aerial photos, which took five years to complete and addresses the state of the forest in 1978, ignoring subsequent fires, changes in the course of the river, clear-cutting of trees, and tree growth. HV polarization is the most useful polarization at L- and C-band for classification. C-band VV (ERS-1 mode) and L-band HH (J-ERS-1 mode), alone or combined, yield unsatisfactory classification accuracies. Additional data acquired in the winter season during thawed and frozen days yield classification accuracies 20 percent and 30 percent lower, respectively, due to greater confusion between conifers and deciduous trees. Data acquired at the peak of flooding in May 1991 also yield classification accuracies 10 percent lower because of dominant trunk-ground interactions, which mask finer differences in radar backscatter between tree species. Combining several of these dates does not improve classification accuracy. For comparison, panchromatic optical data acquired by SPOT in the summer season of 1991 are used to classify the same area.
The classification accuracy (78 percent for the forest types and 90 percent if open water is included) is lower than that obtained with AIRSAR, although conifers and deciduous trees are better separated due to the presence of leaves on the deciduous trees. Optical data do not separate black spruce and white spruce as well as SAR data, cannot separate alder from balsam poplar, and are of course limited by the frequent cloud cover in the polar regions. Yet combining SPOT and AIRSAR offers a better chance of identifying vegetation types independent of ground truth information, using a combination of NDVI indices from SPOT, biomass numbers from AIRSAR, and a segmentation map from either one.
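A maximum a posteriori classifier of the kind used above picks the class with the highest posterior given the observed backscatter. The sketch below is a deliberately simplified one-dimensional Gaussian stand-in (real polarimetric SAR classifiers model the full multivariate statistics); the class parameters and priors are hypothetical.

```python
# Simplified MAP classification per pixel: argmax over classes of
# log prior + log Gaussian likelihood of the observed backscatter (dB).
# Means, spreads, and priors below are hypothetical illustrations.
import math

classes = {  # class: (mean backscatter dB, std dB, prior)
    "white spruce": (-8.0, 1.5, 0.3),
    "open water": (-18.0, 2.0, 0.2),
    "non-forest": (-12.0, 2.5, 0.5),
}

def map_classify(x):
    def log_post(params):
        mu, sigma, prior = params
        return math.log(prior) - math.log(sigma) - (x - mu) ** 2 / (2 * sigma ** 2)
    return max(classes, key=lambda c: log_post(classes[c]))
```

Multifrequency, fully polarimetric data help precisely because they separate the class likelihoods that overlap in any single channel.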
Phylogenetic inference under varying proportions of indel-induced alignment gaps
Dwivedi, Bhakti; Gadagkar, Sudhindra R
2009-01-01
Background The effect of alignment gaps on phylogenetic accuracy has been the subject of numerous studies. In this study, we investigated the relationship between the total number of gapped sites and phylogenetic accuracy, when the gaps were introduced (by means of computer simulation) to reflect indel (insertion/deletion) events during the evolution of DNA sequences. The resulting (true) alignments were subjected to commonly used gap treatment and phylogenetic inference methods. Results (1) In general, there was a strong – almost deterministic – relationship between the amount of gap in the data and the level of phylogenetic accuracy when the alignments were very "gappy", (2) gaps resulting from deletions (as opposed to insertions) contributed more to the inaccuracy of phylogenetic inference, (3) the probabilistic methods (Bayesian, PhyML, and "MLε," a method implemented in DNAML in PHYLIP) performed better at most levels of gap percentage when compared to parsimony (MP) and distance (NJ) methods, with Bayesian analysis being clearly the best, (4) methods that treat gapped sites as missing data yielded less accurate trees when compared to those that attribute phylogenetic signal to the gapped sites (by coding them as binary character data – presence/absence – or as in the MLε method), and (5) in general, the accuracy of phylogenetic inference depended upon the amount of available data when the gaps resulted from mainly deletion events, and the amount of missing data when insertion events were equally likely to have caused the alignment gaps.
Conclusion When gaps in an alignment are a consequence of indel events in the evolution of the sequences, the accuracy of phylogenetic analysis is likely to improve if: (1) alignment gaps are categorized as arising from insertion events or deletion events and then treated separately in the analysis, (2) the evolutionary signal provided by indels is harnessed in the phylogenetic analysis, and (3) methods that utilize the phylogenetic signal in indels are developed for distance methods too. When the true homology is known and the amount of gaps is 20 percent of the alignment length or less, the methods used in this study are likely to yield trees with 90–100 percent accuracy. PMID:19698168
LaDuke, Mike; Monti, Jon; Cronin, Aaron; Gillum, Bart
2017-03-01
Patients commonly present to emergency rooms and primary care clinics with cellulitic skin infections with or without abscess formation. In military operational units, non-physician medical personnel provide most primary and initial emergency medical care. The objective of this study was to determine if, after minimal training, Army physician assistants and medics could use portable ultrasound (US) machines to detect superficial soft tissue abscesses. This was a single-blinded, randomized, prospective observational study conducted over the course of 2 days at a military installation. Active duty military physician assistants and medics with little or no US experience were recruited as participants. They received a short block of training on abscess detection using both clinical examination skills (inspection/palpation) and US examination. The participants were then asked to provide a yes/no answer regarding abscess presence in a chicken tissue model. Results were analyzed to assess the participants' abilities to detect abscesses, compare the diagnostic accuracy of their clinical examinations with their US examinations, and assess how often US results changed treatment plans initially based on clinical examination findings alone. Twenty-two participants performed a total of 220 clinical examinations and 220 US scans on 10 chicken tissue abscess models. Clinical examination for abscess detection yielded a sensitivity of 73.5% (95% confidence interval [CI], 65.3-80.3%) and a specificity of 77.2% (95% CI, 67.4-84.9%), whereas US examination for abscess detection yielded a sensitivity of 99.2% (95% CI, 95.4-99.9%) and a specificity of 95.5% (95% CI, 88.5-98.6%). Clinical examination yielded a diagnostic accuracy of 75.0% (95% CI, 68.9-80.3%), whereas US examination yielded a diagnostic accuracy of 97.7% (95% CI, 94.6-99.2%), a difference in accuracy of 22.7% favoring US (p < 0.01). US changed the diagnosis in 56 of 220 cases (25.4% of all cases, p = 0.02).
Of these 56 cases, US led to the correct diagnosis 53 of 56 times (94.6%). Non-physician military medical providers can be trained in a very brief period to use US to detect superficial soft tissue abscesses with excellent accuracy. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
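The diagnostic-accuracy figures above pair a point estimate with a 95% confidence interval. The study does not state which interval method was used, but a Wilson score interval on 165/220 correct clinical diagnoses reproduces the reported 75.0% (68.9-80.3%) values:

```python
# Wilson score 95% confidence interval for a binomial proportion.
# 165 correct diagnoses out of 220 examinations corresponds to the
# 75.0% clinical-examination accuracy reported above.
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

accuracy = 165 / 220
lo, hi = wilson_ci(165, 220)  # ~ (0.689, 0.803)
```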
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate site-specific weed management (SSWM) is crucial to ensuring crop yields. Within SSWM of large-scale areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high-spatial-resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. A fully convolutional network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase prediction accuracy. The performance of the FCN architecture was then compared with patch-based and pixel-based CNN methods. Experimental results showed that the FCN method outperformed the others in terms of both accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
All-inkjet-printed thin-film transistors: manufacturing process reliability by root cause analysis.
Sowade, Enrico; Ramon, Eloi; Mitra, Kalyan Yoti; Martínez-Domingo, Carme; Pedró, Marta; Pallarès, Jofre; Loffredo, Fausta; Villani, Fulvia; Gomes, Henrique L; Terés, Lluís; Baumann, Reinhard R
2016-09-21
We report on a detailed electrical investigation of all-inkjet-printed thin-film transistor (TFT) arrays, focusing on TFT failures and their origins. The TFT arrays were manufactured on flexible polymer substrates under ambient conditions, without the need for a cleanroom environment or inert atmosphere, and at a maximum temperature of 150 °C. Alternative manufacturing processes for electronic devices such as inkjet printing suffer from lower accuracy compared to traditional microelectronic manufacturing methods. Furthermore, printing methods usually do not allow the manufacturing of electronic devices with high yield (a high number of functional devices). In general, the manufacturing yield is much lower than that of established conventional manufacturing methods based on lithography. Thus, the focus of this contribution is a comprehensive analysis of defective TFTs printed by inkjet technology. Based on root cause analysis, we present the defects by developing failure categories and discuss the reasons for the defects. This procedure identifies failure origins and allows optimization of the manufacturing process, ultimately resulting in a yield improvement.
ERIC Educational Resources Information Center
Green, Debbie; Rosenfeld, Barry; Belfi, Brian
2013-01-01
The current study evaluated the accuracy of the Structured Interview of Reported Symptoms, Second Edition (SIRS-2) in a criterion-group study using a sample of forensic psychiatric patients and a community simulation sample, comparing it to the original SIRS and to results published in the SIRS-2 manual. The SIRS-2 yielded an impressive…
Spring Small Grains Area Estimation
NASA Technical Reports Server (NTRS)
Palmer, W. F.; Mohler, R. J.
1986-01-01
SSG3 automatically estimates acreage of spring small grains from Landsat data. The report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to analysis of four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.
ERIC Educational Resources Information Center
Rathod, Sujit D.; Minnis, Alexandra M.; Subbiah, Kalyani; Krishnan, Suneeta
2011-01-01
Background: Audio computer-assisted self-interviews (ACASI) are increasingly used in health research to improve the accuracy of data on sensitive behaviors. However, evidence is limited on its use among low-income populations in countries like India and for measurement of sensitive issues such as domestic violence. Method: We compared reports of…
Jin, Jing; Allison, Brendan Z; Kaufmann, Tobias; Kübler, Andrea; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej
2012-01-01
One of the most common types of brain-computer interfaces (BCIs) is called a P300 BCI, since it relies on the P300 and other event-related potentials (ERPs). In the canonical P300 BCI approach, items on a monitor flash briefly to elicit the necessary ERPs. Very recent work has shown that this approach may yield lower performance than alternate paradigms in which the items do not flash but instead change in other ways, such as moving, changing colour or changing to characters overlaid with faces. The present study sought to extend this research direction by parametrically comparing different ways to change items in a P300 BCI. Healthy subjects used a P300 BCI across six different conditions. Three conditions were similar to our prior work, providing the first direct comparison of characters flashing, moving, and changing to faces. Three new conditions also explored facial motion and emotional expression. The six conditions were compared across objective measures such as classification accuracy and bit rate, as well as subjective measures such as perceived difficulty. In line with recent studies, our results indicated that the character flash condition resulted in the lowest accuracy and bit rate. All four face conditions (mean accuracy >91%) yielded significantly better performance than the flash condition (mean accuracy = 75%). The objective results reaffirmed that the face paradigm is superior to the canonical flash approach that has dominated P300 BCIs for over 20 years. The subjective reports indicated that the conditions that yielded better performance were not considered especially burdensome. Therefore, although further work is needed to identify which face paradigm is best, it is clear that the canonical flash approach should be replaced with a face paradigm when the aim is to increase bit rate. However, the face paradigm has to be explored further in practical applications, particularly with locked-in patients.
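Bit rate in P300 BCI studies is commonly computed with the Wolpaw formula from the number of selectable items N and the classification accuracy P. The 36-item (6x6) matrix assumed below is an assumption, not stated in the abstract; the accuracies are the reported condition means.

```python
# Wolpaw bit rate per selection:
#   B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))
import math

def bits_per_selection(n_items, p):
    if p <= 0 or p >= 1:
        raise ValueError("accuracy must be strictly between 0 and 1")
    return (math.log2(n_items) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_items - 1)))

# Hypothetical 36-item matrix; accuracies from the study's conditions.
flash_bits = bits_per_selection(36, 0.75)  # flash condition
face_bits = bits_per_selection(36, 0.91)   # face conditions
```

At equal selection timing, the 0.91 vs. 0.75 accuracy gap translates into roughly 4.3 vs. 3.1 bits per selection.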
Effect of Anisotropic Yield Function Evolution on Estimation of Forming Limit Diagram
NASA Astrophysics Data System (ADS)
Bandyopadhyay, K.; Basak, S.; Choi, H. J.; Panda, S. K.; Lee, M. G.
2017-09-01
In the theoretical prediction of the forming limit diagram (FLD), variations in yield stress and R-values along different material directions have long been incorporated to enhance accuracy. Although the influences of different yield models and hardening laws on formability have been well addressed, the anisotropic evolution of yield loci under monotonic loading with different deformation modes is yet to be explored. In the present study, the Marciniak-Kuczynski (M-K) model was modified to incorporate the change in the shape of the initial yield function as it evolves due to anisotropic hardening. Swift's hardening law, along with two different anisotropic yield criteria, namely Hill48 and Yld2000-2d, was implemented in the model. The Hill48 yield model was applied with a non-associated flow rule to capture the effect of variations in both yield stress and R-values. The numerically estimated FLDs were validated by comparison with FLDs evaluated through experiments. A low-carbon steel was selected, and hemispherical punch stretching tests were performed for FLD evaluation. Additionally, the numerically estimated FLDs were incorporated in FE simulations to predict limiting dome heights for validation purposes. Other formability measures, such as strain distributions over the deformed cup surface, were also validated against experimental results.
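As a sketch of one ingredient above, the plane-stress Hill48 equivalent stress can be written in terms of the Lankford coefficients (R-values). The parameterization below is a standard textbook form (rolling direction taken as x), and the R-values and stresses are hypothetical rather than taken from the study.

```python
# Plane-stress Hill48 equivalent stress expressed through R-values
# (r0, r45, r90), a common parameterization for sheet metals.
import math

def hill48_eq_stress(sx, sy, sxy, r0, r45, r90):
    a = 2 * r0 / (1 + r0)
    b = r0 * (1 + r90) / (r90 * (1 + r0))
    c = (r0 + r90) * (1 + 2 * r45) / (r90 * (1 + r0))
    return math.sqrt(sx * sx - a * sx * sy + b * sy * sy + c * sxy * sxy)

# With all R-values equal to 1, Hill48 collapses to von Mises:
# sqrt(sx^2 - sx*sy + sy^2 + 3*sxy^2).
iso = hill48_eq_stress(100.0, 50.0, 0.0, 1.0, 1.0, 1.0)
```

The isotropic limit is a quick sanity check on the coefficients before feeding the criterion into an M-K-type analysis.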
Anomalous effects in the aluminum oxide sputtering yield
NASA Astrophysics Data System (ADS)
Schelfhout, R.; Strijckmans, K.; Depla, D.
2018-04-01
The sputtering yield of aluminum oxide during reactive magnetron sputtering has been quantified by a new and fast method. The method is based on the meticulous determination of the reactive gas consumption during reactive DC magnetron sputtering and has been deployed to determine the sputtering yield of aluminum oxide. The accuracy of the proposed method is demonstrated by comparing its results to those of the common weight-loss method, excluding secondary effects such as redeposition. Both methods exhibit a decrease in sputtering yield with increasing discharge current. This feature of the aluminum oxide sputtering yield is described for the first time. It resembles the discrepancy between published high sputtering yield values determined by low-current ion beams and the low deposition rate in the poisoned mode during reactive magnetron sputtering. Moreover, the usefulness of the new method arises from its time-resolved capabilities: the evolution of the alumina sputtering yield can now be measured at a resolution of seconds. This reveals the complex dynamical behavior of the sputtering yield. A plausible explanation of the observed anomalies appears to be the balance between retention and out-diffusion of implanted gas atoms, while other possible causes are also discussed.
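The weight-loss benchmark mentioned above converts a measured target mass loss and the delivered ion charge into a yield. A sketch, counting Al2O3 formula units per incident ion, with hypothetical numbers:

```python
# Weight-loss sputtering yield: (formula units removed) / (ions incident).
# Mass loss, molar mass, current, and time below are hypothetical; for a
# compound target the "yield" here is counted in formula units, a
# simplification of how per-species yields are usually reported.

AVOGADRO = 6.02214076e23            # 1/mol
ELEMENTARY_CHARGE = 1.602176634e-19  # C

def weight_loss_yield(mass_loss_g, molar_mass_g, current_a, time_s):
    formula_units_removed = mass_loss_g / molar_mass_g * AVOGADRO
    ions_incident = current_a * time_s / ELEMENTARY_CHARGE  # singly charged
    return formula_units_removed / ions_incident

# Hypothetical run: 0.1 g Al2O3 (M = 101.96 g/mol) lost at 0.5 A for 1 h.
y = weight_loss_yield(0.1, 101.96, 0.5, 3600.0)
```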
Hrabok, Marianne; Brooks, Brian L; Fay-McClymont, Taryn B; Sherman, Elisabeth M S
2014-01-01
The purpose of this article was to investigate the accuracy of the WISC-IV short forms in estimating Full Scale Intelligence Quotient (FSIQ) and General Ability Index (GAI) in pediatric epilepsy. One hundred and four children with epilepsy completed the WISC-IV as part of a neuropsychological assessment at a tertiary-level children's hospital. The clinical accuracy of eight short forms was assessed in two ways: (a) accuracy within +/- 5 index points of FSIQ and (b) the clinical classification rate according to Wechsler conventions. The sample was further subdivided into low FSIQ (≤ 80) and high FSIQ (> 80). All short forms were significantly correlated with FSIQ. Seven-subtest (Crawford et al. [2010] FSIQ) and 5-subtest (BdSiCdVcLn) short forms yielded the highest clinical accuracy rates (77%-89%). Overall, a 2-subtest (VcMr) short form yielded the lowest clinical classification rates for FSIQ (35%-63%). The short form yielding the most accurate estimate of GAI was VcSiMrBd (73%-84%). Short forms show promise as useful estimates. The 7-subtest (Crawford et al., 2010) and 5-subtest (BdSiVcLnCd) short forms yielded the most accurate estimates of FSIQ. VcSiMrBd yielded the most accurate estimate of GAI. Clinical recommendations are provided for use of short forms in pediatric epilepsy.
Chernyshev, Oleg Y; Garami, Zsolt; Calleja, Sergio; Song, Joon; Campbell, Morgan S; Noser, Elizabeth A; Shaltoni, Hashem; Chen, Chin-I; Iguchi, Yasuyuki; Grotta, James C; Alexandrov, Andrei V
2005-01-01
We routinely perform an urgent bedside neurovascular ultrasound examination (NVUE) with carotid/vertebral duplex and transcranial Doppler (TCD) in patients with acute cerebral ischemia. We aimed to determine the yield and accuracy of NVUE in identifying lesions amenable for interventional treatment (LAITs). NVUE was performed with portable carotid duplex and TCD using standardized fast-track (<15 minutes) insonation protocols. Digital subtraction angiography (DSA) was the gold standard for identifying LAITs. These lesions were defined as proximal intra- or extracranial occlusions, near-occlusions, > or =50% stenoses, or thrombus in the symptomatic artery. One hundred and fifty patients (70 women, mean age 66+/-15 years) underwent NVUE at a median of 128 minutes after symptom onset. Fifty-four patients (36%) received intravenous or intra-arterial thrombolysis (median National Institutes of Health Stroke Scale (NIHSS) score 14, range 4 to 29; 81% had NIHSS > or =10 points). NVUE demonstrated LAITs in 98% of patients eligible for thrombolysis, 76% of acute stroke patients ineligible for thrombolysis (n=63), and 42% of patients with transient ischemic attack (n=33), P<0.001. Urgent DSA was performed in 30 patients on average 230 minutes after NVUE. Compared with DSA, NVUE predicted LAIT presence with 100% sensitivity and 100% specificity, although individual accuracy parameters for TCD and carotid duplex specific to occlusion location ranged from 75% to 96% because of the presence of tandem lesions and a 10% rate of absent temporal windows. Bedside neurovascular ultrasound examination, combining carotid/vertebral duplex with TCD, yields a substantial proportion of LAITs in excellent agreement with urgent DSA.
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Weather-based forecasts of California crop yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D B; Cahill, K N; Field, C B
2005-09-26
Crop yield forecasts provide useful information to a range of users. Yields for several crops in California are currently forecast based on field surveys and farmer interviews, while for many crops official forecasts do not exist. As broad-scale crop yields are largely dependent on weather, measurements from existing meteorological stations have the potential to provide a reliable, timely, and cost-effective means to anticipate crop yields. We developed weather-based models of state-wide yields for 12 major California crops (wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios), and tested their accuracy using cross-validation over the 1980-2003 period. Many crops were forecast with high accuracy, as judged by the percent of yield variation explained by the forecast, the number of years with correctly predicted direction of yield change, or the number of years with correctly predicted extreme yields. The most successfully modeled crop was almonds, with 81% of yield variance captured by the forecast. Predictions for most crops relied on weather measurements well before harvest time, allowing for lead times that were longer than existing procedures in many cases.
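The cross-validated evaluation can be sketched as leave-one-year-out prediction with a simple least-squares line of yield on one weather variable; the data and variable choice are invented, and the study's actual model specifications are richer than this.

```python
def fit_line(xs, ys):
    """Least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

def loyo_predictions(weather, yields):
    """Leave-one-year-out: predict each year from a line fit to the other years."""
    preds = []
    for i in range(len(yields)):
        xs = [w for j, w in enumerate(weather) if j != i]
        ys = [y for j, y in enumerate(yields) if j != i]
        a, b = fit_line(xs, ys)
        preds.append(a + b * weather[i])
    return preds
```

Forecast skill can then be scored against the held-out years, e.g. as the fraction with the correct direction of yield change.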
Selection Index in the Study of Adaptability and Stability in Maize
Lunezzo de Oliveira, Rogério; Garcia Von Pinho, Renzo; Furtado Ferreira, Daniel; Costa Melo, Wagner Mateus
2014-01-01
This paper proposes an alternative method for evaluating the stability and adaptability of maize hybrids using a genotype-ideotype distance index (GIDI) for selection. Data from seven variables were used, obtained through evaluation of 25 maize hybrids at six sites in southern Brazil. The GIDI was estimated by means of the generalized Mahalanobis distance for each plot of the test. We then proceeded to GGE biplot analysis in order to compare the predictive accuracy of the GGE models and the grouping of environments and to select the best five hybrids. The G × E interaction was significant for both variables assessed. The GGE model with two principal components obtained a predictive accuracy (PRECORR) of 0.8913 for the GIDI and 0.8709 for yield (t ha−1). Two groups of environments were obtained upon analyzing the GIDI, whereas all the environments remained in the same group upon analyzing yield. Coincidence occurred in only two hybrids considering evaluation of the two features. The GIDI assessment provided for selection of hybrids that combine adaptability and stability in most of the variables assessed, making its use more highly recommended than analyzing each variable separately. Not all the higher-yielding hybrids were the best in the other variables assessed. PMID:24696641
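The generalized Mahalanobis distance underlying the GIDI can be sketched as follows; the two-trait covariance matrix and trait values are invented for illustration (the study used seven variables per plot).

```python
import numpy as np

def gidi(x, ideotype, cov):
    """Squared generalized Mahalanobis distance d^2 = (x-t)' S^-1 (x-t)."""
    d = np.asarray(x, float) - np.asarray(ideotype, float)
    return float(d @ np.linalg.inv(cov) @ d)

# Invented two-trait example: observed plot vs. target ideotype.
cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])
print(gidi([10.0, 5.0], [12.0, 6.0], cov))  # approx 2.2857
```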
Improved segmentation of cerebellar structures in children
Narayanan, Priya Lakshmi; Boonazier, Natalie; Warton, Christopher; Molteno, Christopher D; Joseph, Jesuchristopher; Jacobson, Joseph L; Jacobson, Sandra W; Zöllei, Lilla; Meintjes, Ernesta M
2016-01-01
Background Consistent localization of cerebellar cortex in a standard coordinate system is important for functional studies and detection of anatomical alterations in studies of morphometry. To date, no pediatric cerebellar atlas is available. New method The probabilistic Cape Town Pediatric Cerebellar Atlas (CAPCA18) was constructed in the age-appropriate National Institute of Health Pediatric Database asymmetric template space using manual tracings of 16 cerebellar compartments in 18 healthy children (9–13 years) from Cape Town, South Africa. The individual atlases of the training subjects were also used to implement multi atlas label fusion using multi atlas majority voting (MAMV) and multi atlas generative model (MAGM) approaches. Segmentation accuracy in 14 test subjects was compared for each method to ‘gold standard’ manual tracings. Results Spatial overlap between manual tracings and CAPCA18 automated segmentation was 73% or higher for all lobules in both hemispheres, except VIIb and X. Automated segmentation using MAGM yielded the best segmentation accuracy over all lobules (mean Dice Similarity Coefficient 0.76; range 0.55–0.91). Comparison with existing methods In all lobules, spatial overlap of CAPCA18 segmentations with manual tracings was similar or higher than those obtained with SUIT (spatially unbiased infra-tentorial template), providing additional evidence of the benefits of an age appropriate atlas. MAGM segmentation accuracy was comparable to values reported recently by Park et al. (2014) in adults (across all lobules mean DSC = 0.73, range 0.40–0.89). Conclusions CAPCA18 and the associated multi atlases of the training subjects yield improved segmentation of cerebellar structures in children. PMID:26743973
A Remote Sensing-Derived Corn Yield Assessment Model
NASA Astrophysics Data System (ADS)
Shrestha, Ranjay Man
Agricultural studies and food security have become critical research topics due to continuous growth in the human population and simultaneous shrinkage in agricultural land. In spite of modern technological advancements to improve agricultural productivity, more studies on crop yield assessments and food productivity are still necessary to fulfill constantly increasing food demands. Besides human activities, natural disasters such as flood and drought, along with rapid climate changes, also inflict adverse effects on food productivity. Understanding the impact of these disasters on crop yield and making early impact estimations could help planning for any national or international food crisis. Similarly, the United States Department of Agriculture (USDA) Risk Management Agency (RMA) insurance management utilizes appropriately estimated crop yield and damage assessment information to sustain farmers' practice through timely and proper compensation. Through the County Agricultural Production Survey (CAPS), the USDA National Agricultural Statistical Service (NASS) uses traditional methods of field interviews and farmer-reported survey data to perform annual crop condition monitoring and production estimations at the regional and state levels. As these manual approaches to yield estimation are highly inefficient and produce very limited samples to represent the entire area, NASS requires supplemental spatial data that provide continuous and timely information on crop production and annual yield. Compared to traditional methods, remote sensing data and products offer wider spatial extent, more accurate location information, higher temporal resolution and data distribution, and lower data cost, thus providing a complementary option for estimation of crop yield information.
Remote sensing-derived vegetation indices such as the Normalized Difference Vegetation Index (NDVI) provide measurable statistics of potential crop growth based on spectral reflectance and can be further associated with actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products within agricultural studies and crop yield assessments. A regression-based approach was proposed to estimate annual corn yield through changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn-producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions (South, West North Central, East North Central, and Central, respectively) within the U.S. Corn Belt area. The model's goodness of fit was strong, with a high coefficient of determination (R² > 0.81). Similarly, using 2015 yield data for validation, an average accuracy of 92% demonstrated the model's performance in estimating corn yield at the county level. Besides providing county-level corn yield estimations, the derived model was also accurate enough to estimate yield at a finer spatial resolution (field level). The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016.
A total of over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining the changes in corn yield due to flood events during the growing season. Using a 2011 Missouri River flood event as a case study, a field-level flood impact map on corn yield throughout the flooded regions was produced, and an overall agreement of over 82.2% was achieved when compared with the reference impact map. A future direction of this dissertation research would be to examine other major crops outside the Corn Belt region of the U.S.
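A minimal sketch of the regression idea described above, under the assumption that the growing-season NDVI series is collapsed to a single predictor; the dissertation's actual model form and data are not reproduced here.

```python
import numpy as np

def seasonal_feature(ndvi_series):
    """Collapse a growing-season NDVI time series to one scalar predictor."""
    return float(np.sum(ndvi_series))

def fit_yield_model(features, yields):
    """Ordinary least squares of county yield on the seasonal NDVI feature."""
    X = np.column_stack([np.ones(len(features)), features])
    beta, *_ = np.linalg.lstsq(X, np.asarray(yields, float), rcond=None)
    return beta  # [intercept, slope]
```

A trained `beta` would then be applied to held-out years (as with the 2015 validation above) or to field-level NDVI series.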
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp, E-mail: philipp.marquetand@univie.ac.at
Many approaches which have been developed to express the potential energy of large systems exploit the locality of the atomic interactions. A prominent example is the family of fragmentation methods, in which quantum chemical calculations are carried out for overlapping small fragments of a given molecule and are then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules have been chosen because they allow one to extrapolate reliable reference energies for very long chains, enabling an assessment of the energies obtained by both methods for alkanes including up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.
Fast retinal layer segmentation of spectral domain optical coherence tomography images
NASA Astrophysics Data System (ADS)
Zhang, Tianqiao; Song, Zhangjun; Wang, Xiaogang; Zheng, Huimin; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Hu, Qingmao
2015-09-01
An approach to segment macular layer thicknesses from spectral domain optical coherence tomography has been proposed. The main contribution is to decrease computational costs while maintaining high accuracy via exploring Kalman filtering, customized active contour, and curve smoothing. Validation on 21 normal volumes shows that 8 layer boundaries could be segmented within 5.8 s with an average layer boundary error <2.35 μm. It has been compared with state-of-the-art methods for both normal and age-related macular degeneration cases to yield similar or significantly better accuracy and is 37 times faster. The proposed method could be a potential tool to clinically quantify the retinal layer boundaries.
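One of the cost-saving ingredients named above, Kalman filtering, can be illustrated with a minimal 1-D tracker that smooths noisy per-column boundary detections; the noise parameters are invented and unrelated to the paper's tuning.

```python
def kalman_track(measurements, q=0.01, r=4.0):
    """Track a layer boundary's row position column by column.

    q: process noise (boundary is assumed roughly continuous),
    r: measurement noise of the per-column detector. Both invented.
    """
    x, p = measurements[0], 1.0   # state estimate and its variance
    track = [x]
    for z in measurements[1:]:
        p += q                    # predict: boundary continues with drift q
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward the new detection
        p *= (1 - k)
        track.append(x)
    return track
```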
Determination of the Gravitational Constant with a Beam Balance
NASA Astrophysics Data System (ADS)
Schlamminger, St.; Holzschuh, E.; Kündig, W.
2002-09-01
The Newtonian gravitational constant G was determined by means of a novel beam-balance experiment with an accuracy comparable to that of the most precise torsion-balance experiments. The gravitational force of two stainless steel tanks filled with 13,521 kg of mercury on 1.1 kg test masses was measured using a commercial mass comparator. A careful analysis of the data and the experimental error yields G = 6.67407(22) × 10⁻¹¹ m³ kg⁻¹ s⁻². This value is in excellent agreement with most values previously obtained with different methods.
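As a rough plausibility check on the experimental scale, the point-mass force between one full tank and one test mass at an assumed 1 m separation (an invented placeholder; the real geometry is extended and closer) is on the order of a micronewton:

```python
# All figures except the separation come from the abstract; the 1 m
# separation is an assumption for an order-of-magnitude estimate only.
G = 6.67407e-11      # m^3 kg^-1 s^-2
M_tank = 13521.0     # kg of mercury in one tank
m_test = 1.1         # kg test mass
r = 1.0              # m, assumed separation
F = G * M_tank * m_test / r**2
print(F)  # about 1e-6 N, hence the need for a precision mass comparator
```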
NASA Astrophysics Data System (ADS)
Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ring flashes, a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ring flash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ring flash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space).
Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.
Jürgens, Rebecca; Grass, Annika; Drolet, Matthis; Fischer, Julia
Both in the performative arts and in emotion research, professional actors are assumed to be capable of delivering emotions comparable to spontaneous emotional expressions. This study examines the effects of acting training on vocal emotion depiction and recognition. We predicted that professional actors express emotions in a more realistic fashion than non-professional actors. However, professional acting training may lead to a particular speech pattern; this might account for vocal expressions by actors that are less comparable to authentic samples than the ones by non-professional actors. We compared 80 emotional speech tokens from radio interviews with 80 re-enactments by professional and inexperienced actors, respectively. We analyzed recognition accuracies for emotion and authenticity ratings and compared the acoustic structure of the speech tokens. Both play-acted conditions yielded similar recognition accuracies and possessed more variable pitch contours than the spontaneous recordings. However, professional actors exhibited signs of different articulation patterns compared to non-trained speakers. Our results indicate that for emotion research, emotional expressions by professional actors are not better suited than those from non-actors.
Hsieh, Chi-Wen; Liu, Tzu-Chiang; Wang, Jui-Kai; Jong, Tai-Lang; Tiu, Chui-Mei
2011-08-01
The Tanner-Whitehouse III (TW3) method is popular for assessing children's bone age, but it is time-consuming in clinical settings; to simplify this, a grouped-TW algorithm (GTA) was developed. A total of 534 left-hand roentgenograms of subjects aged 2-15 years, including 270 training and 264 testing datasets, were evaluated by a senior pediatrician. Next, GTA was used to choose the appropriate candidate of radius, ulna, and short bones and to classify the bones into three groups by data mining. Group 1 was composed of the maturity pattern of the radius and the middle phalange of the third and fifth digits and three weights were obtained by data mining, yielding a result similar to that of TW3. Subsequently, new bone-age assessment tables were constructed for boys and girls by linear regression and fuzzy logic. In addition, the Bland-Altman plot was utilized to compare accuracy between the GTA, the Greulich-Pyle (GP), and the TW3 method. The relative accuracy between the GTA and the TW3 was 96.2% in boys and 95% in girls, with an error of 1 year, while that between the assessment results of the GP and TW3 was about 87%, with an error of 1 year. However, even if the three weights were not optimally processed, GTA yielded a marginal result with an accuracy of 78.2% in boys and 79.6% in girls. GTA can efficiently simplify the complexity of the TW3 method, while maintaining almost the same accuracy. The relative accuracy between the assessment results of GTA and GP can also be marginal. © 2011 The Authors. Pediatrics International © 2011 Japan Pediatric Society.
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirement of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA, and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule-base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy.
Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
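The accuracy-versus-rule-count trade-off optimized above can be sketched with a toy combined score; the abstract does not give the Jval formula, so the weighting below is purely hypothetical, with candidate tuples reused from the reported liver disorders figures.

```python
def combined_score(accuracy, n_rules, max_rules, w=0.8):
    """Toy objective: weight accuracy against rule-base compactness.

    The weight w and the compactness term are invented; Jval itself
    is not specified in the abstract.
    """
    return w * accuracy + (1 - w) * (1 - n_rules / max_rules)

def best_ruleset(candidates, max_rules):
    """Pick the (accuracy, n_rules) pair maximizing the toy score."""
    return max(candidates, key=lambda c: combined_score(c[0], c[1], max_rules))

# Figures quoted in the abstract for the liver disorders dataset:
candidates = [(0.6289, 43), (0.6460, 19)]   # (accuracy, number of rules)
best = best_ruleset(candidates, max_rules=43)
print(best)  # the smaller, slightly more accurate rule set wins
```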
O'Bryant, Sid E; Xiao, Guanghua; Barber, Robert; Huebinger, Ryan; Wilhelmsen, Kirk; Edwards, Melissa; Graff-Radford, Neill; Doody, Rachelle; Diaz-Arrastia, Ramon
2011-01-01
There is no rapid and cost-effective tool that can be implemented as a front-line screening tool for Alzheimer's disease (AD) at the population level. To generate and cross-validate a blood-based screener for AD that yields acceptable accuracy across both serum and plasma, analyses of serum biomarker proteins were conducted on 197 Alzheimer's disease (AD) participants and 199 control participants from the Texas Alzheimer's Research Consortium (TARC), with further analysis conducted on plasma proteins from 112 AD and 52 control participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The full algorithm was derived from a biomarker risk score, clinical lab (glucose, triglycerides, total cholesterol, homocysteine), and demographic (age, gender, education, APOE*E4 status) data. Eleven proteins met our criteria and were utilized for the biomarker risk score. The random forest (RF) biomarker risk score from the TARC serum samples (training set) yielded adequate accuracy in the ADNI plasma sample (validation set) (AUC = 0.70, sensitivity (SN) = 0.54 and specificity (SP) = 0.78), which was below that obtained from ADNI cerebral spinal fluid (CSF) analyses (t-tau/Aβ ratio AUC = 0.92). However, the full algorithm yielded excellent accuracy (AUC = 0.88, SN = 0.75, and SP = 0.91). The likelihood ratio of having AD based on a positive test finding (LR+) = 7.03 (SE = 1.17; 95% CI = 4.49-14.47), the likelihood ratio of not having AD based on the algorithm (LR-) = 3.55 (SE = 1.15; 2.22-5.71), and the odds ratio of AD (OR) = 28.70 (SE = 1.55; 95% CI = 11.86-69.47) were calculated in the ADNI cohort. It is possible to create a blood-based screening algorithm that works across both serum and plasma and provides a screening accuracy comparable to that obtained from CSF analyses.
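The likelihood ratios quoted above follow from standard screening-test arithmetic; the plug-in formulas below are a generic sketch, not the authors' cohort-based estimation, so they need not reproduce the reported values exactly.

```python
def lr_positive(sn, sp):
    """Likelihood ratio of a positive test: sensitivity / (1 - specificity)."""
    return sn / (1.0 - sp)

def lr_negative(sn, sp):
    """Likelihood ratio of a negative test: (1 - sensitivity) / specificity."""
    return (1.0 - sn) / sp

# With the full algorithm's reported SN = 0.75 and SP = 0.91:
print(lr_positive(0.75, 0.91))  # roughly 8.3
print(lr_negative(0.75, 0.91))  # roughly 0.27
```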
Lin, Zibei; Cogan, Noel O I; Pembleton, Luke W; Spangenberg, German C; Forster, John W; Hayes, Ben J; Daetwyler, Hans D
2016-03-01
Genomic selection (GS) provides an attractive option for accelerating genetic gain in perennial ryegrass improvement given the long cycle times of most current breeding programs. The present study used simulation to investigate the level of genetic gain and inbreeding obtained from GS breeding strategies compared with traditional breeding strategies for key traits (persistency, yield, and flowering time). Base population genomes were simulated through random mating for 60,000 generations at an effective population size of 10,000. The degree of linkage disequilibrium (LD) in the resulting population was compared with that obtained from empirical studies. Initial parental varieties were simulated to match the diversity of current commercial cultivars. Genomic selection was designed to fit into a company breeding program at two selection points in the breeding cycle (spaced plants and miniplot). Genomic estimated breeding values (GEBVs) for productivity traits were trained with phenotypes and genotypes from plots. Accuracy of GEBVs was 0.24 for persistency and 0.36 for yield for single plants, while for plots it was lower (0.17 and 0.19, respectively). Higher accuracy of GEBVs was obtained for flowering time (up to 0.7), partially as a result of the larger reference population size that was available from the clonal row stage. The availability of GEBVs permits a 4-yr reduction in cycle time, which led to at least a doubling of genetic gain for persistency and a trebling for yield compared with the traditional program. However, a higher rate of inbreeding per cycle among varieties was also observed for the GS strategy. Copyright © 2016 Crop Science Society of America.
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
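The two kernels being compared can be sketched from a marker matrix X (individuals × markers); the centering convention and the Gaussian bandwidth handling below are common choices, not necessarily the exact ones used in the study.

```python
import numpy as np

def gblup_kernel(X):
    """Linear GBLUP-style genomic relationship kernel from markers."""
    Xc = X - X.mean(axis=0)              # center marker codes
    return Xc @ Xc.T / X.shape[1]

def gaussian_kernel(X, h=1.0):
    """Gaussian kernel on squared Euclidean marker distances.

    Distances are scaled by their median, a common convention; the
    bandwidth h is an illustrative default.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-h * sq / np.median(sq[sq > 0]))
```

Either kernel would then replace the genomic covariance in the SM/MM/MDs/MDe mixed models.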
Tuberculosis disease diagnosis using artificial immune recognition system.
Shamshirband, Shahaboddin; Hessam, Somayeh; Javidnia, Hossein; Amiribesheli, Mohsen; Vahdat, Shaghayegh; Petković, Dalibor; Gani, Abdullah; Kiah, Miss Laiha Mat
2014-01-01
Conventional methods of tuberculosis (TB) diagnosis carry a high risk of misdiagnosis. This study is aimed at diagnosing TB using hybrid machine learning approaches. Patient epicrisis reports obtained from the Pasteur Laboratory in the north of Iran were used. Each of the 175 samples has twenty features. The features are classified by incorporating a fuzzy logic controller and an artificial immune recognition system. The features are normalized through a fuzzy rule-based labeling system. The labeled features are categorized into normal and tuberculosis classes using the Artificial Immune Recognition Algorithm. Overall, the highest classification accuracy was reached for a learning rate (α) of 0.8. The artificial immune recognition system (AIRS) classification approach using fuzzy logic also yielded better diagnosis results in terms of detection accuracy compared to other empirical methods. Classification accuracy was 99.14%, sensitivity 87.00%, and specificity 86.12%.
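The fuzzy labeling step can be illustrated with a triangular membership function, a common fuzzy-logic primitive; the breakpoints below are invented, since the abstract does not specify the rule base.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Invented breakpoints mapping a raw feature to a "normal" membership degree:
print(tri(5.0, 0.0, 5.0, 10.0))  # 1.0 at the peak
print(tri(7.5, 0.0, 5.0, 10.0))  # 0.5 halfway down the right slope
```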
Corn and soybean Landsat MSS classification performance as a function of scene characteristics
NASA Technical Reports Server (NTRS)
Batista, G. T.; Hixson, M. M.; Bauer, M. E.
1982-01-01
In order to fully utilize remote sensing to inventory crop production, it is important to identify the factors that affect the accuracy of Landsat classifications. The objective of this study was to investigate the effect of scene characteristics involving crop, soil, and weather variables on the accuracy of Landsat classifications of corn and soybeans. Segments sampling the U.S. Corn Belt were classified using a Gaussian maximum likelihood classifier on multitemporally registered data from two key acquisition periods. Field size had a strong effect on classification accuracy with small fields tending to have low accuracies even when the effect of mixed pixels was eliminated. Other scene characteristics accounting for variability in classification accuracy included proportions of corn and soybeans, crop diversity index, proportion of all field crops, soil drainage, slope, soil order, long-term average soybean yield, maximum yield, relative position of the segment in the Corn Belt, weather, and crop development stage.
Mutual coupling effects in antenna arrays, volume 1
NASA Technical Reports Server (NTRS)
Collin, R. E.
1986-01-01
Mutual coupling between rectangular apertures in a finite antenna array, in an infinite ground plane, is analyzed using the vector potential approach. The method of moments is used to solve the equations that result from enforcing continuity of the tangential magnetic field across each aperture. The approximation uses a set of vector potential mode functions to solve for equivalent magnetic currents. A computer program was written to carry out this analysis, and the resulting currents were used to determine the co- and cross-polarized far-zone radiation patterns. Numerical results for various arrays using several modes in the approximation are presented. Results for one- and two-aperture arrays are compared against published data to check the agreement of this model with previous work. Computer-derived results are also compared against experimental results to test the accuracy of the model. These tests showed that the program yields valid data.
NASA Technical Reports Server (NTRS)
Carpenter, M. H.
1988-01-01
The generalized chemistry version of the computer code SPARK is extended to include two higher-order numerical schemes, yielding fourth-order spatial accuracy for the inviscid terms. The new and old formulations are used to study the influences of finite rate chemical processes on nozzle performance. A determination is made of the computationally optimum reaction scheme for use in high-enthalpy nozzles. Finite rate calculations are compared with the frozen and equilibrium limits to assess the validity of each formulation. In addition, the finite rate SPARK results are compared with the constant ratio of specific heats (gamma) SEAGULL code, to determine its accuracy in variable gamma flow situations. Finally, the higher-order SPARK code is used to calculate nozzle flows having species stratification. Flame quenching occurs at low nozzle pressures, while for high pressures, significant burning continues in the nozzle.
Feature Selection Methods for Zero-Shot Learning of Neural Activity.
Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
Kronlage, Moritz; Pitarokoili, Kalliopi; Schwarz, Daniel; Godel, Tim; Heiland, Sabine; Yoon, Min-Suk; Bendszus, Martin; Bäumer, Philipp
2017-11-01
The aims of this study were to assess the diagnostic accuracy of diffusion tensor imaging (DTI) in chronic inflammatory demyelinating polyneuropathy (CIDP), to correlate DTI with electrophysiological parameters, and to evaluate whether radial diffusivity (RD) and axial diffusivity (AD) might serve as specific biomarkers of demyelinating and axonal pathology. This prospective study was approved by the institutional ethics committee, and written informed consent was obtained from all participants. Magnetic resonance neurography of upper and lower extremity nerves (median, ulnar, radial, sciatic, tibial) was performed with single-shot DTI sequences at 3.0 T in 18 patients with a diagnosis of CIDP and 18 healthy controls matched for age and sex. The scalar readout parameters nerve fractional anisotropy (FA), mean diffusivity (MD), RD, and AD were obtained after manual segmentation and postprocessing and compared between patients and controls. Diagnostic accuracy was assessed by receiver operating characteristic analysis, and cutoff values were calculated by maximizing the Youden index. All patients underwent complementary electroneurography, and correlations between electrophysiological markers and DTI parameters were analyzed and described by Pearson and Spearman coefficients. Nerve FA was decreased to a mean of 0.42 ± 0.08 in patients compared with 0.52 ± 0.04 in healthy controls (P < 0.001). This decrease in FA was a result of an increase in RD (P = 0.02), whereas AD did not differ between the two groups. Of all DTI parameters, FA showed the best diagnostic accuracy, with a receiver operating characteristic area under the curve of 0.90. The optimal cutoff for the average FA of all analyzed nerves was 0.47, yielding a sensitivity of 0.83 and a specificity of 0.94. Fractional anisotropy and RD correlated strongly with electrophysiological markers of demyelination, whereas AD did not correlate with markers of axonal neuropathy.
Diffusion tensor imaging yields valid quantitative biomarkers in CIDP and might aid in diagnosis with high diagnostic accuracy. Fractional anisotropy and RD may serve as parameters of myelin sheath integrity, but AD is unable to reflect axonal damage in CIDP.
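The cutoff selection described above, maximizing the Youden index J = sensitivity + specificity − 1 over candidate thresholds, can be sketched as follows. The FA values below are invented, not the study's data; patients are assumed to score lower than controls, matching the reported direction of the FA decrease.

```python
# Sketch of Youden-index cutoff selection on a scalar marker where
# lower values indicate disease (as with FA in CIDP patients).

def youden_optimal_cutoff(patient_scores, control_scores):
    """Sweep candidate thresholds; return (J, cutoff, sensitivity, specificity)."""
    best = None
    for t in sorted(set(patient_scores + control_scores)):
        sens = sum(s <= t for s in patient_scores) / len(patient_scores)
        spec = sum(s > t for s in control_scores) / len(control_scores)
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best

patients = [0.38, 0.41, 0.44, 0.47, 0.50]   # illustrative FA-like values
controls = [0.48, 0.51, 0.52, 0.53, 0.55]
j, cutoff, sens, spec = youden_optimal_cutoff(patients, controls)
```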
NASA Technical Reports Server (NTRS)
Spruce, J. P.; Smoot, James; Ellis, Jean; Hilbert, Kent; Swann, Roberta
2012-01-01
This paper discusses the development and implementation of a geospatial data processing method and multi-decadal Landsat time series for computing general coastal U.S. land-use and land-cover (LULC) classifications and change products consisting of seven classes (water, barren, upland herbaceous, non-woody wetland, woody upland, woody wetland, and urban). Use of this approach extends the observational period of the NOAA-generated Coastal Change and Analysis Program (C-CAP) products by almost two decades, assuming the availability of one cloud free Landsat scene from any season for each targeted year. The Mobile Bay region in Alabama was used as a study area to develop, demonstrate, and validate the method that was applied to derive LULC products for nine dates at approximate five year intervals across a 34-year time span, using single dates of data for each classification in which forests were either leaf-on, leaf-off, or mixed senescent conditions. Classifications were computed and refined using decision rules in conjunction with unsupervised classification of Landsat data and C-CAP value-added products. Each classification's overall accuracy was assessed by comparing stratified random locations to available reference data, including higher spatial resolution satellite and aerial imagery, field survey data, and raw Landsat RGBs. Overall classification accuracies ranged from 83 to 91% with overall Kappa statistics ranging from 0.78 to 0.89. The accuracies are comparable to those from similar, generalized LULC products derived from C-CAP data. The Landsat MSS-based LULC product accuracies are similar to those from Landsat TM or ETM+ data. Accurate classifications were computed for all nine dates, yielding effective results regardless of season. This classification method yielded products that were used to compute LULC change products via additive GIS overlay techniques.
Simultaneous fitting of genomic-BLUP and Bayes-C components in a genomic prediction model.
Iheshiulor, Oscar O M; Woolliams, John A; Svendsen, Morten; Solberg, Trygve; Meuwissen, Theo H E
2017-08-24
The rapid adoption of genomic selection is due to two key factors: the availability of both high-throughput dense genotyping and statistical methods to estimate and predict breeding values. The development of such methods is still ongoing and, so far, there is no consensus on the best approach. Currently, the linear and non-linear methods for genomic prediction (GP) are treated as distinct approaches. The aim of this study was to evaluate the implementation of an iterative method (called GBC) that incorporates aspects of both linear [genomic-best linear unbiased prediction (G-BLUP)] and non-linear (Bayes-C) methods for GP. The iterative nature of GBC makes it less computationally demanding, similar to other non-Markov chain Monte Carlo (MCMC) approaches. However, as a Bayesian method, GBC differs from both MCMC- and non-MCMC-based methods by combining some aspects of G-BLUP and Bayes-C methods for GP. Its relative performance was compared to those of G-BLUP and Bayes-C. We used an imputed 50 K single-nucleotide polymorphism (SNP) dataset based on the Illumina Bovine50K BeadChip, which included 48,249 SNPs and 3244 records. Daughter yield deviations for somatic cell count, fat yield, milk yield, and protein yield were used as response variables. GBC was frequently (marginally) superior to G-BLUP and Bayes-C in terms of prediction accuracy and was significantly better than G-BLUP only for fat yield. On average across the four traits, GBC yielded a 0.009 and 0.006 increase in prediction accuracy over G-BLUP and Bayes-C, respectively. Computationally, GBC was much faster than Bayes-C and similar to G-BLUP. Our results show that incorporating some aspects of G-BLUP and Bayes-C in a single model can improve accuracy of GP over the commonly used method, G-BLUP. Generally, GBC did not perform statistically better than G-BLUP and Bayes-C, probably due to the close relationships between reference and validation individuals.
Nevertheless, it is a flexible tool in the sense that it simultaneously incorporates some aspects of linear and non-linear models for GP, thereby exploiting family relationships while also accounting for linkage disequilibrium between SNPs and genes with large effects. The application of GBC in GP merits further exploration.
Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
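The core estimator-fitting idea, choosing coefficients of a function of alignment features so its output matches true accuracy on training examples, can be sketched with a single feature and a degree-1 least-squares fit. Facet itself uses twelve features and more general polynomial estimators; the feature and accuracy values below are made up for illustration.

```python
# Sketch of learning an accuracy estimator by least squares: fit a
# degree-1 function of one alignment feature to true accuracies.

def fit_linear(features, accuracies):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(features)
    mx = sum(features) / n
    my = sum(accuracies) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(features, accuracies))
    var = sum((x - mx) ** 2 for x in features)
    slope = cov / var
    return slope, my - slope * mx

feats = [0.2, 0.4, 0.6, 0.8]      # e.g. a hypothetical conservation feature
accs  = [0.30, 0.50, 0.70, 0.90]  # true accuracy of each training alignment
slope, intercept = fit_linear(feats, accs)

# Estimate accuracy of a new alignment from its feature value alone:
estimate = slope * 0.5 + intercept
```

A parameter advisor would then compute this estimate for the alignment produced under each candidate parameter choice and keep the choice with the highest estimated accuracy.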
Application of Nondestructive Testing Techniques to Materials Testing.
1987-12-01
Nomarski differential microscopy gives little quantitative information on surface height. In a shot-noise limited system, intensity measurements can yield interferometric accuracies, comparable in sensitivity to phase-dependent interferometric techniques. The thicknesses of photoresist films have been measured with this approach.
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification.
Therefore, the accuracy of the presented method may be much higher in reality.
All-inkjet-printed thin-film transistors: manufacturing process reliability by root cause analysis
Sowade, Enrico; Ramon, Eloi; Mitra, Kalyan Yoti; Martínez-Domingo, Carme; Pedró, Marta; Pallarès, Jofre; Loffredo, Fausta; Villani, Fulvia; Gomes, Henrique L.; Terés, Lluís; Baumann, Reinhard R.
2016-01-01
We report on the detailed electrical investigation of all-inkjet-printed thin-film transistor (TFT) arrays, focusing on TFT failures and their origins. The TFT arrays were manufactured on flexible polymer substrates in ambient conditions, without the need for a cleanroom environment or an inert atmosphere, and at a maximum temperature of 150 °C. Alternative manufacturing processes for electronic devices such as inkjet printing suffer from lower accuracy compared to traditional microelectronic manufacturing methods. Furthermore, printing methods usually do not allow the manufacturing of electronic devices with high yield (a high number of functional devices). In general, the manufacturing yield is much lower compared to the established conventional manufacturing methods based on lithography. Thus, the focus of this contribution is set on a comprehensive analysis of defective TFTs printed by inkjet technology. Based on root cause analysis, we present the defects by developing failure categories and discuss the reasons for the defects. This procedure identifies failure origins and allows optimization of the manufacturing process, ultimately resulting in a yield improvement. PMID:27649784
Fatigue properties of JIS H3300 C1220 copper for strain life prediction
NASA Astrophysics Data System (ADS)
Harun, Muhammad Faiz; Mohammad, Roslina
2018-05-01
The existing methods for estimating strain life parameters are dependent on the material's monotonic tensile properties. However, a few of these methods yield quite complicated expressions for calculating fatigue parameters, and are specific to certain groups of materials only. The Universal Slopes method, Modified Universal Slopes method, Uniform Material Law, the Hardness method, and Medians method are a few existing methods for predicting strain-life fatigue based on the monotonic tensile properties and hardness of a material. In the present study, nine methods for estimating fatigue life and properties are applied to JIS H3300 C1220 copper to determine the best methods for strain life estimation of this ductile material. Experimental strain-life curves are compared to estimations obtained using each method. Muralidharan-Manson's Modified Universal Slopes method and Bäumel-Seeger's method for unalloyed and low-alloy steels are found to yield better accuracy in estimating fatigue life, with a deviation of less than 25%. However, both methods only yield much better accuracy for fewer than 1000 cycles, or for strain amplitudes between 1% and 6%. Manson's Original Universal Slopes method and Ong's Modified Four-Point Correlation method are found to predict the strain-life fatigue of copper with better accuracy at high cycle counts, i.e., strain amplitudes of less than 1%. The differences between mechanical behavior during monotonic and cyclic loading and the complexity in deciding the coefficients in an equation are probably the reason for the lack of a reliable method for estimating fatigue behavior using the monotonic properties of a group of materials. It is therefore suggested that a differential approach and new expressions be developed to estimate the strain-life fatigue parameters for ductile materials such as copper.
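For reference, a sketch of Manson's original Universal Slopes estimate in its commonly cited form, where total strain range is the sum of an elastic term scaled by the ultimate strength and a plastic term scaled by the true fracture ductility. The material constants below are placeholders, not measured JIS H3300 C1220 properties.

```python
# Universal Slopes estimate (commonly cited form):
#   delta_eps = 3.5 * (Su / E) * Nf**-0.12 + ef**0.6 * Nf**-0.6,
# where ef = ln(1 / (1 - RA)) is the true fracture ductility.
import math

def universal_slopes_strain_range(Nf, Su, E, RA):
    """Estimated total strain range at Nf cycles to failure.

    Su: ultimate tensile strength, E: elastic modulus (same units as Su),
    RA: reduction of area as a fraction (0-1).
    """
    ef = math.log(1.0 / (1.0 - RA))          # true fracture ductility
    elastic = 3.5 * (Su / E) * Nf ** -0.12   # elastic strain term
    plastic = ef ** 0.6 * Nf ** -0.6         # plastic strain term
    return elastic + plastic

# Placeholder copper-like values: Su = 250 MPa, E = 117 GPa, RA = 0.6
amp = universal_slopes_strain_range(1000, 250.0, 117e3, 0.6) / 2  # strain amplitude
```

Comparing such estimated curves against measured strain-life data, as the study does for nine methods, is then a matter of evaluating this function over a range of Nf.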
Effect of the mandible on mouthguard measurements of head kinematics.
Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B
2016-06-14
Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration, respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multimodel ensembles of wheat growth: many models are better than one.
Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W; Rötter, Reimund P; Boote, Kenneth J; Ruane, Alex C; Thorburn, Peter J; Cammarano, Davide; Hatfield, Jerry L; Rosenzweig, Cynthia; Aggarwal, Pramod K; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richie; Grant, Robert F; Heng, Lee; Hooker, Josh; Hunt, Leslie A; Ingwersen, Joachim; Izaurralde, Roberto C; Kersebaum, Kurt Christian; Müller, Christoph; Kumar, Soora Naresh; Nendel, Claas; O'leary, Garry; Olesen, Jørgen E; Osborne, Tom M; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Semenov, Mikhail A; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; White, Jeffrey W; Wolf, Joost
2015-02-01
Models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models. © 2014 John Wiley & Sons Ltd.
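The e-mean and e-median estimators are simple to construct: pool the per-model simulated values for one variable and take their mean or median as the ensemble prediction. The yield numbers below are illustrative only, not values from the 27-model study.

```python
# Sketch of ensemble estimators: the e-mean and e-median of per-model
# simulations of a single end-of-season variable (here, grain yield).
import statistics

model_yields = [5.1, 6.3, 4.8, 7.0, 5.9, 6.1, 5.4]  # t/ha, one per model
e_mean = statistics.mean(model_yields)
e_median = statistics.median(model_yields)

observed = 5.8  # invented observation for comparison

def rel_error(pred, obs):
    """Relative error of a prediction against an observation."""
    return abs(pred - obs) / obs
```

With more models in the pool, the study found the error of both estimators declines, with little further gain beyond about 10 members.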
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims: To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods: Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results: Both algorithms matched point dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded a D99% that was 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than that of shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions: Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; its implementation may therefore require a re-evaluation of prescription doses. PMID:29657896
Chow, Benjamin J W; Freeman, Michael R; Bowen, James M; Levin, Leslie; Hopkins, Robert B; Provost, Yves; Tarride, Jean-Eric; Dennie, Carole; Cohen, Eric A; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan R; Paul, Narinder; Parker, John D; O'Reilly, Daria J; Xie, Feng; Goeree, Ron
2011-06-13
Computed tomographic coronary angiography (CTCA) has gained clinical acceptance for the detection of obstructive coronary artery disease. Although single-center studies have demonstrated excellent accuracy, multicenter studies have yielded variable results. The true diagnostic accuracy of CTCA in the "real world" remains uncertain. We conducted a field evaluation comparing multidetector CTCA with invasive CA (ICA) to understand CTCA's diagnostic accuracy in a real-world setting. A multicenter cohort study of patients awaiting ICA was conducted between September 2006 and June 2009. All patients had either a low or an intermediate pretest probability for coronary artery disease and underwent CTCA and ICA within 10 days. The results of CTCA and ICA were interpreted visually by local expert observers who were blinded to all clinical data and imaging results. Using a patient-based analysis (diameter stenosis ≥50%) of 169 patients, the sensitivity, specificity, positive predictive value, and negative predictive value were 81.3% (95% confidence interval [CI], 71.0%-89.1%), 93.3% (95% CI, 85.9%-97.5%), 91.6% (95% CI, 82.5%-96.8%), and 84.7% (95% CI, 76.0%-91.2%), respectively; the area under receiver operating characteristic curve was 0.873. The diagnostic accuracy varied across centers (P < .001), with a sensitivity, specificity, positive predictive value, and negative predictive value ranging from 50.0% to 93.2%, 92.0% to 100%, 84.6% to 100%, and 42.9% to 94.7%, respectively. Compared with ICA, CTCA appears to have good accuracy; however, there was variability in diagnostic accuracy across centers. Factors affecting institutional variability need to be better understood before CTCA is universally adopted. Additional real-world evaluations are needed to fully understand the impact of CTCA on clinical care. clinicaltrials.gov Identifier: NCT00371891.
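The patient-based metrics above follow directly from a 2x2 table of CTCA versus ICA results. The counts below are not taken from the paper; they are a hypothetical table chosen to be consistent with the reported percentages and the 169-patient cohort.

```python
# Diagnostic accuracy metrics from a hypothetical 2x2 table of
# CTCA (index test) vs ICA (reference standard) results.

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from 2x2 table counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Invented counts (65 + 6 + 15 + 83 = 169 patients) that reproduce
# the reported 81.3% / 93.3% / 91.6% / 84.7% figures:
m = diagnostic_metrics(tp=65, fp=6, fn=15, tn=83)
```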
Arnould, Valérie M. R.; Reding, Romain; Bormann, Jeanne; Gengler, Nicolas; Soyeurt, Hélène
2015-01-01
Simple Summary: Reducing the frequency of milk recording decreases the costs of official milk recording. However, this approach can negatively affect the accuracy of predicting daily yields. Equations to predict daily yield from morning or evening data were developed in this study for milk fat components, from traits easily recorded by milk recording organizations. The correlation values ranged from 96.4% to 97.6% (96.9% to 98.3%) when the daily yields were estimated from the morning (evening) milkings. The simplicity of the proposed models, which do not include the milking interval, should facilitate their use by breeding and milk recording organizations. Abstract: Reducing the frequency of milk recording would help reduce the costs of official milk recording. However, this approach could also negatively affect the accuracy of predicting daily yields. This problem has been investigated in numerous studies. In addition, published equations take into account milking intervals (MI), and these are often not available and/or are unreliable in practice. The first objective of this study was to propose models in which the MI was replaced by a combination of data easily recorded by dairy farmers. The second objective was to further investigate the fatty acids (FA) present in milk. Equations to predict daily yield from AM or PM data were based on a calibration database containing 79,971 records related to 51 traits [milk yield (expected AM, expected PM, and expected daily); fat content (expected AM, expected PM, and expected daily); fat yield (expected AM, expected PM, and expected daily; g/day); levels of seven different FAs or FA groups (expected AM, expected PM, and expected daily; g/dL milk); and the corresponding FA yields for these seven FA types/groups (expected AM, expected PM, and expected daily; g/day)]. These equations were validated using two distinct external datasets.
The results obtained from the proposed models were compared to previously published results for models which included a MI effect. The corresponding correlation values ranged from 96.4% to 97.6% when the daily yields were estimated from the AM milkings and ranged from 96.9% to 98.3% when the daily yields were estimated from the PM milkings. The simplicity of these proposed models should facilitate their use by breeding and milk recording organizations. PMID:26479379
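A prediction equation of the kind described, daily yield estimated from a single milking without the milking interval, has the general shape of a regression fitted to paired AM and daily records. The sketch below fits the simplest possible version by ordinary least squares; the study's actual models use more covariates, and the numbers here are made up for illustration.

```python
def fit_simple_ols(x, y):
    """Ordinary least squares fit y = a + b*x, e.g. daily yield (y)
    predicted from the AM milking yield (x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# toy AM-milking vs daily yields in kg (roughly daily ~ 2 * AM)
am = [10.0, 12.0, 14.0, 16.0]
daily = [19.5, 24.0, 28.5, 31.0]
a, b = fit_simple_ols(am, daily)
```

In practice the calibration would be validated on external datasets, as the authors do, rather than judged on the training data alone.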
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konobeevski, E. S., E-mail: konobeev@inr.ru; Burmistrov, Yu. M.; Zuyev, S. V.
The first results are obtained in a kinematically complete experiment devoted to measuring the n + d → p + n + n reaction yield at energies in the range E_n = 40-60 MeV and various angles of divergence of the two neutrons (Δθ = 4°, 6°, and 8°) in the geometry of neutron-neutron final-state interaction. The ¹S₀ neutron-neutron scattering length a_nn is determined by comparing the experimental energy dependence of the reaction yield with the results of a simulation in the Watson-Migdal approximation, which depend on a_nn. For E_n = 40 MeV and Δθ = 6° (the best statistics in the experiment), the value a_nn = -17.9 ± 1.0 fm was obtained. A further improvement of the experimental accuracy will make it possible to remove the existing disagreement among the results of different experiments.
Nyman, R S; Cappelen-Smith, J; al Suhaibani, H; Alfurayh, O; Shakweer, W; Akhtar, M
1997-05-01
To compare the yield and complications of ultrasound-guided gun-biopsy and manual Tru-Cut techniques in percutaneous renal biopsy. A total of 448 biopsies were reviewed. They comprised 124 manual and 131 gun-biopsies in native kidneys, and 111 manual and 82 gun-biopsies in transplant kidneys. The gun-biopsies were performed under real-time ultrasound (US) guidance. The manual technique used US mainly for marking the position of the kidney. There was a significantly higher diagnostic yield and fewer complications in the gun-biopsy group. A total of 8 major complications were found, all in the manual group. Provided that the operator is experienced in US scanning, a switch from the manual technique to real-time US-guided gun-biopsy will result in the improvement of diagnostic accuracy together with a reduced risk of complications.
NASA Technical Reports Server (NTRS)
Morain, S. A. (Principal Investigator); Williams, D. L.
1974-01-01
The author has identified the following significant results. Wheat area, yield, and production statistics as derived from satellite image analysis, combined with a weather model, are presented for a ten-county area in southwest Kansas. The data (representing the 1972-73 crop year) are compared for accuracy against both the USDA August estimate and its final (official) tabulation. The area estimates from imagery for both dryland and irrigated winter wheat were within 5% of the official figures for the same area, and predated them by almost one year. Yield on dryland wheat was estimated by the Thompson weather model to within 0.1% of the observed yield. A combined irrigated and dryland wheat production estimate for the ten-county area was completed in July 1973 and was within 1% of the production reported by USDA in February 1974.
Gas-phase conformations of 2-methyl-1,3-dithiolane investigated by microwave spectroscopy
NASA Astrophysics Data System (ADS)
Van, Vinh; Stahl, Wolfgang; Schwell, Martin; Nguyen, Ha Vinh Lam
2018-03-01
The conformational analysis of 2-methyl-1,3-dithiolane using quantum chemical calculations at some levels of theory yielded only one stable conformer with envelope geometry. However, other levels of theory indicated two envelope conformers. Analysis of the microwave spectrum recorded using two molecular jet Fourier transform microwave spectrometers covering the frequency range from 2 to 40 GHz confirms that only one conformer exists under jet conditions. The experimental spectrum was reproduced using a rigid-rotor model with centrifugal distortion correction within the measurement accuracy of 1.5 kHz, and molecular parameters were determined with very high accuracy. The gas phase structure of the title molecule is compared with the structures of other related molecules studied under the same experimental conditions.
Large Area Crop Inventory Experiment (LACIE). Phase 2 evaluation report
NASA Technical Reports Server (NTRS)
1977-01-01
Documentation of the activities of the Large Area Crop Inventory Experiment during the 1976 Northern Hemisphere crop year is presented. A brief overview of the experiment is included, as well as phase two area, yield, and production estimates for the United States Great Plains, Canada, and the Union of Soviet Socialist Republics spring and winter wheat regions. The accuracies of these estimates are compared with independent government estimates. Accuracy assessment of the United States Great Plains yardstick region based on a thorough blind-site analysis is given, and reasons for variations in estimating performance are discussed. Other phase two technical activities, including operations, exploratory analysis, reporting, methods of assessment, phase three and advanced system design, technical issues, and developmental activities, are also covered.
NASA Astrophysics Data System (ADS)
Krupinski, Elizabeth A.; Berbaum, Kevin S.; Caldwell, Robert; Schartz, Kevin M.
2012-02-01
Radiologists are reading more cases with more images, especially in CT and MRI, and are thus working longer hours than ever before. Concerns have been raised regarding fatigue and whether it impacts diagnostic accuracy. This study measured the impact of reader visual fatigue by assessing symptoms, visual strain via dark focus of accommodation, and diagnostic accuracy. Twenty radiologists and 20 radiology residents were given two diagnostic performance tests searching CT chest sequences for a solitary pulmonary nodule before (rested) and after (tired) a day of clinical reading. 10 cases used free search and navigation, and the other 100 cases used a preset scrolling speed and duration. Subjects filled out the Swedish Occupational Fatigue Inventory (SOFI) and the oculomotor strain subscale of the Simulator Sickness Questionnaire (SSQ) before each session. Accuracy was measured using ROC techniques. Swensson's ROC technique yielded an area of 0.86 rested vs. 0.83 tired (one-tailed p = 0.09); the LROC technique, an area of 0.73 rested vs. 0.66 tired (one-tailed p = 0.09); and the localization-accuracy technique, an area of 0.77 rested vs. 0.72 tired (one-tailed p = 0.13). Subjective measures of fatigue increased significantly from early to late reading. To date, the results support our findings with static images and detection of bone fractures. Radiologists at the end of a long work day experience greater levels of measurable visual fatigue or strain, contributing to a decrease in diagnostic accuracy. The decrease in accuracy was not, however, as great as with static images.
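An ROC area can be estimated nonparametrically as the Mann-Whitney statistic over reader confidence ratings. The sketch below uses made-up 5-point ratings, not the study's data, and the empirical estimator shown is a stand-in for the fitted-model Swensson techniques the abstract cites.

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """Empirical ROC area as the Mann-Whitney statistic:
    P(score_pos > score_neg) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# illustrative confidence ratings for nodule-present vs nodule-absent cases
rested = auc_mann_whitney([4, 5, 3, 5, 4], [1, 2, 3, 2, 1])
tired = auc_mann_whitney([3, 4, 2, 5, 3], [2, 2, 3, 1, 2])
```

With these toy ratings the rested area exceeds the tired one, mirroring the direction of the reported effect.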
Solav, Dana; Camomilla, Valentina; Cereatti, Andrea; Barré, Arnaud; Aminian, Kamiar; Wolf, Alon
2017-09-06
The aim of this study was to analyze the accuracy of bone pose estimation based on sub-clusters of three skin-markers characterized by triangular Cosserat point elements (TCPEs) and to evaluate the capability of four instantaneous physical parameters, which can be measured non-invasively in vivo, to identify the most accurate TCPEs. Moreover, TCPE pose estimations were compared with the estimations of two least squares minimization methods applied to the cluster of all markers, using rigid body (RBLS) and homogeneous deformation (HDLS) assumptions. Analysis was performed on previously collected in vivo treadmill gait data composed of simultaneous measurements of the gold-standard bone pose by bi-plane fluoroscopy tracking the subjects' knee prosthesis and a stereophotogrammetric system tracking skin-markers affected by soft tissue artifact. Femur orientation and position errors estimated from skin-marker clusters were computed for 18 subjects using clusters of up to 35 markers. Results based on gold-standard data revealed that instantaneous subsets of TCPEs exist which estimate the femur pose with reasonable accuracy (median root mean square error during stance/swing: 1.4/2.8 deg for orientation, 1.5/4.2 mm for position). A non-invasive and instantaneous criterion for selecting accurate TCPEs for pose estimation (4.8/7.3 deg, 5.8/12.3 mm) was compared with RBLS (4.3/6.6 deg, 6.9/16.6 mm) and HDLS (4.6/7.6 deg, 6.7/12.5 mm). Accounting for homogeneous deformation, using HDLS or selected TCPEs, yielded more accurate position estimations than the RBLS method, which, conversely, yielded more accurate orientation estimations. Further investigation is required to devise effective criteria for cluster selection that could represent a significant improvement in bone pose estimation accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
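The rigid-body least-squares (RBLS) idea is to find the rotation and translation that best map a marker cluster onto its reference configuration. In 3-D this is usually solved with an SVD-based (Kabsch) procedure; the planar analogue below has a closed form and is offered only as an illustration of the principle, not as the paper's implementation.

```python
import math

def rigid_fit_2d(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping
    2-D marker set src onto dst: a planar analogue of RBLS fitting."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - csx, y - csy, u - cdx, v - cdy
        sxx += x * u + y * v   # "dot" accumulator
        sxy += x * v - y * u   # "cross" accumulator
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# markers rotated by 30 degrees and shifted; the fit should recover both
src = [(0, 0), (1, 0), (0, 2)]
ang = math.radians(30)
dst = [(math.cos(ang) * x - math.sin(ang) * y + 5,
        math.sin(ang) * x + math.cos(ang) * y - 2) for x, y in src]
theta, t = rigid_fit_2d(src, dst)
```

On noise-free data like this the fit is exact; soft-tissue artifact is precisely the noise that makes the real problem hard.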
Real-time, resource-constrained object classification on a micro-air vehicle
NASA Astrophysics Data System (ADS)
Buck, Louis; Ray, Laura
2013-12-01
A real-time embedded object classification algorithm is developed through the novel combination of binary feature descriptors, a bag-of-visual-words object model and the cortico-striatal loop (CSL) learning algorithm. The BRIEF, ORB and FREAK binary descriptors are tested and compared to SIFT descriptors with regard to their respective classification accuracies, execution times, and memory requirements when used with CSL on a 12.6 g ARM Cortex embedded processor running at 800 MHz. Additionally, the effect of χ² feature mapping and opponent-color representations used with these descriptors is examined. These tests are performed on four data sets of varying sizes and difficulty, and the BRIEF descriptor is found to yield the best combination of speed and classification accuracy. Its use with CSL achieves accuracies between 67% and 95% of those achieved with SIFT descriptors and allows for the embedded classification of a 128×192 pixel image in 0.15 seconds, 60 times faster than classification with SIFT. χ² mapping is found to provide substantial improvements in classification accuracy for all of the descriptors at little cost, while opponent-color descriptors offer accuracy improvements only on colorful datasets.
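The χ² feature mapping used with bag-of-visual-words histograms typically approximates the additive χ² kernel. The abstract does not spell out the exact mapping, so the sketch below shows the underlying kernel itself, which a mapping of this family is designed to approximate with an explicit feature transform.

```python
def chi2_kernel(x, y):
    """Additive chi-squared kernel for non-negative histogram features:
    k(x, y) = sum_i 2*x_i*y_i / (x_i + y_i)."""
    return sum(2.0 * a * b / (a + b) for a, b in zip(x, y) if a + b > 0)

# toy L1-normalized visual-word histograms (illustrative only)
h1 = [0.5, 0.3, 0.2]
h2 = [0.4, 0.4, 0.2]
k = chi2_kernel(h1, h2)
```

A useful sanity check: for an L1-normalized histogram, k(h, h) equals the histogram's total mass, here 1.0.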
Assessment of climate change impact on yield of major crops in the Banas River Basin, India.
Dubey, Swatantra Kumar; Sharma, Devesh
2018-09-01
Crop growth models like AquaCrop are useful in understanding the impact of climate change on crop production, considering the various projections from global circulation models and regional climate models. The present study aims to assess the climate change impact on the yield of the major crops of the Banas River Basin, i.e., wheat, barley, and maize. The Banas basin is part of the semi-arid region of Rajasthan state in India. The AquaCrop model is used to calculate the yield of all three crops for a historical period of 30 years (1981-2010), which is then compared with observed yield data. Root Mean Square Error (RMSE) values are calculated to assess the model accuracy in prediction of yield. Further, the calibrated model is used to predict the possible impacts of climate change and CO2 concentration on crop yield using CORDEX-SA climate projections of three driving climate models (CNRM-CM5, CCSM4 and MPI-ESM-LR) for two different scenarios (RCP4.5 and RCP8.5) for the future period 2021-2050. RMSE values of simulated yield with respect to observed yield of wheat, barley and maize are 11.99, 16.15 and 19.13, respectively. It is predicted that the yield of all three crops will increase under climate change conditions for the future period (2021-2050). Copyright © 2018 Elsevier B.V. All rights reserved.
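The RMSE statistic used to score the simulated yields is straightforward; a minimal sketch follows, with made-up district yields rather than the study's data.

```python
import math

def rmse(observed, simulated):
    """Root mean square error between observed and simulated yields."""
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

# illustrative yields (same units for both series); not the study's data
obs = [30.0, 28.5, 32.0, 25.0]
sim = [28.0, 30.0, 31.0, 27.5]
err = rmse(obs, sim)
```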
Field design factors affecting the precision of ryegrass forage yield estimation
USDA-ARS?s Scientific Manuscript database
Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...
Estimation of genomic breeding values for milk yield in UK dairy goats.
Mucha, S; Mrode, R; MacLaren-Lee, I; Coffey, M; Conington, J
2015-11-01
The objective of this study was to estimate genomic breeding values for milk yield in crossbred dairy goats. The research was based on data provided by 2 commercial goat farms in the UK comprising 590,409 milk yield records on 14,453 dairy goats kidding between 1987 and 2013. The population was created by crossing 3 breeds: Alpine, Saanen, and Toggenburg. In each generation the best performing animals were selected for breeding, and as a result, a synthetic breed was created. The pedigree file contained 30,139 individuals, of which 2,799 were founders. The data set contained test-day records of milk yield, lactation number, farm, age at kidding, and year and season of kidding. Data on milk composition were unavailable. In total, 1,960 animals were genotyped with the Illumina 50K caprine chip. Two methods for estimation of genomic breeding values were compared: BLUP at the single nucleotide polymorphism level (BLUP-SNP) and single-step BLUP. The highest accuracy of 0.61 was obtained with single-step BLUP, and the lowest (0.36) with BLUP-SNP. Linkage disequilibrium (r(2), the squared correlation of the alleles at 2 loci) at 50 kb (distance between 2 SNP) was 0.18. This is the first attempt to implement genomic selection in UK dairy goats. Results indicate that the single-step method provides the highest accuracy for populations with a small number of genotyped individuals, where the number of genotyped males is low and females are predominant in the reference population. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
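Both BLUP-SNP and single-step BLUP rest on a genomic relationship matrix built from SNP genotypes. A common construction is VanRaden's method 1, sketched below with toy 0/1/2 genotypes; the study's actual matrix, scale, and blending with the pedigree relationship matrix are not reproduced here.

```python
def vanraden_g(M):
    """Genomic relationship matrix (VanRaden method 1) from an
    (n animals x m SNPs) matrix of 0/1/2 allele counts."""
    n, m = len(M), len(M[0])
    # observed allele frequencies per SNP
    p = [sum(row[j] for row in M) / (2.0 * n) for j in range(m)]
    # center genotypes by 2p
    Z = [[M[i][j] - 2 * p[j] for j in range(m)] for i in range(n)]
    denom = 2 * sum(pj * (1 - pj) for pj in p)
    return [[sum(Z[i][k] * Z[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]

# toy genotypes for 3 animals at 4 SNPs (illustrative only)
G = vanraden_g([[0, 1, 2, 1],
                [1, 1, 0, 2],
                [2, 0, 1, 1]])
```

In single-step BLUP this G is combined with the pedigree-based relationship matrix so that genotyped and ungenotyped animals are evaluated jointly, which is why it handles a small genotyped subset well.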
Verification of Plutonium Content in PuBe Sources Using MCNP® 6.2.0 Beta with TENDL 2012 Libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lockhart, Madeline Louise; McMath, Garrett Earl
2017-10-26
Although the production of PuBe neutron sources has been discontinued, hundreds of sources with unknown or inaccurately declared plutonium content are in existence around the world. Institutions have undertaken the task of assaying these sources, measuring and calculating the isotopic composition, plutonium content, and neutron yield. The nominal plutonium content, based on the neutron yield per gram of pure 239Pu, has been shown to be highly inaccurate. New methods of measuring the plutonium content allow a more accurate estimate of the true Pu content, but these measurements need verification. Using the TENDL 2012 nuclear data libraries, MCNP6 has the capability to simulate the (α, n) interactions in a PuBe source. Theoretically, if the source is modeled according to the plutonium content, isotopic composition, and other source characteristics, the calculated neutron yield in MCNP can be compared to the experimental yield, offering an indication of the accuracy of the declared plutonium content. In this study, three sets of PuBe sources from various backgrounds were modeled in MCNP6 1.2 Beta, according to the source specifications dictated by the individuals who assayed the sources. Verification of the source parameters with MCNP6 also serves as a means to test the alpha transport capabilities of MCNP6 1.2 Beta with TENDL 2012 alpha transport libraries. Finally, good agreement in the comparison would indicate the accuracy of the source parameters in addition to demonstrating MCNP's capabilities in simulating (α, n) interactions.
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
NASA Astrophysics Data System (ADS)
Wijesingha, J. S. J.; Deshapriya, N. L.; Samarakoon, L.
2015-04-01
Billions of people in the world depend on rice as a staple food and as an income-generating crop. Asia is the leader in rice cultivation, and it is necessary to maintain an up-to-date rice-related database to ensure food security as well as economic development. This study investigates the general applicability of the high temporal resolution Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m gridded vegetation product for monitoring rice crop growth, mapping rice crop acreage and analyzing crop yield at the province level. The MODIS 250 m Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) time series data, field data and crop calendar information were utilized in this research in Sa Kaeo Province, Thailand. The following methodology was used: (1) data pre-processing and rice plant growth analysis using Vegetation Indices (VI); (2) extraction of rice acreage and start-of-season dates from VI time series data; (3) accuracy assessment; and (4) yield analysis with MODIS VI. The results show a direct relationship between rice plant height and MODIS VI. The crop calendar information and the NDVI time series smoothed with the Whittaker smoother yielded accurate rice acreage estimates (86% area accuracy and 75% classification accuracy). Point-level yield analysis showed that MODIS EVI is highly correlated with rice yield, and yield prediction using the maximum EVI in the rice cycle predicted yield with an average prediction error of 4.2%. This study shows the immense potential of the MODIS gridded vegetation product for keeping an up-to-date Geographic Information System of rice cultivation.
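The Whittaker smoother fits a series z to noisy data y by minimizing ||y − z||² + λ·||D₂z||², where D₂ is the second-difference operator. A dense pure-Python sketch follows; production NDVI pipelines use sparse banded solvers, and the toy series and λ below are illustrative only.

```python
def whittaker_smooth(y, lam=10.0):
    """Whittaker smoother: minimize ||y - z||^2 + lam * ||D2 z||^2,
    with D2 the second-difference operator. Dense solve, fine for
    short time series."""
    n = len(y)
    # Build A = I + lam * D2^T D2
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for r in range(n - 2):  # row r of D2: z[r] - 2 z[r+1] + z[r+2]
        d = [0.0] * n
        d[r], d[r + 1], d[r + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            if d[i]:
                for j in range(n):
                    if d[j]:
                        A[i][j] += lam * d[i] * d[j]
    # Gaussian elimination; A is symmetric positive definite, no pivoting needed
    z = [float(v) for v in y]
    for k in range(n):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            z[i] -= f * z[k]
    for i in range(n - 1, -1, -1):
        z[i] = (z[i] - sum(A[i][j] * z[j] for j in range(i + 1, n))) / A[i][i]
    return z

# noisy NDVI-like series; the smoother reduces roughness, preserves the sum
y = [0.2, 0.5, 0.3, 0.6, 0.4, 0.7, 0.5]
z = whittaker_smooth(y, lam=5.0)
```

Two useful properties: the smoothed series is strictly less rough (smaller sum of squared second differences) than the input, and its total is preserved exactly, since second differences annihilate constants.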
Ferragina, A; Cipolat-Gotet, C; Cecchinato, A; Bittante, G
2013-01-01
Cheese yield is an important technological trait in the dairy industry in many countries. The aim of this study was to evaluate the effectiveness of Fourier-transform infrared (FTIR) spectral analysis of fresh unprocessed milk samples for predicting cheese yield and nutrient recovery traits. A total of 1,264 model cheeses were obtained from 1,500-mL milk samples collected from individual Brown Swiss cows. Individual measurements of 7 new cheese yield-related traits were obtained from the laboratory cheese-making procedure, including the fresh cheese yield, total solid cheese yield, and the water retained in curd, all as a percentage of the processed milk, and nutrient recovery (fat, protein, total solids, and energy) in the curd as a percentage of the same nutrient contained in the milk. All individual milk samples were analyzed using a MilkoScan FT6000 over the spectral range from 5,000 to 900 wavenumber × cm(-1). Two spectral acquisitions were carried out for each sample and the results were averaged before data analysis. Different chemometric models were fitted and compared with the aim of improving the accuracy of the calibration equations for predicting these traits. The most accurate predictions were obtained for total solid cheese yield and fresh cheese yield, which exhibited coefficients of determination between the predicted and measured values in cross-validation (1-VR) of 0.95 and 0.83, respectively. A less favorable result was obtained for water retained in curd (1-VR=0.65). Promising results were obtained for recovered protein (1-VR=0.81), total solids (1-VR=0.86), and energy (1-VR=0.76), whereas recovered fat exhibited a low accuracy (1-VR=0.41). 
As FTIR spectroscopy is a rapid, cheap, high-throughput technique that is already used to collect standard milk recording data, these FTIR calibrations for cheese yield and nutrient recovery highlight additional potential applications of the technique in the dairy industry, especially for monitoring cheese-making processes and milk payment systems. In addition, the prediction models can be used to provide breeding organizations with information on new phenotypes for cheese yield and milk nutrient recovery, potentially allowing these traits to be enhanced through selection. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
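The "1-VR" figure of merit quoted above is the coefficient of determination between measured values and cross-validated predictions. A minimal sketch with made-up numbers, not the study's calibration data:

```python
def r2_cv(y_true, y_pred):
    """Cross-validated coefficient of determination ('1-VR'):
    1 - SS_res / SS_tot, with y_pred taken from held-out predictions."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

# illustrative cheese-yield measurements vs held-out FTIR predictions
measured = [10.0, 12.0, 14.0, 16.0]
predicted = [10.5, 11.5, 14.5, 15.5]
score = r2_cv(measured, predicted)
```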
Feasibility of Multimodal Deformable Registration for Head and Neck Tumor Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortunati, Valerio, E-mail: v.fortunati@erasmusmc.nl; Verhaart, René F.; Angeloni, Francesco
2014-09-01
Purpose: To investigate the feasibility of using deformable registration in clinical practice to fuse MR and CT images of the head and neck for treatment planning. Method and Materials: A state-of-the-art deformable registration algorithm was optimized, evaluated, and compared with rigid registration. The evaluation was based on manually annotated anatomic landmarks and regions of interest in both modalities. We also developed a multiparametric registration approach, which simultaneously aligns T1- and T2-weighted MR sequences to CT. This was evaluated and compared with single-parametric approaches. Results: Our results show that deformable registration yielded better accuracy than rigid registration, without introducing unrealistic deformations. For deformable registration, an average landmark alignment of approximately 1.7 mm was obtained. For all the regions of interest excluding the cerebellum and the parotids, deformable registration provided a median modified Hausdorff distance of approximately 1 mm. Similar accuracies were obtained for the single-parameter and multiparameter approaches. Conclusions: This study demonstrates that deformable registration of head-and-neck CT and MR images is feasible, with overall a significantly higher accuracy than rigid registration.
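The modified Hausdorff distance used to score region-of-interest overlap is, in the common Dubuisson-Jain formulation (the abstract does not spell out its exact variant), the larger of the two mean nearest-neighbour distances between point sets. A sketch with illustrative contour points:

```python
def modified_hausdorff(A, B):
    """Modified Hausdorff distance (Dubuisson-Jain variant):
    max of the two directed mean nearest-neighbour distances."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    def mean_nn(X, Y):
        return sum(min(dist(x, y) for y in Y) for x in X) / len(X)

    return max(mean_nn(A, B), mean_nn(B, A))

# two parallel contour samples 1 mm apart (illustrative coordinates)
A = [(0, 0), (1, 0), (2, 0)]
B = [(0, 1), (1, 1), (2, 1)]
mhd = modified_hausdorff(A, B)
```

Unlike the classical Hausdorff distance, the mean-based variant is less sensitive to a single outlier point on either contour.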
Belgiu, Mariana; Drăguţ, Lucian
2014-10-01
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. 
The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea that classification is dependent on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing 'optimal segmentation'. Our results rather suggest that as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be ruled out, so that a high level of classification accuracy can still be achieved.
Genotyping by sequencing for genomic prediction in a soybean breeding population.
Jarquín, Diego; Kocak, Kyle; Posadas, Luis; Hyma, Katie; Jedlicka, Joseph; Graef, George; Lorenz, Aaron
2014-08-29
Advances in genotyping technology, such as genotyping by sequencing (GBS), are making genomic prediction more attractive as a way to reduce breeding cycle times and the costs associated with phenotyping. Genomic prediction and selection has been studied in several crop species, but no reports exist in soybean. The objectives of this study were to (i) evaluate the prospects for genomic selection using GBS in a typical soybean breeding program and (ii) evaluate the effect of GBS marker selection and imputation on genomic prediction accuracy. To achieve these objectives, a set of soybean lines sampled from the University of Nebraska Soybean Breeding Program were genotyped using GBS and evaluated for yield and other agronomic traits at multiple Nebraska locations. Genotyping by sequencing scored 16,502 single nucleotide polymorphisms (SNPs) with minor-allele frequency (MAF) > 0.05 and percentage of missing values ≤ 5% on 301 elite soybean breeding lines. When SNPs with up to 80% missing values were included, 52,349 SNPs were scored. Prediction accuracy for grain yield, assessed using cross validation, was estimated to be 0.64, indicating good potential for using genomic selection for grain yield in soybean. Filtering SNPs based on missing data percentage had little to no effect on prediction accuracy, especially when random forest imputation was used to impute missing values. The highest accuracies were observed when random forest imputation was used on all SNPs, but the differences were not significant. A standard additive G-BLUP model was robust; modeling additive-by-additive epistasis did not provide any improvement in prediction accuracy. The effect of training population size on accuracy began to plateau around 100, but accuracy steadily climbed until the largest possible size was used in this analysis. Including only SNPs with MAF > 0.30 provided higher accuracies when training populations were smaller.
Using GBS for genomic prediction in soybean holds good potential to expedite genetic gain. Our results suggest that standard additive G-BLUP models can be used on unfiltered, imputed GBS data without loss in accuracy.
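The MAF and missing-rate filters described above are simple column-wise screens on the genotype matrix. A minimal sketch using the study's stated thresholds (MAF > 0.05, ≤ 5% missing); the genotype matrix is a toy, and missing calls are encoded as None.

```python
def minor_allele_freq(genos):
    """MAF of one SNP from 0/1/2 allele counts; missing calls are None."""
    obs = [g for g in genos if g is not None]
    p = sum(obs) / (2.0 * len(obs))
    return min(p, 1 - p)

def filter_snps(M, maf_min=0.05, max_missing=0.05):
    """Return indices of SNP columns passing the MAF and missing-rate
    thresholds (MAF > maf_min, fraction missing <= max_missing)."""
    n, m = len(M), len(M[0])
    keep = []
    for j in range(m):
        col = [M[i][j] for i in range(n)]
        miss = sum(g is None for g in col) / n
        if miss <= max_missing and minor_allele_freq(col) > maf_min:
            keep.append(j)
    return keep

# toy lines x SNPs: col 0 is monomorphic, col 2 has 25% missing calls
M = [[0, 1, None],
     [0, 2, 1],
     [0, 0, 1],
     [0, 1, 1]]
keep = filter_snps(M)
```

Here only the middle SNP survives: the first is monomorphic (MAF = 0) and the third exceeds the missing-data cap.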
NASA Astrophysics Data System (ADS)
Lee, J.; Kang, S.; Jang, K.; Ko, J.; Hong, S.
2012-12-01
Crop productivity is associated with food security, and hence several models have been developed to estimate crop yield by combining remote sensing data with carbon cycle processes. In the present study, we attempted to estimate crop GPP and NPP using an algorithm based on the LUE model and a simplified respiration model. The states of Iowa and Illinois were chosen as the study site for estimating crop yield over a 5-year period (2006-2010), as they form the main Corn Belt area in the US. The present study focuses on developing crop-specific parameters for corn and soybean to estimate crop productivity and on yield mapping using satellite remote sensing data. We utilized 10 km spatial resolution daily meteorological data from WRF to provide meteorological variables on cloudy days, whereas on clear-sky days MODIS-based meteorological data were utilized to estimate daily GPP, NPP, and biomass. County-level statistics on yield, area harvested, and production were used to test model-predicted crop yield. The estimated input meteorological variables from MODIS and WRF showed good agreement with ground observations from 6 AmeriFlux tower sites in 2006. For example, correlation coefficients ranged from 0.93 to 0.98 for Tmin and Tavg, from 0.68 to 0.85 for daytime mean VPD, and from 0.85 to 0.96 for daily shortwave radiation. We developed a county-specific crop conversion coefficient, i.e., the ratio of yield to biomass on DOY 260, and then validated the estimated county-level crop yield against the statistical yield data. The estimated corn and soybean yields at the county level ranged from 671 to 1393 g m-2 y-1 and from 213 to 421 g m-2 y-1, respectively. The county-specific yield estimates mostly showed errors of less than 10%. Furthermore, we estimated crop yields at the state level, which were validated against the statistics data and showed errors of less than 1%. Further analysis of the crop conversion coefficient was conducted for DOY 200 and DOY 280.
For DOY 280, crop yield estimation showed better accuracy for soybean at the county level. Though DOY 200 resulted in less accuracy (i.e., 20% mean bias), it provides a useful tool for early forecasting of crop yield. We improved the spatial accuracy of estimated crop yield at the county level by developing county-specific crop conversion coefficients. Our results indicate that aboveground crop biomass can be estimated successfully with the simple LUE and respiration models combined with MODIS data, and that the county-specific conversion coefficients differ across counties. Hence, applying region-specific conversion coefficients is necessary to estimate crop yield with better accuracy.
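The yield chain described above — daily GPP from light-use efficiency, NPP via a simplified respiration term, and a conversion coefficient from cumulative biomass to yield — can be sketched as below. All numerical constants are hypothetical placeholders, not the calibrated crop-specific parameters of the study:

```python
def daily_gpp(par, fapar, lue_max, t_scalar, vpd_scalar):
    """GPP (g C m-2 d-1): maximum light-use efficiency (g C per MJ APAR)
    down-regulated by temperature and VPD scalars in [0, 1]."""
    return lue_max * t_scalar * vpd_scalar * fapar * par

def season_yield(par, fapar, lue_max, t_s, vpd_s, resp_frac, conv_coef):
    """Accumulate NPP over the season and convert biomass to yield."""
    gpp = [daily_gpp(p, f, lue_max, ts, vs)
           for p, f, ts, vs in zip(par, fapar, t_s, vpd_s)]
    npp = [(1.0 - resp_frac) * g for g in gpp]   # fixed respiration fraction
    biomass = sum(npp)
    return conv_coef * biomass                   # region-specific conversion

# 100-day toy season with constant drivers (placeholder values)
days = 100
y = season_yield(par=[10.0] * days, fapar=[0.8] * days, lue_max=1.2,
                 t_s=[0.9] * days, vpd_s=[0.9] * days,
                 resp_frac=0.45, conv_coef=0.5)
```

The county-specific step in the study amounts to fitting `conv_coef` per county from statistical yields, which is why the coefficient differs across counties.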
Comparison of methods for the implementation of genome-assisted evaluation of Spanish dairy cattle.
Jiménez-Montero, J A; González-Recio, O; Alenda, R
2013-01-01
The aim of this study was to evaluate methods for genomic evaluation of the Spanish Holstein population as an initial step toward the implementation of routine genomic evaluations. This study provides a description of the population structure of progeny-tested bulls in Spain at the genomic level and compares different genomic evaluation methods with regard to accuracy and bias. Two Bayesian linear regression models, Bayes-A and Bayesian LASSO (B-LASSO), as well as a machine learning algorithm, Random Boosting (R-Boost), and BLUP using a realized genomic relationship matrix (G-BLUP), were compared. Five traits that are currently under selection in the Spanish Holstein population were used: milk yield, fat yield, protein yield, fat percentage, and udder depth. In total, genotypes from 1859 progeny-tested bulls were used. The training sets were composed of bulls born before 2005, including 1601 bulls for production and 1574 bulls for type, whereas the testing sets contained 258 and 235 bulls born in 2005 or later for production and type, respectively. Deregressed proofs (DRP) from the January 2009 Interbull (Uppsala, Sweden) evaluation were used as the dependent variables for bulls in the training sets, whereas DRP from the December 2011 Interbull evaluation were used to compare genomic predictions with progeny test results for bulls in the testing set. Genomic predictions were more accurate than traditional pedigree indices for predicting future progeny test results of young bulls. The gain in accuracy due to inclusion of genomic data varied by trait and ranged from 0.04 to 0.42 Pearson correlation units. Results averaged across traits showed that B-LASSO had the highest accuracy, with an advantage of 0.01, 0.03 and 0.03 points in Pearson correlation compared with R-Boost, Bayes-A, and G-BLUP, respectively.
The B-LASSO predictions also showed the least bias (0.02, 0.03 and 0.10 SD units less than Bayes-A, R-Boost and G-BLUP, respectively), as measured by the mean difference between genomic predictions and progeny test results. The R-Boost algorithm provided genomic predictions with regression coefficients closer to unity, an alternative measure of bias, for 4 out of 5 traits, and also resulted in mean squared error estimates that were 2%, 10%, and 12% smaller than those of B-LASSO, Bayes-A, and G-BLUP, respectively. The observed prediction accuracy obtained with these methods was within the range of values expected for a population of similar size, suggesting that the prediction method and reference population described herein are appropriate for implementation of routine genome-assisted evaluations in Spanish dairy cattle. R-Boost is a competitive marker regression methodology in terms of predictive ability that can accommodate large data sets. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
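The three comparison measures used above — Pearson correlation with DRP (accuracy), mean difference (bias), and the regression coefficient of DRP on the prediction (1.0 indicating no inflation) — are straightforward to compute. A small sketch with made-up numbers:

```python
import numpy as np

def evaluate_predictions(drp, pred):
    """Return (accuracy, bias, slope) for testing-set bulls:
    Pearson correlation, mean(pred - drp), and the regression
    coefficient of DRP on the genomic prediction."""
    r = float(np.corrcoef(drp, pred)[0, 1])
    bias = float(np.mean(pred - drp))
    slope = float(np.polyfit(pred, drp, 1)[0])   # DRP regressed on prediction
    return r, bias, slope

# Hypothetical deregressed proofs vs. genomic predictions
pred = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
drp = 0.9 * pred + 0.1                           # predictions slightly inflated
r, bias, slope = evaluate_predictions(drp, pred)
```

A slope below 1.0, as in this toy case, corresponds to inflated genomic predictions; the abstract's comparison favors the method whose slope is closest to unity.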
Iskandar, Aline; Limone, Brendan; Parker, Matthew W; Perugini, Andrew; Kim, Hyejin; Jones, Charles; Calamari, Brian; Coleman, Craig I; Heller, Gary V
2013-02-01
It remains controversial whether the diagnostic accuracy of single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) differs in men as compared to women. We performed a meta-analysis to investigate gender differences in the accuracy of SPECT MPI for the diagnosis of CAD (≥50% stenosis). Two investigators independently performed a systematic review of the MEDLINE and EMBASE databases from inception through January 2012 for English-language studies determining the diagnostic accuracy of SPECT MPI. We included prospective studies that compared SPECT MPI with conventional coronary angiography and provided sufficient data to calculate gender-specific true and false positives and negatives. Data from studies evaluating <20 patients of one gender were excluded. Bivariate meta-analysis was used to create summary receiver operating characteristic curves. Twenty-six studies met the inclusion criteria, representing 1,148 women and 1,142 men. Bivariate meta-analysis yielded a mean sensitivity and specificity of 84.2% (95% confidence interval [CI] 78.7%-88.6%) and 78.7% (CI 70.0%-85.3%) for SPECT MPI in women and 89.1% (CI 84.0%-92.7%) and 71.2% (CI 60.8%-79.8%) for SPECT MPI in men. There was no significant difference in sensitivity (P = .15) or specificity (P = .23) between male and female subjects. In a bivariate meta-analysis of the available literature, the diagnostic accuracy of SPECT MPI is similar for men and women.
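A full bivariate meta-analysis jointly models sensitivity and specificity with random effects; as a much simpler stand-in, per-study proportions can be pooled on the logit scale with inverse-variance weights. The sketch below uses a continuity correction and fixed effects, a deliberate simplification of the bivariate model used in the study:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def pool_proportion(counts):
    """counts: list of (events, non_events) per study, e.g. (TP, FN) for
    sensitivity. Fixed-effect inverse-variance pooling on the logit scale."""
    num = den = 0.0
    for a, b in counts:
        p = (a + 0.5) / (a + b + 1.0)             # continuity-corrected proportion
        var = 1.0 / (a + 0.5) + 1.0 / (b + 0.5)   # variance of the logit
        w = 1.0 / var
        num += w * logit(p)
        den += w
    return inv_logit(num / den)

# Two hypothetical studies in women: (TP, FN) pairs for sensitivity
sens = pool_proportion([(84, 16), (42, 8)])
```

Pooling specificity works identically with (TN, FP) counts; the bivariate model additionally captures the correlation between the two across studies.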
Feature Selection Methods for Zero-Shot Learning of Neural Activity
Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional magnetic resonance imaging and electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513
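The correlation-based stability baseline mentioned above scores each candidate feature by how reproducible its stimulus-response profile is across repeated presentations, then keeps the top-scoring features. A minimal sketch on synthetic data (dimensions and noise level are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)

def stability_scores(runs):
    """runs: array (n_repeats, n_stimuli, n_features). Score each feature by
    the mean pairwise correlation of its response profile across repeats."""
    n_rep, n_stim, n_feat = runs.shape
    scores = np.zeros(n_feat)
    for f in range(n_feat):
        profiles = runs[:, :, f]              # (n_repeats, n_stimuli)
        c = np.corrcoef(profiles)             # repeat-by-repeat correlations
        iu = np.triu_indices(n_rep, k=1)
        scores[f] = c[iu].mean()
    return scores

# 4 repeats, 60 stimuli, 20 features: only the first 5 carry a stable signal
signal = rng.normal(0, 1, (60, 20))
mask = np.r_[np.ones(5), np.zeros(15)]
runs = np.stack([signal * mask + 0.3 * rng.normal(0, 1, (60, 20))
                 for _ in range(4)])
scores = stability_scores(runs)
top5 = set(np.argsort(scores)[-5:])
```

Stable features score near 1 while noise features hover near 0, so a simple top-k cut recovers the informative set.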
Analysis of MINIE2013 Explosion Air-Blast Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnurr, Julie M.; Rodgers, Arthur J.; Kim, Keehoon
We report analysis of air-blast overpressure measurements from the MINIE2013 explosive experiments. The MINIE2013 experiment involved a series of nearly 70 near-surface (height-of-burst, HOB, ranging from -1 to +4 m) low-yield (W = 2-20 kg TNT equivalent) chemical high-explosive tests that were recorded at local distances (230 m – 28.5 km). Many of the W and HOB combinations were repeated, allowing for quantification of the variability in air-blast features and corresponding yield estimates. We measured canonical signal features (peak overpressure, impulse per unit area, and positive pulse duration) from the air-blast data and compared these to existing air-blast models. Peak overpressure measurements showed good agreement with the models at close ranges but tended to attenuate more rapidly at longer ranges (~1 km), which is likely caused by upward refraction of acoustic waves due to a negative vertical gradient of sound speed. We estimated yields of the MINIE2013 explosions using the Integrated Yield Determination Tool (IYDT). Errors of the estimated yields were on average within 30% of the reported yields, and there were no significant differences in the accuracy of the IYDT predictions grouped by yield. IYDT estimates tend to be lower than ground-truth yields, possibly because of overpressure amplitudes reduced by upward refraction. Finally, we report preliminary results on the development of a new parameterized air-blast waveform.
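Yield estimation from peak overpressure rests on Hopkinson-Cranz cube-root scaling: ranges collapse onto a scaled distance Z = R/W^(1/3). The sketch below inverts a generic one-term power-law overpressure model for W; the constants are invented for illustration and are not the IYDT's or any published blast curve's:

```python
K, N = 1.8e3, 1.4   # hypothetical model constants (kPa, dimensionless)

def peak_overpressure(r_m, w_kg):
    """Peak overpressure (kPa) as a power law in Hopkinson-Cranz scaled distance."""
    z = r_m / w_kg ** (1.0 / 3.0)        # scaled distance, m / kg^(1/3)
    return K * z ** (-N)

def estimate_yield(r_m, p_kpa):
    """Invert the model: recover charge weight from a measured peak overpressure."""
    z = (p_kpa / K) ** (-1.0 / N)
    return (r_m / z) ** 3

p = peak_overpressure(230.0, 10.0)       # synthetic "measurement" at 230 m
w_hat = estimate_yield(230.0, p)
```

The bias noted in the abstract fits this picture: if refraction reduces the measured overpressure below the model's prediction, the inverted W is biased low.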
An adapted yield criterion for the evolution of subsequent yield surfaces
NASA Astrophysics Data System (ADS)
Küsters, N.; Brosius, A.
2017-09-01
In numerical analysis of sheet metal forming processes, the anisotropic material behaviour is often modelled with isotropic work hardening and an average Lankford coefficient. In contrast, experimental observations show an evolution of the Lankford coefficients, which can be associated with a yield surface change due to kinematic and distortional hardening. Commonly, extensive efforts are carried out to describe these phenomena. In this paper an isotropic material model based on the Yld2000-2d criterion is adapted with an evolving yield exponent in order to change the yield surface shape. The yield exponent is linked to the accumulative plastic strain. This change has the effect of a rotating yield surface normal. As the normal is directly related to the Lankford coefficient, the change can be used to model the evolution of the Lankford coefficient during yielding. The paper will focus on the numerical implementation of the adapted material model for the FE-code LS-Dyna, mpi-version R7.1.2-d. A recently introduced identification scheme [1] is used to obtain the parameters for the evolving yield surface and will be briefly described for the proposed model. The suitability for numerical analysis will be discussed for deep drawing processes in general. Efforts for material characterization and modelling will be compared to other common yield surface descriptions. Besides experimental efforts and achieved accuracy, the potential of flexibility in material models and the risk of ambiguity during identification are of major interest in this paper.
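The core idea above — letting the yield exponent evolve with accumulated plastic strain so that the yield-surface normal (and hence the Lankford coefficient) rotates — can be illustrated with a simpler isotropic Hosford-type criterion standing in for Yld2000-2d. The evolution law and all constants below are hypothetical:

```python
import numpy as np

def hosford(s1, s2, a):
    """Isotropic Hosford plane-stress equivalent stress: a sketch stand-in for
    Yld2000-2d, which additionally applies two linear stress transformations."""
    return ((abs(s1) ** a + abs(s2) ** a + abs(s1 - s2) ** a) / 2.0) ** (1.0 / a)

def exponent(eps_p, a0=8.0, a_inf=2.0, k=5.0):
    """Hypothetical evolution law: the yield exponent relaxes with accumulated
    plastic strain, reshaping the yield surface."""
    return a_inf + (a0 - a_inf) * np.exp(-k * eps_p)

def normal_angle(s1, s2, a, h=1e-6):
    """Direction of the yield-surface normal via central differences (degrees)."""
    df1 = (hosford(s1 + h, s2, a) - hosford(s1 - h, s2, a)) / (2 * h)
    df2 = (hosford(s1, s2 + h, a) - hosford(s1, s2 - h, a)) / (2 * h)
    return float(np.degrees(np.arctan2(df2, df1)))

# Normal at a biaxial stress point before and after straining
ang_start = normal_angle(1.0, 0.3, exponent(0.0))   # a = 8 at yield onset
ang_late  = normal_angle(1.0, 0.3, exponent(1.0))   # a -> ~2 after straining
```

Because the plastic strain increment is normal to the yield surface, this exponent-driven rotation of the normal translates directly into an evolving Lankford coefficient.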
Deep-learning derived features for lung nodule classification with limited datasets
NASA Astrophysics Data System (ADS)
Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.
2018-02-01
Only a few percent of the indeterminate nodules found in lung CT images are cancerous. However, enabling earlier diagnosis is important to spare patients with benign nodules invasive procedures or long-term surveillance. We are evaluating a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist and texture features of the lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach to feature derivation produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to different-sized lesions without interpolation, and performed well with a relatively small amount of training data.
Dall'Ara, E; Barber, D; Viceconti, M
2014-09-22
The accurate measurement of local strain is necessary to study bone mechanics and to validate micro-computed tomography (µCT) based finite element (FE) models at the tissue scale. Digital volume correlation (DVC) has been used to provide a volumetric estimation of local strain in trabecular bone samples with reasonable accuracy. However, nothing has been reported so far for µCT-based analysis of cortical bone. The goal of this study was to evaluate the accuracy and precision of a deformable registration method for prediction of local zero-strains in bovine cortical and trabecular bone samples. The accuracy and precision were analyzed by comparing virtually displaced scans, repeated scans without any repositioning of the sample in the scanner, and repeated scans with repositioning of the samples. The analysis showed that both precision and accuracy errors decrease with increasing size of the analyzed region, following power laws. Of the sources investigated, the main source of error was found to be the intrinsic noise of the images. The results, once extrapolated to the larger regions of interest typically used in the literature, were in most cases better than those previously reported. For a nodal spacing equal to 50 voxels (498 µm), the accuracy and precision ranges were 425-692 µε and 202-394 µε, respectively. In conclusion, it was shown that the proposed method can be used to study the local deformation of cortical and trabecular bone loaded beyond yield, if a sufficiently high nodal spacing is used. Copyright © 2014 Elsevier Ltd. All rights reserved.
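The reported power-law dependence of DVC error on the size of the analyzed region can be characterized with a log-log least-squares fit. A sketch on noiseless synthetic data (the constants are arbitrary, in the spirit of microstrain errors falling with nodal spacing):

```python
import numpy as np

def fit_power_law(spacing, err):
    """Fit err = c * spacing**b by least squares in log-log space."""
    b, logc = np.polyfit(np.log(spacing), np.log(err), 1)
    return float(np.exp(logc)), float(b)

spacing = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # nodal spacing, voxels
err = 5.0e4 * spacing ** (-1.2)                      # synthetic microstrain errors
c, b = fit_power_law(spacing, err)
```

Once c and b are fitted from a few measured spacings, the expression extrapolates the error to the larger regions of interest used in the literature, as done in the abstract.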
Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme
2017-06-01
To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Accuracy Studies criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling, 95 vs. 89% (p < 0.02), while specificity was significantly higher for supine sampling compared with 24-h urine, 95 vs. 90% (p < 0.03). Partial areas under the curve were 0.942, 0.913, and 0.932 for supine sampling, seated sampling, and urine, respectively. Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling, and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provide the highest accuracy at all prevalence rates.
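The prevalence-dependent accuracies quoted above follow from the identity: accuracy = sensitivity × prevalence + specificity × (1 − prevalence). Plugging in the abstract's sensitivity/specificity pairs reproduces the reported linear trends:

```python
def diagnostic_accuracy(sens, spec, prevalence):
    """Overall accuracy as a prevalence-weighted mix of sensitivity and specificity."""
    return sens * prevalence + spec * (1.0 - prevalence)

# Sensitivity/specificity pairs as reported in the abstract
tests = {
    "supine plasma": (0.95, 0.95),   # accuracy constant at 95%
    "seated plasma": (0.89, 0.94),   # falls from 94% to 89% as prevalence rises
    "24-h urine":    (0.93, 0.90),   # rises from 90% to 93% as prevalence rises
}
acc_supine = diagnostic_accuracy(*tests["supine plasma"], prevalence=0.5)
```

Because supine sampling has equal sensitivity and specificity, its accuracy is independent of prevalence, which is exactly the "constant at 95%" result.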
EXhype: A tool for mineral classification using hyperspectral data
NASA Astrophysics Data System (ADS)
Adep, Ramesh Nityanand; shetty, Amba; Ramesh, H.
2017-02-01
Various supervised classification algorithms have been developed to classify earth surface features using hyperspectral data. Each algorithm is modelled on a different area of human expertise. However, the performance of conventional algorithms in mapping minerals in particular is not satisfactory, in view of their typical spectral responses. This study introduces a new expert system named 'EXhype (Expert system for hyperspectral data classification)' to map minerals. The system incorporates human expertise at several stages of its implementation: (i) to deal with intra-class variation; (ii) to identify absorption features; (iii) to discriminate spectra by considering absorption features, non-absorption features, and full-spectrum comparison; and (iv) finally, to take a decision based on learning and by emphasizing the most important features. It is developed using a knowledge base consisting of an Optimal Spectral Library, the Segmented Upper Hull method, the Spectral Angle Mapper (SAM), and an Artificial Neural Network. The performance of EXhype is compared with the traditional, most commonly used SAM algorithm using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data acquired over Cuprite, Nevada, USA. A virtual verification method is used to collect sample information for accuracy assessment. Further, a modified accuracy assessment method is used to obtain realistic user's accuracies in cases where only limited or desired classes are considered for classification. With the modified accuracy assessment method, SAM and EXhype yield overall accuracies of 60.35% and 90.75% and kappa coefficients of 0.51 and 0.89, respectively. It was also found that the virtual verification method allows the use of the preferred stratified random sampling method and eliminates the difficulties associated with it. The experimental results show that EXhype not only produces better accuracy than traditional SAM but can also correctly classify the minerals.
It is proficient in avoiding misclassification between target classes when applied to minerals.
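The SAM baseline that EXhype is compared against assigns each pixel to the reference spectrum subtending the smallest spectral angle, which makes it insensitive to illumination scaling. A minimal sketch with made-up two-mineral library spectra:

```python
import numpy as np

def spectral_angle(x, r):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(pixel, library):
    """Assign the class whose library spectrum has the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

library = {  # hypothetical 5-band reflectance spectra
    "alunite":   np.array([0.45, 0.30, 0.55, 0.40, 0.35]),
    "kaolinite": np.array([0.30, 0.50, 0.35, 0.55, 0.45]),
}
pixel = 0.6 * library["alunite"]          # same spectral shape, darker illumination
label = sam_classify(pixel, library)
```

EXhype layers absorption-feature analysis and a neural network on top of this angle measure, which is where its accuracy gain over plain SAM comes from.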
Schuelke, Matthew J; Day, Eric Anthony; McEntire, Lauren E; Boatman, Jazmine Espejo; Wang, Xiaoqian; Kowollik, Vanessa; Boatman, Paul R
2009-07-01
The authors examined the relative criterion-related validity of knowledge structure coherence and two accuracy-based indices (closeness and correlation) as well as the utility of using a combination of knowledge structure indices in the prediction of skill acquisition and transfer. Findings from an aggregation of 5 independent samples (N = 958) whose participants underwent training on a complex computer simulation indicated that coherence and the accuracy-based indices yielded comparable zero-order predictive validities. Support for the incremental validity of using a combination of indices was mixed; the most, albeit small, gain came in pairing coherence and closeness when predicting transfer. After controlling for baseline skill, general mental ability, and declarative knowledge, only coherence explained a statistically significant amount of unique variance in transfer. Overall, the results suggested that the different indices largely overlap in their representation of knowledge organization, but that coherence better reflects adaptable aspects of knowledge organization important to skill transfer.
Performance Analysis of Classification Methods for Indoor Localization in Vlc Networks
NASA Astrophysics Data System (ADS)
Sánchez-Rodríguez, D.; Alonso-González, I.; Sánchez-Medina, J.; Ley-Bosch, C.; Díaz-Vilariño, L.
2017-09-01
Indoor localization has gained considerable attention over the past decade because of the emergence of numerous location-aware services. Research works have been proposed on solving this problem by using wireless networks. Nevertheless, there is still much room for improvement in the quality of the proposed classification models. In recent years, the emergence of Visible Light Communication (VLC) has brought a brand new approach to high-quality indoor positioning. Among its advantages, this new technology is immune to electromagnetic interference and exhibits a smaller variance of received signal power than RF-based technologies. In this paper, a performance analysis of seventeen machine learning classifiers for indoor localization in VLC networks is carried out. The analysis is accomplished in terms of accuracy, average distance error, computational cost, training size, precision, and recall. Results show that most of the classifiers achieve an accuracy above 90%. The best tested classifier yielded a 99.0% accuracy, with an average error distance of 0.3 centimetres.
Asiimwe, Stephen; Oloya, James; Song, Xiao; Whalen, Christopher C
2014-12-01
Unsupervised HIV self-testing (HST) has the potential to increase knowledge of HIV status; however, its accuracy is unknown. To estimate the accuracy of unsupervised HST in field settings in Uganda, we performed a non-blinded, randomized controlled, non-inferiority trial of unsupervised compared with supervised HST among selected high-HIV-risk fisherfolk (22.1% HIV prevalence) in three fishing villages in Uganda between July and September 2013. The study enrolled 246 participants and randomized them in a 1:1 ratio to unsupervised HST or provider-supervised HST. In an intent-to-treat analysis, the HST sensitivity was 90% in the unsupervised arm and 100% in the provider-supervised arm, yielding a difference of -10% (90% CI -21, 1%); non-inferiority was not shown. In a per-protocol analysis, the difference in sensitivity was -5.6% (90% CI -14.4, 3.3%) and did show non-inferiority. We conclude that unsupervised HST is feasible in rural Africa and may be non-inferior to provider-supervised HST.
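The non-inferiority logic above compares the lower bound of a 90% CI for the sensitivity difference against a pre-specified margin. A Wald-interval sketch with hypothetical counts chosen to mirror the reported 90% vs. 100% sensitivities (the trial's actual counts, margin, and CI method may differ):

```python
import math

def noninferiority_sens(tp1, n1, tp2, n2, margin=0.10, z=1.645):
    """Wald 90% CI for the difference in sensitivities (arm1 - arm2).
    Non-inferiority of arm1 is shown if the lower bound exceeds -margin."""
    p1, p2 = tp1 / n1, tp2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = d - z * se, d + z * se
    return d, (lo, hi), lo > -margin

# Hypothetical counts: 27/30 true positives unsupervised, 25/25 supervised
d, (lo, hi), shown = noninferiority_sens(27, 30, 25, 25)
```

With these toy counts the lower bound falls below the margin, so non-inferiority is not shown, matching the intent-to-treat conclusion in the abstract.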
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyüre, B.; Márkus, B. G.; Bernáth, B.
2015-09-15
We present a novel method to determine the resonant frequency and quality factor of microwave resonators which is faster, more stable, and conceptually simpler than existing techniques. The microwave resonator is pumped with microwave radiation at a frequency away from its resonance. It then emits an exponentially decaying radiation at its eigenfrequency when the excitation is rapidly switched off. The emitted microwave signal is down-converted with a microwave mixer, digitized, and its Fourier transformation (FT) directly yields the resonance curve in a single shot. Being an FT-based method, this technique possesses the Fellgett (multiplex) and Connes (accuracy) advantages, and it conceptually mimics that of pulsed nuclear magnetic resonance. We also establish a novel benchmark to compare the accuracy of the different approaches to microwave resonator measurements. This shows that the present method has similar accuracy to the existing ones, which are based on sweeping or modulating the frequency of the microwave radiation.
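The single-shot principle described above can be mimicked numerically: for an exponential ring-down, the spectrum is Lorentzian with FWHM = 1/(π·τ), so Q = f0/FWHM = π·f0·τ. The sketch below uses audio-range numbers in place of a down-converted microwave signal:

```python
import numpy as np

fs, f0, tau = 65536.0, 1000.0, 0.05      # sample rate (Hz), ring-down freq, decay (s)
t = np.arange(int(fs)) / fs              # 1 s record -> 1 Hz FFT resolution
sig = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

# Single-shot FT of the ring-down gives the resonance curve directly
spec = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), d=1 / fs)
f0_est = float(freqs[np.argmax(spec)])   # resonance = peak of the Lorentzian

# FWHM of the Lorentzian is 1/(pi*tau), so Q follows from the decay constant
q_est = float(np.pi * f0_est * tau)
```

In the actual experiment the decaying signal is a down-converted microwave tone, but the estimation step, peak location plus linewidth (or decay constant), is identical.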
MC-PDFT can calculate singlet-triplet splittings of organic diradicals
NASA Astrophysics Data System (ADS)
Stoneburner, Samuel J.; Truhlar, Donald G.; Gagliardi, Laura
2018-02-01
The singlet-triplet splittings of a set of diradical organic molecules are calculated using multiconfiguration pair-density functional theory (MC-PDFT), and the results are compared with those obtained by Kohn-Sham density functional theory (KS-DFT) and complete active space second-order perturbation theory (CASPT2) calculations. We found that MC-PDFT, even with small and systematically defined active spaces, is competitive in accuracy with CASPT2, and it yields results with greater accuracy and precision than Kohn-Sham DFT with the parent functional. MC-PDFT also avoids the challenges associated with spin contamination in KS-DFT. It is also shown that MC-PDFT is much less computationally expensive than CASPT2 when applied to larger active spaces, and this illustrates the promise of this method for larger diradical organic systems.
Frequency domain laser velocimeter signal processor: A new signal processing scheme
NASA Technical Reports Server (NTRS)
Meyers, James F.; Clemmons, James I., Jr.
1987-01-01
A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a smart instrument that is able to configure itself, based on the characteristics of the input signals, for optimum measurement accuracy. The signal processor is composed of a high-speed 2-bit transient recorder for signal capture and a combination of adaptive digital filters with energy and/or zero crossing detection signal processing. The system is designed to accept signals with frequencies up to 100 MHz with standard deviations up to 20 percent of the average signal frequency. Results from comparative simulation studies indicate measurement accuracies 2.5 times better than with a high-speed burst counter, from signals with as few as 150 photons per burst.
Eyewitness accuracy rates in police showup and lineup presentations: a meta-analytic comparison.
Steblay, Nancy; Dysart, Jennifer; Fulero, Solomon; Lindsay, R C
2003-10-01
Meta-analysis is used to compare identification accuracy rates in showups and lineups. Eight papers were located, providing 12 tests of the hypothesis and including 3013 participants. Results indicate that showups generate lower choosing rates than lineups. In target present conditions, showups and lineups yield approximately equal hit rates, and in target absent conditions, showups produce a significantly higher level of correct rejections. False identification rates are approximately equal in showups and lineups when lineup foil choices are excluded from analysis. Dangerous false identifications are more numerous for showups when an innocent suspect resembles the perpetrator. Function of lineup foils, assessment strategies for false identifications, and the potential impact of biases in lineup practice are suggested as additional considerations in evaluation of showup versus lineup efficacy.
Noise parameter estimation for poisson corrupted images using variance stabilization transforms.
Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo
2014-03-01
Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.
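One way variance stabilization enables noise parameter estimation: for pure Poisson data the Anscombe transform 2√(x + 3/8) has approximately unit variance, so for a gain-scaled image y = α·x the gain can be estimated by searching for the α that restores unit variance after unscaling. A sketch on a synthetic flat image (real images need this applied per locally-flat region, and the cited method is more general than this grid search):

```python
import numpy as np

rng = np.random.default_rng(7)

def anscombe(x):
    """Variance-stabilizing transform: Var[anscombe(Poisson)] ~ 1."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def estimate_gain(img, grid=np.arange(0.5, 5.01, 0.05)):
    """Pick the gain whose unscaled image is closest to unit variance after
    stabilization (valid here because the test image is statistically flat)."""
    devs = [abs(float(np.var(anscombe(img / a))) - 1.0) for a in grid]
    return float(grid[int(np.argmin(devs))])

true_gain = 2.5
img = true_gain * rng.poisson(lam=50.0, size=(256, 256)).astype(float)
gain_hat = estimate_gain(img)
```

Because the search only involves a square root and a variance per candidate, this kind of estimator is cheap, consistent with the low computational complexity claimed above.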
Closed-Form Evaluation of Mutual Coupling in a Planar Array of Circular Apertures
NASA Technical Reports Server (NTRS)
Bailey, M. C.
1996-01-01
The integral expression for the mutual admittance between circular apertures in a planar array is evaluated in closed form. Very good accuracy is realized when compared with values that were obtained by numerical integration. Utilization of this closed-form expression, for all element pairs that are separated by more than one element spacing, yields extremely accurate results and significantly reduces the computation time that is required to analyze the performance of a large electronically scanning antenna array.
Montesinos-López, Abelardo; Montesinos-López, Osval A; Cuevas, Jaime; Mata-López, Walter A; Burgueño, Juan; Mondal, Sushismita; Huerta, Julio; Singh, Ravi; Autrique, Enrique; González-Pérez, Lorena; Crossa, José
2017-01-01
Modern agriculture uses hyperspectral cameras that provide hundreds of reflectance measurements at discrete narrow bands in many environments. These bands often cover the whole visible light spectrum and part of the infrared and ultraviolet light spectra. From the bands, vegetation indices are constructed for predicting agronomically important traits such as grain yield and biomass. However, since vegetation indices use only some wavelengths (referred to as bands), we propose using all bands simultaneously as predictor variables for the primary trait grain yield; results of several multi-environment maize (Aguate et al. in Crop Sci 57(5):1-8, 2017) and wheat (Montesinos-López et al. in Plant Methods 13(4):1-23, 2017) breeding trials indicated that using all bands produced better prediction accuracy than vegetation indices. However, until now, these prediction models have not accounted for the effects of genotype × environment (G × E) and band × environment (B × E) interactions while incorporating genomic or pedigree information. In this study, we propose Bayesian functional regression models that take into account all available bands, genomic or pedigree information, the main effects of lines and environments, as well as G × E and B × E interaction effects. The data set used comprises 976 wheat lines evaluated for grain yield in three environments (Drought, Irrigated and Reduced Irrigation). The reflectance data were measured in 250 discrete narrow bands ranging from 392 to 851 nm. The proposed Bayesian functional regression models were implemented using two types of basis functions: B-splines and Fourier. Results of the proposed Bayesian functional regression models, including all the wavelengths for predicting grain yield, were compared with results from conventional models with and without bands.
We observed that the models with B × E interaction terms were the most accurate, whereas the functional regression models (with B-spline and Fourier bases) and the conventional models performed similarly in terms of prediction accuracy. However, the functional regression models are more parsimonious and computationally more efficient because only 21 beta coefficients (the number of basis functions) need to be estimated, rather than the 250 regression coefficients for all bands. In this study, adding pedigree or genomic information did not increase prediction accuracy.
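The dimensionality reduction at the heart of the functional regression approach can be illustrated with a short sketch. This is not the authors' Bayesian model; it only shows, for synthetic spectra, how 250 band values per line can be projected onto 21 B-spline basis coefficients by least squares:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
wavelengths = np.linspace(392, 851, 250)      # nm, 250 narrow bands
n_basis, degree = 21, 3                        # 21 cubic basis functions

# Open-uniform (clamped) knot vector giving exactly n_basis cubic B-splines.
n_interior = n_basis - degree - 1
interior = np.linspace(392, 851, n_interior + 2)[1:-1]
knots = np.concatenate([[392] * (degree + 1), interior, [851] * (degree + 1)])

# Design matrix Phi: value of each basis function at each wavelength.
Phi = np.column_stack([
    BSpline.basis_element(knots[i:i + degree + 2], extrapolate=False)(wavelengths)
    for i in range(n_basis)
])
Phi = np.nan_to_num(Phi)                       # zero outside each element's support

spectra = rng.random((5, 250))                 # 5 synthetic lines x 250 bands
# Least-squares projection: 250 band values -> 21 basis coefficients per line.
coef, *_ = np.linalg.lstsq(Phi, spectra.T, rcond=None)
print(coef.shape)                              # (21, 5): 21 coefficients, not 250
```

Regression is then run on the 21 coefficients per line instead of the 250 raw bands, which is the parsimony gain the abstract describes.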
NASA Astrophysics Data System (ADS)
Shao, Yang; Campbell, James B.; Taff, Gregory N.; Zheng, Baojuan
2015-06-01
The Midwestern United States is one of the world's most important corn-producing regions. Monitoring and forecasting of corn yields in this intensive agricultural region are important activities to support food security, commodity markets, bioenergy industries, and formation of national policies. This study aims to develop forecasting models capable of mid-season prediction of county-level corn yields for the entire Midwestern United States. We used multi-temporal MODIS NDVI (normalized difference vegetation index) 16-day composite data as the primary input, with digital elevation model (DEM) and parameter-elevation relationships on independent slopes model (PRISM) climate data as additional inputs. The DEM and PRISM data, along with three types of cropland masks, were tested and compared to evaluate their impacts on model predictive accuracy. Our results suggested that general cropland masks (e.g., summer crop or cultivated crops) generated results similar to those of an annual corn-specific mask. Leave-one-year-out cross-validation resulted in an average R2 of 0.75 and an RMSE of 1.10 t/ha. Using a DEM as an additional model input slightly improved performance, while inclusion of PRISM climate data appeared not to be important for our regional corn-yield model. Furthermore, our model has potential for real-time/early prediction: corn yield estimates are available as early as late July, an improvement upon previous corn-yield prediction models. In addition to annual corn yield forecasting, we examined model uncertainties through spatial and temporal analysis of the model's predictive error distribution. The magnitude of predictive error (by county) appears to be associated with the spatial patterns of corn fields in the study area.
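The leave-one-year-out resampling scheme used above can be sketched as follows; the NDVI features, yields, and the plain linear model are synthetic placeholders, not the study's actual model or data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
years = np.repeat(np.arange(2001, 2011), 50)        # 10 years x 50 counties
X = rng.random((len(years), 5))                     # e.g. 5 seasonal NDVI composites
y = X @ np.array([2.0, 1.5, 0.5, 1.0, 0.8]) + rng.normal(0, 0.2, len(years))

preds = np.empty_like(y)
for yr in np.unique(years):
    test = years == yr                              # hold out one whole year
    model = LinearRegression().fit(X[~test], y[~test])
    preds[test] = model.predict(X[test])            # predict the held-out year

rmse = float(np.sqrt(np.mean((preds - y) ** 2)))
r2 = float(1 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2))
print(f"LOYO RMSE={rmse:.3f}, R2={r2:.3f}")
```

Holding out entire years, rather than random rows, is what makes the skill estimate honest for forecasting: the model never sees any county from the year it is asked to predict.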
Impact of capillary rise and recirculation on simulated crop yields
NASA Astrophysics Data System (ADS)
Kroes, Joop; Supit, Iwan; van Dam, Jos; van Walsum, Paul; Mulder, Martin
2018-05-01
Upward soil water flow is a vital supply of water to crops. The purpose of this study is to determine whether upward flow and recirculated percolation water can be quantified separately, and to determine the contribution of capillary rise and recirculated water to crop yield and groundwater recharge. We therefore performed impact analyses of various soil water flow regimes on grass, maize and potato yields in the Dutch delta. Flow regimes are characterized by soil composition and groundwater depth and derived from a national soil database. The intermittent occurrence of upward flow and its influence on crop growth are simulated with the combined SWAP-WOFOST model using various boundary conditions. Case studies and model experiments are used to illustrate the impact of upward flow on yield and crop growth. This impact is clearly present in situations with relatively shallow groundwater levels (85 % of the Netherlands), where capillary rise is a well-known source of upward flow; but the impact of upward flow is considerable even in free-draining situations, where recirculated percolation water is the flow source. To make this impact explicit we implemented a synthetic modelling option that stops upward flow from reaching the root zone without inhibiting percolation. Compared with the natural situation with shallow groundwater, this hypothetical moisture-stressed situation shows mean yield reductions for grassland, maize and potatoes of 26, 3 and 14 %, respectively, or about 3.7, 0.3 and 1.5 t dry matter per hectare. About half of the water behind these yield effects comes from recirculated percolation water, as occurs under free-drainage conditions; the other half comes from increased upward capillary rise. Soil water and crop growth modelling should consider both capillary rise from groundwater and recirculation of percolation water, as this improves the accuracy of yield simulations. It also improves the accuracy of the simulated groundwater recharge: neglecting these processes causes overestimates of 17 % for grassland and 46 % for potatoes, or 63 and 34 mm yr^-1, respectively.
White, Robin R; McGill, Tyler; Garnett, Rebecca; Patterson, Robert J; Hanigan, Mark D
2017-04-01
The objective of this work was to evaluate the precision and accuracy of the milk yield predictions made by the PREP10 model in comparison with those from the National Research Council (NRC) Nutrient Requirements of Dairy Cattle. The PREP10 model is a ration-balancing system that allows protein use efficiency to vary with production level. The model also has advanced AA supply and requirement calculations that enable estimation of AA-allowable milk (MilkAA) based on 10 essential AA. A literature data set of 374 treatment means was collected and used to quantitatively evaluate the estimates of protein-allowable milk (MilkMP) and energy-allowable milk yields from the NRC and PREP10 models. The PREP10 MilkAA prediction was also evaluated, as were both models' estimates of milk based on the most-limiting nutrient or the mean of the estimated milk yields. For most milk estimates compared, the PREP10 model had reduced root mean squared prediction error (RMSPE), improved concordance correlation coefficient, and reduced mean and slope bias in comparison with the NRC model. In particular, utilizing the variable protein use efficiency for milk production notably improved the estimate of MilkMP when compared with NRC. The PREP10 MilkMP estimate had an RMSPE of 18.2% (NRC = 25.7%), concordance correlation coefficient of 0.82 (NRC = 0.64), slope bias of -0.14 kg/kg of predicted milk (NRC = -0.34 kg/kg), and mean bias of -0.63 kg (NRC = -2.85 kg). The PREP10 estimate of MilkAA had slightly elevated RMSPE and mean and slope bias when compared with MilkMP, and was not advantageous, likely because AA use efficiency for milk was constant whereas MP use efficiency was variable. Future work evaluating variable AA use efficiencies for milk production is likely to improve the accuracy and precision of models of allowable milk. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Analysis of atomic force microscopy data for surface characterization using fuzzy logic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Mousa, Amjed, E-mail: aalmousa@vt.edu; Niemann, Darrell L.; Niemann, Devin J.
2011-07-15
In this paper we present a methodology to characterize surface nanostructures of thin films. The methodology identifies and isolates nanostructures using Atomic Force Microscopy (AFM) data and extracts quantitative information, such as their size and shape. The fuzzy logic based methodology relies on a Fuzzy Inference Engine (FIE) to classify data points as top, bottom, uphill, or downhill. The resulting data sets are then further processed to extract quantitative information about the nanostructures. In the present work we introduce a mechanism which can consistently distinguish crowded surfaces from those with sparsely distributed structures, and present an omni-directional search technique to improve the structural recognition accuracy. To demonstrate the effectiveness of our approach we present a case study which uses it to quantitatively identify particle sizes of two specimens, each with a unique gold nanoparticle size distribution. Research highlights: a fuzzy logic analysis technique capable of characterizing AFM images of thin films; applicable to different surfaces regardless of their densities; no manual adjustment of algorithm parameters required; quantitatively captures differences between surfaces; yields more realistic structure boundaries than other methods.
Birkemeyer, Ralf; Toelg, Ralph; Zeymer, Uwe; Wessely, Rainer; Jäckle, Sebastian; Hairedini, Bajram; Lübke, Mike; Aßfalg, Manfred; Jung, Werner
2012-12-01
Cardiogoniometry (CGM) is a spatio-temporal five-lead resting electrocardiographic method utilizing automated analysis. The purpose of this study was to determine the accuracy of CGM and of electrocardiography (ECG) for detecting myocardial ischaemia and/or lesions in comparison with perfusion cardiac magnetic resonance imaging (CMRI) and late gadolinium enhancement (LGE). Forty (n = 40) patients with suspected or known stable coronary artery disease were examined by CGM and resting ECG directly prior to CMRI, including adenosine stress perfusion (ASP) and LGE. The investigators visually reading the CMRI were blinded to the CGM and ECG results. Half of the patients (n = 20) had a normal CMRI, while the other half presented with either abnormal ASP and/or detectable LGE. Cardiogoniometry yielded an accuracy of 83% (sensitivity 70%) and ECG of 63% (sensitivity 35%) compared with CMRI. In this pilot study, CGM compared more favourably than ECG in detecting ischaemia and/or structural myocardial lesions on CMRI.
Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.
Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B
2011-02-01
This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and sorting error of 10%, compared with a sorting accuracy of 58% and sorting error of 10% for PCA combined with k-means.
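A minimal sketch of the described pipeline, assuming synthetic spike waveforms and using scikit-learn's SpectralEmbedding as the Laplacian-eigenmaps step (the paper's own implementation and data are not reproduced here):

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding   # Laplacian eigenmaps
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 32)
# Three synthetic spike "templates" standing in for distinct units.
templates = [np.sin(2 * np.pi * t),
             np.exp(-((t - 0.3) ** 2) / 0.01),
             -np.sin(2 * np.pi * t)]
labels_true = rng.integers(0, 3, 300)
spikes = np.array([templates[k] + rng.normal(0, 0.05, 32) for k in labels_true])

# Step 1: Laplacian-eigenmap embedding of the raw waveforms.
embedded = SpectralEmbedding(n_components=2, n_neighbors=10,
                             random_state=0).fit_transform(spikes)
# Step 2: k-means clustering in the low-dimensional embedded space.
labels_pred = KMeans(n_clusters=3, n_init=10,
                     random_state=0).fit_predict(embedded)
```

The embedding preserves the neighborhood structure of the waveforms, so well-separated units form compact clusters that k-means can label.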
Kannada character recognition system using neural network
NASA Astrophysics Data System (ADS)
Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.
2013-03-01
Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of handwritten documents into structured text form. There is not yet a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20×30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters are calculated and compared. The results show that the proposed system yields good recognition accuracy rates, comparable to those of other handwritten character recognition systems.
Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner
Yu, Chengyi; Chen, Xiaobo; Xi, Juntong
2017-01-01
A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844
Fat fraction bias correction using T1 estimates and flip angle mapping.
Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A
2014-01-01
To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Park, Dong-Kiu; Kim, Hyun-Sok; Seo, Moo-Young; Ju, Jae-Wuk; Kim, Young-Sik; Shahrjerdy, Mir; van Leest, Arno; Soco, Aileen; Miceli, Giacomo; Massier, Jennifer; McNamara, Elliott; Hinnen, Paul; Böcker, Paul; Oh, Nang-Lyeom; Jung, Sang-Hoon; Chai, Yvon; Lee, Jun-Hyung
2018-03-01
This paper demonstrates the improvement achieved using YieldStar S-1250D small-spot, high-NA, after-etch overlay in-device measurements in a DRAM HVM environment. It is demonstrated that in-device metrology (IDM) captures after-etch device fingerprints more accurately than the industry-standard CDSEM. Also, IDM measurements (acquiring both CD and overlay) can be executed significantly faster, increasing the wafer sampling density that is possible within a realistic metrology budget. The improvements to both speed and accuracy open the possibility of extended modeling and correction capabilities for control. The proof-book data of this paper show a 36% improvement of device overlay after switching to control using in-device metrology in a DRAM HVM environment.
Browning, Brian L.; Yu, Zhaoxia
2009-01-01
We present a novel method for simultaneous genotype calling and haplotype-phase inference. Our method employs the computationally efficient BEAGLE haplotype-frequency model, which can be applied to large-scale studies with millions of markers and thousands of samples. We compare genotype calls made with our method to genotype calls made with the BIRDSEED, CHIAMO, GenCall, and ILLUMINUS genotype-calling methods, using genotype data from the Illumina 550K and Affymetrix 500K arrays. We show that our method has higher genotype-call accuracy and yields fewer uncalled genotypes than competing methods. We perform single-marker analysis of data from the Wellcome Trust Case Control Consortium bipolar disorder and type 2 diabetes studies. For bipolar disorder, the genotype calls in the original study yield 25 markers with apparent false-positive association with bipolar disorder at a p < 10−7 significance level, whereas genotype calls made with our method yield no associated markers at this significance threshold. Conversely, for markers with replicated association with type 2 diabetes, there is good concordance between genotype calls used in the original study and calls made by our method. Results from single-marker and haplotypic analysis of our method's genotype calls for the bipolar disorder study indicate that our method is highly effective at eliminating genotyping artifacts that cause false-positive associations in genome-wide association studies. Our new genotype-calling methods are implemented in the BEAGLE and BEAGLECALL software packages. PMID:19931040
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.
2011-01-01
Purpose We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to −18% (Tc-99m or In-111) or −41% (I-125). The inclusion of scatter effects changed the above numbers to −12% (Tc-99m or In-111) and −21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346
NASA Astrophysics Data System (ADS)
Aricò, P.; Aloise, F.; Schettini, F.; Salinari, S.; Mattia, D.; Cincotti, F.
2014-06-01
Objective. Several ERP-based brain-computer interfaces (BCIs) that can be controlled even without eye movements (covert attention) have been recently proposed. However, when compared to similar systems based on overt attention, they displayed significantly lower accuracy. In the current interpretation, this is ascribed to the absence of the contribution of short-latency visual evoked potentials (VEPs) in the tasks performed in the covert attention modality. This study aims to investigate if this decrement (i) is fully explained by the lack of VEP contribution to the classification accuracy; (ii) correlates with lower temporal stability of the single-trial P300 potentials elicited in the covert attention modality. Approach. We evaluated the latency jitter of P300 evoked potentials in three BCI interfaces exploiting either overt or covert attention modalities in 20 healthy subjects. The effect of attention modality on the P300 jitter, and the relative contribution of VEPs and P300 jitter to the classification accuracy have been analyzed. Main results. The P300 jitter is higher when the BCI is controlled in covert attention. Classification accuracy negatively correlates with jitter. Even disregarding short-latency VEPs, overt-attention BCI yields better accuracy than covert. When the latency jitter is compensated offline, the difference between accuracies is not significant. Significance. The lower temporal stability of the P300 evoked potential generated during the tasks performed in covert attention modality should be regarded as the main contributing explanation of lower accuracy of covert-attention ERP-based BCIs.
Point cloud registration from local feature correspondences-Evaluation on challenging datasets.
Petricek, Tomas; Svoboda, Tomas
2017-01-01
Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
Multigrid methods in structural mechanics
NASA Technical Reports Server (NTRS)
Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.
1986-01-01
Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
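The multigrid machinery described above (smoothing, restriction, coarse-grid correction, prolongation) can be sketched with a two-grid cycle for a 1D Poisson problem. This is an illustrative toy under assumed discretization choices, not the Bernoulli-Euler beam finite element model of the paper; restriction is full weighting and prolongation is linear interpolation, the transpose pairing the abstract identifies as energy-conserving:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Gauss-Seidel relaxation sweeps for -u'' = f with Dirichlet ends."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = gauss_seidel(u, f, h, 2)                         # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((len(u) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])  # full-weighting restriction
    ec = gauss_seidel(np.zeros_like(rc), rc, 2 * h, 50)  # approximate coarse solve
    e = np.zeros_like(u)
    e[2:-1:2] = ec[1:-1]                                 # inject coarse correction
    e[1::2] = 0.5 * (e[:-1:2] + e[2::2])                 # linear prolongation
    u += e
    return gauss_seidel(u, f, h, 2)                      # post-smoothing

n = 65                                                   # fine-grid points
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi ** 2 * np.sin(np.pi * x)                       # exact solution sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
err = np.abs(u - np.sin(np.pi * x)).max()
```

The smoother kills oscillatory error on the fine grid; the coarse-grid correction removes the smooth error components that Gauss-Seidel alone damps very slowly, which is why the cycle converges in far fewer fine-grid sweeps than plain relaxation.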
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; Moore, Kathleen; Liu, Hong; Zheng, Bin
2017-03-01
Abdominal obesity is strongly associated with a number of diseases, and accurate assessment of the volumes of adipose tissue subtypes plays a significant role in predicting disease risk, diagnosis and prognosis. The objective of this study is to develop and evaluate a new computer-aided detection (CAD) scheme based on deep learning models to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) depicted on CT images. A dataset of CT images from 40 patients was retrospectively collected and equally divided into two independent groups (a training and a testing group). The new CAD scheme consists of two sequential convolutional neural networks (CNNs), namely a Selection-CNN and a Segmentation-CNN. The Selection-CNN was trained using 2,240 CT slices to automatically select CT slices belonging to abdomen areas, and the Segmentation-CNN was trained using 84,000 fat-pixel patches to classify fat pixels as belonging to SFA or VFA. Data from the testing group were then used to evaluate the performance of the optimized CAD scheme. Compared with manually labelled results, the classification accuracy of CT slice selection by the Selection-CNN was 95.8%, while the accuracy of fat-pixel segmentation by the Segmentation-CNN was 96.8%. This study therefore demonstrated the feasibility of using a deep learning based CAD scheme to recognize the human abdominal section from CT scans and segment SFA and VFA from CT slices with high agreement with subjective segmentation results.
Tilak, Gaurie; Tuncali, Kemal; Song, Sang-Eun; Tokuda, Junichi; Olubiyi, Olutayo; Fennessy, Fiona; Fedorov, Andriy; Penzkofer, Tobias; Tempany, Clare; Hata, Nobuhiko
2015-07-01
To demonstrate the utility of a robotic needle-guidance template device as compared to a manual template for in-bore 3T transperineal magnetic resonance imaging (MRI)-guided prostate biopsy. This two-arm mixed retrospective-prospective study included 99 cases of targeted transperineal prostate biopsies. The biopsy needles were aimed at suspicious foci noted on multiparametric 3T MRI using manual template (historical control) as compared with a robotic template. The following data were obtained: the accuracy of average and closest needle placement to the focus, histologic yield, percentage of cancer volume in positive core samples, complication rate, and time to complete the procedure. In all, 56 cases were performed using the manual template and 43 cases were performed using the robotic template. The mean accuracy of the best needle placement attempt was higher in the robotic group (2.39 mm) than the manual group (3.71 mm, P < 0.027). The mean core procedure time was shorter in the robotic (90.82 min) than the manual group (100.63 min, P < 0.030). Percentage of cancer volume in positive core samples was higher in the robotic group (P < 0.001). Cancer yields and complication rates were not statistically different between the two subgroups (P = 0.557 and P = 0.172, respectively). The robotic needle-guidance template helps accurate placement of biopsy needles in MRI-guided core biopsy of prostate cancer. © 2014 Wiley Periodicals, Inc.
A review of methods for monitoring streamflow for sustainable water resource management
NASA Astrophysics Data System (ADS)
Dobriyal, Pariva; Badola, Ruchi; Tuboi, Chongpi; Hussain, Syed Ainul
2017-10-01
Monitoring of streamflow may help to determine the optimum levels of its use for sustainable water management in the face of climate change. We reviewed available methods for monitoring streamflow on the basis of six criteria: applicability across different terrains and stream sizes, operational ease, time effectiveness, accuracy, potential environmental impact, and cost. On the basis of the strengths and weaknesses of each of the methods reviewed, we conclude that the timed volume method is apt for hilly terrain with smaller streams due to its operational ease and accuracy of results. Although comparatively expensive, the weir and flume methods are suitable for long-term studies of small hill streams, since once the structure is in place it yields accurate results. In flat terrain, the float method is best suited for smaller streams for its operational ease and cost effectiveness, whereas for larger streams particle image velocimetry may be used for its accuracy. Our review suggests that the selection of a method for monitoring streamflow may be based on the volume of the stream, the accuracy required, the accessibility of the terrain, and the financial and physical resources available.
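For the float method recommended above for small flat-terrain streams, a minimal sketch of the underlying arithmetic (all numbers are illustrative, and the 0.85 surface-to-mean velocity factor is a commonly assumed correction, not a value taken from this review):

```python
def float_method_discharge(reach_length_m, float_time_s,
                           width_m, mean_depth_m, k=0.85):
    """Estimate stream discharge (m^3/s) from a single float timing.

    Q = k * A * v_surface, where k corrects the surface velocity down to
    the mean column velocity and A assumes a rectangular cross-section.
    """
    v_surface = reach_length_m / float_time_s      # surface velocity, m/s
    area = width_m * mean_depth_m                  # cross-sectional area, m^2
    return k * area * v_surface

# Example: a 10 m reach travelled in 25 s by a 4 m wide, 0.5 m deep stream.
q = float_method_discharge(10.0, 25.0, 4.0, 0.5)
print(f"Estimated discharge: {q:.2f} m^3/s")       # 0.85 * 2.0 * 0.4 = 0.68
```

In practice several float runs are averaged and the cross-section is measured at multiple verticals, but the operational simplicity shown here is exactly why the review favours the method for small streams.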
NASA Astrophysics Data System (ADS)
Samsudin, Sarah Hanim; Shafri, Helmi Z. M.; Hamedianfar, Alireza
2016-04-01
Observations of roofing material degradation status are complicated by the heterogeneity of urban features. Although advanced classification techniques have been introduced to improve within-class impervious surface classifications, these techniques involve complex processing and high computation times. This study integrates field spectroscopy and satellite multispectral remote sensing data to generate degradation status maps of concrete and metal roofing materials. Field spectroscopy data were used as the basis for selecting suitable bands for spectral index development because of the limited number of multispectral bands. Mapping methods for roof degradation status were established for metal and concrete roofing materials by developing the normalized difference concrete condition index (NDCCI) and the normalized difference metal condition index (NDMCI). Results indicate that the accuracies achieved using the spectral indices are higher than those obtained using supervised pixel-based classification. The NDCCI generated an accuracy of 84.44%, whereas the support vector machine (SVM) approach yielded an accuracy of 73.06%. The NDMCI obtained an accuracy of 94.17% compared with 62.5% for the SVM approach. These findings support the suitability of the developed spectral index methods for determining roof degradation status from satellite observations in heterogeneous urban environments.
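Both indices follow the standard normalized-difference form; the specific band pair for each index is left open here as an assumption, since the abstract does not list the selected wavelengths:

```python
import numpy as np

def normalized_difference(b1, b2):
    """Generic normalized-difference index: (b1 - b2) / (b1 + b2), in [-1, 1]."""
    b1 = np.asarray(b1, dtype=float)
    b2 = np.asarray(b2, dtype=float)
    return (b1 - b2) / (b1 + b2)

# Illustrative reflectance values for two pixels in the two chosen bands:
idx = normalized_difference([0.45, 0.30], [0.20, 0.35])
```

Indices of this form are insensitive to overall brightness changes (both bands scale together), which is why a well-chosen band pair can separate degraded from intact material more cheaply than a full supervised classifier.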
Predicting the accuracy of ligand overlay methods with Random Forest models.
Nandigam, Ravi K; Evans, David A; Erickson, Jon A; Kim, Sangtae; Sutherland, Jeffrey J
2008-12-01
The accuracy of binding mode prediction using standard molecular overlay methods (ROCS, FlexS, Phase, and FieldCompare) is studied. Previous work has shown that simple decision tree modeling can be used to improve accuracy by selection of the best overlay template. This concept is extended to the use of Random Forest (RF) modeling for template and algorithm selection. An extensive data set of 815 ligand-bound X-ray structures representing 5 gene families was used to generate ca. 70,000 overlays with the four programs. RF models, trained using standard measures of ligand and protein similarity and Lipinski-related descriptors, are used for automatically selecting the reference ligand and overlay method that maximize the probability of reproducing the overlay deduced from X-ray structures (i.e., using RMSD ≤ 2 Å as the criterion for success). RF model scores are highly predictive of overlay accuracy, and their use in template and method selection produces correct overlays in 57% of cases for 349 overlay ligands not used for training the RF models. The inclusion in the models of protein sequence similarity enables the use of templates bound to related protein structures, yielding useful results even for proteins having no available X-ray structures.
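The selection idea can be sketched as follows; the descriptors, success labels, and model settings are synthetic stand-ins, not the paper's trained models or data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
# Synthetic candidate descriptors, standing in for e.g. ligand similarity,
# protein sequence similarity, and Lipinski-related properties.
X = rng.random((500, 4))
# Synthetic "success" label: did this (template, method) pair reproduce the
# X-ray overlay within the RMSD cutoff?
y = (0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.1, 500)) > 0.5

# Train an RF to score candidates, then rank held-out candidates by the
# predicted probability of a successful overlay.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
scores = rf.predict_proba(X[400:])[:, 1]     # success probability per candidate
best = int(np.argmax(scores))                # candidate ranked most likely correct
```

For a new ligand, each available (template, method) combination would be scored this way, and the top-ranked combination chosen, which is the automated selection step the abstract credits with the 57% success rate.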
Empirical trials of plant field guides.
Hawthorne, W D; Cable, S; Marshall, C A M
2014-06-01
We designed 3 image-based field guides to tropical forest plant species in Ghana, Grenada, and Cameroon and tested them with 1095 local residents and 20 botanists in the United Kingdom. We compared users' identification accuracy with different image formats, including drawings, specimen photos, living plant photos, and paintings. We compared users' accuracy with the guides to their accuracy with only their prior knowledge of the flora. We asked respondents to score each format for usability, beauty, and how much they would pay for it. Prior knowledge of plant names was generally low (<22%). With a few exceptions, identification accuracy did not differ significantly among image formats. In Cameroon, users identifying sterile Cola species achieved 46-56% accuracy across formats; identification was most accurate with living plant photos. Botanists in the United Kingdom accurately identified 82-93% of the same Cameroonian species; identification was most accurate with specimens. In Grenada, users accurately identified 74-82% of plants; drawings yielded significantly less accurate identifications than paintings and photos of living plants. In Ghana, users accurately identified 85% of plants. Digital color photos of living plants ranked high for beauty, usability, and what users would pay. Black and white drawings ranked low. Our results show the potential and limitations of the use of field guides and nonspecialists to identify plants, for example, in conservation applications. 
We recommend authors of plant field guides use the cheapest or easiest illustration format because image type had limited bearing on accuracy; match the type of illustration to the most likely use of the guide for slight improvements in accuracy; avoid black and white formats unless the audience is experienced at interpreting illustrations or keeping costs low is imperative; discourage false-positive identifications, which were common; and encourage users to ask an expert or use a herbarium for groups that are difficult to identify. © 2014 Society for Conservation Biology.
Ultrasound-guided synovial Tru-cut biopsy: indications, technique, and outcome in 111 cases.
Sitt, Jacqueline C M; Griffith, James F; Lai, Fernand M; Hui, Mamie; Chiu, K H; Lee, Ryan K L; Ng, Alex W H; Leung, Jason
2017-05-01
To investigate the diagnostic performance of ultrasound-guided synovial biopsy. Clinical notes, pathology and microbiology reports, ultrasound and other imaging studies of 100 patients who underwent 111 ultrasound-guided synovial biopsies were reviewed. Biopsies were compared with the final clinical diagnosis established after synovectomy (n = 43) or clinical/imaging follow-up (n = 57) (mean 30 months). Other than a single vasovagal episode, no complication of synovial biopsy was encountered. One hundred and seven (96 %) of the 111 biopsies yielded synovium histologically. Pathology ± microbiology findings for these 107 conclusive biopsies comprised synovial tumour (n = 30, 28 %), synovial infection (n = 18, 17 %), synovial inflammation (n = 45, 42 %), including gouty arthritis (n = 3), and no abnormality (n = 14, 13 %). The accuracy, sensitivity, and specificity of synovial biopsy was 99 %, 97 %, and 100 % for synovial tumour; 100 %, 100 %, and 100 % for native joint infection; and 78 %, 45 %, and 100 % for prosthetic joint infection. False-negative synovial biopsy did not seem to be related to antibiotic therapy. Ultrasound-guided Tru-cut synovial biopsy is a safe and reliable technique with a high diagnostic yield for diagnosing synovial tumour and also, most likely, for joint infection. Regarding joint infection, synovial biopsy of native joints seems to have a higher diagnostic yield than that for infected prosthetic joints. • Ultrasound-guided Tru-cut synovial biopsy has high accuracy (99 %) for diagnosing synovial tumour. • It has good accuracy, sensitivity, and high specificity for diagnosis of joint infection. • Synovial biopsy of native joints works better than biopsy of prosthetic joints. • A negative synovial biopsy culture from a native joint largely excludes septic arthritis. • Ultrasound-guided Tru-cut synovial biopsy is a safe and well-tolerated procedure.
Cannell, R C; Tatum, J D; Belk, K E; Wise, J W; Clayton, R P; Smith, G C
1999-11-01
An improved ability to quantify differences in the fabrication yields of beef carcasses would facilitate the application of value-based marketing. This study was conducted to evaluate the ability of the Dual-Component Australian VIASCAN to 1) predict fabricated beef subprimal yields as a percentage of carcass weight at each of three fat-trim levels and 2) augment USDA yield grading, thereby improving accuracy of grade placement. Steer and heifer carcasses (n = 240) were evaluated using VIASCAN, as well as by USDA expert and online graders, before fabrication of carcasses to each of three fat-trim levels. Expert yield grade (YG), online YG, VIASCAN estimates, and VIASCAN estimated ribeye area used to augment actual and expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, and hot carcass weight), respectively, 1) accounted for 51, 37, 46, and 55% of the variation in fabricated yields of commodity-trimmed subprimals, 2) accounted for 74, 54, 66, and 75% of the variation in fabricated yields of closely trimmed subprimals, and 3) accounted for 74, 54, 71, and 75% of the variation in fabricated yields of very closely trimmed subprimals. The VIASCAN system predicted fabrication yields more accurately than current online yield grading and, when certain VIASCAN-measured traits were combined with some USDA yield grade factors in an augmentation system, the accuracy of cutability prediction was improved, at packing plant line speeds, to a level matching that of expert graders applying grades at a comfortable rate.
Field-scale experiments reveal persistent yield gaps in low-input and organic cropping systems
Kravchenko, Alexandra N.; Snapp, Sieglinde S.; Robertson, G. Philip
2017-01-01
Knowledge of production-system performance is largely based on observations at the experimental plot scale. Although yield gaps between plot-scale and field-scale research are widely acknowledged, their extent and persistence have not been experimentally examined in a systematic manner. At a site in southwest Michigan, we conducted a 6-y experiment to test the accuracy with which plot-scale crop-yield results can inform field-scale conclusions. We compared conventional versus alternative, that is, reduced-input and biologically based–organic, management practices for a corn–soybean–wheat rotation in a randomized complete block-design experiment, using 27 commercial-size agricultural fields. Nearby plot-scale experiments (0.02-ha to 1.0-ha plots) provided a comparison of plot versus field performance. We found that plot-scale yields well matched field-scale yields for conventional management but not for alternative systems. For all three crops, at the plot scale, reduced-input and conventional managements produced similar yields; at the field scale, reduced-input yields were lower than conventional. For soybeans at the plot scale, biological and conventional managements produced similar yields; at the field scale, biological yielded less than conventional. For corn, biological management produced lower yields than conventional in both plot- and field-scale experiments. Wheat yields appeared to be less affected by the experimental scale than corn and soybean. Conventional management was more resilient to field-scale challenges than alternative practices, which were more dependent on timely management interventions; in particular, mechanical weed control. Results underscore the need for much wider adoption of field-scale experimentation when assessing new technologies and production-system performance, especially as related to closing yield gaps in organic farming and in low-resourced systems typical of much of the developing world. PMID:28096409
GStream: Improving SNP and CNV Coverage on Genome-Wide Association Studies
Alonso, Arnald; Marsal, Sara; Tortosa, Raül; Canela-Xandri, Oriol; Julià, Antonio
2013-01-01
We present GStream, a method that combines genome-wide SNP and CNV genotyping in the Illumina microarray platform with unprecedented accuracy. This new method outperforms previous well-established SNP genotyping software. More importantly, the CNV calling algorithm of GStream dramatically improves the results obtained by previous state-of-the-art methods and yields an accuracy that is close to that obtained by purely CNV-oriented technologies like Comparative Genomic Hybridization (CGH). We demonstrate the superior performance of GStream using microarray data generated from HapMap samples. Using the reference CNV calls generated by the 1000 Genomes Project (1KGP) and well-known studies on whole genome CNV characterization based either on CGH or genotyping microarray technologies, we show that GStream can increase the number of reliably detected variants up to 25% compared to previously developed methods. Furthermore, the increased genome coverage provided by GStream allows the discovery of CNVs in close linkage disequilibrium with SNPs, previously associated with disease risk in published Genome-Wide Association Studies (GWAS). These results could provide important insights into the biological mechanism underlying the detected disease risk association. With GStream, large-scale GWAS will not only benefit from the combined genotyping of SNPs and CNVs at an unprecedented accuracy, but will also take advantage of the computational efficiency of the method. PMID:23844243
3D prostate MR-TRUS non-rigid registration using dual optimization with volume-preserving constraint
NASA Astrophysics Data System (ADS)
Qiu, Wu; Yuan, Jing; Fenster, Aaron
2016-03-01
We introduce an efficient and novel convex optimization-based approach to the challenging non-rigid registration of 3D prostate magnetic resonance (MR) and transrectal ultrasound (TRUS) images, which incorporates a new volume preserving constraint to essentially improve the accuracy of targeting suspicious regions during the 3D TRUS guided prostate biopsy. In particular, we propose a fast sequential convex optimization scheme to efficiently minimize the employed highly nonlinear image fidelity function using the robust multi-channel modality independent neighborhood descriptor (MIND) across the two modalities of MR and TRUS. The registration accuracy was evaluated using 10 patient images by calculating the target registration error (TRE) using manually identified corresponding intrinsic fiducials in the whole prostate gland. We also compared the MR and TRUS manually segmented prostate surfaces in the registered images in terms of the Dice similarity coefficient (DSC), mean absolute surface distance (MAD), and maximum absolute surface distance (MAXD). Experimental results showed that the proposed method with the introduced volume-preserving prior significantly improves the registration accuracy compared to the method without the volume-preserving constraint, yielding an overall mean TRE of 2.0 ± 0.7 mm, an average DSC of 86.5 ± 3.5%, MAD of 1.4 ± 0.6 mm, and MAXD of 6.5 ± 3.5 mm.
Deuterium-tritium neutron yield measurements with the 4.5 m neutron-time-of-flight detectors at NIF.
Moran, M J; Bond, E J; Clancy, T J; Eckart, M J; Khater, H Y; Glebov, V Yu
2012-10-01
The first several campaigns of laser fusion experiments at the National Ignition Facility (NIF) included a family of high-sensitivity scintillator/photodetector neutron-time-of-flight (nTOF) detectors for measuring deuterium-deuterium (DD) and DT neutron yields. The detectors provided consistent neutron yield (Y_n) measurements from below 10^9 (DD) to nearly 10^15 (DT). The detectors initially demonstrated detector-to-detector Y_n precisions better than 5%, but lacked in situ absolute calibrations. Recent experiments at NIF now have provided in situ DT yield calibration data that establish the absolute sensitivity of the 4.5 m differential tissue harmonic imaging (DTHI) detector with an accuracy of ±10% and precision of ±1%. The 4.5 m nTOF calibration measurements also have helped to establish improved detector impulse response functions and data analysis methods, which have contributed to improving the accuracy of the Y_n measurements. These advances have also helped to extend the usefulness of nTOF measurements of ion temperature and downscattered neutron ratio (neutron yield at 10-12 MeV divided by yield at 13-15 MeV) with other nTOF detectors.
Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei
2014-09-01
In this paper, the back-propagation (BP) algorithm has been used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection of observed motion sensor events is discussed and tested. The human activity recognition performance of the neural network trained with the BP algorithm has then been evaluated and compared with that of other probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy. The selection of unsuitable feature datasets increases the computational complexity and degrades the activity recognition accuracy. Furthermore, the neural network trained with the BP algorithm achieves relatively better human activity recognition performance than the NB classifier and HMM. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
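A minimal sketch of back-propagation training for a small feed-forward network, using synthetic stand-ins for the sensor-event features (neither the paper's data nor its architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensor-event feature vectors (rows) with binary activity
# labels; in the paper these would come from smart-home motion sensors.
X = rng.random((100, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, sigmoid activations, plain gradient descent (BP).
W1 = rng.normal(0, 0.5, (4, 8))
W2 = rng.normal(0, 0.5, (8, 1))
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1)                  # forward pass, hidden layer
    out = sigmoid(h @ W2)                # forward pass, output layer
    d_out = (out - y) * out * (1 - out)  # backpropagated output error
    d_h = (d_out @ W2.T) * h * (1 - h)   # error pushed to hidden layer
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

# Training-set accuracy from the last forward pass
accuracy = float(np.mean((out > 0.5) == (y > 0.5)))
```

Bias terms and momentum are omitted for brevity; a practical implementation would also hold out a test set, as the paper's comparison against NB and HMM requires.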
On the accuracy of adults' auditory perception of normophonic and dysphonic children's personality.
Verduyckt, Ingrid; Remacle, Marc; Morsomme, Dominique
2015-10-01
We investigated the accuracy of auditory inferences of personality of Belgian children with vocal fold nodules (VFN). External judges (n = 57) were asked to infer the personality of normophonic (NP) children and children with VFN (n = 10) on the basis of vowels and sentences. The auditory inferred profiles were compared to the actual personality of NP and VFN children. Positive and partly accurate inferences of VFN children's personality were made on the basis of connected speech, while sustained vowels yielded negative and inaccurate inferences of personality traits of children with VFN. Dysphonic voice quality, as defined by the overall severity of vocal abnormality, conveyed inaccurate and low degrees of extraversion. This effect was counterbalanced in connected speech by faster speaking rate that accurately conveyed higher degrees of extraversion, a characteristic trait of VFN children's actual personality.
A High-Order Direct Solver for Helmholtz Equations with Neumann Boundary Conditions
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Zhuang, Yu
1997-01-01
In this study, a compact finite-difference discretization is first developed for Helmholtz equations on rectangular domains. Special treatments are then introduced for Neumann and Neumann-Dirichlet boundary conditions to achieve accuracy and separability. Finally, a Fast Fourier Transform (FFT) based technique is used to yield a fast direct solver. Analytical and experimental results show that this newly proposed solver is comparable to the conventional second-order elliptic solver when accuracy is not a primary concern, and is significantly faster than the conventional solver when a highly accurate solution is required. In addition, this newly proposed fourth-order Helmholtz solver is parallel in nature and is readily usable on parallel and distributed computers. The compact scheme introduced in this study can likely be extended to sixth-order accurate algorithms and to more general elliptic equations.
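The FFT-based direct-solver idea can be illustrated in one dimension with periodic boundary conditions (the paper's Neumann treatment on 2-D rectangles would use cosine transforms instead); the parameter values below are assumptions chosen so the operator is invertible:

```python
import numpy as np

# Solve u'' + lam * u = f on [0, 2*pi) with periodic boundary
# conditions: the Fourier transform diagonalizes the operator, so the
# "solve" is a pointwise division in frequency space.
n = 64
lam = -5.0                              # Helmholtz parameter (assumed)
x = 2 * np.pi * np.arange(n) / n
f = np.sin(3 * x)                       # right-hand side

xi = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers
f_hat = np.fft.fft(f)
u_hat = f_hat / (lam - xi**2)           # diagonalized operator inverse
u = np.fft.ifft(u_hat).real

# Analytic check: for this f, u = sin(3x) / (lam - 9)
exact = np.sin(3 * x) / (lam - 9)
err = np.max(np.abs(u - exact))
```

Because the transform costs O(n log n) and the division is O(n), the whole solve is O(n log n), which is the source of the speed advantage claimed over conventional elliptic solvers.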
Incorporating conditional random fields and active learning to improve sentiment identification.
Zhang, Kunpeng; Xie, Yusheng; Yang, Yi; Sun, Aaron; Liu, Hengchang; Choudhary, Alok
2014-10-01
Many machine learning, statistical, and computational linguistic methods have been developed to identify sentiment of sentences in documents, yielding promising results. However, most state-of-the-art methods focus on individual sentences and ignore the impact of context on the meaning of a sentence. In this paper, we propose a method based on conditional random fields to incorporate sentence structure and context information in addition to syntactic information for improving sentiment identification. We also investigate how human interaction affects the accuracy of sentiment labeling using limited training data. We propose and evaluate two different active learning strategies for labeling sentiment data. Our experiments with the proposed approach demonstrate a 5%-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Linear combination methods to improve diagnostic/prognostic accuracy on future observations
Kang, Le; Liu, Aiyi; Tian, Lili
2014-01-01
Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use the leave-one-pair-out cross-validation method, instead of the re-substitution method, which is overoptimistic and hence might lead to wrong conclusions, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under the receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
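The basic building block of all these methods, the empirical (Mann-Whitney) AUC of a linear biomarker combination, can be sketched on synthetic two-marker data; the weight vector below is illustrative, not an optimum from the article:

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve."""
    s_pos = np.asarray(scores_pos, dtype=float)
    s_neg = np.asarray(scores_neg, dtype=float)
    greater = (s_pos[:, None] > s_neg[None, :]).sum()
    ties = (s_pos[:, None] == s_neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(s_pos) * len(s_neg))

# Two hypothetical biomarkers per subject, combined linearly.
rng = np.random.default_rng(2)
pos = rng.normal([1.0, 0.8], 1.0, (50, 2))   # diseased group
neg = rng.normal([0.0, 0.0], 1.0, (50, 2))   # healthy group
w = np.array([0.6, 0.4])                     # illustrative weights
auc = empirical_auc(pos @ w, neg @ w)
```

An optimization method such as the stepwise approach in the article would search over `w`; the leave-one-pair-out scheme would recompute this AUC with each positive-negative pair held out to avoid the re-substitution optimism the authors criticize.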
Structured light system calibration method with optimal fringe angle.
Li, Beiwen; Zhang, Song
2014-11-20
For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish one-to-one mapping between camera points and projector points. However, for a well-designed system, either horizontal or vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy up to 38% compared to the conventional calibration method with a calibration volume of 300(H) mm×250(W) mm×500(D) mm.
Diffusion-like recommendation with enhanced similarity of objects
NASA Astrophysics Data System (ADS)
An, Ya-Hui; Dong, Qiang; Sun, Chong-Jing; Nie, Da-Cheng; Fu, Yan
2016-11-01
In the last decade, diversity and accuracy have been regarded as two important measures in evaluating a recommendation model. However, a clear concern is that a model focusing excessively on one measure will put the other one at risk, thus it is not easy to greatly improve diversity and accuracy simultaneously. In this paper, we propose to enhance the Resource-Allocation (RA) similarity in resource transfer equations of diffusion-like models, by giving a tunable exponent to the RA similarity, and traversing the value of this exponent to achieve the optimal recommendation results. In this way, we can increase the recommendation scores (allocated resource) of many unpopular objects. Experiments on three benchmark data sets, MovieLens, Netflix and RateYourMusic show that the modified models can yield remarkable performance improvement compared with the original ones.
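A toy sketch of the enhanced-similarity idea on a small user-object adoption matrix, with an assumed exponent value (the paper's data and exact resource transfer equations are not reproduced here):

```python
import numpy as np

# User-object adoption matrix A (rows: users, columns: objects);
# a 1 means the user collected the object. Toy data, not from the paper.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]], dtype=float)

k_user = A.sum(axis=1)                   # user degrees
k_obj = A.sum(axis=0)                    # object degrees

# Resource-allocation transfer: resource spreads object -> users ->
# objects, divided by the degree at each step; the tunable exponent
# theta (assumed value) reweights the similarity toward unpopular items.
theta = 0.8
S = A.T @ (A / k_user[:, None])          # raw resource transfer
W = (S / k_obj[None, :]) ** theta        # enhanced similarity

# Scores for user 0: spread that user's collected objects through W,
# then recommend the best object not yet collected.
u = A[0]
scores = W @ u
scores[u > 0] = -np.inf                  # mask already-collected items
recommended = int(np.argmax(scores))
```

In the paper, theta is traversed over a range of values and the setting that best balances accuracy and diversity on held-out data is kept.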
Accuracy of genomic selection in European maize elite breeding populations.
Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C
2012-03-01
Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data of six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping in unreplicated field trials at 3-4 locations. As up to three generations per year are feasible for maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
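The cross-validation procedure can be sketched with ridge regression as a simple analogue of random regression BLUP, on synthetic genotypes and phenotypes (not the study's data; marker count and effect sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the maize data: 200 testcross lines scored at
# 100 SNPs, with a sparse additive genetic signal plus noise.
n, p = 200, 100
M = rng.integers(0, 3, (n, p)).astype(float)     # SNP genotypes 0/1/2
beta = rng.normal(0, 1, p) * (rng.random(p) < 0.1)
y = M @ beta + rng.normal(0, 1.0, n)

def ridge_predict(Xtr, ytr, Xte, lam=10.0):
    # Ridge regression: a simple analogue of RR-BLUP marker effects.
    d = Xtr.shape[1]
    b = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)
    return Xte @ b

# Fivefold cross-validation; prediction accuracy is the correlation of
# predicted and observed phenotypes, as in the abstract.
idx = rng.permutation(n)
accs = []
for fold in np.array_split(idx, 5):
    mask = np.ones(n, dtype=bool)
    mask[fold] = False
    pred = ridge_predict(M[mask], y[mask], M[~mask])
    accs.append(np.corrcoef(pred, y[~mask])[0, 1])
accuracy = float(np.mean(accs))
```

In RR-BLUP the shrinkage parameter is tied to the ratio of residual to marker variance; the fixed `lam` here stands in for that estimate.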
Puippe, Gilbert D; Alkadhi, Hatem; Hunziker, Roger; Nanz, Daniel; Pfammatter, Thomas; Baumueller, Stephan
2012-08-01
To prospectively evaluate the performance of unenhanced respiratory-gated magnetization-prepared 3D-SSFP inversion recovery MRA (unenhanced-MRA) to depict hepatic and visceral artery anatomy and variants in comparison to contrast-enhanced dynamic gradient-echo MRI (CE-MRI) and to digital subtraction angiography (DSA). Eighty-four patients (55.6±12.4 years) were imaged with CE-MRI (TR/TE 3.5/1.7ms, TI 1.7ms, flip-angle 15°) and unenhanced-MRA (TR/TE 4.4/2.2ms, TI 200ms, flip-angle 90°). Two independent readers assessed image quality of hepatic and visceral arteries on a 4-point-scale. Vessel contrast was measured by a third reader. In 28 patients arterial anatomy was compared to DSA. Interobserver agreement regarding image quality was good for CE-MRI (κ=0.77) and excellent for unenhanced-MRA (κ=0.83). Unenhanced-MRA yielded diagnostic image quality in 71.6% of all vessels, whereas CE-MRI provided diagnostic image quality in 90.6% (p<0.001). Vessel-based image quality was significantly superior for all vessels at CE-MRI compared to unenhanced-MRA (p<0.01). Vessel contrast was similar among both sequences (p=0.15). Compared to DSA, CE-MRI and unenhanced-MRA yielded equal accuracy of 92.9-96.4% for depiction of hepatic and visceral artery variants (p=0.93). Unenhanced-MRA provides diagnostic image quality in 72% of hepatic and visceral arteries with no significant difference in vessel contrast and similar accuracy to CE-MRI for depiction of hepatic and visceral anatomy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin
2013-12-01
Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, SFF does not guarantee higher accuracy in extracting image information in all circumstances. Multi-range spectral feature fitting (MRSFF) in ENVI software allows the user to focus on the spectral features of interest to yield better performance; thus, spectral wavelength ranges and their corresponding weights must be determined. The purpose of this article is to demonstrate the performance of MRSFF in oilseed rape planting area extraction. A practical method for defining the weighted values, the variance coefficient weight method, was proposed as the criterion. Oilseed rape field canopy spectra from the whole growth stage were collected prior to investigating its phenological varieties; oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples for analyzing the oilseed rape field. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated with respect to field-measured spectra from the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the entirety of the spectral profile. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). The MRSFF yielded a robust, convincing result and, therefore, may further the use of hyperspectral imagery in precision agriculture.
Experimental Techniques Verified for Determining Yield and Flow Surfaces
NASA Technical Reports Server (NTRS)
Lerch, Brad A.; Ellis, Rod; Lissenden, Cliff J.
1998-01-01
Structural components in aircraft engines are subjected to multiaxial loads when in service. For such components, life prediction methodologies depend on the accuracy of the constitutive models that determine the elastic and inelastic portions of a loading cycle. A threshold surface (such as a yield surface) is customarily used to differentiate between reversible and irreversible flow. For elastoplastic materials, a yield surface can be used to delimit the elastic region in a given stress space. The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory, but at elevated temperatures, material response can be highly time dependent. Thus, viscoplastic theories have been developed to account for this time dependency. Since the key to many of these theories is experimental validation, the objective of this work (refs. 1 and 2) at the NASA Lewis Research Center was to verify that current laboratory techniques and equipment are sufficient to determine flow surfaces at elevated temperatures. By probing many times in the axial-torsional stress space, we could define the yield and flow surfaces. A small-offset definition of yield (10 με) was used to delineate the boundary between reversible and irreversible behavior so that the material state remained essentially unchanged and multiple probes could be done on the same specimen. The strain was measured with an off-the-shelf multiaxial extensometer that could measure the axial and torsional strains over a wide range of temperatures. The accuracy and resolution of this extensometer were verified by comparing its data with strain gauge data at room temperature. The extensometer was found to have sufficient resolution for these experiments. In addition, the amount of crosstalk (i.e., the accumulation of apparent strain in one direction when strain in the other direction is applied) was found to be negligible.
Tubular specimens were induction heated to determine the flow surfaces at elevated temperatures. The heating system induced a large amount of noise in the data. By reducing thermal fluctuations and using appropriate data averaging schemes, we could render the noise inconsequential. Thus, accurate and reproducible flow surfaces (see the figure) could be obtained.
Steidl, Matthew; Zimmern, Philippe
2013-01-01
We determined whether a custom computer program can improve the extraction and accuracy of key outcome measures from progress notes in an electronic medical record compared to a traditional data recording system for incontinence and prolapse repair procedures. Following institutional review board approval, progress notes were exported from the Epic electronic medical record system for outcome measure extraction by a custom computer program. The extracted data (D1) were compared against a manually maintained outcome measures database (D2). This work took place in 2 phases. During the first phase, volatile data such as questionnaires and standardized physical examination findings using the POP-Q (pelvic organ prolapse quantification) system were extracted from existing progress notes. The second phase used a progress note template incorporating key outcome measures to evaluate improvement in data accuracy and extraction rates. Phase 1 compared 6,625 individual outcome measures from 316 patients in D2 to 3,534 outcome measures extracted from progress notes in D1, resulting in an extraction rate of 53.3%. A subset of 3,763 outcome measures from D1 was created by excluding data that did not exist in the extraction, yielding an accuracy rate of 93.9%. With the use of the template in phase 2, the extraction rate improved to 91.9% (273 of 297) and the accuracy rate improved to 100% (273 of 273). In the field of incontinence and prolapse, the disciplined use of an electronic medical record template containing a preestablished set of key outcome measures can provide the ideal interface between required documentation and clinical research. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Riegel, Adam C; Chen, Yu; Kapur, Ajay; Apicello, Laura; Kuruvilla, Abraham; Rea, Anthony J; Jamshidi, Abolghassem; Potters, Louis
Optically stimulated luminescent dosimeters (OSLDs) are utilized for in vivo dosimetry (IVD) of modern radiation therapy techniques such as intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). With these techniques, the dosimetric precision achieved with conventional techniques may not be attainable. In this work, we measured accuracy and precision for a large sample of clinical OSLD-based IVD measurements. Weekly IVD measurements were collected from 4 linear accelerators for 2 years and were expressed as percent differences from planned doses. After outlier analysis, 10,224 measurements were grouped in the following way: overall, modality (photons, electrons), treatment technique (3-dimensional [3D] conformal, field-in-field intensity modulation, inverse-planned IMRT, and VMAT), placement location (gantry angle, cardinality, and central axis positioning), and anatomical site (prostate, breast, head and neck, pelvis, lung, rectum and anus, brain, abdomen, esophagus, and bladder). Distributions were modeled via a Gaussian function. Fitting was performed with least squares, and goodness-of-fit was assessed with the coefficient of determination. Model means (μ) and standard deviations (σ) were calculated. Sample means and variances were compared for statistical significance by analysis of variance and the Levene tests (α = 0.05). Overall, μ ± σ was 0.3 ± 10.3%. Precision for electron measurements (6.9%) was significantly better than for photons (10.5%). Precision varied significantly among treatment techniques (P < .0001) with field-in-field lowest (σ = 7.2%) and IMRT and VMAT highest (σ = 11.9% and 13.4%, respectively). Treatment site models with goodness-of-fit greater than 0.90 (6 of 10) yielded accuracy within ±3%, except for head and neck (μ = -3.7%). Precision varied with treatment site (range, 7.3%-13.0%), with breast and head and neck yielding the best and worst precision, respectively.
Placement on the central axis of cardinal gantry angles yielded more precise results (σ = 8.5%) compared with other locations (range, 10.5%-11.4%). Accuracy of ±3% was achievable. Precision ranged from 6.9% to 13.4% depending on modality, technique, and treatment site. Simple, standardized locations may improve IVD precision. These findings may aid development of patient-specific tolerances for OSLD-based IVD. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
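The distribution modeling above can be illustrated with a small sketch. As a simplification, this toy version estimates μ and σ from sample moments after a crude ±3σ outlier pass, rather than the least-squares histogram fit used in the study; the simulated readings are illustrative only, not the clinical data.

```python
import random
import statistics

def gaussian_summary(percent_diffs, outlier_sigma=3.0):
    """Summarize IVD percent differences with a Gaussian model.

    A simple stand-in for the study's least-squares fit: discard points
    beyond `outlier_sigma` standard deviations of the initial estimate
    (the paper's exact outlier rule is not specified here), then report
    the sample mean and standard deviation of the remainder.
    """
    mu0 = statistics.mean(percent_diffs)
    sd0 = statistics.stdev(percent_diffs)
    kept = [d for d in percent_diffs if abs(d - mu0) <= outlier_sigma * sd0]
    return statistics.mean(kept), statistics.stdev(kept)

# Simulated weekly readings with ~0.3% offset and 10.3% spread,
# mimicking the overall mu +/- sigma reported above.
random.seed(0)
diffs = [random.gauss(0.3, 10.3) for _ in range(10000)]
mu, sigma = gaussian_summary(diffs)
```

With this sample size the recovered μ and σ land close to the simulated 0.3% and 10.3%, which is the sense in which the fitted model parameters summarize the measurement distribution.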
Daily monitoring of 30 m crop condition over complex agricultural landscapes
NASA Astrophysics Data System (ADS)
Sun, L.; Gao, F.; Xie, D.; Anderson, M. C.; Yang, Y.
2017-12-01
Crop progress provides information necessary for efficient irrigation and for scheduling fertilization and harvesting operations at the optimal times for achieving higher yields. In the United States, crop progress reports are released online weekly by the US Department of Agriculture (USDA) National Agricultural Statistics Service (NASS). However, the ground data collection is time consuming and subjective, and these reports are provided only at the district (multiple counties) or state level. Remote sensing technologies have been widely used to map crop conditions, to extract crop phenology, and to predict crop yield. However, current satellite-based sensors make it difficult to acquire both high spatial resolution and frequent coverage. For example, Landsat satellites capture 30 m resolution images, but their long revisit cycle and cloud contamination limit their use in detecting rapid surface changes. On the other hand, MODIS provides daily observations, but at coarse spatial resolutions ranging from 250 to 1000 m. In recent years, multi-satellite data fusion techniques such as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) have been used to combine the spatial resolution of Landsat with the temporal frequency of MODIS. Such synthetic datasets have been found to provide more valuable information than images acquired from any single sensor. However, the accuracy of STARFM depends on the heterogeneity of the landscape and on the availability of clear MODIS-Landsat image pairs. In this study, a new fusion method was developed that uses crop vegetation index (VI) time series extracted from "pure" MODIS pixels, together with Landsat overpass images, to generate daily 30 m VI for crops. The fusion accuracy was validated by comparison against the original Landsat images. Results show that the relative error is around 3-5% in the non-rapid growing period and around 6-8% in the rapid growing period.
This accuracy is much better than that of STARFM, which is 4-9% in the non-rapid growing period and 10-16% in the rapid growing period based on 13 image pairs. The VI predicted by this approach is consistent and smooth across the SLC-off gap stripes of the Landsat 7 ETM+ images. The new fusion results will be used to map crop phenology and to predict crop yield at field scale in complex agricultural landscapes.
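The fusion idea can be illustrated with a deliberately simplified sketch (this is not the authors' algorithm): shift a daily coarse-pixel VI series so that it agrees with the sparse fine-resolution samples on the overpass dates, linearly interpolating the coarse-to-fine bias between overpasses. All names and data here are hypothetical.

```python
def fuse_vi(days, coarse_vi, overpass, fine_vi_at_overpass):
    """Toy VI fusion: daily coarse series plus sparse fine samples.

    At each overpass day the bias (fine - coarse) is known exactly;
    between overpasses it is linearly interpolated, and the daily
    coarse value is corrected by that bias.
    """
    biases = {d: f - coarse_vi[days.index(d)]
              for d, f in zip(overpass, fine_vi_at_overpass)}
    out = []
    for i, d in enumerate(days):
        # find the bracketing overpasses and interpolate the bias
        prev = max((o for o in overpass if o <= d), default=overpass[0])
        nxt = min((o for o in overpass if o >= d), default=overpass[-1])
        if prev == nxt:
            b = biases[prev]
        else:
            w = (d - prev) / (nxt - prev)
            b = (1 - w) * biases[prev] + w * biases[nxt]
        out.append(coarse_vi[i] + b)
    return out
```

Real fusion models such as STARFM weight neighboring pixels by spectral and spatial similarity as well; the sketch keeps only the temporal interpolation idea.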
Cannell, R C; Belk, K E; Tatum, J D; Wise, J W; Chapman, P L; Scanga, J A; Smith, G C
2002-05-01
Objective quantification of differences in wholesale cut yields of beef carcasses at plant chain speeds is important for the application of value-based marketing. This study was conducted to evaluate the ability of a commercial video image analysis system, the Computer Vision System (CVS), to 1) predict commercially fabricated beef subprimal yield and 2) augment USDA yield grading in order to improve the accuracy of grade assessment. The CVS was evaluated as a fully installed production system, operating on a full-time basis at chain speeds. Steer and heifer carcasses (n = 296) were evaluated using CVS, as well as by USDA expert and online graders, before the fabrication of carcasses into industry-standard subprimal cuts. Expert yield grade (YG), online YG, CVS-estimated carcass yield, and CVS-measured ribeye area in conjunction with expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, hot carcass weight) accounted for 67, 39, 64, and 65% of the observed variation in fabricated yields of closely trimmed subprimals. The dual-component CVS predicted wholesale cut yields more accurately than current online yield grading. In an augmentation system in which the CVS ribeye measurement replaced the estimated ribeye area in the determination of USDA yield grade, the accuracy of cutability prediction under packing plant conditions and speeds was improved to a level close to that of expert graders applying grades offline at a comfortable rate of speed.
Shandilya, Sharad; Kurz, Michael C.; Ward, Kevin R.; Najarian, Kayvan
2016-01-01
Objective The timing of defibrillation is mostly at arbitrary intervals during cardio-pulmonary resuscitation (CPR), rather than during intervals when the out-of-hospital cardiac arrest (OOH-CA) patient is physiologically primed for successful countershock. Interruptions to CPR may negatively impact defibrillation success. Multiple defibrillations can be associated with decreased post-resuscitation myocardial function. We hypothesize that a more complete picture of the cardiovascular system can be gained through non-linear dynamics and integration of multiple physiologic measures from biomedical signals. Materials and Methods Retrospective analysis of 153 anonymized OOH-CA patients who received at least one defibrillation for ventricular fibrillation (VF) was undertaken. A machine learning model, termed Multiple Domain Integrative (MDI) model, was developed to predict defibrillation success. We explore the rationale for non-linear dynamics and statistically validate heuristics involved in feature extraction for model development. Performance of MDI is then compared to the amplitude spectrum area (AMSA) technique. Results 358 defibrillations were evaluated (218 unsuccessful and 140 successful). Non-linear properties (Lyapunov exponent > 0) of the ECG signals indicate a chaotic nature and validate the use of novel non-linear dynamic methods for feature extraction. Classification using MDI yielded ROC-AUC of 83.2% and accuracy of 78.8%, for the model built with ECG data only. Utilizing 10-fold cross-validation, at 80% specificity level, MDI (74% sensitivity) outperformed AMSA (53.6% sensitivity). At 90% specificity level, MDI had 68.4% sensitivity while AMSA had 43.3% sensitivity. Integrating available end-tidal carbon dioxide features into MDI, for the available 48 defibrillations, boosted ROC-AUC to 93.8% and accuracy to 83.3% at 80% sensitivity. 
Conclusion At clinically relevant sensitivity thresholds, the MDI provides improved performance as compared to AMSA, yielding fewer unsuccessful defibrillations. Addition of partial end-tidal carbon dioxide (PetCO2) signal improves accuracy and sensitivity of the MDI prediction model. PMID:26741805
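The fixed-specificity comparison of MDI and AMSA above can be reproduced from raw classifier scores with a small helper. This is a generic ROC-style computation, not the authors' code, and the scores in the usage below are hypothetical.

```python
import math

def sensitivity_at_specificity(neg_scores, pos_scores, specificity):
    """Sensitivity of a score-based classifier at a fixed specificity.

    Chooses the threshold that keeps at least `specificity` of the
    negative (unsuccessful-defibrillation) scores at or below it, then
    reports the fraction of positive (successful) scores above it.
    """
    cut = sorted(neg_scores)[max(0, math.ceil(specificity * len(neg_scores)) - 1)]
    return sum(s > cut for s in pos_scores) / len(pos_scores)
```

Evaluating two models' scores through this function at 80% and 90% specificity is exactly the kind of head-to-head comparison reported for MDI versus AMSA.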
Not a load of rubbish: simulated field trials in large-scale containers.
Hohmann, M; Stahl, A; Rudloff, J; Wittkop, B; Snowdon, R J
2016-09-01
Assessment of yield performance under fluctuating environmental conditions is a major aim of crop breeders. Unfortunately, results from controlled-environment evaluations of complex agronomic traits rarely translate to field performance. A major cause is that crops grown over their complete lifecycle in a greenhouse or growth chamber are generally constricted in their root growth, which influences their response to important abiotic constraints like water or nutrient availability. To overcome this poor transferability, we established a plant growth system comprising large refuse containers (120 L 'wheelie bins') that allow detailed phenotyping of small field-crop populations under semi-controlled growth conditions. Diverse winter oilseed rape cultivars were grown at field densities throughout the crop lifecycle, in different experiments over 2 years, to compare seed yields from individual containers to plot yields from multi-environment field trials. We found that we were able to predict yields in the field with high accuracy from container-grown plants. The container system proved suitable for detailed studies of stress response physiology and performance in pre-breeding populations. Investment in automated large-container systems may help breeders improve field transferability of greenhouse experiments, enabling screening of pre-breeding materials for abiotic stress response traits with a positive influence on yield. © 2016 John Wiley & Sons Ltd.
Application of the SRI cloud-tracking technique to rapid-scan GOES observations
NASA Technical Reports Server (NTRS)
Wolf, D. E.; Endlich, R. M.
1980-01-01
An automatic cloud tracking system was applied to multilayer clouds associated with severe storms. The method was tested using rapid-scan observations of Hurricane Eloise obtained by the GOES satellite on 22 September 1975. Cloud tracking was performed using clustering based on either visible or infrared data. The clusters were tracked using two different techniques. At both 4 km and 8 km resolution, the results of the automatic system were comparable in accuracy and coverage to those obtained by NASA analysts using the Atmospheric and Oceanographic Information Processing System.
Samanci, Yavuz; Karagöz, Yeşim; Yaman, Mehmet; Atçı, İbrahim Burak; Emre, Ufuk; Kılıçkesmez, Nuri Özgür; Çelik, Suat Erol
2016-11-01
To determine the accuracy of median nerve T2 evaluation and its relation with Boston Questionnaire (BQ) and nerve conduction studies (NCSs) in pre-operative and post-operative carpal tunnel syndrome (CTS) patients in comparison with healthy volunteers. Twenty-three CTS patients and 24 healthy volunteers underwent NCSs, median nerve T2 evaluation and self-administered BQ. Pre-operative and 1st year post-operative median nerve T2 values and cross-sectional areas (CSAs) were compared both within pre-operative and post-operative CTS groups, and with healthy volunteers. The relationship between MRI findings and BQ and NCSs was analyzed. The ROC curve analysis was used for determining the accuracy. The comparison of pre-operative and post-operative T2 values and CSAs revealed statistically significant improvements in the post-operative patient group (p<0.001 for all parameters). There were positive correlations between T2 values at all levels and BQ values, and positive and negative correlations were also found regarding T2 values and NCS findings in CTS patients. The receiver operating characteristic curve analysis for defined cut-off levels of median nerve T2 values in hands with severe CTS yielded excellent accuracy at all levels. However, this accuracy could not be demonstrated in hands with mild CTS. This study is the first to analyze T2 values in both pre-operative and post-operative CTS patients. The presence of increased T2 values in CTS patients compared to controls and excellent accuracy in hands with severe CTS indicates T2 signal changes related to CTS pathophysiology and possible utilization of T2 signal evaluation in hands with severe CTS. Copyright © 2016 Elsevier B.V. All rights reserved.
The value of rapid on-site evaluation during EBUS-TBNA.
Cardoso, A V; Neves, I; Magalhães, A; Sucena, M; Barroca, H; Fernandes, G
2015-01-01
Rapid on-site evaluation (ROSE) has the potential to increase endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) accuracy in the diagnosis of mediastinal lesions and lung cancer staging. However, studies have reported controversial results. The purpose of our study was to evaluate the influence of ROSE on sample adequacy and diagnostic accuracy of EBUS-TBNA. Prospective observational study that enrolled 81 patients who underwent EBUS-TBNA for investigation of hilo-mediastinal lesions or lung cancer staging. The first 41 patients underwent EBUS-TBNA with ROSE (ROSE group) and the last 40 patients without ROSE (non-ROSE group). Sample adequacy and diagnostic accuracy of EBUS-TBNA in both groups were compared. Adequate samples were obtained in 93% of the patients in the ROSE group and 80% in non-ROSE group (p=0.10). The diagnostic accuracy of EBUS-TBNA was 91% in ROSE group and 83% in non-ROSE group (p=0.08). Analyzing the EBUS-TBNA purpose, in the subgroup of patients who underwent EBUS-TBNA for investigation of hilo-mediastinal lesions, these differences between ROSE and non-ROSE group were higher compared to lung cancer staging, 93% of patients with adequate samples in the ROSE group vs. 75% in the non-ROSE group (p=0.06) and 87% of diagnostic accuracy in ROSE group vs. 77% in non-ROSE group (p=0.10). Despite the lack of statistical significance, ROSE appears to be particularly useful in the diagnostic work-up of hilo-mediastinal lesions, increasing the diagnostic yield of EBUS-TBNA. Copyright © 2014 Sociedade Portuguesa de Pneumologia. Published by Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Majumder, S. K.; Krishna, H.; Sidramesh, M.; Chaturvedi, P.; Gupta, P. K.
2011-08-01
We report the results of a comparative evaluation of in vivo fluorescence and Raman spectroscopy for diagnosis of oral neoplasia. The study, carried out at Tata Memorial Hospital, Mumbai, involved 26 healthy volunteers and 138 patients being screened for neoplasm of the oral cavity. Spectral measurements were taken from multiple sites of abnormal as well as apparently uninvolved contra-lateral regions of the oral cavity in each patient. The different tissue sites investigated belonged to one of four histopathology categories: 1) squamous cell carcinoma (SCC), 2) oral sub-mucous fibrosis (OSMF), 3) leukoplakia (LP) and 4) normal squamous tissue. A probability-based multivariate statistical algorithm utilizing nonlinear Maximum Representation and Discrimination Feature for feature extraction and Sparse Multinomial Logistic Regression for classification was developed for direct multi-class classification in a leave-one-patient-out cross-validation mode. The results reveal that the performance of Raman spectroscopy is considerably superior to that of fluorescence in stratifying the oral tissues into their respective histopathologic categories. The best classification accuracy was observed to be 90%, 93%, 94%, and 89% for SCC, OSMF, leukoplakia, and normal oral tissues, respectively, on the basis of leave-one-patient-out cross-validation, with an overall accuracy of 91%. However, when a binary classification was employed to distinguish spectra from all the SCC, OSMF and leukoplakic tissue sites together from normal, fluorescence and Raman spectroscopy were seen to have almost comparable performances, with Raman yielding a marginally better classification accuracy of 98.5% as compared to 94% for fluorescence.
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines, tested at two locations each with three replicates, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full- and half-sib families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate a minimum marker set of 1,000 to decrease the risk of low prediction accuracy for some traits or some families. PMID:27783639
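The leave-one-out evaluation can be sketched as follows. The similarity-weighted-mean predictor here is a deliberately crude stand-in for a real genomic prediction model (it is not the method used in the study); "prediction accuracy" is, as above, the correlation between predicted and observed values. The genotypes in the usage below are hypothetical.

```python
def similarity(g1, g2):
    """Fraction of shared marker alleles (0/1 coded) -- a crude kinship proxy."""
    return sum(a == b for a, b in zip(g1, g2)) / len(g1)

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def loo_accuracy(genotypes, phenotypes):
    """Leave-one-out cross-validation of a toy genomic predictor.

    Each line's phenotype is predicted from a similarity-weighted mean
    of all other lines; accuracy is the Pearson correlation between
    predictions and observations.
    """
    preds = []
    for i in range(len(phenotypes)):
        wts = [similarity(genotypes[i], genotypes[j])
               for j in range(len(phenotypes)) if j != i]
        obs = [phenotypes[j] for j in range(len(phenotypes)) if j != i]
        wts = [w ** 8 for w in wts]  # sharpen so close relatives dominate
        preds.append(sum(w * y for w, y in zip(wts, obs)) / sum(wts))
    return pearson(preds, phenotypes)
```

Because the predictor leans on close relatives, its accuracy collapses when the left-out line has no near relatives in the training set, which mirrors the family-structure dependence reported above.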
Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C
2018-06-01
Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates, and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Maddalon, Dal V.
1998-01-01
Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.
Sareen, Rateesh; Pandey, C L
2016-01-01
Background: Early diagnosis of lung cancer plays a pivotal role in reducing the lung cancer death rate. Cytological techniques are safer and more economical and provide quick results. Bronchoscopic washing, brushing and fine needle aspiration not only complement tissue biopsies in the diagnosis of lung cancer but are also comparable to them. Objectives: (1) To find the diagnostic yields of bronchoalveolar lavage (BAL), bronchial brushings and FNAC in the diagnosis of lung malignancy. (2) To compare the relative accuracy of these three cytological techniques. (3) To correlate the cytologic diagnosis with clinical, bronchoscopic and CT findings. (4) To correlate cytological and histopathological findings of lung lesions. Methods: All patients who presented with clinical or radiological suspicion of lung malignancy over a two-and-a-half-year period were included in the study. Bronchoalveolar lavage was the most common type of cytological specimen (82.36%), followed by CT-guided FNAC (9.45%) and bronchial brushings (8.19%). Sensitivity, specificity, and positive and negative predictive value were calculated for all techniques, and correlation with histopathology was done, using standard formulas. Results: The most sensitive technique was CT-guided FNAC (87.25%), followed by brushings (77.78%) and BAL (72.69%). CT-guided FNAC had the highest diagnostic yield (90.38%), followed by brushings (86.67%) and BAL (83.67%). Specificity and positive predictive value were 100% each for all techniques. The lowest false-negative rate was obtained with CT-guided FNAC (12.5%) and the highest with BAL (27.3%). The highest negative predictive value was that of BAL (76.95%), followed by BB (75.59%) and CT-guided FNAC (70.59%). Conclusion: Before administering antitubercular treatment, every effort should be made to rule out malignancy. CT-guided FNAC had the highest diagnostic yield among the three cytological techniques. BAL is an important tool for screening central as well as inaccessible lesions. It can be used where CT-guided FNAC is not available or cannot be done due to technical or financial limitations. PMID:27890992
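The "standard formulas" referenced above are worth stating explicitly. A minimal sketch, assuming counts of true/false positives and negatives against the histopathology reference:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-test statistics: sensitivity, specificity,
    and positive/negative predictive value, from a 2x2 confusion table
    against the reference (histopathology) diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),   # malignancies correctly detected
        "specificity": tn / (tn + fp),   # benign cases correctly cleared
        "ppv": tp / (tp + fp),           # positive calls that are truly malignant
        "npv": tn / (tn + fn),           # negative calls that are truly benign
    }
```

With zero false positives, as reported for all three techniques here, specificity and PPV are both 100% regardless of sensitivity.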
Cryobiopsy: should this be used in place of endobronchial forceps biopsies?
Rubio, Edmundo R; le, Susanti R; Whatley, Ralph E; Boyd, Michael B
2013-01-01
Forceps biopsies of airway lesions have variable yields. The yield increases when techniques are combined in order to collect more material. With the use of cryotherapy probes (cryobiopsy), larger specimens can be obtained, resulting in an increase in the diagnostic yield. However, the utility and safety of cryobiopsy for all types of lesions, including flat mucosal lesions, is not established. Objective: to demonstrate the utility and safety of cryobiopsy versus forceps biopsy for sampling exophytic and flat airway lesions. Design: teaching hospital-based retrospective analysis of patients undergoing cryobiopsies (singly or combined with forceps biopsies) from August 2008 through August 2010. Statistical analysis: Wilcoxon signed-rank test. The comparative analysis of 22 patients with cryobiopsy and forceps biopsy of the same lesion showed that the mean volume of material obtained with cryobiopsy was significantly larger (0.696 cm³ versus 0.0373 cm³, P = 0.0014). Of 31 cryobiopsies performed, one had minor bleeding. Cryobiopsy allowed sampling of exophytic and flat lesions that were located centrally or distally. Cryobiopsies were shown to be safe, free of artifact, and provided a diagnostic yield of 96.77%. Cryobiopsy allows safe sampling of exophytic and flat airway lesions, with larger specimens, excellent tissue preservation and high diagnostic accuracy.
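The Wilcoxon signed-rank statistic used for the paired volume comparison can be computed directly. A minimal sketch (statistic only; a p-value additionally needs the W distribution or a normal approximation), with hypothetical volume pairs in the usage:

```python
def wilcoxon_w(pairs):
    """Wilcoxon signed-rank statistic W for paired samples, e.g.
    (cryobiopsy volume, forceps volume) from the same lesion.

    Zero differences are dropped, absolute differences are ranked with
    tied ranks averaged, and W is the smaller of the positive- and
    negative-rank sums; a small W relative to n(n+1)/2 indicates a
    systematic paired difference.
    """
    diffs = [a - b for a, b in pairs if a != b]
    ranked = sorted(enumerate(diffs), key=lambda t: abs(t[1]))
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j][1]) == abs(ranked[i][1]):
            j += 1
        avg = (i + 1 + j) / 2  # mean of the tied rank positions i+1..j
        for k in range(i, j):
            ranks[ranked[k][0]] = avg
        i = j
    w_pos = sum(r for idx, r in ranks.items() if diffs[idx] > 0)
    w_neg = sum(r for idx, r in ranks.items() if diffs[idx] < 0)
    return min(w_pos, w_neg)
```

When every cryobiopsy specimen is larger than its paired forceps specimen, all signed ranks are positive and W is 0, the most extreme value possible.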
Accuracy of different impression materials in parallel and nonparallel implants
Vojdani, Mahroo; Torabi, Kianoosh; Ansarifard, Elham
2015-01-01
Background: A precise impression is mandatory to obtain passive fit in implant-supported prostheses. The aim of this study was to compare the accuracy of three impression materials in both parallel and nonparallel implant positions. Materials and Methods: In this experimental study, two partially dentate maxillary acrylic models with four implant analogues in the canine and lateral incisor areas were used. One model simulated the parallel condition and the other the nonparallel one, in which implants were tilted 30° buccally and 20° in either mesial or distal directions. Thirty stone casts were made from each model using polyether (Impregum), addition silicone (Monopren) and vinyl siloxanether (Identium), with the open tray technique. The distortion values in three dimensions (X, Y and Z axes) were measured by a coordinate measuring machine. Two-way analysis of variance (ANOVA), one-way ANOVA and Tukey tests were used for data analysis (α = 0.05). Results: Under the parallel condition, all the materials yielded comparably accurate casts (P = 0.74). In the presence of angulated implants, while Monopren showed more accurate results compared to Impregum (P = 0.01), Identium yielded almost similar results to those produced by Impregum (P = 0.27) and Monopren (P = 0.26). Conclusion: Within the limitations of this study, in parallel conditions, the type of impression material does not affect the accuracy of implant impressions; however, in nonparallel conditions, polyvinyl siloxane is shown to be a better choice, followed by vinyl siloxanether and polyether, respectively. PMID:26288620
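The three-dimensional distortion values reported by a coordinate measuring machine reduce to Euclidean deviations between corresponding analogue positions on the cast and the reference model. A minimal sketch (the coordinates in the usage are hypothetical):

```python
import math

def distortion(measured, reference):
    """Mean 3-D displacement of implant-analogue positions on a stone
    cast relative to the reference model, i.e. the average Euclidean
    distance between corresponding (x, y, z) points."""
    dists = [math.dist(m, r) for m, r in zip(measured, reference)]
    return sum(dists) / len(dists)
```

Per-axis deviations (as analyzed in the study) are just the componentwise differences before they are combined into this Euclidean norm.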
Analytic closures for M1 neutrino transport
Murchikova, E. M.; Abdikamalov, E.; Urbatsch, T.
2017-04-25
Carefully accounting for neutrino transport is an essential component of many astrophysical studies. Solving the full transport equation is too expensive for most realistic applications, especially those involving multiple spatial dimensions. For such cases, resorting to approximations is often the only viable option for obtaining solutions. One such approximation, which recently became popular, is the M1 method. It utilizes the system of the lowest two moments of the transport equation and closes the system with an ad hoc closure relation. The accuracy of the M1 solution depends on the quality of the closure. Several closures have been proposed in the literature and have been used in various studies. We carry out an extensive study of these closures by comparing the results of M1 calculations with precise Monte Carlo calculations of the radiation field around spherically symmetric protoneutron star models. We find that no closure performs consistently better or worse than others in all cases. The level of accuracy that a given closure yields depends on the matter configuration, neutrino type and neutrino energy. As a result, given this limitation, the maximum entropy closure by Minerbo on average yields relatively accurate results in the broadest set of cases considered in this work.
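For reference, the Minerbo maximum-entropy closure mentioned above is commonly applied through an algebraic fit for the Eddington factor as a function of the flux factor f = |F|/E; a sketch (the exact form implemented in any given transport code may differ):

```python
def chi_minerbo(f):
    """Eddington factor chi(f) for the Minerbo (maximum entropy,
    classical) closure in its commonly quoted algebraic form.

    chi interpolates between 1/3 (isotropic radiation, optically thick
    limit) and 1 (free streaming, f -> 1)."""
    return 1.0 / 3.0 + (2.0 * f * f / 15.0) * (3.0 - f + 3.0 * f * f)
```

In an M1 scheme this chi(f) supplies the second moment (the pressure tensor) from the evolved energy density and flux, which is exactly the ad hoc closure step the abstract describes.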
Measuring Total Column Water Vapor by Pointing an Infrared Thermometer at the Sky
NASA Technical Reports Server (NTRS)
Mims, Forrest M., III; Chambers, Lin H.; Brooks, David R.
2011-01-01
A 2-year study affirms that the temperature (Tz) indicated by an inexpensive ($20 to $60) IR thermometer pointed at the cloud-free zenith sky provides an approximate indication of the total column water vapor (precipitable water, or PW). PW was measured by a MICROTOPS II sun photometer. The coefficient of correlation (r2) of the PW and Tz was 0.90, and the rms difference was 3.2 mm. A comparison of the Tz data with the PW provided by a GPS site 31 km NNE yielded an r2 of 0.79 and an rms difference of 5.8 mm. An expanded study compared Tz from eight IR thermometers with PW at various times during the day and night from 17 May to 18 October 2010, mainly at the Texas site and for 10 days at Hawaii's Mauna Loa Observatory (MLO). The best results of this comparison were provided by two IR thermometer models that yielded an r2 of 0.96 and an rms difference with the PW of 2.7 mm. The results of both the ongoing 2-year study and the 5-month instrument comparison show that IR thermometers can measure PW with an accuracy (rms difference/mean PW) approaching 10%, the accuracy typically ascribed to sun photometers.
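The two summary statistics used throughout this comparison, r² and the rms difference, can be computed from paired series as follows (the series in the usage are hypothetical, not the study's data):

```python
def r2_and_rms(x, y):
    """Coefficient of determination (squared Pearson r) and rms
    difference between paired measurement series, e.g. sky temperature
    Tz versus sun photometer PW."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2 = sxy * sxy / (sxx * syy)
    rms = (sum((a - b) ** 2 for a, b in zip(x, y)) / n) ** 0.5
    return r2, rms
```

Note that r² measures how well one series tracks the other up to a linear relation, while the rms difference penalizes any offset or scale mismatch between them, which is why both are reported.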
The uncertainty of crop yield projections is reduced by improved temperature response functions
USDA-ARS?s Scientific Manuscript database
Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on cr...
Use of multivariable asymptotic expansions in a satellite theory
NASA Technical Reports Server (NTRS)
Dallas, S. S.
1973-01-01
The initial conditions and the perturbative force of the satellite are restricted so as to yield the motion of an equatorial satellite about an oblate body. In this manner, an exact analytic solution exists and can be used as a standard of comparison in numerical accuracy comparisons. Detailed numerical accuracy studies of uniformly valid asymptotic expansions were made.
NASA Astrophysics Data System (ADS)
Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude
2010-02-01
Diabetic retinopathy (DR) is a chronic eye disease for which early detection is highly essential to avoid fatal results. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network-based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed with the results of a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results are promising for the neural classifier in terms of the performance measures.
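A bi-level RBF classification can be sketched as follows. The prototype centers, kernel width, and two-feature vectors here are hypothetical stand-ins for the paper's trained network and its retinal image features, which are assumed to be extracted upstream:

```python
import math

def rbf_score(x, centers, width):
    """Sum of Gaussian radial basis activations of feature vector x
    over a class's prototype centers."""
    return sum(
        math.exp(-sum((a - c) ** 2 for a, c in zip(x, ctr)) / (2 * width ** 2))
        for ctr in centers
    )

def classify(x, normal_centers, dr_centers, width=1.0):
    """Bi-level decision: label a retinal feature vector 'DR' or
    'normal' according to the larger summed RBF activation."""
    if rbf_score(x, dr_centers, width) > rbf_score(x, normal_centers, width):
        return "DR"
    return "normal"
```

A trained RBF network would additionally learn output weights on the basis activations; the unweighted sum here keeps only the radial-kernel idea that distinguishes RBF networks from, say, a Bayesian classifier on the same features.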
Code of Federal Regulations, 2014 CFR
2014-01-01
... territory or possession of the United States. Subsequent crop means any crop planted after an initial crop... itself to the greatest level of accuracy, as determined by the FSA State committee. USDA means United... history yield means the average of the actual production history yields for each insurable or noninsurable...
Code of Federal Regulations, 2012 CFR
2012-01-01
... territory or possession of the United States. Subsequent crop means any crop planted after an initial crop... itself to the greatest level of accuracy, as determined by the FSA State committee. USDA means United... history yield means the average of the actual production history yields for each insurable or noninsurable...
Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza
2018-03-01
This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems with high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. In strip perforations, the accuracies of low- and high-resolution modes were 75% and 83% for the NewTom 3G and 67% and 69% for the Cranex 3D. In root perforations, the accuracies of low- and high-resolution modes were 79% and 83% for the NewTom 3G and 56% and 73% for the Cranex 3D. The accuracy of the 2 CBCT systems was different for the detection of strip and root perforations. The NewTom 3G had non-significantly higher accuracy than the Cranex 3D. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.
Accuracy of quantum sensors measuring yield photon flux and photosynthetic photon flux
NASA Technical Reports Server (NTRS)
Barnes, C.; Tibbitts, T.; Sager, J.; Deitzer, G.; Bubenheim, D.; Koerner, G.; Bugbee, B.; Knott, W. M. (Principal Investigator)
1993-01-01
Photosynthesis is fundamentally driven by photon flux rather than energy flux, but not all absorbed photons yield equal amounts of photosynthesis. Thus, two measures of photosynthetically active radiation have emerged: photosynthetic photon flux (PPF), which values all photons from 400 to 700 nm equally, and yield photon flux (YPF), which weights photons in the range from 360 to 760 nm according to plant photosynthetic response. We selected seven common radiation sources and measured YPF and PPF from each source with a spectroradiometer. We then compared these measurements with measurements from three quantum sensors designed to measure YPF, and from six quantum sensors designed to measure PPF. There were few differences among sensors within a group (usually <5%), but YPF values from sensors were consistently lower (3% to 20%) than YPF values calculated from spectroradiometric measurements. Quantum sensor measurements of PPF also were consistently lower than PPF values calculated from spectroradiometric measurements, but the differences were <7% for all sources, except red-light-emitting diodes. The sensors were most accurate for broad-band sources and least accurate for narrow-band sources. According to spectroradiometric measurements, YPF sensors were significantly less accurate (>9% difference) than PPF sensors under metal halide, high-pressure sodium, and low-pressure sodium lamps. Both sensor types were inaccurate (>18% error) under red-light-emitting diodes. Because both YPF and PPF sensors are imperfect integrators, and because spectroradiometers can measure photosynthetically active radiation much more accurately, researchers should consider developing calibration factors from spectroradiometric data for some specific radiation sources to improve the accuracy of integrating sensors.
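The two photon-flux definitions above differ only in their weighting windows, which can be sketched numerically. In the sketch below, the triangular relative-quantum-yield curve peaking at 620 nm is a hypothetical stand-in for the plant photosynthetic response (the study uses McCree-type action spectra), and the spectrum is assumed to arrive as photon flux per wavelength bin:

```python
# Sketch of PPF vs. YPF integration from binned spectroradiometer data.
# relative_quantum_yield is a HYPOTHETICAL triangular stand-in for the
# true plant photosynthetic response curve.

def relative_quantum_yield(w_nm):
    """Hypothetical relative quantum yield: zero outside 360-760 nm,
    rising linearly to a peak at 620 nm, then falling."""
    if w_nm <= 360.0 or w_nm >= 760.0:
        return 0.0
    if w_nm <= 620.0:
        return (w_nm - 360.0) / 260.0
    return (760.0 - w_nm) / 140.0

def ppf(wavelengths_nm, photon_flux):
    """PPF: all photons from 400 to 700 nm weighted equally."""
    return sum(f for w, f in zip(wavelengths_nm, photon_flux)
               if 400.0 <= w <= 700.0)

def ypf(wavelengths_nm, photon_flux):
    """YPF: photons from 360 to 760 nm weighted by relative quantum yield."""
    return sum(f * relative_quantum_yield(w)
               for w, f in zip(wavelengths_nm, photon_flux))
```

With a real measured spectrum, the gap between an integrating sensor's reading and this spectroradiometric integral is exactly what the study quantifies for each lamp type.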
Capsule Endoscopy in the Assessment of Obscure Gastrointestinal Bleeding: An Evidence-Based Analysis
2015-01-01
Background Obscure gastrointestinal bleeding (OGIB) is defined as persistent or recurrent bleeding associated with negative findings on upper and lower gastrointestinal (GI) endoscopic evaluations. The diagnosis and management of patients with OGIB is particularly challenging because of the length and complex loops of the small intestine. Capsule endoscopy (CE) is 1 diagnostic modality that is used to determine the etiology of bleeding. Objectives The objective of this analysis was to review the diagnostic accuracy, safety, and impact on health outcomes of CE in patients with OGIB in comparison with other diagnostic modalities. Data Sources A literature search was performed using Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid Embase, the Wiley Cochrane Library, and the Centre for Reviews and Dissemination database, for studies published between 2007 and 2013. Review Methods Data on diagnostic accuracy, safety, and impact on health outcomes were abstracted from included studies. Quality of evidence was assessed using Grading of Recommendations Assessment, Development, and Evaluation (GRADE). Results The search yielded 1,189 citations, and 24 studies were included. Eight studies reported diagnostic accuracy comparing CE with other diagnostic modalities. Capsule endoscopy has a higher sensitivity and lower specificity than magnetic resonance enteroclysis, computed tomography, and push enteroscopy. Capsule endoscopy has a good safety profile with few adverse events, although comparative safety data with other diagnostic modalities are limited. Capsule endoscopy is associated with no difference in patient health-related outcomes such as rebleeding or follow-up treatment compared with push enteroscopy, small-bowel follow-through, and angiography. Limitations There was significant heterogeneity in estimates of diagnostic accuracy, which prohibited a statistical summary of findings. 
The analysis was also limited by the fact that there is no established reference standard to which the diagnostic accuracy of CE can be compared. Conclusions There is very-low-quality evidence that CE has a higher sensitivity but a lower specificity than other diagnostic modalities. Capsule endoscopy has few adverse events, with capsule retention being the most serious complication. Capsule endoscopy is perceived by patients as less painful and less burdensome compared with other modalities. There is low-quality evidence that patients who undergo CE have similar rates of rebleeding, further therapeutic interventions, and hospitalization compared with other diagnostic modalities. PMID:26357529
Smeets, Miek; Degryse, Jan; Janssens, Stefan; Matheï, Catharina; Wallemacq, Pierre; Vanoverschelde, Jean-Louis; Aertgeerts, Bert; Vaes, Bert
2016-10-06
Different diagnostic algorithms for non-acute heart failure (HF) exist. Our aim was to compare the ability of these algorithms to identify HF in symptomatic patients aged 80 years and older and to identify those patients at highest risk for mortality. Diagnostic accuracy and validation study. General practice, Belgium. 365 patients with HF symptoms aged 80 years and older (BELFRAIL cohort). Participants underwent a full clinical assessment, including a detailed echocardiographic examination at home. The diagnostic accuracy of 4 different algorithms was compared using an intention-to-diagnose analysis. The European Society of Cardiology (ESC) definition of HF was used as the reference standard for HF diagnosis. Kaplan-Meier curves for 5-year all-cause mortality were plotted, and HRs and corresponding 95% CIs were calculated to compare the mortality-prediction abilities of the different algorithms. Net reclassification improvement (NRI) was calculated. The prevalence of HF was 20% (n=74). The 2012 ESC algorithm yielded the highest sensitivity (92%, 95% CI 83% to 97%) as well as the highest referral rate (71%, n=259), whereas the Oudejans algorithm yielded the highest specificity (73%, 95% CI 68% to 78%) and the lowest referral rate (36%, n=133). These differences could be ascribed to differences in N-terminal probrain natriuretic peptide cut-off values (125 vs 400 pg/mL). The Kelder and Oudejans algorithms exhibited NRIs of 12% (95% CI 0.7% to 22%, p=0.04) and 22% (95% CI 9% to 32%, p<0.001), respectively, compared with the ESC algorithm. All algorithms identified patients at high risk of mortality, with HRs ranging from 1.9 (95% CI 1.4 to 2.5; Kelder) to 2.3 (95% CI 1.7 to 3.1; Oudejans). No significant differences were observed among the algorithms with respect to mortality prediction.
Choosing a diagnostic algorithm for non-acute HF in elderly patients represents a trade-off between sensitivity and specificity, mainly depending on differences between cut-off values for natriuretic peptides. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
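The net reclassification improvement used above has a simple closed form based on how many patients the new algorithm moves up or down a risk category relative to the reference algorithm. A minimal sketch, with hypothetical counts rather than the BELFRAIL data:

```python
def net_reclassification_improvement(up_ev, down_ev, n_ev,
                                     up_ne, down_ne, n_ne):
    """NRI = [P(up|event) - P(down|event)]
           + [P(down|nonevent) - P(up|nonevent)].
    'Up' counts patients the new algorithm reclassifies into a higher-risk
    category than the reference algorithm; 'down' is the reverse."""
    event_term = (up_ev - down_ev) / n_ev
    nonevent_term = (down_ne - up_ne) / n_ne
    return event_term + nonevent_term
```

For example, 15 of 50 events reclassified upward and 5 downward, together with 30 of 100 nonevents reclassified downward and 10 upward, gives NRI = 0.20 + 0.20 = 0.40.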
NASA Astrophysics Data System (ADS)
Dube, Timothy; Mutanga, Onisimo; Sibanda, Mbulisi; Bangamwabo, Victor; Shoko, Cletah
2017-08-01
The remote sensing of freshwater resources is increasingly important, owing to intensifying patterns of water use, the current and projected impacts of climate change, and rapid invasion by lethal water weeds. This study therefore explored the potential of the recently launched Landsat 8 OLI/TIRS sensor for mapping invasive species in inland lakes. Specifically, the study compares the performance of the newly launched Landsat 8 sensor, with its more advanced sensor design and image-acquisition approach, against the traditional Landsat 7 ETM+ in detecting and mapping the water hyacinth (Eichhornia crassipes) invasive species across Lake Chivero, Zimbabwe. An analysis of variance test was used to identify windows of spectral separability between water hyacinth and other land-cover types. The results showed that portions of the visible (B3), NIR (B4), and shortwave bands (Bands 8, 9, and 10) of both Landsat 8 OLI and Landsat 7 ETM+ exhibited windows of separability between water hyacinth and other land-cover types. Landsat 8 OLI produced a higher overall classification accuracy of 72%, compared with Landsat 7 ETM+, which yielded a lower accuracy of 57%. Water hyacinth had optimal accuracies (92%) relative to other land-cover types when mapped with Landsat 8 OLI data. When using Landsat 7 ETM+ data, however, classification accuracies for water hyacinth were relatively lower (67%) than for other land-cover types (e.g., water, with an accuracy of 100%). Spectral curves of old, intermediate, and young water hyacinth in Lake Chivero were derived from both (a) Landsat 8 OLI and (b) Landsat 7 ETM+ data. Overall, the findings of this study underscore the relevance of new-generation multispectral sensors in providing the primary data required for mapping the spatial distribution, and even configuration, of water weeds at low or no cost over time and space.
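The overall and per-class accuracies reported above come directly from an error (confusion) matrix built from reference and classified labels. A minimal sketch, with an illustrative two-class matrix (rows = reference labels, columns = classified labels), not the study's actual counts:

```python
def overall_accuracy(cm):
    """Overall accuracy: correctly classified samples / total samples,
    i.e. the trace of the confusion matrix over its grand total."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def producers_accuracy(cm, k):
    """Producer's accuracy for class k (rows = reference labels):
    correctly classified samples of class k / reference total for class k."""
    return cm[k][k] / sum(cm[k])

# Illustrative matrix: class 0 = water hyacinth, class 1 = other cover.
cm = [[50, 10],
      [5, 35]]
```

Comparing these per-class figures between the Landsat 8 OLI and Landsat 7 ETM+ classifications is what yields the 92% vs. 67% water-hyacinth accuracies quoted above.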
Progress Towards a Cartesian Cut-Cell Method for Viscous Compressible Flow
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael J.
2011-01-01
The proposed paper reports advances in developing a method for high-Reynolds-number compressible viscous flow simulations using a Cartesian cut-cell method with embedded boundaries. This preliminary work focuses on the accuracy of the discretization near solid wall boundaries. A model problem is used to investigate the accuracy of various difference stencils for second derivatives and to guide development of the discretization of the viscous terms in the Navier-Stokes equations. Near walls, quadratic reconstruction in the wall-normal direction is used to mitigate mesh irregularity and yields smooth skin-friction distributions along the body. Multigrid performance is demonstrated using second-order coarse-grid operators combined with second-order restriction and prolongation operators. Preliminary verification and validation of the method are demonstrated using flat-plate and airfoil examples at compressible Mach numbers. Simulations of flow over laminar and turbulent flat plates show skin-friction and velocity profiles compared with those from boundary-layer theory. Airfoil simulations are performed at laminar and turbulent Reynolds numbers, with results compared to both other simulations and experimental data.
2017-01-01
We have calculated the excess free energy of mixing of 1053 binary mixtures with the OPLS-AA force field using two different methods: thermodynamic integration (TI) of molecular dynamics simulations and the Pair Configuration to Molecular Activity Coefficient (PAC-MAC) method. PAC-MAC is a force-field-based quasi-chemical method for predicting miscibility properties of various binary mixtures. The TI calculations yield a root mean squared error (RMSE) relative to experimental data of 0.132 kBT (0.37 kJ/mol). PAC-MAC shows an RMSE of 0.151 kBT, with a calculation speed potentially 1.0 × 10⁴ times greater than TI. OPLS-AA force field parameters are optimized using PAC-MAC based on vapor-liquid equilibrium data, instead of enthalpies of vaporization or densities. The RMSE of PAC-MAC is reduced to 0.099 kBT by optimizing 50 force field parameters. The resulting OPLS-PM force field has accuracy comparable to that of the OPLS-AA force field in the calculation of mixing free energies using TI. PMID:28418655
Ex vivo characterization of normal and adenocarcinoma colon samples by Mueller matrix polarimetry.
Ahmad, Iftikhar; Ahmad, Manzoor; Khan, Karim; Ashraf, Sumara; Ahmad, Shakil; Ikram, Masroor
2015-05-01
Mueller matrix polarimetry, along with a polar decomposition algorithm, was employed for the characterization of ex vivo normal and adenocarcinoma human colon tissues by polarized light in the visible spectral range (425-725 nm). Six derived polarization metrics [total diattenuation (DT), total retardance (RT), total depolarization (ΔT), linear diattenuation (DL), linear retardance (δ), and linear depolarization (ΔL)] were compared for normal and adenocarcinoma colon tissue samples. The results show that all six polarimetric properties were significantly higher for adenocarcinoma samples than for normal samples at all wavelengths. The Wilcoxon rank sum test illustrated that total retardance is a good candidate for discriminating normal from adenocarcinoma colon samples. Support vector machine classification based on four polarization property spectra (ΔT, ΔL, RT, and δ) yielded 100% accuracy, sensitivity, and specificity, while DT and DL showed 66.6%, 33.3%, and 83.3% accuracy, sensitivity, and specificity, respectively. The combination of polarization analysis and the given classification methods provides a framework for distinguishing normal and cancerous tissues.
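For reference, the diattenuation metrics in the list above depend only on the first row of the measured 4x4 Mueller matrix, as in the Lu-Chipman polar decomposition. A minimal sketch (the example matrices are textbook cases, not tissue measurements):

```python
import math

def total_diattenuation(M):
    """Total diattenuation from the first row of a 4x4 Mueller matrix
    (Lu-Chipman): D_T = sqrt(m01^2 + m02^2 + m03^2) / m00."""
    m00, m01, m02, m03 = M[0]
    return math.sqrt(m01**2 + m02**2 + m03**2) / m00

def linear_diattenuation(M):
    """Linear diattenuation keeps only the linear terms m01 and m02:
    D_L = sqrt(m01^2 + m02^2) / m00."""
    m00, m01, m02, _ = M[0]
    return math.sqrt(m01**2 + m02**2) / m00
```

An ideal horizontal linear polarizer (first row 0.5, 0.5, 0, 0) gives D_T = D_L = 1, while a non-diattenuating element such as the identity gives 0.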
Using Smartphone Sensors for Improving Energy Expenditure Estimation
Zhu, Jindan; Das, Aveek K.; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J.
2015-01-01
Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases such as obesity and diabetes. Accurate, real-time EE estimation using small wearable sensors is a difficult task, primarily because most existing schemes work offline or rely on heuristics. In this paper, we focus on accurate EE estimation for tracking the ambulatory activities (walking, standing, climbing upstairs or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer), sampled at low frequency, to accurately estimate EE. Using a barometer in addition to an accelerometer greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings. PMID:27170901
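Bagged regression trees are conceptually simple: train one tree per bootstrap resample of the data, then average the trees' predictions. A from-scratch sketch using depth-1 trees on a single feature; the study's actual model uses multi-feature accelerometer/barometer inputs and full trees (e.g. scikit-learn's BaggingRegressor would be the practical choice):

```python
import random

def fit_stump(X, y):
    """Fit a depth-1 regression tree on one feature: pick the threshold
    minimizing total squared error, predict the mean on each side."""
    best = None
    for t in sorted(set(X)):
        left = [yi for xi, yi in zip(X, y) if xi <= t]
        right = [yi for xi, yi in zip(X, y) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((yi - ml) ** 2 for yi in left)
               + sum((yi - mr) ** 2 for yi in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    if best is None:  # degenerate sample: fall back to the global mean
        m = sum(y) / len(y)
        return lambda x: m
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def bagged_regressor(X, y, n_trees=25, seed=0):
    """Bagging: fit stumps on bootstrap resamples, average predictions."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda x: sum(s(x) for s in stumps) / len(stumps)
```

Averaging over bootstrap-trained trees reduces the variance of any single tree, which is why bagging suits noisy sensor-to-EE regression.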
Zγγγ → 0 Processes in SANC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardin, D. Yu., E-mail: bardin@nu.jinr.ru; Kalinovskaya, L. V., E-mail: kalinov@nu.jinr.ru; Uglov, E. D., E-mail: corner@nu.jinr.ru
2013-11-15
We describe the analytic and numerical evaluation of the γγ → γZ process cross section and the Z → γγγ decay rate within the SANC system's multi-channel approach at the one-loop accuracy level, with all masses taken into account. The corresponding package for numerical calculations is presented. To check the correctness of the results, we compare them with other independent calculations.
Evaluation of a methodology for model identification in the time domain
NASA Technical Reports Server (NTRS)
Beck, R. T.; Beck, J. L.
1988-01-01
A model identification methodology for structural dynamics has been applied to simulated vibrational data as a first step in evaluating its accuracy. The evaluation has taken into account a wide variety of factors which affect the accuracy of the procedure. The effects of each of these factors were observed in both the response time histories and the estimates of the parameters of the model by comparing them with the exact values of the system. Each factor was varied independently but combinations of these have also been considered in an effort to simulate real situations. The results of the tests have shown that for the chain model, the procedure yields robust estimates of the stiffness parameters under the conditions studied whenever uniqueness is ensured. When inaccuracies occur in the results, they are intimately related to non-uniqueness conditions inherent in the inverse problem and not to shortcomings in the methodology.
Cost-Benefit Arbitration Between Multiple Reinforcement-Learning Systems.
Kool, Wouter; Gershman, Samuel J; Cushman, Fiery A
2017-09-01
Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis.
NASA Astrophysics Data System (ADS)
Kalthoff, Mona; Keim, Frederik; Krull, Holger; Uhrig, Götz S.
2017-05-01
The density matrix formalism and the equation-of-motion approach are two semi-analytical methods that can be used to compute the non-equilibrium dynamics of correlated systems. While both formalisms yield the exact result for a bilinear Hamiltonian, a truncation is necessary for any non-bilinear Hamiltonian. Because the commonly used truncation schemes differ between the two methods, the accuracy of the obtained results depends significantly on the chosen approach. In this paper, both formalisms are applied to the quantum Rabi model. This allows us to compare the approximate results with the exact dynamics of the system and enables us to discuss the accuracy of the approximations as well as the advantages and disadvantages of both methods. It is shown to what extent the results fulfill physical requirements for the observables and which properties of the methods lead to unphysical results.
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy
1987-01-01
The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the limitation of computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to a slight variation of structural temperature distributions. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.
Independent Peer Evaluation of the Large Area Crop Inventory Experiment (LACIE): The LACIE Symposium
NASA Technical Reports Server (NTRS)
1978-01-01
Yield models and crop estimate accuracy are discussed within the Large Area Crop Inventory Experiment. Wheat yield estimates in the United States, Canada, and the U.S.S.R. are emphasized. Experimental results, design, system implementation, data processing systems, and applications were also considered.
USDA-ARS?s Scientific Manuscript database
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection is an attractive technology to generate rapid genetic gains in switchgrass and ...
Accuracy of post-bomb 137Cs and 14C in dating fluvial deposits
Ely, L.L.; Webb, R.H.; Enzel, Y.
1992-01-01
The accuracy and precision of 137Cs and 14C for dating post-1950 alluvial deposits were evaluated for deposits from known floods on two rivers in Arizona. The presence of 137Cs reliably indicates that deposition occurred after intensive above-ground nuclear testing was initiated around 1950. There was a positive correlation between the measured level of 137Cs activity and the clay content of the sediments, although 137Cs was detected even in sandy flood sediments with low clay content. 137Cs is a valuable dating tool in arid environments where organic materials for 14C or tree-ring dating are scarce and observational records are limited. The 14C activity measured in different types of fine organic detritus yielded dates within 1 to 8 yr of a 1980 flood deposit, and the accuracy was species-dependent. However, undifferentiated mixtures of fine organic materials from several post-bomb deposits of various ages repeatedly yielded dates between 1958 and 1962, and detrital charcoal yielded a date range of 1676-1939. In semiarid environments, the residence time of most types of organic debris precludes accurate annual resolution of post-bomb 14C dates. © 1992.
Training set selection for the prediction of essential genes.
Cheng, Jian; Xu, Zhao; Wu, Wenwu; Zhao, Li; Li, Xiangchen; Liu, Yanlin; Tao, Shiheng
2014-01-01
Various computational models have been developed to transfer annotations of gene essentiality between organisms. However, despite the increasing number of microorganisms with well-characterized sets of essential genes, selection of appropriate training sets for predicting the essential genes of poorly-studied or newly sequenced organisms remains challenging. In this study, a machine learning approach was applied reciprocally to predict the essential genes in 21 microorganisms. Results showed that training set selection greatly influenced predictive accuracy. We determined four criteria for training set selection: (1) essential genes in the selected training set should be reliable; (2) the growth conditions in which essential genes are defined should be consistent in training and prediction sets; (3) species used as training set should be closely related to the target organism; and (4) organisms used as training and prediction sets should exhibit similar phenotypes or lifestyles. We then analyzed the performance of an incomplete training set and an integrated training set with multiple organisms. We found that the size of the training set should be at least 10% of the total genes to yield accurate predictions. Additionally, the integrated training sets exhibited remarkable increase in stability and accuracy compared with single sets. Finally, we compared the performance of the integrated training sets with the four criteria and with random selection. The results revealed that a rational selection of training sets based on our criteria yields better performance than random selection. Thus, our results provide empirical guidance on training set selection for the identification of essential genes on a genome-wide scale.
Comparison of fecal egg counting methods in four livestock species.
Paras, Kelsey L; George, Melissa M; Vidyashankar, Anand N; Kaplan, Ray M
2018-06-15
Gastrointestinal nematode parasites are important pathogens of all domesticated livestock species. Fecal egg counts (FEC) are routinely used for evaluating anthelmintic efficacy and for making targeted anthelmintic treatment decisions. Numerous FEC techniques exist and vary in precision and accuracy. These performance characteristics are especially important when performing fecal egg count reduction tests (FECRT). The objective of this study was to compare the accuracy and precision of three commonly used FEC methods and determine if differences existed among livestock species. In this study, we evaluated the modified-Wisconsin, 3-chamber (high-sensitivity) McMaster, and Mini-FLOTAC methods in cattle, sheep, horses, and llamas in three phases. In the first phase, we performed an egg-spiking study to assess the egg recovery rate and accuracy of the different FEC methods. In the second phase, we examined clinical samples from four different livestock species and completed multiple replicate FEC using each method. In the last phase, we assessed the cheesecloth straining step as a potential source of egg loss. In the egg-spiking study, the Mini-FLOTAC recovered 70.9% of the eggs, which was significantly higher than either the McMaster (P = 0.002) or Wisconsin (P = 0.002). In the clinical samples from ruminants, Mini-FLOTAC consistently yielded the highest EPG, revealing a significantly higher level of egg recovery (P < 0.0001). For horses and llamas, both McMaster and Mini-FLOTAC yielded significantly higher EPG than Wisconsin (P < 0.0001, P < 0.0001, P < 0.001, and P = 0.024). Mini-FLOTAC was the most accurate method and was the most precise test for both species of ruminants. The Wisconsin method was the most precise for horses and McMaster was more precise for llama samples. We compared the Wisconsin and Mini-FLOTAC methods using a modified technique where both methods were performed using either the Mini-FLOTAC sieve or cheesecloth. 
The differences in the estimated mean EPG on log scale between the Wisconsin and mini-FLOTAC methods when cheesecloth was used (P < 0.0001) and when cheesecloth was excluded (P < 0.0001) were significant, providing strong evidence that the straining step is an important source of error. The high accuracy and precision demonstrated in this study for the Mini-FLOTAC, suggest that this method can be recommended for routine use in all host species. The benefits of Mini-FLOTAC will be especially relevant when high accuracy is important, such as when performing FECRT. Copyright © 2018 Elsevier B.V. All rights reserved.
Arakawa, H; Nakajima, Y; Kurihara, Y; Niimi, H; Ishikawa, T
1996-07-01
We retrospectively investigated the diagnostic accuracy and complication rate of transthoracic core biopsy using an automated biopsy gun and compared the findings with those of aspiration needle biopsy. Seventy-three patients underwent 74 core biopsy procedures and 50 patients underwent 52 aspiration biopsy procedures. Of these, a final diagnosis was obtained in 107 lesions by surgery or clinical course. Fifteen patients in whom a final diagnosis was not obtained were excluded from the study of diagnostic accuracy. Thus, the study of diagnostic accuracy includes 63 core biopsy procedures for 62 lesions. Core biopsy was performed with an 18 G cutting needle using an automated biopsy gun. Aspiration biopsy was performed with a 20 G aspiration needle. Core biopsy yielded sufficient material in 57/63 procedures (90.5%). A correct diagnosis was obtained in 36 procedures (85.7%) for malignant lesions, and a specific benign diagnosis was obtained in 11 procedures (52.4%). Aspiration biopsy yielded a correct diagnosis in 26 procedures (81.3%) for malignant lesions and in seven (46.7%) for benign lesions. The overall correct diagnosis rates were 75.8% and 71.7% with core biopsy and aspiration biopsy, respectively. Core biopsy gave a higher predictive rate than aspiration biopsy for both benign and malignant lesions (P < 0.02). Pneumothorax occurred in 18/74 (24.3%) patients with core biopsy and in 18/45 (40.0%) patients with aspiration biopsy. Of these, three with core biopsy and two with aspiration biopsy needed tube drainage. The other complication was haemoptysis, which occurred in six patients following core biopsy and in three after aspiration biopsy. All nine cases subsided spontaneously. There were no fatal complications. Core biopsy with a biopsy gun increases diagnostic accuracy, with a higher histologic predictive rate and no obvious additional risk of complications.
Prospects and Potential Uses of Genomic Prediction of Key Performance Traits in Tetraploid Potato.
Stich, Benjamin; Van Inghelandt, Delphine
2018-01-01
Genomic prediction is a routine tool in breeding programs of most major animal and plant species. However, its usefulness for potato breeding has not yet been evaluated in detail. The objectives of this study were to (i) examine the prospects of genomic prediction of key performance traits in a diversity panel of tetraploid potato modeling additive, dominance, and epistatic effects, (ii) investigate the effects of size and make up of training set, number of test environments and molecular markers on prediction accuracy, and (iii) assess the effect of including markers from candidate genes on the prediction accuracy. With genomic best linear unbiased prediction (GBLUP), BayesA, BayesCπ, and Bayesian LASSO, four different prediction methods were used for genomic prediction of relative area under disease progress curve after a Phytophthora infestans infection, plant maturity, maturity corrected resistance, tuber starch content, tuber starch yield (TSY), and tuber yield (TY) of 184 tetraploid potato clones or subsets thereof genotyped with the SolCAP 8.3k SNP array. The cross-validated prediction accuracies with GBLUP and the three Bayesian approaches for the six evaluated traits ranged from about 0.5 to about 0.8. For traits with a high expected genetic complexity, such as TSY and TY, we observed an 8% higher prediction accuracy using a model with additive and dominance effects compared with a model with additive effects only. Our results suggest that for oligogenic traits in general and when diagnostic markers are available in particular, the use of Bayesian methods for genomic prediction is highly recommended and that the diagnostic markers should be modeled as fixed effects. The evaluation of the relative performance of genomic prediction vs. phenotypic selection indicated that the former is superior, assuming cycle lengths and selection intensities that are possible to realize in commercial potato breeding programs.
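The additive part of GBLUP can be sketched as ridge regression with a genomic relationship matrix G as the kernel. This is an illustrative reduction, not the study's exact mixed-model machinery: the variance-component ratio is fixed via lam here rather than estimated (e.g. by REML), and dominance/epistatic terms are omitted:

```python
import numpy as np

def gblup_predict(G, y_train, train_idx, test_idx, lam=1.0):
    """GBLUP sketch as kernel ridge regression: solve
    (G_tt + lam * I) alpha = y_train - mean, then predict
    g_hat = mean + G_test,train @ alpha.
    lam plays the role of sigma_e^2 / sigma_u^2."""
    G_tt = G[np.ix_(train_idx, train_idx)]
    mu = y_train.mean()
    alpha = np.linalg.solve(G_tt + lam * np.eye(len(train_idx)),
                            y_train - mu)
    return mu + G[np.ix_(test_idx, train_idx)] @ alpha
```

Cross-validated prediction accuracy, as reported above, is then the correlation between such predictions and the observed phenotypes of the held-out clones.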
Prospects and Potential Uses of Genomic Prediction of Key Performance Traits in Tetraploid Potato
Stich, Benjamin; Van Inghelandt, Delphine
2018-01-01
Genomic prediction is a routine tool in breeding programs of most major animal and plant species. However, its usefulness for potato breeding has not yet been evaluated in detail. The objectives of this study were to (i) examine the prospects of genomic prediction of key performance traits in a diversity panel of tetraploid potato, modeling additive, dominance, and epistatic effects, (ii) investigate the effects of the size and makeup of the training set, the number of test environments, and the number of molecular markers on prediction accuracy, and (iii) assess the effect of including markers from candidate genes on the prediction accuracy. Four prediction methods, genomic best linear unbiased prediction (GBLUP), BayesA, BayesCπ, and Bayesian LASSO, were used for genomic prediction of relative area under the disease progress curve after a Phytophthora infestans infection, plant maturity, maturity-corrected resistance, tuber starch content, tuber starch yield (TSY), and tuber yield (TY) of 184 tetraploid potato clones, or subsets thereof, genotyped with the SolCAP 8.3k SNP array. The cross-validated prediction accuracies with GBLUP and the three Bayesian approaches for the six evaluated traits ranged from about 0.5 to about 0.8. For traits with a high expected genetic complexity, such as TSY and TY, we observed an 8% higher prediction accuracy using a model with additive and dominance effects compared with a model with additive effects only. Our results suggest that for oligogenic traits in general, and when diagnostic markers are available in particular, the use of Bayesian methods for genomic prediction is highly recommended, and that the diagnostic markers should be modeled as fixed effects. The evaluation of the relative performance of genomic prediction vs. phenotypic selection indicated that the former is superior, assuming cycle lengths and selection intensities that are possible to realize in commercial potato breeding programs. PMID:29563919
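The additive-effects core of GBLUP can be sketched in a few lines: a genomic relationship matrix is built from centered marker dosages, and genetic values for all clones are predicted from the training records by solving a shrinkage system. This is a minimal illustration only (the variance ratio `lam` is treated as known here, whereas a real analysis estimates variance components and, for tetraploids, handles dosage coding and dominance terms); all names are hypothetical.

```python
import numpy as np

def gblup(markers, phenotypes, train_idx, lam=1.0):
    """Toy additive GBLUP: predict genetic values from a genomic
    relationship matrix built on centered marker dosages.

    markers: (n, m) dosage matrix; phenotypes: (n,) with valid entries
    for the training individuals; lam: assumed residual/genetic
    variance ratio (a known constant in this sketch).
    """
    W = markers - markers.mean(axis=0)       # centre each marker column
    G = W @ W.T / markers.shape[1]           # genomic relationship matrix
    Gtt = G[np.ix_(train_idx, train_idx)]
    y = phenotypes[train_idx]
    mu = y.mean()
    # BLUP coefficients from the training records only
    alpha = np.linalg.solve(Gtt + lam * np.eye(len(train_idx)), y - mu)
    # project onto all individuals, including untested ones
    return mu + G[:, train_idx] @ alpha
```

Cross-validated accuracy, as reported in the abstract, would then be the correlation between such predictions and held-out phenotypes.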
Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danny L. Anderson
Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square-error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve efficiency and accuracy of hydrographic feature extractions. The direct delineation method developed herein, termed “mDn”, is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, using the LiDAR data points within each sector to determine an average slope, and selecting the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matched the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
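The sector-based flow-direction step described for mDn can be sketched as follows. This is a simplified illustration of the idea only, assuming a fixed search radius and equal-angle sectors; the function and parameter names are hypothetical, not taken from the thesis.

```python
import math

def mdn_flow_direction(points, x0, y0, z0, n_sectors=8, radius=5.0):
    """Pick the flow direction as the sector with the steepest average
    downward slope, computed directly from (x, y, z) LiDAR points
    around the current position (no intermediate DEM)."""
    sums = [0.0] * n_sectors
    counts = [0] * n_sectors
    width = 2 * math.pi / n_sectors
    for x, y, z in points:
        dx, dy = x - x0, y - y0
        d = math.hypot(dx, dy)
        if 0.0 < d <= radius:
            s = int((math.atan2(dy, dx) % (2 * math.pi)) / width)
            sums[s] += (z - z0) / d      # slope from current point toward this point
            counts[s] += 1
    # empty sectors get +inf so they are never chosen as the descent direction
    slopes = [sums[i] / counts[i] if counts[i] else float("inf")
              for i in range(n_sectors)]
    best = min(range(n_sectors), key=lambda i: slopes[i])  # most negative slope
    return best, slopes[best]
```

Iterating this step from point to point traces the stream path, analogously to following D8 directions across a raster.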
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. Reconstructing an SR image by a factor of two with best accuracy requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments were conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy was compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum A Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
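The second step of the algorithm (assuming the subpixel shifts are already known from registration) might look like the nearest-neighbour sketch below: each LR pixel is placed on the HR grid at its shifted location and colliding samples are averaged. Names are illustrative, and a real implementation would substitute the RBF or inverse-distance interpolants mentioned above.

```python
import numpy as np

def project_to_hr(lr_images, shifts, factor=2):
    """Project LR images with known subpixel shifts (in LR-pixel units)
    onto an HR grid, averaging samples that land on the same HR pixel."""
    h, w = lr_images[0].shape
    H, W = h * factor, w * factor
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for img, (dy, dx) in zip(lr_images, shifts):
        # HR coordinates of each LR sample after applying its shift
        ys = np.rint(np.arange(h) * factor + dy * factor).astype(int)
        xs = np.rint(np.arange(w) * factor + dx * factor).astype(int)
        for i, Y in enumerate(ys):
            for j, X in enumerate(xs):
                if 0 <= Y < H and 0 <= X < W:
                    acc[Y, X] += img[i, j]
                    cnt[Y, X] += 1
    out = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return out, cnt > 0
```

With four LR images at the four independent half-pixel shifts, every HR pixel is covered for a factor-of-two reconstruction, matching the condition stated in the abstract.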
NASA Technical Reports Server (NTRS)
Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.
1959-01-01
A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag is evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers is comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method only requires up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution and less time for a larger number of bodies.
Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes
2017-01-01
Genome-wide selection (GWS) is a promising approach for improving selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods in predicting GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact of marker density on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, the maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed that are more competitive with classical LDA in terms of both classification accuracy and computational cost. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA, and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved classification accuracies of 95.2% and 93.2%, respectively, using a linear discriminant classifier.
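Classical LDA, the baseline that the extended methods build on, reduces in the two-class case to a single Fisher projection direction; a minimal numpy sketch is given below (illustrative only; multi-class LDA as used for movement classification generalizes this via between-class scatter, and the function name is hypothetical).

```python
import numpy as np

def fisher_lda_direction(X1, X2):
    """Two-class Fisher LDA: w ∝ Sw^{-1}(m1 - m2), the direction that
    maximizes between-class scatter relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # within-class scatter: sum of per-class (biased) covariance scatters
    Sw = np.cov(X1.T, bias=True) * len(X1) + np.cov(X2.T, bias=True) * len(X2)
    # small ridge term keeps the solve stable if Sw is near-singular
    w = np.linalg.solve(Sw + 1e-8 * np.eye(len(m1)), m1 - m2)
    return w / np.linalg.norm(w)
```

Projecting high-dimensional EMG feature vectors onto such directions is what reduces the dimensionality before the final linear discriminant classifier.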
Armon-Lotem, Sharon; Meir, Natalia
2016-11-01
Previous research demonstrates that repetition tasks are valuable tools for diagnosing specific language impairment (SLI) in monolingual children in English and a variety of other languages, with non-word repetition (NWR) and sentence repetition (SRep) yielding high levels of sensitivity and specificity. Yet, only a few studies have addressed the diagnostic accuracy of repetition tasks in bilingual children, and most available research focuses on English-Spanish sequential bilinguals. To evaluate the efficacy of three repetition tasks (forward digit span (FWD), NWR and SRep) in order to distinguish mono- and bilingual children with and without SLI in Russian and Hebrew. A total of 230 mono- and bilingual children aged 5;5-6;8 participated in the study: 144 bilingual Russian-Hebrew-speaking children (27 with SLI); and 52 monolingual Hebrew-speaking children (14 with SLI) and 34 monolingual Russian-speaking children (14 with SLI). Parallel repetition tasks were designed in both Russian and Hebrew. Bilingual children were tested in both languages. The findings confirmed that NWR and SRep are valuable tools in distinguishing monolingual children with and without SLI in Russian and Hebrew, while the results for FWD were mixed. Yet, testing of bilingual children with the same tools using monolingual cut-off points resulted in inadequate diagnostic accuracy. We demonstrate, however, that the use of bilingual cut-off points yielded acceptable levels of diagnostic accuracy. The combination of SRep tasks in L1/Russian and L2/Hebrew yielded the highest overall accuracy (i.e., 94%), but even SRep alone in L2/Hebrew showed excellent levels of sensitivity (i.e., 100%) and specificity (i.e., 89%), reaching 91% of total diagnostic accuracy. The results are very promising for identifying SLI in bilingual children and for showing that testing in the majority language with bilingual cut-off points can provide an accurate classification. 
© 2016 Royal College of Speech and Language Therapists.
Hernández-Ibáñez, C; Blazquez-Sánchez, N; Aguilar-Bernier, M; Fúnez-Liébana, R; Rivas-Ruiz, F; de Troya-Martín, M
Incisional biopsy may not always provide a correct classification of histologic subtypes of basal cell carcinoma (BCC). High-frequency ultrasound (HFUS) imaging of the skin is useful for the diagnosis and management of this tumor. The main aim of this study was to compare the diagnostic value of HFUS with that of punch biopsy for the correct classification of histologic subtypes of primary BCC. We also analyzed the influence of tumor size and histologic subtype (single subtype vs. mixed) on the diagnostic yield of HFUS and punch biopsy. Retrospective observational study of primary BCCs treated by the Dermatology Department of Hospital Costa del Sol in Marbella, Spain, between October 2013 and May 2014. Surgical excision was preceded by HFUS imaging (Dermascan C©, 20-MHz linear probe) and a punch biopsy in all cases. We compared the overall diagnostic yield and accuracy (sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) of HFUS and punch biopsy against the gold standard (excisional biopsy with serial sections) for overall and subgroup results. We studied 156 cases. The overall diagnostic yield was 73.7% for HFUS (sensitivity, 74.5%; specificity, 73%) and 79.9% for punch biopsy (sensitivity, 76%; specificity, 82%). In the subgroup analyses, HFUS had a PPV of 93.3% for superficial BCC (vs. 92% for punch biopsy). In the analysis by tumor size, HFUS achieved an overall diagnostic yield of 70.4% for tumors measuring 40 mm² or less and 77.3% for larger tumors; the NPV was 82% in both size groups. Punch biopsy performed better in the diagnosis of small lesions (overall diagnostic yield of 86.4% for lesions ≤40 mm² vs. 72.6% for lesions >40 mm²). HFUS imaging was particularly useful for ruling out infiltrating BCCs, diagnosing simple, superficial BCCs, and correctly classifying BCCs larger than 40 mm². Copyright © 2016 AEDV. Published by Elsevier España, S.L.U. All rights reserved.
Modeling of Iron K Lines: Radiative and Auger Decay Data for Fe II-Fe IX
NASA Technical Reports Server (NTRS)
Palmeri, P.; Mendoza, C.; Kallman, T. R.; Bautista, M. A.; Melendez, M.
2003-01-01
A detailed analysis of the radiative and Auger de-excitation channels of K-shell vacancy states in Fe II-Fe IX has been carried out. Level energies, wavelengths, A-values, Auger rates, and fluorescence yields have been calculated for the lowest fine-structure levels populated by photoionization of the ground state of the parent ion. Different branching ratios, namely Kα2/Kα1, Kβ/Kα, KLM/KLL, KMM/KLL, and the total K-shell fluorescence yields, ωK, obtained in the present work have been compared with other theoretical data and solid-state measurements, finding good general agreement with the latter. The Kα2/Kα1 ratio is found to be sensitive to the excitation mechanism. From these comparisons it has been possible to estimate an accuracy of approximately 10% for the present transition probabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
NASA Technical Reports Server (NTRS)
Nelson, C. C.; Nguyen, D. T.
1987-01-01
A new analysis procedure has been presented which solves for the flow variables of an annular pressure seal in which the rotor has a large static displacement (eccentricity) from the centered position. The present paper incorporates the solutions to investigate the effect of eccentricity on the rotordynamic coefficients. The analysis begins with a set of governing equations based on a turbulent bulk-flow model and Moody's friction factor equation. Perturbation of the flow variables yields a set of zeroth- and first-order equations. After integration of the zeroth-order equations, the resulting zeroth-order flow variables are used as input in the solution of the first-order equations. Further integration of the first-order pressures yields the eccentric rotordynamic coefficients. The results from this procedure compare well with available experimental and theoretical data, with accuracy as good as or slightly better than the predictions based on a finite-element model.
Donders, Jacobus; Janke, Kelly
2008-07-01
The performance of 40 children with complicated mild to severe traumatic brain injury on the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; Wechsler, 2003) was compared with that of 40 demographically matched healthy controls. Of the four WISC-IV factor index scores, only Processing Speed yielded a statistically significant group difference (p < .001) as well as a statistically significant negative correlation with length of coma (p < .01). Logistic regression, using Processing Speed to classify individual children, yielded a sensitivity of 72.50% and a specificity of 62.50%, with false positive and false negative rates both exceeding 30%. We conclude that Processing Speed has acceptable criterion validity in the evaluation of children with complicated mild to severe traumatic brain injury but that the WISC-IV should be supplemented with other measures to assure sufficient accuracy in the diagnostic process.
Optimal threshold estimation for binary classifiers using game theory.
Sanchez, Ignacio Enrique
2016-01-01
Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
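Given labeled classifier scores, the proposed operating point can be found by scanning candidate thresholds for the one where sensitivity and specificity coincide, i.e. where the ROC curve meets the descending diagonal TPR = 1 - FPR. A minimal sketch (the convention that "score >= threshold" means a positive call is an assumption of this illustration):

```python
def minimax_threshold(scores_pos, scores_neg):
    """Return the threshold where sensitivity and specificity are
    closest (ROC curve crossing the descending diagonal), together
    with the minimax accuracy 1 - FPR at that point."""
    best_t, best_gap, best_acc = None, float("inf"), 0.0
    for t in sorted(set(scores_pos) | set(scores_neg)):
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)
        spec = sum(s < t for s in scores_neg) / len(scores_neg)
        if abs(sens - spec) < best_gap:
            # at the crossing, minimax accuracy = 1 - FPR = specificity
            best_t, best_gap, best_acc = t, abs(sens - spec), spec
    return best_t, best_acc
```

This makes the "specificity equals sensitivity" rule directly computable from an empirical score distribution.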
NASA Astrophysics Data System (ADS)
Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas
2017-03-01
Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.
Evaluating the effectiveness of low cost UAV generated topography for geomorphic change detection
NASA Astrophysics Data System (ADS)
Cook, K. L.
2014-12-01
With the recent explosion in the use and availability of unmanned aerial vehicle platforms and development of easy to use structure from motion software, UAV based photogrammetry is increasingly being adopted to produce high resolution topography for the study of surface processes. UAV systems can vary substantially in price and complexity, but the tradeoffs between these and the quality of the resulting data are not well constrained. We look at one end of this spectrum and evaluate the effectiveness of a simple low cost UAV setup for obtaining high resolution topography in a challenging field setting. Our study site is the Daan River gorge in western Taiwan, a rapidly eroding bedrock gorge that we have monitored with terrestrial Lidar since 2009. The site presents challenges for the generation and analysis of high resolution topography, including vertical gorge walls, vegetation, wide variation in surface roughness, and a complicated 3D morphology. In order to evaluate the accuracy of the UAV-derived topography, we compare it with terrestrial Lidar data collected during the same survey period. Our UAV setup combines a DJI Phantom 2 quadcopter with a 16 megapixel Canon Powershot camera for a total platform cost of less than $850. The quadcopter is flown manually, and the camera is programmed to take a photograph every 5 seconds, yielding 200-250 pictures per flight. We measured ground control points and targets for both the Lidar scans and the aerial surveys using a Leica RTK GPS with 1-2 cm accuracy. UAV derived point clouds were obtained using Agisoft Photoscan software. We conducted both Lidar and UAV surveys before and after a summer typhoon season, allowing us to evaluate the reliability of the UAV survey to detect geomorphic changes in the range of one to several meters. We find that this simple UAV setup can yield point clouds with an average accuracy on the order of 10 cm compared to the Lidar point clouds. 
Well-distributed and accurately located ground control points are critical, but we achieved good accuracy even with relatively few ground control points (25) over a 150,000 m² area. The large number of photographs taken during each flight also allows us to explore the reproducibility of the UAV-derived topography by generating point clouds from different subsets of photographs taken of the same area during a single survey.
NASA Astrophysics Data System (ADS)
Landry, Brian R.; Subotnik, Joseph E.
2011-11-01
We evaluate the accuracy of Tully's surface hopping algorithm for the spin-boson model for the case of a small diabatic coupling parameter (V). We calculate the transition rates between diabatic surfaces, and we compare our results to the expected Marcus rates. We show that standard surface hopping yields an incorrect scaling with diabatic coupling (linear in V), which we demonstrate is due to an incorrect treatment of decoherence. By modifying standard surface hopping to include decoherence events, we recover the correct scaling (~V²).
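For context, the Marcus rate against which the hopping rates are benchmarked has the standard golden-rule form (a textbook result, reproduced here for reference rather than taken from this paper):

```latex
k_{\text{Marcus}} = \frac{2\pi}{\hbar}\,\lvert V\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]
```

where λ is the reorganization energy and ΔG° the driving force; the |V|² prefactor is the ~V² scaling that the decoherence-corrected algorithm recovers.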
A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection
Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta
2016-01-01
This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated using the Genetic Algorithm, taking preobserved actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of the shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared with recent techniques and yields better results in terms of the F1 score. PMID:27127500
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2011-01-01
Slide presentation discusses: (1) modifications to JPL 5.9.12 compared with V5.9.1; (2) results showing that V5.9.12 O, with the original water vapor sounding channels, is preferable to V5.9.12 N with Antonia Gambacorta's new water vapor channels; (3) comparison of V5.9.12, V5.9.12 AO, V5.9.1, and V5.0; (4) accuracy and yield of channel-by-channel Quality Controlled clear-column radiances R(sub i); and (5) plans for Version-7.
Accuracy of endoscopic and videofluoroscopic evaluations of swallowing for oropharyngeal dysphagia.
Giraldo-Cadavid, Luis Fernando; Leal-Leaño, Lorena Renata; Leon-Basantes, Guillermo Alfredo; Bastidas, Alirio Rodrigo; Garcia, Rafael; Ovalle, Sergio; Abondano-Garavito, Jorge E
2017-09-01
A systematic review and meta-analysis of the literature was conducted to compare the accuracy with which flexible endoscopic evaluation of swallowing (FEES) and videofluoroscopic swallowing study (VFSS) assessed oropharyngeal dysphagia in adults. PubMed, Embase, and the Latin American and Caribbean Health Sciences Literature (LILACS) database were searched. A review of published studies was conducted in parallel by two groups of researchers. We evaluated the methodological quality, homogeneity, threshold effect, and publication bias. The results are presented as originally published, then with each test compared against the other as a reference and both compared against a composite reference standard, and then pooled using a random effects model. Analyses were performed using Meta-DiSc and SPSS. The search yielded 5,697 articles. Fifty-two articles were reviewed in full text, and six articles were included in the meta-analysis. FEES showed greater sensitivity than VFSS for aspiration (0.88 vs. 0.77; P = .03), penetration (0.97 vs. 0.83; P = .0002), and laryngopharyngeal residues (0.97 vs. 0.80; P < .0001). Sensitivity to detect premature pharyngeal spillage was similar for both tests (VFSS: 0.80; FEES: 0.69; P = .28). The specificities of both tests were similar (range, 0.93-0.98). In the sensitivity analysis there were statistically significant differences between the tests regarding residues but only marginally significant differences regarding aspiration and penetration. FEES had a slight advantage over VFSS to detect aspiration, penetration, and residues. Prospective studies comparing both tests against an appropriate reference standard are needed to define which test has greater accuracy. Level of evidence: 2a. Laryngoscope, 127:2002-2010, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
NOBLE - Flexible concept recognition for large-scale biomedical natural language processing.
Tseytlin, Eugene; Mitchell, Kevin; Legowski, Elizabeth; Corrigan, Julia; Chavan, Girish; Jacobson, Rebecca S
2016-01-14
Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system's matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE's performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines.
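A greedy longest-match strategy of the kind NOBLE Coder's algorithm generalizes can be sketched in a few lines. This is purely illustrative: the actual system supports configurable matching strategies, word-order variation, and multiple terminology formats not shown here, and the vocabulary keys/concept identifiers below are hypothetical.

```python
def greedy_match(tokens, vocabulary, max_len=5):
    """Greedy left-to-right longest match of token spans against a
    term -> concept dictionary (keys assumed lowercase).

    Returns (start, end, concept) triples over token indices."""
    matches, i = [], 0
    while i < len(tokens):
        # try the longest span first, shrink until a term matches
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            term = " ".join(tokens[i:i + n]).lower()
            if term in vocabulary:
                matches.append((i, i + n, vocabulary[term]))
                i += n          # greedy: skip past the matched span
                break
        else:
            i += 1              # no term starts here; advance one token
    return matches
```

Benchmarking such a matcher, as in the abstract, amounts to comparing its emitted spans and concepts against gold-standard annotations from corpora like CRAFT or ShARe.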
Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo
2017-05-01
Testing for venous thromboembolism (VTE) is associated with cost and risk to patients (e.g. radiation). To assess the appropriateness of imaging utilization at the provider level, it is important to know that provider's diagnostic yield (percentage of tests positive for the diagnostic entity of interest). However, determining diagnostic yield typically requires either time-consuming, manual review of radiology reports or the use of complex and/or proprietary natural language processing software. The objectives of this study were twofold: 1) to develop and implement a simple, user-configurable, and open-source natural language processing tool to classify radiology reports with high accuracy and 2) to use the results of the tool to design a provider-specific VTE imaging dashboard, consisting of both utilization rate and diagnostic yield. Two physicians reviewed a training set of 400 lower extremity ultrasound (UTZ) and computed tomography pulmonary angiogram (CTPA) reports to understand the language used in VTE-positive and VTE-negative reports. The insights from this review informed the arguments to the five modifiable parameters of the NLP tool. A validation set of 2,000 studies was then independently classified by the reviewers and by the tool; the classifications were compared and the performance of the tool was calculated. The tool was highly accurate in classifying the presence and absence of VTE for both the UTZ (sensitivity 95.7%; 95% CI 91.5-99.8, specificity 100%; 95% CI 100-100) and CTPA reports (sensitivity 97.1%; 95% CI 94.3-99.9, specificity 98.6%; 95% CI 97.8-99.4). The diagnostic yield was then calculated at the individual provider level and the imaging dashboard was created. We have created a novel NLP tool designed for users without a background in computer programming, which has been used to classify venous thromboembolism reports with a high degree of accuracy. The tool is open-source and available for download at http://iturrate.com/simpleNLP. 
Results obtained using this tool can be applied to enhance quality by presenting information about utilization and yield to providers via an imaging dashboard. Copyright © 2017 Elsevier B.V. All rights reserved.
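A rule-of-thumb version of such a report classifier, pairing finding keywords with clause-level negation scrubbing, can be sketched as follows. This is an illustration only: the actual tool's five modifiable parameters and patterns differ and are user-configurable, and the patterns below are hypothetical.

```python
import re

# clauses introduced by a negation cue, up to the end of the sentence
NEGATIONS = re.compile(r"\b(no|without|negative for)\b[^.]*", re.IGNORECASE)
# finding terms that would mark a report VTE-positive
FINDING = re.compile(r"\b(thrombus|thrombosis|embolism|dvt)\b", re.IGNORECASE)

def classify_report(text):
    """Classify a radiology report as VTE-positive if a finding term
    appears outside a negated clause."""
    scrubbed = NEGATIONS.sub(" ", text)   # drop negated clauses first
    return bool(FINDING.search(scrubbed))
```

Aggregating such per-report labels by ordering provider gives the two dashboard quantities described above: utilization rate (tests ordered) and diagnostic yield (fraction positive).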
Fused methods for visual saliency estimation
NASA Astrophysics Data System (ADS)
Danko, Amanda S.; Lyu, Siwei
2015-02-01
In this work, we present a new model of visual saliency by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements upon recall and f-measure metrics with comparable precisions. In addition, we find that all image searches using our fused method return more correct images and additionally rank them higher than the searches using the original methods alone.
Influence of Tension Stiffening on the Flexural Stiffness of Reinforced Concrete Circular Sections
Morelli, Francesco; Amico, Cosimo; Salvatore, Walter; Squeglia, Nunziante; Stacul, Stefano
2017-01-01
Within this paper, the assessment of tension stiffening effects on a reinforced concrete element with circular section subjected to axial and bending loads is presented. To this purpose, an enhancement of an analytical model already present in the current technical literature is proposed. The accuracy of the enhanced method is assessed by comparing the experimental results carried out in past research and the numerical ones obtained by the model. Finally, a parametric study is executed in order to study the influence of axial compressive force on the flexural stiffness of reinforced concrete elements that are characterized by a circular section, comparing the secant stiffness evaluated at yielding and at maximum resistance, considering and not considering the effects of tension stiffening. PMID:28773028
Comparison of Histograms for Use in Cloud Observation and Modeling
NASA Technical Reports Server (NTRS)
Green, Lisa; Xu, Kuan-Man
2005-01-01
Cloud observation and cloud modeling data can be presented in histograms for each characteristic to be measured. Combining information from single-cloud histograms yields a summary histogram. Summary histograms can be compared to each other to reach conclusions about the behavior of an ensemble of clouds in different places at different times or about the accuracy of a particular cloud model. As in any scientific comparison, it is necessary to decide whether any apparent differences are statistically significant. The usual methods of deciding statistical significance when comparing histograms do not apply in this case because they assume independent data. Thus, a new method is necessary. The proposed method uses the Euclidean distance metric and bootstrapping to calculate the significance level.
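The proposed test, the Euclidean distance between summary histograms with a bootstrapped significance level, can be sketched as follows. This is an illustrative reconstruction under a pooled-resampling null, not the authors' code; the toy histograms are invented:

```python
import math
import random

def euclid(h1, h2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def summary(hists):
    """Combine single-cloud histograms into a summary (mean) histogram."""
    n = len(hists)
    return [sum(h[i] for h in hists) / n for i in range(len(hists[0]))]

def bootstrap_pvalue(group_a, group_b, n_boot=2000, seed=0):
    """Significance level for the Euclidean distance between two summary
    histograms, using pooled resampling with replacement as the null."""
    rng = random.Random(seed)
    observed = euclid(summary(group_a), summary(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_boot):
        a = [rng.choice(pooled) for _ in group_a]
        b = [rng.choice(pooled) for _ in group_b]
        if euclid(summary(a), summary(b)) >= observed:
            count += 1
    return count / n_boot

# toy single-cloud histograms (4 bins each, e.g. cloud-top height frequencies)
ensemble_1 = [[0.5, 0.3, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1], [0.5, 0.4, 0.05, 0.05]]
ensemble_2 = [[0.1, 0.2, 0.3, 0.4], [0.2, 0.2, 0.3, 0.3], [0.1, 0.3, 0.3, 0.3]]
print(bootstrap_pvalue(ensemble_1, ensemble_2))  # small p-value: ensembles differ
```

Resampling whole single-cloud histograms (rather than individual observations) is what sidesteps the independence assumption that invalidates standard chi-square style comparisons here.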
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakjevskii, V; Knill, C; Rakowski, J
2014-06-01
Purpose: To develop a comprehensive end-to-end test for Varian's TrueBeam linear accelerator for head and neck IMRT using a custom phantom designed to utilize multiple dosimetry devices. Methods: The initial end-to-end test and custom H and N phantom were designed to yield maximum information in anatomical regions significant to H and N plans with respect to: i) geometric accuracy, ii) dosimetric accuracy, and iii) treatment reproducibility. The phantom was designed in collaboration with Integrated Medical Technologies. A CT image was taken with a 1 mm slice thickness. The CT was imported into Varian's Eclipse treatment planning system, where OARs and the PTV were contoured. A clinical template was used to create an eight-field static gantry angle IMRT plan. After optimization, dose was calculated using the Analytic Anisotropic Algorithm with inhomogeneity correction. Plans were delivered with a TrueBeam equipped with a high definition MLC. Preliminary end-to-end results were measured using film and ion chambers. Ion chamber dose measurements were compared to the TPS. Films were analyzed with FilmQAPro using the composite gamma index. Results: Film analysis for the initial end-to-end plan with a geometrically simple PTV showed average gamma pass rates >99% with a passing criterion of 3%/3 mm. Film analysis of a plan with a more realistic, i.e., complex, PTV yielded pass rates >99% in clinically important regions containing the PTV, spinal cord, and parotid glands. Ion chamber measurements were on average within 1.21% of calculated dose for both plans. Conclusion: Trials have demonstrated that our end-to-end testing methods provide baseline values for the dosimetric and geometric accuracy of Varian's TrueBeam system.
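The film analysis above relies on the gamma index with a 3%/3 mm criterion. As an illustration of the underlying computation (not the FilmQAPro implementation), here is a brute-force, 1D, globally normalized gamma sketch; the dose profiles and spacing are invented:

```python
import math

def gamma_index(dose_eval, dose_ref, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified global 1D gamma: for each evaluated point, minimize the
    combined dose-difference / distance-to-agreement criterion over all
    reference points (no sub-grid interpolation)."""
    d_max = max(dose_ref)  # global normalization dose
    gammas = []
    for i, de in enumerate(dose_eval):
        best = float("inf")
        for j, dr in enumerate(dose_ref):
            dd = (de - dr) / (dose_tol * d_max)      # dose criterion (3%)
            dx = (i - j) * spacing_mm / dist_tol_mm  # distance criterion (3 mm)
            best = min(best, math.sqrt(dd * dd + dx * dx))
        gammas.append(best)
    return gammas

# invented, roughly peaked dose profiles (arbitrary units)
ref_profile = [0.0, 0.2, 0.8, 1.0, 0.8, 0.2, 0.0]
eval_profile = [0.0, 0.21, 0.82, 1.0, 0.79, 0.2, 0.0]
g = gamma_index(eval_profile, ref_profile, spacing_mm=1.0)
pass_rate = sum(v <= 1.0 for v in g) / len(g)
print(pass_rate)  # 1.0 (all points pass the 3%/3 mm criterion)
```

A point passes when its gamma value is at most 1; the pass rate over a region is the statistic quoted in the abstract. Clinical software adds 2D/3D search, sub-pixel interpolation, and low-dose thresholds on top of this core idea.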
Accuracy and reliability of the Pfeffer Questionnaire for the Brazilian elderly population
Dutra, Marina Carneiro; Ribeiro, Raynan dos Santos; Pinheiro, Sarah Brandão; de Melo, Gislane Ferreira; Carvalho, Gustavo de Azevedo
2015-01-01
The aging population calls for instruments to assess functional and cognitive impairment in the elderly, aiming to prevent conditions that affect functional abilities. Objective To verify the accuracy and reliability of the Pfeffer (FAQ) scale for the Brazilian elderly population and to evaluate the reliability and reproducibility of the translated version of the Pfeffer Questionnaire. Methods The Brazilian version of the FAQ was applied to 110 elderly divided into two groups. Both groups were assessed by two blinded investigators at baseline and again after 15 days. In order to verify the accuracy and reliability of the instrument, sensitivity and specificity measurements for the presence or absence of functional and cognitive decline were calculated for various cut-off points and the ROC curve. Intra- and inter-examiner reliability were assessed using the Intraclass Correlation Coefficient (ICC) and Bland-Altman plots. Results For the occurrence of cognitive decline, the ROC curve yielded an area under the curve of 0.909 (95%CI of 0.845 to 0.972), sensitivity of 75.68% (95%CI of 93.52% to 100%) and specificity of 97.26%. For the occurrence of functional decline, the ROC curve yielded an area under the curve of 0.851 (95%CI of 64.52% to 87.33%) and specificity of 80.36% (95%CI of 69.95% to 90.76%). The ICC was excellent, with all values exceeding 0.75. On the Bland-Altman plot, intra-examiner agreement was good, with p > 0.05 and differences consistently close to 0. A systematic difference was found for inter-examiner agreement. Conclusion The Pfeffer Questionnaire is applicable in the Brazilian elderly population and showed reliability and reproducibility compared to the original test. PMID:29213959
The impact of system matrix dimension on small FOV SPECT reconstruction with truncated projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Chung; Wu, Jing; Liu, Chi
Purpose: A dedicated cardiac hybrid single photon emission computed tomography (SPECT)/CT scanner that uses cadmium zinc telluride detectors and multiple pinhole collimators for stationary acquisition offers many advantages. However, the impact of the reconstruction system matrix (SM) dimension on the reconstructed image quality from truncated projections and 19 angular samples acquired on this scanner has not been extensively investigated. In this study, the authors aimed to investigate the impact of the dimensions of SM and the use of body contour derived from adjunctive CT imaging as an object support in reconstruction on this scanner, in relation to background extracardiac activity. Methods: The authors first simulated a generic SPECT/CT system to image four NCAT phantoms with various levels of extracardiac activity and compared the reconstructions using SM in different dimensions and with/without body contour as a support for quantitative evaluations. The authors then compared the reconstructions of 18 patient studies, which were acquired on a GE Discovery NM570c scanner following injection of different radiotracers, including 99mTc-Tetrofosmin and 123I-mIBG, comparing the scanner’s default SM that incompletely covers the body with a large SM that incorporates a patient specific full body contour. Results: The simulation studies showed that the reconstructions using a SM that only partially covers the body yielded artifacts on the edge of the field of view (FOV), overestimation of activity and increased nonuniformity in the blood pool for the phantoms with higher relative levels of extracardiac activity. However, the impact on the quantitative accuracy in the high activity region, such as the myocardium, was subtle. On the other hand, an excessively large SM that enclosed the entire body alleviated the artifacts and reduced overestimation in the blood pool, but yielded slight underestimation in myocardium and defect regions.
The reconstruction using the larger SM with body contour yielded the most quantitatively accurate results in all the regions of interest for a range of uptake levels in the extracardiac regions. In patient studies, the SM incorporating patient specific body contour minimized extracardiac artifacts, yielded similar myocardial activity, lower blood pool activity, and subsequently improved myocardium-to-blood pool contrast (p < 0.0001) by an average of 7% (range 0%–18%) across all the patients, compared to the reconstructions using the scanner’s default SM. Conclusions: Their results demonstrate that using a large SM that incorporates a CT derived body contour in the reconstruction could improve quantitative accuracy within the FOV for clinical studies with high extracardiac activity.
NASA Astrophysics Data System (ADS)
Susanti, Yuliana; Zukhronah, Etik; Pratiwi, Hasih; Respatiwulan; Sri Sulistijowati, H.
2017-11-01
To achieve food resilience in Indonesia, food diversification by exploring the potential of local foods is required. Corn is an alternative staple food of Javanese society. For that reason, corn production needs to be improved by considering the influencing factors. CHAID and CRT are data mining methods which can be used to classify the influencing variables. The present study seeks to uncover information on the potential local availability of corn in the regencies and cities of Java Island. CHAID analysis yields four classifications with an accuracy of 78.8%, while CRT analysis yields seven classifications with an accuracy of 79.6%.
Accuracy and usefulness of the HEDIS childhood immunization measures.
Bundy, David G; Solomon, Barry S; Kim, Julia M; Miller, Marlene R
2012-04-01
With the use of Centers for Disease Control and Prevention (CDC) immunization recommendations as the gold standard, our objectives were to measure the accuracy ("is this child up-to-date on immunizations?") and usefulness ("is this child due for catch-up immunizations?") of the Healthcare Effectiveness Data and Information Set (HEDIS) childhood immunization measures. For children aged 24 to 35 months from the 2009 National Immunization Survey, we assessed the accuracy and usefulness of the HEDIS childhood immunization measures for 6 individual immunizations and a composite. A total of 12 096 children met all inclusion criteria and composed the study sample. The HEDIS measures had >90% accuracy when compared with the CDC gold standard for each of the 6 immunizations (range, 94.3%-99.7%) and the composite (93.8%). The HEDIS measure was least accurate for hepatitis B and pneumococcal conjugate immunizations. The proportion of children for which the HEDIS measure yielded a nonuseful result (ie, an incorrect answer to the question, "is this child due for catch-up immunization?") ranged from 0.33% (varicella) to 5.96% (pneumococcal conjugate). The most important predictor of HEDIS measure accuracy and usefulness was the CDC-recommended number of immunizations due at age 2 years; children with zero or all immunizations due were the most likely to be correctly classified. HEDIS childhood immunization measures are, on the whole, accurate and useful. Certain immunizations (eg, hepatitis B, pneumococcal conjugate) and children (eg, those with a single overdue immunization), however, are more prone to HEDIS misclassification.
Giménez, Beatriz; Pradíes, Guillermo; Martínez-Rus, Francisco; Özcan, Mutlu
2015-01-01
To evaluate the accuracy of two digital impression systems based on the same technology but different postprocessing correction modes of customized software, with consideration of several clinical parameters. A maxillary master model with six implants located in the second molar, second premolar, and lateral incisor positions was fitted with six cylindrical scan bodies. Scan bodies were placed at different angulations or depths apical to the gingiva. Two experienced and two inexperienced operators performed scans with either 3D Progress (MHT) or ZFX Intrascan (Zimmer Dental). Five different distances between implants (scan bodies) were measured, yielding five data points per impression and 100 per impression system. Measurements made with a high-accuracy three-dimensional coordinate measuring machine (CMM) of the master model acted as the true values. The values obtained from the digital impressions were subtracted from the CMM values to identify the deviations. The differences between experienced and inexperienced operators and implant angulation and depth were compared statistically. Experience of the operator, implant angulation, and implant depth were not associated with significant differences in deviation from the true values with both 3D Progress and ZFX Intrascan. Accuracy in the first scanned quadrant was significantly better with 3D Progress, but ZFX Intrascan presented better accuracy in the full arch. Neither of the two systems tested would be suitable for digital impression of multiple-implant prostheses. Because of the errors, further development of both systems is required.
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to incorporate the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater.
Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
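As a sketch of how a spatial-domain, Gaussian-function accuracy prediction and its AUC evaluation fit together, here is an illustrative toy, not the authors' implementation; the sample points, bandwidth, and query locations are invented:

```python
import math

def predict_accuracy(sample_pts, query_pt, bandwidth):
    """Gaussian-kernel interpolation of binary correctness labels (1 =
    correctly classified pixel) from a test sample to a query location:
    a toy spatial-domain, Gaussian-function predictor."""
    num = den = 0.0
    for x, y, correct in sample_pts:
        d2 = (x - query_pt[0]) ** 2 + (y - query_pt[1]) ** 2
        k = math.exp(-d2 / (2.0 * bandwidth ** 2))
        num += k * correct
        den += k
    return num / den

def auc(scores, labels):
    """Rank-based AUC: probability a correct pixel outscores an incorrect one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy test sample: classification is accurate on the left side of the block only
sample = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (9, 0, 0), (10, 1, 0), (9, 1, 0)]
queries = [(0.5, 0.5), (1.5, 0.0), (9.5, 0.5), (10.0, 0.0)]
truth = [1, 1, 0, 0]
scores = [predict_accuracy(sample, q, bandwidth=2.0) for q in queries]
print(auc(scores, truth))  # 1.0 on this cleanly separable toy example
```

The spectral-domain variant studied in the paper works the same way, except the kernel distance is computed in reflectance-band space rather than geographic space.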
Cryobiopsy: Should This Be Used in Place of Endobronchial Forceps Biopsies?
Rubio, Edmundo R.; Le, Susanti R.; Whatley, Ralph E.; Boyd, Michael B.
2013-01-01
Forceps biopsies of airway lesions have variable yields. The yield increases when combining techniques in order to collect more material. With the use of cryotherapy probes (cryobiopsy) larger specimens can be obtained, resulting in an increase in the diagnostic yield. However, the utility and safety of cryobiopsy with all types of lesions, including flat mucosal lesions, is not established. Aims. Demonstrate the utility/safety of cryobiopsy versus forceps biopsy to sample exophytic and flat airway lesions. Settings and Design. Teaching hospital-based retrospective analysis. Methods. Retrospective analysis of patients undergoing cryobiopsies (singly or combined with forceps biopsies) from August 2008 through August 2010. Statistical Analysis. Wilcoxon signed-rank test. Results. The comparative analysis of 22 patients with cryobiopsy and forceps biopsy of the same lesion showed the mean volumes of material obtained with cryobiopsy were significantly larger (0.696 cm3 versus 0.0373 cm3, P = 0.0014). Of 31 cryobiopsies performed, one had minor bleeding. Cryobiopsy allowed sampling of exophytic and flat lesions that were located centrally or distally. Cryobiopsies were shown to be safe, free of artifact, and provided a diagnostic yield of 96.77%. Conclusions. Cryobiopsy allows safe sampling of exophytic and flat airway lesions, with larger specimens, excellent tissue preservation and high diagnostic accuracy. PMID:24066296
Lane, Shannon J; Heddle, Nancy M; Arnold, Emmy; Walker, Irwin
2006-01-01
Background Handheld computers are increasingly favoured over paper and pencil methods to capture data in clinical research. Methods This study systematically identified and reviewed randomized controlled trials (RCTs) that compared the two methods for self-recording and reporting data, and where at least one of the following outcomes was assessed: data accuracy; timeliness of data capture; and adherence to protocols for data collection. Results A comprehensive key word search of NLM Gateway's database yielded 9 studies fitting the criteria for inclusion. Data extraction was performed and checked by two of the authors. None of the studies included all outcomes. The results overall, favor handheld computers over paper and pencil for data collection among study participants but the data are not uniform for the different outcomes. Handheld computers appear superior in timeliness of receipt and data handling (four of four studies) and are preferred by most subjects (three of four studies). On the other hand, only one of the trials adequately compared adherence to instructions for recording and submission of data (handheld computers were superior), and comparisons of accuracy were inconsistent between five studies. Conclusion Handhelds are an effective alternative to paper and pencil modes of data collection; they are faster and were preferred by most users. PMID:16737535
Heidaritabar, M; Wolc, A; Arango, J; Zeng, J; Settar, P; Fulton, J E; O'Sullivan, N P; Bastiaansen, J W M; Fernando, R L; Garrick, D J; Dekkers, J C M
2016-10-01
Most genomic prediction studies fit only additive effects in models to estimate genomic breeding values (GEBV). However, if dominance genetic effects are an important source of variation for complex traits, accounting for them may improve the accuracy of GEBV. We investigated the effect of fitting dominance and additive effects on the accuracy of GEBV for eight egg production and quality traits in a purebred line of brown layers using pedigree or genomic information (42K single-nucleotide polymorphism (SNP) panel). Phenotypes were corrected for the effect of hatch date. Additive and dominance genetic variances were estimated using genomic-based [genomic best linear unbiased prediction (GBLUP)-REML and BayesC] and pedigree-based (PBLUP-REML) methods. Breeding values were predicted using a model that included both additive and dominance effects and a model that included only additive effects. The reference population consisted of approximately 1800 animals hatched between 2004 and 2009, while approximately 300 young animals hatched in 2010 were used for validation. Accuracy of prediction was computed as the correlation between phenotypes and estimated breeding values of the validation animals divided by the square root of the estimate of heritability in the whole population. The proportion of dominance variance to total phenotypic variance ranged from 0.03 to 0.22 with PBLUP-REML across traits, from 0 to 0.03 with GBLUP-REML and from 0.01 to 0.05 with BayesC. Accuracies of GEBV ranged from 0.28 to 0.60 across traits. Inclusion of dominance effects did not improve the accuracy of GEBV, and differences in their accuracies between genomic-based methods were small (0.01-0.05), with GBLUP-REML yielding higher prediction accuracies than BayesC for egg production, egg colour and yolk weight, while BayesC yielded higher accuracies than GBLUP-REML for the other traits. 
In conclusion, fitting dominance effects did not impact accuracy of genomic prediction of breeding values in this population. © 2016 Blackwell Verlag GmbH.
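The accuracy definition used in the validation above (correlation between phenotypes and estimated breeding values, divided by the square root of heritability) is simple to compute. A minimal sketch with invented phenotypes, GEBV, and heritability; the function names and numbers are illustrative, not from the study:

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def prediction_accuracy(phenotypes, gebv, heritability):
    """Accuracy as defined in the study: corr(y, GEBV) / sqrt(h2)."""
    return pearson(phenotypes, gebv) / math.sqrt(heritability)

# invented validation animals: corrected phenotypes and their GEBV
y = [10.2, 11.5, 9.8, 12.0, 10.9, 11.1]
gebv = [0.5, 0.6, -0.9, 0.8, -0.2, 0.3]
print(round(prediction_accuracy(y, gebv, heritability=0.8), 2))  # 0.84
```

Dividing by the square root of heritability rescales the phenotype-based correlation toward the correlation with true breeding values, which is why the reported accuracies can be compared across traits with different heritabilities.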
Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin
2012-06-01
Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps, which were generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy, as shown by the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded consistent errors when predicting NDVI values from the equation derived by linear regression analysis. The average errors from both proposed atmospheric correction methods were less than 10%.
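The NDVI computation and the NDVI-LST regression step can be sketched as follows; the reflectance and temperature values are invented for illustration, not the study's Landsat data:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def linear_fit(x, y):
    """Ordinary least squares y = a + b*x, as in the NDVI-LST regression."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    num = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    den = sum((xi - xm) ** 2 for xi in x)
    b = num / den
    a = ym - b * xm
    return a, b

# invented pixels: NIR/red reflectances and retrieved LST in kelvin
nir = [0.45, 0.50, 0.30, 0.25, 0.55]
red = [0.10, 0.08, 0.20, 0.22, 0.06]
lst = [298.0, 297.0, 303.0, 304.5, 296.0]
v = [ndvi(n_, r_) for n_, r_ in zip(nir, red)]
a, b = linear_fit(v, lst)
print(b < 0)  # True here: denser vegetation (higher NDVI) pairs with cooler surfaces
```

Checking randomly chosen points against the fitted equation, as the study does with 20 points per image, amounts to evaluating `a + b * ndvi` and comparing the prediction with the observed value.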
Accuracy of genomic predictions in Gyr (Bos indicus) dairy cattle.
Boison, S A; Utsunomiya, A T H; Santos, D J A; Neves, H H R; Carvalheiro, R; Mészáros, G; Utsunomiya, Y T; do Carmo, A S; Verneque, R S; Machado, M A; Panetto, J C C; Garcia, J F; Sölkner, J; da Silva, M V G B
2017-07-01
Genomic selection may accelerate genetic progress in breeding programs of indicine breeds when compared with traditional selection methods. We present results of genomic predictions in Gyr (Bos indicus) dairy cattle of Brazil for milk yield (MY), fat yield (FY), protein yield (PY), and age at first calving using information from bulls and cows. Four different single nucleotide polymorphism (SNP) chips were studied. Additionally, the effect of the use of imputed data on genomic prediction accuracy was studied. A total of 474 bulls and 1,688 cows were genotyped with the Illumina BovineHD (HD; San Diego, CA) and BovineSNP50 (50K) chip, respectively. Genotypes of cows were imputed to HD using FImpute v2.2. After quality check of data, 496,606 markers remained. The HD markers present on the GeneSeek SGGP-20Ki (15,727; Lincoln, NE), 50K (22,152), and GeneSeek GGP-75Ki (65,018) were subset and used to assess the effect of lower SNP density on accuracy of prediction. Deregressed breeding values were used as pseudophenotypes for model training. Data were split into reference and validation to mimic a forward prediction scheme. The reference population consisted of animals whose birth year was ≤2004 and consisted of either only bulls (TR1) or a combination of bulls and dams (TR2), whereas the validation set consisted of younger bulls (born after 2004). Genomic BLUP was used to estimate genomic breeding values (GEBV), and the reliability of GEBV (R²PEV) was based on the prediction error variance approach. Reliability of GEBV ranged from ∼0.46 (FY and PY) to 0.56 (MY) with TR1 and from 0.51 (PY) to 0.65 (MY) with TR2. When averaged across all traits, R²PEV were substantially higher (R²PEV of TR1 = 0.50 and TR2 = 0.57) compared with reliabilities of parent averages (0.35) computed from pedigree data and based on diagonals of the coefficient matrix (prediction error variance approach).
Reliability was similar for all 4 marker panels using either TR1 or TR2, except that the imputed HD cow data set led to an inflation of reliability. Reliability of GEBV could be increased by enlarging the limited bull reference population with cow information. A reduced panel of ∼15K markers resulted in reliabilities similar to using HD markers. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Scullin, Matthew H.; Bonner, Karri
2006-01-01
The current study examined the relations among 3- to 5-year-olds' theory of mind, inhibitory control, and three measures of suggestibility: yielding to suggestive questions (yield), shifting answers in response to negative feedback (shift), and accuracy in response to misleading questions during a pressured interview about a live event. Theory of…
Quantitative and descriptive comparison of four acoustic analysis systems: vowel measurements.
Burris, Carlyn; Vorperian, Houri K; Fourakis, Marios; Kent, Ray D; Bolt, Daniel M
2014-02-01
This study examines the accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL, using synthesized and natural vowels. Features of AASPs are also described. Synthesized and natural vowels were analyzed using each AASP's default settings to obtain 9 acoustic measures: fundamental frequency (F0), formant frequencies (F1-F4), and formant bandwidths (B1-B4). The discrepancy between the software-measured values and the input values (synthesized, previously reported, and manual measurements) was used to assess comparability and accuracy. Basic AASP features are described. Results indicate that Praat, WaveSurfer, and TF32 generate accurate and comparable F0 and F1-F4 data for synthesized vowels and adult male natural vowels. Results varied by vowel for women and children, with some serious errors. Bandwidth measurements by AASPs were highly inaccurate compared with manual measurements and published data on formant bandwidths. Values of F0 and F1-F4 are generally consistent and fairly accurate for adult vowels and for some child vowels using the default settings in Praat, WaveSurfer, and TF32. Manipulation of default settings yields improved output values in TF32 and CSL. Caution is recommended especially before accepting F1-F4 results for children and B1-B4 results for all speakers.
Bisgin, Halil; Bera, Tanmay; Ding, Hongjian; Semey, Howard G; Wu, Leihong; Liu, Zhichao; Barnes, Amy E; Langley, Darryl A; Pava-Ripoll, Monica; Vyas, Himansu J; Tong, Weida; Xu, Joshua
2018-04-25
Insect pests, such as pantry beetles, are often associated with food contamination and public health risks. Machine learning has the potential to provide a more accurate and efficient solution for detecting their presence in food products, which is currently done manually. In our previous research, we demonstrated such feasibility, showing that Artificial Neural Network (ANN) based pattern recognition techniques could be implemented for species identification in the context of food safety. In this study, we present a Support Vector Machine (SVM) model which improved the average accuracy up to 85%. By contrast, the ANN method yielded ~80% accuracy after extensive parameter optimization. Both methods showed excellent genus-level identification, but SVM showed slightly better accuracy for most species. Highly accurate species-level identification remains a challenge, especially in distinguishing between species from the same genus, which may require improvements in both imaging and machine learning techniques. In summary, our work illustrates a new SVM-based technique and provides a good comparison with the ANN model in our context. We believe such insights will help pave a better way forward for the application of machine learning towards species identification and food safety.
Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben
2013-11-01
Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
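Weighting strategy (b) above, weights from the squared predicted signals, refined over a few passes, can be illustrated on a simpler mono-exponential decay model. The study fits full diffusion models; the b-values, ground truth, and function name here are assumptions for a self-contained sketch:

```python
import math

def wls_fit(bvals, signals, n_iter=2):
    """Weighted linear least squares fit of ln S = ln S0 - b * ADC.
    Weights are the squared *predicted* signals (strategy (b) in the
    study), refined iteratively; the first pass is plain OLS."""
    y = [math.log(s) for s in signals]
    w = [1.0] * len(y)
    slope = intercept = 0.0
    for _ in range(n_iter + 1):
        # closed-form weighted linear regression of y on x = bvals
        sw = sum(w)
        xm = sum(wi * b for wi, b in zip(w, bvals)) / sw
        ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (b - xm) ** 2 for wi, b in zip(w, bvals))
        sxy = sum(wi * (b - xm) * (yi - ym) for wi, b, yi in zip(w, bvals, y))
        slope = sxy / sxx
        intercept = ym - slope * xm
        # next-round weights from model-predicted (not noisy measured) signals
        w = [math.exp(intercept + slope * b) ** 2 for b in bvals]
    return math.exp(intercept), -slope

bvals = [0, 200, 400, 600, 800, 1000]           # b-values in s/mm^2
s0_true, adc_true = 100.0, 1.0e-3               # illustrative ground truth
signals = [s0_true * math.exp(-b * adc_true) for b in bvals]  # noise-free
s0, adc = wls_fit(bvals, signals)
print(round(s0, 1), round(adc, 6))  # 100.0 0.001
```

With noisy data, weighting by predicted rather than measured signals avoids feeding the noise back into the weights, which is the failure mode of strategy (a) highlighted in the abstract.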
Very accurate upward continuation to low heights in a test of non-Newtonian theory
NASA Technical Reports Server (NTRS)
Romaides, Anestis J.; Jekeli, Christopher
1989-01-01
Recently, gravity measurements were made on a tall, very stable television transmitting tower in order to detect a non-Newtonian gravitational force. This experiment required the upward continuation of gravity from the Earth's surface to points only as high as 600 m above ground. The upward continuation was based on a set of gravity anomalies in the vicinity of the tower whose data distribution exhibits essentially circular symmetry and appropriate radial attenuation. Two methods were applied to perform the upward continuation - least-squares solution of a local harmonic expansion and least-squares collocation. Both methods yield comparable results and have estimated accuracies on the order of 50 microGal or better (1 microGal = 10(exp -8) m/sq s). This order of accuracy is commensurate with the tower gravity measurements (which have an estimated accuracy of 20 microGal), and enabled a definitive detection of non-Newtonian gravity. As expected, such precise upward continuations require very dense data near the tower. Less expected was the requirement of data (though sparse) up to 220 km away from the tower (in the case that only an ellipsoidal reference gravity is applied).
A novel method for interactive multi-objective dose-guided patient positioning
NASA Astrophysics Data System (ADS)
Haehnle, Jonas; Süss, Philipp; Landry, Guillaume; Teichert, Katrin; Hille, Lucas; Hofmaier, Jan; Nowak, Dimitri; Kamp, Florian; Reiner, Michael; Thieke, Christian; Ganswindt, Ute; Belka, Claus; Parodi, Katia; Küfer, Karl-Heinz; Kurz, Christopher
2017-01-01
In intensity-modulated radiation therapy (IMRT), 3D in-room imaging data are typically utilized for accurate patient alignment on the basis of anatomical landmarks. In the presence of non-rigid anatomical changes, it is often not obvious which patient position is most suitable. Thus, dose-guided patient alignment is an interesting approach that uses available in-room imaging data for up-to-date dose calculation, aimed at finding the position that yields the optimal dose distribution. This contribution presents the first implementation of dose-guided patient alignment as a multi-criteria optimization problem. User-defined clinical objectives are employed for setting up a multi-objective problem. Using pre-calculated dose distributions at a limited number of patient shifts and dose interpolation, a continuous space of Pareto-efficient patient shifts becomes accessible. Pareto sliders facilitate interactive browsing of the possible shifts with real-time dose display to the user. Dose interpolation accuracy is validated and the potential of multi-objective dose-guided positioning demonstrated for three head and neck (H&N) and three prostate cancer patients. Dose-guided positioning is compared to replanning for all cases. A delineated replanning CT served as surrogate for in-room imaging data. Dose interpolation accuracy was high: using a 2% dose difference criterion, a median pass-rate of 95.7% for H&N and 99.6% for prostate cases was determined in comparison to exact dose calculations. For all patients, dose-guided positioning found a clinically preferable dose distribution compared to bony anatomy based alignment. For all H&N cases, mean dose to the spared parotid glands was below 26 Gy (up to 27.5 Gy with bony alignment) and the clinical target volume (CTV) V95% was above 99.1% (compared to 95.1%). For all prostate patients, CTV V95% was above 98.9% (compared to 88.5%) and rectum V50Gy was below 50% (compared to 56.1%).
Replanning yielded improved results for the H&N cases. For the prostate cases, differences to dose-guided positioning were minor.
Identification and delineation of areas flood hazard using high accuracy of DEM data
NASA Astrophysics Data System (ADS)
Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.
2018-05-01
Flood incidents that often occur in Karawang regency need to be mitigated. Expectations rest on technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High accuracy DEM data used in modeling will result in better flood models. Processing high accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by identifying wetland areas using DEM data and Landsat-8 images. High-resolution TerraSAR-X data were used to detect wetlands from the landscape, while land cover was identified from Landsat image data. The Topographic Wetness Index (TWI) method was used to detect and identify wetland areas from the DEM data, while land cover was analysed using the Tasseled Cap Transformation (TCT) method. TWI modeling yields information about land potentially prone to flooding. Overlaying the TWI map with the land cover map shows that in Karawang regency the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area in this study was 87%.
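The TWI used above is conventionally defined as ln(a / tan β), with a the specific catchment area and β the local slope derived from the DEM. A minimal per-cell sketch; the cell size and the slope floor used to avoid division by zero are illustrative assumptions:

```python
import numpy as np

def twi(upslope_cells, slope_deg, cell_size=30.0):
    """Topographic Wetness Index: TWI = ln(a / tan(beta)),
    where a is the specific catchment area (upslope area per unit
    contour length) and beta is the local slope in degrees."""
    a = (upslope_cells * cell_size**2) / cell_size        # specific catchment area
    tan_b = np.tan(np.radians(np.maximum(slope_deg, 0.1)))  # floor avoids tan(0)
    return np.log(a / tan_b)
```

A flat cell draining a large upslope area (a wetland candidate) scores much higher than a steep cell with little contributing area, which is how TWI flags potential flood-prone land.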
King, Alice; Shipley, Martin; Markus, Hugh
2011-10-01
Improved methods are required to identify patients with asymptomatic carotid stenosis at high risk for stroke. The Asymptomatic Carotid Emboli Study recently showed that embolic signals (ES) detected by transcranial Doppler on two 1-hour recordings independently predict 2-year stroke risk. ES detection is time-consuming, and whether similar predictive information could be obtained from simpler recording protocols is unknown. In a predefined secondary analysis of the Asymptomatic Carotid Emboli Study, we examined the temporal variation of ES. We determined the predictive yield associated with different recording protocols and with the use of a higher threshold to indicate increased risk (≥2 ES). To compare the different recording protocols, sensitivity and specificity analyses were performed using receiver-operator characteristic curves. Of 477 patients, 467 had baseline recordings adequate for analysis; 77 of these had ES on one or both of the 2 recordings. ES status on the 2 recordings was significantly associated (P<0.0001), but there was poor agreement between ES positivity on the 2 recordings (κ=0.266). For the primary outcome of ipsilateral stroke or transient ischemic attack, the use of 2 baseline 1-hour recordings had greater predictive accuracy than either the first baseline recording alone (P=0.0005), a single 30-minute recording (P<0.0001), or 2 recordings lasting 30 minutes (P<0.0001). For the outcome of ipsilateral stroke alone, two 1-hour recordings had greater predictive accuracy compared to all other recording protocols (all P<0.0001). Our analysis demonstrates the relative predictive yield of different recording protocols that can be used in application of the technique in clinical practice. The two baseline 1-hour recordings used in the Asymptomatic Carotid Emboli Study gave the best risk prediction.
Hall, Mats Guerrero Garcia; Wenner, Jörgen; Öberg, Stefan
2016-01-01
The poor sensitivity of esophageal pH monitoring substantially limits the clinical value of the test. The aim of this study was to compare the diagnostic accuracy of esophageal pH monitoring and symptom association analysis performed at the conventional level with that obtained in the most distal esophagus. Eighty-two patients with typical reflux symptoms and 49 asymptomatic subjects underwent dual 48-h pH monitoring with the electrodes positioned immediately above, and 6 cm above, the squamo-columnar junction (SCJ). The degree of esophageal acid exposure and the temporal relationship between reflux events and symptoms were evaluated. The sensitivity of pH recording and the diagnostic yield of the Symptom Association Probability (SAP) were significantly higher for pH monitoring performed at the distal compared with the conventional level (82% versus 65%, p<0.001 and 74% versus 62%, p<0.001, respectively). The greatest improvement was observed in patients with non-erosive disease. In this group, the sensitivity increased from 46% at the standard level to 66% immediately above the SCJ, and with the combination of a positive SAP as a marker for a positive pH test, the diagnostic yield further increased to 94%. The diagnostic accuracy of esophageal pH monitoring in the most distal esophagus is superior to that performed at the conventional level and is further improved when combined with symptom association analysis. pH monitoring with the electrode positioned immediately above the SCJ should be introduced in clinical practice and always combined with symptom association analysis.
Non-equilibrium dynamics from RPMD and CMD.
Welsch, Ralph; Song, Kai; Shi, Qiang; Althorpe, Stuart C; Miller, Thomas F
2016-11-28
We investigate the calculation of approximate non-equilibrium quantum time correlation functions (TCFs) using two popular path-integral-based molecular dynamics methods, ring-polymer molecular dynamics (RPMD) and centroid molecular dynamics (CMD). It is shown that for the cases of a sudden vertical excitation and an initial momentum impulse, both RPMD and CMD yield non-equilibrium TCFs for linear operators that are exact for high temperatures, in the t = 0 limit, and for harmonic potentials; the subset of these conditions that are preserved for non-equilibrium TCFs of non-linear operators is also discussed. Furthermore, it is shown that for these non-equilibrium initial conditions, both methods retain the connection to Matsubara dynamics that has previously been established for equilibrium initial conditions. Comparison of non-equilibrium TCFs from RPMD and CMD to Matsubara dynamics at short times reveals the orders in time to which the methods agree. Specifically, for the position-autocorrelation function associated with sudden vertical excitation, RPMD and CMD agree with Matsubara dynamics up to O(t^4) and O(t^1), respectively; for the position-autocorrelation function associated with an initial momentum impulse, RPMD and CMD agree with Matsubara dynamics up to O(t^5) and O(t^2), respectively. Numerical tests using model potentials for a wide range of non-equilibrium initial conditions show that RPMD and CMD yield non-equilibrium TCFs with an accuracy that is comparable to that for equilibrium TCFs. RPMD is also used to investigate excited-state proton transfer in a system-bath model, and it is compared to numerically exact calculations performed using a recently developed version of the Liouville space hierarchical equation of motion approach; again, similar accuracy is observed for non-equilibrium and equilibrium initial conditions.
The best prostate biopsy scheme is dictated by the gland volume: a monocentric study.
Dell'Atti, L
2015-08-01
Accuracy of a biopsy scheme depends on several parameters. Prostate-specific antigen (PSA) level and digital rectal examination (DRE) influence the detection rate and suggest the biopsy scheme with which to approach each patient. Another parameter is the prostate volume: sampling accuracy tends to decrease progressively with increasing prostate volume. We prospectively observed the cancer detection rate in patients with suspected prostate cancer (PCa) and improved it by applying a biopsy protocol according to prostate volume (PV). Clinical data and pathological features of 1356 patients were analysed and included in this study. This protocol is a combined scheme that includes transrectal (TR) 12-core PBx (TR12PBx) for PV ≤ 30 cc, TR 14-core PBx (TR14PBx) for PV > 30 cc but < 60 cc, and TR 18-core PBx (TR18PBx) for PV ≥ 60 cc. Out of a total of 1356 patients, PCa was identified in 111 (8.2%) through the TR12PBx scheme, in 198 (14.6%) through the TR14PBx scheme and in 253 (18.6%) through the TR18PBx scheme. The PCa detection rate was increased by 44% by adding two TZ cores (TR14PBx scheme). The TR18PBx scheme increased this rate by 21.7% vs. the TR14PBx scheme. The diagnostic yield offered by TR18PBx was statistically significant compared to the detection rate offered by the TR14PBx scheme (p < 0.003). The biopsy Gleason score and the percentage of core involvement were comparable between PCa detected by the TR14PBx and TR18PBx schemes (p = 0.362). In our opinion, PV alone can be significant in choosing the best biopsy scheme for a first setting of biopsies, increasing the PCa detection rate.
NASA Astrophysics Data System (ADS)
Kabiri, K.
2017-09-01
The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. In this regard, two Sentinel-2A images (acquired in February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map for the study area was obtained and used both for calibrating the models and for validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue (458-523 nm), Band 3: green (543-578 nm), and Band 4: red (650-680 nm); spatial resolution: 10 m) were considered (11 options) using the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field-measured values. The correlation coefficients (R^2) and root mean square errors (RMSE) for validation points were calculated for all models and for the two satellite images. When compared with the linear transform method, the method employing ratio transformation with a combination of all three bands yielded more accurate results (R^2_Mar = 0.795, R^2_Feb = 0.777, RMSE_Mar = 1.889 m, and RMSE_Feb = 2.039 m). Although most of the integrated transform methods (specifically the method including all bands and band ratios) yielded the highest accuracy, the increments were not significant; hence the ratio transformation was selected as the optimum method.
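The band-ratio transform evaluated above is commonly written in the Stumpf-style form z = m1 · ln(n·R_blue) / ln(n·R_green) + m0, with m1 and m0 tuned against known depths. The exponential reflectance model, attenuation coefficients, and noise-free calibration data below are illustrative assumptions, not the study's measurements:

```python
import numpy as np

def ratio_feature(blue, green, n=1000.0):
    """Log-ratio predictor of depth from two band reflectances."""
    return np.log(n * blue) / np.log(n * green)

# hypothetical calibration set: reflectances simulated from sonar depths
rng = np.random.default_rng(1)
depth = rng.uniform(1, 10, 50)                 # known depths in metres
blue = 0.10 * np.exp(-0.05 * depth)            # blue attenuates slowly
green = 0.10 * np.exp(-0.12 * depth)           # green attenuates faster

x = ratio_feature(blue, green)
m1, m0 = np.polyfit(x, depth, 1)               # least-squares tuning of m1, m0
pred = m1 * x + m0
rmse = np.sqrt(np.mean((pred - depth) ** 2))   # accuracy metric as in the study
```

Because the two bands attenuate at different rates, the log ratio grows monotonically with depth, which is what makes the single linear fit workable.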
Accuracy of cytology in sub typing non small cell lung carcinomas.
Patel, Trupti S; Shah, Majal G; Gandhi, Jahnavi S; Patel, Pratik
2017-07-01
Sub-typing of non-small cell lung carcinoma (NSCLC) is an important task in the era of molecular and targeted therapies. Differentiating between squamous cell carcinoma (SQCC) and adenocarcinoma (ADC) is challenging when only limited material is available. We investigated the accuracy and feasibility of sub-typing NSCLCs in cytology and small biopsy material. Concurrent cytology and biopsy material obtained in a single CT-guided procedure for lung carcinoma over a one-year period was reviewed retrospectively. Both materials were individually sub-typed and analyzed, and immunohistochemistry (IHC) was performed. Accuracy was determined by comparing the results with IHC. In total, 107 of 126 cases of NSCLC were included for analysis, where both cytology and biopsy material were adequate for interpretation. FNAC allowed tumor typing in 83 (77.6%) cases; 36 (33.6%) were ADC, 47 (43.9%) were SQCC and 24 (22.4%) were diagnosed as non-small cell carcinoma not otherwise specified (NSCLC-NOS). In biopsy, 86 cases (80.4%) were typed, among which 34 (31.8%) were ADC, 52 (48.6%) were SQCC and 21 (19.6%) were NSCLC-NOS. The Chi-square index was significant. With the aid of IHC, NSCLC-NOS was reduced from 14 (13%) cases to 2 (1.9%) cases. Cytology and small biopsy specimens achieved comparable specificity and accuracy in sub-typing NSCLC, and optimal results were obtained when findings from both modalities were combined. The advantage of paired specimens is that they maximize overall diagnostic yield while leaving the remaining material available for ancillary techniques such as IHC or molecular testing. Diagn. Cytopathol. 2017;45:598-603. © 2017 Wiley Periodicals, Inc.
Rotondano, Gianluca; Bianco, Maria Antonia; Sansone, Stefano; Prisco, Antonio; Meucci, Costantino; Garofano, Maria Lucia; Cipolletta, Livio
2012-03-01
The purpose of this study was to evaluate an endoscopic trimodal imaging (ETMI) system (high-resolution, autofluorescence, and NBI) in the detection and differentiation of colorectal adenomas. A prospective randomised trial of tandem colonoscopies was carried out using the Olympus XCF-FH260AZI system. Each colonic segment was examined twice for lesions, once with HRE and once with AFI, in random order per patient. All detected lesions were assessed with NBI for pit pattern and with AFI for colour. All lesions were removed and sent for histology. Any lesion identified on the second examination was considered as missed by the first examination. Outcome measures were adenoma miss rates of AFI and HRE, and diagnostic accuracy of NBI and AFI for differentiating neoplastic from non-neoplastic lesions. Ninety-four patients underwent colonoscopy with ETMI (47 in each group). Among the 47 patients examined with AFI first, 31 adenomas in 15 patients were detected initially [detection rate 0.66 (0.52-0.75)]. Subsequent HRE inspection identified six additional adenomas. Among the 47 patients examined with HRE first, 29 adenomas in 14 patients were detected initially [detection rate 0.62 (0.53-0.79)]. Successive AFI yielded seven additional adenomas. Adenoma miss rates of AFI and HRE were 14% and 16.2%, respectively (p = 0.29). The accuracy of AFI alone for differentiation was lower than that of NBI (63% vs. 80%, p < 0.001). Combined use of AFI and NBI achieved improved accuracy for differentiation (84%), showing a trend toward superiority compared with NBI alone (p = 0.064). AFI did not significantly reduce the adenoma miss rate compared with HRE. AFI alone had a disappointing accuracy for adenoma differentiation, which could be improved by the combination of AFI and NBI.
van den Broek, Frank J C; Fockens, Paul; Van Eeden, Susanne; Kara, Mohammed A; Hardwick, James C H; Reitsma, Johannes B; Dekker, Evelien
2009-03-01
Endoscopic trimodal imaging (ETMI) incorporates high-resolution endoscopy (HRE) and autofluorescence imaging (AFI) for adenoma detection, and narrow-band imaging (NBI) for differentiation of adenomas from nonneoplastic polyps. The aim of this study was to compare AFI with HRE for adenoma detection and to assess the diagnostic accuracy of NBI for differentiation of polyps. This was a randomized trial of tandem colonoscopies. The study was performed at the Academic Medical Center in Amsterdam. One hundred patients underwent colonoscopy with ETMI. Each colonic segment was examined twice for polyps, once with HRE and once with AFI, in random order per patient. All detected polyps were assessed with NBI for pit pattern and with AFI for color, and subsequently removed. Histopathology served as the gold standard for diagnosis. The main outcome measures of this study were adenoma miss-rates of AFI and HRE, and diagnostic accuracy of NBI and AFI for differentiating adenomas from nonneoplastic polyps. Among 50 patients examined with AFI first, 32 adenomas were detected initially. Subsequent inspection with HRE identified 8 additional adenomas. Among 50 patients examined with HRE first, 35 adenomas were detected initially. Successive AFI yielded 14 additional adenomas. The adenoma miss-rates of AFI and HRE therefore were 20% and 29%, respectively (P = .351). The sensitivity, specificity, and overall accuracy of NBI for differentiation were 90%, 70%, and 79%, respectively; corresponding figures for AFI were 99%, 35%, and 63%, respectively. The overall adenoma miss-rate was 25%; AFI did not significantly reduce the adenoma miss-rate compared with HRE. Both NBI and AFI had a disappointing diagnostic accuracy for polyp differentiation, although AFI had a high sensitivity.
The Arrival of Robotics in Spine Surgery: A Review of the Literature.
Ghasem, Alexander; Sharma, Akhil; Greif, Dylan N; Alam, Milad; Maaieh, Motasem Al
2018-04-18
Systematic Review. The authors aim to review comparative outcome measures between robotic and free-hand spine surgical procedures including: accuracy of spinal instrumentation, radiation exposure, operative time, hospital stay, and complication rates. Misplacement of pedicle screws in conventional open as well as minimally invasive surgical procedures has prompted the need for innovation and allowed the emergence of robotics in spine surgery. Prior to incorporation of robotic surgery in routine practice, demonstration of improved instrumentation accuracy, operative efficiency, and patient safety is required. A systematic search of the PubMed, OVID-MEDLINE, and Cochrane databases was performed for papers relevant to robotic assistance of pedicle screw placement. Inclusion criteria were constituted by English written randomized control trials, prospective and retrospective cohort studies involving robotic instrumentation in the spine. Following abstract, title, and full-text review, 32 articles were selected for study inclusion. Intrapedicular accuracy in screw placement and subsequent complications were at least comparable if not superior in the robotic surgery cohort. There is evidence supporting that total operative time is prolonged in robot assisted surgery compared to conventional free-hand. Radiation exposure appeared to be variable between studies; radiation time did decrease in the robot arm as the total number of robotic cases ascended, suggesting a learning curve effect. Multi-level procedures appeared to tend toward earlier discharge in patients undergoing robotic spine surgery. The implementation of robotic technology for pedicle screw placement yields an acceptable level of accuracy on a highly consistent basis. Surgeons should remain vigilant about confirmation of robotic assisted screw trajectory, as drilling pathways have been shown to be altered by soft tissue pressures, forceful surgical application, and bony surface skiving. 
However, the effective consequence of robot assistance on radiation exposure, length of stay, and operative time remains unclear and requires meticulous examination in future studies. Level of Evidence: 4.
Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders.
Subasi, Abdulhamit
2013-06-01
Support vector machine (SVM) is an extensively used machine learning method with many biomedical signal classification applications. In this study, a novel PSO-SVM model is proposed that hybridizes particle swarm optimization (PSO) and the SVM to improve EMG signal classification accuracy. This optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the classification accuracy. The experiments were conducted on EMG signals classified as normal, neurogenic or myopathic. In the proposed method, the EMG signals were decomposed into frequency sub-bands using the discrete wavelet transform (DWT) and a set of statistical features was extracted from these sub-bands to represent the distribution of wavelet coefficients. The obtained results clearly validate the superiority of the SVM method compared to conventional machine learning methods, and suggest that further significant enhancements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system. The PSO-SVM yielded an overall accuracy of 97.41% on 1200 EMG signals selected from 27 subject records, against 96.75%, 95.17% and 94.08% for the SVM, k-NN and RBF classifiers, respectively. PSO-SVM is developed as an efficient tool so that various SVMs can be used conveniently as the core of PSO-SVM for diagnosis of neuromuscular disorders. Copyright © 2013 Elsevier Ltd. All rights reserved.
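The feature-extraction stage described above (sub-band decomposition followed by summary statistics) can be sketched with a hand-rolled Haar DWT. The choice of Haar wavelet, the particular statistics, and the number of levels are illustrative assumptions; the PSO and SVM stages, which the paper layers on top, are omitted:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform."""
    x = x[: len(x) // 2 * 2]                      # drop a trailing odd sample
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)     # low-pass half-band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)     # high-pass half-band
    return approx, detail

def subband_features(signal, levels=4):
    """Mean absolute value, std, and energy of each DWT sub-band,
    summarizing the distribution of wavelet coefficients."""
    feats, a = [], np.asarray(signal, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats += [np.mean(np.abs(d)), np.std(d), np.sum(d**2)]
    feats += [np.mean(np.abs(a)), np.std(a), np.sum(a**2)]   # final approximation
    return np.array(feats)
```

For a 4-level decomposition this yields a compact 15-dimensional feature vector per EMG epoch, which a downstream classifier can consume; because the Haar transform is orthonormal, the sub-band energies sum to the signal energy.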
Elevations of novel cytokines in bacterial meningitis in infants.
Srinivasan, Lakshmi; Kilpatrick, Laurie; Shah, Samir S; Abbasi, Soraya; Harris, Mary C
2018-01-01
Bacterial meningitis is challenging to diagnose in infants, especially in the common setting of antibiotic pre-treatment, which diminishes yield of cerebrospinal fluid (CSF) cultures. Prior studies of diagnostic markers have not demonstrated sufficient accuracy. Interleukin-23 (IL-23), interleukin-18 (IL-18) and soluble receptor for advanced glycation end products (sRAGE) possess biologic plausibility, and may be useful as diagnostic markers in bacterial meningitis. In a prospective cohort study, we measured IL-23, IL-18 and sRAGE levels in CSF. We compared differences between infected and non-infected infants, and conducted receiver operating characteristic (ROC) analyses to identify individual markers and combinations of markers with the best diagnostic accuracy. 189 infants <6 months, including 8 with bacterial meningitis, 30 without meningitis, and 151 with indeterminate diagnosis (due to antibiotic pretreatment) were included. CSF IL-23, IL-18 and sRAGE levels were significantly elevated in infants with culture proven meningitis. Among individual markers, IL-23 possessed the greatest accuracy for diagnosis of bacterial meningitis (area under the curve (AUC) 0.9698). The combination of all three markers had an AUC of 1. IL-23, alone and in combination with IL-18 and sRAGE, identified bacterial meningitis with excellent accuracy. Following validation, these markers could aid clinicians in diagnosis of bacterial meningitis and decision-making regarding prolongation of antibiotic therapy.
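The ROC analyses described above reduce, for a single marker, to the Mann-Whitney form of the AUC: the probability that a randomly chosen infected case scores above a randomly chosen control. A minimal sketch; the CSF values below are made-up illustrations, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counted as half."""
    s, y = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = s[y], s[~y]
    gt = np.sum(pos[:, None] > neg[None, :])
    eq = np.sum(pos[:, None] == neg[None, :])
    return (gt + 0.5 * eq) / (pos.size * neg.size)

# hypothetical CSF marker levels: four controls, four culture-proven cases
il23 = [5, 8, 6, 7, 40, 55, 38, 60]
meningitis = [0, 0, 0, 0, 1, 1, 1, 1]
```

An AUC of 1 (as reported for the three-marker combination) means every infected infant outscored every non-infected one on the combined marker.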
de Paula, Adelzon Assis; Pires, Denise Franqueira; Filho, Pedro Alves; de Lemos, Kátia Regina Valente; Barçante, Eduardo; Pacheco, Antonio Guilherme
2018-06-01
While cross-referencing information from people living with HIV/AIDS (PLWHA) against the official mortality database is a critical step in monitoring the HIV/AIDS epidemic in Brazil, the accuracy of the linkage routine may compromise the validity of the final database, yielding biased epidemiological estimates. We compared the accuracy and the total runtime of two linkage algorithms applied to retrieve vital status information from PLWHA in Brazilian public databases. Nominally identified records from PLWHA were obtained from three distinct government databases. Linkage routines included an algorithm in the Python language (PLA) and Reclink software (RlS), a probabilistic software package widely used in Brazil. Records from PLWHA known to be alive were added to those from patients reported as deceased. The data were then searched for in the mortality system. Scenarios where 5% and 50% of patients had actually died were simulated, considering both complete cases and 20% missing maternal names. When complete information was available both algorithms had comparable accuracies. In the scenario of 20% missing maternal names, PLA and RlS had sensitivities of 94.5% and 94.6% (p > 0.5), respectively; after manual review, PLA sensitivity increased to 98.4% (96.6-100.0), exceeding that of RlS (p < 0.01). PLA had a higher positive predictive value in the 5% death proportion. Manual review was intrinsically required by RlS for up to 14% of registers for people actually dead, whereas the corresponding proportion ranged from 1.5% to 2% for PLA. The lack of manual inspection did not alter PLA sensitivity when complete information was available. When incomplete data were available, PLA sensitivity increased from 94.5% to 98.4%, thus exceeding that presented by RlS (94.6%, p < 0.05). Both linkage algorithms presented interchangeable accuracies in retrieving vital status data from PLWHA.
RlS had a considerably shorter runtime but intrinsically required manual review of a substantial proportion of the matched registries. On the other hand, PLA required more runtime but spared manual review at no expense of accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
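A record-comparison step of the kind both algorithms perform can be sketched with Python's standard library alone. The field names, similarity weights, and the fallback rule for missing maternal names below are illustrative assumptions, not the PLA's actual logic:

```python
from difflib import SequenceMatcher
import unicodedata

def normalize(name):
    """Strip accents, case, and extra whitespace before comparison."""
    nfkd = unicodedata.normalize("NFKD", name)
    ascii_only = nfkd.encode("ascii", "ignore").decode()
    return " ".join(ascii_only.lower().split())

def match_score(rec_a, rec_b, w_name=0.6, w_mother=0.4):
    """Weighted name similarity; degrades to the name field alone
    when either record is missing the maternal name."""
    s_name = SequenceMatcher(None, normalize(rec_a["name"]),
                             normalize(rec_b["name"])).ratio()
    if rec_a.get("mother") and rec_b.get("mother"):
        s_mom = SequenceMatcher(None, normalize(rec_a["mother"]),
                                normalize(rec_b["mother"])).ratio()
        return w_name * s_name + w_mother * s_mom
    return s_name

a = {"name": "José da Silva", "mother": "Maria da Silva"}
b = {"name": "Jose da Silva", "mother": "Maria da Silva"}
```

Pairs scoring above a chosen threshold would be accepted automatically; borderline scores are what drive the manual-review burden the study measures.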
Sankey, Joel B.; McVay, Jason C.; Kreitler, Jason R.; Hawbaker, Todd J.; Vaillant, Nicole; Lowe, Scott
2015-01-01
Increased sedimentation following wildland fire can negatively impact water supply and water quality. Understanding how changing fire frequency, extent, and location will affect watersheds and the ecosystem services they supply to communities is of great societal importance in the western USA and throughout the world. In this work we assess the utility of the InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) Sediment Retention Model to accurately characterize erosion and sedimentation of burned watersheds. InVEST was developed by the Natural Capital Project at Stanford University (Tallis et al., 2014) and is a suite of GIS-based implementations of common process models, engineered for high-end computing to allow the faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., USLE – Universal Soil Loss Equation): it determines which areas of the landscape contribute the greatest sediment loads to a hydrological network and, conversely, evaluates the ecosystem service of sediment retention on a watershed basis. In this study, we evaluate the accuracy and uncertainties of InVEST predictions of increased sedimentation after fire, using measured post-fire sediment yields available for many watersheds throughout the western USA from an existing, published large database. We show that the model can be parameterized in a relatively simple fashion to predict post-fire sediment yield accurately. Our ultimate goal is to use the model to accurately predict variability in post-fire sediment yield at a watershed scale as a function of future wildfire conditions.
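The USLE that InVEST builds on is a simple product of factors, A = R · K · LS · C · P. The factor values below are illustrative assumptions showing how a fire-driven change in the cover factor C propagates directly to predicted soil loss; they are not calibrated to any watershed in the study:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: average annual soil loss A = R*K*LS*C*P.
    R: rainfall erosivity, K: soil erodibility, LS: slope length-steepness,
    C: cover-management, P: support practice."""
    return R * K * LS * C * P

# hypothetical hillslope before and after a burn: fire removes ground
# cover, raising C while the other factors stay fixed
pre_fire = usle_soil_loss(R=100, K=0.3, LS=1.2, C=0.05, P=1.0)
post_fire = usle_soil_loss(R=100, K=0.3, LS=1.2, C=0.35, P=1.0)
```

Because the equation is multiplicative, a seven-fold increase in C yields a seven-fold increase in predicted soil loss, which is the mechanism by which post-fire cover change drives the sediment-yield increases the study evaluates.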
A comparative study of an ABC and an artificial absorber for truncating finite element meshes
NASA Technical Reports Server (NTRS)
Oezdemir, T.; Volakis, John L.
1993-01-01
The type of mesh termination used in the context of finite element formulations plays a major role on the efficiency and accuracy of the field solution. The performance of an absorbing boundary condition (ABC) and an artificial absorber (a new concept) for terminating the finite element mesh was evaluated. This analysis is done in connection with the problem of scattering by a finite slot array in a thick ground plane. The two approximate mesh truncation schemes are compared with the exact finite element-boundary integral (FEM-BI) method in terms of accuracy and efficiency. It is demonstrated that both approximate truncation schemes yield reasonably accurate results even when the mesh is extended only 0.3 wavelengths away from the array aperture. However, the artificial absorber termination method leads to a substantially more efficient solution. Moreover, it is shown that the FEM-BI method remains quite competitive with the FEM-artificial absorber method when the FFT is used for computing the matrix-vector products in the iterative solution algorithm. These conclusions are indeed surprising and of major importance in electromagnetic simulations based on the finite element method.
Tkach, D C; Hargrove, L J
2013-01-01
Advances in battery and actuator technology have enabled clinical use of powered lower limb prostheses such as the BiOM Powered Ankle. To allow ambulation over various types of terrain, such devices rely on built-in mechanical sensors or manual actuation by the amputee to transition into an operational mode that is suitable for a given terrain. It is unclear if mechanical sensors alone can accurately modulate operational modes, while voluntary actuation prevents seamless, naturalistic gait. Ensuring that the prosthesis is ready to accommodate new terrain types at the first step is critical for user safety. EMG signals from the patient's residual leg muscles may provide additional information to accurately choose the proper mode of prosthesis operation. Using a pattern recognition classifier we compared the accuracy of predicting 8 different mode transitions based on (1) prosthesis mechanical sensor output, (2) EMG recorded from the residual limb, and (3) fusion of EMG and mechanical sensor data. Our findings indicate that neuromechanical sensor fusion significantly decreases errors in predicting mode transitions as compared to using either mechanical sensors or EMG alone (2.3±0.7% vs. 7.8±0.9% and 20.2±2.0%, respectively).
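The benefit of fusing the two sources can be illustrated with a toy experiment: concatenating feature vectors gives a classifier a larger combined separation between classes than either source alone. The feature dimensions, class shifts, and nearest-centroid model below are illustrative assumptions standing in for the study's pattern-recognition classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift):
    """Toy samples for one mode: 2 mechanical features + 3 EMG features."""
    mech = rng.normal(shift[0], 1.0, (n, 2))
    emg = rng.normal(shift[1], 1.0, (n, 3))
    return np.hstack([mech, emg])            # fused feature vector

class NearestCentroid:
    """Minimal stand-in classifier: assign each sample to the nearest class mean."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None], axis=2)
        return self.classes[d.argmin(axis=1)]

# two hypothetical mode transitions, weakly separated in mechanical
# features but strongly separated once EMG is added
X0, X1 = make_data(100, (0.0, 0.0)), make_data(100, (1.0, 2.0))
X = np.vstack([X0, X1]); y = np.repeat([0, 1], 100)

acc_fused = (NearestCentroid().fit(X, y).predict(X) == y).mean()
acc_mech = (NearestCentroid().fit(X[:, :2], y).predict(X[:, :2]) == y).mean()
```

On this synthetic data the fused classifier recovers the mode far more reliably than mechanical features alone, mirroring the error reduction the study reports.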
NASA Astrophysics Data System (ADS)
House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor
2017-03-01
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several orders of magnitude lower cost, size, and weight. We propose using the Intel SR300 device for applications where conventional trackers are impractical or infeasible and where limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparison to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and to a pointer, and were tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and to display the CT image overlaid on the optical image. RESULTS: Camera accuracy tests yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and an orientation error of 1.6° (95th percentile 4.3°) in a 20 × 16 × 10 cm workspace, provided proper marker orientation was constantly maintained. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
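The error summary reported above (median and 95th percentile of position error) can be reproduced with a few lines of NumPy; the tracked and reference positions below are hypothetical, not the study's data.

```python
import numpy as np

# Median and 95th-percentile position error between a low-cost tracker and
# a high-accuracy reference; positions are hypothetical, in millimetres.
tracked = np.array([[10.0, 0.0, 0.0], [0.0, 5.0, 0.0],
                    [0.0, 0.0, 2.0], [1.0, 1.0, 1.0]])
reference = np.array([[12.0, 0.0, 0.0], [0.0, 5.0, 3.0],
                      [0.0, 4.0, 2.0], [1.0, 1.0, 1.0]])

errors = np.linalg.norm(tracked - reference, axis=1)  # per-pose Euclidean error
median_err = float(np.median(errors))
p95_err = float(np.percentile(errors, 95))
print(median_err, p95_err)  # 2.5 and ~3.85 for these sample points
```

Reporting a percentile alongside the median, as the study does, exposes the tail behaviour that a mean error would hide.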
Estimation of Center of Mass Trajectory using Wearable Sensors during Golf Swing.
Najafi, Bijan; Lee-Eng, Jacqueline; Wrobel, James S; Goebel, Ruben
2015-06-01
This study proposes a wearable sensor technology to estimate center of mass (CoM) trajectory during a golf swing. Groups of 3, 4, and 18 participants were recruited, respectively, for three validation studies. Study 1 examined the accuracy of the system in estimating a 3D body segment angle compared to a camera-based motion analyzer (Vicon®). Study 2 assessed the accuracy of three simplified CoM trajectory models. Finally, Study 3 assessed the accuracy of the proposed CoM model during multiple golf swings. A relatively high agreement was observed between wearable sensors and the reference (Vicon®) for angle measurement (r > 0.99; random error <1.2° (1.5%) for anterior-posterior, <0.9° (2%) for medial-lateral, and <3.6° (2.5%) for internal-external direction). The two-link model yielded better agreement with the reference system than the one-link model (r > 0.93 vs. r = 0.52, respectively). Likewise, the proposed two-link model estimated CoM trajectory during the golf swing with relatively good accuracy (r > 0.9; random error <1 cm (7.7%) for A-P and <2 cm (10.4%) for M-L). The proposed system appears to accurately quantify the kinematics of CoM trajectory as a surrogate of dynamic postural control during an athlete's movement, and its portability makes it feasible to use in a competitive environment without restricting surface type. Key points: This study demonstrates that wearable technology based on inertial sensors can accurately estimate center of mass trajectory in a complex athletic task (e.g., a golf swing). A two-link model of the human body provides an optimal tradeoff between accuracy and the minimum number of sensor modules for estimating CoM trajectory, particularly during fast movements. Wearable technologies based on inertial sensors are a viable option for assessing dynamic postural control in complex tasks outside the gait laboratory and the constraints of cameras, surface, and base of support.
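As a hedged sketch of the two-link idea, the body can be reduced to a lower and an upper segment with assumed lengths, mass fractions, and CoM offsets; all values below are illustrative, not the paper's anthropometric parameters.

```python
import math

# Two-link CoM sketch: lower and upper body segments with assumed lengths
# (m), mass fractions, and CoM offsets along each segment; angles are
# sagittal-plane tilts from vertical, e.g. from inertial sensors.
L_LOW, L_UP = 1.0, 0.8        # segment lengths (illustrative)
M_LOW, M_UP = 0.4, 0.6        # mass fractions (illustrative)
C_LOW, C_UP = 0.5, 0.5        # CoM position along each segment

def two_link_com(theta_low, theta_up):
    """Return (x, y) of the whole-body CoM, measured from the ankle."""
    # lower-segment CoM
    x1 = C_LOW * L_LOW * math.sin(theta_low)
    y1 = C_LOW * L_LOW * math.cos(theta_low)
    # hip joint, then upper-segment CoM above it
    hx, hy = L_LOW * math.sin(theta_low), L_LOW * math.cos(theta_low)
    x2 = hx + C_UP * L_UP * math.sin(theta_up)
    y2 = hy + C_UP * L_UP * math.cos(theta_up)
    return M_LOW * x1 + M_UP * x2, M_LOW * y1 + M_UP * y2

x, y = two_link_com(0.0, 0.0)   # standing upright
print(x, y)  # x = 0.0, y = 0.4*0.5 + 0.6*1.4 = 1.04
```

Feeding the two segment angles from inertial sensors into such a model, sample by sample, yields a CoM trajectory without any camera system.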
Modeling Mediterranean forest structure using airborne laser scanning data
NASA Astrophysics Data System (ADS)
Bottalico, Francesca; Chirici, Gherardo; Giannini, Raffaello; Mele, Salvatore; Mura, Matteo; Puxeddu, Michele; McRoberts, Ronald E.; Valbuena, Ruben; Travaglini, Davide
2017-05-01
The conservation of biological diversity is recognized as a fundamental component of sustainable development, and forests contribute greatly to its preservation. Structural complexity increases the potential biological diversity of a forest by creating multiple niches that can host a wide variety of species. To facilitate greater understanding of the contributions of forest structure to forest biological diversity, we modeled relationships between 14 forest structure variables and airborne laser scanning (ALS) data for two Italian study areas representing two common Mediterranean forest types: conifer plantations and coppice oak stands subjected to irregular intervals of unplanned and non-standard silvicultural interventions. The objectives were twofold: (i) to compare model prediction accuracies when using two types of ALS metrics, echo-based metrics and canopy height model (CHM)-based metrics, and (ii) to construct inferences in the form of confidence intervals for large-area structural complexity parameters. Our results showed that the effects of the two study areas on accuracies were greater than the effects of the two types of ALS metrics. In particular, accuracies were lower for the study area that was more complex in terms of species composition and forest structure. However, accuracies achieved using the echo-based metrics were only slightly greater than those achieved using the CHM-based metrics, demonstrating that both options yield reliable and comparable results. Accuracies were greatest for dominant height (Hd) (R2 = 0.91; RMSE% = 8.2%) and mean height weighted by basal area (R2 = 0.83; RMSE% = 10.5%) when using the echo-based metrics, the 99th percentile of the echo height distribution and the interquantile distance. For the forested area, the generalized regression (GREG) estimate of mean Hd was similar to the simple random sampling (SRS) estimate: 15.5 m for GREG and 16.2 m for SRS.
Further, the GREG estimator, with a standard error of 0.10 m, was considerably more precise than the SRS estimator, whose standard error was 0.69 m.
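The logic behind the GREG versus SRS comparison can be illustrated on toy data: SRS uses only the field-sample mean, while GREG predicts from an auxiliary ALS metric known for every population unit and corrects with the sample-mean residual. All numbers below are synthetic assumptions, not the study's data.

```python
import numpy as np

# Toy GREG vs SRS: an auxiliary ALS metric x is known for all N population
# units; field heights y are measured only on a sample of n units.
rng = np.random.default_rng(1)
N, n = 1000, 50
x = rng.uniform(5.0, 25.0, N)                  # ALS metric, population-wide
y = 2.0 + 0.8 * x + rng.normal(0.0, 0.5, N)    # "field-measured" height (m)

sample = rng.choice(N, n, replace=False)
srs = y[sample].mean()                          # SRS: plain sample mean

# GREG: fit on the sample, predict for all units, correct by mean residual
slope, intercept = np.polyfit(x[sample], y[sample], 1)
pred = intercept + slope * x
greg = pred.mean() + (y[sample] - pred[sample]).mean()
print(srs, greg, y.mean())  # greg should sit very close to the true mean
```

Because the auxiliary variable absorbs most of the height variance, only the residual variance enters the GREG correction term, which is why its standard error can be far smaller than that of SRS.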
Laufer, Shlomi; D'Angelo, Anne-Lise D; Kwan, Calvin; Ray, Rebbeca D; Yudkowsky, Rachel; Boulet, John R; McGaghie, William C; Pugh, Carla M
2017-12-01
Develop new performance evaluation standards for the clinical breast examination (CBE). There are several technical aspects of a proper CBE. Our recent work discovered a significant, linear relationship between palpation force and CBE accuracy. This article investigates the relationship between other technical aspects of the CBE and accuracy. This performance assessment study involved data collection from physicians (n = 553) attending 3 different clinical meetings between 2013 and 2014: American Society of Breast Surgeons, American Academy of Family Physicians, and American College of Obstetricians and Gynecologists. Four previously validated, sensor-enabled breast models were used for clinical skills assessment. Models A and B had solitary, superficial, 2 cm and 1 cm soft masses, respectively. Models C and D had solitary, deep, 2 cm hard and moderately firm masses, respectively. Finger movements (search technique) from 1137 CBE video recordings were independently classified by 2 observers. Final classifications were compared with CBE accuracy. Accuracy rates were model A = 99.6%, model B = 89.7%, model C = 75%, and model D = 60%. Final classification categories for search technique included rubbing movement, vertical movement, piano fingers, and other. Interrater reliability was high (k = 0.79). Rubbing movement was nearly 4 times more likely to yield an accurate assessment (odds ratio 3.81, P < 0.001) compared with vertical movement and piano fingers. Piano fingers had the highest failure rate (36.5%). Regression analysis of search pattern, search technique, palpation force, examination time, and 6 demographic variables revealed that search technique independently and significantly affected CBE accuracy (P < 0.001). Our results support measurement and classification of CBE techniques and provide the foundation for a new paradigm in teaching and assessing hands-on clinical skills.
The newly described piano fingers palpation technique was noted to have unusually high failure rates. Medical educators should be aware of the potential differences in effectiveness for various CBE techniques.
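The odds-ratio comparison reported above reduces to simple 2x2-table arithmetic; the counts below are hypothetical, chosen only to land near the reported OR of 3.81.

```python
# Odds ratio from a hypothetical 2x2 table: rows are search techniques,
# columns are accurate / inaccurate exams (counts are invented, chosen to
# land near the reported OR of 3.81, not taken from the study).
def odds_ratio(a, b, c, d):
    """OR for technique 1 (a accurate, b not) vs technique 2 (c, d)."""
    return (a / b) / (c / d)

or_rubbing = odds_ratio(300, 60, 200, 152)
print(round(or_rubbing, 2))  # 3.8
```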
Calibration methodology for proportional counters applied to yield measurements of a neutron burst.
Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo
2014-01-01
This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
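The core charge-to-counts step can be sketched as follows; the mean single-event charge, total charge, and the naive uncertainty model are illustrative assumptions, not the paper's full statistical model.

```python
# Charge-to-counts sketch: once single pulses pile up, the number of
# detected neutrons is inferred from the accumulated charge divided by the
# mean single-event charge from pulse-mode calibration. The uncertainty
# model and all numerical values are illustrative assumptions.
def detected_events(total_charge, mean_event_charge):
    return total_charge / mean_event_charge

def event_uncertainty(n_events, rel_sigma_q):
    """Naive model: Poisson counting plus single-event charge spread."""
    return n_events * (1.0 / n_events + rel_sigma_q ** 2) ** 0.5

n = detected_events(total_charge=5.0e-9, mean_event_charge=2.0e-12)
print(n, event_uncertainty(n, rel_sigma_q=0.2))  # 2500.0 events, ~502 sigma
```

The pulse-mode calibration supplies both the mean event charge and its relative spread, which is what lets a burst too fast to count pulse-by-pulse still be converted into a neutron yield.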
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (cold water) and water with activity (hot water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to that object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold, and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte Carlo simulations of a commercial gamma-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration, and for objects in cold background segmented with a 1% threshold.
However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte Carlo simulations confirmed that the TEW scatter correction applied to 188Re, although practical, yields only approximate estimates of the true scatter.
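The TEW scatter correction mentioned above has a standard form: counts in two narrow windows flanking the photopeak approximate the scatter inside the main window by a trapezoid. The window widths and counts below are illustrative.

```python
# Standard triple-energy-window (TEW) scatter estimate: counts in two
# narrow windows flanking the photopeak approximate the scatter inside the
# main window by a trapezoid. Counts and window widths (keV) are illustrative.
def tew_scatter(c_low, c_high, w_low, w_high, w_peak):
    return (c_low / w_low + c_high / w_high) * w_peak / 2.0

peak_counts = 1000.0
scatter = tew_scatter(c_low=120.0, c_high=40.0, w_low=4.0, w_high=4.0,
                      w_peak=20.0)
primary = peak_counts - scatter   # scatter-corrected photopeak counts
print(scatter, primary)  # 400.0 and 600.0
```

The correction is attractive clinically because it needs no extra modelling, but, as the Monte Carlo results above indicate, the trapezoid is only an approximation of the true scatter spectrum under the 188Re photopeak.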
NASA Astrophysics Data System (ADS)
Henry, Michael E.; Lauriat, Tara L.; Shanahan, Meghan; Renshaw, Perry F.; Jensen, J. Eric
2011-02-01
Proton magnetic resonance spectroscopy has the potential to provide valuable information about alterations in gamma-aminobutyric acid (GABA), glutamate (Glu), and glutamine (Gln) in psychiatric and neurological disorders. In order to use this technique effectively, it is important to establish the accuracy and reproducibility of the methodology. In this study, phantoms with known metabolite concentrations were used to compare the accuracy of 2D J-resolved MRS, single-echo 30 ms PRESS, and GABA-edited MEGA-PRESS for measuring all three aforementioned neurochemicals simultaneously. The phantoms included metabolite concentrations above and below the physiological range and scans were performed at baseline, 1 week, and 1 month time-points. For GABA measurement, MEGA-PRESS proved optimal with a measured-to-target correlation of R2 = 0.999, with J-resolved providing R2 = 0.973 for GABA. All three methods proved effective in measuring Glu with R2 = 0.987 (30 ms PRESS), R2 = 0.996 (J-resolved) and R2 = 0.910 (MEGA-PRESS). J-resolved and MEGA-PRESS yielded good results for Gln measures with respective R2 = 0.855 (J-resolved) and R2 = 0.815 (MEGA-PRESS). The 30 ms PRESS method proved ineffective in measuring GABA and Gln. When measurement stability at in vivo concentration was assessed as a function of varying spectral quality, J-resolved proved the most stable and immune to signal-to-noise and linewidth fluctuation compared to MEGA-PRESS and 30 ms PRESS.
A globally efficient means of distributing UTC time and frequency through GPS
NASA Technical Reports Server (NTRS)
Kusters, John A.; Giffard, Robin P.; Cutler, Leonard S.; Allan, David W.; Miranian, Mihran
1995-01-01
Time and frequency outputs comparable in quality to the best laboratories have been demonstrated on an integrated system suitable for field application on a global basis. The system measures the time difference between 1 pulse-per-second (pps) signals derived from local primary frequency standards and from a multi-channel GPS C/A receiver. The measured data is processed through optimal SA Filter algorithms that enhance both the stability and accuracy of GPS timing signals. Experiments were run simultaneously at four different sites. Even with large distances between sites, the overall results show a high degree of cross-correlation of the SA noise. With sufficiently long simultaneous measurement sequences, the data shows that determination of the difference in local frequency from an accepted remote standard to better than 1 x 10(exp -14) is possible. This method yields frequency accuracy, stability, and timing stability comparable to that obtained with more conventional common-view experiments. In addition, this approach provides UTC(USNO MC) in real time to an accuracy better than 20 ns without the problems normally associated with conventional common-view techniques. An experimental tracking loop was also set up to demonstrate the use of enhanced GPS for dissemination of UTC(USNO MC) over a wide geographic area. Properly disciplining a cesium standard with a multi-channel GPS receiver, with additional input from USNO, has been found to permit maintaining a timing precision of better than 10 ns between Palo Alto, CA and Washington, DC.
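The underlying relation in such clock comparisons is that the fractional frequency offset between two clocks is the slope of their time (phase) difference; below is a minimal sketch on synthetic data with a known 1e-14 offset.

```python
import numpy as np

# Fractional frequency offset as the slope of the clock phase (time)
# difference; synthetic, noise-free data with a known offset of 1e-14.
t = np.arange(0.0, 10 * 86400.0, 3600.0)    # 10 days of hourly points (s)
phase = 1.0e-9 + 1.0e-14 * t                # measured time difference (s)

slope, intercept = np.polyfit(t, phase, 1)  # slope = fractional offset
print(slope)
```

In practice the phase data are dominated by SA and receiver noise, which is why long simultaneous measurement sequences and optimal filtering are needed before a 1e-14 offset becomes resolvable.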
Bernard R. Parresol; Steven C. Stedman
2004-01-01
The accuracy of forest growth and yield forecasts affects the quality of forest management decisions (Rauscher et al. 2000). Users of growth and yield models want assurance that model outputs are reasonable and mimic local/regional forest structure and composition and accurately reflect the influences of stand dynamics such as competition and disturbance. As such,...
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimates of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and higher height threshold were required to obtain accurate corn LAI estimates when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and the estimation of vegetation biophysical parameters using LiDAR data.
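Two of the factors studied (point density, via random thinning, and a height threshold applied before computing a canopy metric) can be sketched on a synthetic point cloud; the densities, threshold, and metric below are assumptions for illustration only.

```python
import numpy as np

# Synthetic sketch: effect of random thinning (lower point density) and a
# height threshold on a simple canopy metric. All values are assumptions.
rng = np.random.default_rng(2)
heights = rng.uniform(0.0, 3.0, 5000)   # return heights (m), synthetic

def mean_canopy_height(h, keep_fraction, height_threshold):
    """Thin the returns at random, then average those above the threshold."""
    kept = h[rng.random(h.size) < keep_fraction]
    canopy = kept[kept > height_threshold]
    return float(canopy.mean())

full = mean_canopy_height(heights, keep_fraction=1.0, height_threshold=0.5)
thinned = mean_canopy_height(heights, keep_fraction=0.1, height_threshold=0.5)
print(full, thinned)  # a 10x thinning changes this metric only slightly
```

This mirrors the study's observation that reduced point density can still deliver reasonable estimates, since many plot-level metrics are averages that stabilize quickly with sample size.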
Variability of Diabetes Alert Dog Accuracy in a Real-World Setting
Gonder-Frederick, Linda A.; Grabman, Jesse H.; Shepard, Jaclyn A.; Tripathi, Anand V.; Ducar, Dallas M.; McElgunn, Zachary R.
2017-01-01
Background: Diabetes alert dogs (DADs) are growing in popularity as an alternative method of glucose monitoring for individuals with type 1 diabetes (T1D). Only a few empirical studies have assessed DAD accuracy, with inconsistent results. The present study examined DAD accuracy and variability in performance under real-world conditions using a convenience sample of owner-report diaries. Method: Eighteen DAD owners (44.4% female; 77.8% youth) with T1D completed diaries of DAD alerts during the first year after placement. Diary entries included daily BG readings and DAD alerts. For each DAD, the percentage of hits (alert with BG ≤ 5.0 or ≥ 11.1 mmol/L; ≤ 90 or ≥ 200 mg/dl), percentage of misses (no alert with BG out of range), and percentage of false alarms (alert with BG in range) were computed. Sensitivity, specificity, positive likelihood ratio (PLR), and true positive rates were also calculated. Results: Overall comparison of DAD hits to misses yielded significantly more hits for both low and high BG. Total sensitivity was 57.0%, with higher sensitivity to low BG (59.2%) than to high BG (56.1%). Total specificity was 49.3% and PLR = 1.12. However, high variability in accuracy was observed across DADs, with low BG sensitivity ranging from 33% to 100%. The proportion of DADs achieving ≥ 60%, 65% and 70% true positive rates was 71%, 50% and 44%, respectively. Conclusions: DADs may be able to detect out-of-range BG, but variability across DADs is evident. Larger trials are needed to further assess DAD accuracy and to identify factors influencing the complexity of DAD accuracy in BG detection. PMID:28627305
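The accuracy measures above follow directly from alert counts; here is a sketch with hypothetical counts chosen to land near the reported totals (sensitivity 57.0%, specificity 49.3%, PLR 1.12).

```python
# Sensitivity, specificity and positive likelihood ratio from alert counts;
# the counts are hypothetical, chosen to land near the reported totals.
def dad_accuracy(hits, misses, false_alarms, correct_rejections):
    sensitivity = hits / (hits + misses)
    specificity = correct_rejections / (correct_rejections + false_alarms)
    plr = sensitivity / (1.0 - specificity)
    return sensitivity, specificity, plr

sens, spec, plr = dad_accuracy(hits=57, misses=43,
                               false_alarms=51, correct_rejections=49)
print(round(sens, 2), round(spec, 2), round(plr, 2))  # 0.57 0.49 1.12
```

A PLR only slightly above 1, as reported here, means an alert raises the odds of an out-of-range BG only marginally, which is why the authors call for larger trials.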
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. 
Overall, we strongly advise the use of multivariate noise normalization as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding, and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA.
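The recommended cross-validated (squared) Euclidean distance can be written in a few lines: the condition difference is estimated on two independent partitions and their inner product is taken, so noise that is independent across partitions cancels in expectation. The toy pattern vectors below are invented.

```python
import numpy as np

# Cross-validated squared Euclidean distance: the A-B pattern difference is
# estimated on two independent partitions and their inner product is taken;
# partition-independent noise cancels in expectation. Patterns are invented.
def cv_euclidean_sq(a1, b1, a2, b2):
    """a*/b* are condition-A/B pattern vectors from partitions 1 and 2."""
    return float(np.dot(a1 - b1, a2 - b2))

a1 = np.array([1.0, 2.0, 3.0]); a2 = np.array([1.1, 1.9, 3.0])
b1 = np.array([0.0, 2.0, 5.0]); b2 = np.array([0.1, 2.1, 4.9])
d = cv_euclidean_sq(a1, b1, a2, b2)
print(d)  # ~4.8: A and B genuinely differ in the first and third channels
```

Unlike the ordinary squared Euclidean distance, this estimator is unbiased: for two conditions with identical true patterns it fluctuates around zero rather than being inflated by noise, which is what makes it a safe RSA default.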
Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres
2016-08-01
Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. 
This increase, however, was due to errors by institution coders, with six inappropriately used codes resulting in a higher overall reimbursement that was subsequently corrected. Assessment of physician documentation showed improvement, with decreased documentation errors at each period (11% vs 3.1% vs 0.6%; P = .02). Overall, between period 1 and period 3, multidisciplinary coding resulted in a significant increase in additional reimbursement ($17.63 per procedure; P = .004) and wRVUs (0.50 per procedure; P = .01). External validation at a second academic institution was performed to assess coding accuracy during period 1. Similar to institution 1, traditional coding revealed an 11% loss in reimbursement ($13,178 vs $14,630; P = .007) and a 12% loss in wRVUs (293 vs 329; P = .01) compared with multidisciplinary coding. Physician involvement in the coding of endovascular procedures leads to improved procedural coding accuracy, increased wRVU assignments, and increased physician reimbursement.
Feinstein, Wei P; Brylinski, Michal
2015-01-01
Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of a search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size, however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1 % (10 %) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. 
This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. The left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol; the right panel shows the docking accuracy using an optimized box size.
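The sizing rule reported above is easy to apply in practice; here is a sketch with hypothetical ligand heavy-atom coordinates (in angstroms), assuming a cubic box whose edge is 2.9 times the ligand's radius of gyration.

```python
import numpy as np

# Box-sizing sketch: cubic search-space edge = 2.9 x ligand radius of
# gyration. Coordinates are hypothetical heavy-atom positions (angstroms),
# not a real ligand; the 2.9 factor is the value reported in the study.
def radius_of_gyration(coords):
    centered = coords - coords.mean(axis=0)
    return float(np.sqrt((centered ** 2).sum(axis=1).mean()))

def docking_box_edge(coords, factor=2.9):
    return factor * radius_of_gyration(coords)

lig = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                [0.0, 2.0, 0.0], [2.0, 2.0, 0.0]])
print(radius_of_gyration(lig), docking_box_edge(lig))  # sqrt(2), 2.9*sqrt(2)
```

Scaling the box to each compound, rather than using one fixed size per target, is the key design choice: too small a box excludes valid poses, while too large a box dilutes the sampling of the conformational search.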
NASA Astrophysics Data System (ADS)
Matongera, Trylee Nyasha; Mutanga, Onisimo; Dube, Timothy; Sibanda, Mbulisi
2017-05-01
Bracken fern is an invasive plant that presents serious environmental, ecological and economic problems around the world. An understanding of the spatial distribution of bracken fern is therefore essential for providing appropriate management strategies at both local and regional scales. The aim of this study was to assess the utility of the freely available medium-resolution Landsat 8 OLI sensor in the detection and mapping of bracken fern at Cathedral Peak, South Africa. To achieve this objective, the results obtained from Landsat 8 OLI were compared with those derived using the costly, high spatial resolution WorldView-2 imagery. Since previous studies have already successfully mapped bracken fern using high spatial resolution WorldView-2 imagery, the comparison was done to investigate the magnitude of the difference in accuracy between the two sensors in relation to their acquisition costs. To evaluate the performance of Landsat 8 OLI in discriminating bracken fern compared to that of WorldView-2, we tested the utility of (i) spectral bands, (ii) derived vegetation indices, and (iii) the combination of spectral bands and vegetation indices, based on a discriminant analysis classification algorithm. After resampling the training and testing data and reclassifying several times (n = 100) based on the combined data sets, the overall accuracies for Landsat 8 OLI and WorldView-2 were tested for significant differences using the Mann-Whitney U test. The results showed that the integration of the spectral bands and derived vegetation indices yielded the best overall classification accuracy (80.08% and 87.80% for Landsat 8 OLI and WorldView-2, respectively). Additionally, the use of derived vegetation indices as a standalone data set produced the weakest overall accuracy results of 62.14% and 82.11% for the Landsat 8 OLI and WorldView-2 images, respectively.
There were significant differences (U(100) = 569.5, z = -10.82, p < 0.01) between the classification accuracies derived from Landsat 8 OLI and those derived using the WorldView-2 sensor. Although there were significant differences between the Landsat and WorldView-2 accuracies, the magnitude of variation (9%) between the two sensors was within an acceptable range. Therefore, the findings of this study demonstrate that the recently launched Landsat 8 OLI multispectral sensor provides valuable information that could aid the long-term, continuous monitoring and the formulation of effective bracken fern management, with accuracies that are comparable to those obtained from the high-resolution WorldView-2 commercial sensor.
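The Mann-Whitney U statistic used for this comparison can be computed from ranks with NumPy alone (no tie handling in this sketch); the accuracy values below are illustrative, not the study's 100 resampled accuracies.

```python
import numpy as np

# Mann-Whitney U from ranks (ties not handled in this minimal sketch);
# the accuracy values are illustrative, not the study's data.
def mann_whitney_u(x, y):
    combined = np.concatenate([x, y])
    ranks = np.empty(combined.size, dtype=float)
    ranks[combined.argsort()] = np.arange(1, combined.size + 1)
    r1 = ranks[: len(x)].sum()
    u1 = r1 - len(x) * (len(x) + 1) / 2.0
    return min(u1, len(x) * len(y) - u1)

landsat = np.array([0.79, 0.80, 0.81, 0.78])
worldview = np.array([0.87, 0.88, 0.86, 0.89])
print(mann_whitney_u(landsat, worldview))  # 0.0: the two groups fully separate
```

A U of 0 means every accuracy in one group exceeds every accuracy in the other, the rank-based analogue of the complete separation the study found between the two sensors.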
Ground-Laboratory to In-Space Atomic Oxygen Correlation for the PEACE Polymers
NASA Astrophysics Data System (ADS)
Stambler, Arielle H.; Inoshita, Karen E.; Roberts, Lily M.; Barbagallo, Claire E.; de Groh, Kim K.; Banks, Bruce A.
2009-01-01
The Materials International Space Station Experiment 2 (MISSE 2) Polymer Erosion and Contamination Experiment (PEACE) polymers were exposed to the environment of low Earth orbit (LEO) for 3.95 years from 2001 to 2005. There were forty-one different PEACE polymers, which were flown on the exterior of the International Space Station (ISS) in order to determine their atomic oxygen erosion yields. In LEO, atomic oxygen is an environmental durability threat, particularly for long duration mission exposures. Although space flight experiments, such as the MISSE 2 PEACE experiment, are ideal for determining LEO environmental durability of spacecraft materials, ground-laboratory testing is often relied upon for durability evaluation and prediction. Unfortunately, significant differences exist between LEO atomic oxygen exposure and atomic oxygen exposure in ground-laboratory facilities. These differences include variations in species, energies, thermal exposures and radiation exposures, all of which may result in different reactions and erosion rates. In an effort to improve the accuracy of ground-based durability testing, ground-laboratory to in-space atomic oxygen correlation experiments have been conducted. In these tests, the atomic oxygen erosion yields of the PEACE polymers were determined relative to Kapton H using a radio-frequency (RF) plasma asher (operated on air). The asher erosion yields were compared to the MISSE 2 PEACE erosion yields to determine the correlation between erosion rates in the two environments. This paper provides a summary of the MISSE 2 PEACE experiment; it reviews the specific polymers tested as well as the techniques used to determine erosion yield in the asher, and it provides a correlation between the space and ground-laboratory erosion yield values. Using the PEACE polymers' asher to in-space erosion yield ratios will allow more accurate in-space materials performance predictions to be made based on plasma asher durability evaluation.
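The mass-loss relation underlying both the flight and asher erosion yields can be sketched as follows; the sample numbers are illustrative, and the Kapton H reference erosion yield of 3.0e-24 cm^3/atom is the commonly used LEO value rather than a figure taken from this paper.

```python
# Mass-loss erosion yield: Ey = dM / (A * rho * F). The atomic oxygen
# fluence F comes from a Kapton H witness sample of known erosion yield;
# 3.0e-24 cm^3/atom is the commonly used LEO reference value, and the
# sample masses, areas and density below are illustrative assumptions.
KAPTON_EY = 3.0e-24   # cm^3/atom
KAPTON_RHO = 1.42     # g/cm^3

def fluence_from_kapton(mass_loss_g, area_cm2):
    return mass_loss_g / (area_cm2 * KAPTON_RHO * KAPTON_EY)

def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence):
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence)

F = fluence_from_kapton(mass_loss_g=0.0426, area_cm2=10.0)   # atoms/cm^2
ey = erosion_yield(mass_loss_g=0.0300, area_cm2=10.0,
                   density_g_cm3=1.05, fluence=F)
print(F, ey)  # 1.0e21 atoms/cm^2 and ~2.86e-24 cm^3/atom
```

Normalizing every sample to the same Kapton witness is what makes the asher-to-flight erosion yield ratios comparable across the two very different exposure environments.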
ERIC Educational Resources Information Center
Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria
2014-01-01
The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…
Yang, Rendong; Nelson, Andrew C; Henzler, Christine; Thyagarajan, Bharat; Silverstein, Kevin A T
2015-12-07
Comprehensive identification of insertions/deletions (indels) across the full size spectrum from second generation sequencing is challenging due to the relatively short read length inherent in the technology. Different indel calling methods exist but are limited in detection to specific sizes with varying accuracy and resolution. We present ScanIndel, an integrated framework for detecting indels with multiple heuristics including gapped alignment, split reads and de novo assembly. Using simulation data, we demonstrate ScanIndel's superior sensitivity and specificity relative to several state-of-the-art indel callers across various coverage levels and indel sizes. ScanIndel yields higher predictive accuracy with lower computational cost compared with existing tools for both targeted resequencing data from tumor specimens and high coverage whole-genome sequencing data from the human NIST standard NA12878. Thus, we anticipate ScanIndel will improve indel analysis in both clinical and research settings. ScanIndel is implemented in Python, and is freely available for academic use at https://github.com/cauyrd/ScanIndel.
Forage quantity estimation from MERIS using band depth parameters
NASA Astrophysics Data System (ADS)
Ullah, Saleem; Yali, Si; Schlerf, Martin
Forage quantity is an important factor influencing the feeding pattern and distribution of wildlife. The main objective of this study was to evaluate the predictive performance of vegetation indices and band depth analysis parameters for the estimation of green biomass using MERIS data. Green biomass was best predicted by the NBDI (normalized band depth index), which yielded a calibration R2 of 0.73 and an accuracy (independent validation dataset, n=30) of 136.2 g/m2 (47% of the measured mean), compared with much lower accuracies obtained by the soil adjusted vegetation index (SAVI; 444.6 g/m2, 154% of the mean) and by other vegetation indices. This study will contribute to mapping and monitoring foliar biomass over the year at the regional scale, which in turn can aid the understanding of bird migration patterns. Keywords: Biomass, Nitrogen density, Nitrogen concentration, Vegetation indices, Band depth analysis parameters
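Band depth parameters are derived by continuum removal over an absorption feature. The sketch below shows a generic continuum-removed band depth computation; the wavelength range, synthetic spectrum, and sum-to-one normalization are illustrative assumptions, since the study's exact NBDI formulation is not reproduced here:

```python
import numpy as np

def band_depths(wavelengths, reflectance, left, right):
    """Continuum-removed band depths between two shoulder wavelengths.

    The continuum is a straight line joining the reflectance at the left
    and right shoulders; depth D = 1 - R / R_continuum.
    """
    wl = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    mask = (wl >= left) & (wl <= right)
    rl = np.interp(left, wl, r)
    rr = np.interp(right, wl, r)
    continuum = rl + (wl[mask] - left) * (rr - rl) / (right - left)
    return 1.0 - r[mask] / continuum

# synthetic absorption feature in a MERIS-like red-edge range (nm)
wl = np.linspace(660, 790, 14)
r = 0.4 - 0.15 * np.exp(-((wl - 680) / 15) ** 2)
d = band_depths(wl, r, 660, 790)
nbdi = d / d.sum()        # one possible normalization: depths sum to 1
```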
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of statistics as experimental precision degree measures for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes.
Assessing the accuracy of different simplified frictional rolling contact algorithms
NASA Astrophysics Data System (ADS)
Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.
2012-01-01
This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities; mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occurs for realistic vehicles and tracks is assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values of spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that the USETAB approach in particular, but also FASTSIM, gives considerably better results than the linear theory and the Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.
Huang, Z H; Li, N; Rao, K F; Liu, C T; Huang, Y; Ma, M; Wang, Z J
2018-03-01
Genotoxicants can be identified as aneugens and clastogens through a micronucleus (MN) assay. The current high-content screening-based MN assays usually discriminate an aneugen from a clastogen based on only one parameter, such as the MN size, intensity, or morphology, which yields low accuracies (70-84%) because each of these parameters may contribute to the results. Therefore, the development of an algorithm that can synthesize high-dimensionality data to attain comparative results is important. To improve the automation and accuracy of detection using the current parameter-based mode of action (MoA), the MN MoA signatures of 20 chemicals were systematically recruited in this study to develop an algorithm. The results of the algorithm showed very good agreement (93.58%) between the prediction and reality, indicating that the proposed algorithm is a validated analytical platform for the rapid and objective acquisition of genotoxic MoA messages.
Alexander, Michael B; Hodges, Theresa K; Wescott, Daniel J; Aitkenhead-Peterson, Jacqueline A
2016-05-01
Despite technological advances, human remains detection (HRD) dogs remain one of the best tools for locating clandestine graves. However, soil texture may affect the escape of decomposition gases and therefore the effectiveness of HRD dogs. Six nationally credentialed HRD dogs (three HRD-only and three cross-trained) were evaluated on novel buried human remains in contrasting soils, a clayey and a sandy soil. Search time and accuracy were compared between the clayey and sandy soils to assess odor location difficulty. Sandy soil (p < 0.001) yielded significantly faster trained response times, but no significant differences were found in performance accuracy between soil textures or training methods. Results indicate soil texture may be a significant factor in odor detection difficulty. Prior knowledge of soil texture and moisture may be useful for search management and planning. Appropriate adjustments to search segment sizes, sweep widths, and search time allotment depending on soil texture may optimize successful detection. © 2016 American Academy of Forensic Sciences.
Forward and correctional OFDM-based visible light positioning
NASA Astrophysics Data System (ADS)
Li, Wei; Huang, Zhitong; Zhao, Runmei; He, Peixuan; Ji, Yuefeng
2017-09-01
Visible light positioning (VLP) has attracted much attention in both academic and industrial areas due to the extensive deployment of light-emitting diodes (LEDs) as next-generation green lighting. Generally, the coverage of a single LED lamp is limited, so LED arrays are often utilized to achieve uniform illumination within large-scale indoor environments. However, in such a dense LED deployment scenario, the superposition of the light signals becomes an important challenge for accurate VLP. To solve this problem, we propose a forward and correctional orthogonal frequency division multiplexing (OFDM)-based VLP (FCO-VLP) scheme with low complexity in signal generation and processing. In the first, forward procedure of FCO-VLP, an initial position is obtained by the trilateration method based on OFDM subcarriers. The positioning accuracy is further improved in the second, correctional procedure based on a database of reference points. As demonstrated in our experiments, our approach yields an improved average positioning error of 4.65 cm and enhances positioning accuracy by 24.2% compared with the trilateration method.
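The forward step rests on ordinary trilateration from per-lamp ranges. A minimal least-squares trilateration sketch follows; the lamp coordinates and ranges are hypothetical, and the OFDM-subcarrier ranging itself is not modeled:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2-D position from >= 3 anchor positions and ranges.

    Subtracting the first range equation from the others cancels the
    quadratic |p|^2 term, leaving a linear system A p = b.
    """
    a = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2.0 * (a[1:] - a[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]   # lamp coordinates (m)
p_true = np.array([1.0, 2.0])
d = [np.hypot(*(p_true - np.array(l))) for l in leds]
print(trilaterate(leds, d))                    # ≈ [1. 2.]
```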
Random Deep Belief Networks for Recognizing Emotions from Speech Signals.
Wen, Guihua; Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang
2017-01-01
Human emotions can now be recognized from speech signals using machine learning methods; however, these methods are challenged by low recognition accuracies in real applications due to their lack of rich representation ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To make full use of this advantage, this paper presents an ensemble of random deep belief networks (RDBN) method for speech emotion recognition. It first extracts the low-level features of the input speech signal and then uses them to construct many random subspaces. Each random subspace is then provided to a DBN to yield higher-level features, which serve as the input to a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN has better accuracy than the compared methods for speech emotion recognition.
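The random-subspace ensemble with majority voting can be sketched with a stand-in base learner; a nearest-centroid classifier replaces the DBN here purely for brevity, and the features and labels are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None]) ** 2).sum(-1)
    return classes[d.argmin(1)]

def random_subspace_ensemble(X, y, Xt, n_members=15, k=4):
    """Train one base learner per random feature subspace, then vote."""
    votes = []
    for _ in range(n_members):
        feats = rng.choice(X.shape[1], size=k, replace=False)
        m = nearest_centroid_fit(X[:, feats], y)
        votes.append(nearest_centroid_predict(m, Xt[:, feats]))
    votes = np.stack(votes)                       # (members, samples)
    # majority vote per test sample
    return np.array([np.bincount(v).argmax() for v in votes.T])

# toy "emotion features": two separable classes in 10-D
X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(3, 1, (40, 10))])
y = np.array([0] * 40 + [1] * 40)
pred = random_subspace_ensemble(X, y, X)
print((pred == y).mean())
```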
A novel modality for intrapartum fetal heart rate monitoring.
Ashwal, Eran; Shinar, Shiri; Aviram, Amir; Orbach, Sharon; Yogev, Yariv; Hiersch, Liran
2017-11-02
Intrapartum fetal heart rate (FHR) monitoring is well recommended during labor to assess fetal wellbeing. Though commonly used, the external Doppler and fetal scalp electrode monitors have significant shortcomings. Lately, non-invasive technologies have been developed as possible alternatives. The objective of this study was to compare the accuracy of FHR tracing using novel Electronic Uterine Monitoring (EUM) to that of the external Doppler and fetal scalp electrode monitors. A comparative study was conducted in a single tertiary medical center. Intrapartum FHR was recorded simultaneously using three different methods: internal fetal scalp electrode, external Doppler, and EUM. The latter, a multichannel electromyogram (EMG) device, acquires a uterine signal and maternal and fetal electrocardiograms. FHR traces obtained from all devices during the first and second stages of labor were analyzed. Positive percent agreement (PPA) and accuracy (measured as the root mean square error between observed and predicted values) of EUM and external Doppler were both compared to internal scalp electrode monitoring. A Bland-Altman agreement plot was used to compare the differences in FHR traces between all modalities. For momentary recordings of fetal heart rate <110 bpm or >160 bpm, the level of agreement, sensitivity, and specificity were also evaluated. Overall, 712,800 FHR momentary recordings were obtained from 33 parturients. Although both EUM and external Doppler correlated highly with internal scalp electrode monitoring (r2 = 0.98, p < .001 for both methods), the accuracy of EUM was significantly higher than that of external Doppler (99.0% versus 96.6%, p < .001). In addition, for fetal heart rate <110 bpm or >160 bpm, the PPA, sensitivity, and specificity of EUM compared with the internal fetal scalp electrode were significantly greater than those of external Doppler (p < .001).
Intrapartum FHR using EUM is both valid and accurate, yielding higher correlations with internal scalp electrode monitoring than external Doppler. As such, it may provide a good framework for non-invasive evaluation of intrapartum FHR.
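The accuracy metric described, root mean square error between paired momentary recordings, is straightforward to compute. A minimal sketch with hypothetical bpm values:

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error between paired recordings."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    return np.sqrt(np.mean((observed - predicted) ** 2))

internal = np.array([140, 142, 141, 150, 161.0])   # scalp electrode, bpm
external = np.array([141, 143, 140, 152, 158.0])   # Doppler, bpm
print(rmse(internal, external))                    # ≈ 1.79 bpm
```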
Research yields precise uncertainty equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, E.H.; Ferguson, K.R.
1987-08-03
Results of a study of orifice-meter accuracy by Chevron Oil Field Research Co. at its Venice, La., calibration facility have important implications for natural gas custody-transfer measurement. The calibration facility, data collection, and equipment calibration were described elsewhere. This article explains the derivation of uncertainty factors and details the study's findings. The results were based on calibration of two 16-in. orifice-meter runs. The experimental data cover a beta-ratio range from 0.27 to 0.71 and a Reynolds number range from 4,000,000 to 35,000,000. Discharge coefficients were determined by comparing the orifice flow to the flow from critical-flow nozzles.
Improving absolute gravity estimates by the L p -norm approximation of the ballistic trajectory
NASA Astrophysics Data System (ADS)
Nagornyi, V. D.; Svitlov, S.; Araya, A.
2016-04-01
Iteratively re-weighted least squares (IRLS) was used to simulate the L p -norm approximation of the ballistic trajectory in absolute gravimeters. Two iterations of the IRLS delivered sufficient accuracy of the approximation without a significant bias. The simulations were performed on different samplings and perturbations of the trajectory. For platykurtic distributions of the perturbations, the L p -approximation with 3 < p < 4 was found to yield gravity estimates several times more precise than standard least squares. The simulation results were confirmed by processing real gravity observations performed under excessive-noise conditions.
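The IRLS scheme for an L_p fit of the quadratic free-fall trajectory can be sketched as follows. The two-iteration count, the 3 < p < 4 exponent, and the uniform (platykurtic) noise follow the description above; the sampling details and trajectory coefficients are illustrative assumptions:

```python
import numpy as np

def lp_fit(t, z, p=3.5, iters=2):
    """IRLS L_p fit of z(t) = z0 + v0*t + 0.5*g*t**2.

    Weights w_i = |r_i|^(p-2) turn the L_p problem into a sequence of
    weighted least-squares solves; two iterations as in the study.
    """
    A = np.column_stack([np.ones_like(t), t, 0.5 * t ** 2])
    coef = np.linalg.lstsq(A, z, rcond=None)[0]        # L2 starting point
    for _ in range(iters):
        r = z - A @ coef
        sw = np.sqrt(np.maximum(np.abs(r), 1e-12) ** (p - 2))
        coef = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)[0]
    return coef                                        # z0, v0, g

t = np.linspace(0, 0.2, 200)                           # s, sampling (assumed)
rng = np.random.default_rng(1)
noise = rng.uniform(-1e-6, 1e-6, t.size)               # platykurtic (uniform)
z = 0.1 + 0.05 * t + 0.5 * 9.81 * t ** 2 + noise       # m
print(lp_fit(t, z)[2])                                 # ≈ 9.81
```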
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Shioumin; Kruijs, Robbert van de; Zoethout, Erwin
Ion sputtering yields for Ru, Mo, and Si under Ar{sup +} ion bombardment in the near-threshold energy range have been studied using an in situ weight-loss method with a Kaufman ion source, Faraday cup, and quartz crystal microbalance. The results are compared to theoretical models. The accuracy of the in situ weight-loss method was verified by thickness-decrease measurements using grazing incidence x-ray reflectometry, and results from both methods are in good agreement. These results provide accurate data sets for theoretical modeling in the near-threshold sputter regime and are of relevance for (optical) surfaces exposed to plasmas, as, for instance, in extreme ultraviolet photolithography.
Predicting elastic properties of β-HMX from first-principles calculations.
Peng, Qing; Rahul; Wang, Guangyu; Liu, Gui-Rong; Grimme, Stefan; De, Suvranu
2015-05-07
We investigate the performance of van der Waals (vdW) functions in predicting the elastic constants of β cyclotetramethylene tetranitramine (HMX) energetic molecular crystals using density functional theory (DFT) calculations. We confirm that the accuracy of the elastic constants is significantly improved using the vdW corrections with environment-dependent C6 together with PBE and revised PBE exchange-correlation functionals. The elastic constants obtained using PBE-D3(0) calculations yield the most accurate mechanical response of β-HMX when compared with experimental stress-strain data. Our results suggest that PBE-D3 calculations are reliable in predicting the elastic constants of this material.
NASA Astrophysics Data System (ADS)
Latypov, Marat I.; Kalidindi, Surya R.
2017-10-01
There is a critical need for the development and verification of practically useful multiscale modeling strategies for simulating the mechanical response of multiphase metallic materials with heterogeneous microstructures. In this contribution, we present data-driven reduced order models for effective yield strength and strain partitioning in such microstructures. These models are built employing the recently developed framework of Materials Knowledge Systems that employ 2-point spatial correlations (or 2-point statistics) for the quantification of the heterostructures and principal component analyses for their low-dimensional representation. The models are calibrated to a large collection of finite element (FE) results obtained for a diverse range of microstructures with various sizes, shapes, and volume fractions of the phases. The performance of the models is evaluated by comparing the predictions of yield strength and strain partitioning in two-phase materials with the corresponding predictions from a classical self-consistent model as well as results of full-field FE simulations. The reduced-order models developed in this work show an excellent combination of accuracy and computational efficiency, and therefore present an important advance towards computationally efficient microstructure-sensitive multiscale modeling frameworks.
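The quantification pipeline described, 2-point spatial correlations followed by principal component analysis for dimensionality reduction, can be sketched as below. The toy two-phase microstructures and grid size are illustrative; this is not the authors' calibrated reduced-order model:

```python
import numpy as np

def two_point_autocorr(ms):
    """Periodic 2-point autocorrelation of a phase indicator field via FFT."""
    f = np.fft.fftn(ms)
    return np.real(np.fft.ifftn(f * np.conj(f))) / ms.size

rng = np.random.default_rng(2)
# toy ensemble: 30 two-phase microstructures on a 32x32 grid
stats = []
for _ in range(30):
    ms = (rng.random((32, 32)) < rng.uniform(0.2, 0.8)).astype(float)
    stats.append(two_point_autocorr(ms).ravel())
S = np.array(stats)

# PCA by SVD of the centered statistics -> low-dimensional scores
Sc = S - S.mean(axis=0)
U, sv, Vt = np.linalg.svd(Sc, full_matrices=False)
scores = U * sv                 # principal component scores per structure
print(scores.shape)             # (30, 30)
```

The zero-lag value of the autocorrelation equals the phase volume fraction, which is a standard sanity check on 2-point statistics.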
NASA Astrophysics Data System (ADS)
Pietropolli Charmet, Andrea; Cornaton, Yann
2018-05-01
This work presents an investigation of the theoretical predictions yielded by anharmonic force fields in which the cubic and quartic force constants are computed analytically by means of density functional theory (DFT) using the recursive scheme developed by M. Ringholm et al. (J. Comput. Chem. 35 (2014) 622). Different functionals (namely B3LYP, PBE, PBE0 and PW86x) and basis sets were used for calculating the anharmonic vibrational spectra of two halomethanes. The benchmark analysis carried out demonstrates the reliability and overall good performance offered by hybrid approaches, in which the harmonic data obtained at the coupled cluster with single and double excitations level of theory, augmented by a perturbational estimate of the effects of connected triple excitations, CCSD(T), are combined with the fully analytic higher-order force constants yielded by DFT functionals. These methods lead to reliable and computationally affordable calculations of anharmonic vibrational spectra with an accuracy comparable to that yielded by hybrid force fields whose anharmonic parts are computed at the second-order Møller-Plesset perturbation theory (MP2) level using numerical differentiation, but without the corresponding issues related to computational costs and numerical errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biplab, S; Soumya, R; Paul, S
2014-06-01
Purpose: For the first time in the world, BrainLAB has integrated its iPlan treatment planning system for clinical use with an Elekta linear accelerator (Axesse with a Beam Modulator). The purpose of this study was to compare the calculated and measured doses with different chambers to establish the calculation accuracy of the iPlan system. Methods: iPlan has both Pencil Beam (PB) and Monte Carlo (MC) calculation algorithms. Beam data include depth doses, profiles, and output measurements for different field sizes. The collected data were verified by the vendor and beam modelling was done. Further QA tests were carried out in our clinic. Dose calculation accuracy was verified by point and volumetric dose measurements using ion chambers of different volumes (0.01 cc and 0.125 cc). Planar dose verification was done using a diode array. Plans were generated in iPlan and irradiated on the Elekta Axesse linear accelerator. Results: Dose calculation accuracies verified using an ion chamber for the 6 and 10 MV beams were 3.5%+/-0.33 (PB), 1.7%+/-0.7 (MC) and 3.9%+/-0.6 (PB), 3.4%+/-0.6 (MC), respectively. Using a pinpoint chamber, dose calculation accuracy for 6 MV and 10 MV was 3.8%+/-0.06 (PB), 1.21%+/-0.2 (MC) and 4.2%+/-0.6 (PB), 3.1%+/-0.7 (MC), respectively. The calculated planar dose distribution for a 10.4×10.4 cm2 field was verified using a diode array; gamma analysis with 2%/2 mm criteria yielded pass rates of 88% (PB) and 98.8% (MC), while 3%/3 mm criteria yielded 100% pass rates for both the MC and PB algorithms. Conclusion: Dose calculation accuracy was found to be within acceptable limits for MC for the 6 MV beam. PB for both beams and MC for the 10 MV beam were found to be outside acceptable limits. The output measurements were done twice for confirmation. The lower gamma pass rate was attributed to the small number of measured profiles (only two profiles for PB) and the coarse measurement resolution for the diagonal profile (5 mm).
Based on these measurements, we concluded that the 6 MV MC algorithm is suitable for patient treatment.
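The gamma analysis used for the planar comparison combines a dose-difference criterion with a distance-to-agreement criterion. Below is a coarse 1-D sketch with global normalization, no interpolation, and synthetic profiles; it illustrates the metric, not the clinical software's implementation:

```python
import numpy as np

def gamma_pass_rate(x, d_ref, d_eval, dd=0.02, dta=2.0):
    """Global 1-D gamma analysis.

    A reference point passes if, over all evaluation points, the minimum of
    sqrt((dx/dta)^2 + (dose_diff/(dd*D_max))^2) is <= 1.
    """
    dmax = d_ref.max()
    g = np.empty(x.size)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dist = (x - xi) / dta
        dose = (d_eval - di) / (dd * dmax)
        g[i] = np.sqrt(dist ** 2 + dose ** 2).min()
    return (g <= 1.0).mean() * 100.0

x = np.arange(0, 100.0, 1.0)                  # position, mm
ref = np.exp(-((x - 50) / 20) ** 2)           # reference dose profile
ev = np.exp(-((x - 50.2) / 20) ** 2)          # evaluated profile, 0.2 mm shift
print(gamma_pass_rate(x, ref, ev))            # 100.0
```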
Comparing Paper and Tablet Modes of Retrospective Activity Space Data Collection.
Yabiku, Scott T; Glick, Jennifer E; Wentz, Elizabeth A; Ghimire, Dirgha; Zhao, Qunshan
2017-01-01
Individual actions are both constrained and facilitated by the social context in which individuals are embedded. But research to test specific hypotheses about the role of space on human behaviors and well-being is limited by the difficulty of collecting accurate and personally relevant social context data. We report on a project in Chitwan, Nepal, that directly addresses challenges to collect accurate activity space data. We test if a computer assisted interviewing (CAI) tablet-based approach to collecting activity space data was more accurate than a paper map-based approach; we also examine which subgroups of respondents provided more accurate data with the tablet mode compared to paper. Results show that the tablet approach yielded more accurate data when comparing respondent-indicated locations to the known locations as verified by on-the-ground staff. In addition, the accuracy of the data provided by older and less healthy respondents benefited more from the tablet mode.
Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.
2016-01-01
OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, average absolute volume deviation 23.7 cm3, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient 0.98, absolute deviation 46.92 cm3, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
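The Dice similarity coefficient used to score the automated pipelines against manual ground truth is computed as 2|A∩B| / (|A| + |B|). A minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

manual = np.zeros((10, 10), bool)
manual[2:8, 2:8] = True        # 36 "voxels" in the manual segmentation
auto = np.zeros((10, 10), bool)
auto[3:8, 2:8] = True          # 30 voxels in the automated segmentation
print(dice(manual, auto))      # 2*30/(36+30) ≈ 0.909
```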
The Accuracy of Pedometers in Measuring Walking Steps on a Treadmill in College Students.
Husted, Hannah M; Llewellyn, Tamra L
2017-01-01
Pedometers are a popular way for people to track whether they have reached the recommended 10,000 daily steps. The purpose of this study was therefore to determine the accuracy of four brands of pedometers at measuring steps, and to determine whether a relationship exists between pedometer cost and accuracy. The hypothesis was that the more expensive brands of pedometers (the Fitbit Charge™ and Omron HJ-303™) would yield more accurate step counts than the less expensive brands (the SmartHealth - Walking FIT™ and Sportline™). While wearing all pedometers at once, one male and eleven female college students (mean ± SD; age = 20.8 ± 0.94 years) walked 400 meters on a treadmill for 5 minutes at 3.5 miles per hour. The pedometer step counts were recorded at the end. Video analysis of the participants' feet was later completed to count the number of steps actually taken (actual steps). The Sportline™ brand (-3.83 ± 22.05) was the only pedometer whose count did not differ significantly from the actual steps. The other three brands significantly under-estimated steps (Fitbit™ 55.00 ± 42.58, SmartHealth™ 43.50 ± 49.71, and Omron™ 28.58 ± 33.86), with the Fitbit being the least accurate. These results suggest an inverse relationship between cost and accuracy for the four specific brands tested, and that waist pedometers are more accurate than wrist pedometers. The results concerning the Fitbit are striking considering its high cost and popularity among consumers. Further research should be conducted to improve the accuracy of pedometers.
Effect of Kinesiotape Applications on Ball Velocity and Accuracy in Amateur Soccer and Handball
Müller, Carsten; Brandes, Mirko
2015-01-01
Evidence supporting performance enhancing effects of kinesiotape in sports is missing. The aims of this study were to evaluate effects of kinesiotape applications with regard to shooting and throwing performance in 26 amateur soccer and 32 handball players, and to further investigate if these effects were influenced by the players’ level of performance. Ball speed as the primary outcome and accuracy of soccer kicks and handball throws were analyzed with and without kinesiotape by means of radar units and video recordings. The application of kinesiotapes significantly increased ball speed in soccer by 1.4 km/h (p=0.047) and accuracy with a lesser distance from the target by −6.9 cm (p=0.039). Ball velocity in handball throws also significantly increased by 1.2 km/h (p=0.013), while accuracy was deteriorated with a greater distance from the target by 3.4 cm (p=0.005). Larger effects with respect to ball speed were found in players with a lower performance level in kicking (1.7 km/h, p=0.028) and throwing (1.8 km/h, p=0.001) compared with higher level soccer and handball players (1.2 km/h, p=0.346 and 0.5 km/h, p=0.511, respectively). In conclusion, the applications of kinesiotape used in this study might have beneficial effects on performance in amateur soccer, but the gain in ball speed in handball is counteracted by a significant deterioration of accuracy. Subgroup analyses indicate that kinesiotape may yield larger effects on ball velocity in athletes with lower kicking and throwing skills. PMID:26839612
Accuracy and Usefulness of the HEDIS Childhood Immunization Measures
Solomon, Barry S.; Kim, Julia M.; Miller, Marlene R.
2012-01-01
OBJECTIVE: With the use of Centers for Disease Control and Prevention (CDC) immunization recommendations as the gold standard, our objectives were to measure the accuracy (“is this child up-to-date on immunizations?”) and usefulness (“is this child due for catch-up immunizations?”) of the Healthcare Effectiveness Data and Information Set (HEDIS) childhood immunization measures. METHODS: For children aged 24 to 35 months from the 2009 National Immunization Survey, we assessed the accuracy and usefulness of the HEDIS childhood immunization measures for 6 individual immunizations and a composite. RESULTS: A total of 12 096 children met all inclusion criteria and composed the study sample. The HEDIS measures had >90% accuracy when compared with the CDC gold standard for each of the 6 immunizations (range, 94.3%–99.7%) and the composite (93.8%). The HEDIS measure was least accurate for hepatitis B and pneumococcal conjugate immunizations. The proportion of children for which the HEDIS measure yielded a nonuseful result (ie, an incorrect answer to the question, “is this child due for catch-up immunization?”) ranged from 0.33% (varicella) to 5.96% (pneumococcal conjugate). The most important predictor of HEDIS measure accuracy and usefulness was the CDC-recommended number of immunizations due at age 2 years; children with zero or all immunizations due were the most likely to be correctly classified. CONCLUSIONS: HEDIS childhood immunization measures are, on the whole, accurate and useful. Certain immunizations (eg, hepatitis B, pneumococcal conjugate) and children (eg, those with a single overdue immunization), however, are more prone to HEDIS misclassification. PMID:22451701
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared with one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels.
However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy improves remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply on a lower spatial scale should be further investigated.
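The interpolation-then-aggregation idea above can be sketched in a few lines. This is an illustrative outline only, not the authors' implementation: inverse-distance weighting stands in for whatever interpolator was used, and the function names are hypothetical.

```python
import math

def idw_fill(known, unknown_coords, power=2):
    """Inverse-distance-weighted estimates for non-sampled units.
    known: list of ((x, y), value) for sampled parishes;
    unknown_coords: list of (x, y) for non-sampled parishes."""
    estimates = []
    for ux, uy in unknown_coords:
        num = den = 0.0
        for (kx, ky), value in known:
            d = math.hypot(ux - kx, uy - ky)
            if d == 0:                 # exact hit on a sampled location
                num, den = value, 1.0
                break
            w = d ** -power
            num += w * value
            den += w
        estimates.append(num / den)
    return estimates

def area_weighted_mean(values, areas):
    """Aggregate unit-level estimates to a higher administrative level."""
    return sum(v * a for v, a in zip(values, areas)) / sum(areas)
```

A point midway between two sampled parishes simply receives their equally weighted mean, and aggregation then averages out the under- and over-estimates, as the abstract describes.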
Hahn, Tim; Kircher, Tilo; Straube, Benjamin; Wittchen, Hans-Ulrich; Konrad, Carsten; Ströhle, Andreas; Wittmann, André; Pfleiderer, Bettina; Reif, Andreas; Arolt, Volker; Lueken, Ulrike
2015-01-01
Although neuroimaging research has made substantial progress in identifying the large-scale neural substrate of anxiety disorders, its value for clinical application lags behind expectations. Machine-learning approaches have predictive potential for individual-patient prognostic purposes and might thus aid translational efforts in psychiatric research. We aimed to predict treatment response to cognitive behavioral therapy (CBT) on an individual-patient level from functional magnetic resonance imaging data in patients with panic disorder with agoraphobia (PD/AG). We included 49 patients, free of medication for at least 4 weeks and with a primary diagnosis of PD/AG, in a longitudinal study performed at 8 clinical research institutes and outpatient centers across Germany. The functional magnetic resonance imaging study was conducted between July 2007 and March 2010. The intervention comprised 12 CBT sessions, conducted twice a week, focusing on behavioral exposure. Treatment response was defined as exceeding a 50% reduction in Hamilton Anxiety Rating Scale scores. Blood oxygenation level-dependent signal was measured during a differential fear-conditioning task. Regional and whole-brain Gaussian process classifiers using a nested leave-one-out cross-validation were used to predict the treatment response from data acquired before CBT. Although no single brain region was predictive of treatment response, integrating regional classifiers based on data from the acquisition and the extinction phases of the fear-conditioning task for the whole brain yielded good predictive performance (accuracy, 82%; sensitivity, 92%; specificity, 72%; P < .001). Data from the acquisition phase enabled 73% correct individual-patient classifications (sensitivity, 80%; specificity, 67%; P < .001), whereas data from the extinction phase led to an accuracy of 74% (sensitivity, 64%; specificity, 83%; P < .001).
Conservative reanalyses under consideration of potential confounders yielded nominally lower but comparable accuracy rates (acquisition phase, 70%; extinction phase, 71%; combined, 79%). Predicting treatment response to CBT based on functional neuroimaging data in PD/AG is possible with high accuracy on an individual-patient level. This novel machine-learning approach brings personalized medicine within reach, directly supporting clinical decisions for the selection of treatment options, thus helping to improve response rates.
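The leave-one-out scheme described above, with accuracy, sensitivity, and specificity computed from the held-out predictions, can be sketched as follows. This is a minimal illustration, not the study's pipeline: a nearest-centroid rule stands in for the Gaussian process classifier, and the function names are assumptions.

```python
import math

def centroid_classify(train_X, train_y, x):
    # Stand-in base learner for illustration (the study used Gaussian
    # process classifiers, not reimplemented here): nearest class centroid.
    centroids = {}
    for label in set(train_y):
        pts = [xi for xi, yi in zip(train_X, train_y) if yi == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return min(centroids, key=lambda lab: math.dist(centroids[lab], x))

def loo_response_metrics(X, y, classify=centroid_classify):
    """Leave-one-out cross-validation: each patient is held out once and
    predicted from a classifier trained on all the others."""
    tp = tn = fp = fn = 0
    for i in range(len(X)):
        pred = classify(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i])
        if y[i] == 1:                    # true responder
            tp += int(pred == 1)
            fn += int(pred == 0)
        else:                            # true non-responder
            tn += int(pred == 0)
            fp += int(pred == 1)
    return {"accuracy": (tp + tn) / len(X),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}
```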
On the distance of genetic relationships and the accuracy of genomic prediction in pig breeding.
Meuwissen, Theo H E; Odegard, Jorgen; Andersen-Ranberg, Ina; Grindflek, Eli
2014-08-01
With the advent of genomic selection, alternative relationship matrices are used in animal breeding, which vary in their coverage of distant relationships due to old common ancestors. Relationships based on pedigree (A) and linkage analysis (GLA) cover only recent relationships because of the limited depth of the known pedigree. Relationships based on identity-by-state (G) include relationships up to the age of the SNP (single nucleotide polymorphism) mutations. We hypothesised that the latter relationships were too old, since QTL (quantitative trait locus) mutations for traits under selection were probably more recent than the SNPs on a chip, which are typically selected for high minor allele frequency. In addition, A and GLA relationships are too recent to cover genetic differences accurately. Thus, we devised a relationship matrix that considered intermediate-aged relationships and compared all these relationship matrices for their accuracy of genomic prediction in a pig breeding situation. Haplotypes were constructed and used to build a haplotype-based relationship matrix (GH), which considers more intermediate-aged relationships, since haplotypes recombine more quickly than SNPs mutate. Dense genotypes (38 453 SNPs) on 3250 elite breeding pigs were combined with phenotypes for growth rate (2668 records), lean meat percentage (2618), weight at three weeks of age (7387) and number of teats (5851) to estimate breeding values for all animals in the pedigree (8187 animals) using the aforementioned relationship matrices. Phenotypes on the youngest 424 to 486 animals were masked and predicted in order to assess the accuracy of the alternative genomic predictions. Correlations between the relationships and regressions of older on younger relationships revealed that the age of the relationships increased in the order A, GLA, GH and G. Use of genomic relationship matrices yielded significantly higher prediction accuracies than A. 
GH and G did not differ significantly, but both were significantly more accurate than GLA. Our hypothesis that intermediate-aged relationships yield more accurate genomic predictions than G was confirmed for two of four traits, although these differences were not statistically significant. Use of estimated genotype probabilities for ungenotyped animals proved to be an efficient method to include the phenotypes of ungenotyped animals.
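An identity-by-state relationship matrix of the kind denoted G above is commonly built with VanRaden's method 1. The following is a generic sketch under the assumption of 0/1/2 allele coding, not the paper's haplotype-based GH construction:

```python
import numpy as np

def vanraden_G(M):
    """Identity-by-state genomic relationship matrix (VanRaden method 1).
    M: n_animals x n_snps matrix of allele counts coded 0/1/2."""
    M = np.asarray(M, dtype=float)
    p = M.mean(axis=0) / 2.0              # observed allele frequencies
    Z = M - 2.0 * p                       # centre each SNP column
    denom = 2.0 * np.sum(p * (1.0 - p))   # expected heterozygosity scaling
    return Z @ Z.T / denom
```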
A quality assessment of the MARS crop yield forecasting system for the European Union
NASA Astrophysics Data System (ADS)
van der Velde, Marijn; Bareuth, Bettina
2015-04-01
Timely information on crop production forecasts is of increasing importance as commodity markets become ever more interconnected. Impacts across large crop production areas due to, for example, extreme weather and pest outbreaks can create ripple effects that affect food prices and availability elsewhere. The MARS Unit (Monitoring Agricultural ResourceS), DG Joint Research Centre, European Commission, has been providing forecasts of European crop production levels since 1993. The operational crop production forecasting is carried out with the MARS Crop Yield Forecasting System (M-CYFS). The M-CYFS is used to monitor crop growth development, evaluate short-term effects of anomalous meteorological events, and provide monthly forecasts of crop yield at national and European Union level. The crop production forecasts are published in the so-called MARS bulletins. Forecasting crop yield over large areas in an operational context requires quality benchmarks. Here we present an analysis of the accuracy and skill of past crop yield forecasts of the main crops (e.g. soft wheat, grain maize) throughout the growing season, and specifically for the final forecast before harvest. Two simple benchmarks for forecast skill were defined by comparing the forecasts to (1) a forecast equal to the average yield and (2) a forecast using a linear trend fitted to the crop yield time series. These reveal variable performance as a function of crop and Member State. In terms of production, forecasts covering 67% of EU-28 soft wheat production and 80% of EU-28 maize production were superior to both benchmarks during the 1993-2013 period. In a changing and increasingly variable climate, crop yield forecasts can become increasingly valuable, provided they are used wisely. We end our presentation by discussing research activities that could contribute to this goal.
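The two skill benchmarks can be expressed as a comparison of mean absolute errors. The sketch below is an illustrative outline only (in-sample trend fit, hypothetical function names), not the M-CYFS evaluation code:

```python
def benchmark_skill(years, yields, forecasts):
    """Check whether yield forecasts beat two naive benchmarks:
    (1) the historical average yield, (2) a linear trend fit to the series."""
    n = len(years)
    mean_yield = sum(yields) / n
    # ordinary least-squares trend line through the yield time series
    xbar = sum(years) / n
    b = sum((x - xbar) * (y - mean_yield) for x, y in zip(years, yields)) / \
        sum((x - xbar) ** 2 for x in years)
    a = mean_yield - b * xbar
    def mae(pred):
        return sum(abs(p - y) for p, y in zip(pred, yields)) / n
    err_fc = mae(forecasts)
    return {"forecast_mae": err_fc,
            "beats_mean": err_fc < mae([mean_yield] * n),
            "beats_trend": err_fc < mae([a + b * x for x in years])}
```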
Automatic anatomy recognition via multiobject oriented active shape models.
Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A
2010-12-01
This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of the oriented active shape model (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm, wherein the first level operates at the pixel level to find optimal oriented boundary segments between successive landmarks, the second at the landmark level to find optimal locations for the landmarks, and the third at the object level to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost over all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number, distribution, and size of objects in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%.
Typically, a recognition accuracy of ≥90% yielded a TPVF ≥95% and an FPVF ≤0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.
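The TPVF and FPVF figures of merit above can be computed from binary voxel masks roughly as follows. This is a sketch under the assumption that FPVF is normalised by a caller-supplied reference volume; the paper's exact normalisation may differ:

```python
def tpvf_fpvf(delineation, truth, reference_volume=None):
    """True/false positive volume fractions for a binary delineation.
    delineation, truth: flat sequences of 0/1 voxel labels."""
    tp = sum(d and t for d, t in zip(delineation, truth))
    fp = sum(d and not t for d, t in zip(delineation, truth))
    truth_vol = sum(truth)
    # FPVF is usually normalised by some reference volume; for this
    # sketch we default to the true object volume (an assumption).
    ref = reference_volume if reference_volume is not None else truth_vol
    return tp / truth_vol, fp / ref
```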
Movement amplitude and tempo change in piano performance
NASA Astrophysics Data System (ADS)
Palmer, Caroline
2004-05-01
Music performance places stringent temporal and cognitive demands on individuals that should yield large speed/accuracy tradeoffs. Skilled piano performance, however, shows consistently high accuracy across a wide variety of rates. Movement amplitude may affect the speed/accuracy tradeoff, so that high accuracy can be obtained even at very fast tempi. The contribution of movement amplitude to changes in rate (tempo) is investigated with motion capture. Cameras recorded pianists, with passive markers on hands and fingers, who performed on an electronic (MIDI) keyboard. Pianists performed short melodies at faster and faster tempi until they made errors (altering the speed/accuracy function). Variability of finger movements in the three motion planes indicated most change in the plane perpendicular to the keyboard across tempi. Surprisingly, peak amplitudes of motion before striking the keys increased as tempo increased. Increased movement amplitudes at faster rates may reduce or compensate for speed/accuracy tradeoffs. [Work supported by Canada Research Chairs program, NIMH R01 45764.]
Hengsbach, Stefan; Lantada, Andrés Díaz
2014-08-01
The possibility of designing and manufacturing biomedical microdevices with multiple length-scale geometries can help to promote special interactions both with their environment and with surrounding biological systems. These interactions aim to enhance biocompatibility and overall performance by using biomimetic approaches. In this paper, we present a design and manufacturing procedure for obtaining multi-scale biomedical microsystems based on the combination of two additive manufacturing processes: a conventional laser writer to manufacture the overall device structure, and a direct-laser writer based on two-photon polymerization to yield the finer details. The process excels in versatility, accuracy and manufacturing speed, and allows for the manufacture of microsystems and implants with overall sizes of up to several millimeters and with details down to sub-micrometric structures. As an application example we have focused on manufacturing a biomedical microsystem to analyze the impact of microtextured surfaces on cell motility. This process yielded a relevant increase in precision and manufacturing speed when compared with more conventional rapid prototyping procedures.
NASA Astrophysics Data System (ADS)
Jafari, Mehdi; Kasaei, Shohreh
2012-01-01
Automatic brain tissue segmentation is a crucial task in the analysis of medical images for diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses the modified intraframe coding of H.264/AVC for feature extraction. The extracted features are then fed to an artificial back propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.
Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...
2016-06-23
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Y.-W.; Wang, M.-Z.; Chao, Y.
2009-03-01
We study the charmless decays B → ΛΛ̄h, where h stands for π⁺, K⁺, K⁰, K*⁺, or K*⁰, using a 605 fb⁻¹ data sample collected at the Υ(4S) resonance with the Belle detector at the KEKB asymmetric-energy e⁺e⁻ collider. We observe B⁰ → ΛΛ̄K⁰ and B⁰ → ΛΛ̄K*⁰ with branching fractions of (4.76 +0.84/−0.68 (stat) ± 0.61 (syst)) × 10⁻⁶ and (2.46 +0.87/−0.72 ± 0.34) × 10⁻⁶, respectively. The significances of these signals in the threshold-enhanced mass region, M(ΛΛ̄) < 2.85 GeV/c², are 12.4σ and 9.3σ, respectively. We also update the branching fraction B(B⁺ → ΛΛ̄K⁺) = (3.38 +0.41/−0.36 ± 0.41) × 10⁻⁶ with better accuracy, and report the following measurement or 90% confidence-level upper limit in the threshold-mass-enhanced region: B(B⁺ → ΛΛ̄K*⁺) = (2.19 +1.13/−0.88 ± 0.33) × 10⁻⁶ with 3.7σ significance; B(B⁺ → ΛΛ̄π⁺) < 0.94 × 10⁻⁶. A related search for B⁰ → ΛΛ̄D⁰ yields a branching fraction B(B⁰ → ΛΛ̄D⁰) = (1.05 +0.57/−0.44 ± 0.14) × 10⁻⁵. This may be compared with the large, ∼10⁻⁴, branching fraction observed for B⁰ → pp̄D⁰. The M(ΛΛ̄) enhancements near threshold and related angular distributions for the observed modes are also reported.
Crop Yield Simulations Using Multiple Regional Climate Models in the Southwestern United States
NASA Astrophysics Data System (ADS)
Stack, D.; Kafatos, M.; Kim, S.; Kim, J.; Walko, R. L.
2013-12-01
Agricultural productivity (described by crop yield) is strongly dependent on climate conditions determined by meteorological parameters (e.g., temperature, rainfall, and solar radiation). California is the largest producer of agricultural products in the United States, but crops in associated arid and semi-arid regions live near their physiological limits (e.g., in hot summer conditions with little precipitation). Thus, accurate climate data are essential in assessing the impact of climate variability on agricultural productivity in the Southwestern United States and other arid regions. To address this issue, we produced simulated climate datasets and used them as input for the crop production model. For climate data, we employed two different regional climate models (WRF and OLAM) using a fine-resolution (8km) grid. Performances of the two different models are evaluated in a fine-resolution regional climate hindcast experiment for 10 years from 2001 to 2010 by comparing them to the North American Regional Reanalysis (NARR) dataset. Based on this comparison, multi-model ensembles with variable weighting are used to alleviate model bias and improve the accuracy of crop model productivity over large geographic regions (county and state). Finally, by using a specific crop-yield simulation model (APSIM) in conjunction with meteorological forcings from the multi-regional climate model ensemble, we demonstrate the degree to which maize yields are sensitive to the regional climate in the Southwestern United States.
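The variable-weight multi-model ensemble step might look like the following sketch, where weights are taken inversely proportional to each model's RMSE against the reanalysis. This weighting rule is an assumption for illustration, not necessarily the study's scheme:

```python
import math

def skill_weights(model_series, reference):
    """Weights inversely proportional to each model's RMSE vs. a reanalysis."""
    inv = []
    for series in model_series:
        rmse = math.sqrt(sum((m - r) ** 2 for m, r in zip(series, reference))
                         / len(reference))
        inv.append(1.0 / rmse)   # assumes no model matches the reanalysis exactly
    total = sum(inv)
    return [w / total for w in inv]

def weighted_ensemble(model_series, weights):
    """Variable-weight ensemble mean at each time step."""
    steps = len(model_series[0])
    return [sum(w * s[t] for w, s in zip(weights, model_series))
            for t in range(steps)]
```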
Assessment of progressively delayed prompts on guided skill learning in rats.
Reid, Alliston K; Futch, Sara E; Ball, Katherine M; Knight, Aubrey G; Tucker, Martha
2017-03-01
We examined the controlling factors that allow a prompted skill to become autonomous in a discrete-trials implementation of Touchette's (1971) progressively delayed prompting procedure, but our subjects were rats rather than children with disabilities. Our prompted skill was a left-right lever-press sequence guided by two panel lights. We manipulated (a) the effectiveness of the guiding lights prompt and (b) the presence or absence of a progressively delayed prompt in four groups of rats. The less effective prompt yielded greater autonomy than the more effective prompt. The ability of the progressively delayed prompt procedure to produce behavioral autonomy depended upon characteristics of the obtained delay (trial duration) rather than on the pending prompt. Sequence accuracy was reliably higher in unprompted trials than in prompted trials, and this difference was maintained in the 2 groups that received no prompts but yielded equivalent trial durations. Overall sequence accuracy decreased systematically as trial duration increased. Shorter trials and their greater accuracy were correlated with higher overall reinforcement rates for faster responding. Waiting for delayed prompts (even if no actual prompt was provided) was associated with lower overall reinforcement rate by decreasing accuracy and by lengthening trials. These findings extend results from previous studies regarding the controlling factors in delayed prompting procedures applied to children with disabilities.
Dual-energy CT for the diagnosis of gout: an accuracy and diagnostic yield study.
Bongartz, Tim; Glazebrook, Katrina N; Kavros, Steven J; Murthy, Naveen S; Merry, Stephen P; Franz, Walter B; Michet, Clement J; Veetil, Barath M Akkara; Davis, John M; Mason, Thomas G; Warrington, Kenneth J; Ytterberg, Steven R; Matteson, Eric L; Crowson, Cynthia S; Leng, Shuai; McCollough, Cynthia H
2015-06-01
To assess the accuracy of dual-energy CT (DECT) for diagnosing gout, and to explore whether it can have any impact on clinical decision making beyond the established diagnostic approach using polarising microscopy of synovial fluid (diagnostic yield). Diagnostic single-centre study of 40 patients with active gout, and 41 individuals with other types of joint disease. Sensitivity and specificity of DECT for diagnosing gout were calculated against a combined reference standard (polarising and electron microscopy of synovial fluid). To explore the diagnostic yield of DECT scanning, a third cohort was assembled consisting of patients with inflammatory arthritis and risk factors for gout who had negative synovial fluid polarising microscopy results. Among these patients, the proportion of subjects with DECT findings indicating a diagnosis of gout was assessed. The sensitivity and specificity of DECT for diagnosing gout were 0.90 (95% CI 0.76 to 0.97) and 0.83 (95% CI 0.68 to 0.93), respectively. All false negative patients were observed among patients with acute, recent-onset gout. All false positive patients had advanced knee osteoarthritis. DECT in the diagnostic yield cohort revealed evidence of uric acid deposition in 14 out of 30 patients (46.7%). DECT provides good diagnostic accuracy for detection of monosodium urate (MSU) deposits in patients with gout. However, sensitivity is lower in patients with recent-onset disease. DECT has a significant impact on clinical decision making when gout is suspected, but polarising microscopy of synovial fluid fails to demonstrate the presence of MSU crystals.
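Confidence intervals like those quoted above can be reproduced approximately with a Wilson score interval. In this sketch the count 36/40 is back-calculated for illustration from the reported sensitivity of 0.90; the function name and choice of interval are assumptions, not the authors' code:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Point estimate and Wilson score 95% interval for a proportion,
    e.g. sensitivity = true positives / diseased subjects."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, centre - half, centre + half
```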
Evaluation of methods and marker systems in genomic selection of oil palm (Elaeis guineensis Jacq.).
Kwong, Qi Bin; Teh, Chee Keng; Ong, Ai Ling; Chew, Fook Tim; Mayes, Sean; Kulaveerasingam, Harikrishna; Tammi, Martti; Yeoh, Suat Hui; Appleton, David Ross; Harikrishna, Jennifer Ann
2017-12-11
Genomic selection (GS) uses genome-wide markers as an attempt to accelerate genetic gain in breeding programs of both animals and plants. This approach is particularly useful for perennial crops such as oil palm, which have long breeding cycles, and for which the optimal method for GS is still under debate. In this study, we evaluated the effect of different marker systems and modeling methods for implementing GS in an introgressed dura family derived from a Deli dura x Nigerian dura (Deli x Nigerian) with 112 individuals. This family is an important breeding source for developing new mother palms for superior oil yield and bunch characters. The traits of interest selected for this study were fruit-to-bunch (F/B), shell-to-fruit (S/F), kernel-to-fruit (K/F), mesocarp-to-fruit (M/F), oil per palm (O/P) and oil-to-dry mesocarp (O/DM). The marker systems evaluated were simple sequence repeats (SSRs) and single nucleotide polymorphisms (SNPs). RR-BLUP, Bayesian A, B, Cπ, LASSO, Ridge Regression and two machine learning methods (SVM and Random Forest) were used to evaluate GS accuracy of the traits. The kinship coefficient between individuals in this family ranged from 0.35 to 0.62. S/F and O/DM had the highest genomic heritability, whereas F/B and O/P had the lowest. The accuracies using 135 SSRs were low, at around 0.20 across traits. The average accuracy of machine learning methods was 0.24, as compared to 0.20 achieved by other methods. The trait with the highest mean accuracy was F/B (0.28), while the lowest were both M/F and O/P (0.18). By using whole genomic SNPs, the accuracies for all traits, especially for O/DM (0.43), S/F (0.39) and M/F (0.30), were improved. The average accuracy of machine learning methods was 0.32, compared to 0.31 achieved by other methods. Due to high genomic resolution, the use of whole-genome SNPs improved the efficiency of GS dramatically for oil palm and is recommended for dura breeding programs.
Machine learning slightly outperformed the other methods, but required parameter optimization for GS implementation.
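RR-BLUP, one of the methods compared above, is equivalent to ridge regression on centred SNP genotypes, with GS accuracy taken as the correlation between predicted and observed values. A minimal sketch (hypothetical function names, arbitrary shrinkage parameter), not the study's implementation:

```python
import numpy as np

def rr_blup_predict(M_train, y_train, M_test, lam=1.0):
    """Ridge regression on SNP genotypes (an RR-BLUP analog, sketch only).
    M_*: genotype matrices coded 0/1/2; lam: shrinkage parameter."""
    M_train = np.asarray(M_train, float)
    y = np.asarray(y_train, float)
    col_means = M_train.mean(axis=0)
    Z = M_train - col_means                       # centre each SNP column
    # marker effects: (Z'Z + lam * I)^(-1) Z'(y - ybar)
    u = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]),
                        Z.T @ (y - y.mean()))
    Z_test = np.asarray(M_test, float) - col_means
    return y.mean() + Z_test @ u

def gs_accuracy(predicted, observed):
    """GS accuracy: correlation of predicted and observed phenotypes."""
    return float(np.corrcoef(predicted, observed)[0, 1])
```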
Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang
2010-02-01
We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and the hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that the TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed than HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation at non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of the deformation field is preserved due to the properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario.
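The sparse-to-dense step rests on standard 2-D thin-plate spline interpolation: a kernel r² log r on the landmarks plus an affine part, fitted by a single linear solve. A generic sketch of that machinery, not the TPS-HAMMER implementation:

```python
import numpy as np

def tps_fit(landmarks, displacements, reg=0.0):
    """Fit a 2-D thin-plate spline mapping landmarks to displacement values.
    landmarks: (n, 2) control points; displacements: (n, d) values at them."""
    P = np.asarray(landmarks, float)
    V = np.asarray(displacements, float)
    n = len(P)
    d2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    K = 0.5 * d2 * np.log(d2 + 1e-300)      # r^2 log r (zero on the diagonal)
    K += reg * np.eye(n)                     # optional smoothing regularizer
    Q = np.hstack([np.ones((n, 1)), P])      # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = Q
    A[n:, :n] = Q.T
    b = np.zeros((n + 3, V.shape[1]))
    b[:n] = V
    coef = np.linalg.solve(A, b)             # kernel weights + affine coeffs
    return P, coef

def tps_eval(model, points):
    """Evaluate the fitted spline at arbitrary (dense) points."""
    P, coef = model
    X = np.asarray(points, float)
    d2 = np.sum((X[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    K = 0.5 * d2 * np.log(d2 + 1e-300)
    Q = np.hstack([np.ones((len(X), 1)), X])
    return K @ coef[:len(P)] + Q @ coef[len(P):]
```

With zero regularization the spline interpolates the landmarks exactly, and a purely affine displacement field is reproduced everywhere, which is the smoothness property the abstract refers to.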
Moawad, Gaby N; Tyan, Paul; Kumar, Dipti; Krapf, Jill; Marfori, Cherie; Abi Khalil, Elias D; Robinson, James
To evaluate the effect of stress on laparoscopic skills among obstetrics and gynecology residents. Observational prospective cohort study. Urban teaching university hospital. Thirty-one obstetrics and gynecology residents, postgraduate years 1 to 4. We assessed 4 basic laparoscopic skills at 2 sessions. The first session was the baseline; 6 months later the same skills were assessed under audiovisual stressors. We compared the effect of stress on accuracy and efficiency between the 2 sessions. A linear model was used to analyze time. Under stress, residents were more efficient in 3 of the 4 modules: ring transfer (hand-eye coordination and bimanual dexterity), p = 0.0304; ring of fire (bimanual dexterity and depth perception), p = 0.0024; and dissection glove (respect for delicate tissue planes), p = 0.0002. Poisson regression was used to analyze the total number of penalties. Residents were more likely to acquire penalties under stress: ring transfer, p = 0.0184, and cobra (hand-to-hand coordination), p = 0.0487, yielded statistically significant increases in penalties in the presence of stressors, while dissection glove, p = 0.0605, yielded a nonsignificant increase. Our work confirmed that residents were more efficient under stress, completing all the tested tasks faster. Efficiency, however, came at the expense of accuracy.
Geodesy by radio interferometry - Water vapor radiometry for estimation of the wet delay
NASA Technical Reports Server (NTRS)
Elgered, G.; Davis, J. L.; Herring, T. A.; Shapiro, I. I.
1991-01-01
An important source of error in VLBI estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. This paper presents and discusses the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. For the most frequently measured baseline in this study, the use of WVR data yielded a 13 percent smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the 'best' minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass.
Xie, Hong-Bo; Huang, Hu; Wu, Jianhua; Liu, Lei
2015-02-01
We present a multiclass fuzzy relevance vector machine (FRVM) learning mechanism and evaluate its performance in classifying multiple hand motions using surface electromyographic (sEMG) signals. The relevance vector machine (RVM) is a sparse Bayesian kernel method which avoids some limitations of the support vector machine (SVM). However, RVM still suffers from possible unclassifiable regions in multiclass problems. We propose two fuzzy membership function-based FRVM algorithms to solve such problems and evaluate them in experiments on seven healthy subjects and two amputees performing six hand motions. Two feature sets, namely AR model coefficients combined with the root mean square value (AR-RMS), and wavelet transform (WT) features, are extracted from the recorded sEMG signals. Fuzzy support vector machine (FSVM) analysis was also conducted for a wide comparison in terms of accuracy, sparsity, training and testing time, as well as the effect of training sample size. FRVM yielded comparable classification accuracy with dramatically fewer support vectors in comparison with FSVM. Furthermore, the processing delay of FRVM was much less than that of FSVM, whilst FSVM trained much faster than FRVM. The results indicate that an FRVM classifier trained using sufficient samples can achieve generalization capability comparable to FSVM with significant sparsity in multi-channel sEMG classification, which makes it more suitable for sEMG-based real-time control applications.
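As an illustration of the AR-RMS feature family described above, the following sketch computes the RMS value and least-squares AR coefficients of one sEMG window. This is an assumption of common practice, not the authors' implementation; the window length and AR order are arbitrary choices.

```python
import numpy as np

def rms(window):
    """Root mean square of one sEMG window."""
    return np.sqrt(np.mean(np.square(window)))

def ar_coeffs(window, order=4):
    """Least-squares AR model coefficients for one window.

    Fits x[n] ~ a1*x[n-1] + ... + ap*x[n-p] by ordinary least squares.
    """
    x = np.asarray(window, dtype=float)
    # design matrix rows: [x[n-1], ..., x[n-order]] predicting x[n]
    X = np.column_stack([x[order - k: len(x) - k] for k in range(1, order + 1)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a
```

A per-channel feature vector would then be the concatenation of the AR coefficients and the RMS value, computed over sliding windows.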
Effect of various putty-wash impression techniques on marginal fit of cast crowns.
Nissan, Joseph; Rosner, Ofir; Bukhari, Mohammed Amin; Ghelfan, Oded; Pilo, Raphael
2013-01-01
Marginal fit is an important clinical factor that affects restoration longevity. The accuracy of three polyvinyl siloxane putty-wash impression techniques was compared by marginal fit assessment using a nondestructive method. A stainless steel master cast containing three abutments with three metal crowns matching the three preparations was used to make 45 impressions: group A = single-step technique (putty and wash impression materials used simultaneously), group B = two-step technique with a 2-mm relief (putty as a preliminary impression to create a 2-mm wash space followed by the wash stage), and group C = two-step technique with a polyethylene spacer (plastic spacer used with the putty impression followed by the wash stage). Accuracy was assessed using a toolmaker microscope to measure and compare the marginal gaps between each crown and finish line on the duplicated stone casts. Each abutment was further measured at the mesial, buccal, and distal aspects. One-way analysis of variance was used for statistical analysis. P values and Scheffe post hoc contrasts were calculated. Significance was set at .05. One-way analysis of variance showed significant differences among the three impression techniques in all three abutments and at all three locations (P < .001). Group B yielded dies with minimal gaps compared to groups A and C. The two-step impression technique with 2-mm relief was the most accurate regarding the crucial clinical factor of marginal fit.
Impact of point-of-care implementation of Xpert® MTB/RIF: product vs. process innovation.
Schumacher, S G; Thangakunam, B; Denkinger, C M; Oliver, A A; Shakti, K B; Qin, Z Z; Michael, J S; Luo, R; Pai, M; Christopher, D J
2015-09-01
Both product innovation (e.g., more sensitive tests) and process innovation (e.g., a point-of-care [POC] testing programme) could improve patient outcomes. To study the respective contributions of product and process innovation in improving patient outcomes. We implemented a POC programme using Xpert(®) MTB/RIF in an out-patient clinic of a tertiary care hospital in India. We measured the impact of process innovation by comparing time to diagnosis with routine testing vs. POC testing. We measured the impact of product innovation by comparing accuracy and time to diagnosis using smear microscopy vs. POC Xpert. We enrolled 1012 patients over a 15-month period. Xpert had high accuracy, but the incremental value of one Xpert over two smears was only 6% (95%CI 3-12). Implementing Xpert as a routine laboratory test did not reduce the time to diagnosis compared to smear-based diagnosis. In contrast, the POC programme reduced the time to diagnosis by 5.5 days (95%CI 4.3-6.7), but required dedicated staff and substantial adaptation of clinic workflow. Process innovation by way of a POC Xpert programme had a greater impact on time to diagnosis than the product per se, and can yield important improvements in patient care that are complementary to those achieved by introducing innovative technologies.
Supervised segmentation of microelectrode recording artifacts using power spectral density.
Bakstein, Eduard; Schneider, Jakub; Sieger, Tomas; Novak, Daniel; Wild, Jiri; Jech, Robert
2015-08-01
Appropriate detection of clean signal segments in extracellular microelectrode recordings (MER) is vital for maintaining high signal-to-noise ratio in MER studies. Existing alternatives to manual signal inspection are based on unsupervised change-point detection. We present a method of supervised MER artifact classification, based on power spectral density (PSD) and evaluate its performance on a database of 95 labelled MER signals. The proposed method yielded test-set accuracy of 90%, which was close to the accuracy of annotation (94%). The unsupervised methods achieved accuracy of about 77% on both training and testing data.
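The PSD-based pipeline described above can be sketched roughly as follows, assuming scipy for the Welch spectral estimate. A nearest-centroid rule stands in for the paper's supervised classifier, which is not specified here; sampling rate, segment length, and the synthetic 50 Hz "artifact" are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def psd_features(segment, fs, nperseg=256):
    """Log power spectral density of one MER segment (Welch estimate)."""
    _, pxx = welch(segment, fs=fs, nperseg=nperseg)
    return np.log10(pxx + 1e-20)

def train_centroids(segments, labels, fs, nperseg=256):
    """Mean log-PSD per class from labelled training segments."""
    feats = np.array([psd_features(s, fs, nperseg) for s in segments])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in set(labels.tolist())}

def classify(segment, centroids, fs, nperseg=256):
    """Assign the class whose mean log-PSD is nearest in Euclidean distance."""
    f = psd_features(segment, fs, nperseg)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

Because artifacts (e.g., mains interference) concentrate power in characteristic frequency bands, even this simple centroid rule separates them from broadband neural background in the log-PSD domain.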
Yield estimation of sugarcane based on agrometeorological-spectral models
NASA Technical Reports Server (NTRS)
Rudorff, Bernardo Friedrich Theodor; Batista, Getulio Teixeira
1990-01-01
The objective of this work is to assess the performance of a yield estimation model for sugarcane (Saccharum officinarum). The model uses orbitally gathered spectral data along with yield estimated from an agrometeorological model. The test site includes the sugarcane plantations of the Barra Grande Plant located in Lencois Paulista municipality in Sao Paulo State. Production data of four crop years were analyzed. Yield data observed in the first crop year (1983/84) were regressed against spectral and agrometeorological data of that same year. This provided the model used to predict the yield for the following crop year, i.e., 1984/85. The models used to predict the yields of subsequent years (up to 1987/88) were developed similarly, incorporating all previous years' data. The yield estimations obtained from these models explained 69, 54, and 50 percent of the yield variation in the 1984/85, 1985/86, and 1986/87 crop years, respectively. The accuracy of yield estimations based on spectral data only (vegetation index model) and on agrometeorological data only (agrometeorological model) was also investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelletier, C; Jung, J; Lee, C
2015-06-15
Purpose: To quantify the dosimetric uncertainty due to organ position errors when using height and weight as phantom selection criteria in the UF/NCI Hybrid Phantom Library for the purpose of out-of-field organ dose reconstruction. Methods: Four diagnostic patient CT images were used to create 7-field IMRT plans. For each patient, dose to the liver, right lung, and left lung was calculated using the XVMC Monte Carlo code. These doses were taken to be the ground truth. For each patient, the phantom with the most closely matching height and weight was selected from the body-size-dependent phantom library. The patient plans were then transferred to the computational phantoms and organ doses were recalculated. Each plan was also run on 4 additional phantoms with reference heights and/or weights. Maximum and mean doses for the three organs were computed, and the DVHs were extracted and compared. One-sample t-tests were performed to compare the accuracy of the height- and weight-matched phantoms against the additional phantoms with regard to both maximum and mean dose. Results: For one of the patients, the height- and weight-matched phantom yielded the most accurate results across all three organs for both maximum and mean doses. For two additional patients, the matched phantom yielded the best match for one organ only. In 13 of the 24 cases, the matched phantom yielded better results than the average of the other four phantoms, though the results were statistically significant at the .05 level for only three cases. Conclusion: Using height- and weight-matched phantoms does yield better out-of-field dosimetry results than using average phantoms. Height and weight appear to be moderately good selection criteria, though they failed to yield better results for one patient.
Accuracy of Predicted Genomic Breeding Values in Purebred and Crossbred Pigs.
Hidalgo, André M; Bastiaansen, John W M; Lopes, Marcos S; Harlizius, Barbara; Groenen, Martien A M; de Koning, Dirk-Jan
2015-05-26
Genomic selection has been widely implemented in dairy cattle breeding when the aim is to improve performance of purebred animals. In pigs, however, the final product is a crossbred animal. This may affect the efficiency of methods that are currently implemented for dairy cattle. Therefore, the objective of this study was to determine the accuracy of predicted breeding values in crossbred pigs using purebred genomic and phenotypic data. A second objective was to compare the predictive ability of SNPs when training is done in either single or multiple populations for four traits: age at first insemination (AFI); total number of piglets born (TNB); litter birth weight (LBW); and litter variation (LVR). We performed marker-based and pedigree-based predictions. Within-population predictions for the four traits ranged from 0.21 to 0.72. Multi-population prediction yielded accuracies ranging from 0.18 to 0.67. Predictions across purebred populations as well as predicting genetic merit of crossbreds from their purebred parental lines for AFI performed poorly (not significantly different from zero). In contrast, accuracies of across-population predictions and accuracies of purebred to crossbred predictions for LBW and LVR ranged from 0.08 to 0.31 and 0.11 to 0.31, respectively. Accuracy for TNB was zero for across-population prediction, whereas for purebred to crossbred prediction it ranged from 0.08 to 0.22. In general, marker-based outperformed pedigree-based prediction across populations and traits. However, in some cases pedigree-based prediction performed similarly or outperformed marker-based prediction. There was predictive ability when purebred populations were used to predict crossbred genetic merit using an additive model in the populations studied. AFI was the only exception, indicating that predictive ability depends largely on the genetic correlation between PB and CB performance, which was 0.31 for AFI. 
Multi-population prediction was no better than within-population prediction for the purebred validation set. Accuracy of prediction was very trait-dependent. Copyright © 2015 Hidalgo et al.
NASA Astrophysics Data System (ADS)
Cook, Kristen
2015-04-01
With the recent explosion in the use and availability of unmanned aerial vehicle platforms and development of easy to use structure from motion (SfM) software, UAV based photogrammetry is increasingly being adopted to produce high resolution topography for the study of surface processes. UAV systems can vary substantially in price and complexity, but the tradeoffs between these and the quality of the resulting data are not well constrained. We look at one end of this spectrum and evaluate the effectiveness of a simple low cost UAV setup for obtaining high resolution topography in a challenging field setting. Our study site is the Daan River gorge in western Taiwan, a rapidly eroding bedrock gorge that we have monitored with terrestrial Lidar since 2009. The site presents challenges for the generation and analysis of high resolution topography, including vertical gorge walls, vegetation, wide variation in surface roughness, and a complicated 3D morphology. In order to evaluate the accuracy of the UAV-derived topography, we compare it with terrestrial Lidar data collected during the same survey period. Our UAV setup combines a DJI Phantom 2 quadcopter with a 16 megapixel Canon Powershot camera for a total platform cost of less than 850. The quadcopter is flown manually, and the camera is programmed to take a photograph every 4 seconds, yielding 200-250 pictures per flight. We measured ground control points and targets for both the Lidar scans and the aerial surveys using a Leica RTK GPS with 1-2 cm accuracy. UAV derived point clouds were obtained using Agisoft Photoscan software. We conducted both Lidar and UAV surveys before and after the 2014 typhoon season, allowing us to evaluate the reliability of the UAV survey to detect geomorphic changes in the range of one to several meters. The accuracy of the SfM point clouds depends strongly on the characteristics of the surface being considered, with vegetation and small scale texture causing inaccuracies. 
However, we find that this simple UAV setup can yield point clouds with 78% of points within 20 cm and 60% within 10 cm of the Lidar point clouds, with the larger errors dominated by vegetation effects. Well-distributed and accurately located ground control points are critical, but we achieve good accuracy even with relatively few ground control points (25) over a 150,000 sq m area. The large number of photographs taken during each flight also allows us to explore the reproducibility of the UAV-derived topography by generating point clouds from different subsets of photographs taken of the same area during a single survey. These results show the same pattern of higher errors due to vegetation, but bedrock surfaces generally have errors of less than 4 cm. These results suggest that even very basic UAV surveys can yield data suitable for measuring geomorphic change on the scale of a channel reach.
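The "fraction of points within X cm of the Lidar cloud" metric quoted above can be computed with a nearest-neighbour query against the reference cloud. A minimal sketch, assuming scipy's k-d tree (this is a generic cloud-to-cloud comparison, not the authors' exact workflow):

```python
import numpy as np
from scipy.spatial import cKDTree

def fraction_within(cloud, reference, tol):
    """Fraction of `cloud` points whose nearest neighbour in `reference`
    lies within distance `tol` (same units as the coordinates)."""
    dists, _ = cKDTree(reference).query(cloud)
    return float(np.mean(dists <= tol))
```

For survey-scale clouds, the same query also yields the per-point error distribution, which is what reveals vegetation-dominated versus bedrock-dominated error regimes.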
Accuracy and Precision in Measurements of Biomass Oxidative Ratio and Carbon Oxidation State
NASA Astrophysics Data System (ADS)
Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.; Robertson, G. P.
2007-12-01
Ecosystem oxidative ratio (OR) is a critical parameter in the apportionment of anthropogenic CO2 between the terrestrial biosphere and ocean carbon reservoirs. OR is the ratio of O2 to CO2 in gas exchange fluxes between the terrestrial biosphere and atmosphere. Ecosystem OR is linearly related to biomass carbon oxidation state (Cox), a fundamental property of the earth system describing the bonding environment of carbon in molecules. Cox can range from -4 to +4 (CH4 to CO2). Variations in both Cox and OR are driven by photosynthesis, respiration, and decomposition. We are developing several techniques to accurately measure variations in ecosystem Cox and OR; these include elemental analysis, bomb calorimetry, and 13C nuclear magnetic resonance spectroscopy. A previous study, comparing the accuracy and precision of elemental analysis versus bomb calorimetry for pure chemicals, showed that elemental analysis-based measurements are more accurate, while calorimetry-based measurements yield more precise data. However, the limited biochemical range of natural samples makes it possible that calorimetry may ultimately prove most accurate, as well as most cost-effective. Here we examine more closely the accuracy of Cox and OR values generated by calorimetry on a large set of natural biomass samples collected from the Kellogg Biological Station-Long Term Ecological Research (KBS-LTER) site in Michigan.
NASA Astrophysics Data System (ADS)
Bray, Cédric; Cuisset, Arnaud; Hindle, Francis; Mouret, Gaël; Bocquet, Robin; Boudon, Vincent
2017-06-01
Several Doppler-limited rotational transitions of methane induced by centrifugal distortion have been measured with unprecedented frequency accuracy using a THz photomixing synthesizer based on a frequency comb. Compared to the previous synchrotron-based FT-far-IR measurements of Boudon et al., the accuracy of the line frequency measurements is improved by one order of magnitude; this yields a corresponding increase of two orders of magnitude in the weighting of these transitions in the global fit. The rotational transitions in the ν_4←ν_4 hot band are measured for the first time, thanks to the broad spectral coverage of the photomixing CW-THz spectrometer providing access up to the R(5) transitions at 2.6 THz. The new global fit including the present lines has been used to update the methane line list of the HITRAN database. Some small but significant variations of the parameter values are observed, accompanied by a reduction of the 1-σ uncertainties on the rotational (B_0) and centrifugal distortion (D_0) constants. V. Boudon, O. Pirali, P. Roy, J.-B. Brubach, L. Manceron, J. Vander Auwera, J. Quant. Spectrosc. Radiat. Transfer, 111, 1117-1129 (2010).
Statistical analysis to assess automated level of suspicion scoring methods in breast ultrasound
NASA Astrophysics Data System (ADS)
Galperin, Michael
2003-05-01
A well-defined rule-based system has been developed for scoring the Level of Suspicion (LOS) from 0 to 5 based on a qualitative lexicon describing the ultrasound appearance of breast lesions. The purpose of this research is to assess and select one of the automated quantitative LOS scoring methods developed during preliminary studies on reducing benign biopsies. The study used a Computer Aided Imaging System (CAIS) to improve the uniformity and accuracy of applying the LOS scheme by automatically detecting, analyzing and comparing breast masses. The overall goal is to reduce biopsies of masses with lower levels of suspicion, rather than to increase the accuracy of diagnosis of cancers (which require biopsy anyway). On complex cyst and fibroadenoma cases, experienced radiologists were up to 50% less certain in true negatives than CAIS. Full correlation analysis was applied to determine which of the proposed LOS quantification methods serves CAIS accuracy best. This paper presents current results of applying statistical analysis to automated LOS scoring quantification for breast masses with known biopsy results. It was found that the First Order Ranking method yielded the most accurate results. The CAIS system (Image Companion, Data Companion software) was developed by Almen Laboratories and was used to achieve these results.
Rydzy, M; Deslauriers, R; Smith, I C; Saunders, J K
1990-08-01
A systematic study was performed to optimize the accuracy of kinetic parameters derived from magnetization transfer measurements. Three techniques were investigated: time-dependent saturation transfer (TDST), saturation recovery (SRS), and inversion recovery (IRS). In the last two methods, one of the resonances undergoing exchange is saturated throughout the experiment. The three techniques were compared with respect to the accuracy of the kinetic parameters derived from experiments performed in a given, fixed amount of time. Stochastic simulation of magnetization transfer experiments was performed to optimize experimental design. General formulas for the relative accuracies of the unidirectional rate constant (k) were derived for each of the three experimental methods. It was calculated that for k values between 0.1 and 1.0 s-1, T1 values between 1 and 10 s, and relaxation delays appropriate for the creatine kinase reaction, the SRS method yields more accurate values of k than does the IRS method. The TDST method is more accurate than the SRS method for reactions where T1 is long and k is large, within the range of k and T1 values examined. Experimental verification of the method was carried out on a solution in which the forward (PCr→ATP) rate constant (kf) of the creatine kinase reaction was measured.
Mapping water table depth using geophysical and environmental variables.
Buchanan, S; Triantafilis, J
2009-01-01
Despite its importance, accurate representation of the spatial distribution of water table depth remains one of the greatest deficiencies in many hydrological investigations. Historically, both inverse distance weighting (IDW) and ordinary kriging (OK) have been used to interpolate depths. These methods, however, have major limitations: they require large numbers of measurements to represent the spatial variability of water table depth, and they do not represent the variation between measurement points. We address this issue by assessing the benefits of using stepwise multiple linear regression (MLR) with three different ancillary data sets to predict the water table depth at 100-m intervals. The ancillary data sets used are electromagnetic (EM34 and EM38); gamma radiometric: potassium (K), uranium (eU), thorium (eTh), and total count (TC); and morphometric data. Results show that MLR offers significant precision and accuracy benefits over OK and IDW. Inclusion of the morphometric data set yielded the greatest (16%) improvement in prediction accuracy compared with IDW, followed by the electromagnetic data set (5%). Use of the gamma radiometric data set showed no improvement. The greatest improvement, however, resulted when all data sets were combined (37% increase in prediction accuracy over IDW). Significantly, the use of MLR also allows prediction of variations in water table depth between measurement points, which is crucial for land management.
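The regression step above amounts to ordinary least squares of observed depths on the co-located ancillary covariates, after which depth can be predicted at every 100-m grid node. A minimal numpy sketch (the stepwise variable-selection part of the authors' MLR is omitted; variable names are illustrative):

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares fit of y on covariates X.
    Returns coefficients [intercept, b1, b2, ...]."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict_mlr(beta, X):
    """Predict the response (e.g., water table depth) at new covariate rows."""
    return np.column_stack([np.ones(len(X)), X]) @ beta
```

In the mapping application, `X` at prediction time would hold the gridded EM, gamma radiometric, and morphometric values, so predictions vary continuously between boreholes rather than only at them.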
Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.
Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben
2018-02-22
This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was performed for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. The accuracy reached 91% for crops such as Azadirachta indica, where the reference chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in chlorophyll content estimation by using an optical arrangement that yielded both reflectance and transmittance information, while the required hardware is cheap.
Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2008-01-01
Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have demonstrated previously, based on measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimation accuracy of the ACC algorithm depends on illumination level, background, and scene content, in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC algorithm-based extended-scene Shack-Hartmann sensors.
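The FFT-based core of such shift estimation can be illustrated with plain phase correlation, which recovers an integer-pixel translation between two image cells; the ACC algorithm then iterates to reach sub-pixel (0.01 pixel) accuracy, which this simplified stand-in does not attempt.

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer-pixel shift of `img` relative to `ref` via phase correlation:
    the peak of the inverse FFT of the normalized cross-power spectrum
    marks the translation (with circular wrap-around)."""
    F = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    # map peaks in the upper half-range to negative shifts
    mask = shift > shape // 2
    shift[mask] -= shape[mask]
    return shift
```

Sub-pixel refinement (as in ACC) would follow by iteratively re-shifting one image via the Fourier shift theorem until the residual correlation peak is centered.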
Measurement and interpretation of skin prick test results.
van der Valk, J P M; Gerth van Wijk, R; Hoorn, E; Groenendijk, L; Groenendijk, I M; de Jong, N W
2015-01-01
There are several methods to read skin prick test results in type-I allergy testing. A commonly used method is to characterize the wheal size by its 'average diameter'. A more accurate method is to scan the area of the wheal to calculate the actual size. In both methods, skin prick test (SPT) results can be corrected for the histamine sensitivity of the skin by dividing the results of the allergic reaction by the histamine control. The objectives of this study are to compare different techniques of quantifying SPT results, to determine a cut-off value for a positive SPT for the histamine equivalent prick-index (HEP) area, and to study the accuracy of predicting cashew nut reactions in double-blind placebo-controlled food challenge (DBPCFC) tests with the different SPT methods. Data of 172 children with cashew nut sensitisation were used for the analysis. All patients underwent a DBPCFC with cashew nut. Per patient, the average diameter and scanned area of the wheal were recorded. In addition, the same data for the histamine-induced wheal were collected for each patient. The accuracy in predicting the outcome of the DBPCFC using four different SPT readings (i.e. average diameter, area, HEP-index diameter, HEP-index area) was compared in a receiver operating characteristic (ROC) plot. Characterizing the wheal size by the average diameter method is inaccurate compared to the scanning method. A wheal average diameter of 3 mm is generally considered a positive SPT cut-off value, and an equivalent HEP-index area cut-off value of 0.4 was calculated. The four SPT methods yielded comparable areas under the curve (AUC) of 0.84, 0.85, 0.83 and 0.83, respectively. The four methods showed comparable accuracy in predicting cashew nut reactions in a DBPCFC. The 'scanned area method' is theoretically more accurate in determining the wheal area than the 'average diameter method' and is recommended in academic research.
A HEP-index area of 0.4 is determined as cut-off value for a positive SPT. However, in clinical practice, the 'average diameter method' is also useful, because this method provides similar accuracy in predicting cashew nut allergic reactions in the DBPCFC. Trial number NTR3572.
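The AUC values compared above can be computed directly from (score, outcome) pairs via the rank-sum identity: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch, independent of the study's statistics package:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity.
    `labels` are 1 (reaction in DBPCFC) or 0 (no reaction); `scores` are
    the SPT readings (e.g., wheal area or HEP-index area)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # each positive/negative pair contributes 1 if ranked correctly,
    # 0.5 on ties, 0 otherwise
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because the identity is distribution-free, it applies equally to diameters, areas, and histamine-normalized indices, which is what makes the four readings directly comparable.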
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
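The first-order polynomial predictor with a forgetting factor can be sketched as a weighted least-squares line fit in which older samples are exponentially down-weighted, then extrapolated over the prediction horizon. This is a generic illustration of the model class, not the authors' exact implementation; the 3-minute sampling interval follows the study design.

```python
import numpy as np

def predict_poly1(history, steps_ahead, ff=0.8):
    """Weighted first-order polynomial extrapolation of a uniformly
    sampled glucose series; sample at lag j gets weight ff**j, so
    ff < 1 emphasizes the most recent readings."""
    y = np.asarray(history, dtype=float)
    n = len(y)
    t = np.arange(n)
    w = np.sqrt(ff ** (n - 1 - t))   # sqrt-weights for weighted lstsq
    A = np.column_stack([np.ones(n), t])
    coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coef[0] + coef[1] * (n - 1 + steps_ahead)
```

With 3-minute samples, a 30-minute horizon corresponds to `steps_ahead=10`; the AR alternative replaces the trend line with a recursively fitted autoregressive model under the same forgetting scheme.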
Accuracy of abdominal auscultation for bowel obstruction.
Breum, Birger Michael; Rud, Bo; Kirkegaard, Thomas; Nordentoft, Tyge
2015-09-14
To investigate the accuracy and inter-observer variation of bowel sound assessment in patients with clinically suspected bowel obstruction. Bowel sounds were recorded in patients with suspected bowel obstruction using a Littmann® Electronic Stethoscope. The recordings were processed to yield 25-s sound sequences in random order on PCs. Observers, recruited from doctors within the department, classified the sound sequences as either normal or pathological. The reference tests for bowel obstruction were intraoperative and endoscopic findings and clinical follow-up. Sensitivity and specificity were calculated for each observer and compared between junior and senior doctors. Inter-observer variation was measured using the Kappa statistic. Bowel sound sequences from 98 patients were assessed by 53 (33 junior and 20 senior) doctors. Laparotomy was performed in 47 patients, 35 of whom had bowel obstruction. Two patients underwent colorectal stenting due to large bowel obstruction. The median sensitivity and specificity were 0.42 (range: 0.19-0.64) and 0.78 (range: 0.35-0.98), respectively. There was no significant difference in accuracy between junior and senior doctors. The median frequency with which doctors classified bowel sounds as abnormal did not differ significantly between patients with and without bowel obstruction (26% vs 23%, P = 0.08). The 53 doctors made up 1378 unique pairs and the median Kappa value was 0.29 (range: -0.15-0.66). Accuracy and inter-observer agreement were generally low. Clinical decisions in patients with possible bowel obstruction should not be based on auscultatory assessment of bowel sounds.
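The pairwise agreement statistic used above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance given each rater's label frequencies. A minimal sketch for one observer pair:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters labelling the same items.
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n)        # chance agreement
             for l in labels)
    if pe == 1.0:                                       # both raters constant
        return 1.0
    return (po - pe) / (1.0 - pe)
```

Computing this for all 1378 observer pairs and taking the median gives the study's summary agreement figure; a median of 0.29 falls in the conventional "fair" range.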
Facilitating text reading in posterior cortical atrophy.
Yong, Keir X X; Rajdev, Kishan; Shakespeare, Timothy J; Leff, Alexander P; Crutch, Sebastian J
2015-07-28
We report (1) the quantitative investigation of text reading in posterior cortical atrophy (PCA), and (2) the effects of 2 novel software-based reading aids that result in dramatic improvements in the reading ability of patients with PCA. Reading performance, eye movements, and fixations were assessed in patients with PCA and typical Alzheimer disease and in healthy controls (experiment 1). Two reading aids (single- and double-word) were evaluated based on the notion that reducing the spatial and oculomotor demands of text reading might support reading in PCA (experiment 2). Mean reading accuracy in patients with PCA was significantly worse (57%) compared with both patients with typical Alzheimer disease (98%) and healthy controls (99%); spatial aspects of passages were the primary determinants of text reading ability in PCA. Both aids led to considerable gains in reading accuracy (PCA mean reading accuracy: single-word reading aid = 96%; individual patient improvement range: 6%-270%) and self-rated measures of reading. Data suggest a greater efficiency of fixations and eye movements under the single-word reading aid in patients with PCA. These findings demonstrate how neurologic characterization of a neurodegenerative syndrome (PCA) and detailed cognitive analysis of an important everyday skill (reading) can combine to yield aids capable of supporting important everyday functional abilities. This study provides Class III evidence that for patients with PCA, 2 software-based reading aids (single-word and double-word) improve reading accuracy. © 2015 American Academy of Neurology.
Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev
2013-01-01
Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key figure of merit in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies of single emitters that can reach an order of magnitude lower than the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in a longer time needed for the construction of the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when implemented on raw data prior to localization, can improve the localization accuracy of standard existing methods, and also enable the localization of overlapping particles, allowing the use of increased fluorophore activation density, and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, and especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in the localization precision compared to single fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a decrease of 42% in the collection time of super-resolution data with the same resolution. PMID:24466491
John R. Brooks; Gary W. Miller
2011-01-01
Data from even-aged hardwood stands in four ecoregions across the mid-Appalachian region were used to test projection accuracy for three available growth and yield software systems: SILVAH, the Forest Vegetation Simulator, and the Stand Damage Model. Average root mean squared error (RMSE) ranged from 20 to 140 percent of actual trees per acre while RMSE ranged from 2...
NASA Astrophysics Data System (ADS)
Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad
2017-01-01
In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. We transform the considered fractional-order problem into an easily solvable system of algebraic equations with the aid of the operational matrices. Solving this algebraic system then yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing the results of our MATLAB simulations with the exact solutions in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.
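The core idea of operational-matrix spectral methods — representing differentiation as a matrix acting on nodal values or coefficients — can be illustrated with a standard Chebyshev differentiation matrix. This is a simplification: the paper uses operational matrices of three-dimensional Jacobi polynomials and fractional derivatives, while the sketch below is the classical one-dimensional, integer-order analogue (Trefethen's construction):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal by row-sum trick
    return D, x

# Differentiation becomes a matrix product: for f(x) = x^2, D @ f gives 2x
# exactly (up to roundoff) at the nodes.
D, x = cheb(8)
err = np.max(np.abs(D @ x**2 - 2 * x))
print(err)   # ~ machine precision
```

Once differentiation is a matrix, a differential equation collapses to an algebraic system in the nodal unknowns — the same reduction the paper performs with its fractional operational matrices.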
An economical method of analyzing transient motion of gas-lubricated rotor-bearing systems.
NASA Technical Reports Server (NTRS)
Falkenhagen, G. L.; Ayers, A. L.; Barsalou, L. C.
1973-01-01
A method of economically evaluating the hydrodynamic forces generated in a gas-lubricated tilting-pad bearing is presented. The numerical method consists of solving the case of the infinite width bearing and then converting this solution to the case of the finite bearing by accounting for end leakage. The approximate method is compared to the finite-difference solution of the Reynolds equation and yields acceptable accuracy while running about one hundred times faster. A mathematical model of a gas-lubricated tilting-pad vertical rotor system is developed. The model is capable of analyzing a two-bearing rotor system in which the rotor center of mass is not at midspan by accounting for gyroscopic moments. The numerical results from the model are compared to actual test data as well as analytical results of other investigators.
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Suttles, J. T.
1977-01-01
One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content in the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
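The estimation procedure described — adjusting model parameters until the dispersion-model output best fits the measured data — can be sketched with a least-squares fit of a crosswind Gaussian concentration profile. This is a simplified illustration on synthetic data, not the paper's model or measurements; the profile C(y) = Q·exp(−y²/2σ²)/(√(2π)σ) and all numbers below are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def plume_crosswind(y, q, sigma):
    """Crosswind Gaussian concentration profile of a plume slice."""
    return q * np.exp(-y**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

rng = np.random.default_rng(0)
y = np.linspace(-200, 200, 41)        # crosswind sensor positions (m)
true_q, true_sigma = 500.0, 60.0      # "unknown" source strength and spread
data = plume_crosswind(y, true_q, true_sigma)
data += rng.normal(0, 0.05 * data.max(), y.size)   # simulated sensor noise

# Least-squares estimation of the unknown parameters from noisy data.
(est_q, est_sigma), cov = curve_fit(plume_crosswind, y, data, p0=[100.0, 30.0])
print(est_q, est_sigma)   # estimates close to 500 and 60
```

Denser or better-placed measurements shrink the parameter covariance `cov` — the estimation-accuracy vs. acquisition-strategy trade-off the abstract refers to.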
A comparative appraisal of hydrological behavior of SRTM DEM at catchment level
NASA Astrophysics Data System (ADS)
Sharma, Arabinda; Tiwari, K. N.
2014-11-01
The Shuttle Radar Topography Mission (SRTM) data have emerged as a global elevation dataset over the past decade because of their free availability, homogeneity and consistent accuracy compared to other global elevation datasets. The present study explores the consistency in hydrological behavior of the SRTM digital elevation model (DEM) with reference to an easily available regional 20 m contour-interpolated DEM (TOPO DEM). Analyses ranging from simple vertical accuracy assessment to hydrological simulation of the studied Maithon catchment, using the empirical USLE model and the semidistributed, physical SWAT model, were carried out. Moreover, terrain analysis involving hydrological indices was performed for comparative assessment of the SRTM DEM with respect to the TOPO DEM. Results reveal that the vertical accuracy of the SRTM DEM (±27.58 m) in the region is less than the specified standard (±16 m). Statistical analysis of hydrological indices such as the topographic wetness index (TWI), stream power index (SPI), slope length factor (SLF) and geometry number (GN) shows significant differences in the hydrological properties of the two studied DEMs. Estimation of soil erosion potentials of the catchment and conservation priorities of microwatersheds of the catchment using the SRTM DEM and TOPO DEM produces considerably different results. Predicted soil erosion potential using the SRTM DEM is far higher than that obtained using the TOPO DEM. Similarly, conservation priorities determined using the two DEMs are found to agree for only 34% of microwatersheds of the catchment. ArcSWAT simulation reveals that runoff predictions are less sensitive to the choice between the two DEMs than sediment yield predictions. The results obtained in the present study are vital to hydrological analysis as they help in understanding the hydrological behavior of a DEM without the influence of model structural and parameter uncertainty.
The study also reemphasizes that the SRTM DEM can be a valuable dataset for hydrological analysis provided any error/uncertainty therein is properly evaluated and characterized.
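The topographic wetness index compared between the two DEMs is computed cell-by-cell as TWI = ln(a / tan β), with a the specific catchment area and β the local slope. A minimal sketch, assuming flow-accumulation and slope grids are already available; the conversion of accumulation counts to catchment area is simplified, and the grid values and cell size are illustrative:

```python
import numpy as np

def topographic_wetness_index(flow_acc, slope_deg, cell_size=30.0):
    """TWI = ln(a / tan(beta)): a is specific catchment area (upslope
    contributing area per unit contour width), beta is the local slope."""
    a = (flow_acc + 1) * cell_size             # simplified area conversion (m)
    tan_beta = np.maximum(np.tan(np.radians(slope_deg)), 1e-6)  # guard flats
    return np.log(a / tan_beta)

# Illustrative 2x2 grid: flow accumulation (cells) and slope (degrees).
acc = np.array([[0.0, 3.0], [10.0, 50.0]])
slope = np.array([[12.0, 8.0], [5.0, 2.0]])
twi = topographic_wetness_index(acc, slope)
print(np.round(twi, 2))   # higher TWI where accumulation is high, slope low
```

Because TWI depends on ln(tan β), the index is quite sensitive to the DEM's slope errors — which is why the two DEMs produce significantly different TWI statistics.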
Evaluation of two portable meters for determination of blood triglyceride concentration in dogs.
Kluger, Elissa K; Dhand, Navneet K; Malik, Richard; Ilkin, William J; Snow, David H; Govendir, Merran
2010-02-01
To evaluate agreement between 2 portable triglyceride meters and a veterinary laboratory for measurement of blood triglyceride concentrations in dogs and evaluate effects of Hct and blood volume analyzed. 97 blood samples collected from 60 dogs. Triglyceride concentrations were measured in blood by use of 2 meters and compared with serum triglyceride concentrations determined by a veterinary laboratory. Within- and between-day precision, accuracy, and effects of blood volume and Hct were analyzed. Accuracy of both meters varied with triglyceride concentration, although both accurately delineated dogs with triglyceride concentrations < 180 mg/dL versus ≥ 180 mg/dL. One meter had results with excellent overall correlation with results of the standard laboratory method, with a concordance correlation coefficient of 0.94 and mean difference of 20.3 mg/dL. The other meter had a good overall concordance correlation coefficient of 0.86 with a higher absolute mean difference of -27.7 mg/dL. Results were affected only by blood volume; triglyceride concentrations determined via both meters were significantly lower when 7 microL of EDTA-anticoagulated blood was used, compared with larger volumes. One meter had greater accuracy in the range of 140 to 400 mg/dL and was therefore well suited to detect hypertriglyceridemia. The other meter was accurate with triglyceride values < 140 mg/dL and yielded results similar to those of the veterinary laboratory in the range of 140 to 400 mg/dL, therefore being suitable for determination of triglyceride concentrations in nonfed dogs and dogs with mildly high concentrations.
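The agreement measure reported here, Lin's concordance correlation coefficient, combines correlation with a penalty for systematic bias between the two methods. A minimal sketch; the meter and laboratory values below are illustrative, not the study's data:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement of y with x,
    penalizing both poor correlation and location/scale shifts."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative meter vs. laboratory triglyceride values (mg/dL).
lab   = [ 90, 120, 160, 210, 300, 420]
meter = [105, 128, 150, 235, 310, 405]
ccc = lins_ccc(lab, meter)
print(round(ccc, 3))   # high agreement, close to 1
```

Unlike Pearson's r, CCC drops below 1 when one method reads systematically high or low — which is why it is paired with the mean differences (20.3 and -27.7 mg/dL) in the abstract.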
Practical approach to subject-specific estimation of knee joint contact force.
Knarr, Brian A; Higginson, Jill S
2015-08-20
Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data; however, this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models' predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
León-Reina, L; García-Maté, M; Álvarez-Pinazo, G; Santacruz, I; Vallcorba, O; De la Torre, A G; Aranda, M A G
2016-06-01
This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphics obtained through the spiking method and also from Kullback-Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power for Mo radiation when compared to Cu radiation, as arising because of the larger volume tested with Mo and also because the higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor, with relative errors near to 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%.
NASA Astrophysics Data System (ADS)
Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd
2018-01-01
The growing use of optimization for geographic object-based image analysis and the possibility to derive a wide range of information about the image in textual form makes machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through data mining algorithms (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The result shows that the optimized process produces better land-use/land-cover classification, with an overall classification accuracy of 91.79%, compared with 87.25% and 88.69% for SVM and RF, respectively, while the results of the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.
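The derived indices fused with the imagery are simple band-ratio computations on reflectance values. A minimal sketch of NDVI, NDWI (the McFeeters green/NIR form is assumed), and SAVI, using illustrative per-pixel reflectances rather than actual SPOT-6 bands:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters form)."""
    return (green - nir) / (green + nir)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index with soil-brightness factor L."""
    return (nir - red) * (1 + L) / (nir + red + L)

# Illustrative reflectance values for three pixels:
# dense vegetation, sparse vegetation, water.
nir   = np.array([0.45, 0.30, 0.10])
red   = np.array([0.08, 0.12, 0.09])
green = np.array([0.10, 0.11, 0.15])
v = ndvi(nir, red)
print(v)   # vegetated pixels give high NDVI, water low
```

Each index becomes one more attribute attached to every image object, which is what lets the decision tree split on spectral behavior as well as object geometry.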
NASA Astrophysics Data System (ADS)
Cawood, A.; Bond, C. E.; Howell, J.; Totake, Y.
2016-12-01
Virtual outcrops derived from techniques such as LiDAR and SfM (digital photogrammetry) provide a viable and potentially powerful addition or alternative to traditional field studies, given the large amounts of raw data that can be acquired rapidly and safely. The use of these digital representations of outcrops as a source of geological data has increased greatly in the past decade, and as such, the accuracy and precision of these new acquisition methods applied to geological problems has been addressed by a number of authors. Little work has been done, however, on the integration of virtual outcrops into fundamental structural geology workflows and on systematically studying the fidelity of the data derived from them. Here, we use the classic Stackpole Quay syncline outcrop in South Wales to quantitatively evaluate the accuracy of three virtual outcrop models (LiDAR, aerial and terrestrial digital photogrammetry) compared to data collected directly in the field. Using these structural data, we have built 2D and 3D geological models which make predictions of fold geometries. We examine the fidelity of virtual outcrops generated using different acquisition techniques to outcrop geology and how these affect model building and final outcomes. Finally, we utilize newly acquired data to deterministically test model validity. Based upon these results, we find that acquisition of digital imagery by UAS (unmanned aircraft system) yields highly accurate virtual outcrops when compared to terrestrial methods, allowing the construction of robust data-driven predictive models. Careful planning, survey design and choice of suitable acquisition method are, however, of key importance for best results.
Henry, Michael E.; Lauriat, Tara L.; Shanahan, Meghan; Renshaw, Perry F.; Jensen, J. Eric
2015-01-01
Proton magnetic resonance spectroscopy has the potential to provide valuable information about alterations in gamma-aminobutyric acid (GABA), glutamate (Glu), and glutamine (Gln) in psychiatric and neurological disorders. In order to use this technique effectively, it is important to establish the accuracy and reproducibility of the methodology. In this study, phantoms with known metabolite concentrations were used to compare the accuracy of 2D J-resolved MRS, single-echo 30 ms PRESS, and GABA-edited MEGA-PRESS for measuring all three aforementioned neurochemicals simultaneously. The phantoms included metabolite concentrations above and below the physiological range and scans were performed at baseline, 1 week, and 1 month time-points. For GABA measurement, MEGA-PRESS proved optimal with a measured-to-target correlation of R2 = 0.999, with J-resolved providing R2 = 0.973 for GABA. All three methods proved effective in measuring Glu with R2 = 0.987 (30 ms PRESS), R2 = 0.996 (J-resolved) and R2 = 0.910 (MEGA-PRESS). J-resolved and MEGA-PRESS yielded good results for Gln measures with respective R2 = 0.855 (J-resolved) and R2 = 0.815 (MEGA-PRESS). The 30 ms PRESS method proved ineffective in measuring GABA and Gln. When measurement stability at in vivo concentration was assessed as a function of varying spectral quality, J-resolved proved the most stable and immune to signal-to-noise and linewidth fluctuation compared to MEGA-PRESS and 30 ms PRESS. PMID:21130670
Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E
2009-07-01
The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 rather than 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models, with respective mean absolute errors compared to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f) and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated them; this comparison suggested that adjusting the waste divisor in the Scholl Canyon model to a value between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f), the modified model had the lowest mean absolute error with divisors of 1.5 (63 ± 45%) and 2.3 (57 ± 47%), respectively. These modified models reduced error and variability substantially, and both have a strong correlation of r = 0.92.
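LandGEM and the Scholl Canyon model both rest on first-order decay of degradable carbon in landfilled waste, summed over acceptance years: Q = Σ k·L0·Mi·exp(−k·ti). A minimal sketch of that general form; the parameter values and waste masses below are illustrative, and the real models add refinements (such as LandGEM's sub-annual waste subdivision) omitted here:

```python
import math

def methane_generation(waste_by_year, k=0.05, L0=100.0, year=2020):
    """First-order-decay methane generation (Scholl Canyon / LandGEM style).

    waste_by_year: dict mapping acceptance year -> waste mass (Mg)
    k:  decay rate constant (1/yr)
    L0: methane generation potential (m^3 CH4 per Mg waste)
    Returns estimated methane generation in `year` (m^3/yr).
    """
    q = 0.0
    for t_accept, mass in waste_by_year.items():
        age = year - t_accept
        if age >= 0:
            # Each year's waste contributes k*L0*M, decayed by its age.
            q += k * L0 * mass * math.exp(-k * age)
    return q

waste = {2010: 50_000, 2015: 80_000, 2019: 60_000}   # illustrative masses
q = methane_generation(waste)
print(round(q))   # roughly 7.5e5 m^3 CH4/yr for these inputs
```

The "divisor" discussed in the abstract simply scales the mass term, so tuning it between 1 and 10 rescales Q without changing the decay shape.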
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly-parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
Tajima, Taku; Akahane, Masaaki; Takao, Hidemasa; Akai, Hiroyuki; Kiryu, Shigeru; Imamura, Hiroshi; Watanabe, Yasushi; Kokudo, Norihiro; Ohtomo, Kuni
2012-10-01
We compared diagnostic ability for detecting hepatic metastases between gadolinium ethoxy benzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced magnetic resonance imaging (MRI) and diffusion-weighted imaging (DWI) on a 1.5-T system, and determined whether DWI is necessary in Gd-EOB-DTPA-enhanced MRI for diagnosing colorectal liver metastases. We assessed 29 consecutive prospectively enrolled patients with suspected metachronous colorectal liver metastases; all patients underwent surgery and had preoperative Gd-EOB-DTPA-enhanced MRI. Overall detection rate, sensitivity for detecting metastases and benign lesions, positive predictive value, and diagnostic accuracy (Az value) were compared among three image sets [unenhanced MRI (DWI set), Gd-EOB-DTPA-enhanced MRI excluding DWI (EOB set), and combined set]. Gd-EOB-DTPA-enhanced MRI yielded better overall detection rate (77.8-79.0 %) and sensitivity (87.1-89.4 %) for detecting metastases than the DWI set (55.9 % and 64.7 %, respectively) for one observer (P < 0.001). No statistically significant difference was seen between the EOB and combined sets, although several metastases were newly detected on additional DWI. Gd-EOB-DTPA-enhanced MRI yielded a better overall detection rate and higher sensitivity for detecting metastases compared with unenhanced MRI. Additional DWI may be able to reduce oversight of lesions in Gd-EOB-DTPA-enhanced 1.5-T MRI for detecting colorectal liver metastases.
A Priori Estimation of Organic Reaction Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emami, Fateme S.; Vahid, Amir; Wylie, Elizabeth K.
2015-07-21
A thermodynamically guided calculation of free energies of substrate and product molecules allows for the estimation of the yields of organic reactions. The non-ideality of the system and the solvent effects are taken into account through the activity coefficients calculated at the molecular level by perturbed-chain statistical associating fluid theory (PC-SAFT). The model is iteratively trained using a diverse set of reactions with yields that have been reported previously. This trained model can then estimate a priori the yields of reactions not included in the training set with an accuracy of ca. ±15 %. This ability has the potential to translate into significant economic savings through the selection and then execution of only those reactions that can proceed in good yields.
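The thermodynamic link between free-energy differences and attainable yield can be sketched with the equilibrium constant K = exp(−ΔG/RT). This is a much-simplified ideal-solution illustration of the principle for a single A ⇌ B equilibrium; the paper's model additionally corrects for non-ideality and solvent effects via PC-SAFT activity coefficients:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_yield(delta_g, temp=298.15):
    """Equilibrium product fraction for A <=> B from the reaction
    free energy delta_g (J/mol): K = exp(-dG/RT), yield = K / (1 + K)."""
    k_eq = math.exp(-delta_g / (R * temp))
    return k_eq / (1 + k_eq)

# A reaction 10 kJ/mol downhill is nearly quantitative at 25 degrees C.
y = equilibrium_yield(-10_000)
print(round(y, 3))   # → 0.983
```

Because yield depends exponentially on ΔG/RT, even the ca. ±15 % accuracy quoted above is enough to separate reactions worth running from those that stall at low conversion.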
Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq; state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We, further, observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
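Mix2 fits a mixture of probability distributions by Expectation Maximization. As a generic illustration of EM on a mixture model — a two-component 1D Gaussian mixture, not Mix2's actual fragment-bias distributions — the following sketch alternates posterior (E) and parameter (M) updates:

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """EM for a two-component 1D Gaussian mixture; returns (weights, means, stds)."""
    x = np.asarray(x, float)
    # Crude initialization from the data's extremes and overall spread.
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma**2)) \
               / (np.sqrt(2 * np.pi) * sigma)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(3, 1.0, 600)])
pi, mu, sigma = em_gmm_1d(x)
print(np.round(mu, 2))   # component means near -2 and 3
```

In Mix2 the analogous updates yield transcript abundances and bias estimates simultaneously, rather than the means and weights of this toy example.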
Energy measurement using flow computers and chromatography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beeson, J.
1995-12-01
Arkla Pipeline Group (APG), along with most transmission companies, went to electronic flow measurement (EFM) to: (1) increase resolution and accuracy; (2) correct flow variables in real time; (3) increase speed of data retrieval; (4) reduce capital expenditures; and (5) reduce operation and maintenance expenditures. Prior to EFM, mechanical seven-day charts were used, which yielded 800 pressure and differential pressure readings. EFM yields 1.2 million readings, a 1500-fold improvement in resolution, and additional flow representation. The total system accuracy of the EFM system is 0.25%, compared with 2% for the chart system, which gives APG improved accuracy. A typical APG electronic measurement system includes a microprocessor-based flow computer, a telemetry communications package, and a gas chromatograph. Live relative density (specific gravity), BTU, CO2, and N2 are updated from the chromatograph to the flow computer every six minutes, which provides accurate MMBTU computations. Because gas contract lengths have changed from years to months, and from a majority of direct sales to transports, both Arkla and its customers wanted access to actual volumes on a much more timely basis than is allowed with charts. The new electronic system allows volumes and other system data to be retrieved continuously if the EFM is on Supervisory Control and Data Acquisition (SCADA), or daily if on dial-up telephone. Previously, because of chart integration, information was not available for four to six weeks. EFM costs much less than the combined costs of the telemetry transmitters, pressure and differential pressure chart recorders, and temperature chart recorder which it replaces. APG will install this equipment on smaller volume stations at a customer's expense. APG requires backup measurement on metering facilities of this size; it could be another APG flow computer or chart recorder, or the other company's flow computer or chart recorder.
Pothula, Venu M.; Yuan, Stanley C.; Maerz, David A.; Montes, Lucresia; Oleszkiewicz, Stephen M.; Yusupov, Albert; Perline, Richard
2015-01-01
Background Advanced predictive analytical techniques are being increasingly applied to clinical risk assessment. This study compared a neural network model to several other models in predicting the length of stay (LOS) in the cardiac surgical intensive care unit (ICU) based on pre-incision patient characteristics. Methods Thirty-six variables collected from 185 cardiac surgical patients were analyzed for contribution to ICU LOS. The Automatic Linear Modeling (ALM) module of IBM-SPSS software identified 8 factors with statistically significant associations with ICU LOS; these factors were also analyzed with the Artificial Neural Network (ANN) module of the same software. The weighted contributions of each factor (“trained” data) were then applied to data for a “new” patient to predict ICU LOS for that individual. Results Factors identified in the ALM model were: use of an intra-aortic balloon pump; O2 delivery index; age; use of positive cardiac inotropic agents; hematocrit; serum creatinine ≥ 1.3 mg/deciliter; gender; and arterial pCO2. The r2 value for ALM prediction of ICU LOS in the initial (training) model was 0.356, p < 0.0001. Cross validation in prediction of a “new” patient yielded r2 = 0.200, p < 0.0001. The same 8 factors analyzed with ANN yielded a training prediction r2 of 0.535 (p < 0.0001) and a cross-validation prediction r2 of 0.410, p < 0.0001. Two additional predictive algorithms were studied, but they had lower prediction accuracies. Our validated neural network model identified the upper quartile of ICU LOS with an odds ratio of 9.8 (p < 0.0001). Conclusions ANN demonstrated a 2-fold greater accuracy than ALM in prediction of observed ICU LOS. This greater accuracy would be presumed to result from the capacity of ANN to capture nonlinear effects and higher-order interactions. Predictive modeling may be of value in early anticipation of risks of post-operative morbidity and utilization of ICU facilities. PMID:26710254
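The linear-vs-neural-network comparison above can be sketched in spirit with scikit-learn on synthetic data (NOT the study's clinical data; the 8 predictors, interaction structure, and noise level here are illustrative assumptions). When the outcome contains an interaction term, the neural network's cross-validated r2 exceeds the linear model's, mirroring the reported gap:

```python
# Illustrative ALM-vs-ANN style comparison on synthetic data with 8 predictors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                                  # 8 predictors, as in the study
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=300)  # interaction + linear + noise

lin_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
ann = MLPRegressor(hidden_layer_sizes=(32, 16), solver="lbfgs",
                   max_iter=2000, random_state=0)
ann_r2 = cross_val_score(ann, X, y, cv=5, scoring="r2").mean()
print(f"linear r2={lin_r2:.2f}, ANN r2={ann_r2:.2f}")
```

The linear model captures only the additive term, so its r2 stays low while the network recovers the interaction, which is the mechanism the abstract's conclusion invokes.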
[Accuracy of a pulse oximeter during hypoxia].
Tachibana, C; Fukada, T; Hasegawa, R; Satoh, K; Furuya, Y; Ohe, Y
1996-04-01
The accuracy of the pulse oximeter was examined in hypoxic patients. We studied 11 cyanotic congenital heart disease patients during surgery, and compared the arterial oxygen saturation determined by simultaneous blood gas analysis (CIBA-CORNING 288 BLOOD GAS SYSTEM, SaO2) and by the pulse oximeter (DATEX SATELITE, with finger probe, SpO2). Ninety sets of data on SpO2 and SaO2 were obtained. The bias (SpO2 - SaO2) was 1.7 +/- 6.9 (mean +/- SD) %. In cyanotic congenital heart disease patients, SpO2 values were significantly higher than SaO2. Although the reason is unknown, in constantly hypoxic patients, SpO2 values are possibly overestimated. In particular, pulse oximetry at low levels of saturation (SaO2 below 80%) was not as accurate as at higher saturation levels (SaO2 over 80%). There was a positive correlation between SpO2 and SaO2 (linear regression analysis yields the equation y = 0.68x + 26.0, r = 0.93). In conclusion, the pulse oximeter is useful for monitoring oxygen saturation in constantly hypoxic patients, but the values thus obtained should be compared with directly measured values when hypoxemia is severe.
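The bias and regression statistics reported above are straightforward to compute from paired readings; a minimal sketch on made-up SpO2/SaO2 pairs (the study's 90 data sets are not reproduced, and these numbers merely imitate the pattern of overestimation at low saturation):

```python
# Bias (SpO2 - SaO2) and linear regression on illustrative paired readings.
import numpy as np

sa = np.array([62.0, 70.0, 75.0, 80.0, 85.0, 90.0, 95.0])  # blood-gas SaO2 (%)
sp = np.array([68.0, 74.0, 77.0, 81.0, 84.0, 88.0, 93.0])  # pulse-oximeter SpO2 (%)

bias = sp - sa
print(f"bias = {bias.mean():.1f} +/- {bias.std(ddof=1):.1f} %")

slope, intercept = np.polyfit(sa, sp, 1)   # least-squares fit SpO2 vs SaO2
r = np.corrcoef(sa, sp)[0, 1]
print(f"SpO2 = {slope:.2f} * SaO2 + {intercept:.1f}, r = {r:.2f}")
```

A slope below 1 with a positive intercept, as in the study's y = 0.68x + 26.0, is exactly the signature of overestimation at low saturation.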
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of a FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, the FPC proved to be about 50 times faster than cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
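The paper's FPC algorithm is not reproduced here, but the underlying idea of estimating a spectral shift from phase correlation can be sketched with a generic FFT-based phase-correlation estimator on a synthetic Gaussian "Bragg peak" (the peak width, grid, and shift below are arbitrary assumptions):

```python
# Generic FFT phase correlation: the normalized cross-power spectrum of two
# shifted signals is a pure phase ramp, whose inverse FFT peaks at the shift.
import numpy as np

def phase_correlation_shift(ref, sig):
    R, S = np.fft.fft(ref), np.fft.fft(sig)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12        # keep phase, discard magnitude
    corr = np.fft.ifft(cross).real
    n = int(np.argmax(corr))
    return n - len(ref) if n > len(ref) // 2 else n   # map to signed shift

x = np.arange(1024)

def peak(center):
    # Synthetic Gaussian reflection spectrum on a wavelength-sample grid.
    return np.exp(-0.5 * ((x - center) / 20.0) ** 2)

ref, shifted = peak(500), peak(507)        # spectrum shifted by +7 samples
print(phase_correlation_shift(ref, shifted))  # -> 7
```

In a real interrogator the integer-sample estimate would be refined to sub-sample precision, which is where the resolution advantage over simple maximum detection comes from.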
Reproducible segmentation of white matter hyperintensities using a new statistical definition.
Damangir, Soheil; Westman, Eric; Simmons, Andrew; Vrenken, Hugo; Wahlund, Lars-Olof; Spulber, Gabriela
2017-06-01
We present a method based on a proposed statistical definition of white matter hyperintensities (WMH), which can work with any combination of conventional magnetic resonance (MR) sequences without depending on manually delineated samples. T1-weighted, T2-weighted, FLAIR, and PD sequences acquired at 1.5 Tesla from 119 subjects from the Kings Health Partners-Dementia Case Register (healthy controls, mild cognitive impairment, Alzheimer's disease) were used. The segmentation was performed using a proposed definition for WMH based on the one-tailed Kolmogorov-Smirnov test. The presented method was verified, given all possible combinations of input sequences, against manual segmentations and a high similarity (Dice 0.85-0.91) was observed. Comparing segmentations with different input sequences to one another also yielded a high similarity (Dice 0.83-0.94) that exceeded intra-rater similarity (Dice 0.75-0.91). We compared the results with those of other available methods and showed that the segmentation based on the proposed definition has better accuracy and reproducibility in the test dataset used. Overall, the presented definition is shown to produce accurate results with higher reproducibility than manual delineation. This approach can be an alternative to other manual or automatic methods not only because of its accuracy, but also due to its good reproducibility.
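The Dice similarity quoted throughout the abstract is a simple overlap ratio between binary segmentations; a minimal sketch on toy masks (the KS-based WMH definition itself is not reproduced here):

```python
# Dice coefficient between two binary segmentation masks:
# Dice = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg1 = np.zeros((10, 10), bool); seg1[2:7, 2:7] = True   # 25-voxel square
seg2 = np.zeros((10, 10), bool); seg2[3:8, 3:8] = True   # same size, shifted
print(round(dice(seg1, seg2), 2))  # overlap 4x4 = 16 -> 2*16/50 = 0.64
```

Values in the 0.85-0.91 range reported above therefore indicate that the automatic masks agree with manual ones more closely than the two toy masks here agree with each other.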
Comparative study of quantitative phase imaging techniques for refractometry of optical fibers
NASA Astrophysics Data System (ADS)
de Dorlodot, Bertrand; Bélanger, Erik; Bérubé, Jean-Philippe; Vallée, Réal; Marquet, Pierre
2018-02-01
The refractive index difference profile of optical fibers is the key design parameter because it determines, among other properties, the insertion losses and propagating modes. Therefore, an accurate refractive index profiling method is of paramount importance to their development and optimization. Quantitative phase imaging (QPI) is one of the available tools to retrieve structural characteristics of optical fibers, including the refractive index difference profile. Having the advantage of being non-destructive, several different QPI methods have been developed over the last decades. Here, we present a comparative study of three different available QPI techniques, namely the transport-of-intensity equation, quadriwave lateral shearing interferometry and digital holographic microscopy. To assess the accuracy and precision of those QPI techniques, quantitative phase images of the core of a well-characterized optical fiber have been retrieved for each of them, and a robust image processing procedure has been applied in order to retrieve their refractive index difference profiles. As a result, even though the raw images from all three QPI methods suffered from different shortcomings, our robust automated image-processing pipeline successfully corrected them. After this treatment, all three QPI techniques yielded accurate, reliable and mutually consistent refractive index difference profiles in agreement with the accuracy and precision of the refracted near-field benchmark measurement.
Cao, Xueren; Luo, Yong; Zhou, Yilin; Fan, Jieru; Xu, Xiangming; West, Jonathan S.; Duan, Xiayu; Cheng, Dengfa
2015-01-01
To determine the influence of plant density and powdery mildew infection of winter wheat and to predict grain yield, hyperspectral canopy reflectance of winter wheat was measured for two plant densities at Feekes growth stage (GS) 10.5.3, 10.5.4, and 11.1 in the 2009–2010 and 2010–2011 seasons. Reflectance in near infrared (NIR) regions was significantly correlated with disease index at GS 10.5.3, 10.5.4, and 11.1 at two plant densities in both seasons. For the two plant densities, the area of the red edge peak (Σdr 680–760 nm), difference vegetation index (DVI), and triangular vegetation index (TVI) were significantly negatively correlated with disease index at three GSs in two seasons. Compared with the other parameters, Σdr 680–760 nm was the most sensitive parameter for detecting powdery mildew. Linear regression models relating mildew severity to Σdr 680–760 nm were constructed at three GSs in two seasons for the two plant densities, demonstrating no significant difference in the slope estimates between the two plant densities at three GSs. Σdr 680–760 nm was correlated with grain yield at three GSs in two seasons. The accuracies of partial least squares regression (PLSR) models were consistently higher than those of models based on Σdr 680–760 nm for disease index and grain yield. PLSR can, therefore, provide more accurate estimation of disease index of wheat powdery mildew and grain yield using canopy reflectance. PMID:25815468
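The red-edge area Σdr 680–760 nm is the sum (integral) of the first-derivative reflectance over that wavelength band; a hedged sketch on a synthetic sigmoid-shaped canopy spectrum (the reflectance curve, sampling grid, and edge position below are illustrative, not the study's data):

```python
# Area of the red edge peak: integrate dR/dlambda over 680-760 nm.
import numpy as np

wl = np.arange(400.0, 1001.0, 5.0)                     # wavelength grid, nm
refl = 0.05 + 0.45 / (1 + np.exp(-(wl - 720) / 15))    # synthetic red-edge sigmoid

d_refl = np.gradient(refl, wl)                         # first derivative dR/dlambda
mask = (wl >= 680) & (wl <= 760)
red_edge_area = d_refl[mask].sum() * 5.0               # sum * spacing ~ R(760) - R(680)
print(round(red_edge_area, 3))
```

Because the integral of the derivative roughly equals the reflectance rise across the band, a mildew-flattened red edge yields a smaller Σdr, which is why the index correlates negatively with disease index.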
Development and evaluation of a vision based poultry debone line monitoring system
NASA Astrophysics Data System (ADS)
Usher, Colin T.; Daley, W. D. R.
2013-05-01
Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R2 of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and has up to a 90% correlation with yield measurements performed manually. This same system is also able to determine the probability of bone chips remaining in the output product. The system is able to determine the presence/absence of clavicle bones with an accuracy of approximately 95% and fan bones with an accuracy of approximately 80%. This paper describes in detail the approach and design of the system, results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
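The intensity-to-thickness-to-volume chain described above can be sketched with a Beer-Lambert-style attenuation model (the absorption coefficient, pixel scale, and exact calibration form are assumptions for illustration; GTRI's actual calibration is not published in this abstract):

```python
# Sketch: calibrate transmitted-light intensity to meat thickness, then
# integrate thickness over pixel area to get remaining-meat volume.
import numpy as np

rng = np.random.default_rng(1)
I0, mu = 255.0, 0.8                                  # source intensity; per-mm absorption (assumed)
thickness_true = rng.uniform(0, 5, size=(50, 50))    # mm, synthetic "frame" image

# Forward model: transmitted intensity falls off exponentially with thickness.
intensity = I0 * np.exp(-mu * thickness_true)

# Invert the calibration per pixel, then sum thickness * pixel area for volume.
thickness_est = -np.log(intensity / I0) / mu         # mm
pixel_area_mm2 = 0.25                                # assumed optics scale
volume_mm3 = thickness_est.sum() * pixel_area_mm2
print(round(volume_mm3, 1))
```

With a noiseless forward model the inversion is exact; in practice the calibration is fit empirically, which is where the reported R2 of 0.94 comes in.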
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.
2016-01-01
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
Comprehensive study of numerical anisotropy and dispersion in 3-D TLM meshes
NASA Astrophysics Data System (ADS)
Berini, Pierre; Wu, Ke
1995-05-01
This paper presents a comprehensive analysis of the numerical anisotropy and dispersion of 3-D TLM meshes constructed using several generalized symmetrical condensed TLM nodes. The dispersion analysis is performed in isotropic lossless, isotropic lossy and anisotropic lossless media and yields a comparison of the simulation accuracy for the different TLM nodes. The effect of mesh grading on the numerical dispersion is also determined. The results compare meshes constructed with Johns' symmetrical condensed node (SCN), two hybrid symmetrical condensed nodes (HSCN) and two frequency domain symmetrical condensed nodes (FDSCN). It has been found that under certain circumstances, the time domain nodes may introduce numerical anisotropy when modelling isotropic media.
NASA Astrophysics Data System (ADS)
Wutsqa, D. U.; Marwah, M.
2017-06-01
In this paper, we apply a spatial median filter to reduce the noise in cervical images produced by a colposcopy tool. The backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires image feature extraction using the gray level co-occurrence matrix (GLCM) method; the resulting features are used as inputs to the BPNN model. The benefit of noise reduction is evaluated by comparing the performance of BPNN models with and without the spatial median filter. The experimental results show that the spatial median filter can improve the accuracy of the BPNN model for cervical cancer classification.
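The preprocessing and feature-extraction steps above (median filtering, then a GLCM texture feature) can be sketched as follows; the image, gray-level count, offset, and the contrast feature are illustrative choices, and the BPNN classifier itself is omitted:

```python
# Median-filter denoising followed by a simple GLCM texture feature.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(64, 64))        # toy image quantized to 8 gray levels
img_filtered = median_filter(img, size=3)      # 3x3 spatial median filter

def glcm(image, levels=8):
    """Normalized gray-level co-occurrence matrix for the (0, 1) horizontal offset."""
    m = np.zeros((levels, levels))
    for a, b in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

p = glcm(img_filtered)
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(8) for j in range(8))
print(round(contrast, 3))
```

Features such as contrast, energy, and homogeneity computed from `p` would form the BPNN input vector.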
A well-scaling natural orbital theory
Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto
2016-11-01
Here, we introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix.
Zhou, Wengang; Dickerson, Julie A
2012-01-01
Knowledge of protein subcellular locations can help decipher a protein's biological function. This work proposes new features: one sequence-based, Hybrid Amino Acid Pair (HAAP), and two structure-based, Secondary Structural Element Composition (SSEC) and solvent accessibility state frequency. A multi-class Support Vector Machine is developed to predict the locations. Testing on two established data sets yields better prediction accuracies than the best available systems. Comparisons with existing methods show comparable results to ESLPred2. When StruLocPred is applied to the entire Arabidopsis proteome, over 77% of proteins with known locations match the prediction results. An implementation of this system is at http://wgzhou.ece.iastate.edu/StruLocPred/.
He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui
2015-08-13
In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with adaptive iterative methods to reduce linearization errors. The scalability advantage is retained while the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land-car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates than SEIF and preserves the scalability advantage over EKF.
Statistical Deviations From the Theoretical Only-SBU Model to Estimate MCU Rates in SRAMs
NASA Astrophysics Data System (ADS)
Franco, Francisco J.; Clemente, Juan Antonio; Baylac, Maud; Rey, Solenne; Villa, Francesca; Mecha, Hortensia; Agapito, Juan A.; Puchner, Helmut; Hubert, Guillaume; Velazco, Raoul
2017-08-01
This paper addresses a well-known problem that occurs when memories are exposed to radiation: determining whether a bit flip is isolated or belongs to a multiple event. As it is unusual to know the physical layout of the memory, this paper proposes to evaluate the statistical properties of the sets of corrupted addresses and to compare the results with a mathematical prediction model in which all of the events are single bit upsets. A set of rules that is easy to implement in common programming languages can be applied iteratively if anomalies are observed, yielding a classification of errors much closer to reality (more than 80% accuracy in our experiments).
Piezoelectric Polymers Actuators for Precise Shape Control of Large Scale Space Antennas
NASA Technical Reports Server (NTRS)
Chen, Qin; Natale, Don; Neese, Bret; Ren, Kailiang; Lin, Minren; Zhang, Q. M.; Pattom, Matthew; Wang, K. W.; Fang, Houfei; Im, Eastwood
2007-01-01
Extremely large, lightweight, in-space deployable active and passive microwave antennas are demanded by future space missions. This paper investigates the development of PVDF based piezopolymer actuators for controlling the surface accuracy of a membrane reflector. Uniaxially stretched PVDF films were poled using an electrodeless method which yielded high quality poled piezofilms required for this application. To further improve the piezoperformance of piezopolymers, several PVDF based copolymers were examined. It was found that one of them exhibits nearly three times improvement in the in-plane piezoresponse compared with PVDF and P(VDF-TrFE) piezopolymers. Preliminary experimental results indicate that these flexible actuators are very promising in controlling precisely the shape of the space reflectors.
Sakuraba, Shun; Asai, Kiyoshi; Kameda, Tomoshi
2015-11-05
The dimerization free energies of RNA-RNA duplexes are fundamental values that represent the structural stability of RNA complexes. We report a comparative analysis of RNA-RNA duplex dimerization free-energy changes upon mutations, estimated from a molecular dynamics simulation and experiments. A linear regression for nine pairs of double-stranded RNA sequences, six base pairs each, yielded a mean absolute deviation of 0.55 kcal/mol and an R(2) value of 0.97, indicating quantitative agreement between simulations and experimental data. The observed accuracy indicates that the molecular dynamics simulation with the current molecular force field is capable of estimating the thermodynamic properties of RNA molecules.
A well-scaling natural orbital theory
Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto
2016-01-01
We introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix. PMID:27803328
Evaluation of constraint stabilization procedures for multibody dynamical systems
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1987-01-01
Comparative numerical studies of four constraint treatment techniques for the simulation of general multibody dynamic systems are presented, and results are presented for the example of a classical crank mechanism and for a simplified version of the seven-link manipulator deployment problem. The staggered stabilization technique (Park, 1986) is found to yield improved accuracy and robustness over Baumgarte's (1972) technique, the singular decomposition technique (Walton and Steeves, 1969), and the penalty technique (Lotstedt, 1979). Furthermore, the staggered stabilization technique offers software modularity, and the only data each solution module needs to exchange with the other is a set of vectors plus a common module to generate the gradient matrix of the constraints, B.
TOPEX/POSEIDON operational orbit determination results using global positioning satellites
NASA Technical Reports Server (NTRS)
Guinn, J.; Jee, J.; Wolff, P.; Lagattuta, F.; Drain, T.; Sierra, V.
1994-01-01
Results of operational orbit determination, performed as part of the TOPEX/POSEIDON (T/P) Global Positioning System (GPS) demonstration experiment, are presented in this article. Elements of this experiment include the GPS satellite constellation, the GPS demonstration receiver on board T/P, six ground GPS receivers, the GPS Data Handling Facility, and the GPS Data Processing Facility (GDPF). Carrier phase and P-code pseudorange measurements from up to 24 GPS satellites to the seven GPS receivers are processed simultaneously with the GDPF software MIRAGE to produce orbit solutions of T/P and the GPS satellites. Daily solutions yield subdecimeter radial accuracies compared to other GPS, LASER, and DORIS precision orbit solutions.
Berg, Wendie A.; Blume, Jeffrey D.; Cormack, Jean B.; Mendelson, Ellen B.; Lehrer, Daniel; Böhm-Vélez, Marcela; Pisano, Etta D.; Jong, Roberta A.; Evans, W. Phil; Morton, Marilyn J.; Mahoney, Mary C.; Larsen, Linda Hovanessian; Barr, Richard G.; Farria, Dione M.; Marques, Helga S.; Boparai, Karan
2008-01-01
Context Screening ultrasound (US) may depict small, node-negative breast cancers not seen on mammography (M). Objective To compare the diagnostic yield (proportion of women with a positive screen test and positive reference standard) and performance of screening with US+M compared to M alone in women at elevated risk of breast cancer. Design, Setting, and Participants From April 2004 to February 2006, 2809 women at elevated risk for breast cancer, with at least heterogeneously dense breast tissue in at least one quadrant, were recruited from 21 IRB-approved sites to undergo mammography (M) and physician-performed ultrasound (US) exams in randomized order by a radiologist masked to the results of the other exam. Reference standard was defined as a combination of pathology and 12-month follow-up, and was available for 2637 of the 2725 eligible participants. Main Outcome Measure Diagnostic yield, sensitivity, specificity, and AUC of combined M+US compared to M alone; PPV of biopsy recommendations for M+US compared to M alone. Results Forty participants (41 breasts) were diagnosed with cancer: 8 suspicious on both US and M, 12 on US alone, 12 on M alone, and 8 participants (9 breasts) on neither (interval cancers). The diagnostic yield for M was 7.6 per 1000 women screened (20/2637) and increased to 11.8 per 1000 (31/2637) for combined US+M; the supplemental yield was 4.2 per 1000 women screened (95% CI 1.1 to 7.2 per 1000; p = 0.003 that the supplemental yield is zero). The diagnostic accuracy (AUC) for M was 0.78 (95% CI 0.67 to 0.87) and increased to 0.91 (95% CI 0.84 to 0.96) for US+M (p = 0.003 that the difference is zero). Of 12 supplemental cancers seen only by US, 11 (92%) were invasive with median size 10 mm (range 5 to 40 mm; mean 12.6, SE 3.0), and 8 of the 9 (89%) with reported nodal status had negative nodes.
PPV of biopsy recommendation after full diagnostic workup (PPV2) was 84/276 for M (22.6%, 95% CI 14.2 to 33%), 21/235 for US (8.9%, 95% CI 5.6 to 13.3%), and 31/276 for combined US+M (11.2%, 95% CI 7.8 to 15.6%). Conclusions Adding a single screening US to M will yield an additional 1.1 to 7.2 cancers per 1000 high-risk women, but will also substantially increase the number of false positives. Evaluation of the role of annual screening US is ongoing in this patient population. [Clinicaltrials.gov registry # NCT00072501] PMID:18477782
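The diagnostic-yield figures in the abstract follow directly from the reported counts; recomputing them as a check:

```python
# Diagnostic yield per 1000 women screened, from the abstract's counts.
n = 2637                        # participants with reference standard
yield_m = 20 / n * 1000         # mammography alone: 20 cancers detected
yield_us_m = 31 / n * 1000      # combined US + mammography: 31 cancers detected
supplemental = yield_us_m - yield_m

print(round(yield_m, 1), round(yield_us_m, 1), round(supplemental, 1))
# -> 7.6 11.8 4.2, matching the reported values
```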
NASA Astrophysics Data System (ADS)
Snavely, Rachel A.
Focusing on the semi-arid and highly disturbed landscape of San Clemente Island, California, this research tests the effectiveness of incorporating a hierarchical object-based image analysis (OBIA) approach with high-spatial-resolution imagery and light detection and ranging (LiDAR) derived canopy height surfaces for mapping vegetation communities. The study is part of a large-scale research effort conducted by researchers at San Diego State University's (SDSU) Center for Earth Systems Analysis Research (CESAR) and Soil Ecology and Restoration Group (SERG) to develop an updated vegetation community map, which will support both conservation and management decisions on Naval Auxiliary Landing Field (NALF) San Clemente Island. Trimble's eCognition Developer software was used to develop and generate vegetation community maps for two study sites, with and without vegetation height data as input. Overall and class-specific accuracies were calculated and compared across the two classifications. The highest overall accuracy (approximately 80%) was observed with the classification integrating airborne visible and near infrared imagery of very high spatial resolution with a LiDAR-derived canopy height model. Accuracies for individual vegetation classes differed between the two classification methods, but were highest when incorporating the LiDAR digital surface data. The addition of a canopy height model, however, yielded little difference in classification accuracies for areas of very dense shrub cover. Overall, the results show the utility of the OBIA approach for mapping vegetation with high spatial resolution imagery, and emphasize the advantage of both multi-scale analysis and digital surface data for accurately characterizing highly disturbed landscapes. The integrated imagery and digital canopy height model approach presented both advantages and limitations, which have to be considered prior to its operational use in mapping vegetation communities.
Use of partial least squares regression to impute SNP genotypes in Italian cattle breeds.
Dimauro, Corrado; Cellesi, Massimo; Gaspa, Giustino; Ajmone-Marsan, Paolo; Steri, Roberto; Marras, Gabriele; Macciotta, Nicolò P P
2013-06-05
The objective of the present study was to test the ability of the partial least squares regression technique to impute genotypes from low-density single nucleotide polymorphism (SNP) panels, i.e. 3K or 7K, to a high-density panel with 50K SNPs. No pedigree information was used. Data consisted of 2093 Holstein, 749 Brown Swiss and 479 Simmental bulls genotyped with the Illumina 50K Beadchip. First, a single-breed approach was applied by using only data from Holstein animals. Then, to enlarge the training population, data from the three breeds were combined and a multi-breed analysis was performed. Accuracies of genotypes imputed using the partial least squares regression method were compared with those obtained by using the Beagle software. The impact of genotype imputation on breeding value prediction was evaluated for milk yield, fat content and protein content. In the single-breed approach, the accuracy of imputation using partial least squares regression was around 90% and 94% for the 3K and 7K platforms, respectively; corresponding accuracies obtained with Beagle were around 85% and 90%. Moreover, the computing time required by the partial least squares regression method was on average around 10 times lower than that required by Beagle. Using the partial least squares regression method in the multi-breed analysis resulted in lower imputation accuracies than using single-breed data. The impact of the SNP-genotype imputation on the accuracy of direct genomic breeding values was small. The correlation between estimates of genetic merit obtained by using imputed versus actual genotypes was around 0.96 for the 7K chip. Results of the present work suggested that the partial least squares regression imputation method could be useful to impute SNP genotypes when pedigree information is not available.
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
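The resampling-simulation approach described above, thinning a census to simulated point samples at increasing intervals and comparing estimates against the census value, can be sketched on a synthetic trail (the trail length, impact rate, and proportion-based estimator here are illustrative assumptions):

```python
# Resample a synthetic 70 km census line at increasing point-sampling
# intervals and measure the relative error of the occurrence estimate.
import numpy as np

rng = np.random.default_rng(7)
trail_m = 70_000                              # 70 km census line at 1 m resolution
census = rng.random(trail_m) < 0.02           # impact present at ~2% of points
true_freq = census.mean()                     # census-based occurrence proportion

for interval in (20, 100, 500):               # sampling interval in metres
    est = census[::interval].mean()           # systematic point-sample estimate
    rel_err = abs(est - true_freq) / true_freq
    print(interval, round(rel_err, 2))
```

As in the study's findings, error grows with the sampling interval, and the growth rate depends on how the impact is distributed along the trail rather than on trail length alone.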
NASA Astrophysics Data System (ADS)
Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai
2014-06-01
Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper first investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and by the fact that tongue movements produce fewer movement artifacts than swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-class distances to within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subject-wise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrate the feasibility of detecting MI-SW from MI-Ton models.
Caspi, Caitlin Eicher; Friebur, Robin
2016-03-17
A major concern in food environment research is the lack of accuracy in commercial business listings of food stores, which are convenient and commonly used. Accuracy concerns may be particularly pronounced in rural areas. Ground-truthing, or on-site verification, has been deemed the necessary standard to validate business listings, but researchers perceive this process to be costly and time-consuming. This study calculated the accuracy and cost of ground-truthing three town/rural areas in Minnesota, USA (an area of 564 miles, or 908 km), and simulated a modified validation process to increase efficiency without compromising accuracy. For traditional ground-truthing, all streets in the study area were driven, while the route and geographic coordinates of food stores were recorded. The process required 1510 miles (2430 km) of driving and 114 staff hours. The ground-truthed list of stores was compared with commercial business listings, which had an average positive predictive value (PPV) of 0.57 and sensitivity of 0.62 across the three sites. Using observations from the field, a modified process was proposed in which only the streets located within central commercial clusters (the 1/8 mile or 200 m buffer around any cluster of 2 stores) would be validated. Modified ground-truthing would have yielded an estimated PPV of 1.00 and sensitivity of 0.95, and would have reduced mileage costs by approximately 88%. We conclude that ground-truthing is necessary in town/rural settings. The modified ground-truthing process, with excellent accuracy at a fraction of the cost, suggests a new standard and warrants further evaluation.
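The PPV and sensitivity figures compare the commercial listing against the ground-truthed store list: PPV is the share of listed stores that actually exist, and sensitivity is the share of actual stores that appear in the listing. A minimal sketch treating stores as set elements (the identifiers are hypothetical):

```python
def ppv_and_sensitivity(listed, ground_truth):
    """Compare a commercial business listing against ground-truthed stores.

    listed, ground_truth: sets of store identifiers.
    PPV = true positives / all listed stores.
    Sensitivity = true positives / all ground-truthed stores.
    """
    tp = len(listed & ground_truth)
    ppv = tp / len(listed) if listed else 0.0
    sens = tp / len(ground_truth) if ground_truth else 0.0
    return ppv, sens

# Hypothetical example: 8 listed stores, 6 of which exist among 10 real stores
listed = {f"store{i}" for i in range(8)}
real = {f"store{i}" for i in range(2, 12)}
ppv, sens = ppv_and_sensitivity(listed, real)
print(round(ppv, 2), round(sens, 2))  # 0.75 0.6
```

The study's averages (PPV 0.57, sensitivity 0.62) would be these quantities computed per site and then averaged across the three sites.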
Conducting Retrospective Ontological Clinical Trials in ICD-9-CM in the Age of ICD-10-CM.
Venepalli, Neeta K; Shergill, Ardaman; Dorestani, Parvaneh; Boyd, Andrew D
2014-01-01
To quantify the impact of the International Classification of Disease 10th Revision Clinical Modification (ICD-10-CM) transition in cancer clinical trials by comparing coding accuracy and data discontinuity in backward ICD-10-CM to ICD-9-CM mapping via two tools, and to develop a standard ICD-9-CM and ICD-10-CM bridging methodology for retrospective analyses. While the transition to ICD-10-CM has been delayed until October 2015, its impact on cancer-related studies utilizing ICD-9-CM diagnoses has been inadequately explored. Three high-impact journals with broad national and international readerships were reviewed for cancer-related studies utilizing ICD-9-CM diagnosis codes in study design, methods, or results. Forward ICD-9-CM to ICD-10-CM mapping was performed using a translational methodology with the Motif web portal ICD-9-CM conversion tool. Backward mapping from ICD-10-CM to ICD-9-CM was performed using both Centers for Medicare and Medicaid Services (CMS) general equivalence mappings (GEMs) files and the Motif web portal tool. Generated ICD-9-CM codes were compared with the original ICD-9-CM codes to assess data accuracy and discontinuity. While both methods yielded additional ICD-9-CM codes, the CMS GEMs method provided incomplete coverage with 16 of the original ICD-9-CM codes missing, whereas the Motif web portal method provided complete coverage. Of these 16 codes, 12 ICD-9-CM codes were present in 2010 Illinois Medicaid data, and accounted for 0.52% of patient encounters and 0.35% of total Medicaid reimbursements. Extraneous ICD-9-CM codes from both methods (CMS GEMs, n = 161; Motif web portal, n = 246) in excess of original ICD-9-CM codes accounted for 2.1% and 2.3% of total patient encounters and 3.4% and 4.1% of total Medicaid reimbursements in the 2010 Illinois Medicaid database.
Longitudinal data analyses post-ICD-10-CM transition will require backward ICD-10-CM to ICD-9-CM coding, and data comparison for accuracy. Researchers must be aware that all methods for backward coding are not comparable in yielding original ICD-9-CM codes. The mandated delay is an opportunity for organizations to better understand areas of financial risk with regards to data management via backward coding. Our methodology is relevant for all healthcare-related coding data, and can be replicated by organizations as a strategy to mitigate financial risk.
Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.
2014-01-01
Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. 
The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281
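A prediction model with both additive and dominance SNP effects, as described above, amounts to summing per-SNP additive effects weighted by allele count plus dominance effects applied at heterozygous loci. A hedged sketch, assuming 0/1/2 genotype coding and a heterozygote indicator for the dominance covariate (the abstract does not spell out the parameterization):

```python
def genomic_value(genotypes, add_effects, dom_effects):
    """Predicted genetic merit from SNP genotypes coded 0/1/2.

    Additive covariate: allele count (0, 1, or 2).
    Dominance covariate: 1 for heterozygotes (genotype 1), else 0.
    """
    total = 0.0
    for g, a, d in zip(genotypes, add_effects, dom_effects):
        total += g * a + (d if g == 1 else 0.0)
    return total

# Hypothetical 3-SNP animal: the effect sizes are made-up numbers, not study estimates
print(genomic_value([0, 1, 2], [1.0, 1.0, 1.0], [0.5, 0.5, 0.5]))  # 3.5
```

The cross-validation comparison in the study then asks whether adding the dominance term improves the correlation between such predictions and phenotypes.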
Togashi, K; Hagiya, K; Osawa, T; Nakanishi, T; Yamazaki, T; Nagamine, Y; Lin, C Y; Matsumoto, S; Aihara, M; Hayasaka, K
2012-08-01
We first sought to clarify the effects of discount rate, survival rate, and lactation persistency as a component trait of the selection index on net merit, defined as the first five lactation milk yields and herd life (HL) weighted by 1 and 0.389 (currently used in Japan), respectively, in units of genetic standard deviation. Survival rate increased the relative economic importance of later lactation traits and the first five lactation milk yields during the first 120 months from the start of the breeding scheme. In contrast, reliabilities of the estimated breeding value (EBV) of later lactation traits are lower than those of earlier lactation traits. We then sought to clarify the effects of applying single nucleotide polymorphism (SNP) information on net merit, to improve the reliability of the EBV of later lactation traits and so capture their increased economic importance due to the increase in survival rate. Net merit, selection accuracy, and HL increased by adding lactation persistency to a selection index whose component traits were only milk yields. Lactation persistency of the second and (especially) third parities contributed to increasing HL while maintaining the first five lactation milk yields compared with the selection index whose only component traits were milk yields. A selection index comprising the first three lactation milk yields and persistency accounted for 99.4% of the net merit derived from a selection index whose components were identical to those for net merit. We consider that the selection index comprising the first three lactation milk yields and persistency is a practical method for increasing lifetime milk yield in the absence of data regarding HL. 
Applying SNP to the second- and third-lactation traits and HL increased net merit and HL by maximizing the increased economic importance of later lactation traits, reducing the effect of first-lactation milk yield on HL (genetic correlation (rG) = -0.006), and by augmenting the effects of the second- and third-lactation milk yields on HL (rG = 0.118 and 0.257, respectively).
USDA-ARS?s Scientific Manuscript database
Use of lamb body or chilled carcass weights; live-animal ultrasound or direct carcass measurements of backfat thickness (BF; mm) and LM area (LMA; cm2); and carcass body wall thickness (BWall; mm) to predict carcass yield and value was evaluated using 512 crossbred lambs produced over 3 yr by mating...
Space Station racks weight and CG measurement using the rack insertion end-effector
NASA Technical Reports Server (NTRS)
Brewer, William V.
1994-01-01
The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
NASA Astrophysics Data System (ADS)
Müller-Putz, Gernot R.; Scherer, Reinhold; Brauneis, Christian; Pfurtscheller, Gert
2005-12-01
Brain-computer interfaces (BCIs) can be realized on the basis of steady-state evoked potentials (SSEPs). These types of brain signals resulting from repetitive stimulation have the same fundamental frequency as the stimulation but also include higher harmonics. This study investigated how the classification accuracy of a 4-class BCI system can be improved by incorporating visually evoked harmonic oscillations. The current study revealed that the use of three SSVEP harmonics yielded a significantly higher classification accuracy than was the case for one or two harmonics. During feedback experiments, the five subjects investigated reached a classification accuracy between 42.5% and 94.4%.
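Harmonic SSVEP features of the kind used here amount to measuring signal power at the stimulation frequency and its integer multiples. A sketch of such a feature extractor using a single-frequency DFT per harmonic; the sampling rate and window length in the example are illustrative, not those of the study:

```python
from math import cos, sin, pi

def harmonic_powers(signal, fs, f0, n_harmonics):
    """Normalized power of `signal` at f0 and its harmonics (one value each),
    computed with a single-frequency discrete Fourier transform per harmonic."""
    n = len(signal)
    powers = []
    for h in range(1, n_harmonics + 1):
        f = h * f0
        re = sum(x * cos(2 * pi * f * t / fs) for t, x in enumerate(signal))
        im = sum(x * sin(2 * pi * f * t / fs) for t, x in enumerate(signal))
        powers.append((re * re + im * im) / (n * n))
    return powers

# A pure 8 Hz sinusoid sampled at 256 Hz: power concentrates in the first harmonic
sig = [sin(2 * pi * 8 * t / 256) for t in range(256)]
print([round(p, 3) for p in harmonic_powers(sig, 256, 8, 3)])  # [0.25, 0.0, 0.0]
```

Feeding the per-harmonic powers for each of the four flicker frequencies into a classifier is one plausible way to realize the one-, two-, and three-harmonic feature sets compared in the study.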
Vaidyanathan, Sriram; Chattopadhyay, Arpita; Mackie, Sarah L; Scarsbrook, Andrew
2018-06-21
Large-vessel vasculitis (LVV) is a serious illness with potentially life-threatening consequences. 18F-FDG PET-CT has emerged as a valuable diagnostic tool in suspected LVV, combining the strengths of functional and structural imaging. This study aimed to compare the accuracy of FDG PET-CT and contrast-enhanced CT (CECT) in the evaluation of patients with LVV. A retrospective database review for LVV patients undergoing CECT and PET-CT between 2011 and 2016 yielded demographics, scan interval and vasculitis type. Qualitative and quantitative PET-CT analyses included aorta:liver FDG uptake, bespoke FDG uptake distribution scores and vascular maximum standardized uptake values (SUVmax). Quantitative CECT assessment included wall thickness and mural/lumen ratio. ROC curves were constructed to evaluate comparative diagnostic accuracy, and a correlational analysis was conducted between SUVmax and wall thickness. 36 adults (17 LVV, 19 controls) with a mean age (range) of 63 (38-89) years, of whom 17 (47%) were male, were included. The mean (standard deviation (SD)) interval between CT and PET was 1.9 (1.2) months. Both SUVmax and wall thickness demonstrated a significant difference between LVV and controls, with a mean difference (95% confidence interval (CI)) of 1.6 (1.1, 2.0) for SUVmax and 1.25 (0.68, 1.83) mm for wall thickness, respectively. These two parameters were significantly correlated (p < .0001, R = 0.62). The area under the curve (AUC) (95% CI) for SUVmax was 0.95 (0.88-1.00), and for mural thickening was 0.83 (0.66-0.99). FDG PET-CT demonstrated excellent accuracy, whilst CECT mural thickening showed good accuracy, in the diagnosis of LVV. Both parameters showed a highly significant correlation. In hospitals without access to FDG PET-CT, or in patients unsuitable for PET-CT (e.g., uncontrolled diabetes), CECT offers a viable alternative for the assessment of LVV. Advances in knowledge: FDG PET-CT is a highly accurate test for the diagnosis of LVV. 
Aorta:liver SUVmax ratio is the most specific parameter for LVV. In hospitals without PET-CT or in unsuitable patients e.g. diabetics, CECT is a viable alternative.
Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston
2009-01-01
Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
1990-01-01
The accuracy of high-alpha slender-body theory (HASBT) for bodies with elliptical cross-sections is presently demonstrated by means of a comparison with exact solutions for incompressible potential flow over a wide range of ellipsoid geometries and angles of attack and sideslip. The addition of the appropriate trigonometric coefficients to the classical slender-body theory decomposition yields the formally correct HASBT, and results in accuracies previously considered unattainable.
Proceedings of Technical Sessions, Volumes 1 and 2: the LACIE Symposium
NASA Technical Reports Server (NTRS)
1979-01-01
The technical design of the Large Area Crop Inventory Experiment is examined and data acquired over 3 global crop years is analyzed with respect to (1) sampling and aggregation; (2) growth size estimation; (3) classification and mensuration; (4) yield estimation; and (5) accuracy assessment. Seventy-nine papers delivered at conference sessions cover system implementation and operation; data processing systems; experiment results and accuracy; supporting research and technology; and the USDA application test system.
NASA Astrophysics Data System (ADS)
Qur’ania, A.; Sarinah, I.
2018-03-01
People are often wrong in identifying the type of jasmine by looking only at its white colour: not all white flowers are jasmine, and not all jasmine flowers are white. Some jasmines are yellow, and some are white and purple. The aim of this research was to identify jasmine flowers (Jasminum sp.) from the shape of the flower image using Sobel edge detection and k-Nearest Neighbor (k-NN) classification. Edge detection is used to detect the type of flower from the flower shape; it aims to sharpen the appearance of the borders in a digital image. The k-Nearest Neighbor method is then used to assign a test object to the class whose training examples are its nearest neighbours. The data used in this study are three types of jasmine, namely white jasmine (Jasminum sambac), jasmine gambir (Jasminum pubescens), and Japanese jasmine (Pseuderanthemum reticulatum). Testing on jasmine flower images resized to 50 × 50, 100 × 100, and 150 × 150 pixels yielded an accuracy of 84%. Tests on the neighbourhood size of the k-NN method with k = 5, 10 and 15 resulted in different accuracy rates: the 5 and 10 nearest neighbours yielded the same accuracy of 84%, while the 15 nearest neighbours yielded a lower accuracy of 65.2%.
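The pipeline described (Sobel edge detection for shape, then k-NN classification) can be sketched in pure Python. The 3x3 kernels are the standard Sobel operators; the tiny feature vectors in the usage example are hypothetical stand-ins for real edge-shape features:

```python
from collections import Counter
from math import sqrt

# Standard 3x3 Sobel kernels for horizontal and vertical gradients
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale image (list of rows); borders left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = sqrt(gx * gx + gy * gy)
    return out

def knn_classify(train, query, k):
    """train: list of (feature_vector, label) pairs; returns the majority label
    among the k training vectors nearest to `query` (squared Euclidean distance)."""
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], query)))
    return Counter(label for _, label in nearest[:k]).most_common(1)[0][0]

# Hypothetical 2D feature vectors standing in for real flower-shape features
train = [([0, 0], "sambac"), ([1, 1], "sambac"), ([9, 9], "japan")]
print(knn_classify(train, [0.5, 0.5], 2))  # sambac
```

In the study, the features fed to k-NN would be derived from the Sobel edge map of each resized flower image rather than from hand-picked points as here.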
Estimation of Center of Mass Trajectory using Wearable Sensors during Golf Swing
Najafi, Bijan; Lee-Eng, Jacqueline; Wrobel, James S.; Goebel, Ruben
2015-01-01
This study proposes a wearable sensor technology to estimate center of mass (CoM) trajectory during a golf swing. Groups of 3, 4, and 18 participants were recruited, respectively, for the purpose of three validation studies. Study 1 examined the accuracy of the system in estimating a 3D body segment angle compared to a camera-based motion analyzer (Vicon®). Study 2 assessed the accuracy of three simplified CoM trajectory models. Finally, Study 3 assessed the accuracy of the proposed CoM model during multiple golf swings. A relatively high agreement was observed between wearable sensors and the reference (Vicon®) for angle measurement (r > 0.99; random error <1.2° (1.5%) for anterior-posterior, <0.9° (2%) for medial-lateral, and <3.6° (2.5%) for internal-external direction). The two-link model yielded better agreement with the reference system than the one-link model (r > 0.93 vs. r = 0.52, respectively). Likewise, the proposed two-link model estimated CoM trajectory during the golf swing with relatively good accuracy (r > 0.9; random error <1 cm (7.7%) for A-P and <2 cm (10.4%) for M-L). The proposed system appears to accurately quantify the kinematics of CoM trajectory as a surrogate of dynamic postural control during an athlete's movement, and its portability makes it feasible for the competitive environment without restricting surface type. Key points: This study demonstrates that wearable technology based on inertial sensors is accurate for estimating center of mass trajectory in a complex athletic task (e.g., golf swing). It suggests that a two-link model of the human body provides an optimal tradeoff between accuracy and the number of sensor modules for estimating center of mass trajectory, particularly during fast movements. Wearable technologies based on inertial sensors are a viable option for assessing dynamic postural control in complex tasks outside of the gait laboratory and the constraints of cameras, surface, and base of support. PMID:25983585
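The two-link idea can be illustrated with a planar sketch: treat the body as two stacked rigid segments and combine their mass-weighted segment CoMs. The segment lengths, masses, and midpoint-CoM assumption below are illustrative simplifications, not parameters from the study:

```python
from math import cos, sin

def com_two_link(theta_lower, theta_upper, l1, l2, m1, m2):
    """Planar two-link CoM from segment angles (radians, measured from vertical).

    l1, l2: segment lengths (m); m1, m2: segment masses (kg).
    Each segment's own CoM is assumed at its midpoint (an illustrative simplification).
    Returns the (horizontal, vertical) position of the combined CoM.
    """
    # CoM of the lower segment, rooted at the origin
    x1, y1 = (l1 / 2) * sin(theta_lower), (l1 / 2) * cos(theta_lower)
    # CoM of the upper segment, stacked on top of the lower segment
    x2 = l1 * sin(theta_lower) + (l2 / 2) * sin(theta_upper)
    y2 = l1 * cos(theta_lower) + (l2 / 2) * cos(theta_upper)
    m = m1 + m2
    return ((m1 * x1 + m2 * x2) / m, (m1 * y1 + m2 * y2) / m)

# Perfectly upright posture: the CoM sits on the vertical axis
print(com_two_link(0.0, 0.0, 1.0, 1.0, 1.0, 1.0))  # (0.0, 1.0)
```

In the wearable system, the segment angles would come from the inertial sensors, and the model would be applied in 3D rather than in this single plane.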
Pettersson-Yeo, William; Benetti, Stefania; Marquand, Andre F.; Joules, Richard; Catani, Marco; Williams, Steve C. R.; Allen, Paul; McGuire, Philip; Mechelli, Andrea
2014-01-01
In the pursuit of clinical utility, neuroimaging researchers of psychiatric and neurological illness are increasingly using analyses, such as support vector machine, that allow inference at the single-subject level. Recent studies employing single-modality data, however, suggest that classification accuracies must be improved for such utility to be realized. One possible solution is to integrate different data types to provide a single combined output classification; either by generating a single decision function based on an integrated kernel matrix, or, by creating an ensemble of multiple single modality classifiers and integrating their predictions. Here, we describe four integrative approaches: (1) an un-weighted sum of kernels, (2) multi-kernel learning, (3) prediction averaging, and (4) majority voting, and compare their ability to enhance classification accuracy relative to the best single-modality classification accuracy. We achieve this by integrating structural, functional, and diffusion tensor magnetic resonance imaging data, in order to compare ultra-high risk (n = 19), first episode psychosis (n = 19) and healthy control subjects (n = 23). Our results show that (i) whilst integration can enhance classification accuracy by up to 13%, the frequency of such instances may be limited, (ii) where classification can be enhanced, simple methods may yield greater increases relative to more computationally complex alternatives, and, (iii) the potential for classification enhancement is highly influenced by the specific diagnostic comparison under consideration. In conclusion, our findings suggest that for moderately sized clinical neuroimaging datasets, combining different imaging modalities in a data-driven manner is no “magic bullet” for increasing classification accuracy. 
However, it remains possible that this conclusion is dependent on the use of neuroimaging modalities that had little, or no, complementary information to offer one another, and that the integration of more diverse types of data would have produced greater classification enhancement. We suggest that future studies ideally examine a greater variety of data types (e.g., genetic, cognitive, and neuroimaging) in order to identify the data types and combinations optimally suited to the classification of early stage psychosis. PMID:25076868
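Of the four integrative approaches described above, the un-weighted sum of kernels and majority voting are simple to sketch. The Gram matrices and labels below are hypothetical stand-ins for per-modality kernels and classifier outputs:

```python
from collections import Counter

def sum_kernels(kernels):
    """Un-weighted sum of precomputed square kernel (Gram) matrices of equal shape."""
    n = len(kernels[0])
    return [[sum(K[i][j] for K in kernels) for j in range(n)] for i in range(n)]

def majority_vote(predictions):
    """predictions: one label list per modality; returns the per-sample majority label."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*predictions)]

# Three single-modality classifiers voting on two subjects
preds = [["psychosis", "control"],
         ["psychosis", "psychosis"],
         ["control", "psychosis"]]
print(majority_vote(preds))  # ['psychosis', 'psychosis']
```

The summed kernel would be passed to a single SVM in place of any one modality's kernel, whereas majority voting combines the outputs of separately trained classifiers.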
Accuracy assessment in the Large Area Crop Inventory Experiment
NASA Technical Reports Server (NTRS)
Houston, A. G.; Pitts, D. E.; Feiveson, A. H.; Badhwar, G.; Ferguson, M.; Hsu, E.; Potter, J.; Chhikara, R.; Rader, M.; Ahlers, C.
1979-01-01
The Accuracy Assessment System (AAS) of the Large Area Crop Inventory Experiment (LACIE) was responsible for determining the accuracy and reliability of LACIE estimates of wheat production, area, and yield, made at regular intervals throughout the crop season, and for investigating the various LACIE error sources, quantifying these errors, and relating them to their causes. Some results of using the AAS during the three years of LACIE are reviewed. As the program culminated, AAS was able not only to meet the goal of obtaining accurate statistical estimates of sampling and classification accuracy, but also the goal of evaluating component labeling errors. Furthermore, the ground-truth data processing matured from collecting data for one crop (small grains) to collecting, quality-checking, and archiving data for all crops in a LACIE small segment.
NASA Technical Reports Server (NTRS)
Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)
1979-01-01
A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
Harris, Adrian L; Ullah, Roshan; Fountain, Michelle T
2017-08-01
Tetranychus urticae is a widespread polyphagous mite found on a variety of fruit crops. It feeds on the underside of leaves, perforating plant cells and sucking out the cell contents. Foliar damage and the excess webbing produced by T. urticae can reduce fruit yield. Assessing T. urticae populations while they are still small provides a reliable and accurate way of targeting control strategies and recording their efficacy. The aim of this study was to evaluate four methods for extracting low levels of T. urticae from leaf samples, representative of developing infestations. These methods (ethanol washing, a modified paraffin/ethanol meniscus technique, Tullgren funnel extraction, and the Henderson and McBurnie mite brushing machine) were compared to direct counting of mites on leaves under a dissecting microscope, with respect to accuracy, precision and simplicity. In addition, two physically different leaf morphologies were compared: Prunus leaves, which are glabrous, and Malus leaves, which are setaceous. Ethanol extraction consistently yielded the highest numbers of mites and was the most rapid method for recovering T. urticae from leaf samples, irrespective of leaf structure. In addition, the samples could be processed and stored before final counting. The advantages and disadvantages of each method are discussed in detail.
Reference point detection for camera-based fingerprint image based on wavelet transformation.
Khalil, Mohammed S
2015-04-30
Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method was tested on two data sets (controlled and uncontrolled environments) collected from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method thus yields promising results on the collected image database and outperformed an existing method.
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-12-01
Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it has only been applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and the important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. Applicability of the proposed approach is illustrated with two academic and two industrial problems. To illustrate its superior performance, results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, as compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required to obtain results of similar accuracy.
Nankali, Saber; Miandoab, Payam Samadi; Baghizadeh, Amin
2016-01-01
In external-beam radiotherapy, using external markers is one of the most reliable tools for predicting tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest accuracy, which depends heavily on the external markers' locations; this issue is the objective of this study. Four commercially available feature selection algorithms, namely 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were proposed to find the optimum locations of external markers, in combination with two searching procedures, "Genetic" and "Ranker". The performance of these algorithms was evaluated using a four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in the lung, three tumors in the liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) prediction model was used as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments, and the predefined tumor motion was predicted by ANFIS using the external motion data of the markers at each small segment separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately: each tumor's motion was predicted using the motion data of the external markers selected by each feature selection algorithm. A Duncan statistical test, followed by an F-test, on the final results showed that all of the proposed feature selection algorithms have the same performance accuracy for lung tumors, but for liver tumors, the correlation-based feature selection algorithm, in combination with a genetic search algorithm, yielded the best performance accuracy for selecting optimum markers. PACS numbers: 87.55.km, 87.56.Fc PMID:26894358
Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin
2016-01-08
In external-beam radiotherapy, using external markers is one of the most reliable tools for predicting tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest accuracy, which depends heavily on the external markers' locations; this issue is the objective of this study. Four commercially available feature selection algorithms, namely 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were proposed to find the optimum locations of external markers, in combination with two searching procedures, "Genetic" and "Ranker". The performance of these algorithms was evaluated using a four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in the lung, three tumors in the liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) prediction model was used as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments, and the predefined tumor motion was predicted by ANFIS using the external motion data of the markers at each small segment separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately: each tumor's motion was predicted using the motion data of the external markers selected by each feature selection algorithm. A Duncan statistical test, followed by an F-test, on the final results showed that all of the proposed feature selection algorithms have the same performance accuracy for lung tumors, but for liver tumors, the correlation-based feature selection algorithm, in combination with a genetic search algorithm, yielded the best performance accuracy for selecting optimum markers.
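The segment-selection criterion above (minimum ANFIS root mean square error over the nine thorax segments) reduces to a small computation. A minimal sketch; the segment labels and prediction values below are hypothetical, standing in for ANFIS output:

```python
def rmse(pred, actual):
    # Root mean square error between predicted and actual tumour positions.
    n = len(pred)
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / n) ** 0.5

def best_segment(predictions_by_segment, actual):
    # predictions_by_segment: {segment_id: predicted positions obtained from
    # that segment's external-marker motion (hypothetical model outputs)}.
    # Returns the segment whose prediction RMSE is minimum.
    return min(predictions_by_segment,
               key=lambda s: rmse(predictions_by_segment[s], actual))
```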
Gravitational waveforms for neutron star binaries from binary black hole simulations
NASA Astrophysics Data System (ADS)
Barkett, Kevin; Scheel, Mark; Haas, Roland; Ott, Christian; Bernuzzi, Sebastiano; Brown, Duncan; Szilagyi, Bela; Kaplan, Jeffrey; Lippuner, Jonas; Muhlberger, Curran; Foucart, Francois; Duez, Matthew
2016-03-01
Gravitational waves from binary neutron star (BNS) and black-hole/neutron star (BHNS) inspirals are primary sources for detection by the Advanced Laser Interferometer Gravitational-Wave Observatory. The tidal forces acting on the neutron stars induce changes in the phase evolution of the gravitational waveform, and these changes can be used to constrain the nuclear equation of state. Current methods of generating BNS and BHNS waveforms rely on either computationally challenging full 3D hydrodynamical simulations or approximate analytic solutions. We introduce a new method for computing inspiral waveforms for BNS/BHNS systems by adding the post-Newtonian (PN) tidal effects to full numerical simulations of binary black holes (BBHs), effectively replacing the non-tidal terms in the PN expansion with BBH results. Comparing a waveform generated with this method against a full hydrodynamical simulation of a BNS inspiral yields a phase difference of < 1 radian over ~ 15 orbits. The numerical phase accuracy required of BNS simulations to measure the accuracy of the method we present here is estimated as a function of the tidal deformability parameter λ.
Gravitational waveforms for neutron star binaries from binary black hole simulations
NASA Astrophysics Data System (ADS)
Barkett, Kevin; Scheel, Mark A.; Haas, Roland; Ott, Christian D.; Bernuzzi, Sebastiano; Brown, Duncan A.; Szilágyi, Béla; Kaplan, Jeffrey D.; Lippuner, Jonas; Muhlberger, Curran D.; Foucart, Francois; Duez, Matthew D.
2016-02-01
Gravitational waves from binary neutron star (BNS) and black hole/neutron star (BHNS) inspirals are primary sources for detection by the Advanced Laser Interferometer Gravitational-Wave Observatory. The tidal forces acting on the neutron stars induce changes in the phase evolution of the gravitational waveform, and these changes can be used to constrain the nuclear equation of state. Current methods of generating BNS and BHNS waveforms rely on either computationally challenging full 3D hydrodynamical simulations or approximate analytic solutions. We introduce a new method for computing inspiral waveforms for BNS/BHNS systems by adding the post-Newtonian (PN) tidal effects to full numerical simulations of binary black holes (BBHs), effectively replacing the nontidal terms in the PN expansion with BBH results. Comparing a waveform generated with this method against a full hydrodynamical simulation of a BNS inspiral yields a phase difference of <1 radian over ˜15 orbits. The numerical phase accuracy required of BNS simulations to measure the accuracy of the method we present here is estimated as a function of the tidal deformability parameter λ .
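The hybridisation idea (a BBH phase series plus a PN tidal dephasing term) can be sketched schematically. The `tidal_phase` power law below is a hypothetical stand-in with an arbitrary coefficient, not the paper's actual PN tidal series:

```python
def hybrid_phase(freqs, bbh_phase, tidal_coeff):
    # Add a tidal dephasing term to the BBH phase at each frequency.
    # The f**(5/3) scaling with a free coefficient is only a schematic
    # placeholder for the PN tidal expansion used in the paper.
    def tidal_phase(f):
        return -tidal_coeff * f ** (5.0 / 3.0)
    return [p + tidal_phase(f) for f, p in zip(freqs, bbh_phase)]
```

With a zero tidal coefficient the BBH phase is recovered unchanged, which is the sense in which the method "replaces the non-tidal terms with BBH results".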
Spectral line polarimetry with a channeled polarimeter.
van Harten, Gerard; Snik, Frans; Rietjens, Jeroen H H; Martijn Smit, J; Keller, Christoph U
2014-07-01
Channeled spectropolarimetry or spectral polarization modulation is an accurate technique for measuring the continuum polarization in one shot with no moving parts. We show how a dual-beam implementation also enables spectral line polarimetry at the intrinsic resolution, as in a classic beam-splitting polarimeter. Recording redundant polarization information in the two spectrally modulated beams of a polarizing beam-splitter even provides the possibility to perform a postfacto differential transmission correction that improves the accuracy of the spectral line polarimetry. We perform an error analysis to compare the accuracy of spectral line polarimetry to continuum polarimetry, degraded by a residual dark signal and differential transmission, as well as to quantify the impact of the transmission correction. We demonstrate the new techniques with a blue sky polarization measurement around the oxygen A absorption band using the groundSPEX instrument, yielding a polarization in the deepest part of the band of 0.160±0.010, significantly different from the polarization in the continuum of 0.2284±0.0004. The presented methods are applicable to any dual-beam channeled polarimeter, including implementations for snapshot imaging polarimetry.
An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm.
Qin, Qin; Li, Jianqing; Yue, Yinggao; Liu, Chengyu
2017-01-01
R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, the ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maxima were calculated by a first-order forward differential approach and were truncated by amplitude and time-interval thresholds to locate the R-peaks. The algorithm's performance, including detection accuracy and time consumption, was tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved a mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database, and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. For processing one ECG record, the mean time consumption was 0.872 s for the MIT-BIH arrhythmia database and 0.763 s for the QT database, yielding 30.6% and 32.9% time reductions compared to the traditional Pan-Tompkins method.
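The peak-localisation stages (mirroring, first-order forward difference, amplitude and time-interval truncation) can be sketched in plain Python. The wavelet-enhancement step is omitted, and the global mirroring rule here is an assumption rather than the paper's exact criterion:

```python
def detect_r_peaks(ecg, amp_thresh, min_interval):
    # Mirror the signal if the dominant deflection is negative
    # (assumption: a simple global check stands in for the paper's rule).
    if abs(min(ecg)) > abs(max(ecg)):
        ecg = [-x for x in ecg]
    # Local maxima via the first-order forward difference:
    # the difference changes sign from positive to non-positive.
    diffs = [ecg[i + 1] - ecg[i] for i in range(len(ecg) - 1)]
    candidates = [i for i in range(1, len(diffs))
                  if diffs[i - 1] > 0 and diffs[i] <= 0]
    # Truncate by amplitude and time-interval (refractory) thresholds.
    peaks = []
    for i in candidates:
        if ecg[i] < amp_thresh:
            continue
        if peaks and i - peaks[-1] < min_interval:
            # Keep the larger of two candidates inside the refractory window.
            if ecg[i] > ecg[peaks[-1]]:
                peaks[-1] = i
            continue
        peaks.append(i)
    return peaks
```

On real data the amplitude and interval thresholds would be adapted to the record, which is where the "adaptive" part of the algorithm comes in.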
An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm
Qin, Qin
2017-01-01
R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, the ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maxima were calculated by a first-order forward differential approach and were truncated by amplitude and time-interval thresholds to locate the R-peaks. The algorithm's performance, including detection accuracy and time consumption, was tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved a mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database, and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. For processing one ECG record, the mean time consumption was 0.872 s for the MIT-BIH arrhythmia database and 0.763 s for the QT database, yielding 30.6% and 32.9% time reductions compared to the traditional Pan-Tompkins method. PMID:29104745
Gravity model improvement using GEOS 3 (GEM 9 and 10) [and Seasat altimetry data]
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Wagner, C. A.; Klosko, S. M.; Laubscher, R. E.
1979-01-01
Although errors in previous gravity models have produced large uncertainties in the orbital position of GEOS 3, significant improvement has been obtained with new geopotential solutions, Goddard Earth Model (GEM) 9 and 10. The GEM 9 and 10 solutions for the potential coefficients and station coordinates are presented along with a discussion of the new techniques employed. Also presented and discussed are solutions for three fundamental geodetic reference parameters, viz. the mean radius of the earth, the gravitational constant, and mean equatorial gravity. Evaluation of the gravity field is examined together with evaluation of GEM 9 and 10 for orbit determination accuracy. The major objectives of GEM 9 and 10 are achieved. GEOS 3 orbital accuracies from these models are about 1 m in their radial components for 5-day arc lengths. Both models yield significantly improved results over GEM solutions when compared to surface gravimetry, Skylab and GEOS 3 altimetry, and highly accurate BE-C (Beacon Explorer-C) laser ranges. The new values of the parameters discussed are given.
Gaze-independent ERP-BCIs: augmenting performance through location-congruent bimodal stimuli
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Werkhoven, Peter
2014-01-01
Gaze-independent event-related potential (ERP) based brain-computer interfaces (BCIs) yield relatively low BCI performance and traditionally employ unimodal stimuli. Bimodal ERP-BCIs may increase BCI performance due to multisensory integration or summation in the brain. An additional advantage of bimodal BCIs may be that the user can choose which modality or modalities to attend to. We studied bimodal, visual-tactile, gaze-independent BCIs and investigated whether or not ERP components’ tAUCs and subsequent classification accuracies are increased for (1) bimodal vs. unimodal stimuli; (2) location-congruent vs. location-incongruent bimodal stimuli; and (3) attending to both modalities vs. to either one modality. We observed an enhanced bimodal (compared to unimodal) P300 tAUC, which appeared to be positively affected by location-congruency (p = 0.056) and resulted in higher classification accuracies. Attending either to one or to both modalities of the bimodal location-congruent stimuli resulted in differences between ERP components, but not in classification performance. We conclude that location-congruent bimodal stimuli improve ERP-BCIs, and offer the user the possibility to switch the attended modality without losing performance. PMID:25249947
McRoy, Susan; Jones, Sean; Kurmally, Adam
2016-09-01
This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions, and typical refinement approaches of feature selection and merging categories achieved only small improvements in classifier accuracy. The best performance was achieved using weighted classification and resampling methods, the latter yielding an accuracy of F1 = 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or for assisting clinical professionals in answering questions. © The Author(s) 2015.
Kelly, Simon P; Lalor, Edmund C; Reilly, Richard B; Foxe, John J
2005-06-01
The steady-state visual evoked potential (SSVEP) has been employed successfully in brain-computer interface (BCI) research, but its use in a design entirely independent of eye movement has until recently not been reported. This paper presents strong evidence suggesting that the SSVEP can be used as an electrophysiological correlate of visual spatial attention that may be harnessed on its own or in conjunction with other correlates to achieve control in an independent BCI. In this study, 64-channel electroencephalography data were recorded from subjects who covertly attended to one of two bilateral flicker stimuli with superimposed letter sequences. Offline classification of left/right spatial attention was attempted by extracting SSVEPs at optimal channels selected for each subject on the basis of the scalp distribution of SSVEP magnitudes. This yielded an average accuracy of approximately 71% across ten subjects (highest 86%) comparable across two separate cases in which flicker frequencies were set within and outside the alpha range respectively. Further, combining SSVEP features with attention-dependent parieto-occipital alpha band modulations resulted in an average accuracy of 79% (highest 87%).
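Extracting an SSVEP magnitude at a flicker frequency is, at its simplest, a single-bin DFT, and left/right classification compares the magnitudes at the two flicker frequencies. A minimal sketch, standing in for the paper's per-subject channel selection and feature-extraction pipeline:

```python
import math

def tone_magnitude(x, fs, f):
    # Single-bin DFT: amplitude of the component at frequency f (Hz)
    # in signal x sampled at fs (Hz). Assumes an integer number of cycles
    # fits in the window.
    n = len(x)
    re = sum(x[k] * math.cos(2 * math.pi * f * k / fs) for k in range(n))
    im = sum(x[k] * math.sin(2 * math.pi * f * k / fs) for k in range(n))
    return 2.0 * math.hypot(re, im) / n

def classify_attention(x, fs, f_left, f_right):
    # Decide the attended side by the larger SSVEP magnitude.
    left = tone_magnitude(x, fs, f_left)
    right = tone_magnitude(x, fs, f_right)
    return "left" if left > right else "right"
```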
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance, since it can extract meaningful statistics and other characteristics; in biomedical engineering especially, outstanding clustering algorithms for time series may help improve people's health. Considering the scale and time shifts of time series data, in this paper we introduce two incremental fuzzy clustering algorithms based on the Dynamic Time Warping (DTW) distance. By adopting the Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks that are processed sequentially. Our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. The new algorithms are compared to existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy. PMID:29795600
NASA Astrophysics Data System (ADS)
Bennett, J.; Gehly, S.
2016-09-01
This paper presents results from a preliminary method for extracting more orbital information from low-rate passive optical tracking data. An improvement in the accuracy of the observation data yields more accurate and reliable orbital elements. For several objects, orbit propagations from the orbital elements generated using the new data-processing method are compared with those generated from the raw observation data. Optical tracking data collected by EOS Space Systems, located on Mount Stromlo, Australia, are fitted to provide new orbital elements. The element accuracy is determined from a comparison between the predicted orbit and subsequent tracking data, or a reference orbit if available. The new method is shown to result in better orbit predictions, which has important implications for conjunction assessments and the Space Environment Research Centre space object catalogue. The focus is on obtaining reliable orbital solutions from sparse data. This work forms part of the collaborative effort of the Space Environment Management Cooperative Research Centre, which is developing new technologies and strategies to preserve the space environment (www.serc.org.au).
NASA Astrophysics Data System (ADS)
Adjorlolo, Clement; Mutanga, Onisimo; Cho, Moses A.; Ismail, Riyad
2013-04-01
In this paper, a user-defined inter-band correlation filter function was used to resample hyperspectral data and thereby mitigate the problem of multicollinearity in classification analysis. The proposed resampling technique convolves the spectral-dependence information between a chosen band-centre and its shorter- and longer-wavelength neighbours. A weighting threshold of inter-band correlation (WTC, Pearson's r) was calculated, with r = 1 at the band-centre. Various WTC values (r = 0.99, r = 0.95, and r = 0.90) were assessed, and bands with coefficients beyond a chosen threshold were assigned r = 0. The resultant data were used in a random forest analysis to classify in situ C3 and C4 grass canopy reflectance. The respective WTC datasets yielded improved classification accuracies (kappa = 0.82, 0.79, and 0.76) with less correlated wavebands when compared to resampled Hyperion bands (kappa = 0.76). Overall, the results obtained from this study suggest that resampling of hyperspectral data should account for spectral-dependence information to improve overall classification accuracy and reduce the problem of multicollinearity.
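Under the stated filter function, a band's weight is its Pearson correlation with the chosen band-centre, zeroed beyond the threshold (r = 1 at the centre itself). A minimal sketch, assuming bands are given as per-band sample vectors and that "beyond the threshold" means |r| below the cut-off:

```python
def pearson(x, y):
    # Pearson's r between two equal-length sample vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def wtc_weights(bands, centre, r_thresh):
    # bands: list of per-band sample vectors (columns of a spectra matrix).
    # Weight = r against the band-centre; bands whose |r| falls below the
    # threshold get 0, mirroring the paper's filter function.
    return [r if abs(r) >= r_thresh else 0.0
            for r in (pearson(b, bands[centre]) for b in bands)]
```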
Maximum likelihood orientation estimation of 1-D patterns in Laguerre-Gauss subspaces.
Di Claudio, Elio D; Jacovitti, Giovanni; Laurenti, Alberto
2010-05-01
A method for measuring the orientation of linear (1-D) patterns, based on a local expansion with Laguerre-Gauss circular harmonic (LG-CH) functions, is presented. It relies on the property that the polar-separable LG-CH functions span the same space as the 2-D Cartesian-separable Hermite-Gauss (2-D HG) functions. Exploiting the simple steerability of the LG-CH functions and the peculiar block-linear relationship between the two sets of expansion coefficients, maximum likelihood (ML) estimates of the orientation and cross-section parameters of 1-D patterns are obtained by projecting them onto a proper subspace of the 2-D HG family. It is shown in this paper that the conditional ML solution, derived by elimination of the cross-section parameters, surprisingly yields the same asymptotic accuracy as the ML solution for known cross-section parameters. The accuracy of the conditional ML estimator is compared to that of state-of-the-art solutions on a theoretical basis and via simulation trials. A thorough proof of the key relationship between the LG-CH and 2-D HG expansions is also provided.
Evaluation of Semi-supervised Learning for Classification of Protein Crystallization Imagery.
Sigdel, Madhav; Dinç, İmren; Dinç, Semih; Sigdel, Madhu S; Pusey, Marc L; Aygün, Ramazan S
2014-03-01
In this paper, we investigate the performance of two wrapper methods for semi-supervised learning algorithms for the classification of protein crystallization images with limited labeled images. First, we evaluate the performance of a semi-supervised approach using self-training with naïve Bayes (NB) and sequential minimal optimization (SMO) as the base classifiers. The confidence values returned by these classifiers are used to select high-confidence predictions for self-training. Second, we analyze the performance of Yet Another Two Stage Idea (YATSI) semi-supervised learning using NB, SMO, multilayer perceptron (MLP), J48, and random forest (RF) classifiers. These results are compared with basic supervised learning using the same training sets. We perform our experiments on a dataset consisting of 2250 protein crystallization images for different proportions of training and test data. Our results indicate that NB and SMO using both the self-training and YATSI semi-supervised approaches improve accuracies with respect to supervised learning. On the other hand, MLP, J48, and RF perform better with basic supervised learning. Overall, the random forest classifier yields the best accuracy with supervised learning for our dataset.
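The self-training wrapper can be sketched with a toy 1-D nearest-centroid learner in place of NB/SMO; the margin-based confidence below is a hypothetical stand-in for those classifiers' confidence scores:

```python
def centroids(X, y):
    # Mean of each class's points (toy 1-D features).
    out = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        out[label] = sum(pts) / len(pts)
    return out

def predict(cents, x):
    # Returns (label, confidence); confidence is the relative margin between
    # the two nearest centroids, a stand-in for NB/SMO confidence values.
    dists = sorted((abs(x - c), label) for label, c in cents.items())
    (d1, best), (d2, _) = dists[0], dists[1]
    conf = (d2 - d1) / (d2 + d1) if d2 + d1 else 1.0
    return best, conf

def self_train(Xl, yl, Xu, conf_thresh=0.5, max_rounds=10):
    # Repeatedly fit on the labeled pool and absorb high-confidence
    # predictions on the unlabeled pool.
    Xl, yl, Xu = list(Xl), list(yl), list(Xu)
    for _ in range(max_rounds):
        cents = centroids(Xl, yl)
        keep, added = [], False
        for x in Xu:
            label, conf = predict(cents, x)
            if conf >= conf_thresh:
                Xl.append(x)
                yl.append(label)
                added = True
            else:
                keep.append(x)
        Xu = keep
        if not added or not Xu:
            break
    return centroids(Xl, yl)
```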
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance, since it can extract meaningful statistics and other characteristics; in biomedical engineering especially, outstanding clustering algorithms for time series may help improve people's health. Considering the scale and time shifts of time series data, in this paper we introduce two incremental fuzzy clustering algorithms based on the Dynamic Time Warping (DTW) distance. By adopting the Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks that are processed sequentially. Our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. The new algorithms are compared to existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy.
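The DTW distance at the core of both algorithms is the classic dynamic program (unconstrained warping here; practical versions often add a warping-window constraint):

```python
def dtw_distance(a, b):
    # Classic dynamic-programming DTW with absolute-difference local cost.
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match from the predecessors.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Unlike a pointwise (Euclidean) comparison, DTW reports zero distance for two series that differ only by a local stretch, which is why it suits time-shifted biomedical signals.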
Feeney, Daniel A; Ober, Christopher P; Snyder, Laura A; Hill, Sara A; Jessen, Carl R
2013-01-01
Peritoneal, mesenteric, and omental diseases are important causes of morbidity and mortality in humans and animals, although information in the veterinary literature is limited. The purposes of this retrospective study were to determine whether objectively applied ultrasound interpretive criteria are statistically useful in differentiating among cytologically defined normal, inflammatory, and neoplastic peritoneal conditions in dogs and cats. A second goal was to determine the cytologically interpretable yield on ultrasound-guided, fine-needle sampling of peritoneal, mesenteric, or omental structures. Sonographic criteria agreed upon by the authors were retrospectively and independently applied by two radiologists to the available ultrasound images without knowledge of the cytologic diagnosis and statistically compared to the ultrasound-guided, fine-needle aspiration cytologic interpretations. A total of 72 dogs and 49 cats with abdominal peritoneal, mesenteric, or omental (peritoneal) surface or effusive disease and 17 dogs and 3 cats with no cytologic evidence of inflammation or neoplasia were included. The optimized, ultrasound criteria-based statistical model created independently for each radiologist yielded an equation-based diagnostic category placement accuracy of 63.2-69.9% across the two involved radiologists. Regional organ-associated masses or nodules as well as aggregated bowel and peritoneal thickening were more associated with peritoneal neoplasia whereas localized, severely complex fluid collections were more associated with inflammatory peritoneal disease. The cytologically interpretable yield for ultrasound-guided fine-needle sampling was 72.3% with no difference between species, making this a worthwhile clinical procedure. © 2013 Veterinary Radiology & Ultrasound.
Wang, Yunpeng; Xu, Wentao; Kou, Xiaohong; Luo, Yunbo; Zhang, Yanan; Ma, Biao; Wang, Mengsha; Huang, Kunlun
2012-08-01
Wheat germ cell-free protein synthesis systems have the potential to synthesize functional proteins safely and with high accuracy, but the poor energy supply and the instability of mRNA templates reduce the productivity of this system, which restricts its applications. In this report, phosphocreatine and pyruvate were added to the system to supply ATP as a secondary energy source. After comparing the protein yield, we found that phosphocreatine is more suitable for use in the wheat germ cell-free protein synthesis system. To stabilize the mRNA template, the plasmid vector, SP6 RNA polymerase, and Cu(2+) were optimized, and a wheat germ cell-free protein synthesis system with high yield and speed was established. When plasmid vector (30 ng/μl), SP6 RNA polymerase (15 U), phosphocreatine (25 mM), and Cu(2+) (5 mM) were added to the system and incubated at 26°C for 16 h, the yield of venom kallikrein increased from 0.13 to 0.74 mg/ml. The specific activity of the recombinant protein was 1.3 U/mg, which is only slightly lower than the crude venom kallikrein (1.74 U/mg) due to the lack of the sugar chain. In this study, the yield of venom kallikrein was improved by optimizing the system, and a good foundation has been laid for industrial applications and for further studies. Copyright © 2012 Elsevier Inc. All rights reserved.
EUS-guided biopsy for the diagnosis and classification of lymphoma.
Ribeiro, Afonso; Pereira, Denise; Escalón, Maricer P; Goodman, Mark; Byrne, Gerald E
2010-04-01
EUS-guided FNA and Tru-cut biopsy (TCB) are highly accurate in the diagnosis of lymphoma. Subclassification, however, may be difficult in low-grade non-Hodgkin lymphoma and Hodgkin lymphoma. OBJECTIVE: To determine the yield of EUS-guided biopsy in classifying lymphoma based on the World Health Organization classification of tumors of hematopoietic and lymphoid tissues. DESIGN: Retrospective study. SETTING: Tertiary referral center. PATIENTS: A total of 24 patients referred for EUS-guided biopsy who had a final diagnosis of lymphoma or "highly suspicious for lymphoma." INTERVENTIONS: EUS-guided FNA and TCB combined with flow cytometry (FC) analysis. MAIN OUTCOME MEASUREMENT: Lymphoma subclassification accuracy of EUS-guided biopsy. RESULTS: Twenty-four patients were included in this study. Twenty-three patients underwent EUS-FNA, and 1 patient had only TCB; 22 underwent EUS-TCB combined with FNA. EUS correctly diagnosed lymphoma in 19 of 24 patients (79%), and subclassification was determined in 16 patients (66.6%). Flow cytometry correctly identified B-cell monoclonality in 95% (18 of 19). In 1 patient diagnosed as having marginal-zone lymphoma by EUS-FNA/FC only, the diagnosis was changed to hairy cell leukemia after a bone marrow biopsy was obtained. EUS had a lower yield in non-large B-cell lymphoma (only 9 of 15 cases [60%]) compared with large B-cell lymphoma (78%; P = .3, Fisher exact test). LIMITATIONS: Retrospective design, small number of patients. CONCLUSIONS: EUS-guided biopsy has a lower yield for correctly classifying Hodgkin lymphoma and low-grade lymphoma compared with high-grade diffuse large B-cell lymphoma. Copyright 2010 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Greene, Barry R; Redmond, Stephen J; Caulfield, Brian
2017-05-01
Falls are the leading global cause of accidental death and disability in older adults and are the most common cause of injury and hospitalization. Accurate, early identification of patients at risk of falling could lead to timely intervention and a reduction in the incidence of fall-related injury and associated costs. We report a statistical method for fall risk assessment using standard clinical fall risk factors (N = 748). We also report a means of improving this method by automatically combining it with a fall risk assessment algorithm based on inertial sensor data and the timed-up-and-go test. Furthermore, we provide validation data on the sensor-based fall risk assessment method using a statistically independent dataset. Results obtained using cross-validation on a sample of 292 community-dwelling older adults suggest that a combined clinical and sensor-based approach yields a classification accuracy of 76.0%, compared to 73.6% for sensor-based assessment alone, or 68.8% for clinical risk factors alone. Increasing the cohort size by adding 130 subjects from a separate recruitment wave (N = 422), and applying the same model building and validation method, resulted in a decrease in classification performance (68.5% for the combined classifier, 66.8% for sensor data alone, and 58.5% for clinical data alone). This suggests that heterogeneity between cohorts may be a major challenge when attempting to develop fall risk assessment algorithms that generalize well. Independent validation of the sensor-based fall risk assessment algorithm on an independent cohort of 22 community-dwelling older adults yielded a classification accuracy of 72.7%. Results suggest that the present method compares well to previously reported sensor-based methods in assessing fall risk.
Implementation of objective fall risk assessment methods on a large scale has the potential to improve quality of care and lead to a reduction in associated hospital costs, due to fewer admissions and reduced injuries due to falling.
MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery
Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.
2016-01-01
Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions: A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT.
The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation. PMID:27330239
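The robust Huber metric mentioned above has a standard closed form: quadratic for small residuals, linear in the tails, so large descriptor mismatches do not dominate the similarity term. A minimal sketch follows; the transition point `delta` is an assumed tuning parameter, not a value taken from the paper:

```python
def huber(r, delta=1.0):
    """Huber penalty: 0.5*r^2 for |r| <= delta, delta*(|r| - delta/2) otherwise.

    Compared with a squared loss, outlier residuals (e.g. large
    MIND-descriptor differences between MR and CT) grow only linearly.
    """
    a = abs(r)
    if a <= delta:
        return 0.5 * r * r
    return delta * (a - 0.5 * delta)
```

For example, a residual of 3 contributes 2.5 under the Huber penalty with `delta = 1` instead of 4.5 under a pure quadratic.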
Physical evaluations of Co-Cr-Mo parts processed using different additive manufacturing techniques
NASA Astrophysics Data System (ADS)
Ghani, Saiful Anwar Che; Mohamed, Siti Rohaida; Harun, Wan Sharuzi Wan; Noar, Nor Aida Zuraimi Md
2017-12-01
In recent years, additive manufacturing, with its high degree of design customization, has become an important fabrication technique in the aerospace and medical fields. Although the process can produce complex components with highly controlled geometrical features, maintaining part accuracy, fabricating fully functional high-density components, and overcoming inferior surface quality remain the major obstacles to producing final parts by additive manufacturing for any selected application. This study aims to evaluate the physical properties of cobalt chrome molybdenum (Co-Cr-Mo) alloy parts fabricated by different additive manufacturing techniques. Fully dense Co-Cr-Mo parts were produced by Selective Laser Melting (SLM) and Direct Metal Laser Sintering (DMLS) with default process parameters. The density and relative density of the samples were calculated using Archimedes' principle, while the surface roughness of the top and side surfaces was measured using a surface profiler. The roughness average (Ra) of the top surface is 3.4 µm for SLM-produced parts and 2.83 µm for DMLS-produced parts; on the side surfaces, Ra is 4.57 µm for SLM and 9.0 µm for DMLS. The higher Ra values on the side surfaces compared to the top faces for both manufacturing techniques were attributed to the balling effect. The relative density of the Co-Cr-Mo parts produced by both SLM and DMLS is 99.3%. Higher energy density contributed to the higher density of the samples produced by the SLM and DMLS processes. The findings of this work demonstrate that the SLM and DMLS processes with default process parameters effectively produced fully dense Co-Cr-Mo parts with high density, good geometrical accuracy, and good surface finish.
Although both manufacturing processes yielded components with high density, the current findings show that the SLM technique produces components with smoother surface quality than the DMLS process when default parameters are used.
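The Archimedes density measurement and relative-density calculation used above can be sketched as follows. The sample masses are hypothetical, and the theoretical Co-Cr-Mo density of 8.3 g/cm3 is an assumed nominal value (it varies with alloy composition, so check the material datasheet):

```python
def archimedes_density(mass_air_g, mass_water_g, rho_water=0.9982):
    """Density from Archimedes' principle (g/cm^3).

    mass_air_g: sample mass weighed in air
    mass_water_g: apparent mass weighed suspended in water
    rho_water: water density (~0.9982 g/cm^3 at 20 C)
    """
    return mass_air_g / (mass_air_g - mass_water_g) * rho_water

def relative_density(measured, theoretical=8.3):
    """Relative density (%) against an assumed theoretical density."""
    return 100.0 * measured / theoretical

# Hypothetical weighings: 10.00 g in air, 8.00 g apparent mass in water.
rho = archimedes_density(10.0, 8.0)
```

With an assumed theoretical density of 8.3 g/cm3, a measured density of 8.24 g/cm3 corresponds to a relative density of about 99.3%, matching the order of the values reported above.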
NASA Astrophysics Data System (ADS)
Clark, M. L.; Kilham, N. E.
2015-12-01
Land-cover maps are important science products needed for natural resource and ecosystem service management, biodiversity conservation planning, and assessing human-induced and natural drivers of land change. Most land-cover maps at regional to global scales are produced with remote sensing techniques applied to multispectral satellite imagery with 30-500 m pixel sizes (e.g., Landsat, MODIS). Hyperspectral, or imaging spectrometer, imagery measuring the visible to shortwave infrared (VSWIR) region of the spectrum has shown impressive capacity to map plant species and coarser land-cover associations, yet these techniques have not been widely tested at regional and greater spatial scales. The Hyperspectral Infrared Imager (HyspIRI) mission is a VSWIR hyperspectral and thermal satellite being considered for development by NASA. The goal of this study was to assess multi-temporal, HyspIRI-like satellite imagery for improved land cover mapping relative to multispectral satellites. We mapped FAO Land Cover Classification System (LCCS) classes over 22,500 km2 in the San Francisco Bay Area, California using 30-m HyspIRI, Landsat 8 and Sentinel-2 imagery simulated from data acquired by NASA's AVIRIS airborne sensor. Random Forests (RF) and Multiple-Endmember Spectral Mixture Analysis (MESMA) classifiers were applied to the simulated images and accuracies were compared to those from real Landsat 8 images. The RF classifier was superior to MESMA, and multi-temporal data yielded higher accuracy than summer-only data. With RF, hyperspectral data had an overall accuracy of 72.2% and 85.1% with the full 20-class and reduced 12-class schemes, respectively. Multispectral imagery had lower accuracy. For example, simulated and real Landsat data had 7.5% and 4.6% lower accuracy than HyspIRI data with 12 classes, respectively.
In summary, our results indicate increased mapping accuracy using HyspIRI multi-temporal imagery, particularly in discriminating different natural vegetation types, such as spectrally-mixed woodlands and forests.
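Overall accuracies like those quoted above are conventionally computed from a classification confusion matrix as the fraction of correctly labeled samples; a minimal sketch follows (the 3-class matrix is invented for illustration, not from the study):

```python
def overall_accuracy(confusion):
    """Overall accuracy from a square confusion matrix.

    confusion[i][j] = number of samples with true class i
    predicted as class j; accuracy = trace / total.
    """
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# Hypothetical 3-class confusion matrix (150 validation samples).
cm = [[50, 5, 5],
      [4, 40, 6],
      [2, 3, 35]]
acc = overall_accuracy(cm)
```

Per-class producer's and user's accuracies (row-wise and column-wise ratios of the diagonal) are usually reported alongside this figure when, as here, some classes are much harder to discriminate than others.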
Impact of biliary stents on EUS-guided FNA of pancreatic mass lesions
Ranney, Nathaniel; Phadnis, Milind; Trevino, Jessica; Ramesh, Jayapal; Wilcox, C. Mel; Varadarajulu, Shyam
2014-01-01
Background: Few studies have evaluated the impact of biliary stents on EUS-guided FNA. Aim: To compare the diagnostic yield of EUS-FNA in patients with or without biliary stents. Design: Retrospective study. Setting: Tertiary referral center. Patients: Patients with obstructive jaundice secondary to solid pancreatic mass lesions who underwent EUS-FNA over 5 years. Main Outcome Measures: The primary objective was to compare the diagnostic accuracy of EUS-FNA in patients with or without biliary stents and between patients with plastic stents or self-expandable metal stents (SEMSs). Secondary objectives were to assess the technical difficulty of EUS-FNA by comparing the number of passes required to establish diagnosis and to identify predictors of a false-negative diagnosis. Results: Of 214 patients who underwent EUS-FNA, 150 (70%) had biliary stents and 64 (30%) had no stents in place. Of the 150 patients with biliary stents, 105 (70%) had plastic stents and 45 (30%) had SEMSs. At EUS-FNA, the diagnosis was pancreatic cancer in 155 (72%), chronic pancreatitis in 17 (8%), other cancer in 31 (14%), and indeterminate in 11 (5%). There was no difference in rates of diagnostic accuracy between patients with or without stents (93.7% vs 95.3%; P = .73) or between plastic stents and SEMSs (95.2% vs 95.5%; P = .99), respectively. The median number of passes to diagnosis was not significantly different between patients with or without stents (2 [interquartile range (IQR) = 1–3] vs 2 [IQR = 1–4]; P = .066) or between plastic stents and SEMSs (2.5 [IQR = 1–4] vs 2 [IQR = 1–4]; P = .69), respectively. On univariate analysis, EUS-FNA results were false-negative in patients with large pancreatic masses (>3 cm vs <3 cm, 9.35% vs 0.93%, P = .005) that required more FNA passes (<2 vs >2 passes, 0% vs 11.8%, P < .0001). Limitations: Retrospective study. Conclusions: The presence or absence of a biliary stent, whether plastic or metal, does not have an impact on the diagnostic yield or technical difficulty of EUS-FNA.
PMID:22726468
NASA Technical Reports Server (NTRS)
Stambler, Arielle H.; Inoshita, Karen E.; Roberts, Lily M.; Barbagallo, Claire E.; deGroh, Kim K.; Banks, Bruce A.
2011-01-01
The Materials International Space Station Experiment 2 (MISSE 2) Polymer Erosion and Contamination Experiment (PEACE) polymers were exposed to the environment of low Earth orbit (LEO) for 3.95 years from 2001 to 2005. There were 41 different PEACE polymers, which were flown on the exterior of the International Space Station (ISS) in order to determine their atomic oxygen erosion yields. In LEO, atomic oxygen is an environmental durability threat, particularly for long duration mission exposures. Although spaceflight experiments, such as the MISSE 2 PEACE experiment, are ideal for determining LEO environmental durability of spacecraft materials, ground-laboratory testing is often relied upon for durability evaluation and prediction. Unfortunately, significant differences exist between LEO atomic oxygen exposure and atomic oxygen exposure in ground-laboratory facilities. These differences include variations in species, energies, thermal exposures and radiation exposures, all of which may result in different reactions and erosion rates. In an effort to improve the accuracy of ground-based durability testing, ground-laboratory to in-space atomic oxygen correlation experiments have been conducted. In these tests, the atomic oxygen erosion yields of the PEACE polymers were determined relative to Kapton H using a radio-frequency (RF) plasma asher (operated on air). The asher erosion yields were compared to the MISSE 2 PEACE erosion yields to determine the correlation between erosion rates in the two environments. This paper provides a summary of the MISSE 2 PEACE experiment; it reviews the specific polymers tested as well as the techniques used to determine erosion yield in the asher, and it provides a correlation between the space and ground laboratory erosion yield values. Using the PEACE polymers' asher-to-in-space erosion yield ratios will allow more accurate in-space materials performance predictions to be made on the basis of plasma asher durability evaluation.
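Erosion yields of the kind determined above follow a standard mass-loss formula, Ey = dM / (A * rho * F), with the atomic oxygen fluence F calibrated from a Kapton H witness sample. The sketch below uses commonly cited Kapton H values (density 1.4273 g/cm3, in-space erosion yield 3.0e-24 cm3/atom); treat these constants and the sample numbers as illustrative rather than this paper's data:

```python
def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
    """Atomic oxygen erosion yield Ey = dM / (A * rho * F), in cm^3/atom."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

def kapton_fluence(mass_loss_g, area_cm2, rho=1.4273, ey=3.0e-24):
    """Effective fluence (atoms/cm^2) inferred from a Kapton H witness
    sample, rearranging the same formula: F = dM / (A * rho * Ey)."""
    return mass_loss_g / (area_cm2 * rho * ey)

# Hypothetical witness sample: 4.2819 mg lost over 1 cm^2.
fluence = kapton_fluence(4.2819e-3, 1.0)
```

The ground-to-space correlation discussed in the abstract then amounts to comparing each polymer's asher-derived Ey with its MISSE 2 in-space Ey, both computed this way.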
2013-01-01
Background: Accurate prediction of Helicobacter pylori infection status on endoscopic images can contribute to early detection of gastric cancer, especially in Asia. We identified the diagnostic yield of endoscopy for H. pylori infection at various endoscopist career levels and the effect of two years of training on diagnostic yield. Methods: A total of 77 consecutive patients who underwent endoscopy were analyzed. H. pylori infection status was determined by histology, serology, and the urea breath test and categorized as H. pylori-uninfected, -infected, or -eradicated. Distinctive endoscopic findings were judged by six physicians at different career levels: beginner (<500 endoscopies), intermediate (1500–5000), and advanced (>5000). Diagnostic yield and inter- and intra-observer agreement on H. pylori infection status were evaluated. Values were compared for the two beginners after two years of training. The kappa (K) statistic was used to calculate agreement. Results: For all physicians, the diagnostic yield was 88.9% for H. pylori-uninfected, 62.1% for H. pylori-infected, and 55.8% for H. pylori-eradicated status. Intra-observer agreement on H. pylori infection status was good (K > 0.6) for all physicians, while inter-observer agreement was lower for beginners (K = 0.46) than for intermediate and advanced physicians (K > 0.6). For all physicians, good inter-observer agreement in endoscopic findings was seen for atrophic change (K = 0.69), regular arrangement of collecting venules (K = 0.63), and hemorrhage (K = 0.62). For beginners, the diagnostic yield for H. pylori-infected/eradicated status and inter-observer agreement on endoscopic findings improved after two years of training. Conclusions: The diagnostic yield of endoscopic diagnosis was high for H. pylori-uninfected cases but low for H. pylori-eradicated cases. In beginners, daily training on endoscopic findings improved the low diagnostic yield. PMID:23947684
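The kappa values above come from Cohen's kappa, which corrects observed agreement for the agreement expected by chance: K = (p_o - p_e) / (1 - p_e). A minimal sketch, with an invented 2x2 agreement table rather than the study's data:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square inter-observer agreement table.

    table[i][j] = number of cases rater A scored as category i
    and rater B scored as category j.
    """
    m = len(table)
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(m)) / n              # observed agreement
    row = [sum(r) for r in table]                              # rater A marginals
    col = [sum(table[i][j] for i in range(m)) for j in range(m)]  # rater B marginals
    p_e = sum(row[i] * col[i] for i in range(m)) / (n * n)     # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical table: 50 cases rated "infected"/"uninfected" by two observers.
k = cohens_kappa([[20, 5], [10, 15]])
```

Here p_o = 0.7 and p_e = 0.5, giving K = 0.4, which by the usual rule of thumb is only moderate agreement, in line with the beginners' inter-observer value (K = 0.46) reported above.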
Meinel, Felix G; Schoepf, U Joseph; Townsend, Jacob C; Flowers, Brian A; Geyer, Lucas L; Ebersberger, Ullrich; Krazinski, Aleksander W; Kunz, Wolfgang G; Thierfelder, Kolja M; Baker, Deborah W; Khan, Ashan M; Fernandes, Valerian L; O'Brien, Terrence X
2018-06-15
We aimed to determine the diagnostic yield and accuracy of coronary CT angiography (CCTA) in patients referred for invasive coronary angiography (ICA) based on clinical concern for coronary artery disease (CAD) and an abnormal nuclear stress myocardial perfusion imaging (MPI) study. We enrolled 100 patients (84 male, mean age 59.6 ± 8.9 years) with an abnormal MPI study and subsequent referral for ICA. Each patient underwent CCTA prior to ICA. We analyzed the prevalence of potentially obstructive CAD (≥50% stenosis) on CCTA and calculated the diagnostic accuracy of ≥50% stenosis on CCTA for the detection of clinically significant CAD on ICA (defined as any ≥70% stenosis or ≥50% left main stenosis). On CCTA, 54 patients had at least one ≥50% stenosis. With ICA, 45 patients demonstrated clinically significant CAD. A positive CCTA had 100% sensitivity and 84% specificity with a 100% negative predictive value and 83% positive predictive value for clinically significant CAD on a per-patient basis in MPI-positive symptomatic patients. In conclusion, almost half (48%) of patients with suspected CAD and an abnormal MPI study demonstrate no obstructive CAD on CCTA.
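The accuracy figures above follow from the standard 2x2 diagnostic table. The sketch below reconstructs the counts implied by the abstract (54 positive CCTA studies of which 45 were confirmed on ICA, and no false negatives); the reconstruction is an inference from the reported numbers, not a table taken from the paper:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Per-patient sensitivity, specificity, PPV, and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts inferred from the abstract: 45 true positives, 9 false positives,
# 0 false negatives, 46 true negatives (100 patients total).
m = diagnostic_metrics(tp=45, fp=9, fn=0, tn=46)
```

Evaluating this table reproduces the reported values: sensitivity 100%, specificity about 84%, PPV about 83%, and NPV 100%.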
Wide-field fluorescence diffuse optical tomography with epi-illumination of sinusoidal pattern
NASA Astrophysics Data System (ADS)
Li, Tongxin; Gao, Feng; Chen, Weiting; Qi, Caixia; Yan, Panpan; Zhao, Huijuan
2017-02-01
We present a wide-field fluorescence tomography scheme with epi-illumination of sinusoidal patterns. In this scheme, a DMD projector is employed as a spatial light modulator to independently generate wide-field sinusoidal illumination patterns at varying spatial frequencies on a sample, and the emitted photons at the sample surface are captured with an EM-CCD camera. This method significantly reduces the number of optical field measurements compared to point-source-scanning approaches and thereby achieves the fast data acquisition desired for dynamic imaging applications. Fluorescence yield images are reconstructed using the normalized-Born formulated inversion of the diffusion model. Experimental reconstructions are presented on a phantom embedding fluorescent targets and compared for combinations of multiple frequencies. The results validate the ability of the method to determine the targets' relative depth and quantification with increased accuracy.
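A sinusoidal illumination pattern of the kind projected by the DMD can be generated as a simple cosine profile; the sketch below shows a 1-D version (the frequency, phase choices, and normalization are illustrative assumptions, not the paper's acquisition parameters):

```python
import math

def sinusoid_pattern(width, fx, phase=0.0):
    """1-D sinusoidal illumination profile, normalized to [0, 1].

    fx is the spatial frequency in cycles per pixel; a common choice
    in structured illumination is three phase shifts (0, 2pi/3, 4pi/3)
    per frequency for demodulation.
    """
    return [0.5 * (1.0 + math.cos(2.0 * math.pi * fx * x + phase))
            for x in range(width)]

# Quarter-cycle-per-pixel pattern across 8 pixels.
p = sinusoid_pattern(8, 0.25)
```

Each spatial frequency probes a different effective depth, which is why the abstract compares reconstructions across combinations of multiple frequencies.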
Harbour, L; Dharma-Wardana, M W C; Klug, D D; Lewis, L J
2016-11-01
Ultrafast laser experiments yield increasingly reliable data on warm dense matter, but their interpretation requires theoretical models. We employ an efficient density functional neutral-pseudoatom hypernetted-chain (NPA-HNC) model with accuracy comparable to ab initio simulations, which provides first-principles pseudopotentials and pair potentials for warm dense matter. It avoids the use of (i) ad hoc core-repulsion models and (ii) "Yukawa screening", and (iii) does not assume ion-electron thermal equilibrium. Computations of the x-ray Thomson scattering (XRTS) spectra of aluminum and beryllium are compared with recent experiments and with density-functional-theory molecular-dynamics (DFT-MD) simulations. The NPA-HNC structure factors, compressibilities, phonons, and conductivities agree closely with DFT-MD results, while Yukawa screening gives misleading results. The analysis of the XRTS data for two of the experiments, using two-temperature quasi-equilibrium models, is supported by calculations of their temperature relaxation times.
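For context, the "Yukawa screening" that the abstract argues against replaces the full first-principles pair potential with a screened Coulomb form, V(r) = Z^2 exp(-r/lambda) / r. The sketch below (in atomic units) is shown only to illustrate the approximation that NPA-HNC pair potentials supersede; the charge state and screening length are illustrative values, not parameters from the paper:

```python
import math

def yukawa_potential(r, z=3.0, lam=1.0):
    """Screened Coulomb (Yukawa) ion-ion pair potential, atomic units:
    V(r) = Z^2 * exp(-r / lambda) / r.

    z = 3 corresponds to an Al(3+) mean ionization; lam is the assumed
    electron screening length. The abstract shows this form can give
    misleading structure factors for warm dense matter.
    """
    return z * z * math.exp(-r / lam) / r
```

The potential decays monotonically with distance, whereas NPA-derived pair potentials can carry additional structure (e.g. Friedel-like oscillations) that this simple form cannot capture.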